Testing C

Motivation

So, I am coming back to C programming. This time I am doing some cool stuff on gesture recognition from accelerometer readings :).

This project is meant to run on a Cortex processor, compiled with the IAR compiler, with some dependencies from this distribution. But I don't have a license yet, nor time to waste, so I am working with GCC.

The first thing I am dealing with is Testing.

It's been a really long time since I last worked exclusively in C, and it's evident that there are many things I have forgotten. Having worked with Gradle or Maven, or in other realms where building is not a real issue, made my C building skills rusty. And probably this is why I am writing a blog post :).

I tried many unit test frameworks, and I may still end up writing my own implementation, since everything I found is way too complex to use.

Quick overview

In this section I will talk quickly about each framework I tried, and why I ended up discarding or choosing it.

Google Test and Boost Test

Google Test is probably my favourite, and Boost Test is not bad at all. Both have a clear and simple way to define tests in a C fashion, without having to care much about the minimal C++ infrastructure needed to run the tests. There are, however, two important reasons to discard these frameworks.

  • C++ is not C
    • In my C code, for clarity, I need to define things like bool, or other specific types that already exist in C++. I could deal with it with some defines, but that leads to a really invasive and unpredictable result. Tests should run under exactly the same conditions as the real code to be trustworthy.
    • Compiled C++ differs slightly from compiled C. There are some assumptions I cannot make in C++, such as memory layout. And since I am working with a program that needs to be compiled on many platforms, I don't want to import C++ concerns into the build just for testing support.
  • CMake / AutoTools.
    • Both of these build systems are quite annoying: complex, arbitrary and lacking consistency. That makes the learning curve quite steep, and the result is quite arbitrary as well. (Don't tell me that all those magical variables make any sense. You will never convince me.)
    • Both build tools come with an annoying amount of files. This is a really small, specific project. After applying a generic CMake setup to include GTest in my C project, able to compile the test folder with the related code, I ended up with three build files per folder before compilation, and even more after.

Check and CUnit

Both of these unit test frameworks are interesting but quite bureaucratic. They cover the basics of testing, but they lack usability from the infrastructure point of view: you have to register your functions into test cases yourself, and compose your test suites out of test cases. You need a main per test suite, or a file that gathers all the tests and knows everything in order to run. This makes refactoring quite complex, when testing is supposed to help refactoring.

Despite my complaints, these two frameworks respond to my needs: compiling with only C, and being able to use a simple makefile. I know, it may sound orthodox, old, out of fashion. I don't care; it is only one (really crappy but short) file.

Either one would do for me. I finally chose Check, because the main seems to be less bureaucratic. That's all :).

Check

Check has good enough documentation, but it is kind of scattered. It's obvious that the average C programmer has not yet arrived at the need for modern quality techniques. Maybe because things are more complex to test at this level.

So let me save you some time and show you how to install it and use it, all in the same place. (Sorry, Ubuntu/Debian only 🙂 ).

Install Check

$ sudo apt-get install check

Add Check to your code

#include <check.h>
#include <stdlib.h>
#include "yourheader.h"

int var;

/** Setup function. Will be configured to run before each test is executed */
static void setUp(void) {
    var = 1;
}

/** Teardown function. Will be configured to run after each test is executed */
static void tearDown(void) {
    var = 0;
}

/** Unit test. Will be registered into a test case */
START_TEST(YourTestName) {
    ck_assert_int_eq(var, 1);
}
END_TEST

/** Suite factory. Creates a test suite that contains test cases. Meant to be executed by a test runner */
Suite * createSuite(void) {
    Suite *suite;
    TCase *test_case;

    /* Creates a suite */
    suite = suite_create("suite-name");
    /* Creates a test case */
    test_case = tcase_create("test-case-name");
    /* Registers a unit test into the test case. You should do this for each unit test function you write */
    tcase_add_test(test_case, YourTestName);
    /* Registers the setup and teardown functions for this test case */
    tcase_add_unchecked_fixture(test_case, setUp, tearDown);
    /* Registers the test case into the test suite */
    suite_add_tcase(suite, test_case);
    return suite;
}

int main(void) {

    int number_failed;
    Suite *suite;
    SRunner *testrunner;

    /* Creates the suite with the function above */
    suite = createSuite();
    /* Creates a suite runner */
    testrunner = srunner_create(suite);
    /* Executes the tests */
    srunner_run_all(testrunner, CK_NORMAL);
    /* Counts failures */
    number_failed = srunner_ntests_failed(testrunner);
    /* Releases memory */
    srunner_free(testrunner);
    /* Reports the status */
    return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;

}

Compiling

To compile with GCC and Check, you have to add several linker flags that are not very well documented.

Normally it should be enough to add them at the very end of your GCC compilation line:

-lcheck_pic -pthread -lrt -lm
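
For example, using the file names from the example layout in the Makefile section below (hypothetical paths; adapt them to your own tree), a complete compilation line may look like this:

$ gcc -o build/core_tests src/Core/src/lib.c src/Core/test/core_tests.c -I./src/Core/include -lcheck_pic -pthread -lrt -lm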

Makefile

This is far from being a good model of a makefile. It is quite simplistic and does not cover many cases but, for this project, it is more than enough. And if your project is as simple as mine, you may find this awful makefile good enough.

This makefile assumes that you are in a folder with a build folder and a src folder, where you have everything, maybe in subfolders, for example:

src
├── Collection
│   ├── include
│   │   └── Collection.h
│   ├── src
│   │   └── Collection.c
│   └── test
│       └── collection_tests.c
└── Core
    ├── include
    │   └── lib.h
    ├── src
    │   ├── lib.c
    │   └── main.c
    └── test
        └── core_tests.c

.DEFAULT_GOAL := all

CC=gcc
CFLAGS=-I./src/Collection/include -I./src/Core/include -lcheck_pic -pthread -lrt -lm
ODIR=build

### This variable is built by taking all the .c files with their respective folder names. It skips main.c and the files ending in _tests.c, because each of those files has its own main function.
SRC=$(shell find . -iname '*.c' -not \( -iname 'main.c' -or -iname '*_tests.c' \))

### This variable takes all the files in the SRC variable and replaces .c by .o, and the leading src/ by build/
OBJ=$(shell echo $(SRC) | sed 's/\.c/\.o/g' | sed 's|\./src/|./build/|g')

$(ODIR)/%.o: src/%.c
	mkdir -p $(dir $@)
	$(CC) -c -o $@ $< $(CFLAGS)

$(ODIR)/main: $(OBJ)
	$(CC) -o $@ $^ src/Core/src/main.c $(CFLAGS)

$(ODIR)/core_tests: $(OBJ)
	$(CC) -o $@ $^ src/Core/test/core_tests.c $(CFLAGS)

$(ODIR)/collection_tests: $(OBJ)
	$(CC) -o $@ $^ src/Collection/test/collection_tests.c $(CFLAGS)

.PHONY: clean all default

default: all

clean:
	rm $(ODIR)/* -rf

all: clean $(ODIR)/main $(ODIR)/core_tests $(ODIR)/collection_tests

This makefile outputs all the compilation results into the build folder which, as I said before, must exist before executing the makefile.

Again, this makefile is quite simplistic and far from elegant. But it works.
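
With the tree above in place, building and running everything comes down to:

$ mkdir -p build
$ make
$ ./build/core_tests
$ ./build/collection_tests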

I hope this sheds some light on your problems with testing in C.


Survey through Hadoop: How to cross the Sahara on top of an elephant and not die on the way – Part I

In this first part I will explain how to install Hadoop on your machine. This explanation is for Ubuntu; for other Linux distributions you just need to change the package sources.

Hadoop has several user-interface problems, as almost all Apache things do, and that's why reaching the tarball to download is quite hard; but in this post I include a link to the 'latest stable release'.

First, ensure that you have some of the basic infrastructure that Hadoop will need:

$ sudo apt-get install ssh
$ sudo apt-get install rsync
$ sudo apt-get install jsvc # OPTIONAL

Usually you will have the first two already installed and up to date but, you never know.

Afterwards, you just need to fetch Hadoop. Today the stable release is 2.4.1; you probably want to check the FTP for the current stable version. If it has changed, the following steps may not work.

$ wget http://ftp.cixug.es/apache/hadoop/common/stable/hadoop-2.4.1.tar.gz
$ tar -xvf hadoop-2.4.1.tar.gz
$ mv hadoop-2.4.1 hadoop

If we check the layout of directories inside the hadoop folder:

$ ls -ls

4 drwxr-xr-x 2 santiago santiago 4096 Jun 21 08:05 bin
4 drwxr-xr-x 3 santiago santiago 4096 Jun 21 08:05 etc
4 drwxr-xr-x 2 santiago santiago 4096 Jun 21 08:05 include
4 drwxr-xr-x 3 santiago santiago 4096 Jun 21 08:05 lib
4 drwxr-xr-x 2 santiago santiago 4096 Jun 21 08:05 libexec
16 -rw-r--r-- 1 santiago santiago 15458 Jun 21 08:38 LICENSE.txt
4 -rw-r--r-- 1 santiago santiago 101 Jun 21 08:38 NOTICE.txt
4 -rw-r--r-- 1 santiago santiago 1366 Jun 21 08:38 README.txt
4 drwxr-xr-x 2 santiago santiago 4096 Jun 21 08:05 sbin
4 drwxr-xr-x 4 santiago santiago 4096 Jun 21 08:05 share

We can see that Apache suggests untarring this into the root directory. This decision is of course up to you, and your choice will have repercussions on the environment file we are about to edit.

In order to work, Hadoop needs some care, and we will give it that by editing the
hadoop/etc/hadoop/hadoop-env.sh file:

$ vim hadoop/etc/hadoop/hadoop-env.sh

In this file we will face several points of service configuration, such as file locations and JVM parameters.

Reading the file you will notice that you can, and may want to, configure the JAVA_HOME environment variable. If you are not new to Java, you may already have it set up in your own environment. This JAVA_HOME is the one that will be used by the Hadoop services.

Since I am working with Scala using Java 1.7, and Hadoop suggests using 1.6, I will redefine the default JAVA_HOME parameter in this file, from

export JAVA_HOME=${JAVA_HOME}

to

export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-amd64

Also, reading this file you will notice Hadoop's dependency on Jsvc (in case we want to run secure datanodes). Jsvc is a set of libraries that lets Unix code do some black magic (running as root, for example).
Thankfully it is available in the Ubuntu repositories, and we already installed it above. So if you need to go secure, install it. In any case, if you are just starting, you can leave it for later.

If you want to keep going with Jsvc, point the variable to the binary:

export JSVC_HOME=/usr/bin/jsvc

We are almost done with the deployment; now we just need to move all the folders to their final place. Since I want to follow the Apache suggestion, I will run:

$ rm hadoop/*.txt
$ sudo cp hadoop/* / -r

Since the commands are now located in standard folders, you can do something like:

$ hadoop version

Hadoop 2.4.1
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1604318
Compiled by jenkins on 2014-06-21T05:43Z
Compiled with protoc 2.5.0
From source with checksum bb7ac0a3c73dc131f4844b873c74b630
This command was run using /share/hadoop/common/hadoop-common-2.4.1.jar

Great, it is already installed and working. It is nice to know that by default it uses the running machine as a single node, so you can already start your first experiments if you are a beginner.

If you want a more complex installation, in order to better understand the limitations of Hadoop and how to deal with the cluster definition itself, you can start heading to the configuration files: core-site.xml, hdfs-site.xml and mapred-site.xml, all of them at /etc/hadoop.

$ vim /etc/hadoop/core-site.xml

In all of them you'll find the default configuration: an empty one, just the XML declaration and an empty <configuration/> element.

To add the different servers, all of them mapped to your own computer, you just need to add the following configurations:

conf/core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

We will go deeper into the meaning of these configurations in other posts; for now, just believe me :).

We are almost there. Now we just need an SSH DSA (not RSA) key configured on our machine. You may have one already; if you don't, you can do the following:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

And after that, add your public key to the authorized keys:

$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
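
To check that the key works (and to accept the host fingerprint once), you can ssh into your own machine; it should not ask for a password anymore:

$ ssh localhost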

Finally, to start the services we need to execute two scripts located in sbin.

Be prepared to be prompted for the root user's password... several times. If you are on Ubuntu, you probably never configured it:

$ sudo su
$ passwd
Enter new UNIX password:
Retype new UNIX password:

Then, as promised, execute the services.

Starting the distributed filesystem

$ sudo /sbin/start-dfs.sh

14/07/07 01:24:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
root@localhost's password:
localhost: starting namenode, logging to ///logs/hadoop-root-namenode-Tifa.out
root@localhost's password:
localhost: starting datanode, logging to ///logs/hadoop-root-datanode-Tifa.out

Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is ed:01:28:4d:70:8f:8f:1b:7f:91:e8:85:61:0a:a2:87.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
root@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to ///logs/hadoop-root-secondarynamenode-Tifa.out
14/07/07 01:25:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting the YARN services

$ sudo /sbin/start-yarn.sh

starting yarn daemons
starting resourcemanager, logging to //logs/yarn-root-resourcemanager-Tifa.out
root@localhost's password:
localhost: starting nodemanager, logging to //logs/yarn-root-nodemanager-Tifa.out

That's it :). In the next post we will see some extra configuration, a troubleshooting FAQ, and after that some basic usage of HDFS and YARN.


What the hell am I doing in Málaga?

Well, first of all: sun, the Mediterranean sea, good food, nice women. I am an IT guy, and usually antisocial but, sometimes, I can take satisfaction in that mundane stuff. And also, Ericsson. I am working on a high-performance project, having started just last Monday. So, we will see how cool this is with some time, C++ and Java.

Quite different from Pharo Smalltalk, but it is a brave new challenge and I am excited.

I will try to post about programming stuff while avoiding crossing the confidentiality contract... you know.


Android Studio (Idea) – Gradle Offline mode

Ok, I moved to Málaga. I am having a hard time without series or movies but, in the end, it is good time for programming. Or so I thought, until I tried to compile my Android app without internet and realised that Gradle needs to check the Maven repositories.

Then, if you want to compile without internet (it really can happen, even if you don't believe me), be sure you have all your required dependencies resolved before you go offline, and go to

File >> Settings >> Gradle,

You will see a checkbox that says 'Offline work'. Check it! Then test your luck again :).
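
By the way, if you build from a terminal instead of the IDE, Gradle also has an equivalent command-line flag, so something like this should work too:

gradle build --offline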

By the way, if you need to configure a proxy, I just learned that you can configure it in the gradle.properties file:

systemProp.http.proxyHost=www.somehost.org
systemProp.http.proxyPort=8080
systemProp.http.proxyUser=userid
systemProp.http.proxyPassword=password
systemProp.http.nonProxyHosts=*.nonproxyrepos.com|localhost
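
Those properties cover HTTP. If your repositories are reached over HTTPS (quite likely nowadays), my understanding is that Gradle expects the same set of properties with the https prefix, for example:

systemProp.https.proxyHost=www.somehost.org
systemProp.https.proxyPort=8080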


c-Objetos

C-Objetos is a Spanish-written framework for object-oriented programming in C. It is something I made back in 2006/2007 when I was taking a course at university where we were pushed to work in C, and I wanted to program in an object-oriented style. This library includes catch/throw (based on signals and threading), file descriptors, strings, a server, collections (list based), a dictionary, thread management, and automata.

I will try to translate all the code to English in a fork of it, but I cannot promise anything. In order to use it, you just need to include the framework.c file in your code. This will include all the libraries. In order to initialize all the 'classes' and to install the needed functions, you need to call

framework_GoLive();

#define framework_GoLive() FdObjClass_GoLive (); \
FileObjClass_GoLive (); \
StringClass_GoLive (); \
colClassGoLive (); \
DiccionarioClass_GoLive (); \
DicHibrid_GoLive (); \
ultClassGoLive (); \
EstadoClass_GoLive ();\
Automata_GoLive (); \
AutomanClass_GoLive (); \
ServerClassGoLive (); \
MutexClass_GoLive (); \
ConditionClass_GoLive (); \
MonitorClass_GoLive (); \
DataControlClass_GoLive (); \
LogClass_GoLive ();

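Putting the two steps together, a minimal program using the framework would look something like this (the include and the GoLive call are the only real requirements; the rest is a hypothetical sketch):

#include "framework.c" /* includes all the libraries */

int main(void) {
    /* Initializes all the 'classes' and installs the needed functions */
    framework_GoLive();

    /* ... from here on, the framework's objects (collections,
       dictionaries, threads, automata, ...) are ready to use ... */
    return 0;
}
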
You can check it out from my GitHub.


[[Network: can’t call something : /127.0.0.1:27017/myMongoDb]] – Or about error messages

Ok, I was making a proper example with controllers in order to give people a base project to start with this in Scala.

I had already written all my code but, after everything ran OK once, I started to get the following stack trace.

play.api.Application$$anon$1: Execution exception[[Network: can't call something : /127.0.0.1:27017/myMongoDb]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.3]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.3]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$13$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:166) [play_2.10.jar:2.2.3]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$13$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:163) [play_2.10.jar:2.2.3]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library.jar:na]
Caused by: com.mongodb.MongoException$Network: can't call something : /127.0.0.1:27017/todo
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:227) ~[mongo-java-driver-2.7.2.jar:na]
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:305) ~[mongo-java-driver-2.7.2.jar:na]
at com.mongodb.DBCursor._check(DBCursor.java:369) ~[mongo-java-driver-2.7.2.jar:na]
at com.mongodb.DBCursor._hasNext(DBCursor.java:498) ~[mongo-java-driver-2.7.2.jar:na]
at com.mongodb.DBCursor._fill(DBCursor.java:558) ~[mongo-java-driver-2.7.2.jar:na]
at com.mongodb.DBCursor.toArray(DBCursor.java:596) ~[mongo-java-driver-2.7.2.jar:na]
Caused by: org.codehaus.jackson.map.JsonMappingException: No suitable constructor found for type [simple type, class models.Excercise]: can not instantiate from JSON object (need to add/enable type information?)
at [Source: de.undercouch.bson4jackson.io.LittleEndianInputStream@4619e42e; pos: 0]
at org.codehaus.jackson.map.JsonMappingException.from(JsonMappingException.java:163) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]
at org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObjectUsingNonDefault(BeanDeserializer.java:746) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]
at org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:683) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]
at org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:580) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]
at org.codehaus.jackson.map.ObjectMapper._readValue(ObjectMapper.java:2704) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]
at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1315) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]

The first thing I did is what I always do: check the error message, look it up in a search engine or in the usual code places, and figure out what the hell "Network: can't call something : /127.0.0.1:27017/myMongoDb" means.

A really awful message. The first thing that crossed my mind was 'the URL is badly formed'; maybe I needed to add mongo://. But no, the message is just like that, and it means that it could not connect.

Ok, I double checked. Even though I was already using the mongo shell to verify changes, I double checked that everything was running and accessible:

mongo 127.0.0.1:27017/myMongoDb

That ensures I can access it. OK, on localhost, but my program was also running on localhost, so real network issues are out of range.

I changed parameters, added users, changed privileges, compiled and recompiled at least 100 times.

Then i saw it in the middle of the stack trace:

Caused by: org.codehaus.jackson.map.JsonMappingException: No suitable constructor found for type [simple type, class models.Excercise]: can not instantiate from JSON object (need to add/enable type information?)

Oh god. I think the most awful error messages I have faced in Java come when you do not respect the JavaBeans contract, but this was really unexpected.

This is what I had:


case class Excercise( var name: String, var description: String) { @Id @ObjectId var id: String = null }

And this is what I have now:


case class Excercise( var name: String, var description: String) {
  @Id @ObjectId var id: String = null
  def this() = this(null, null)
}

So, you know: when everything fails or, even better, when you are not sure about the error message, remember to check first that what you added is JavaBean compatible :P.


XSens BUS Accelerometer Driver + ROS-CPP Node

One of my problems during my time working at Ecole des mines de Douai was making an XSens bus accelerometer work in Ubuntu, and in ROS.

XSens provides C++ code for Windows, compilable in Ubuntu (GCC), but it does not work out of the box with the bus presentation.

The XSens bus presentation is a bus of accelerometers: N accelerometers connected in a bus network layout to a base that puts all the data together.

(Image: XSens bus)

This bus pod sends the information over Bluetooth.

So, here is the driver. It is easy to use: there is just one executable and no configuration. Just download it, compile it as any other catkin package, and execute it as any other node.


Jerry odometry


One of the several ideas I had during my robotics time at Ecole des mines was the Jerry odometer; Jerry as in the mouse.

One of our problems was the awful odometry information we got from the differential robot, since the assembly of the wheels was quite low quality, with constant contact against the main frame of the robot. And, as if that was not enough, the robot had no gyroscope nor accelerometer (until I put my hands on an XSens bus accelerometer).

So, since I could not install a sensor that measures changes in the robot without modifying the robot itself (a matter in the hands of the technology partner, and completely proprietary), I had the great idea of using a mouse. A mouse touching the floor all the time, or almost (if you work in robotics, you know we can and do have errors in all our sensors).

Sadly, the restrictions of a mouse-based odometer are complex enough, and it never saw the light.

Since I am now doing my own robotic experiments (of course with less budget), I decided to go back to that idea, because I need a cheap localization sensor.

I can say that the mouse is a great first localization sensor if your system can accept these restrictions: it touches the floor all the time; it moves slower than the equivalent of 1200 baud (the standard mouse information rate); and it has differential-drive movement or 4-axis movement. Since a mouse measures dX and dY but does not measure orientation variation, we cannot calculate any rotation except that of a classic differential-drive system, where the rotation can be represented by dY in terms of radians.
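
To make that concrete, here is a minimal dead-reckoning sketch under those assumptions (hypothetical names; dX is read as forward displacement and dY as heading change in radians, as described above):

#include <math.h>

typedef struct { double x, y, theta; } Pose;

/* Integrates one mouse reading into the current pose estimate. */
void integrate_mouse_delta(Pose *pose, double dx, double dy_radians) {
    pose->theta += dy_radians;         /* dY interpreted as rotation */
    pose->x += dx * cos(pose->theta);  /* project the forward motion */
    pose->y += dx * sin(pose->theta);  /* into the world frame       */
}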

You can also use mice in the style of the scroll-wheel metaphor: put a mouse touching each wheel of the robot. (This system, of course, has problems with wheel drift.)

In order to provide the mouse information to ROS, I have developed a C++ node that receives as a parameter the port to read, and sends the information it reads to the topic delta_xyt as a Vector3 data type with dX, dY and dT (time) information.
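
On the consuming side, a minimal ROS C++ subscriber for that topic could look like this (a sketch; the node name is hypothetical, and I assume the Vector3 fields map x to dX, y to dY and z to dT):

#include <ros/ros.h>
#include <geometry_msgs/Vector3.h>

/* Called for every reading published by the mouse node */
void onDelta(const geometry_msgs::Vector3::ConstPtr& delta) {
    ROS_INFO("dX=%f dY=%f dT=%f", delta->x, delta->y, delta->z);
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "jerry_listener");
    ros::NodeHandle node;
    ros::Subscriber sub = node.subscribe("delta_xyt", 10, onDelta);
    ros::spin(); /* process incoming messages until shutdown */
    return 0;
}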

I also started to write a kernel module (driver) for reading the data without letting the mouse cursor move on the screen, but I am having a hard time understanding a bug that makes my machine blow up. (I am waiting for my new machine to put my hands back on it.)

Meanwhile, I put both of the projects here. The node one is ready to compile inside a catkin workspace. The driver one compiles but, once you load it and use it for some seconds, your machine will become unusable.

You can check out the project from here. For the kernel module, check the jerry-driver folder.

In order to have a better resolution, acce


Coming back to Scala – How to put Scala+Play+MongoDB to work

After 20 months working only on Pharo (an amazing experience), I am coming back to the functional/OO world. Time has passed. Now I am getting deep into the Play framework, and learning how to put everything to work with MongoDB on a Heroku server.

It took me some time to put it all together on my Ubuntu installation. And indeed I did not do it with the latest package version but the previous one (the play command instead of activator).

So, here is a small tutorial to save anyone some time.

# Be sure you have Java 7 (if you already have Java 7 or newer, do not execute this)

sudo apt-get install openjdk-7-jdk

# Be sure it is the default

sudo update-java-alternatives -s java-1.7.0-openjdk-amd64

# Be sure there is a full MongoDB server installation

sudo apt-get install mongodb

# We get Play

wget http://downloads.typesafe.com/play/2.2.3/play-2.2.3.zip

unzip play-2.2.3.zip

# Move it to a cool place

sudo mv play-2.2.3 /usr/local/play

# Add it to the path and source the changes

echo 'export PATH=$PATH:/usr/local/play' >> ~/.bashrc

source ~/.bashrc

# Create our workspace for developing

mkdir ~/workspace

cd ~/workspace

# Create our play base skeleton project

play new mongodb

# Add the database connection data to application.conf

echo 'mongodb.database="myDataBase"' >> mongodb/conf/application.conf

echo 'mongodb.servers="127.0.0.1:27017"' >> mongodb/conf/application.conf

# If you have a user/password configured for mongo, and it is running with security enabled

echo 'mongodb.credentials="user:pass"' >> mongodb/conf/application.conf

After all this, you need to edit:

vim mongodb/build.sbt

You should have something like:

name := "mongodb"

version := "1.0-SNAPSHOT"

libraryDependencies ++= Seq(
  jdbc,
  anorm,
  cache
)

play.Project.playScalaSettings

Then you need to add the mongo-jackson-play mapper to your dependencies (this project will do almost all the needed magic). In the end, your build.sbt file should look like this:

name := "mongodb"

version := "1.0-SNAPSHOT"

libraryDependencies ++= Seq(
  jdbc,
  anorm,
  cache,
  "net.vz.mongodb.jackson" %% "play-mongo-jackson-mapper" % "1.1.0"
)

play.Project.playScalaSettings

Now we are ready to start our project so that it loads all the dependencies. It can take some time, depending on how many dependencies you are missing.

cd mongodb

play

[mongodb] run

……..

Finally, once everything is installed, you can generate the Eclipse or IDEA project files in order to use one of these IDEs:

[mongodb] eclipse

Before going on: if you were already inside the Play console of the new project when you changed any of the configuration files, you will probably have some problems. So close it, start it again, execute run, and execute eclipse/idea again. (You need to let Play generate the jar references for the IDE; otherwise it will not compile there.)

Ok, now you can start to work.

package domain

import play.modules.mongodb.jackson.MongoDB
import net.vz.mongodb.jackson.JacksonDBCollection
import net.vz.mongodb.jackson.Id
import net.vz.mongodb.jackson.ObjectId
import scala.collection.mutable.ArrayBuffer
import scala.collection.JavaConversions._
import scala.collection.mutable.Buffer

class Excercise (
    @Id
	@ObjectId
	var id :String,
	var name : String,
	var description : String) {
}

object Excercise {
	var excercises : JacksonDBCollection[Excercise, String] = MongoDB.getCollection("excercises", classOf[Excercise], classOf[String])
	def findById (id: String) : Excercise = excercises.findOneById(id)
	def findAll (): Buffer[Excercise] = excercises.find().toArray()
	def delete (excercise : Excercise) = excercises.remove(excercise)
	def delete(id: String) = excercises.removeById(id)
	def save (excercise : Excercise) = excercises.save(excercise)
}
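
With that in place, using the model from anywhere in the application is straightforward; for instance (a hypothetical snippet, leaving the id as null so the mapper assigns one on save):

val squats = new Excercise(null, "squats", "Three sets of ten repetitions")
Excercise.save(squats)

Excercise.findAll().foreach(e => println(e.name))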

After all this work, the only thing that remains is to make a controller that uses this. But that is work for another post.


Closing a phase. Goodbye, GSoC

When I began with this I was really ignorant about types; now I can say I know a little bit of this huge world.

I learned about many, many things: graph algorithms, tree algorithms, reflection, metaprogramming, logging, prediction, tarot, etc.

As the fruit of these days of hard work, I give the community three projects:

    1. Concrete Type Inference
    2. Kwisatz Haderach
    3. Paul Le poulpe

An inferrer, a behavior analyzer and a logger.

Taking stock, I'm happy. I wanted to get more functionality done by this point; above all I wanted to get inference over reflection and messages like become:, but well, I'm not that good, I suppose; I needed, and still need, to learn and grow in order to make each step. But really, I cannot complain; reaching the end of this coding window, I have added:

        Support: 

    • Subresult comparison mechanism
    • Trait extension to use it
    • Code patterns (like conditional contexts and multiple-dimension analysis)
    • Error management
    • MethodContext and thisContext support
    • Pharo 2.0 (the core project was not really ported, just a bit)
    • 40 new primitives
    • Fixes and generalization over all 46 previously supported primitives
    • Semi-generalized matching mechanism for primitives
    • Named primitives support
    • Symbol as method sender (a bit of reflection)
    • Common Collection usage

A suite of 992 tests, with a huge number of them green: 568 (165 more really hard tests since the last post 🙂).

An easy-to-query and well-tested call graph, based on the visitor pattern, built from integrating the inferencer stack management with the announcements framework.

A highly configurable logger based on the Log4j API that also supports the logging built into the image.

Now, maybe you are asking yourself: is this the end? The answer is "no". Why on earth should I stop programming this? :)
More than that, I'm about to begin a journey that is really important for me. Maybe to France, maybe to Spain. For at least the next two years I'll be working on this project by myself. I believe that we can build a better future; I believe in Pharo and Smalltalk, and I want to make more people believe in this beautiful community and in this way of making software. But we still need better tools. Tools based on dependable type information.

Where is the code?

Type inference v1.0 & Kwisatz Haderach v1.0

Paul le poulpe v1.0
