Testing C

Motivation

So, I am coming back to C programming. This time I am doing some cool stuff on gesture recognition from accelerometer readings :).

This project is meant to run on a Cortex processor, compiled with an IAR compiler, with some dependencies from this distribution. But I don't yet have a license, nor time to waste, so I am working with GCC.

The first thing I am dealing with is Testing.

It's been a really long time since I last worked exclusively in C, and it's evident that there are many things I have forgotten. Having worked with Gradle or Maven, or in other realms where building is not a real issue, made my C building skills get rusty. And that is probably why I am writing this blog post :).

I tried many unit test frameworks, and I was tempted to write my own implementation, since everything I found was way too complex to use.

Quick overview

In this section I will talk quickly about each framework I tried and why I ended up discarding or choosing it.

Google Test and Boost.Test

Google Test is probably my favourite, and Boost.Test is not bad at all. Both have a clear and simple way to define tests in a C fashion, without having to care much about the minimal C++ infrastructure needed to run them. There are, however, two important reasons to discard these frameworks:

  • C++ is not C
    • In C code, for reasons of clarity, I need to define things such as bool, or other specific types that already exist in C++. I could deal with it with some #defines, but that leads to a really invasive and unpredictable result. Tests should run under exactly the same conditions as the production code to be trustworthy (see the sketch after this list).
    • C++ compiled output has some slight differences from C. There are some assumptions I cannot make in C++, such as memory layout. And since I am working with a program that needs to be compiled on many platforms, I don't want to import C++ concerns, babysitting the compiler only for testing support.
  • CMake / AutoTools
    • Both build systems are quite annoying, complex, arbitrary and inconsistent. That makes the learning curve quite steep, and the result is quite arbitrary as well. (Don't tell me that all those magical variables make any sense. You will never convince me.)
    • Both build tools also come with an annoying amount of files. This is a really small, specific project: after applying a generic CMake setup able to include GTest in my C project and compile the test folder with the related code, I ended up with 3 build files per folder before compilation. After compilation, even more.
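To make the first bullet concrete, here is a minimal sketch (my own illustration, not taken from any framework) of the kind of mismatch I mean: in C, bool only exists through a header, while in C++ it is a built-in type, so the same test source does not necessarily mean the same thing under both compilers.

/* C99: bool, true and false come from <stdbool.h>; without it, this does not compile as C. */
#include <stdbool.h>
#include <stdio.h>

int main(void) {
    bool ready = true; /* in C++, this line compiles with no header at all */
    printf("ready = %d\n", ready);
    return 0;
}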

Check and CUnit

Both of these unit test frameworks are interesting but quite bureaucratic. They support the basics of testing, but they lack usability from the infrastructure point of view: you have to register your functions into a test suite yourself, and compose your test cases with test suites. You need a main per test suite, or a file that gathers all the tests and knows everything in order to run. This makes refactoring quite complex, when testing is supposed to help refactoring.

Despite my complaints, these two frameworks respond to my need to use only C for compiling and to be able to use a simple makefile. I know, it may sound unorthodox, old, out of fashion. I don't care; it is only one (really crappy but short) file.

It would be all the same to me to use one or the other. I finally chose Check, because the main seems less bureaucratic. That's all :).

Check

So, Check has good enough documentation, but it is kind of scattered. It's obvious that the average C programmer has not yet arrived at the need for modern quality techniques. Maybe because things are more complex to test at this level.

So let me save you some time and show you how to install it and use it, all in one place. (Sorry, Ubuntu/Debian only 🙂 ).

Install Check

$ sudo apt-get install check

Add Check to your code

#include <check.h>
#include <stdlib.h> /* for EXIT_SUCCESS / EXIT_FAILURE */
#include "yourheader.h"

int var;

/** Setup function. Will be configured to run before each test is executed */
static void setUp(void) {
    var = 1;
}

/** Teardown function. Will be configured to run after each test is executed */
static void tearDown(void) {
    var = 0;
}

/** Unit test. Will be registered into a test case */
START_TEST(YourTestName) {
    ck_assert_int_eq(var, 1);
}
END_TEST

/** Creates a test suite that contains the test cases. Meant to be executed by a test runner */
Suite *createSuite(void) {
    Suite *suite;
    TCase *test_case;

    /* Creates the suite */
    suite = suite_create("suite-name");
    /* Creates a test case */
    test_case = tcase_create("test-case-name");
    /* Registers a unit test into the test case. Do this for each unit test function you write */
    tcase_add_test(test_case, YourTestName);
    /* Registers the setup and teardown functions to run around each test (a checked fixture runs per test) */
    tcase_add_checked_fixture(test_case, setUp, tearDown);
    /* Registers the test case into the test suite */
    suite_add_tcase(suite, test_case);
    return suite;
}

int main(void) {
    int number_failed;
    Suite *suite;
    SRunner *testrunner;

    /* Creates the suite with the function above */
    suite = createSuite();
    /* Creates a suite runner */
    testrunner = srunner_create(suite);
    /* Executes the tests */
    srunner_run_all(testrunner, CK_NORMAL);
    /* Counts the failures */
    number_failed = srunner_ntests_failed(testrunner);
    /* Releases memory */
    srunner_free(testrunner);
    /* Reports the status */
    return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
}

Compiling

To compile with GCC and Check you have to add several library flags that are not particularly well documented.

Normally, it should be enough to add them at the very end of your GCC compile line:

-lcheck_pic -pthread -lrt -lm
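For instance, a complete compile line for a single test file could look like this (just a sketch; the file names and include path come from the project layout shown in the Makefile section below):

$ gcc -I./src/Collection/include \
      src/Collection/src/Collection.c \
      src/Collection/test/collection_tests.c \
      -o collection_tests \
      -lcheck_pic -pthread -lrt -lm
$ ./collection_tests

Also, if your distribution ships a pkg-config file for Check, pkg-config --cflags --libs check should print the right flags for your system.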

Makefile

This is far from being a good model of a makefile. It is quite simplistic and does not cover many problems, but for this project it is more than enough. And if your project is as simple as mine, you may find this awful makefile good enough too.

This makefile assumes that you are in a folder with a build folder and a src folder, where you have everything, maybe in subfolders, for example:

src
├── Collection
│   ├── include
│   │   └── Collection.h
│   ├── src
│   │   └── Collection.c
│   └── test
│       └── collection_tests.c
└── Core
    ├── include
    │   └── lib.h
    ├── src
    │   ├── lib.c
    │   └── main.c
    └── test
        └── core_tests.c

.DEFAULT_GOAL := all

CC=gcc
CFLAGS=-I./src/Collection/include -I./src/Core/include -lcheck_pic -pthread -lrt -lm
ODIR=build

### This variable is built by taking all the .c files with their respective folder names. It skips main.c and the files ending in _tests.c, because each of those files has a main function of its own.
SRC=$(shell find . -iname '*.c' -not \( -iname main.c -or -iname '*_tests.c' \))

### This variable takes all the files in the SRC variable and replaces .c by .o and the leading src directory by build
OBJ=$(shell echo $(SRC) | sed 's/\.c/\.o/g' | sed 's|\./src/|./build/|g')

$(ODIR)/%.o: src/%.c
	mkdir -p $(dir $@)
	$(CC) -c -o $@ $< $(CFLAGS)

$(ODIR)/main: $(OBJ)
	$(CC) -o $@ $^ src/Core/src/main.c $(CFLAGS)

$(ODIR)/core_tests: $(OBJ)
	$(CC) -o $@ $^ src/Core/test/core_tests.c $(CFLAGS)

$(ODIR)/collection_tests: $(OBJ)
	$(CC) -o $@ $^ src/Collection/test/collection_tests.c $(CFLAGS)

.PHONY: clean all default

default: all

clean:
	rm -rf $(ODIR)/*

all: clean $(ODIR)/main $(ODIR)/core_tests $(ODIR)/collection_tests

This makefile outputs all the compiled artifacts into a build folder that, as I said before, must exist before executing the makefile.

Again, this makefile is quite simplistic and far from elegant. But it works.
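A quick usage example, assuming the layout above (remember to create the build folder first):

$ mkdir -p build
$ make
$ ./build/core_tests
$ ./build/collection_tests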

I hope this sheds some light on your problems with testing in C.

Survey through Hadoop: How to cross the Sahara on top of an elephant and not die on the way – Part I

In this first part I will explain how to install Hadoop on your machine. This explanation is for Ubuntu; for other Linux distributions you just need to change the package sources.

Hadoop has several user-interface problems, as almost everything from Apache does, and that is why reaching the tarball to download is quite hard. But in this post I include a link to the 'latest stable release'.

First, ensure that you have some of the basic infrastructure that Hadoop will need:

$ sudo apt-get install ssh
$ sudo apt-get install rsync
$ sudo apt-get install jsvc # OPTIONAL

Usually you will have the first two already installed and up to date but, you never know.

Afterwards, you just need to fetch Hadoop. Today the stable release is 2.4.1; you probably want to check the FTP for the current stable version, because if it has changed, the following steps may not work.

$ wget http://ftp.cixug.es/apache/hadoop/common/stable/hadoop-2.4.1.tar.gz
$ tar -xvf hadoop-2.4.1.tar.gz
$ mv hadoop-2.4.1 hadoop

If we check the distribution of directories inside the hadoop folder:
$ ls

 4 drwxr-xr-x 2 santiago santiago  4096 Jun 21 08:05 bin
 4 drwxr-xr-x 3 santiago santiago  4096 Jun 21 08:05 etc
 4 drwxr-xr-x 2 santiago santiago  4096 Jun 21 08:05 include
 4 drwxr-xr-x 3 santiago santiago  4096 Jun 21 08:05 lib
 4 drwxr-xr-x 2 santiago santiago  4096 Jun 21 08:05 libexec
16 -rw-r--r-- 1 santiago santiago 15458 Jun 21 08:38 LICENSE.txt
 4 -rw-r--r-- 1 santiago santiago   101 Jun 21 08:38 NOTICE.txt
 4 -rw-r--r-- 1 santiago santiago  1366 Jun 21 08:38 README.txt
 4 drwxr-xr-x 2 santiago santiago  4096 Jun 21 08:05 sbin
 4 drwxr-xr-x 4 santiago santiago  4096 Jun 21 08:05 share

we can see that Apache suggests we untar this into the root directory. This decision is of course up to you, and your choice will have repercussions on the environment file we are going to edit now.

In order to work, Hadoop needs some care, and we will give it by editing the
hadoop/etc/hadoop/hadoop-env.sh file:

$ vim hadoop/etc/hadoop/hadoop-env.sh

In this file we will face several points of service configuration, such as file locations and JVM parameters.

Reading the file you will notice that you can, and may want to, configure the JAVA_HOME environment variable. If you are not new to Java, you may already have it set up in your own environment. This JAVA_HOME is the one that will be used by the Hadoop services.

Since I am working with Scala using Java 1.7 and Hadoop suggests using 1.6, I will redefine the default JAVA_HOME parameter in this file from

export JAVA_HOME=${JAVA_HOME}

to

export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-amd64

Also, reading this file you will notice Hadoop's dependency on jsvc (in case we want to run secure datanodes). Jsvc is a set of libraries that lets Unix-hosted code do some black magic (running as root, for example).
Thankfully it is available in the Ubuntu repositories, and we already installed it above. So if you need to go secure, install it. In any case, if you are just starting, you can leave it for later.

If you want to keep going with jsvc, point the variable to the binary:

export JSVC_HOME=/usr/bin/jsvc

We are almost done with the deployment; now we just need to move all the folders to the place they will live in. Since I want to follow the Apache suggestion, I will run:

$ rm hadoop/*.txt
$ sudo cp hadoop/* / -r

Since the commands are now located in standard folders, you can do something like:

$ hadoop version

Hadoop 2.4.1
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1604318
Compiled by jenkins on 2014-06-21T05:43Z
Compiled with protoc 2.5.0
From source with checksum bb7ac0a3c73dc131f4844b873c74b630
This command was run using /share/hadoop/common/hadoop-common-2.4.1.jar

Great, it is already installed and working. It is nice to know that by default it uses the running machine as a single node, so if you are a beginner you can already start your first experiments.

If you want a more complex installation, in order to better understand the limitations of Hadoop and how to deal with the cluster definition itself, you can start heading to the configuration files: core-site.xml, hdfs-site.xml and mapred-site.xml, all of them at /etc/hadoop.

$ vim /etc/hadoop/core-site.xml

Here you'll find the default configuration, in all of them an empty one: just the XML declaration and an empty configuration element.

To add the several servers, all of them mapped to your own computer, you just need to add the following configurations:

conf/core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

We will go deeper into the meaning of these configurations in other posts; by now, just believe me :).

We are almost there. Now we just need an ssh DSA (not RSA) key installed on our machine. You may have one already; if you haven't, you can do the following:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

And afterwards, add your public key to the authorized keys:

$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
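One step worth adding here (the official single-node guide includes it, and the namenode will not start on an unformatted filesystem): the very first time, format HDFS. Since everything here runs as root, I run it with sudo:

$ sudo hdfs namenode -format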

Finally, to start the services we need to execute two scripts located in sbin.

Be prepared to be prompted for the root user's password... several times. If you are on Ubuntu, you probably never configured it:

$ sudo su
$ passwd
Enter new UNIX password:
Retype new UNIX password:

Then, as promised, execute the services.

Starting the distributed filesystem

$ sudo /sbin/start-dfs.sh

14/07/07 01:24:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
root@localhost's password:
localhost: starting namenode, logging to ///logs/hadoop-root-namenode-Tifa.out
root@localhost's password:
localhost: starting datanode, logging to ///logs/hadoop-root-datanode-Tifa.out

Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is ed:01:28:4d:70:8f:8f:1b:7f:91:e8:85:61:0a:a2:87.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
root@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to ///logs/hadoop-root-secondarynamenode-Tifa.out
14/07/07 01:25:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting the YARN services

$ sudo /sbin/start-yarn.sh

starting yarn daemons
starting resourcemanager, logging to //logs/yarn-root-resourcemanager-Tifa.out
root@localhost's password:
localhost: starting nodemanager, logging to //logs/yarn-root-nodemanager-Tifa.out
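To verify that the daemons are actually up, you can list the running Java processes with jps (it ships with the JDK); since the services were started as root, run it with sudo. You should see entries such as NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager:

$ sudo jps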

That's it :). In the next post we will see some extra configuration, a troubleshooting FAQ and, after that, some basic usage of HDFS and YARN.


Ericsson

What the hell am I doing in Málaga?

Well, first of all: sun, the Mediterranean sea, good food, nice women. I am an IT guy, and usually antisocial, but sometimes I can take satisfaction in that mundane stuff. And also: Ericsson. I am working on a high-performance project, which started just last Monday. So we will see how cool this is, with some time, C++ and Java.

Quite different from Pharo Smalltalk, but it is a brave new challenge and I am excited.

I will try to post about programming stuff while avoiding crossing the confidentiality agreement... you know.

Android Studio (Idea) – Gradle Offline mode

Ok, I moved to Málaga. I am having a hard time without series or movies but, in the end, it is good time for programming. Or so I thought, until I tried to compile my Android app without internet and realised that the Gradle build needs to check the Maven repositories.

So, if you want to compile without internet (it really can happen, even if you don't believe me), make sure you have all your required dependencies before you go offline, and go to

File >> Settings >> Gradle

There you will see a checkbox that says 'Offline work'. Check it! Then test your luck again :).
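By the way, if you build from a terminal instead of the IDE, Gradle has an equivalent switch (assembleDebug here is the usual Android debug task; substitute your own):

$ ./gradlew assembleDebug --offline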

By the way, if you need to configure a proxy, I also just learned that you can configure it in the gradle.properties file:

systemProp.http.proxyHost=www.somehost.org
systemProp.http.proxyPort=8080
systemProp.http.proxyUser=userid
systemProp.http.proxyPassword=password
systemProp.http.nonProxyHosts=*.nonproxyrepos.com|localhost

c-Objetos

C-Objetos is a framework, written in Spanish, for object-oriented programming in C. It is something I made back in 2006/2007 when I was taking a course at university where we were pushed to work in C, and I wanted to program in an object-oriented style. The library includes catch/throw (based on signals and threading), file descriptors, strings, a server, collections (list based), a dictionary, thread management and automata.

I will try to translate all the code to English and keep a fork of it, but I cannot promise anything. In order to use it, you just need to include the framework.c file in your code. This will include all the libraries. In order to initialize all the 'classes' and install the needed functions, you need to call

framework_GoLive();

#define framework_GoLive() FdObjClass_GoLive (); \
FileObjClass_GoLive (); \
StringClass_GoLive (); \
colClassGoLive (); \
DiccionarioClass_GoLive (); \
DicHibrid_GoLive (); \
ultClassGoLive (); \
EstadoClass_GoLive (); \
Automata_GoLive (); \
AutomanClass_GoLive (); \
ServerClassGoLive (); \
MutexClass_GoLive (); \
ConditionClass_GoLive (); \
MonitorClass_GoLive (); \
DataControlClass_GoLive (); \
LogClass_GoLive ();

You can check it out on my GitHub.


[[Network: can't call something : /127.0.0.1:27017/myMongoDb]] – Or about error messages

Ok, I was making a proper example with controllers, in order to give people a base project to start with this in Scala.

I had already written all my code but, after everything ran OK once, I started to get the following stack trace.

play.api.Application$$anon$1: Execution exception[[Network: can't call something : /127.0.0.1:27017/myMongoDb]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.3]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.3]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$13$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:166) [play_2.10.jar:2.2.3]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$13$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:163) [play_2.10.jar:2.2.3]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library.jar:na]
Caused by: com.mongodb.MongoException$Network: can't call something : /127.0.0.1:27017/todo
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:227) ~[mongo-java-driver-2.7.2.jar:na]
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:305) ~[mongo-java-driver-2.7.2.jar:na]
at com.mongodb.DBCursor._check(DBCursor.java:369) ~[mongo-java-driver-2.7.2.jar:na]
at com.mongodb.DBCursor._hasNext(DBCursor.java:498) ~[mongo-java-driver-2.7.2.jar:na]
at com.mongodb.DBCursor._fill(DBCursor.java:558) ~[mongo-java-driver-2.7.2.jar:na]
at com.mongodb.DBCursor.toArray(DBCursor.java:596) ~[mongo-java-driver-2.7.2.jar:na]
Caused by: org.codehaus.jackson.map.JsonMappingException: No suitable constructor found for type [simple type, class models.Excercise]: can not instantiate from JSON object (need to add/enable type information?)
at [Source: de.undercouch.bson4jackson.io.LittleEndianInputStream@4619e42e; pos: 0]
at org.codehaus.jackson.map.JsonMappingException.from(JsonMappingException.java:163) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]
at org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObjectUsingNonDefault(BeanDeserializer.java:746) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]
at org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:683) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]
at org.codehaus.jackson.map.deser.BeanDeserializer.deserialize(BeanDeserializer.java:580) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]
at org.codehaus.jackson.map.ObjectMapper._readValue(ObjectMapper.java:2704) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]
at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1315) ~[jackson-mapper-asl-1.9.5.jar:1.9.5]

The first thing I did is what I always do: check the error message, look it up in a search engine or in the usual code places, and figure out what the hell "Network: can't call something : /127.0.0.1:27017/myMongoDb" means.

A really awful message. The first thing that crossed my mind was 'the URL is badly formed'; maybe I needed to add mongo://. But no, the message is just like this, and it means that it could not connect.

Ok, I double checked. Even though I was already using the mongo shell to verify my changes, I double checked that everything was running and accessible:

mongo 127.0.0.1:27017/myMongoDb

That ensured I could access it. OK, from localhost, but my program was also running on localhost, so real network issues were out of range.

I changed parameters, added users, changed privileges, compiled and recompiled at least 100 times.

Then I saw it, in the middle of the stack trace:

Caused by: org.codehaus.jackson.map.JsonMappingException: No suitable constructor found for type [simple type, class models.Excercise]: can not instantiate from JSON object (need to add/enable type information?)

Oh god. I think the most awful error messages I have faced in Java all come when you do not respect the JavaBean contract, but this one was really unexpected.

This is what I had:


case class Excercise( var name: String, var description: String) { @Id @ObjectId var id: String = null }

And this is what I have now:


case class Excercise( var name: String, var description: String) {
  @Id @ObjectId var id: String = null
  def this() = this(null, null)
}

So, nothing new, you know: when everything fails, or even better, when you are not sure about the error message, remember to check first that what you added is JavaBean compatible :P.

XSens BUS Accelerometer Driver + ROS-CPP Node

One of my problems during my time at the Ecole des Mines de Douai was getting an XSens bus accelerometer to work on Ubuntu, and in ROS.

XSens provides C++ code for Windows that is compilable on Ubuntu (GCC), but it does not work out of the box with the bus presentation.

The XSens bus presentation is a bus of accelerometers: N accelerometers connected in a bus network layout to a base that puts all the data together.

[Image: XSens bus]
And this bus pod sends the information over Bluetooth.

So, here is the driver. It is easy to use: there is just one executable and no configuration. Just download it, compile it as any other catkin package, and execute it as any other node, as sketched below.
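For reference, the usual catkin workflow would look something like this (just a sketch: the repository URL, package name and node name are placeholders, use the ones from the driver linked above):

$ cd ~/catkin_ws/src
$ git clone <driver-repository-url>
$ cd ~/catkin_ws
$ catkin_make
$ source devel/setup.bash
$ rosrun <driver_package> <driver_node>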
