1. Cgreen Quickstart Guide

1.1. What is Cgreen?

Cgreen is a unit tester for the C and C++ software developer, a test automation and software quality assurance tool for programmers and development teams. The tool is completely open source published under the LGPL.

Unit testing is a development practice popularised by the agile development community. It is characterised by writing many small tests alongside the normal code. Often the tests are written before the code they are testing, in a tight test-code-refactor loop. Done this way, the practice is known as Test Driven Development. Cgreen was designed to support this style of development.

Unit tests are written in the same language as the code, in our case C or C++. This avoids the mental overhead of constantly switching language, and also allows you to use any application code in your tests.

Here are some of its features:

  • Fluent API resulting in very readable tests

  • Expressive and clear output using the default reporter

  • Fully functional mocks, both strict and loose

  • Each test runs in its own process for test suite robustness

  • Automatic discovery and running of tests using dynamic library inspection

  • Extensive and expressive constraints for many datatypes

  • BDD-flavoured test declarations with Before and After declarations

  • Extensible reporting mechanism

  • Fully composable test suites

  • An isolated test can be run in a single process for debugging

Cgreen also supports the classic xUnit-style assertions for easy porting from other frameworks.

Cgreen was initially developed to support C programming, but there is also excellent support for C++. It was originally a spinoff from a research project at Wordtracker, created by Marcus Baker.

1.2. Cgreen - Vanilla or Chocolate?

Test driven development (TDD) really caught on when the JUnit framework for Java spread to other languages, giving us a family of xUnit tools. Cgreen was born in this wave and has many similarities to the xUnit family.

But TDD evolved over time and modern thinking and practice is more along the lines of BDD, an acronym for Behaviour Driven Development, made popular by people like Dan North and frameworks like JBehave, RSpec, Cucumber and Jasmine.

Cgreen follows this trend and has evolved to embrace a BDD-flavoured style of testing. Although the fundamental mechanisms in TDD and 'technical' BDD are much the same, the shift in focus by changing wording from 'tests' to 'behaviour specifications' is very significant.

This document will present Cgreen using the more modern, and in our opinion better, BDD style. In a later section you can have a peek at the classic TDD API.

1.3. Installing Cgreen

There are two ways to install Cgreen in your system.

1.3.1. Installing a package

The first way is to use packages provided by the Cgreen Team. If your system uses a package manager ('apt' or 'port' and so on) there might be a prebuilt package that you can install using your system's package manager.

If no Cgreen package is distributed for your system you can download one from the Cgreen GitHub project and install it using the normal procedures for your system.

Note that, at this point, there are no supported pre-built packages available, so for now you'll have to build from source.

1.3.2. Installing from source

The second way is available for developers and advanced users. Basically this consists of fetching the sources of the project from GitHub (just click on "Download ZIP") and compiling them. To do this you need the CMake build system.

Once you have the CMake tool installed, the steps are:

$ unzip cgreen-master.zip
$ cd cgreen-master
$ make
$ make test
$ make install

The initial make command will configure the build process and create a separate build directory before going there and building using CMake. This is called an 'out of source build'. It compiles Cgreen from outside the sources directory. This helps the overall file organization and enables multi-target builds from the same sources by leaving the complete source tree untouched.

Experienced users may tweak the build configuration by going to the build subdirectory and using ccmake .. to modify the build configuration in that subtree.
The Makefile is only there for convenience; it creates the build directory and invokes CMake there, so that you don't have to. This means that experienced CMake users can just do as they normally do with a CMake-based project instead of invoking make.

The build process will create a library (on Unix called libcgreen.so) which can be used in conjunction with the cgreen.h header file to compile and link your test code. The created library is installed in the system, by default in /usr/local/lib/.
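If your system's dynamic loader does not search /usr/local/lib by default, you may need to tell it where to find the library at run time. One common, but platform-dependent, way is setting LD_LIBRARY_PATH (assuming a Linux-like system):

$ export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH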

1.3.3. Your First Test

We will start demonstrating the use of Cgreen by writing some tests to confirm that everything is working as it should. Let's start with a simple test module with no tests, called first_test.c…

#include <cgreen/cgreen.h>

Describe(Cgreen);
BeforeEach(Cgreen) {}
AfterEach(Cgreen) {}

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    return run_test_suite(suite, create_text_reporter());
}

This is very unexciting. It just creates an empty test suite and runs it. It’s usually easier to proceed in small steps, and this is the smallest one I could think of. The only complication is the cgreen.h header file and the mysterious looking "declarations" at the beginning of the file.

The BDD flavoured Cgreen notation calls for a Subject Under Test (SUT), or a 'context'. The declarations give a context to the tests and also make it more natural to talk about which module or class, the subject under test, is actually responsible for the functionality we are expressing. In one way we are 'describing', or spec'ing, the functionality of the SUT. That's what Describe() does. And for technical reasons (actually requirements of the C language), you must declare the BeforeEach() and AfterEach() functions even if they are empty. (You will get strange errors if you don't!)

We are using the name "Cgreen" as the SUT in these first examples, as if Cgreen itself was the object or class we wanted to test or describe.

I am assuming that you have the Cgreen header directory in the include search path so that compilation works; otherwise you'll need to add it to the compilation command.
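For example, assuming Cgreen was installed under /usr/local (the default mentioned above), compile and link commands with explicit search paths might look something like the following sketch; adjust the paths to match your installation…

$ gcc -I/usr/local/include -c first_test.c
$ gcc first_test.o -L/usr/local/lib -lcgreen -o first_test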

Then, building this test is, of course, trivial…​

$ gcc -c first_test.c
$ gcc first_test.o -lcgreen -o first_test
$ ./first_test

Invoking the executable should give…​

Running "main" (0 tests)...
Completed "main": 0 passes, 0 failures, 0 exceptions in 0ms.

All of the above rather assumes you are working in a Unix-like environment, probably with 'gcc'. The code is pretty much standard C99, so any C compiler should work. Cgreen should compile on all systems that support the sys/msg.h messaging library. It has been tested on Linux, MacOSX, Cygwin and Windows.

So far we have tried compilation, and shown that the test suite actually runs. Let’s add a meaningless test or two so that you can see how it runs…​

#include <cgreen/cgreen.h>

Describe(Cgreen);
BeforeEach(Cgreen) {}
AfterEach(Cgreen) {}

Ensure(Cgreen, passes_this_test) {
    assert_that(1 == 1);
}

Ensure(Cgreen, fails_this_test) {
    assert_that(0 == 1);
}

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Cgreen, passes_this_test);
    add_test_with_context(suite, Cgreen, fails_this_test);
    return run_test_suite(suite, create_text_reporter());
}

A test is denoted by the macro Ensure which takes an optional context (Cgreen) and a, hopefully descriptive, test name (passes_this_test). You add the test to your suite using add_test_with_context().

On compiling and running, we now get the output…​

Running "main" (2 tests)...
first_tests.c:12: Failure: fails_this_test
	Expected [0 == 1] to [be true]

Completed "main": 1 pass, 1 failure, 0 exceptions in 1ms.

The TextReporter, created by the call to create_text_reporter(), is the easiest way to output the test results. It prints the failures as intelligent and expressive text messages on your console.

Of course "0" would never equal "1", but this shows that Cgreen presents the value you expect ([be true]) and the expression that you want to assert ([0 == 1]). We can also see a handy short form for asserting boolean expressions (assert_that(0 == 1);).

1.4. Five Minutes Doing TDD with Cgreen

For a more realistic example we need something to test. We'll pretend that we are writing a function to split the words of a sentence in place. It would do this by replacing any spaces with string terminators and returning the number of conversions plus one. Here is an example of what we have in mind…

char *sentence = strdup("Just the first test");
word_count = split_words(sentence);

The variable sentence should now point at "Just\0the\0first\0test". Not an obviously useful function, but we’ll be using it for something more practical later.

This time around we'll add a little more structure to our tests. Rather than having the test as a stand-alone program, we'll separate the runner from the test cases. That way, multiple test suites of test cases can be included in the main() runner file. This makes it less work to add more tests later.

Here is the, so far empty, test case in words_test.c…​

#include <cgreen/cgreen.h>
#include <cgreen/mocks.h>

#include "words.h"
#include <string.h>

Describe(Words);
BeforeEach(Words) {}
AfterEach(Words) {}

TestSuite *words_tests() {
    TestSuite *suite = create_test_suite();
    return suite;
}

Here is the all_tests.c test runner…​

#include <cgreen/cgreen.h>

TestSuite *words_tests();

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_suite(suite, words_tests());
    if (argc > 1) {
        return run_single_test(suite, argv[1], create_text_reporter());
    }
    return run_test_suite(suite, create_text_reporter());
}

Cgreen has two ways of running tests. The default is to run all tests in their own protected processes. This is what happens if you invoke run_test_suite(). All tests are then completely independent since they run in separate processes, preventing a single run-away test from bringing the whole program down with it. It also ensures that one test cannot leave any state to the next, thus forcing you to set up the prerequisites for each test correctly and clearly.

But if you want to debug any of your tests the constant fork()-ing can make that difficult or impossible. To make debugging simpler, Cgreen does not fork() when only a single test is run by name with the function run_single_test(). If you want to debug, you can obviously set a breakpoint at that test (but note that its actual name has probably been mangled). But since Cgreen does some book-keeping before actually getting to the test, a better place for a breakpoint is the function simply called run().

Building this scaffolding…​

$ gcc -c words_test.c
$ gcc -c all_tests.c
$ gcc words_test.o all_tests.o -lcgreen -o all_tests

…​and executing the result gives the familiar…​

Running "main" (0 tests)...
Completed "words_tests": 0 passes, 0 failures, 0 exceptions in 0ms.
Completed "main": 0 passes, 0 failures, 0 exceptions in 0ms.

Note that we get an extra level of output here: we have both main and words_tests. That's because all_tests.c adds the words test suite to its own (named main since it was created in the function main()). All this scaffolding is pure overhead, but from now on adding tests will be a lot easier.

Here is a first test for split_words() in words_test.c…​

#include <cgreen/cgreen.h>

#include "words.h"
#include <string.h>

Describe(Words);
BeforeEach(Words) {}
AfterEach(Words) {}

Ensure(Words, returns_word_count) {
    char *sentence = strdup("Birds of a feather");
    int word_count = split_words(sentence);
    assert_that(word_count, is_equal_to(4));
    free(sentence);
}

TestSuite *words_tests() {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Words, returns_word_count);
    return suite;
}

The assert_that() macro takes two parameters, the value to assert and a constraint. Constraints come in various forms. In this case we use probably the most common one, is_equal_to(). With the default TextReporter the message is sent to STDOUT.

To get this to compile we need to create the words.h header file…​

int split_words(char *sentence);

…​and to get the code to link we need a stub function in words.c…​

int split_words(char *sentence) {
    return 0;
}

A full build later…​

$ gcc -c all_tests.c
$ gcc -c words_test.c
$ gcc -c words.c
$ gcc all_tests.o words_test.o words.o -lcgreen -o all_tests
$ ./all_tests

…​and we get the more useful response…​

Running "main" (1 tests)...
words_tests.c:13: Failure: words_tests -> returns_word_count
	Expected [word_count] to [equal] [4]
		actual value:			[0]
		expected value:			[4]

Completed "words_tests": 0 passes, 1 failure, 0 exceptions in 1ms.
Completed "main": 0 passes, 1 failure, 0 exceptions in 1ms.

The breadcrumb trail following the "Failure" text is the nesting of the tests. It goes from the test suites, which can be nested in each other, through the test function, and finally to the message from the assertion. In the language of Cgreen, a "failure" is a mismatched assertion, or constraint, and an "exception" occurs when a test fails to complete for any reason, e.g. a segmentation fault.

We could get this to pass just by returning the value 4. Doing TDD in really small steps, you would actually do this, but we’re not teaching TDD here. Instead we’ll go straight to the core of the implementation…​

#include <string.h>

int split_words(char *sentence) {
  int i, count = 1;
  for (i = 0; i < strlen(sentence); i++) {
    if (sentence[i] == ' ') {
      count++;
    }
  }
  return count;
}

Running it gives…​

Running "main" (1 tests)...
Completed "words_tests": 1 pass, 0 failures, 0 exceptions in 2ms.
Completed "main": 1 pass, 0 failures, 0 exceptions in 2ms.

There is actually a hidden problem here, but our tests still passed so we’ll pretend we didn’t notice.

So it’s time to add another test. We want to confirm that the string is broken into separate words…​

...
Ensure(Words, returns_word_count) {
    ...
}

Ensure(Words, converts_spaces_to_zeroes) {
    char *sentence = strdup("Birds of a feather");
    split_words(sentence);
    int comparison = memcmp("Birds\0of\0a\0feather", sentence, strlen(sentence));
    assert_that(comparison, is_equal_to(0));
    free(sentence);
}

Sure enough, we get a failure…​

Running "main" (2 tests)...
words_tests.c:21: Failure: words_tests -> converts_spaces_to_zeroes
	Expected [comparison] to [equal] [0]
		actual value:			[-32]
		expected value:			[0]

Completed "words_tests": 1 pass, 1 failure, 0 exceptions in 1ms.
Completed "main": 1 pass, 1 failure, 0 exceptions in 1ms.

Not surprising given that we haven’t written the code yet.

The fix…​

#include <string.h>

int split_words(char *sentence) {
  int i, count = 1;
  for (i = 0; i < strlen(sentence); i++) {
    if (sentence[i] == ' ') {
      sentence[i] = '\0';
      count++;
    }
  }
  return count;
}

…​reveals our previous hack…​

Running "main" (2 tests)...
words_tests.c:13: Failure: words_tests -> returns_word_count
	Expected [word_count] to [equal] [4]
		actual value:			[2]
		expected value:			[4]

Completed "words_tests": 1 pass, 1 failure, 0 exceptions in 2ms.
Completed "main": 1 pass, 1 failure, 0 exceptions in 2ms.

Our earlier test now fails, because we have affected the strlen() call in our loop. Moving the length calculation out of the loop…​

int split_words(char *sentence) {
  int i, count = 1, length = strlen(sentence);
  for (i = 0; i < length; i++) {
    ...
  }
  return count;
}

…​restores order…​

Running "main" (2 tests)...
Completed "words_tests": 2 passes, 0 failures, 0 exceptions in 1ms.
Completed "main": 2 passes, 0 failures, 0 exceptions in 1ms.

It’s nice to keep the code under control while we are actually writing it, rather than debugging later when things are more complicated.

That was pretty straightforward. Let's do something more interesting.

1.5. What are Mock Functions?

The next example is a more realistic extension of our previous attempts. As in real life we first implement something basic and then we go for the functionality that we need. In this case a function that invokes a callback for each word found in a sentence. Something like…​

void act_on_word(const char *word, void *memo) { ... }
words("This is a sentence", &act_on_word, &memo);

Here the memo pointer is just some accumulated data that the act_on_word() callback might work with. Other people will write the act_on_word() function and probably many other functions like it. The callback is actually a flex point, and not of interest right now.
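Purely as an illustration (this callback is hypothetical and not part of the example we are building), such a callback could use the memo pointer as an accumulator, for instance to count the words it is handed…

/* Hypothetical callback: counts the words it receives, using memo as an accumulator */
void count_word(const char *word, void *memo) {
    int *count = (int *)memo;
    (*count)++;
}

int count = 0;
words("This is a sentence", &count_word, &count);
/* count should now be 4 */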

The function under test is the words() function and we want to make sure it walks the sentence correctly, dispatching individual words as it goes. So which calls are made is very important. How to test this?

Let’s start with a one word sentence. In this case we would expect the callback to be invoked once with the only word, right? Here is the test for that…​

#include <cgreen/cgreen.h>
#include <cgreen/mocks.h>
...
void mocked_callback(const char *word, void *memo) {
    mock(word, memo);
}

Ensure(Words, invokes_callback_once_for_single_word_sentence) {
    expect(mocked_callback,
           when(word, is_equal_to_string("Word")), when(memo, is_null));
    words("Word", &mocked_callback, NULL);
}

TestSuite *words_tests() {
    TestSuite *suite = create_test_suite();
    ...
    add_test_with_context(suite, Words, invokes_callback_once_for_single_word_sentence);
    return suite;
}

What is the funny looking mock() function?

A mock is basically a programmable object. In C objects are limited to functions, so this is a mock function. The macro mock() compares the incoming parameters with any expected values and dispatches messages to the test suite if there is a mismatch. It also returns any values that have been preprogrammed in the test.

The test is invokes_callback_once_for_single_word_sentence(). It programs the mock function using the expect() macro. It expects a single call, and that single call should use the parameters "Word" and NULL. If they don’t match, we will get a test failure.

So when the code under test (our words() function) calls the injected mocked_callback() it in turn will call mock() with the actual parameters.

Of course, we don’t add the mock callback to the test suite; it's not a test.

For a successful compile and link, the words.h file must now look like…​

int split_words(char *sentence);
void words(const char *sentence, void (*callback)(const char *, void *), void *memo);

…​and the words.c file should have the stub…​

void words(const char *sentence, void (*callback)(const char *, void *), void *memo) {
}

This gives us the expected failing test…​

Running "main" (3 tests)...
words_tests.c:33: Failure: words_tests -> invokes_callback_once_for_single_word_sentence
	Expected call was not made to mocked function [mocked_callback]

Completed "words_tests": 2 passes, 1 failure, 0 exceptions in 1ms.
Completed "main": 2 passes, 1 failure, 0 exceptions in 1ms.

Cgreen reports that the callback was never invoked. We can easily get the test to pass by filling out the implementation with…​

void words(const char *sentence, void (*callback)(const char *, void *), void *memo) {
  (*callback)(sentence, memo);
}

That is, we just invoke it once with the whole string. This is a temporary measure to get us moving. For now everything should pass, although it doesn’t drive much functionality yet.

Running "main" (3 tests)...
Completed "words_tests": 4 passes, 0 failures, 0 exceptions in 1ms.
Completed "main": 4 passes, 0 failures, 0 exceptions in 1ms.

That was all pretty conventional, but let’s tackle the trickier case of actually splitting the sentence. Here is the test function we will add to words_test.c…​

Ensure(Words, invokes_callback_for_each_word_in_a_phrase) {
    expect(mocked_callback, when(word, is_equal_to_string("Birds")));
    expect(mocked_callback, when(word, is_equal_to_string("of")));
    expect(mocked_callback, when(word, is_equal_to_string("a")));
    expect(mocked_callback, when(word, is_equal_to_string("feather")));
    words("Birds of a feather", &mocked_callback, NULL);
}

Each call is expected in sequence. Any mismatch, left-over expectation or extra call, and we get failures. We can see all this when we run the tests…

Running "main" (4 tests)...
words_tests.c:38: Failure: words_tests -> invokes_callback_for_each_word_in_a_phrase
	Expected [[word] parameter in [mocked_callback]] to [equal string] ["Birds"]
		actual value:			["Birds of a feather"]
		expected to equal:		["Birds"]

words_tests.c:39: Failure: words_tests -> invokes_callback_for_each_word_in_a_phrase
	Expected call was not made to mocked function [mocked_callback]

words_tests.c:40: Failure: words_tests -> invokes_callback_for_each_word_in_a_phrase
	Expected call was not made to mocked function [mocked_callback]

words_tests.c:41: Failure: words_tests -> invokes_callback_for_each_word_in_a_phrase
	Expected call was not made to mocked function [mocked_callback]

Completed "words_tests": 4 passes, 4 failures, 0 exceptions in 1ms.
Completed "main": 4 passes, 4 failures, 0 exceptions in 1ms.

The first failure tells the story. Our little words() function called the mock callback with the entire sentence. This makes sense, because that was the hack we did to get to the next test.

Although not relevant to this guide, I cannot resist getting these tests to pass. Besides, we get to use the function we created earlier…​

void words(const char *sentence, void (*callback)(const char *, void *), void *memo) {
  char *words = strdup(sentence);
  int word_count = split_words(words);
  char *word = words;
  while (word_count-- > 0) {
    (*callback)(word, memo);
    word = word + strlen(word) + 1;
  }
  free(words);
}

And with some work we are rewarded with…​

Running "main" (4 tests)...
Completed "words_tests": 8 passes, 0 failures, 0 exceptions in 1ms.
Completed "main": 8 passes, 0 failures, 0 exceptions in 1ms.

More work than I like to admit as it took me three goes to get this right. I first forgot the + 1 added on to strlen(), then forgot to swap sentence for word in the (*callback)() call, and finally, third time lucky. Of course running the tests each time made these mistakes very obvious. It's taken me far longer to write these paragraphs than it has to write the code.

1.6. Using Cgreen with C++

The above example, as well as most of this guide, shows how to use Cgreen with C. You can also use Cgreen with C++. This is actually quite simple. If you have installed the Cgreen library for C++ all you have to do is

  • Use the cgreen name space by adding using namespace cgreen; at the beginning of the file with your tests

There is also one extra feature when you use C++, the assert_throws function.

2. Building Cgreen test suites

Cgreen is a tool for building unit tests in the C or C++ languages. These are usually written alongside the production code by the programmer to prevent bugs. Even though the test suites are created by software developers, they are intended to be human readable C code, as part of their function is an executable specification. Used in this way, the test harness delivers constant quality assurance.

In other words you’ll get fewer bugs.

2.1. Writing Basic Tests

Cgreen tests are simply C, or C++, functions with no parameters and no return value. To signal that they actually are tests we mark them with the Ensure macro. An example might be…​

Ensure(Strlen, returns_five_for_hello) {
    assert_that(strlen("Hello"), is_equal_to(5));
}

The Ensure macro takes two arguments (in the BDD style) where the first is the Subject Under Test (SUT) which must be declared with the Describe macro.

Describe(Strlen);

The second argument is the test name and can be anything you want as long as it fulfills the rules for an identifier in C and C++. A typical way to choose the names of the tests is what we see here: reading the declaration of the test makes sense since it is almost plain English, "Ensure strlen returns five for 'hello'". No problem understanding what we aim to test. And it can be viewed as an example from a description of what strlen should be able to do. In a way, extracting all the Ensure:s from your tests might give you all the documentation you'll need.

The assert_that() call is the primary part of an assertion, which is complemented with a constraint, in this case is_equal_to(), as a parameter. This makes for a very fluent interface to the assertions, one that actually reads like English.

Sometimes you just want to fail the test explicitly, and there is a function for that too, fail_test(const char *message). And there is a function to explicitly pass, pass_test(void).
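To illustrate, here is a sketch of a small helper assertion of your own built on top of these; the helper and its test are made-up examples, not part of Cgreen…

/* Hypothetical helper assertion built on fail_test()/pass_test() */
static void assert_is_even(int value) {
    if (value % 2 != 0)
        fail_test("value was expected to be even");
    else
        pass_test();
}

Ensure(Strlen, has_an_even_length_for_hiya) {
    assert_is_even(strlen("Hiya"));
}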

Assertions send messages to Cgreen, which in turn outputs the results.

2.2. The Standard Constraints

Here are the standard constraints…​

Constraint                                    Passes if actual value/expression…

is_true                                       evaluates to true
is_false                                      evaluates to false
is_null                                       equals null
is_non_null                                   is a non-null value
is_equal_to(value)                            '== value'
is_not_equal_to(value)                        '!= value'
is_greater_than(value)                        '> value'
is_less_than(value)                           '< value'
is_equal_to_contents_of(pointer, size)        matches the data pointed to by pointer to a size of size bytes
is_not_equal_to_contents_of(pointer, size)    does not match the data pointed to by pointer to a size of size bytes
is_equal_to_string(value)                     is equal when compared using strcmp()
is_not_equal_to_string(value)                 is not equal when compared using strcmp()
contains_string(value)                        contains value when evaluated using strstr()
does_not_contain_string(value)                does not contain value when evaluated using strstr()
begins_with_string(value)                     starts with the string value
is_equal_to_double(value)                     is equal to value within the number of significant digits (which you can set with a call to significant_figures_for_assert_double_are(int figures))
is_not_equal_to_double(value)                 is not equal to value within the number of significant digits
is_less_than_double(value)                    '< value' within the number of significant digits
is_greater_than_double(value)                 '> value' within the number of significant digits

The boolean assertion macros accept an int value. The equality assertions accept anything that can be cast to intptr_t and simply perform an == operation. The string comparisons are slightly different in that they use the <string.h> library function strcmp(). If is_equal_to() is used on char * pointers then the pointers have to point at the same string to pass.

A cautionary note about the constraints is that you cannot use C/C++ string literal concatenation (like "don’t" "use" "string" "concatenation") in the parameters to the constraints. If you do, you will get weird error messages about missing arguments to the constraint macros. This is caused by the macros using argument strings to produce nice failure messages.
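To make the constraints more concrete, here is a small sketch of a test using a few of them (the test name and values are made up for illustration)…

Ensure(Strlen, demonstrates_some_constraints) {
    const char *greeting = "Hello world";
    assert_that(strlen(greeting), is_greater_than(5));
    assert_that(greeting, begins_with_string("Hello"));
    assert_that(greeting, contains_string("world"));
    assert_that(greeting, is_not_equal_to_string("Goodbye"));
    assert_that(strchr(greeting, 'q'), is_null);   /* no 'q' in the greeting */
}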

2.3. Asserting C++ Exceptions

When you use CGreen with C++ there is one extra assertion available:

Assertion                               Description

assert_throws(exception, expression)    Passes if evaluating expression throws exception
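A minimal sketch of how this could look, assuming using namespace cgreen; as mentioned earlier and using std::vector as an arbitrary example of code that throws (the context name is made up)…

#include <cgreen/cgreen.h>
#include <vector>
#include <stdexcept>

using namespace cgreen;

Describe(Exceptions);
BeforeEach(Exceptions) {}
AfterEach(Exceptions) {}

Ensure(Exceptions, throws_out_of_range_for_bad_index) {
    std::vector<int> numbers(3);
    // at() throws std::out_of_range for an index outside the vector
    assert_throws(std::out_of_range, numbers.at(10));
}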

2.4. BDD Style vs. TDD Style

So far we have encouraged the modern BDD style. It has merits that we really want you to benefit from. But you might come across another style, the standard TDD style, which is more in line with previous thinking and might be more similar to other frameworks.

The only difference, in principle, is the use of the SUT or 'context'. In the BDD style you have it, in the TDD style you don’t.

BDD style:
Describe(Strlen);                                                 (1)
BeforeEach(Strlen) {}                                             (2)
AfterEach(Strlen) {}                                              (3)

Ensure(Strlen, returns_five_for_hello) {                          (4)
    assert_that(strlen("Hello"), is_equal_to(5));
}

TestSuite *our_tests() {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Strlen, returns_five_for_hello); (5)
    return suite;
}
1 The Describe macro must name the SUT
2 The BeforeEach function…​
3 …​ and the AfterEach functions must exist and name the SUT
4 The test needs to name the SUT
5 Adding to the test suite
You can only have tests for a single SUT in the same source file.

If you use the older pure-TDD style you skip the Describe macro, the BeforeEach and AfterEach functions. You don’t need a SUT in the Ensure() macro or when you add the test to the suite.

TDD style:
                                                               (1)
Ensure(strlen_returns_five_for_hello) {                        (2)
    assert_that(strlen("Hello"), is_equal_to(5));
}

TestSuite *our_tests() {
    TestSuite *suite = create_test_suite();
    add_test(suite, strlen_returns_five_for_hello);            (3)
    return suite;
}
1 No Describe, BeforeEach() or AfterEach()
2 No SUT/context in the Ensure() macro
3 No SUT/context in add_test() and you should use this function instead of ..with_context().
You might think of the TDD style as the BDD style with a default SUT or context.

2.5. Legacy Style Assertions

Cgreen has been around for a while, and has developed and matured. There is an older style of assertions that was the initial version, a style that we now call the 'legacy style', because it was more aligned with the original, now older, unit test frameworks. If you are not interested in historical artifacts, I recommend that you skip this section.

But for completeness of documentation, here are the legacy style assertion macros:

Assertion                                  Description

assert_true(boolean)                       Passes if boolean evaluates true
assert_false(boolean)                      Fails if boolean evaluates true
assert_equal(first, second)                Passes if 'first == second'
assert_not_equal(first, second)            Passes if 'first != second'
assert_string_equal(char *, char *)        Uses 'strcmp()' and passes if the strings are equal
assert_string_not_equal(char *, char *)    Uses 'strcmp()' and fails if the strings are equal

Each assertion has a default message comparing the two values. If you want to substitute your own failure messages, then you must use the *_with_message() counterparts…​

  • assert_true_with_message(boolean, message, …)

  • assert_false_with_message(boolean, message, …)

  • assert_equal_with_message(tried, expected, message, …)

  • assert_not_equal_with_message(tried, unexpected, message, …)

  • assert_string_equal_with_message(char *, char *, message, …)

  • assert_string_not_equal_with_message(char *, char *, message, …)

All these assertions have an additional char * message parameter, which is the message you wished to display on failure. If this is set to NULL, then the default message is shown instead. The most useful assertion from this group is assert_true_with_message() as you can use that to create your own assertion functions with your own messages.

Actually the assertion macros have variable argument lists. The failure message acts like the template in printf(). We could change the test above to be…​

Ensure(strlen_of_hello_is_five) {
    const char *greeting = "Hello";
    int length = strlen(greeting);
    assert_equal_with_message(length, 5, "[%s] should be 5, but was %d", greeting, length);
}

This should produce a slightly more user friendly message when things go wrong. But, actually, Cgreen's default messages are so good that you are encouraged to skip the legacy style and go for the more modern constraint-style assertions, particularly in conjunction with the BDD style test notation.

We strongly recommend the use of BDD Style notation with constraints based assertions.

2.6. A Runner

The tests are only run through running a test suite in some form. We can create and run one especially for this test like so…​ (But see also Automatic Test Discovery.)

TestSuite *our_tests() {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Strlen, returns_five_for_hello);
    return suite;
}

In case you have spotted that the reference to returns_five_for_hello should have an ampersand in front of it: add_test_with_context() is actually a macro, so the & is added automatically. Furthermore, the Ensure() macro actually mangles the test's name, so it is not actually a function name. (This might also make the tests a bit difficult to find in the debugger….)

To run the test suite, we call run_test_suite() on it. So we can just write…​

    return run_test_suite(our_tests(), create_text_reporter());

The results of assertions are ultimately delivered as passes and failures to a collection of callbacks defined in a TestReporter structure. There is a predefined TestReporter in Cgreen called the TextReporter that delivers messages in plain text like we have already seen.

The return value of run_test_suite() is a standard C library/Unix exit code that can be returned directly by the main() function.

The complete test code now looks like…​

#include <cgreen/cgreen.h>
#include <string.h>

Describe(Strlen);
BeforeEach(Strlen) {}
AfterEach(Strlen) {}

Ensure(Strlen, returns_five_for_hello) {
    assert_that(strlen("Hello"), is_equal_to(5));
}

TestSuite *our_tests() {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Strlen, returns_five_for_hello);
    return suite;
}

int main(int argc, char **argv) {
    return run_test_suite(our_tests(), create_text_reporter());
}

Compiling and running gives…​

$ gcc -c strlen_test.c
$ gcc strlen_test.o -lcgreen -o strlen_test
$ ./strlen_test
Running "our_tests" (1 tests)...
Completed "our_tests": 1 pass, 0 failures, 0 exceptions in 1ms.

We can see that the outer test suite is called our_tests since it was in our_tests() that we created the test suite. There are no messages shown unless there are failures. So, let's break our test to see it…

Ensure(Strlen, returns_five_for_hello) {
    assert_that(strlen("Hiya"), is_equal_to(5));
}

…​we’ll get the helpful message…​

Running "our_tests" (1 tests)...
strlen_tests.c:9: Failure: returns_five_for_hello
	Expected [strlen("Hiya")] to [equal] [5]
		actual value:			[4]
		expected value:			[5]

Completed "our_tests": 0 passes, 1 failure, 0 exceptions in 1ms.

Cgreen starts every message with the location of the test failure so that the usual error message identifying tools (like Emacs’s next-error) will work out of the box.

Once we have a basic test scaffold up, it’s pretty easy to add more tests. Adding a test of strlen() with an empty string for example…​

...
Ensure(Strlen, returns_zero_for_empty_string) {
    assert_equal(strlen("\0"), 0);
}

TestSuite *our_tests() {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Strlen, returns_five_for_hello);
    add_test_with_context(suite, Strlen, returns_zero_for_empty_string);
    return suite;
}
...

And so on.

2.7. BeforeEach and AfterEach

It’s common for test suites to have a lot of duplicate code, especially when setting up similar tests. Take this database code for example…​

#include <cgreen/cgreen.h>
#include <stdlib.h>
#include <mysql.h>
#include "person.h"

Describe(Person);
BeforeEach(Person) {}
AfterEach(Person) {}

static void create_schema() {
    MYSQL *connection = mysql_init(NULL);
    mysql_real_connect(connection, "localhost", "me", "secret", "test", 0, NULL, 0);
    mysql_query(connection, "create table people (name, varchar(255) unique)");
    mysql_close(connection);
}

static void drop_schema() {
    MYSQL *connection = mysql_init(NULL);
    mysql_real_connect(connection, "localhost", "me", "secret", "test", 0, NULL, 0);
    mysql_query(connection, "drop table people");
    mysql_close(connection);
}

Ensure(Person, can_add_person_to_database) {
    create_schema();
    Person *person = create_person();
    set_person_name(person, "Fred");
    save_person(person);
    Person *found = find_person_by_name("Fred");
    assert_that(get_person_name(found), is_equal_to_string("Fred"));
    drop_schema();
}

Ensure(Person, cannot_add_duplicate_person) {
    create_schema();
    Person *person = create_person();
    set_person_name(person, "Fred");
    assert_that(save_person(person), is_true);
    Person *duplicate = create_person();
    set_person_name(duplicate, "Fred");
    assert_that(save_person(duplicate), is_false);
    drop_schema();
}

TestSuite *person_tests() {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Person, can_add_person_to_database);
    add_test_with_context(suite, Person, cannot_add_duplicate_person);
    return suite;
}

int main(int argc, char **argv) {
    return run_test_suite(person_tests(), create_text_reporter());
}

We have already factored out the duplicate code into its own functions create_schema() and drop_schema(), so things are not so bad. At least not yet. But what happens when we get dozens of tests? For a test subject as complicated as a database ActiveRecord, having dozens of tests is very likely.

We can get Cgreen to do some of the work for us by calling these methods before and after each test in the test suite.

Here is the new version…​

...
static void create_schema() {
    ...
}

static void drop_schema() {
    ...
}

Describe(Person);
BeforeEach(Person) { create_schema(); }
AfterEach(Person) { drop_schema(); }

Ensure(Person, can_add_person_to_database) {
    Person *person = create_person();
    set_person_name(person, "Fred");
    save_person(person);
    Person *found = find_person_by_name("Fred");
    assert_that(get_person_name(found), is_equal_to_string("Fred"));
}

Ensure(Person, cannot_add_duplicate_person) {
    Person *person = create_person();
    set_person_name(person, "Fred");
    assert_that(save_person(person), is_true);
    Person *duplicate = create_person();
    set_person_name(duplicate, "Fred");
    assert_that(save_person(duplicate), is_false);
}

TestSuite *person_tests() {
...

With this new arrangement Cgreen runs the create_schema() function before each test, and the drop_schema() function after each test. This saves some repetitive typing and reduces the chance of accidents. It also makes the tests more focused.

The reason we try so hard to strip everything out of the test functions is the fact that the test suite acts as documentation. In our person.h example we can easily see that Person has some kind of name property, and that this value must be unique. For the tests to act like a readable specification we have to remove as much mechanical clutter as we can.

In this particular case there are more lines that we could move from the tests to BeforeEach():

    Person *person = create_person();
    set_person_name(person, "Fred");

Of course that would require an extra variable, and it might make the tests less clear. And as we add more tests, it might turn out to not be common to all tests. This is a typical judgement call that you often get to make with BeforeEach() and AfterEach().

If you use the pure-TDD notation, not having the test subject named by the Describe macro, you can’t have the BeforeEach() and AfterEach() either. In this case you can still run a function before and after every test. Just nominate any void(void) function by calling the function set_setup() and/or set_teardown() with the suite and the function that you want to run before/after each test, e.g. in the example above set_setup(suite, create_schema); and set_teardown(suite, drop_schema);.
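As a sketch, the Person suite from earlier could be written in pure-TDD style like this, reusing the same fixture functions…

TestSuite *person_tests() {
    TestSuite *suite = create_test_suite();
    set_setup(suite, create_schema);      /* run before every test in the suite */
    set_teardown(suite, drop_schema);     /* run after every test in the suite */
    add_test(suite, can_add_person_to_database);
    add_test(suite, cannot_add_duplicate_person);
    return suite;
}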

A couple of details. There is only one BeforeEach() and one AfterEach() allowed in each TestSuite. Also, the AfterEach() function may not be run if the test crashes, causing some test interference. This brings us nicely onto the next section…​

2.8. Each Test in its Own Process

Consider this test method…​

Ensure(CrashExample, seg_faults_for_null_dereference) {
    int *p = NULL;
    (*p)++;
}

Crashes are not something you would normally want to have in a test run. Not least because it will stop you receiving the very test output you need to tackle the problem.

To prevent segmentation faults and other problems bringing down the test suites, Cgreen runs every test in its own process.

Just before calling the BeforeEach() (or setup) function, Cgreen fork()s. The main process waits for the test to complete normally or die. This includes calling the AfterEach() (or teardown) function, if any. If the test process dies, an exception is reported and the main test process carries on.

For example…​

#include <cgreen/cgreen.h>
#include <stdlib.h>

Describe(CrashExample);
BeforeEach(CrashExample) {}
AfterEach(CrashExample) {}

Ensure(CrashExample, seg_faults_for_null_dereference) {
    int *p = NULL;
    (*p)++;
}

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, CrashExample, seg_faults_for_null_dereference);
    return run_test_suite(suite, create_text_reporter());
}

When built and run, this gives…​

Running "main" (1 tests)...
crash_tests.c:8: Exception: seg_faults_for_null_dereference
	Test terminated with signal: Segmentation fault: 11

Completed "main": 0 passes, 0 failures, 1 exception in 1447ms.

The obvious thing to do now is to fire up the debugger. Unfortunately, the constant fork()-ing of Cgreen can be one complication too many when debugging. Finding the bug is hard enough as it is.

To get around this, and also to allow the running of one test at a time, Cgreen has the run_single_test() function. The signatures of the two run methods are…​

  • int run_test_suite(TestSuite *suite, TestReporter *reporter);

  • int run_single_test(TestSuite *suite, char *test, TestReporter *reporter);

The extra parameter of run_single_test(), the test string, is the name of the test to select. This could be any test, even in nested test suites (see below). Here is how we would use it to debug our crashing test…​

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, CrashExample, seg_faults_for_null_dereference);
    return run_single_test(suite, "seg_faults_for_null_dereference", create_text_reporter());
}

When run in this way, Cgreen will not fork().

The function run() is a good place to place a breakpoint.

The following is a typical session:

$ gdb crash2
...
(gdb) break main
(gdb) run
...
(gdb) break run
(gdb) continue
...
Running "main" (1 tests)...

Breakpoint 2, run_the_test_code (suite=suite@entry=0x2003abb0,
    spec=spec@entry=0x402020 <CgreenSpec__CrashExample__seg_faults_for_null_dereference__>,
    reporter=reporter@entry=0x2003abe0) at /cygdrive/c/Users/Thomas/Utveckling/Cgreen/cgreen/src/runner.c:270
270         run(spec);
(gdb) step
run (spec=0x402020 <CgreenSpec__CrashExample__seg_faults_for_null_dereference__>)
    at /cygdrive/c/Users/Thomas/Utveckling/Cgreen/cgreen/src/runner.c:217
217             spec->run();
(gdb) step
CrashExample__seg_faults_for_null_dereference () at crash_test2.c:9
9           int *p = NULL;
(gdb) step
10          (*p)++;
(gdb) step

Program received signal SIGSEGV, Segmentation fault.
0x004011ea in CrashExample__seg_faults_for_null_dereference () at crash_test2.c:10
10          (*p)++;

Which shows exactly where the problem is.

This deals with the case where your code throws an exception like segmentation fault, but what about a process that fails to complete by getting stuck in a loop?

Well, Cgreen will wait forever too. But, using the C signal handlers, we can place a time limit on the process by sending it an interrupt. To save us writing this ourselves, Cgreen includes the die_in() function to help us out.

Here is an example of time limiting a test…​

...
Ensure(CrashExample, seg_faults_for_null_dereference) {
    ...
}

Ensure(CrashExample, will_loop_forever) {
    die_in(1);
    while(0 == 0) { }
}

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, CrashExample, seg_faults_for_null_dereference);
    add_test_with_context(suite, CrashExample, will_loop_forever);
    return run_test_suite(suite, create_text_reporter());
}

When executed, the code will stall for a second, and then finish with…

Running "main" (2 tests)...
crash_tests.c:8: Exception: seg_faults_for_null_dereference
	Test terminated with signal: Segmentation fault: 11

crash_tests.c:13: Exception: will_loop_forever
	Test terminated unexpectedly, likely from a non-standard exception or Posix signal

Completed "main": 0 passes, 0 failures, 2 exceptions in 1087ms.

Note that you see the test results as they come in. Cgreen streams the results as they happen, making it easier to figure out where the test suite has problems.

Of course, if you want to set a general time limit on all your tests, then you can add a die_in() to a BeforeEach() (or setup()) function. Cgreen will then apply the limit to each of the tests in that context, of course.
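For example, a limit for every test in the CrashExample context could be set up like this (two seconds is an arbitrary choice)…

BeforeEach(CrashExample) {
    die_in(2);    /* every test in this context must finish within two seconds */
}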

Another possibility is the use of an environment variable named CGREEN_TIMEOUT_PER_TEST which, if set to a number, will apply that timeout to every test in the run.
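Assuming the value is interpreted in seconds, just like the argument to die_in(), such a run might look like…

$ CGREEN_TIMEOUT_PER_TEST=2 ./all_tests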

2.9. Building Composite Test Suites

The TestSuite is a composite structure. This means test suites can be added to test suites, building a tree structure that will be executed in order.

Let’s combine the strlen() tests with the Person tests above. Firstly we need to remove the main() functions. E.g…​

Ensure(Strlen, returns_five_for_hello) {
   ...
}

Ensure(Strlen, returns_zero_for_empty_string) {
   ...
}

TestSuite *our_tests() {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Strlen, returns_five_for_hello);
    add_test_with_context(suite, Strlen, returns_zero_for_empty_string);
    return suite;
}

Then we can write a small runner with a new main() function…​

#include <cgreen/cgreen.h>

TestSuite *our_tests();
TestSuite *person_tests();

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_suite(suite, our_tests());
    add_suite(suite, person_tests());
    if (argc > 1) {
        return run_single_test(suite, argv[1], create_text_reporter());
    }
    return run_test_suite(suite, create_text_reporter());
}

It’s usually easier to place the TestSuite prototypes directly in the runner source, rather than have lots of header files. This is the same reasoning that let us drop the prototypes for the test functions in the actual test scripts. We can get away with this, because the tests are more about documentation than encapsulation.

As we saw above, we can run a single test using the run_single_test() function, and we’d like to be able to do that from the command line. So we added a simple if block to take the test name as an optional argument. The entire test suite will be searched for the named test. This trick also saves us a recompile when we debug.
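For example, with the suites above, the Strlen test could be picked out by name like this (the argument must match the name of a test added to the suites)…

$ ./all_tests returns_five_for_hello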

When you use the BDD notation you can only have a single test subject (which is actually equivalent of a suite) in a single file because you can only have one Describe() macro in each file. But using this strategy you can create composite suites that takes all your tests and run them in one go.

Rewrite pending. The next couple of sections do not reflect the current best thinking. They are remnants of the TDD notation. Using BDD notation you would create separate contexts, each in its own file, with separate names, for each of the fixture cases.
If you use the TDD (non-BDD) notation you can build several test suites in the same file, even nesting them. We can even add mixtures of test functions and test suites to the same parent test suite. Loops will give trouble, however.
If we do place several suites in the same file, then all the suites will be named the same in the breadcrumb trail in the test message. They will all be named after the function the create call sits in. If you want to get around this, or you just like to name your test suites, you can use create_named_test_suite() instead of create_test_suite(). This takes a single string parameter. In fact create_test_suite() is just a macro that inserts the __func__ constant into create_named_test_suite().

What happens to setup and teardown functions in a TestSuite that contains other TestSuite:s?

Well firstly, Cgreen does not fork() when running a suite. It leaves it up to the child suite to fork() the individual tests. This means that a setup and teardown will run in the main process. They will be run once for each child suite.

We can use this to speed up our Person tests above. Remember we were creating a new connection and closing it again in the fixtures. This means opening and closing a lot of connections. At the slight risk of some test interference, we could reuse the connection across tests…

...
static MYSQL *connection;

static void create_schema() {
    mysql_query(connection, "create table people (name, varchar(255) unique)");
}

static void drop_schema() {
    mysql_query(connection, "drop table people");
}

Ensure(can_add_person_to_database) { ... }
Ensure(cannot_add_duplicate_person) { ... }

void open_connection() {
    connection = mysql_init(NULL);
    mysql_real_connect(connection, "localhost", "me", "secret", "test", 0, NULL, 0);
}

void close_connection() {
    mysql_close(connection);
}

TestSuite *person_tests() {
    TestSuite *suite = create_test_suite();
    set_setup(suite, create_schema);
    set_teardown(suite, drop_schema);
    add_test(suite, can_add_person_to_database);
    add_test(suite, cannot_add_duplicate_person);

    TestSuite *fixture = create_named_test_suite("Mysql fixture");
    add_suite(fixture, suite);
    set_setup(fixture, open_connection);
    set_teardown(fixture, close_connection);
    return fixture;
}

The trick here is creating a test suite as a wrapper whose sole purpose is to wrap the main test suite in the fixture. This is our 'fixture' pointer. This code is a little confusing, because we have two sets of fixtures in the same test script.

We have the MySQL connection fixture. This runs open_connection() and close_connection() just once at the beginning and end of the person tests. This is because the suite pointer is the only member of fixture.

We also have the schema fixture, the create_schema() and drop_schema(), which is run before and after every test. Those are still attached to the inner suite.

In the real world we would probably place the connection fixture in its own file…​

static MYSQL *connection;

MYSQL *get_connection() {
    return connection;
}

static void open_connection() {
    connection = mysql_init(NULL);
    mysql_real_connect(connection, "localhost", "me", "secret", "test", 0, NULL, 0);
}

static void close_connection() {
    mysql_close(connection);
}

TestSuite *connection_fixture(TestSuite *suite) {
    TestSuite *fixture = create_named_test_suite("Mysql fixture");
    add_suite(fixture, suite);
    set_setup(fixture, open_connection);
    set_teardown(fixture, close_connection);
    return fixture;
}

This allows the reuse of common fixtures across projects.

3. Automatic Test Discovery

3.1. Forgot to Add Your Test?

When we write a new test we focus on the details about the test we are trying to write. And writing tests is no trivial matter so this might well take a lot of brain power.

So, it comes as no big surprise that sometimes you write your test and then forget to add it to the suite. When we run it, it appears that it passed on the first try! Although this should really make you suspicious, sometimes you get so happy that you just continue churning out more tests and more code. It's not until some (possibly looong) time later that you realize, after much headache and debugging, that the test did not actually pass. It was never even run!

There are practices to minimize the risk of this happening, such as always running the test as soon as you have set it up. This way you will see it fail before you try to get it to pass.

But it is still a practice, something we, as humans, might fail to do at some point. Usually this happens when we are most stressed and in need of certainty.

3.2. The Solution - the 'cgreen-runner'

Cgreen gives you a tool to avoid not only the risk of this happening, but also the extra work and extra code. It is called the cgreen-runner.

The cgreen-runner should come with your Cgreen installation if your platform supports the technique that is required, which is 'programmatic access to dynamic loading of libraries'. This means that a program can load an external library of code into memory and inspect it. A kind of self-inspection, or reflection.

So all you have to do is to build a dynamically loadable library of all tests (and of course your objects under test and other necessary code). Then you can run the cgreen-runner and point it to the library. The runner will then load the library, enumerate all tests in it, and run every test.

It’s automatic, and there is nothing to forget.

3.3. Using the Runner

Assuming your tests are in first_test.c the typical command to build your library using gcc would be

$ gcc -shared -o first_test.so -fPIC first_test.c -lcgreen

The -fPIC means to generate 'position independent code' which is required if you want to load the library dynamically.
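The code under test can simply be compiled into the same library. For the words example from earlier, a build might look something like this sketch…

$ gcc -shared -o words_tests.so -fPIC words_test.c words.c -lcgreen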

How to build a dynamically loadable shared library might vary a lot depending on your platform. Can’t really help you there, sorry!

As soon as we have linked it we can run the tests using the cgreen-runner by just giving it the shared, dynamically loadable, object library as an argument:

$ cgreen-runner first_test.so
Running "first_tests" (2 tests)...
first_tests.c:12: Failure: Cgreen -> fails_this_test
	Expected [0 == 1] to [be true]

Completed "Cgreen": 1 pass, 1 failure, 0 exceptions in 1ms.
Completed "first_tests": 1 pass, 1 failure, 0 exceptions in 1ms.

More or less exactly the same output as when we ran our first test in the beginning of this quickstart tutorial. We can see that the top level of the tests will be named as the library it was discovered in, and the second level is the context for our Subject Under Test, in this case 'Cgreen'. We also see that the context is mentioned in the failure message, giving a fairly obvious 'Cgreen → fails_this_test'.

Now we can actually delete the main function in our source code. We don’t need all this:

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Cgreen, passes_this_test);
    add_test_with_context(suite, Cgreen, fails_this_test);
    return run_test_suite(suite, create_text_reporter());
}

It always feels good to delete code, right?

We can also select which test to run:

$ cgreen-runner first_test.so Cgreen:fails_this_test
Running "first_tests" (1 tests)...
first_tests.c:12: Failure: Cgreen -> fails_this_test
	Expected [0 == 1] to [be true]

Completed "Cgreen": 0 passes, 1 failure, 0 exceptions in 0ms.
Completed "first_tests": 0 passes, 1 failure, 0 exceptions in 0ms.

We recommend the BDD notation when discovering tests. You indicate which context the test you want to run is in, in this case Cgreen, so the test should be referred to as Cgreen:fails_this_test.

If you don’t use the BDD notation there is actually a context anyway, it is called default.

3.4. Cgreen Runner Options

Once you get the build set up right for the cgreen-runner everything is fairly straight-forward. But you have a few options:

--xml <prefix>

Instead of messages on stdout with the TextReporter, write results into one XML-file per suite or context, compatible with Hudson/Jenkins CI. The filename(s) will be <prefix>-<suite>.xml

--suite <name>

Name the top level suite

--no-run

Don’t run the tests

--verbose

Show progress information and list discovered tests

--colours

Use colours (or colors) to emphasise results (requires an ANSI-capable terminal)

--quiet

Be more quiet

The verbose option is particularly handy since it will give you the actual names of all tests discovered. So if you have long test names you can avoid mistyping them by copying and pasting from the output of cgreen-runner --verbose. It will also give the mangled name of the test which should make it easier to find in the debugger. Here’s an example:

Discovered Cgreen:fails_this_test (Cgreen__fails_this_test)
Discovered Cgreen:passes_this_test (Cgreen__passes_this_test)
Discovered 2 test(s)
Opening [first_tests.so] to only run one test: 'Cgreen:fails_this_test' ...
Running "first_tests" (1 tests)...
first_tests.c:12: Failure: Cgreen -> fails_this_test
	Expected [0 == 1] to [be true]

Completed "Cgreen": 0 passes, 1 failure, 0 exceptions in 0ms.
Completed "first_tests": 0 passes, 1 failure, 0 exceptions in 0ms.

3.5. Selecting Tests To Run

You can name a single test to be run by giving it as the last argument on the command line. The name should be in the format <SUT>:<test>. If that is not obvious, you can get the name by using the --verbose command line option, which will show you all tests discovered with both their C/C++ and Cgreen names. Copying the Cgreen name from that output is an easy way to run only that particular test. When a single test is named it is run using run_single_test(). As described in Five Minutes Doing TDD with Cgreen this means that it is not protected by fork()-ing it to run in its own process.

The cgreen-runner supports selecting tests with limited pattern matching. Using an asterisk as a simple 'match many' symbol you can say things like

$ cgreen-runner <library> Cgreen:*
$ cgreen-runner <library> C*:*this*

3.6. Multiple Test Libraries

You can also run tests in multiple libraries in one go by adding them to the cgreen-runner command:

$ cgreen-runner first_set.so second_set.so ...

3.7. Setup, Teardown and Custom Reporters

The cgreen-runner will only run setup and teardown functions if you use the BDD-ish style with BeforeEach() and AfterEach() as described above. The runner does not pick up setup() and teardown() added to suites, because it actually doesn’t run suites. It discovers all tests and runs them one by one. The macros required by the BDD-ish style ensure that the corresponding BeforeEach() and AfterEach() are run before and after each test.

The cgreen-runner will discover your tests in a shared library even if you don’t use the BDD-ish style, but it will not run the setup() and/or teardown() attached to your suite(s). If you have non-BDD style tests without any setup() or teardown() you can still use the runner; the default suite/context where such tests live is called default. But why not convert your tests to BDD notation? This would save you from the frustrating trouble-shooting you risk when you add setup() and teardown() and can’t understand why they are not run…

So, the runner encourages you to use the BDD notation. Since we recommend that anyway, this is no extra problem if you are starting out from scratch. And see Changing Style for some easy tips on how to get there if you already have non-BDD tests.

You can choose between the TextReporter, which we have been seeing so far, and the built-in JUnit/Ant compatible XML-reporter using the --xml option. But it is not currently possible to use custom reporters as outlined in Changing Cgreen Reporting with the runner.
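For example, assuming the first_tests.so library from earlier, the following should write one JUnit/Ant compatible XML file per suite/context, named according to the <prefix>-<suite>.xml convention:

$ cgreen-runner --xml TEST first_tests.so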

If you require another custom reporter you need to resort to the standard, programmatic, way of invoking your tests. For now…

4. Mocking functions with Cgreen

When testing you want certainty above all else. Random events destroy confidence in your test suite and force needless extra runs "to be sure". A good test places the subject under test into a tightly controlled environment. A test chamber if you like. This makes the tests fast, repeatable and reliable.

To create a test chamber for testing code, we have to control any outgoing calls from the code under test. We won’t believe our test failure if our code is making calls to the internet for example. The internet can fail all by itself. Not only do we not have total control, but it means we have to get dependent components working before we can test the higher level code. This makes it difficult to code top down.

The solution to this dilemma is to write stub code for the components whilst the higher level code is written. This pollutes the code base with temporary code, and the test isolation disappears when the system is eventually fleshed out.

The ideal is to have minimal stubs written for each individual test. Cgreen encourages this approach by making such tests easier to write.

4.1. The Problem with Streams

How would we test the following code…​?

char *read_paragraph(int (*read)(void *), void *stream) {
    int buffer_size = 0, length = 0;
    char *buffer = NULL;
    int ch;
    while ((ch = (*read)(stream)) != EOF) {
        if (++length > buffer_size) {
            buffer_size += 100;
            buffer = (char *)realloc(buffer, buffer_size + 1);
        }
        if ((buffer[length] = ch) == '\n') {
            break;
        }
        buffer[length + 1] = '\0';
    }
    return buffer;
}

This is a fairly generic stream filter that turns the incoming characters into C string paragraphs. Each call creates one paragraph, returning a pointer to it or returning NULL if there is no paragraph. The paragraph has memory allocated to it and the stream is advanced ready for the next call. That’s quite a bit of functionality, and there are plenty of nasty boundary conditions. I really want this code tested before I deploy it.

The problem is the stream dependency. We could use a real stream, but that will cause all sorts of headaches. It makes the test of our paragraph formatter dependent on a working stream. It means we have to write the stream first, bottom up coding rather than top down. It means we will have to simulate stream failures - not easy. It will also mean setting up external resources. This is more work, will run slower, and could lead to spurious test failures.

By contrast, we could write a simulation of the stream for each test, called a "server stub".

For example, when the stream is empty nothing should happen. We hopefully get NULL from read_paragraph when the stream is exhausted. That is, it just returns a steady stream of `EOF`s.

static int empty_stream(void *stream) {
    return EOF;
}

Describe(ParagraphReader);
BeforeEach(ParagraphReader) {}
AfterEach(ParagraphReader) {}

Ensure(ParagraphReader, gives_null_when_reading_empty_stream) {
    assert_that(read_paragraph(&empty_stream, NULL), is_null);
}

Our simulation is easy here, because our fake stream returns only one value. Things are harder when the function result changes from call to call as a real stream would. Simulating this would mean messing around with static variables and counters that are reset for each test. And of course, we will be writing quite a few stubs. Often a different one for each test. That’s a lot of clutter.

Cgreen can handle this clutter for us by letting us write a single programmable function for all our tests.

4.2. Record and Playback

We can redo our example by creating a stream_stub() function. We can call it anything we want, and since I thought we wanted to have a stubbed stream…​

static int stream_stub(void *stream) {
    return (int)mock(stream);
}

It is hardly longer than our trivial server stub above, and it is just a macro call to generate a return value, but we can reuse this in test after test. Let’s see how.

For our simple example above we just tell it to always return EOF…​

#include <cgreen/cgreen.h>
#include <cgreen/mocks.h>

char *read_paragraph(int (*read)(void *), void *stream);

static int stream_stub(void *stream) {
    return (int)mock(stream);
}

Describe(ParagraphReader);
BeforeEach(ParagraphReader) {}
AfterEach(ParagraphReader) {}

Ensure(ParagraphReader, gives_null_when_reading_empty_stream) {
    always_expect(stream_stub, will_return(EOF));                                 (1)
    assert_that(read_paragraph(&stream_stub, NULL), is_null);
}
1 The always_expect() macro takes the function name as its argument and defines the return value using the call to will_return(). This declares an expectation of calls to the stub, and here we have told our stream_stub() to always return EOF when called.

Let’s see if our production code actually works…​

Running "stream" (1 tests)...
Completed "ParagraphReader": 1 pass, 0 failures, 0 exceptions in 1ms.
Completed "stream": 1 pass, 0 failures, 0 exceptions in 1ms.

So far, so good. On to the next test.

If we want to test a one character line, we have to send the terminating EOF or "\n" as well as the single character. Otherwise our code will loop forever, giving an infinite line of that character.

Here is how we can do this…​

Ensure(ParagraphReader, gives_one_character_line_for_one_character_stream) {
    expect(stream_stub, will_return('a'));
    expect(stream_stub, will_return(EOF));
    char *line = read_paragraph(&stream_stub, NULL);
    assert_that(line, is_equal_to_string("a"));
    free(line);
}

Unlike the always_expect() instruction, expect() sets up an expectation of a single call and specifying will_return() sets the single return value for just that call. It acts like a record and playback model. Successive expectations map out the return sequence that will be given back once the test proper starts.

We’ll add this test to the suite and run it…​

Running "stream" (2 tests)...
stream_tests.c:23: Failure: ParagraphReader -> gives_one_character_line_for_one_character_stream
	Expected [line] to [equal string] ["a"]
		actual value:			[""]
		expected to equal:		["a"]

Completed "ParagraphReader": 1 pass, 1 failure, 0 exceptions in 1ms.
Completed "stream": 1 pass, 1 failure, 0 exceptions in 1ms.

Oops. Our code under test doesn’t work. Already we need a fix…​

char *read_paragraph(int (*read)(void *), void *stream) {
    int buffer_size = 0, length = 0;
    char *buffer = NULL;
    int ch;
    while ((ch = (*read)(stream)) != EOF) {
        if (++length > buffer_size) {
            buffer_size += 100;
            buffer = (char *)realloc(buffer, buffer_size + 1);
        }
        if ((buffer[length - 1] = ch) == '\n') {              (1)
            break;
        }
        buffer[length] = '\0';                                (2)
    }
    return buffer;
}
1 After moving the indexing here…​
2 and here…​

around a bit everything is fine:

Running "stream" (2 tests)...
Completed "ParagraphReader": 2 passes, 0 failures, 0 exceptions in 1ms.
Completed "stream": 2 passes, 0 failures, 0 exceptions in 1ms.

How do the Cgreen stubs work? Each expect() describes one call to the stub and the calls to will_return() build up a static list of return values which are used and returned in order as those calls arrive. The return values are cleared between tests.

The mock() macro captures the parameter names and the func property (the name of the stub function). Cgreen can then use these to look up entries in the return list, and also to generate more helpful messages.

We can crank out our tests quite quickly now…​

Ensure(ParagraphReader, gives_one_word_line_for_one_word_stream) {
    expect(stream_stub, will_return('t'));
    expect(stream_stub, will_return('h'));
    expect(stream_stub, will_return('e'));
    always_expect(stream_stub, will_return(EOF));
    assert_that(read_paragraph(&stream_stub, NULL), is_equal_to_string("the"));
}

I’ve been a bit naughty. As each test runs in its own process, I haven’t bothered to free the pointers to the paragraphs. I’ve just let the operating system do it. Purists may want to add the extra clean up code.

I’ve also used always_expect() for the last instruction. Without it, if the stub receives a call it does not expect, it will throw a test failure. That would be overly restrictive here, as our read_paragraph() function could quite legitimately call the stream after it has run off the end. OK, that would be odd behaviour, but it’s not what we are testing here. If it were, it would be placed in a test of its own. The always_expect() call tells Cgreen to keep going after the first three letters, allowing extra calls.

As we build more and more tests, they start to look like a specification of the wanted behaviour…​

Ensure(ParagraphReader, drops_line_ending_from_word_and_stops) {
    expect(stream_stub, will_return('t'));
    expect(stream_stub, will_return('h'));
    expect(stream_stub, will_return('e'));
    expect(stream_stub, will_return('\n'));
    assert_that(read_paragraph(&stream_stub, NULL), is_equal_to_string("the"));
}

…​and just for luck…​

Ensure(ParagraphReader, gives_empty_line_for_single_line_ending) {
    expect(stream_stub, will_return('\n'));
    assert_that(read_paragraph(&stream_stub, NULL), is_equal_to_string(""));
}

This time we mustn’t use always_expect(). We want to leave the stream where it is, ready for the next call to read_paragraph(). If we call the stream beyond the line ending, we want to fail.

Oops, that was a little too fast. Turns out we are failing anyway…​

Running "stream"" (5 tests)...
stream_tests.c:40: Failure: ParagraphReader -> drops_line_ending_from_word_and_stops
	Expected [read_paragraph(&stream_stub, ((void *)0))] to [equal string] ["the"]
		actual value:			["the
"]
		expected to equal:		["the"]

stream_tests.c:45: Failure: ParagraphReader -> gives_empty_line_for_single_line_ending
	Expected [read_paragraph(&stream_stub, ((void *)0))] to [equal string] [""]
		actual value:			["
"]
		expected to equal:		[""]

Completed "ParagraphReader": 3 passes, 2 failures, 0 exceptions in 2ms.
Completed "stream"": 3 passes, 2 failures, 0 exceptions in 2ms.

Clearly we are passing through the line ending. Another fix later…​

char *read_paragraph(int (*read)(void *), void *stream) {
    int buffer_size = 0, length = 0;
    char *buffer = NULL;
    int ch;
    while ((ch = (*read)(stream)) != EOF) {
        if (++length > buffer_size) {
            buffer_size += 100;
            buffer = (char *)realloc(buffer, buffer_size + 1);
        }
        if ((buffer[length - 1] = ch) == '\n') {
            buffer[--length] = '\0';
            break;
        }
        buffer[length] = '\0';
    }
    return buffer;
}

And we are passing again…​

Running "stream" (5 tests)...
Completed "ParagraphReader": 5 passes, 0 failures, 0 exceptions in 2ms.
Completed "stream": 5 passes, 0 failures, 0 exceptions in 2ms.

There are no limits to the number of stubbed methods within a test, only that two stubs cannot have the same name. The following will cause problems…​

static int stream_stub(void *stream) {
    return (int)mock(stream);
}

Ensure(Streams, bad_test) {
    expect(stream_stub, will_return('a'));
    do_stuff(&stream_stub, &stream_stub);
}

You could program the same stub to return values for the two streams, but that would make a very brittle test. Since we’d be making it heavily dependent on the exact internal behaviour that we are trying to test, or test drive, it will break as soon as we change that implementation. The test will also become very much harder to read and understand. And we really don’t want that.

So, it will be necessary to have two stubs to make this test behave, but that’s not a problem…​

static int first_stream_stub(void *stream) {
    return (int)mock(stream);
}

static int second_stream_stub(void *stream) {
    return (int)mock(stream);
}

Ensure(Streams, good_test) {
    expect(first_stream_stub, will_return('a'));
    expect(second_stream_stub, will_return('a'));
    do_stuff(&first_stream_stub, &second_stream_stub);
}

We now have a way of writing fast, clear tests with no external dependencies. The information flow is still one way though, from stub to the code under test. When our code calls complex procedures, we won’t want to pick apart the effects to infer what happened. That’s too much like detective work. And why should we? We just want to know that we dispatched the correct information down the line.

Things get more interesting when we think of the traffic going the other way, from code to stub. This gets us into the same territory as mock objects.

4.3. Setting Expectations on Mock Functions

To swap the traffic flow, we’ll look at an outgoing example instead. Here is the prewritten production code…​

void by_paragraph(int (*read)(void *), void *in, void (*write)(void *, char *), void *out) {
    while (1) {
        char *line = read_paragraph(read, in);
        if (line == NULL) {
            return;
        }
        (*write)(out, line);
        free(line);
    }
}

This is the start of a formatter utility. Later filters will probably break the paragraphs up into justified text, but right now that is all abstracted behind the void write(void *, char *) interface. Our current interests are: does it loop through the paragraphs, and does it crash?

We could test correct paragraph formation by writing a stub that collects the paragraphs into a struct. We could then pick apart that struct and test each piece with assertions. This approach is extremely clumsy in C. The language is just not suited to building and tearing down complex edifices, never mind navigating them with assertions. We would badly clutter our tests.

Instead we’ll test the output as soon as possible, right in the called function…​

...
void expect_one_letter_paragraph(void *stream, char *paragraph) {
    assert_that(paragraph, is_equal_to_string("a"));
}

Ensure(Formatter, makes_one_letter_paragraph_from_one_character_input) {
    by_paragraph(
            &one_character_stream,
            NULL,
            &expect_one_letter_paragraph,
            NULL);
}
...

By placing the assertions into the mocked function, we keep the tests minimal. The catch with this method is that we are back to writing individual functions for each test. We have the same problem as we had with hand coded stubs.

Again, Cgreen has a way to automate this. Here is the rewritten test…​

static int reader(void *stream) {
    return (int)mock(stream);
}

static void writer(void *stream, char *paragraph) {
    mock(stream, paragraph);
}

Ensure(Formatter, makes_one_letter_paragraph_from_one_character_input) {
    expect(reader, will_return('a'));
    always_expect(reader, will_return(EOF));
    expect(writer, when(paragraph, is_equal_to_string("a")));
    by_paragraph(&reader, NULL, &writer, NULL);
}

Where are the assertions?

Unlike our earlier stub, reader() can now check its parameters. In object oriented circles, an object that checks its parameters as well as simulating behaviour is called a mock object. By analogy reader() is a mock function, or mock callback.

Using the expect macro, we have set up the expectation that writer() will be called just once. That call must have the string "a" for the paragraph parameter. If the actual value of that parameter does not match, the mock function will issue a failure straight to the test suite. This is what saves us writing a lot of assertions.

When specifying the behaviour of mocks there are three parts. First, how often the specified behaviour or expectation will be executed:

Macro                          Description
expect(function, …)            Expected once, in order
always_expect(function, …)     Expect this behaviour from here onwards
never_expect(function)         From this point this mock function must never be called

You can specify constraints and behaviours for each expectation (except for never_expect() naturally). A constraint places restrictions on the parameters (and will tell you if the expected restriction was not met), and a behaviour specifies what the mock should do if the parameter constraints are met.

A parameter constraint is defined using the when(parameter, constraint) macro. It takes two parameters:

Parameter      Description
parameter      The name of the parameter to the mock function
constraint     A constraint placed on that parameter

There is a multitude of constraints available (actually, exactly the same as for the assertions we saw earlier):

Constraint                                                Type
is_equal_to(value)                                        Integers
is_equal_to_hex(value)                                    Integers
is_not_equal_to(value)                                    Integers
is_greater_than(value)                                    Integers
is_less_than(value)                                       Integers
is_equal_to_contents_of(pointer, size_of_contents)        Bytes/Structures
is_not_equal_to_contents_of(pointer, size_of_contents)    Bytes/Structures
is_equal_to_string(value)                                 String
is_not_equal_to_string(value)                             String
contains_string(value)                                    String
does_not_contain_string(value)                            String
begins_with_string(value)                                 String
is_equal_to_double(value)                                 Double
is_not_equal_to_double(value)                             Double
is_less_than_double(value)                                Double
is_greater_than_double(value)                             Double

For the double valued constraints you can set the number of significant digits to consider a match with a call to significant_figures_for_assert_double_are(int figures).
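For example, a minimal sketch (the Circle context and the values are made up, and the assertion uses the double-valued variant assert_that_double()):

Ensure(Circle, approximates_pi_to_three_significant_figures) {
    significant_figures_for_assert_double_are(3);
    assert_that_double(3.14159, is_equal_to_double(3.14));
}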

Then there are two ways to return results:

Macro                                                          Description
will_return(value)                                             Return the value from the mock function (which needs to be declared as returning that type)
will_set_contents_of_parameter(parameter_name, value, size)    Writes the value into the referenced parameter

You can combine these in various ways:

  expect(mocked_file_writer,
        when(data, is_equal_to(42)),
        will_return(EOF));
  expect(mocked_file_reader,
        when(file, is_equal_to_contents_of(&FD, sizeof(FD))),
        when(input, is_equal_to_string("Hello world!")),
        will_set_contents_of_parameter(status, FD_CLOSED, sizeof(bool)));

If multiple when() are specified they all need to be fulfilled. You can of course have only one for each of the parameters of your mock function.

You can also have multiple will_set_contents_of_parameter() in an expectation, one for each reference parameter, but naturally only one will_return().
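As an illustration, an expectation with two reference parameters might look something like this; the mocked_device_init() mock and the DEVICE_READY/NO_ERROR constants are hypothetical, and the style mirrors the example above:

  expect(mocked_device_init,
        when(id, is_equal_to(7)),
        will_set_contents_of_parameter(status, DEVICE_READY, sizeof(int)),
        will_set_contents_of_parameter(error_code, NO_ERROR, sizeof(int)),
        will_return(true));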

It’s about time we actually ran our test…​

Running "formatter" (1 tests)...
Completed "Formatter": 1 pass, 0 failures, 0 exceptions in 1ms.
Completed "formatter": 1 pass, 0 failures, 0 exceptions in 1ms.

Confident that a single character works, we can further specify the behaviour. Firstly an input sequence…​

Ensure(Formatter, makes_one_paragraph_if_no_line_endings) {
    expect(reader, will_return('a'));
    expect(reader, will_return(' '));
    expect(reader, will_return('b'));
    expect(reader, will_return(' '));
    expect(reader, will_return('c'));
    always_expect(reader, will_return(EOF));
    expect(writer, when(paragraph, is_equal_to_string("a b c")));
    by_paragraph(&reader, NULL, &writer, NULL);
}

A more intelligent programmer than me would place all these calls in a loop.

Running "formatter" (2 tests)...
Completed "Formatter": 2 passes, 0 failures, 0 exceptions in 1ms.
Completed "formatter": 2 passes, 0 failures, 0 exceptions in 1ms.

Next, checking an output sequence…​

Ensure(Formatter, generates_separate_paragraphs_for_line_endings) {
    expect(reader, will_return('a'));
    expect(reader, will_return('\n'));
    expect(reader, will_return('b'));
    expect(reader, will_return('\n'));
    expect(reader, will_return('c'));
    always_expect(reader, will_return(EOF));
    expect(writer, when(paragraph, is_equal_to_string("a")));
    expect(writer, when(paragraph, is_equal_to_string("b")));
    expect(writer, when(paragraph, is_equal_to_string("c")));
    by_paragraph(&reader, NULL, &writer, NULL);
}

Again we can see that the expect() calls follow a record and playback model. Each one tests a successive call. This sequence confirms that we get "a", "b" and "c" in order.

Running "formatter" (3 tests)...
Completed "Formatter": 5 passes, 0 failures, 0 exceptions in 2ms.
Completed "formatter": 5 passes, 0 failures, 0 exceptions in 2ms.

So, why the 5 passes? Each expect() with a constraint is actually an assert. It asserts that the specified call is actually made, with the parameters given and in the specified order. In this case all the expected calls were made.

Then we’ll make sure the correct stream pointers are passed to the correct functions. This is a more realistic parameter check…​

Ensure(Formatter, pairs_the_functions_with_the_resources) {
    expect(reader, when(stream, is_equal_to(1)), will_return('a'));
    always_expect(reader, when(stream, is_equal_to(1)), will_return(EOF));
    expect(writer, when(stream, is_equal_to(2)));
    by_paragraph(&reader, (void *)1, &writer, (void *)2);
}
Running "formatter" (4 tests)...
Completed "Formatter": 9 passes, 0 failures, 0 exceptions in 2ms.
Completed "formatter": 9 passes, 0 failures, 0 exceptions in 2ms.

And finally we’ll specify that the writer is not called if there is no paragraph.

Ensure(Formatter, ignores_empty_paragraphs) {
    expect(reader, will_return('\n'));
    always_expect(reader, will_return(EOF));
    never_expect(writer);
    by_paragraph(&reader, NULL, &writer, NULL);
}

This last test is our undoing…​

Running "formatter" (5 tests)...
formatter_tests.c:59: Failure: Formatter -> ignores_empty_paragraphs
	Mocked function [writer] has an expectation that it will never be called, but it was

Completed "Formatter": 9 passes, 1 failure, 0 exceptions in 6ms.
Completed "formatter": 9 passes, 1 failure, 0 exceptions in 6ms.

Obviously blank lines are still being dispatched to the writer(). Once this is pointed out, the fix is obvious…​

void by_paragraph(int (*read)(void *), void *in, void (*write)(void *, char *), void *out) {
    while (1) {
        char *line = read_paragraph(read, in);
        if ((line == NULL) || (strlen(line) == 0)) {
            return;
        }
        (*write)(out, line);
        free(line);
    }
}

Tests with never_expect() can be very effective at uncovering subtle bugs.

Running "formatter" (5 tests)...
Completed "Formatter": 9 passes, 0 failures, 0 exceptions in 3ms.
Completed "formatter": 9 passes, 0 failures, 0 exceptions in 3ms.

All done.

4.4. Mocks Are…​

Using mocks is a very handy way to isolate a unit and to catch and control calls to external units. Depending on your style of coding, two schools of thought have emerged. And of course Cgreen supports both!

4.4.1. Strict or Loose Mocks

The two schools think a bit differently about what mock expectations mean. Does it mean that all external calls must be declared and expected? What happens if a call is made to a mock that wasn’t expected? And vice versa, if an expected call was not made?

Actually, this is not only a matter of which school you belong to; you might also want to switch from one approach to the other between tests. So Cgreen allows for that too.

By default Cgreen mocks are 'strict', which means that a call to a non-expected mock will be considered a failure. So will an expected call that was not fulfilled. You might consider this a way to define a unit through all its exact behaviours towards its neighbours.

On the other hand, 'loose' mocks are, well, looser. They allow unfulfilled expectations and try to handle unexpected calls in a reasonable way.

You can use both within the same suite of tests using the calls cgreen_mocks_are(strict_mocks); and cgreen_mocks_are(loose_mocks); respectively. Typically you would place that call at the beginning of a test, or in a setup or BeforeEach() if it applies to all tests in a suite.
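For example, to run every test in a context with loose mocks you could place the call in the BeforeEach() (a minimal sketch, assuming a hypothetical Parser context):

Describe(Parser);

BeforeEach(Parser) {
    cgreen_mocks_are(loose_mocks);    /* applies to all tests in this context */
}

AfterEach(Parser) {}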

4.4.2. Learning Mocks

Working with legacy code and trying to apply TDD, BDD or even simply add some unit tests is not easy. You’re working with unknown code that does unknown things with unknown counterparts.

So the first step would be to isolate the unit. We won’t go into details on how to do that here, but basically you would replace the interface to other units with mocks. This is somewhat tedious manual labour, but it will result in an isolated unit where you can start applying your unit tests.

Once you have your unit isolated in a harness of mocks, we need to figure out which calls it makes to other units, now replaced by mocks, in the specific case we are trying to test.

This might be complicated, so Cgreen makes that a bit simpler. There is a third 'mode' of the Cgreen mocks, the learning mocks.

If you temporarily add the call cgreen_mocks_are(learning_mocks); at the beginning of your unit test, the mocks will record all calls and present a list of those calls in order, including the actual parameter values, on the standard output.

So let’s look at the following example from the Cgreen unit tests. It’s a bit contorted since the test actually calls the mocked functions directly, but I believe it will serve as an example.

static int integer_out() {
    return (int)mock();
}

static char *string_out(int p1) {
    return (char *)mock(p1);
}

Ensure(LearningMocks, emit_pastable_code) {
    cgreen_mocks_are(learning_mocks);
    string_out(1);
    string_out(2);
    integer_out();
    integer_out();
    string_out(3);
    integer_out();
}

We can see the call to cgreen_mocks_are() starting the test and setting the mocks into learning mode.

If we run this, just as we usually run tests, the following will show up in our terminal:

Running "learning_mocks" (1 tests)...
LearningMocks -> emit_pastable_code : Learned mocks are
        expect(string_out, when(p1, is_equal_to(1)));
        expect(string_out, when(p1, is_equal_to(2)));
        expect(integer_out);
        expect(integer_out);
        expect(string_out, when(p1, is_equal_to(3)));
        expect(integer_out);
Completed "LearningMocks": 0 passes, 0 failures, 0 exceptions.
Completed "learning_mocks": 0 passes, 0 failures, 0 exceptions.

If this was for real we could just copy this and paste it in place of the call to cgreen_mocks_are() and we have all the expectations done.

You still need to implement the mock functions, of course, i.e. write functions that call mock() and replace the real functions.

5. Context, Subject Under Test & Suites

As mentioned earlier, Cgreen promotes the behaviour driven style of test driving code. The thinking behind BDD is that we don’t really want to test anything; if we could just specify the behaviour of our code and ensure that it actually behaves that way, we would be fine.

This might seem like an age old dream, but when you think about it, there is actually very little difference in the mechanics from vanilla TDD. First we write how we want it, then implement it. But the small change in wording, from 'test' to 'behaviour', from 'test that' to 'ensure that', makes a huge difference in thinking, and also very often in the quality of the resulting code.

5.1. The SUT - Subject Under Test

Since BDD talks about behaviour, there has to be something that we can talk about as having the wanted behaviour. This is usually called the SUT, the Subject Under Test. Cgreen in BDD-ish mode requires that you define a name for it.

#include <cgreen/cgreen.h>
Describe(SUT);

Cgreen supports C++ and there you naturally have the objects and also the Class Under Test. But in plain C you will have to think about what is actually the "class" under test. E.g. in sort_test.c you might see

#include <cgreen/cgreen.h>
Describe(Sorter);

Ensure(Sorter, can_sort_an_empty_list) {
  assert_that(sorter(NULL), is_null);
}

In this example you can clearly see what difference the BDD-ish style makes when it comes to naming. Convention, and natural language, dictates that typical names for what TDD would call tests now start with 'can' or 'finds' or other verbs, which makes the specification so much easier to read.

Yes, I wrote 'specification'. Because that is how BDD views what TDD basically calls a test suite. The suite specifies the behaviour of a 'class'. (That’s why some BDD frameworks draw on 'spec', like RSpec.)

5.2. Contexts and Before and After

The complete specification of the behaviour of a SUT might become long and require various forms of setup. When using TDD style you would probably break this up into multiple suites having their own setup() and teardown().

With BDD-ish style we could consider a suite as a behaviour specification for our SUT 'in a particular context'. E.g.

#include <cgreen/cgreen.h>

Describe(shopping_basket_for_returning_customer);

Customer *customer;

BeforeEach(shopping_basket_for_returning_customer){
  customer = create_test_customer();
  login(customer);
}

AfterEach(shopping_basket_for_returning_customer) {
  logout(customer);
  destroy_customer(customer);
}

Ensure(shopping_basket_for_returning_customer, allows_use_of_discounts) {
  ...
}

The 'context' would then be shopping_basket_for_returning_customer, with the SUT being the shopping basket 'class'.

So 'context', 'subject under test' and 'suite' are mostly interchangeable concepts in Cgreen lingo. It’s a named group of 'tests' that share the same BeforeEach() and AfterEach() and live in the same source file.

6. Changing Style

If you already have some TDD style Cgreen test suites, it is quite easy to change them over to BDD-ish style. Here are the steps required:

  • Add Describe(SUT);

  • Turn your current setup function into a BeforeEach() definition by changing its signature to match the macro, or simply call the existing setup function from the BeforeEach(). If you don’t have any setup function you still need to define an empty BeforeEach().

  • Ditto for AfterEach().

  • Add the SUT to each Ensure() by inserting it as a first parameter.

  • Change the call to add the tests to add_test_with_context() by adding the name of the SUT as the second parameter.

  • Optionally remove the calls to set_setup() and set_teardown().

Done.
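As an illustration, here is a sketch of what a converted file might look like; the Counter subject, the counter_value() function and the old setup()/teardown() pair are all hypothetical:

#include <cgreen/cgreen.h>

Describe(Counter);                        /* name the SUT */

BeforeEach(Counter) { setup(); }          /* reuse the old setup() function */
AfterEach(Counter) { teardown(); }        /* reuse the old teardown() function */

Ensure(Counter, starts_at_zero) {         /* SUT inserted as the first parameter */
    assert_that(counter_value(), is_equal_to(0));
}

TestSuite *counter_tests(void) {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Counter, starts_at_zero);
    return suite;
}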

If you want to continue to run the tests using a hand-coded runner, you can do that by keeping the setup and teardown functions and their corresponding set_-calls.

It’s nice that this is a simple process, because you can change over from TDD style to BDD-ish style in small steps. You can convert one source file at a time, by just following the recipe above. Everything will still work as before but your tests and code will likely improve.

And once you have changed style you can fully benefit from the automatic discovery of tests as described in Automatic Test Discovery.

7. Changing Cgreen Reporting

7.1. Replacing the Reporter

In every test suite so far, we have run the tests with this line…​

return run_test_suite(our_tests(), create_text_reporter());

We can change the reporting mechanism just by changing this method.

Here is the code for create_text_reporter()…​

TestReporter *create_text_reporter(void) {
    TestReporter *reporter = create_reporter();
    if (reporter == NULL) {
        return NULL;
    }
    reporter->start_suite = &text_reporter_start_suite;
    reporter->start_test = &text_reporter_start_test;
    reporter->show_fail = &show_fail;
    reporter->show_incomplete = &show_incomplete;
    reporter->finish_test = &text_reporter_finish_test;
    reporter->finish_suite = &text_reporter_finish;
    return reporter;
}

The TestReporter structure contains function pointers that control the reporting. When created by the create_reporter() constructor, these pointers are set up with functions that display nothing. The text reporter code replaces these with something more dramatic, and then returns a pointer to this new object. Thus the create_text_reporter() function effectively extends the object from create_reporter().

The text reporter only outputs content at the start of the first test, at the end of the test run to display the results, when a failure occurs, and when a test fails to complete. A quick look at the text_reporter.c file in Cgreen reveals that the overrides just output a message and chain to the versions in reporter.h.

To change the reporting mechanism ourselves, we just have to know a little about the methods in the TestReporter structure.

7.2. The TestReporter Structure

The Cgreen TestReporter is a pseudo class that looks something like…​

typedef struct _TestReporter TestReporter;
struct _TestReporter {
    void (*destroy)(TestReporter *reporter);
    void (*start_suite)(TestReporter *reporter, const char *name, const int count);
    void (*start_test)(TestReporter *reporter, const char *name);
    void (*show_pass)(TestReporter *reporter, const char *file, int line,
                                   const char *message, va_list arguments);
    void (*show_fail)(TestReporter *reporter, const char *file, int line,
                                   const char *message, va_list arguments);
    void (*show_incomplete)(TestReporter *reporter, const char *file, int line,
                                   const char *message, va_list arguments);
    void (*assert_true)(TestReporter *reporter, const char *file, int line, int result,
                                   const char * message, ...);
    void (*finish_test)(TestReporter *reporter, const char *file, int line);
    void (*finish_suite)(TestReporter *reporter, const char *file, int line);
    int passes;
    int failures;
    int exceptions;
    void *breadcrumb;
    int ipc;
    void *memo;
    void *options;
};

The first block are the methods that can be overridden:

void (*destroy)(TestReporter *reporter)

This is the destructor for the default structure. If this is overridden, then the overriding function must call destroy_reporter(TestReporter *reporter) to finish the clean up.

void (*start_suite)(TestReporter *reporter, const char *name, const int count)

This is the first of the callbacks. At the start of each test suite Cgreen will call this method on the reporter with the name of the suite being entered and the number of tests in that suite. The default version keeps track of the stack of tests in the breadcrumb pointer of TestReporter. If you make use of the breadcrumb functions, as the defaults do, then you will need to call reporter_start_suite() to keep the book keeping in sync.

void (*start_test)(TestReporter *reporter, const char *name)

At the start of each test Cgreen will call this method on the reporter with the name of the test being entered. Again, the default version keeps track of the stack of tests in the breadcrumb pointer of TestReporter. If you make use of the breadcrumb functions, as the defaults do, then you will need to call reporter_start_test() to keep the book keeping in sync.

void (*show_pass)(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments)

This method is initially empty, as most reporters see little point in reporting passing tests (but you might do), so there is no need to chain the call to any other function. Besides the pointer to the reporter structure, Cgreen also passes the file name of the test, the line number of the failed assertion, the message to show and any additional parameters to substitute into the message. The message comes in as a printf() style format string, and so the variable argument list should match the substitutions.

void (*show_fail)(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments)

The partner of show_pass(), and the one you’ll likely overload first.

void (*show_incomplete)(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments)

When a test fails to complete, this is the handler that is called. As it’s an unexpected outcome, no message is received, but we do get the name of the test. The text reporter combines this with the breadcrumb to produce the exception report.

void (*assert_true)(TestReporter *reporter, const char *file, int line, int result, const char * message, …​)

This is not normally overridden and is really internal. It is the raw entry point for the test messages from the test suite. By default it dispatches the call to either show_pass() or show_fail().

void (*finish_test)(TestReporter *reporter, const char *file, int line)

The counterpart to the (*start_test)() call. It is called on leaving the test, and needs to be chained to reporter_finish_test() to keep the breadcrumb book keeping in sync.

void (*finish_suite)(TestReporter *reporter, const char *file, int line)

The counterpart to the (*start_suite)() call, called on leaving the test suite; similar to (*finish_test)() if your reporter needs a handle on that event too. The default text reporter chains both this and (*finish_test)() to the same function, where it figures out if it is the end of the top level suite. If so, it prints the familiar summary of passes and fails.

The second block is simply resources and book keeping that the reporter can use to liven up the messages…​

passes

The number of passes so far.

failures

The number of failures generated so far.

exceptions

The number of test functions that have failed to complete so far.

breadcrumb

This is a pointer to the list of test names in the stack.

The breadcrumb pointer is different and needs a little explanation. Basically it is a stack, analogous to the breadcrumb trail you see on websites. Every time a start() handler is invoked, the name is pushed onto this stack. When a finish() message handler is invoked, a name is popped off.

There are a bunch of utility functions in cgreen/breadcrumb.h that can read the state of this stack. Most useful are get_current_from_breadcrumb() which takes the breadcrumb pointer and returns the current test name, and get_breadcrumb_depth() which gives the current depth of the stack. A depth of zero means that the test run has finished.

If you need to traverse all the names in the breadcrumb, then you can call walk_breadcrumb(). Here is the full signature…​

void walk_breadcrumb(Breadcrumb *breadcrumb, void (*walker)(const char *, void *), void *memo);

The void (*walker)(const char *, void *) is a callback that will be passed the name of the test suite for each level of nesting.

It is also passed the memo pointer that was passed to the walk_breadcrumb() call. You can use this pointer for anything you want, as all Cgreen does is pass it from call to call. This is so aggregate information can be kept track of whilst still being reentrant.
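As a sketch, a walker that joins the names on the breadcrumb into a dotted path could look something like this (add_level() and the path buffer are made up for illustration; you also need string.h for strcat()):

static void add_level(const char *name, void *memo) {
    char *path = (char *)memo;
    if (path[0] != '\0') {
        strcat(path, ".");
    }
    strcat(path, name);
}

...
    char path[256] = "";
    walk_breadcrumb(reporter->breadcrumb, &add_level, path);
    /* path now contains the breadcrumb names joined by dots */
...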

The last parts of the TestReporter structure are…​

ipc

This is an internal structure for handling the messaging between reporter and test suite. You shouldn’t touch this.

memo

By contrast, this is a spare pointer for your own expansion.

options

A pointer to a reporter specific structure that can be used to set options. E.g. the text reporter defines the structure TextReporterOptions which can be used by calling code to define the use of colors when printing passes and failures. You set it by calling set_reporter_options() with a pointer to such a structure.

7.3. An Example XML Reporter

Let’s make things real with an example. Suppose we want to send the output from Cgreen in XML format, say for storing in a repository or for sending across the network.

The cgreen-runner already has an XML-reporter that you can use. See Cgreen Runner Options.

Suppose also that we have come up with the following format…​

<?xml?>
<suite name="Top Level">
    <suite name="A Group">
        <test name="a_test_that_passes">
        </test>
        <test name="a_test_that_fails">
            <fail>
                <message>A failure</message>
                <location file="test_as_xml.c" line="8"/>
            </fail>
        </test>
    </suite>
</suite>

In other words, a simple nesting of tests with only failures encoded. The absence of a "fail" XML node means a pass.

Here is a test script, test_as_xml.c that we can use to construct the above output…​

#include <cgreen/cgreen.h>

Describe(XML_reporter);
BeforeEach(XML_reporter) {}
AfterEach(XML_reporter) {}

Ensure(XML_reporter, reports_a_test_that_passes) {
    assert_that(1 == 1);
}

Ensure(XML_reporter, reports_a_test_that_fails) {
    fail_test("A failure");
}

TestSuite *create_test_group() {
    TestSuite *suite = create_named_test_suite("A Group");
    add_test_with_context(suite, XML_reporter, reports_a_test_that_passes);
    add_test_with_context(suite, XML_reporter, reports_a_test_that_fails);
    return suite;
}

int main(int argc, char **argv) {
    TestSuite *suite = create_named_test_suite("Top Level");
    add_suite(suite, create_test_group());
    return run_test_suite(suite, create_text_reporter());
}

We can’t use the auto-discovering cgreen-runner here, since we need to ensure that the nested suites are reported as a nested XML structure. And we’re not actually writing real tests, just something that we can use to drive our new reporter.

The text reporter is used just to confirm that everything is working. So far it is.

Running "Top Level" (2 tests)...
test_as_xml.c:12: Failure: A Group -> reports_a_test_that_fails
	A failure

Completed "A Group": 1 pass, 1 failure, 0 exceptions in 1ms.
Completed "Top Level": 1 pass, 1 failure, 0 exceptions in 1ms.

Our first move is to switch the reporter from text, to our not yet written XML version…​

#include "xml_reporter.h"
...

int main(int argc, char **argv) {
    TestSuite *suite = create_named_test_suite("Top Level");
    add_suite(suite, create_test_group());
    return run_test_suite(suite, create_xml_reporter());
}

We’ll start the ball rolling with the xml_reporter.h header file…​

#ifndef _XML_REPORTER_HEADER_
#define _XML_REPORTER_HEADER_

#include <cgreen/reporter.h>

TestReporter *create_xml_reporter();

#endif

…​and the simplest possible reporter in xml_reporter.c.

#include <cgreen/reporter.h>

#include "xml_reporter.h"

TestReporter *create_xml_reporter() {
    TestReporter *reporter = create_reporter();
    return reporter;
}

One that outputs nothing.

$ gcc -c test_as_xml.c
$ gcc -c xml_reporter.c
$ gcc xml_reporter.o test_as_xml.o -lcgreen -o xml
$ ./xml

Yep, nothing.

Let’s add the outer XML tags first, so that we can see Cgreen navigating the test suite…​

#include <cgreen/reporter.h>
#include <cgreen/breadcrumb.h>

#include <stdio.h>
#include "xml_reporter.h"


static void xml_reporter_start_suite(TestReporter *reporter, const char *name, int count) {
    printf("<suite name=\"%s\">\n", name);
    reporter_start_suite(reporter, name, count);
}

static void xml_reporter_start_test(TestReporter *reporter, const char *name) {
    printf("<test name=\"%s\">\n", name);
    reporter_start_test(reporter, name);
}

static void xml_reporter_finish_test(TestReporter *reporter, const char *filename, int line, const char *message, uint32_t duration_in_milliseconds) {
    reporter_finish_test(reporter, filename, line, message, duration_in_milliseconds);
    printf("</test>\n");
}

static void xml_reporter_finish_suite(TestReporter *reporter, const char *filename, int line, uint32_t duration_in_milliseconds) {
    reporter_finish_suite(reporter, filename, line, duration_in_milliseconds);
    printf("</suite>\n");
}

TestReporter *create_xml_reporter() {
    TestReporter *reporter = create_reporter();
    reporter->start_suite = &xml_reporter_start_suite;
    reporter->start_test = &xml_reporter_start_test;
    reporter->finish_test = &xml_reporter_finish_test;
    reporter->finish_suite = &xml_reporter_finish_suite;
    return reporter;
}

Although chaining to the underlying reporter_start_*() and reporter_finish_*() functions is optional, I want to make use of some of the facilities later.

Our output meanwhile, is making its first tentative steps…​

<suite name="Top Level">
<suite name="A Group">
<test name="reports_a_test_that_passes">
</test>
<test name="reports_a_test_that_fails">
</test>
</suite>
</suite>

We don’t require an XML node for passing tests, so the show_fail() function is all we need…​

...

static void xml_show_fail(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments) {
    printf("<fail>\n");
    printf("\t<message>");
    vprintf(message, arguments);
    printf("</message>\n");
    printf("\t<location file=\"%s\" line=\"%d\"/>\n", file, line);
    printf("</fail>\n");
...

TestReporter *create_xml_reporter() {
    TestReporter *reporter = create_reporter();
    reporter->start_suite = &xml_reporter_start_suite;
    reporter->start_test = &xml_reporter_start_test;
    reporter->show_fail = &xml_show_fail;
    reporter->finish_test = &xml_reporter_finish_test;
    reporter->finish_suite = &xml_reporter_finish_suite;
    return reporter;
}

We have to use vprintf() to handle the variable argument list passed to us. This will probably mean including the stdarg.h header as well as stdio.h.

This gets us pretty close to what we want…​

<suite name="Top Level">
<suite name="A Group">
<test name="reports_a_test_that_passes">
</test>
<test name="reports_a_test_that_fails">
<fail>
	<message>A failure</message>
	<location file="test_as_xml.c" line="15"/>
</fail>
</test>
</suite>
</suite>

For completeness we should add a tag for a test that doesn’t complete. We’ll output this as a failure, although we don’t bother with the location this time…​

static void xml_show_incomplete(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments) {
    printf("<fail>\n");
    printf("\t<message>Failed to complete</message>\n");
    printf("</fail>\n");
}
...
TestReporter *create_xml_reporter() {
    TestReporter *reporter = create_reporter();
    reporter->start_suite = &xml_reporter_start_suite;
    reporter->start_test = &xml_reporter_start_test;
    reporter->show_fail = &xml_show_fail;
    reporter->show_incomplete = &xml_show_incomplete;
    reporter->finish_test = &xml_reporter_finish_test;
    reporter->finish_suite = &xml_reporter_finish_suite;
    return reporter;
}

All that’s left then is the XML declaration and the thorny issue of indenting. Although the indenting is not strictly necessary, it would make the output a lot more readable.

Given that the test depth is kept track of for us with the breadcrumb object in the TestReporter structure, indentation will actually be quite simple. We’ll add an indent() function that outputs the correct number of tabs…​

static void indent(TestReporter *reporter) {
    int depth = get_breadcrumb_depth((CgreenBreadcrumb *)reporter->breadcrumb);
    while (depth-- > 0) {
        printf("\t");
    }
}

The get_breadcrumb_depth() function just gives the current test depth as recorded in the reporter’s breadcrumb (from cgreen/breadcrumb.h). As that is just the number of tabs to output, the implementation is trivial.

We can then use this function in the rest of the code. Here is the complete listing…​

#include <cgreen/reporter.h>
#include <cgreen/breadcrumb.h>

#include <stdio.h>
#include "xml_reporter.h"

static void indent(TestReporter *reporter) {
    int depth = get_breadcrumb_depth((CgreenBreadcrumb *)reporter->breadcrumb);
    while (depth-- > 0) {
        printf("\t");
    }
}

static void xml_reporter_start_suite(TestReporter *reporter, const char *name, int count) {
    if (get_breadcrumb_depth((CgreenBreadcrumb *)reporter->breadcrumb) == 0) {
        printf("<?xml?>\n");
    }
    indent(reporter);
    printf("<suite name=\"%s\">\n", name);
    reporter_start_suite(reporter, name, count);
}

static void xml_reporter_start_test(TestReporter *reporter, const char *name) {
    indent(reporter);
    printf("<test name=\"%s\">\n", name);
    reporter_start_test(reporter, name);
}

static void xml_show_fail(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments) {
    indent(reporter);
    printf("<fail>\n");
    indent(reporter);
    printf("\t<message>");
    vprintf(message, arguments);
    printf("</message>\n");
    indent(reporter);
    printf("\t<location file=\"%s\" line=\"%d\"/>\n", file, line);
    indent(reporter);
    printf("</fail>\n");
}

static void xml_show_incomplete(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments) {
    indent(reporter);
    printf("<fail>\n");
    indent(reporter);
    printf("\t<message>Failed to complete</message>\n");
    indent(reporter);
    printf("</fail>\n");
}


static void xml_reporter_finish_test(TestReporter *reporter, const char *filename, int line, const char *message, uint32_t duration_in_milliseconds) {
    reporter_finish_test(reporter, filename, line, message, duration_in_milliseconds);
    indent(reporter);
    printf("</test>\n");
}

static void xml_reporter_finish_suite(TestReporter *reporter, const char *filename, int line, uint32_t duration_in_milliseconds) {
    reporter_finish_suite(reporter, filename, line, duration_in_milliseconds);
    indent(reporter);
    printf("</suite>\n");
}

TestReporter *create_xml_reporter() {
    TestReporter *reporter = create_reporter();
    reporter->start_suite = &xml_reporter_start_suite;
    reporter->start_test = &xml_reporter_start_test;
    reporter->show_fail = &xml_show_fail;
    reporter->show_incomplete = &xml_show_incomplete;
    reporter->finish_test = &xml_reporter_finish_test;
    reporter->finish_suite = &xml_reporter_finish_suite;
    return reporter;
}

And finally the desired output…​

<?xml?>
<suite name="Top Level">
	<suite name="A Group">
		<test name="reports_a_test_that_passes">
		</test>
		<test name="reports_a_test_that_fails">
			<fail>
				<message>A failure</message>
				<location file="test_as_xml.c" line="15"/>
			</fail>
		</test>
	</suite>
</suite>

Job done.

Possible other reporter customizations include reporters that write to syslog, talk to IDE plug-ins, paint pretty printed documents or just return a boolean for monitoring purposes.

8. Hints and Tips

This chapter is in its infancy. It will contain tips for situations that you need some help with.

8.1. Compiler Error Messages

Sometimes you can get cryptic and strange error messages from the compiler. Since Cgreen uses some C/C++ macro magic this can happen, and the error messages might not be straightforward to interpret.

Compiler error message                          Probable cause…
"contextFor<X>" is undeclared here              You forgot the BeforeEach() function
undefined reference to 'AfterEach_For_<X>'      You forgot the AfterEach() function
CgreenSpec<X><Y>__ is undeclared                You forgot to specify the test subject/context in the Ensure of a BDD style test

8.2. Signed, Unsigned, Hex and Byte

Cgreen attempts to handle primitive type comparisons with a single constraint, is_equal_to(). This means that it must store the actual and expected values in a form that will accommodate all possible values that primitive types might take, typically an intptr_t.

This might sometimes cause unexpected comparisons since all actual values will be cast to match intptr_t, which is a signed value. E.g.

Ensure(Char, can_compare_byte) {
  char chars[4] = {0xaa, 0xaa, 0xaa, 0};
  assert_that(chars[0], is_equal_to(0xaa));
}

On a system which considers char to be signed this will cause the following Cgreen assertion error:

char_tests.c:11: Failure: Char -> can_compare_byte
        Expected [chars[0]] to [equal] [0xaa]
                actual value:                   [-86]
                expected value:                 [170]

This is caused by the C conversion rules forcing an implicit sign-extension of the signed char when it is converted to intptr_t. This might not be what you expected. The correct solution, by any standard, is to cast the actual value to unsigned char, which will then be interpreted correctly. And the test passes.

Casting to unsigned will not always suffice, since that is interpreted as unsigned int, which still causes a sign-extension from the signed char and might or might not work depending on the size of int on your machine.
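So the passing version of the test above looks like this:

Ensure(Char, can_compare_byte) {
  char chars[4] = {0xaa, 0xaa, 0xaa, 0};
  assert_that((unsigned char)chars[0], is_equal_to(0xaa));
}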

In order to reveal what really happens you might want to see the actual and expected values in hex. This can easily be done with is_equal_to_hex().

Ensure(Char, can_compare_byte) {
  char chars[4] = {0xaa, 0xaa, 0xaa, 0};
  assert_that(chars[0], is_equal_to_hex(0xaa));
}

This might make the mistake easier to spot:

char_tests.c:11: Failure: Char -> can_compare_byte
        Expected [chars[0]] to [equal] [0xaa]
        actual value:                   [0xfffffffffffffaa]
        expected value:                 [0xaa]

8.3. Cgreen and Coverage

Cgreen is compatible with coverage tools, in particular gcov/lcov. So generating coverage data for your application should be straight forward.

This is what you need to do (using gcc or clang):

  • compile with -ftest-coverage and -fprofile-arcs

  • run tests

  • lcov --directory . --capture --output-file coverage.info

  • genhtml -o coverage coverage.info

Your coverage data will be available in coverage/index.html.
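Put together, a minimal session could look something like this (a sketch that assumes your code is in my_code.c and your tests, including a main() runner, are in my_tests.c; adapt it to your own build):

$ gcc -ftest-coverage -fprofile-arcs -c my_code.c my_tests.c
$ gcc -fprofile-arcs my_code.o my_tests.o -lcgreen -o my_tests
$ ./my_tests
$ lcov --directory . --capture --output-file coverage.info
$ genhtml -o coverage coverage.info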

8.4. Garbled Output

If the output from your Cgreen based tests appears garbled or duplicated, this can be caused by the way Cgreen terminates its test-running child process. In many unix-like environments the termination of a child process should be done with _exit(). However, this interferes severely with the ability to collect coverage data. As this is important to many of us, Cgreen instead terminates its child process with the much cruder exit() (note: no underscore).

Under rare circumstances this might have the unwanted effect of output becoming garbled and/or duplicated.

If this happens you can change that behaviour using an environment variable CGREEN_CHILD_EXIT_WITH__EXIT (note: two underscores). If set, Cgreen will terminate its test-running child process with the more POSIX-compliant _exit(). But as mentioned before, this is, at least at this point in time, incompatible with collecting coverage data.
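For example (any value will do, since Cgreen only checks whether the variable is set):

$ CGREEN_CHILD_EXIT_WITH__EXIT=1 cgreen-runner first_tests.so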

So, it’s coverage or POSIX-correct child exits and guaranteed output consistency. You can’t have both…​

Appendix A: License

Copyright (c) 2006-2016, Cgreen Development Team and contributors
(https://github.com/cgreen-devs/cgreen/graphs/contributors)

Permission to use, copy, modify, and/or distribute this software and its documentation for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies, regardless of form, including printed and compiled.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHORS DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.