1. Cgreen Quickstart Guide
1.1. What is Cgreen?
Cgreen is a unit testing framework for the C and C++ software developer, a test automation and software quality assurance tool for programmers and development teams. The tool is completely open source, published under the ISC (OpenBSD) license.
Unit testing is a development practice popularised by the agile development community. It is characterised by writing many small tests alongside the normal code. Often the tests are written before the code they are testing, in a tight test-code-refactor loop. Done this way, the practice is known as Test Driven Development. Cgreen was designed specifically to support this style of development.
Unit tests are written in the same language as the code, in our case C or C++. This avoids the mental overhead of constantly switching language, and also allows you to use any application code in your tests.
Here are some of its features:
- Fluent API resulting in very readable tests
- Expressive and clear output using the default reporter
- Fully functional mocks: strict, loose and learning
- Mocks with side effects
- Each test runs in its own process for test suite robustness
- Automatic discovery and running of tests using dynamic library inspection
- Extensive and expressive constraints for many datatypes
- Custom constraints can be constructed by the user
- BDD-flavoured test declarations with Before and After declarations
- Extensible reporting mechanism
- Fully composable test suites
- A single test can be run in a single process for easier debugging
Cgreen also supports the classic xUnit-style assertions for easy porting from other frameworks.
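For readers porting from an xUnit-style framework, the classic assertions look roughly like this (a sketch; check the reference for the exact set of legacy macros available in your version):

assert_true(length == 4);
assert_equal(count, 4);
assert_string_equal(name, "Fred");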
Cgreen was initially developed to support C programming, but there is also support for C++. It was initially a spinoff from a research project at Wordtracker and created by Marcus Baker. Significant additions by Matt Hargett and continuous nurturing by Thomas Nilefalk have made Cgreen what it is today.
1.2. Cgreen - Vanilla or Chocolate?
Test driven development (TDD) really caught on when the JUnit framework for Java spread to other languages, giving us a family of xUnit tools. Cgreen was born in this wave and has many similarities to the xUnit family.
But TDD evolved over time and modern thinking and practice is more along the lines of BDD, an acronym for Behaviour Driven Development, made popular by people like Dan North and frameworks like JBehave, RSpec, Cucumber and Jasmine.
Cgreen follows this trend and has evolved to embrace a BDD-flavoured style of testing. Although the fundamental mechanisms in TDD and 'technical' BDD are much the same, the shift in focus by changing wording from 'tests' to 'behaviour specifications' is very significant.
This document will present Cgreen using the more modern, BDD-inspired style. In a later section you can have a peek at the classic xUnit-family TDD API, but you should consider that style outdated.
1.3. Installing Cgreen
There are two ways to install Cgreen in your system.
1.3.1. Installing a package
The first way is to use packages provided by the Cgreen Team and porters for the various operating systems. If your system uses a package manager ('apt', 'yum', 'brew' and so on) there might be a prebuilt package that you can just install using your system's package manager.
If no Cgreen package is distributed for your system you can download a package from the Cgreen GitHub project. Install it using the normal procedures for your system.
At this point there are pre-built packages available for quite a few environments. They are not all using the latest version, though. If you need that, you can still build from source.
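For example, on a Debian-based system the installation might look something like this (the package names are an assumption and vary between distributions):

$ sudo apt install libcgreen1 libcgreen1-dev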
1.3.2. Installing from source
A second way is available for developers and advanced users. Basically this consists of fetching the sources of the project from GitHub (just click on "Download ZIP") and then compiling them. To do this you need the CMake build system.
Once you have the CMake tool installed, the steps are:
$ unzip cgreen-master.zip
$ cd cgreen-master
$ make
$ make test
$ make install
The initial make command will configure the build process and create a separate build directory before going there and building using CMake. This is called an 'out of source build'. It compiles Cgreen from outside the sources directory. This helps the overall file organization and enables multi-target builds from the same sources by leaving the complete source tree untouched.
Experienced users may tweak the build configuration by going to the build subdirectory and running ccmake .. there.
The Makefile is just there for convenience; it creates the build directory and invokes CMake there, so that you don’t have to. This means that experienced CMake users can just do as they normally do with a CMake-based project instead of invoking make.
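For reference, a conventional out-of-source CMake build would look roughly like this (the install step and prefix are assumptions you may want to adjust):

$ mkdir build
$ cd build
$ cmake ..
$ make
$ make test
$ sudo make install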
The build process will create a library (on Unix called libcgreen.so) which can be used in conjunction with the cgreen.h header file to compile and link your test code. The created library is installed in the system directories, by default in /usr/local/lib/.
1.3.3. Your First Test
We will start demonstrating the use of Cgreen by writing some tests for Cgreen itself to confirm that everything is working as it should.
Let’s start with a simple test module with no tests, called first_test.c…
#include <cgreen/cgreen.h>
Describe(Cgreen);
BeforeEach(Cgreen) {}
AfterEach(Cgreen) {}
int main(int argc, char **argv) {
TestSuite *suite = create_test_suite();
return run_test_suite(suite, create_text_reporter());
}
This is very unexciting.
It just creates an empty test suite and runs it.
It’s usually easier to proceed in small steps, and this is the smallest one I could think of.
The only complication is the cgreen.h header file and the mysterious looking "declarations" at the beginning of the file.
The BDD-flavoured Cgreen notation calls for a System Under Test (SUT), or a 'context'.
The declarations give a context to the tests and also make it more natural to talk about which module or class, the system under test, is actually responsible for the functionality we are describing.
In one way we are 'describing', or spec’ing, the functionality of the SUT.
That’s what the Describe(); does.
And for technical reasons (actually requirements of the C language), you must declare the BeforeEach() and AfterEach() functions even if they are empty.
(You will get strange errors if you don’t!)
We are using the name "Cgreen" as the SUT in these first examples, as Cgreen itself is the object or class we want to test or describe.
I am assuming you have the Cgreen folder in the include search path so that compilation works; otherwise you’ll need to add it to the compilation command.
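If Cgreen is installed in a non-standard location, the compilation commands may need explicit search paths, along these lines (the paths are assumptions):

$ gcc -I/usr/local/include -c first_test.c
$ gcc first_test.o -L/usr/local/lib -lcgreen -o first_test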
Then, building this test is, of course, trivial…
$ gcc -c first_test.c
$ gcc first_test.o -lcgreen -o first_test
$ ./first_test
Invoking the executable should give…
Running "main" (0 tests)... Completed "main": No assertions.
All of the above rather assumes you are working in a Unix like environment, probably with 'gcc'.
The code is pretty much standard C99, so any C compiler should work.
Cgreen should compile on all systems that support the sys/msg.h messaging library. It has been tested on Linux, Mac OS X and Cygwin.
If you are on Windows we would be glad if you could figure out how to build there.
So far we have tried compilation, and shown that the test suite actually runs. Let’s add a meaningless test or two so that you can see how it runs…
#include <cgreen/cgreen.h>
Describe(Cgreen);
BeforeEach(Cgreen) {}
AfterEach(Cgreen) {}
Ensure(Cgreen, passes_this_test) {
assert_that(1 == 1);
}
Ensure(Cgreen, fails_this_test) {
assert_that(0 == 1);
}
int main(int argc, char **argv) {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, Cgreen, passes_this_test);
add_test_with_context(suite, Cgreen, fails_this_test);
return run_test_suite(suite, create_text_reporter());
}
A test is denoted by the macro Ensure which takes an optional context (Cgreen) and a, hopefully descriptive, test name (passes_this_test). You add the test to your suite using add_test_with_context().
On compiling and running, we now get the output…
Running "main" (2 tests)... first_tests.c:12: Failure: fails_this_test Expected [0 == 1] to [be true] "main": 1 pass, 1 failure in 42ms. Completed "main": 1 pass, 1 failure in 42ms.
The TextReporter, created by the call to create_text_reporter(), is the easiest way to output the test results. It prints the failures as intelligent and expressive text messages on your console.
Of course "0" would never equal "1", but this shows that Cgreen presents the value you expect ([be true]) and the expression that you want to assert ([0 == 1]).
We can also see a handy short form for asserting boolean expressions (assert_that(0 == 1);).
1.4. Five Minutes Doing TDD with Cgreen
For a more realistic example we need something to test. We’ll pretend that we are writing a function to split the words of a sentence in place. It would do this by replacing any spaces with string terminators, returning the number of conversions plus one. Here is an example of what we have in mind…
char *sentence = strdup("Just the first test");
word_count = split_words(sentence);
The variable sentence should now point at "Just\0the\0first\0test".
Not an obviously useful function, but we’ll be using it for something more practical later.
This time around we’ll add a little more structure to our tests.
Rather than having the test as a stand alone program, we’ll separate the runner from the test cases.
That way, multiple test suites of test cases can be included in the main() runner file. This makes it less work to add more tests later.
Here is the, so far empty, test case in words_test.c…
#include <cgreen/cgreen.h>
#include <cgreen/mocks.h>
#include "words.h"
#include <string.h>
Describe(Words);
BeforeEach(Words) {}
AfterEach(Words) {}
TestSuite *words_tests() {
TestSuite *suite = create_test_suite();
return suite;
}
Here is the all_tests.c test runner…
#include <cgreen/cgreen.h>
TestSuite *words_tests();
int main(int argc, char **argv) {
TestSuite *suite = create_test_suite();
add_suite(suite, words_tests());
if (argc > 1) {
return run_single_test(suite, argv[1], create_text_reporter());
}
return run_test_suite(suite, create_text_reporter());
}
Cgreen has two ways of running tests.
The default is to run all tests in their own protected processes.
This is what happens if you invoke run_test_suite().
All tests are then completely independent since they run in separate processes, preventing a single run-away test from bringing the whole program down with it.
It also ensures that one test cannot leave any state to the next, thus forcing you to setup the prerequisites for each test correctly and clearly.
Building this scaffolding…
$ gcc -c words_test.c
$ gcc -c all_tests.c
$ gcc words_test.o all_tests.o -lcgreen -o all_tests
…and executing the result gives the familiar…
Running "main" (0 tests)... "words_tests": No assertions. Completed "main": No assertions.
Note that we get an extra level of output here, we have both main and words_tests. That’s because all_tests.c adds the words test suite to its own (named main since it was created in the function main()).
All this scaffolding is pure overhead, but from now on adding tests will be a lot easier.
Here is a first test for split_words() in words_test.c…
#include <cgreen/cgreen.h>
#include "words.h"
#include <string.h>
Describe(Words);
BeforeEach(Words) {}
AfterEach(Words) {}
Ensure(Words, returns_word_count) {
char *sentence = strdup("Birds of a feather");
int word_count = split_words(sentence);
assert_that(word_count, is_equal_to(4));
free(sentence);
}
TestSuite *words_tests() {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, Words, returns_word_count);
return suite;
}
The assert_that() macro takes two parameters, the value to assert and a constraint. The constraints come in various forms. In this case we use probably the most common one, is_equal_to(). With the default TextReporter the message is sent to STDOUT.
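Other constraints follow the same pattern. A few examples of what such assertions could look like (a sketch using constraints that are listed later in this guide):

assert_that(word_count, is_greater_than(3));
assert_that(sentence, is_equal_to_string("Birds of a feather"));
assert_that(pointer, is_null);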
To get this to compile we need to create the words.h header file…
int split_words(char *sentence);
…and to get the code to link we need a stub function in words.c…
int split_words(char *sentence) {
return 0;
}
A full build later…
$ gcc -c all_tests.c
$ gcc -c words_test.c
$ gcc -c words.c
$ gcc all_tests.o words_test.o words.o -lcgreen -o all_tests
$ ./all_tests
…and we get the more useful response…
Running "main" (1 test)... words_tests.c:13: Failure: words_tests -> returns_word_count Expected [word_count] to [equal] [4] actual value: [0] expected value: [4] "words_tests": 1 failure in 42ms. Completed "main": 1 failure in 42ms.
The breadcrumb trail following the "Failure" text is the nesting of the tests. It goes from the test suites, which can be nested in each other, through the test function, and finally to the message from the assertion. In the language of Cgreen, a "failure" is a mismatched assertion, or constraint, and an "exception" occurs when a test fails to complete for any reason, e.g. a segmentation fault.
We could get this to pass just by returning the value 4. Doing TDD in really small steps, you would actually do this, but we’re not teaching TDD here. Instead we’ll go straight to the core of the implementation…
#include <string.h>
int split_words(char *sentence) {
int i, count = 1;
for (i = 0; i < strlen(sentence); i++) {
if (sentence[i] == ' ') {
count++;
}
}
return count;
}
Running it gives…
Running "main" (1 test)... "words_tests": 1 pass in 42ms. Completed "main": 1 pass in 42ms.
There is actually a hidden problem here, but our tests still passed so we’ll pretend we didn’t notice.
So it’s time to add another test. We want to confirm that the string is broken into separate words…
...
Ensure(Words, returns_word_count) {
...
}
Ensure(Words, converts_spaces_to_zeroes) {
char *sentence = strdup("Birds of a feather");
split_words(sentence);
int comparison = memcmp("Birds\0of\0a\0feather", sentence, strlen(sentence));
assert_that(comparison, is_equal_to(0));
free(sentence);
}
Sure enough, we get a failure…
Running "main" (2 tests)... words_tests.c:21: Failure: words_tests -> converts_spaces_to_zeroes Expected [comparison] to [equal] [0] actual value: [-32] expected value: [0] "words_tests": 1 pass, 1 failure in 42ms. Completed "main": 1 pass, 1 failure in 42ms.
Not surprising given that we haven’t written the code yet.
The fix…
#include <string.h>
int split_words(char *sentence) {
int i, count = 1;
for (i = 0; i < strlen(sentence); i++) {
if (sentence[i] == ' ') {
sentence[i] = '\0';
count++;
}
}
return count;
}
…reveals our previous hack…
Running "main" (2 tests)... words_tests.c:13: Failure: words_tests -> returns_word_count Expected [word_count] to [equal] [4] actual value: [2] expected value: [4] "words_tests": 1 pass, 1 failure in 42ms. Completed "main": 1 pass, 1 failure in 42ms.
Our earlier test now fails, because we have affected the strlen() call in our loop. Moving the length calculation out of the loop…
int split_words(char *sentence) {
int i, count = 1, length = strlen(sentence);
for (i = 0; i < length; i++) {
...
}
return count;
}
…restores order…
Running "main" (2 tests)... "words_tests": 2 passes in 42ms. Completed "main": 2 passes in 42ms.
It’s nice to keep the code under control while we are actually writing it, rather than debugging later when things are more complicated.
That was pretty straightforward. Let’s do something more interesting.
1.5. What are Mock Functions?
The next example is a more realistic extension of our previous attempts. As in real life we first implement something basic and then we go for the functionality that we need. In this case a function that invokes a callback for each word found in a sentence. Something like…
void act_on_word(const char *word, void *memo) { ... }
words("This is a sentence", &act_on_word, &memo);
Here the memo pointer is just some accumulated data that the act_on_word() callback might work with. Other people will write the act_on_word() function and probably many other functions like it. The callback is actually a flex point, and not of interest right now.
The function under test is the words() function and we want to make sure it walks the sentence correctly, dispatching individual words as it goes. So exactly which calls are made is very important.
How do we go about testing this?
Let’s start with a one word sentence. In this case we would expect the callback to be invoked once with the only word, right? Here is the test for that…
#include <cgreen/cgreen.h>
#include <cgreen/mocks.h>
...
void mocked_callback(const char *word, void *memo) {
mock(word, memo);
}
Ensure(Words, invokes_callback_once_for_single_word_sentence) {
expect(mocked_callback,
when(word, is_equal_to_string("Word")), when(memo, is_null));
words("Word", &mocked_callback, NULL);
}
TestSuite *words_tests() {
TestSuite *suite = create_test_suite();
...
add_test_with_context(suite, Words, invokes_callback_once_for_single_word_sentence);
return suite;
}
What is the funny looking mock() function? A mock is basically a programmable object. In C, objects are limited to functions, so this is a mock function. The macro mock() compares the incoming parameters with any expected values and dispatches messages to the test suite if there is a mismatch. It also returns any values that have been preprogrammed in the test.
The test is invokes_callback_once_for_single_word_sentence(). It programs the mock function using the expect() macro. It expects a single call, and that single call should use the parameters "Word" and NULL. If they don’t match, we will get a test failure. So when the code under test (our words() function) calls the injected mocked_callback(), it in turn will call mock() with the actual parameters.
Of course, we don’t add the mock callback to the test suite; it’s not a test.
For a successful compile and link, the words.h file must now look like…
int split_words(char *sentence);
void words(const char *sentence, void (*callback)(const char *, void *), void *memo);
…and the words.c file should have the stub…
void words(const char *sentence, void (*callback)(const char *, void *), void *memo) {
}
This gives us the expected failing test…
Running "main" (3 tests)... words_tests.c:32: Failure: words_tests -> invokes_callback_once_for_single_word_sentence Expected call was not made to mocked function [mocked_callback] "words_tests": 2 passes, 1 failure in 42ms. Completed "main": 2 passes, 1 failure in 42ms.
Cgreen reports that the callback was never invoked. We can easily get the test to pass by filling out the implementation with…
void words(const char *sentence, void (*callback)(const char *, void *), void *memo) {
(*callback)(sentence, memo);
}
That is, we just invoke it once with the whole string. This is a temporary measure to get us moving. For now everything should pass, although it doesn’t drive much functionality yet.
Running "main" (3 tests)... "words_tests": 4 passes in 42ms. Completed "main": 4 passes in 42ms.
That was all pretty conventional, but let’s tackle the trickier case of actually splitting the sentence.
Here is the test function we will add to words_test.c…
Ensure(Words, invokes_callback_for_each_word_in_a_phrase) {
expect(mocked_callback, when(word, is_equal_to_string("Birds")));
expect(mocked_callback, when(word, is_equal_to_string("of")));
expect(mocked_callback, when(word, is_equal_to_string("a")));
expect(mocked_callback, when(word, is_equal_to_string("feather")));
words("Birds of a feather", &mocked_callback, NULL);
}
Each call is expected in sequence. Any failures, or left-over or extra calls, and we get failures. We can see all this when we run the tests…
Running "main" (4 tests)... words_tests.c:38: Failure: words_tests -> invokes_callback_for_each_word_in_a_phrase Expected [[word] parameter in [mocked_callback]] to [equal string] ["Birds"] actual value: ["Birds of a feather"] expected to equal: ["Birds"] words_tests.c:39: Failure: words_tests -> invokes_callback_for_each_word_in_a_phrase Expected call was not made to mocked function [mocked_callback] words_tests.c:40: Failure: words_tests -> invokes_callback_for_each_word_in_a_phrase Expected call was not made to mocked function [mocked_callback] words_tests.c:41: Failure: words_tests -> invokes_callback_for_each_word_in_a_phrase Expected call was not made to mocked function [mocked_callback] "words_tests": 4 passes, 4 failures in 42ms. Completed "main": 4 passes, 4 failures in 42ms.
The first failure tells the story. Our little words() function called the mock callback with the entire sentence. This makes sense, because that was the hack we did to get to the next test.
Although not relevant to this guide, I cannot resist getting these tests to pass. Besides, we get to use the function we created earlier…
void words(const char *sentence, void (*callback)(const char *, void *), void *memo) {
char *words = strdup(sentence);
int word_count = split_words(words);
char *word = words;
while (word_count-- > 0) {
(*callback)(word, memo);
word = word + strlen(word) + 1;
}
free(words);
}
And with some work we are rewarded with…
Running "main" (4 tests)... "words_tests": 8 passes in 42ms. Completed "main": 8 passes in 42ms.
More work than I like to admit, as it took me three goes to get this right. I first forgot the + 1 added on to strlen(), then forgot to swap sentence for word in the (*callback)() call, and finally, third time lucky. Of course running the tests each time made these mistakes very obvious. It’s taken me far longer to write these paragraphs than it took to write the code.
2. Building Cgreen test suites
Cgreen is a tool for building unit tests in the C or C++ languages. These are usually written alongside the production code by the programmer to prevent bugs. Even though the test suites are created by software developers, they are intended to be human readable C code, as part of their function is an executable specification. Used in this way, the test harness delivers constant quality assurance.
In other words you’ll get fewer bugs.
2.1. Writing Basic Tests
Cgreen tests are like C, or C++, functions with no parameters and no return value. To signal that they actually are tests we mark them with the Ensure macro. Here’s an example…
Ensure(Strlen, returns_five_for_hello) {
assert_that(strlen("Hello"), is_equal_to(5));
}
The Ensure macro takes two arguments (in the BDD style) where the first is the System Under Test (SUT), which must be declared with the Describe macro.
Describe(Strlen);
The second argument is the test name and can be anything you want as long as it fulfils the rules for an identifier in C and C++.
A typical way to choose the names of the tests is what we see here: reading the declaration of the test makes sense since it is almost plain English, "Ensure strlen returns five for 'hello'". No problem understanding what we aim to test, or in TDD lingo, test drive. It can also be viewed as an example from a description of what strlen should be able to do. In a way, extracting all the Ensure:s from your tests might give you all the documentation you’ll need.
The call to assert_that() is the primary part of an assertion, which is complemented with a constraint, in this case is_equal_to(), as a parameter. This makes a very fluent interface to the asserts, one that actually reads like English. The general format is then
assert_that(actual, <constraint>);
Sometimes you just want to fail the test explicitly, and there is a function for that too, fail_test(const char *message). And there is a function to explicitly pass, pass_test(void). There is also a function to programmatically skip a test, skip_test(void), to complement the xEnsure notation (see Skipping Tests).
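A small sketch of how these could be used inside a test body (the condition and environment variable are made up for illustration):

Ensure(Strlen, demonstrates_explicit_results) {
    if (getenv("SKIP_SLOW_TESTS") != NULL)   /* hypothetical condition */
        skip_test();
    if (strlen("Hello") != 5)
        fail_test("unexpected length");
}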
Assertions send messages to Cgreen, which in turn outputs the results.
2.2. The Standard Constraints
Here are the standard constraints…
Constraint | Passes if actual value/expression…

Basic
is_true | evaluates to true, but you can also just leave out the constraint, e.g. assert_that(value)
is_false | evaluates to false
is_null | equals null
is_non_null | is a non null value
is_not_null | d:o

Integer compatible
is_equal_to(value) | '== value'
is_equal_to_hex(value) | '== value', but will show values in HEX
is_not_equal_to(value) | '!= value'
is_greater_than(value) | '> value'
is_less_than(value) | '< value'

Structs and general data
is_equal_to_contents_of(pointer, size) | matches the data pointed to by pointer, compared over size bytes
is_not_equal_to_contents_of(pointer, size) | does not match the data pointed to by pointer, compared over size bytes

Strings
is_equal_to_string(value) | are equal to value when compared using strcmp()
is_not_equal_to_string(value) | are not equal to value when compared using strcmp()
contains_string(value) | contains value
does_not_contain_string(value) | does not contain value
begins_with_string(value) | starts with the string value
does_not_begin_with_string(value) | does not start with the string value
ends_with_string(value) | ends with the string value
does_not_end_with_string(value) | does not end with the string value

Double floats
is_equal_to_double(value) | are equal to value
is_not_equal_to_double(value) | are not equal to value
is_less_than_double(value) | is less than value
is_greater_than_double(value) | is greater than value
The boolean assertion macros accept an int value. The equality assertions accept anything that can be cast to intptr_t and simply perform an == operation. The string comparisons are slightly different in that they use the <string.h> library function strcmp(). If you use is_equal_to() with char * pointers then it is the value of the pointers themselves that has to be the same, i.e. the pointers have to point at the same string for the test to pass.
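To illustrate the difference, a small sketch:

char *a = strdup("Hello");
char *b = strdup("Hello");
assert_that(a, is_equal_to_string(b));   /* passes: contents compared with strcmp() */
assert_that(a, is_equal_to(b));          /* fails: the two pointer values differ */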
The constraints above should be used as the second argument to one of the assertion functions:
Assertion | Description
assert_that(actual, constraint) | Passes if actual satisfies constraint
assert_that_double(actual, constraint) | Passes if the double actual satisfies constraint, which must be one of the double-valued constraints
You cannot use C/C++ string literal concatenation (like "don’t" "use" "string" "concatenation") in the parameters to the constraints. If you do, you will get weird error messages about missing arguments to the constraint macros. This is caused by the macros using argument strings to produce nice failure messages.
2.3. Asserting C++ Exceptions
When you use Cgreen with C++ there is one extra assertion available:
Assertion | Description
assert_throws(exception, expression) | Passes if evaluating expression throws exception
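A minimal sketch of what that could look like inside a C++ test body, assuming the assertion is named assert_throws as above:

std::vector<int> v;
assert_throws(std::out_of_range, v.at(10));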
2.4. BDD Style vs. TDD Style
So far we have encouraged the modern BDD style. It has merits that we really want you to benefit from. But you might come across Cgreen tests in another style, more like the standard TDD style, which is more in line with previous thinking and might be more similar to other frameworks.
The only difference, in principle, is the use of the SUT or 'context'. In the BDD style you have it, in the TDD style you don’t.
Describe(Strlen); (1)
BeforeEach(Strlen) {} (2)
AfterEach(Strlen) {} (3)
Ensure(Strlen, returns_five_for_hello) { (4)
assert_that(strlen("Hello"), is_equal_to(5));
}
TestSuite *our_tests() {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, Strlen, returns_five_for_hello); (5)
return suite;
}
1 | The Describe macro must name the SUT |
2 | The BeforeEach function… |
3 | … and the AfterEach functions must exist and name the SUT |
4 | The tests need to name the SUT |
5 | Adding to the test suite |
You can only have tests for a single SUT in the same source file.
If you use the older pure-TDD style you skip the Describe macro, and the BeforeEach and AfterEach functions. You don’t need a SUT in the Ensure() macro or when you add the test to the suite.
(1)
Ensure(strlen_returns_five_for_hello) { (2)
assert_that(strlen("Hello"), is_equal_to(5));
}
TestSuite *our_tests() {
TestSuite *suite = create_test_suite();
add_test(suite, strlen_returns_five_for_hello); (3)
return suite;
}
1 | No Describe, BeforeEach() or AfterEach() |
2 | No SUT/context in the Ensure() macro |
3 | No SUT/context in add_test(), and you should use this function instead of the …_with_context() variant. |
You might think of the TDD style as the BDD style with a default SUT or context.
2.5. A Runner
The tests are only run by running a test suite in some form. (But see also Using the Runner.) We can create and run one especially for this test like so…
TestSuite *our_tests() {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, Strlen, returns_five_for_hello);
return suite;
}
In case you have spotted that the reference to returns_five_for_hello should have an ampersand in front of it: add_test_with_context() is actually a macro and the & is added automatically. Furthermore, the Ensure() macro actually mangles the test name, so it is not really a function name. (This might also make the tests a bit difficult to find in the debugger….)
To run the test suite, we call run_test_suite() on it. So we can just write…
return run_test_suite(our_tests(), create_text_reporter());
The results of assertions are ultimately delivered as passes and failures to a collection of callbacks defined in a TestReporter structure. There is a predefined TestReporter in Cgreen called the TextReporter that delivers messages in plain text like we have already seen. The return value of run_test_suite() is a standard C library/Unix exit code that can be returned directly by the main() function.
The complete test code now looks like…
#include <cgreen/cgreen.h>
#include <string.h>
Describe(Strlen);
BeforeEach(Strlen) {}
AfterEach(Strlen) {}
Ensure(Strlen, returns_five_for_hello) {
assert_that(strlen("Hello"), is_equal_to(5));
}
TestSuite *our_tests() {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, Strlen, returns_five_for_hello);
return suite;
}
int main(int argc, char **argv) {
return run_test_suite(our_tests(), create_text_reporter());
}
Compiling and running gives…
$ gcc -c strlen_test.c
$ gcc strlen_test.o -lcgreen -o strlen_test
$ ./strlen_test
Running "our_tests" (1 test)...
"our_tests": 1 pass in 42ms.
Completed "our_tests": 1 pass in 42ms.
We can see that the outer test suite is called our_tests since it was in our_tests() we created the test suite. There are no messages shown unless there are failures. So, let’s break our test to see it…
So, let’s break our test to see it…
Ensure(Strlen, returns_five_for_hello) {
assert_that(strlen("Hiya"), is_equal_to(5));
}
…we’ll get the helpful message…
Running "our_tests" (1 test)... strlen_tests.c:9: Failure: returns_five_for_hello Expected [strlen("Hiya")] to [equal] [5] actual value: [4] expected value: [5] "our_tests": 1 failure in 42ms. Completed "our_tests": 1 failure in 42ms.
Cgreen starts every message with the location of the test failure so that the usual error message identifying tools (like Emacs’s next-error) will work out of the box.
Once we have a basic test scaffold up, it’s pretty easy to add more tests. Adding a test of strlen() with an empty string for example…
...
Ensure(Strlen, returns_zero_for_empty_string) {
assert_equal(strlen("\0"), 0);
}
TestSuite *our_tests() {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, Strlen, returns_five_for_hello);
add_test_with_context(suite, Strlen, returns_zero_for_empty_string);
return suite;
}
...
And so on.
2.6. BeforeEach and AfterEach
It’s common for test suites to have a lot of duplicate code, especially when setting up similar tests. Take this database code for example…
#include <cgreen/cgreen.h>
#include <stdlib.h>
#include <mysql.h>
#include "person.h"
Describe(Person);
BeforeEach(Person) {}
AfterEach(Person) {}
static void create_schema() {
MYSQL *connection = mysql_init(NULL);
mysql_real_connect(connection, "localhost", "me", "secret", "test", 0, NULL, 0);
mysql_query(connection, "create table people (name, varchar(255) unique)");
mysql_close(connection);
}
static void drop_schema() {
MYSQL *connection = mysql_init(NULL);
mysql_real_connect(connection, "localhost", "me", "secret", "test", 0, NULL, 0);
mysql_query(connection, "drop table people");
mysql_close(connection);
}
Ensure(Person, can_add_person_to_database) {
create_schema();
Person *person = create_person();
set_person_name(person, "Fred");
save_person(person);
Person *found = find_person_by_name("Fred");
assert_that(get_person_name(found), is_equal_to_string("Fred"));
drop_schema();
}
Ensure(Person, cannot_add_duplicate_person) {
create_schema();
Person *person = create_person();
set_person_name(person, "Fred");
assert_that(save_person(person), is_true);
Person *duplicate = create_person();
set_person_name(duplicate, "Fred");
assert_that(save_person(duplicate), is_false);
drop_schema();
}
TestSuite *person_tests() {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, Person, can_add_person_to_database);
add_test_with_context(suite, Person, cannot_add_duplicate_person);
return suite;
}
int main(int argc, char **argv) {
return run_test_suite(person_tests(), create_text_reporter());
}
We have already factored out the duplicate code into its own functions create_schema() and drop_schema(), so things are not so bad. At least not yet. But what happens when we get dozens of tests? For a test subject as complicated as a database ActiveRecord, having dozens of tests is very likely.
We can get Cgreen to do some of the work for us by calling these methods before and after each test in the test suite. Here is the new version…
Here is the new version…
...
static void create_schema() {
...
}
static void drop_schema() {
...
}
Describe(Person);
BeforeEach(Person) { create_schema(); }
AfterEach(Person) { drop_schema(); }
Ensure(Person, can_add_person_to_database) {
Person *person = create_person();
set_person_name(person, "Fred");
save_person(person);
Person *found = find_person_by_name("Fred");
assert_that(get_person_name(found), is_equal_to_string("Fred"));
}
Ensure(Person, cannot_add_duplicate_person) {
Person *person = create_person();
set_person_name(person, "Fred");
assert_that(save_person(person), is_true);
Person *duplicate = create_person();
set_person_name(duplicate, "Fred");
assert_that(save_person(duplicate), is_false);
}
TestSuite *person_tests() {
...
With this new arrangement Cgreen runs the create_schema() function before each test, and the drop_schema() function after each test. This saves some repetitive typing and reduces the chance of accidents. It also makes the tests more focused.
The reason we try so hard to strip everything out of the test functions is the fact that the test suite acts as documentation. In our person.h example we can easily see that Person has some kind of name property, and that this value must be unique. For the tests to act like a readable specification we have to remove as much mechanical clutter as we can.
In this particular case there are more lines that we could move from the tests to BeforeEach():
Person *person = create_person();
set_person_name(person, "Fred");
Of course that would require an extra variable, and it might make the tests less clear. And as we add more tests, it might turn out to not be common to all tests. This is a typical judgement call that you often get to make with BeforeEach() and AfterEach().
If you use the pure-TDD notation, not having the test subject named by the Describe macro, you can’t have the BeforeEach() and AfterEach() either. In this case you can still run a function before and after every test. Just nominate any void(void) function by calling set_setup() and/or set_teardown() with the suite and the function that you want to run before/after each test. In the example above that would be set_setup(suite, create_schema); and set_teardown(suite, drop_schema);.
A couple of details. There is only one BeforeEach() and one AfterEach() allowed in each TestSuite. Also, the AfterEach() function might not be run if the test crashes, causing some test interference.
This brings us nicely onto the next section…
2.7. Each Test in its Own Process
Consider this test method…
Ensure(CrashExample, seg_faults_for_null_dereference) {
int *p = NULL;
(*p)++;
}
Crashes are not something you would normally want to have in a test run. Not least because it will stop you receiving the very test output you need to tackle the problem.
To prevent segmentation faults and other problems bringing down the test suites, Cgreen runs every test in its own process.
Just before calling the BeforeEach() (or setup) function, Cgreen fork():s. The main process waits for the test to complete normally or die. This includes calling the AfterEach() (or teardown) function, if any. If the test process dies, an exception is reported and the main test process carries on with the next test.
For example…
#include <cgreen/cgreen.h>
#include <stdlib.h>
Describe(CrashExample);
BeforeEach(CrashExample) {}
AfterEach(CrashExample) {}
Ensure(CrashExample, seg_faults_for_null_dereference) {
int *p = NULL;
(*p)++;
}
int main(int argc, char **argv) {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, CrashExample, seg_faults_for_null_dereference);
return run_test_suite(suite, create_text_reporter());
}
When built and run, this gives…
Running "main" (1 test)... crash_tests.c:8: Exception: seg_faults_for_null_dereference Test terminated with signal: Segmentation fault "main": 1 exception in 42ms. Completed "main": 1 exception in 42ms.
The normal thing to do in this situation is to fire up the debugger. Unfortunately, the constant fork():ing of Cgreen can be one extra complication too many when debugging. It’s enough of a problem to find the bug. To get around this, and also to allow the running of one test at a time, Cgreen has the run_single_test() function.
The signatures of the two run methods are…
- int run_test_suite(TestSuite *suite, TestReporter *reporter);
- int run_single_test(TestSuite *suite, char *test, TestReporter *reporter);
The extra parameter of run_single_test(), the test string, is the name of the test to select. This could be any test, even in nested test suites (see below).
Here is how we would use it to debug our crashing test…
int main(int argc, char **argv) {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, CrashExample, seg_faults_for_null_dereference);
return run_single_test(suite, "seg_faults_for_null_dereference", create_text_reporter());
}
When run in this way, Cgreen will not fork(). But see the section on Debugging Cgreen tests.
The function run() is a good place to place a breakpoint.
The following is a typical session:
$ gdb crash2
...
(gdb) break main
(gdb) run
...
(gdb) break run
(gdb) continue
...
Running "main" (1 tests)...

Breakpoint 2, run_the_test_code (suite=suite@entry=0x2003abb0,
    spec=spec@entry=0x402020 <CgreenSpec__CrashExample__seg_faults_for_null_dereference__>,
    reporter=reporter@entry=0x2003abe0)
    at /cygdrive/c/Users/Thomas/Utveckling/Cgreen/cgreen/src/runner.c:270
270         run(spec);
(gdb) step
run (spec=0x402020 <CgreenSpec__CrashExample__seg_faults_for_null_dereference__>)
    at /cygdrive/c/Users/Thomas/Utveckling/Cgreen/cgreen/src/runner.c:217
217         spec->run();
(gdb) step
CrashExample__seg_faults_for_null_dereference () at crash_test2.c:9
9           int *p = NULL;
(gdb) step
10          (*p)++;
(gdb) step

Program received signal SIGSEGV, Segmentation fault.
0x004011ea in CrashExample__seg_faults_for_null_dereference () at crash_test2.c:10
10          (*p)++;
Which shows exactly where the problem is.
This deals with the case where your code throws an exception like segmentation fault, but what about a process that fails to complete by getting stuck in a loop? Well, Cgreen will wait forever too. But, using the C signal handlers, we can place a time limit on the process by sending it an interrupt. To save us writing this ourselves, Cgreen includes the die_in() function to help us out.
Here is an example of time limiting a test…
...
Ensure(CrashExample, seg_faults_for_null_dereference) {
...
}
Ensure(CrashExample, will_loop_forever) {
die_in(1);
while(0 == 0) { }
}
int main(int argc, char **argv) {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, CrashExample, seg_faults_for_null_dereference);
add_test_with_context(suite, CrashExample, will_loop_forever);
return run_test_suite(suite, create_text_reporter());
}
When executed, the code will pause for a second, and then finish with…
Running "main" (2 tests)... crash_tests.c:8: Exception: seg_faults_for_null_dereference Test terminated with signal: Segmentation fault crash_tests.c:13: Exception: will_loop_forever Test terminated unexpectedly, likely from a non-standard exception or Posix signal "main": 2 exceptions in 42ms. Completed "main": 2 exceptions in 42ms.
Note that you see the test results as they come in. Cgreen streams the results as they happen, making it easier to figure out where the test suite has problems.
Of course, if you want to set a general time limit on all your tests, then you can add a die_in() to a BeforeEach() (or setup()) function. Cgreen will then apply the limit to each of the tests in that context.
Another possibility is the use of an environment variable named CGREEN_TIMEOUT_PER_TEST which, if set to a number, will apply that timeout to every test run. This applies to all tests in the same run.
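For example, a per-test timeout could be applied to a whole run from the shell like this (a POSIX shell is assumed; check the reference for the unit of the value):

$ CGREEN_TIMEOUT_PER_TEST=5 ./all_tests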
2.8. Debugging Cgreen tests
Cgreen protects itself from being torn down by an exception in a test by fork()-ing each test into a separate process. A catastrophic error will then only affect the child process for that specific test and Cgreen can catch it, rather than crashing too. It can then report the exception and continue with the next test.
2.8.1. No fork, please
If you want to debug any of your tests, the constant fork()-ing might make that difficult or impossible. There are also other circumstances that might require that you don’t use fork(). There are two ways to make Cgreen refrain from fork()-ing.
Cgreen does not fork() when only a single test is run by name with the function run_single_test(). To debug, you can then obviously set a breakpoint at that test (but note that its actual name has probably been mangled). Cgreen does some book-keeping before actually getting to the test, so a function easier to find might be the one simply called run().
The second way is to define the environment variable CGREEN_NO_FORK. If Cgreen can get that variable from the environment using getenv() it will run the test(s) in the same process. In this case the non-forking applies to all tests run, so all tests will run in the same process, namely Cgreen's main process.
This might bring your whole test suite down if a single test causes an exception. So it is not a recommended setting for normal use.
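A debugging session without forking could then be started roughly like this (the binary name is just an example):

$ CGREEN_NO_FORK=1 gdb ./all_tests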
2.8.2. Debugging with cgreen-runner
If you use the convenient auto-discovery feature of Cgreen (see Automatic Test Discovery) by running dynamic loadable libraries through cgreen-runner, it might be tricky to figure out where to put breakpoints and to get them to "take". cgreen-runner obviously loads the library (or libraries) with your tests dynamically, so the tests are not available before executing the code that loads them.
The function run() is a good place to place a breakpoint.
2.8.3. cgreen-debug
For some platforms a utility script, cgreen-debug
, is installed when you install Cgreen.
It makes it very convenient to start a debugging session for a particular test.
Find out the logical name of the test, which is composed of the Context and the Testname, in the form <context>:<testname>.
Then just invoke cgreen-debug
$ cgreen-debug <library> <context>:<testname>
The script will try to find a debugger, invoke it on the
cgreen-runner
and break on that test.
Currently it only supports gdb and will prefer cgdb if
that’s available.
|
2.9. Building Composite Test Suites
The TestSuite is a composite structure. This means test suites can be added to test suites, building a tree structure that will be executed in order.
Let’s combine the strlen() tests with the Person tests above. Firstly we need to remove the main() functions. E.g…
Ensure(Strlen, returns_five_for_hello) {
...
}
Ensure(Strlen, returns_zero_for_empty_string) {
...
}
TestSuite *our_tests() {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, Strlen, returns_five_for_hello);
add_test_with_context(suite, Strlen, returns_zero_for_empty_string);
return suite;
}
Then we can write a small runner with a new main() function…
#include <cgreen/cgreen.h>
TestSuite *our_tests();
TestSuite *person_tests();
int main(int argc, char **argv) {
TestSuite *suite = create_test_suite();
add_suite(suite, our_tests());
add_suite(suite, person_tests());
if (argc > 1) {
return run_single_test(suite, argv[1], create_text_reporter());
}
return run_test_suite(suite, create_text_reporter());
}
It’s usually easier to place the TestSuite prototypes directly in the runner source, rather than have lots of header files. This is the same reasoning that let us drop the prototypes for the test functions in the actual test scripts. We can get away with this because the tests are more about documentation than encapsulation.
As we saw above, we can run a single test using the run_single_test() function, and we’d like to be able to do that from the command line. So we added a simple if block to take the test name as an optional argument. The entire test suite will be searched for the named test. This trick also saves us a recompile when we debug.
When you use the BDD notation you can only have a single test subject (which is actually equivalent to a suite) in a single file, because you can only have one Describe() macro in each file. But using this strategy you can create composite suites that take all your tests and run them in one go.
Rewrite pending. The next couple of sections do not reflect the current best thinking. They are remnants of the TDD notation. Using BDD notation you would create separate contexts, each in its own file, with separate names, for each of the fixture cases.
If you use the TDD (non-BDD) notation you can build several test suites in the same file, even nesting them. We can even add mixtures of test functions and test suites to the same parent test suite. Loops will give trouble, however.
If we do place several suites in the same file, then all the suites will be named the same in the breadcrumb trail in the test messages. They will all be named after the function the create call sits in. If you want to get around this, or you just like to name your test suites, you can use create_named_test_suite() instead of create_test_suite(). This takes a single string parameter. In fact create_test_suite() is just a macro that inserts the __func__ constant into create_named_test_suite().
What happens to setup and teardown functions in a TestSuite that contains other TestSuite:s? Well firstly, Cgreen does not fork() when running a suite. It leaves it up to the child suite to fork() the individual tests. This means that a setup and teardown will run in the main process. They will be run once for each child suite.
We can use this to speed up our Person tests above. Remember we were creating a new connection and closing it again in the fixtures. This means opening and closing a lot of connections. At the slight risk of some test interference, we could reuse the connection across tests…
...
static MYSQL *connection;
static void create_schema() {
mysql_query(connection, "create table people (name, varchar(255) unique)");
}
static void drop_schema() {
mysql_query(connection, "drop table people");
}
Ensure(can_add_person_to_database) { ... }
Ensure(cannot_add_duplicate_person) { ... }
void open_connection() {
connection = mysql_init(NULL);
mysql_real_connect(connection, "localhost", "me", "secret", "test", 0, NULL, 0);
}
void close_connection() {
mysql_close(connection);
}
TestSuite *person_tests() {
TestSuite *suite = create_test_suite();
set_setup(suite, create_schema);
set_teardown(suite, drop_schema);
add_test(suite, can_add_person_to_database);
add_test(suite, cannot_add_duplicate_person);
TestSuite *fixture = create_named_test_suite("Mysql fixture");
add_suite(fixture, suite);
set_setup(fixture, open_connection);
set_teardown(fixture, close_connection);
return fixture;
}
The trick here is creating a test suite as a wrapper whose sole purpose is to wrap the main test suite in the fixture. This is our 'fixture' pointer. This code is a little confusing, because we have two sets of fixtures in the same test script.
We have the MySQL connection fixture. This will run open_connection() and close_connection() just once at the beginning and end of the person tests. This is because the suite pointer is the only member of fixture. We also have the schema fixture, the create_schema() and drop_schema(), which is run before and after every test. Those are still attached to the inner suite.
In the real world we would probably place the connection fixture in its own file…
static MYSQL *connection;
MYSQL *get_connection() {
return connection;
}
static void open_connection() {
connection = mysql_init(NULL);
mysql_real_connect(connection, "localhost", "me", "secret", "test", 0, NULL, 0);
}
static void close_connection() {
mysql_close(connection);
}
TestSuite *connection_fixture(TestSuite *suite) {
TestSuite *fixture = create_named_test_suite("Mysql fixture");
add_suite(fixture, suite);
set_setup(fixture, open_connection);
set_teardown(fixture, close_connection);
return fixture;
}
This allows the reuse of common fixtures across projects.
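With the fixture in its own file, a project runner could wrap any suite that needs a live connection, roughly like this (a sketch assuming person_tests() now returns the unwrapped suite):

int main(int argc, char **argv) {
    return run_test_suite(connection_fixture(person_tests()),
                          create_text_reporter());
}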
3. Mocking functions with Cgreen
When testing you want certainty above all else. Random events destroy confidence in your test suite and force needless extra runs "to be sure". A good test places the system under test into a tightly controlled environment. A test chamber if you like. This makes the tests fast, repeatable and reliable.
To create a test chamber for testing code, we have to control any outgoing calls from the code under test. We won’t believe our test failure if our code is making calls to the internet for example. The internet can fail all by itself. Not only do we not have total control, but it also means we have to get dependent components working before we can test the higher level code. This makes it difficult to code top down.
The solution to this dilemma is to write stub code for the components whilst the higher level code is written. This pollutes the code base with temporary code, and the test isolation disappears when the system is eventually fleshed out.
The ideal is to have minimal stubs written for each individual test. Cgreen encourages this approach by making such tests easier to write.
3.1. The Problem with Streams
How would we test the following code…?
char *read_paragraph(int (*read)(void *), void *stream) {
int buffer_size = 0, length = 0;
char *buffer = NULL;
int ch;
while ((ch = (*read)(stream)) != EOF) {
if (++length > buffer_size) {
buffer_size += 100;
buffer = (char *)realloc(buffer, buffer_size + 1);
}
if ((buffer[length] = ch) == '\n') {
break;
}
buffer[length + 1] = '\0';
}
return buffer;
}
This is a fairly generic stream filter that turns the incoming characters into C string paragraphs. Each call creates one paragraph, returning a pointer to it or returning NULL if there is no paragraph. The paragraph has memory allocated to it and the stream is advanced ready for the next call. That’s quite a bit of functionality, and there are plenty of nasty boundary conditions. I really want this code tested before I deploy it.
The problem is the stream dependency. We could use a real stream, but that will cause all sorts of headaches. It makes the test of our paragraph formatter dependent on a working stream. It means we have to write the stream first, bottom up coding rather than top down. It means we will have to simulate stream failures - not easy. It will also mean setting up external resources. This is more work, will run slower, and could lead to spurious test failures.
By contrast, we could write a simulation of the stream for each test, called a "server stub". For example, when the stream is empty nothing should happen. We hopefully get NULL from read_paragraph when the stream is exhausted. That is, it just returns a steady stream of EOFs.
Fortunately, this function takes the stream as a parameter. This is called dependency injection and is a very important concept. Thanks to this we can write a small function, a stub, with the same signature that simulates a real stream, and inject that instead of the real stream that the production code would normally pass in.
If the code does not inject the dependency this way we can often compile the stub separately and link with that instead of the real stream. In this case your stub will have to have the same name as the original function, of course. (This is sometimes called the linkage seam.)
static int empty_stream(void *stream) {
return EOF;
}
Describe(ParagraphReader);
BeforeEach(ParagraphReader) {}
AfterEach(ParagraphReader) {}
Ensure(ParagraphReader, gives_null_when_reading_empty_stream) {
assert_that(read_paragraph(&empty_stream, NULL), is_null);
}
Our simulation is easy here, because our fake stream returns only one value. Things are harder when the function result changes from call to call as a real stream would. Simulating this would mean messing around with static variables and counters that are reset for each test. And of course, we will be writing quite a few stubs. Often a different one for each test. That’s a lot of clutter.
Cgreen can handle this clutter for us by letting us write a single programmable function for all our tests.
3.2. Record and Playback
We can redo our example by creating a stream_stub() function. We can call it anything we want, and since I thought we wanted to have a stubbed stream…
static int stream_stub(void *stream) {
return (int)mock(stream);
}
Hardly longer than our trivial server stub above, it is just a macro call to generate a return value, but we can reuse this in test after test. Let’s see how.
For our simple example above we just tell it to always return EOF…
#include <cgreen/cgreen.h>
#include <cgreen/mocks.h>
char *read_paragraph(int (*read)(void *), void *stream);
static int stream_stub(void *stream) {
return (int)mock(stream);
}
Describe(ParagraphReader);
BeforeEach(ParagraphReader) {}
AfterEach(ParagraphReader) {}
Ensure(ParagraphReader, gives_null_when_reading_empty_stream) {
always_expect(stream_stub, will_return(EOF)); (1)
assert_that(read_paragraph(&stream_stub, NULL), is_null);
}
1 | The always_expect() macro takes the function name as an argument and also defines the return value using the call to will_return(). This is a declaration of an expectation of a call to the stub, and we have told our stream_stub() to always return EOF when called. |
Let’s see if our production code actually works…
Running "stream" (1 test)... "ParagraphReader": 1 pass in 42ms. Completed "stream": 1 pass in 42ms.
So far, so good. On to the next test.
If we want to test a one character line, we have to send the terminating EOF or "\n" as well as the single character. Otherwise our code will loop forever, giving an infinite line of that character. Here is how we can do this…
Ensure(ParagraphReader, gives_one_character_line_for_one_character_stream) {
expect(stream_stub, will_return('a'));
expect(stream_stub, will_return(EOF));
char *line = read_paragraph(&stream_stub, NULL);
assert_that(line, is_equal_to_string("a"));
free(line);
}
Unlike the always_expect() instruction, expect() sets up an expectation of a single call, and specifying will_return() sets the single return value for just that call. It acts like a record and playback model. Successive expectations map out the return sequence that will be given back once the test proper starts.
We’ll add this test to the suite and run it…
Running "stream" (2 tests)... stream_tests.c:23: Failure: ParagraphReader -> gives_one_character_line_for_one_character_stream Expected [line] to [equal string] ["a"] actual value: [""] expected to equal: ["a"] "ParagraphReader": 1 pass, 1 failure in 42ms. Completed "stream": 1 pass, 1 failure in 42ms.
Oops. Our code under test doesn’t work. Already we need a fix…
char *read_paragraph(int (*read)(void *), void *stream) {
int buffer_size = 0, length = 0;
char *buffer = NULL;
int ch;
while ((ch = (*read)(stream)) != EOF) {
if (++length > buffer_size) {
buffer_size += 100;
buffer = (char *)realloc(buffer, buffer_size + 1);
}
if ((buffer[length - 1] = ch) == '\n') { (1)
break;
}
buffer[length] = '\0'; (2)
}
return buffer;
}
1 | After moving the indexing here… |
2 | and here… |
around a bit everything is fine:
Running "stream" (2 tests)... "ParagraphReader": 2 passes in 42ms. Completed "stream": 2 passes in 42ms.
So, how do the Cgreen stubs work? Each expect() describes one call to the stub, and when a call to will_return() is included, the return values will be collected and returned in order as the expected calls arrive. The mock() macro captures the parameter names, their values and the func property (the name of the stub function). Cgreen can then use these to look up entries in the return list, and also to generate more helpful messages.
We can now crank out our tests quite quickly…
Ensure(ParagraphReader, gives_one_word_line_for_one_word_stream) {
expect(stream_stub, will_return('t'));
expect(stream_stub, will_return('h'));
expect(stream_stub, will_return('e'));
always_expect(stream_stub, will_return(EOF));
assert_that(read_paragraph(&stream_stub, NULL), is_equal_to_string("the"));
}
I’ve been a bit naughty. As each test runs in its own process, I haven’t bothered to free the pointers to the paragraphs. I’ve just let the operating system do it. Purists may want to add the extra clean up code.
I’ve also used always_expect()
for the last instruction.
Without this, if the stub receives a call it does not expect, it will report a test failure.
This is overly restrictive, as our read_paragraph()
function could quite legitimately call the stream after it had run off the end.
OK, that would be odd behaviour, but that’s not what we are testing here.
If we were, it would be placed in a test of its own.
The always_expect()
call tells Cgreen to keep going after the first three letters, allowing extra calls.
As we build more and more tests, they start to look like a specification of the wanted behaviour…
Ensure(ParagraphReader, drops_line_ending_from_word_and_stops) {
expect(stream_stub, will_return('t'));
expect(stream_stub, will_return('h'));
expect(stream_stub, will_return('e'));
expect(stream_stub, will_return('\n'));
assert_that(read_paragraph(&stream_stub, NULL), is_equal_to_string("the"));
}
…and just for luck…
Ensure(ParagraphReader, gives_empty_line_for_single_line_ending) {
expect(stream_stub, will_return('\n'));
assert_that(read_paragraph(&stream_stub, NULL), is_equal_to_string(""));
}
This time we must not use always_expect().
We want to leave the stream where it is, ready for the next call to read_paragraph()
.
If we call the stream beyond the line ending, we want to fail.
Oops, that was a little too fast. Turns out we are failing anyway…
Running "stream" (5 tests)... stream_tests.c:40: Failure: ParagraphReader -> drops_line_ending_from_word_and_stops Expected [read_paragraph(&stream_stub, ((void *)0))] to [equal string] ["the"] actual value: ["the "] expected to equal: ["the"] stream_tests.c:45: Failure: ParagraphReader -> gives_empty_line_for_single_line_ending Expected [read_paragraph(&stream_stub, ((void *)0))] to [equal string] [""] actual value: [" "] expected to equal: [""] "ParagraphReader": 3 passes, 2 failures in 42ms. Completed "stream": 3 passes, 2 failures in 42ms.
Clearly we are passing through the line ending. Another fix later…
char *read_paragraph(int (*read)(void *), void *stream) {
int buffer_size = 0, length = 0;
char *buffer = NULL;
int ch;
while ((ch = (*read)(stream)) != EOF) {
if (++length > buffer_size) {
buffer_size += 100;
buffer = (char *)realloc(buffer, buffer_size + 1);
}
if ((buffer[length - 1] = ch) == '\n') {
buffer[--length] = '\0';
break;
}
buffer[length] = '\0';
}
return buffer;
}
And we are passing again…
Running "stream" (5 tests)... "ParagraphReader": 5 passes in 42ms. Completed "stream": 5 passes in 42ms.
There are no limits to the number of stubbed methods within a test, only that two stubs cannot have the same name. The following will cause problems…
static int stream_stub(void *stream) {
return (int)mock(stream);
}
Ensure(Streams, bad_test) {
expect(stream_stub, will_return('a'));
do_stuff(&stream_stub, &stream_stub);
}
You could program the same stub to return values for the two streams, but that would make a very brittle test. Since we’d be making it heavily dependent on the exact internal behaviour that we are trying to test, or test drive, it will break as soon as we change that implementation. The test will also become very much harder to read and understand. And we really don’t want that.
So, it will be necessary to have two stubs to make this test behave, but that’s not a problem…
static int first_stream_stub(void *stream) {
return (int)mock(stream);
}
static int second_stream_stub(void *stream) {
return (int)mock(stream);
}
Ensure(Streams, good_test) {
expect(first_stream_stub, will_return('a'));
expect(second_stream_stub, will_return('a'));
do_stuff(&first_stream_stub, &second_stream_stub);
}
We now have a way of writing fast, clear tests with no external dependencies. The information flow is still one way though, from stub to the code under test. When our code calls complex procedures, we won’t want to pick apart the effects to infer what happened. That’s too much like detective work. And why should we? We just want to know that we dispatched the correct information down the line.
Things get more interesting when we think of the traffic going the other way, from code to stub. This gets us into the same territory as mock objects.
3.3. Setting Expectations on Mock Functions
To swap the traffic flow, we’ll look at an outgoing example instead. Here is the prewritten production code…
void by_paragraph(int (*read)(void *), void *in, void (*write)(void *, char *), void *out) {
while (1) {
char *line = read_paragraph(read, in);
if (line == NULL) {
return;
}
(*write)(out, line);
free(line);
}
}
This is the start of a formatter utility.
Later filters will probably break the paragraphs up into justified text, but right now that is all abstracted behind the void write(void *, char *)
interface.
Our current interests are: does it loop through the paragraphs, and does it crash?
We could test correct paragraph formation by writing a stub that collects the paragraphs into a struct
.
We could then pick apart that struct
and test each piece with assertions.
This approach is extremely clumsy in C.
The language is just not suited to building and tearing down complex edifices, never mind navigating them with assertions.
We would badly clutter our tests.
Instead we’ll test the output as soon as possible, right in the called function…
...
void expect_one_letter_paragraph(void *stream, char *paragraph) {
assert_that(paragraph, is_equal_to_string("a"));
}
Ensure(Formatter, makes_one_letter_paragraph_from_one_character_input) {
by_paragraph(
&one_character_stream,
NULL,
&expect_one_letter_paragraph,
NULL);
}
...
By placing the assertions into the mocked function, we keep the tests minimal. The catch with this method is that we are back to writing individual functions for each test. We have the same problem as we had with hand coded stubs.
Again, Cgreen has a way to automate this. Here is the rewritten test…
static int reader(void *stream) {
return (int)mock(stream);
}
static void writer(void *stream, char *paragraph) {
mock(stream, paragraph);
}
Ensure(Formatter, makes_one_letter_paragraph_from_one_character_input) {
expect(reader, will_return('a'));
always_expect(reader, will_return(EOF));
expect(writer, when(paragraph, is_equal_to_string("a")));
by_paragraph(&reader, NULL, &writer, NULL);
}
Where are the assertions?
Unlike our earlier stub, reader()
can now check its parameters.
In object oriented circles, an object that checks its parameters as well as simulating behaviour is called a mock object.
By analogy reader()
is a mock function, or mock callback.
Using the expect()
macro, we have set up the expectation that writer()
will be called just once.
That call must have the string "a"
for the paragraph
parameter.
If the actual value of that parameter does not match, the mock function will issue a failure straight to the test suite.
This is what saves us writing a lot of assertions.
3.4. Running Tests With Mocked Functions
It’s about time we actually ran our test…
Running "formatter" (1 test)... "Formatter": 1 pass in 42ms. Completed "formatter": 1 pass in 42ms.
Confident that a single character works, we can further specify the behaviour. Firstly an input sequence…
Ensure(Formatter, makes_one_paragraph_if_no_line_endings) {
expect(reader, will_return('a'));
expect(reader, will_return(' '));
expect(reader, will_return('b'));
expect(reader, will_return(' '));
expect(reader, will_return('c'));
always_expect(reader, will_return(EOF));
expect(writer, when(paragraph, is_equal_to_string("a b c")));
by_paragraph(&reader, NULL, &writer, NULL);
}
A more intelligent programmer than me would place all these calls in a loop.
Running "formatter" (2 tests)... "Formatter": 2 passes in 42ms. Completed "formatter": 2 passes in 42ms.
Next, checking an output sequence…
Ensure(Formatter, generates_separate_paragraphs_for_line_endings) {
expect(reader, will_return('a'));
expect(reader, will_return('\n'));
expect(reader, will_return('b'));
expect(reader, will_return('\n'));
expect(reader, will_return('c'));
always_expect(reader, will_return(EOF));
expect(writer, when(paragraph, is_equal_to_string("a")));
expect(writer, when(paragraph, is_equal_to_string("b")));
expect(writer, when(paragraph, is_equal_to_string("c")));
by_paragraph(&reader, NULL, &writer, NULL);
}
Again we can see that the expect()
calls follow a record and playback model.
Each one tests a successive call.
This sequence confirms that we get "a"
, "b"
and "c"
in order.
Running "formatter" (3 tests)... "Formatter": 5 passes in 42ms. Completed "formatter": 5 passes in 42ms.
So, why the 5 passes?
Each expect()
with a constraint is actually an assert.
It asserts that the call specified is actually made with the parameters given and in the specified order.
In this case all the expected calls were made.
Then we’ll make sure the correct stream pointers are passed to the correct functions. This is a more realistic parameter check…
Ensure(Formatter, pairs_the_functions_with_the_resources) {
expect(reader, when(stream, is_equal_to(1)), will_return('a'));
always_expect(reader, when(stream, is_equal_to(1)), will_return(EOF));
expect(writer, when(stream, is_equal_to(2)));
by_paragraph(&reader, (void *)1, &writer, (void *)2);
}
Running "formatter" (4 tests)... "Formatter": 9 passes in 42ms. Completed "formatter": 9 passes in 42ms.
And finally we’ll specify that the writer is not called if there is no paragraph.
Ensure(Formatter, ignores_empty_paragraphs) {
expect(reader, will_return('\n'));
always_expect(reader, will_return(EOF));
never_expect(writer);
by_paragraph(&reader, NULL, &writer, NULL);
}
This last test is our undoing…
Running "formatter" (5 tests)... formatter_tests.c:59: Failure: Formatter -> ignores_empty_paragraphs Mocked function [writer] has an expectation that it will never be called, but it was "Formatter": 9 passes, 1 failure in 42ms. Completed "formatter": 9 passes, 1 failure in 42ms.
Obviously blank lines are still being dispatched to the writer()
.
Once this is pointed out, the fix is obvious…
void by_paragraph(int (*read)(void *), void *in, void (*write)(void *, char *), void *out) {
while (1) {
char *line = read_paragraph(read, in);
if ((line == NULL) || (strlen(line) == 0)) {
return;
}
(*write)(out, line);
free(line);
}
}
Tests with never_expect()
can be very effective at uncovering subtle
bugs.
Running "formatter" (5 tests)... "Formatter": 10 passes in 42ms. Completed "formatter": 10 passes in 42ms.
All done.
3.5. Mocks Are…
Using mocks is a very handy way to isolate a unit by catching and controlling calls to external units. Depending on your style of coding, two schools of thought have emerged. And of course Cgreen supports both!
3.5.1. Strict or Loose Mocks
The two schools think a bit differently about what mock expectations mean. Does it mean that all external calls must be declared and expected? What happens if a call is made to a mock that wasn't expected? And vice versa, if an expected call was not made?
Actually, it is not just a matter of picking one school of thought; you might want to switch from one to the other depending on the test. So Cgreen allows for that too.
By default Cgreen mocks are 'strict', which means that a call to a non-expected mock will be considered a failure. So will an expected call that was not fulfilled. You might consider this a way to define a unit through all its exact behaviours towards its neighbours.
On the other hand, 'loose' mocks are looser. They allow both unfulfilled expectations and try to handle unexpected calls in a reasonable way.
You can use both within the same suite of tests using the calls cgreen_mocks_are(strict_mocks);
and cgreen_mocks_are(loose_mocks);
respectively.
Typically you would place that call at the beginning of the test, or in a setup or BeforeEach()
if it applies to all tests in a suite.
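For example, here is a minimal sketch of switching strictness per test; the collaborator() mock, the Unit context and the header names are illustrative assumptions, not taken from the text above:
#include <cgreen/cgreen.h>
#include <cgreen/mocks.h>

Describe(Unit);
BeforeEach(Unit) {}
AfterEach(Unit) {}

static void collaborator(int value) {
    mock(value);
}

Ensure(Unit, tolerates_undeclared_calls_when_mocks_are_loose) {
    cgreen_mocks_are(loose_mocks);   /* unexpected and unfulfilled calls are tolerated */
    collaborator(1);                 /* no expect() declared, still no failure */
}

Ensure(Unit, requires_declared_calls_when_mocks_are_strict) {
    cgreen_mocks_are(strict_mocks);  /* the default: every call must be expected */
    expect(collaborator, when(value, is_equal_to(1)));
    collaborator(1);
}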
3.5.2. Learning Mocks
Working with legacy code and trying to apply TDD, BDD, or even simply adding some unit tests, is not easy. You’re working with unknown code that does unknown things with unknown counterparts.
So the first step would be to isolate the unit. We won’t go into details on how to do that here, but basically you would replace the interface to other units with mocks. This is a somewhat tedious manual labor, but will result in an isolated unit where you can start applying your unit tests.
Once you have your unit isolated in a harness of mocks, you need to figure out which calls it makes to other units, now replaced by mocks, in the specific case you are trying to test.
This might be complicated, so Cgreen can make that a bit simpler. There is a third 'mode' of the Cgreen mocks, the learning mocks.
If you temporarily add the call cgreen_mocks_are(learning_mocks);
at the beginning of your unit test, the mocks will record all calls and present a list of those calls in order, including the actual parameter values, on the standard output.
So let’s look at the following example from the Cgreen unit tests. It’s a bit contorted since the test actually calls the mocked functions directly, but I believe it will serve as an example.
static int integer_out() {
return (int)mock();
}
static char *string_out(int p1) {
return (char *)mock(p1);
}
Ensure(LearningMocks, emit_pastable_code) {
cgreen_mocks_are(learning_mocks);
string_out(1);
string_out(2);
integer_out();
integer_out();
string_out(3);
integer_out();
}
We can see the call to cgreen_mocks_are()
starting the test and
setting the mocks into learning mode.
If we run this, just as we usually run tests, the following will show up in our terminal:
Running "learning_mocks" (1 tests)... LearningMocks -> emit_pastable_code : Learned mocks are expect(string_out, when(p1, is_equal_to(1))); expect(string_out, when(p1, is_equal_to(2))); expect(integer_out); expect(integer_out); expect(string_out, when(p1, is_equal_to(3))); expect(integer_out); Completed "LearningMocks": 0 passes, 0 failures, 0 exceptions. Completed "learning_mocks": 0 passes, 0 failures, 0 exceptions.
If this was for real we could just copy this and paste it in place of
the call to cgreen_mocks_are()
and we have all the expectations
done.
Before you can do this you need to implement the mock functions, of course.
I.e. write functions that replace the real functions and instead call mock(). |
If a test fails with an exception, you won’t get the learned calls unfortunately. They are collected and printed at the end of the test. This might be improved at some future time. |
You can try the cgreen-mocker for this, as described in cgreen-mocker - Automated Mocking.
|
4. More on expect()
and mock()
4.1. Important Things To Remember About expect()
and mock()
Using expect()
and mock()
is a very powerful way to isolate your code under test from its dependencies.
But it is not always easy to follow what happens, and when.
Here are some important things to remember when working with Cgreen mocks.
-
calls to expect() collect constraints and any other required information at the point where they are called
-
this also goes for will_return(), which saves the value of its parameter when it is called
-
the actual evaluation and execution of those constraints occur when mock() is called in the function named in the expect() call(s)
-
calls to a function specified by the expect() calls are evaluated in the same order as the expect()s were executed, but only for that named function
-
the lexical scope of the first parameter in a when() is always inside the mocked function where the mock() call is made
-
the lexical scope of arguments to an is_equal_to…() is where that call is made
In summary, expect() does early collection, including evaluation of the return value expression, and mock() does late evaluation of the collected constraints against the arguments given to mock() .
|
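To make the early-collection point concrete, here is a minimal sketch; the get_value() mock and the Timing context are hypothetical:
static int get_value(void) {
    return (int)mock();
}

Ensure(Timing, will_return_saves_its_value_when_expect_runs) {
    int value = 1;
    expect(get_value, will_return(value)); /* the value 1 is collected and saved here */
    value = 2;                             /* changing the variable afterwards has no effect */
    assert_that(get_value(), is_equal_to(1));
}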
4.2. Refactoring Tests with Mocks - CAUTION!
After a while you are bound to get tests with calls to expect()
.
You might even have common patterns in multiple tests.
So your urge to refactor starts to set in.
And that is good, go with it, we have tests to rely on.
But there are a lot of things going on behind the scenes when you use Cgreen, often with the help of some serious macro-magic, so special care needs to be taken when refactoring tests that have expect()
in them.
4.2.1. Renaming
The first "gotcha" is when you rename a function that you mock. You are likely to have `expect()`s for that function too.
The function name in an expect() is "text", so it will not be caught by a refactoring tool.
You will need to change the name there manually.
|
4.2.2. Local Variables
For example, consider this code
Ensure(Readline, can_read_some_characters) {
char canned_a = 'a';
char canned_b = 'b';
char canned_c = 'c';
expect(mocked_read,
will_set_contents_of_output_parameter(buf, &canned_a, sizeof(char)),
will_return(1));
expect(mocked_read,
will_set_contents_of_output_parameter(buf, &canned_b, sizeof(char)),
will_return(1));
expect(mocked_read,
will_set_contents_of_output_parameter(buf, &canned_c, sizeof(char)),
will_return(1));
...
<call something that calls mocked_read()>
...
It is very tempting to break out the common expect:
static void expect_char(char ch) {
expect(mocked_read,
will_set_contents_of_output_parameter(buf, &ch, sizeof(char)),
will_return(1));
}
Ensure(Readline, can_read_some_characters) {
char canned_a = 'a';
char canned_b = 'b';
char canned_c = 'c';
expect_char(canned_a);
expect_char(canned_b);
expect_char(canned_c);
...
<call something that calls mocked_read()>
...
Much nicer, right?
Unfortunately not. This will most likely lead to a segmentation fault or an illegal memory reference, something that can be really tricky to track down.
The problem is that when mocked_read()
is actually called, as an effect of calling something that calls mocked_read()
, the parameter ch
to the nicely extracted expect_char()
does not exist anymore.
Good thing that you run the tests after each and every little refactoring, right? Because then you know that it was the extraction you just did that was the cause. Then you can come here and read up on what the problem might be and what to do about it.
At first glance the fix might look easy:
static void expect_char(char ch) {
char saved_ch = ch;
expect(mocked_read,
will_set_contents_of_output_parameter(buf, &saved_ch, sizeof(char)),
will_return(1));
}
Ensure(Readline, can_read_some_characters) {
...
Close! But the local variable is also gone at the call to mocked_read()
. Of course.
Ok, so let’s make it static:
static void expect_char(char ch) {
static char saved_ch = ch;
expect(mocked_read,
will_set_contents_of_output_parameter(buf, &saved_ch, sizeof(char)),
will_return(1));
}
Ensure(Readline, can_read_some_characters) {
...
Ok, so then it must exist.
But the problem then becomes the three consecutive calls to expect_char()
.
Ensure(Readline, can_read_some_characters) {
char canned_a = 'a';
char canned_b = 'b';
char canned_c = 'c';
expect_char(canned_a);
expect_char(canned_b);
expect_char(canned_c);
...
<call something that calls mocked_read()>
...
Each of those has a different actual parameter value, which is hard to store in one variable, even if it is static.
The solution is now quite obvious:
static void expect_char(char *ch_p) {
expect(mocked_read,
will_set_contents_of_output_parameter(buf, ch_p, sizeof(char)),
will_return(1));
}
Ensure(Readline, can_read_some_characters) {
char canned_a = 'a';
char canned_b = 'b';
char canned_c = 'c';
expect_char(&canned_a);
expect_char(&canned_b);
expect_char(&canned_c);
...
<call something that calls mocked_read()>
...
By using pointers to the variables in the test, we can ensure that the values are live when the expected call is made. So we don’t have to make the character variables used in the test static, because as local variables those will remain live long enough.
And this is the moral here, you cannot use local variables in an extracted function as data for a mocked function call.
Variables that are to be sent to a mocked function MUST be live at the call to that mocked function. |
4.3. Other Use Cases For Mocks
4.3.1. Out Parameters
In C all function parameters are passed by value, so if a function needs to return a value through a parameter, that has to be done using a pointer. Typically this is a pointer to the area or variable the function should fill.
Cgreen provides will_set_contents_of_output_parameter()
to handle this use case.
For example
void convert_to_uppercase(char *converted_string, const char *original_string) {
mock(converted_string, original_string);
}
Ensure(setting_content_of_out_parameter) {
expect(convert_to_uppercase,
when(original_string, is_equal_to_string("upper case")),
will_set_contents_of_output_parameter(converted_string,
"UPPER CASE", 11));
...
When the mock for convert_to_uppercase()
is called it will write the string "UPPER CASE" in the area pointed to by converted_string
.
4.3.2. Setting fields
Sometimes you need to set a field in a struct sent by reference to a mocked function.
You cannot use the will_set_contents_of_output_parameter()
directly since you can’t, or even don’t want to, know the complete information in the structure.
But with a little bit of boilerplate in your mock function you can still write to a single field.
In the mock function you need to create a local variable that points to the field you want to update. You can then use this pointer variable in the mock call to supplement the real parameters.
This local variable will then be accessible in expect()
calls as if it were a parameter, and you can use it to write data to where it points, which then should be the field in the incoming structure.
struct structure {
int field;
char *string;
};
void update_field(struct structure *struct_to_update) {
int *field = &struct_to_update->field;
mock(struct_to_update, field);
}
Ensure(setting_field_of_parameter) {
int fourty_two = 42;
expect(update_field,
will_set_contents_of_output_parameter(field, &fourty_two, sizeof(int)));
}
...
The local variable field
in the mock function is set to point to the field that we need to update.
It is then exposed by including it in the mock()
call, and will_set_contents_of_output_parameter()
will use it to update whatever it points to with the data provided in the expect()
.
Both the local variable and the data argument in the call to will_set_contents_of_output_parameter() must be pointers.
You cannot use literals as data, except when it is a string literal which as per C convention is converted to a pointer.
|
4.3.3. Side Effects
Sometimes returning simple values is not enough. The function that you want to mock might have some side effect, like setting a global error code or aggregating some data.
Let’s assume that the reader
increments a counter every time it gets called and we need to mimic that behaviour.
There are many ways to do this, but here is one using the side effect feature.
It works by calling a callback function that you provide, allowing you to feed some data to it.
We create the "side effect function" which needs to take a single argument which should be a pointer to the "side effect data". You will have to cast that datapointer to the correct type.
static void update_counter(void * counter) {
*(int*)counter = *(int*)counter + 1;
}
Ensure(using_side_effect) {
int number_of_times_reader_was_called = 0;
expect(reader, will_return('\n'));
always_expect(reader,
will_return(EOF),
with_side_effect(&update_counter,
&number_of_times_reader_was_called));
never_expect(writer);
by_paragraph(&reader, NULL, &writer, NULL);
assert_that(number_of_times_reader_was_called, is_equal_to(1));
}
4.4. The Mock Macros
When specifying the behaviour of mocks there are three parts. First, how often the specified behaviour or expectation will be executed:
Macro | Description |
expect(function, ...) | Expected once, in the specified order, for the same function |
always_expect(function, ...) | Expect this behaviour from here onwards |
never_expect(function) | From this point this mocked function must never be called |
You can specify constraints and behaviours for each expectation (except for never_expect()
naturally).
A constraint places restrictions on the parameters (and will tell you if the expected restriction was not met), and a behaviour specifies what the mock should do if the parameter constraints are met.
A parameter constraint is defined using the when(parameter, constraint)
macro.
It takes two parameters:
Parameter | Description |
parameter | The name of the parameter to the mock function |
constraint | A constraint placed on that parameter |
There is a multitude of constraints available (actually, exactly the same as for the assertions we saw earlier):
Constraint | Type |
is_equal_to(value) | Integers |
is_not_equal_to(value) | Integers |
is_greater_than(value) | Integers |
is_less_than(value) | Integers |
is_equal_to_contents_of(pointer, size) | Bytes/Structures |
is_not_equal_to_contents_of(pointer, size) | Bytes/Structures |
is_equal_to_string(string) | String |
is_not_equal_to_string(string) | String |
contains_string(string) | String |
does_not_contain_string(string) | String |
begins_with_string(string) | String |
is_equal_to_double(value) | Double |
is_not_equal_to_double(value) | Double |
is_less_than_double(value) | Double |
is_greater_than_double(value) | Double |
For the double valued constraints you can set the number of significant digits to consider a match with a call to significant_figures_for_assert_double_are(int figures)
.
The section on how to work with doubles has a more detailed discussion of the algorithm used for comparing floating point numbers.
Then there are a couple of ways to return results from the mocks.
They all provide ways to return various types of values through mock()
.
In your mocked function you can then simply return that value, or manipulate it as necessary.
Macro | Description |
will_return(value) | return value |
will_return_double(value) | return value as a boxed double |
will_return_by_value(struct, size) | return a pointer to an allocated copy of the struct |
will_set_contents_of_output_parameter(parameter, data, size) | write data to the area the parameter points to |
will_capture_parameter(parameter, variable) | capture the value of the parameter and store it in the named local variable |
with_side_effect(function, data) | call the side effect function with data |
will_return_double() : The "boxed double" returned by mock() have to be "unboxed" by the caller see Double Mocks for details.
|
will_return_by_value : The memory allocated for the copy of the struct returned by mock() needs to be deallocated by the caller or it will be lost. You can do this with the code in the Box example below.
|
will_set_contents_of_output_parameter : The data to set must be correct at the time of the call to the mock function, and not be overwritten or released between the call to the expect() and the mock function. See Refactoring Tests with Mocks - CAUTION! for details.
|
will_set_contents_of_output_parameter : The previous name of this macro was will_set_contents_of_parameter and it is still available. The new name is preferred due to readability.
|
will_capture_parameter : The local variable to capture the value in must be live at the time of the call to the mock function, so using a local variable in a function called by your test will not work. See Refactoring Tests with Mocks - CAUTION! for details.
|
4.5. Combining Expectations
You can combine the expectations for a mock()
in various ways:
expect(mocked_file_writer,
when(data, is_equal_to(42)),
will_return(EOF));
expect(mocked_file_reader,
when(file, is_equal_to_contents_of(&FD, sizeof(FD))),
when(input, is_equal_to_string("Hello world!"),
with_side_effect(&update_counter, &counter),
will_set_contents_of_output_parameter(status, FD_CLOSED, sizeof(bool))));
If multiple when()
are specified they all need to be fulfilled.
You can of course only have one for each of the parameters of your mock function.
You can also have multiple will_set_contents_of_output_parameter()
in an expectation, one for each reference parameter, but naturally only one will_return()
.
To ensure that a specific call happens n
times the macro times(number_times_called)
can be passed as a constraint to a specific call:
expect(mocked_file_writer,
when(data, is_equal_to(42)),
times(1));
This feature only works for expect()
.
4.6. Order of constraints
When you have multiple constraints in an expect
the order in which they are executed is not always exactly the order in which they were given.
First all constraints are inspected for validity, for example whether the parameter name given can be found at all, but primarily to see if the parameters, if any, match the actual parameters in the call.
Then all read-only constraints are processed, followed by constraints that set contents.
Finally all side effect constraints are executed.
4.7. Order of multiple `expect`s
The expectations still need to respect the order of calling, so if we call the function
mocked_file_writer
with the following pattern:
mocked_file_writer(42);
mocked_file_writer(42);
mocked_file_writer(43);
mocked_file_writer(42);
The expectation code should then look like the following:
expect(mocked_file_writer,
when(data, is_equal_to(42)),
times(2));
expect(mocked_file_writer,
when(data, is_equal_to(43)),
times(1));
expect(mocked_file_writer,
when(data, is_equal_to(42)),
times(1));
4.8. Handling out-parameters
TBD. Hint: this involves using will_set_contents_of_output_parameter()
.
4.9. Returning struct
If the function we are mocking returns structs by value, then our mock function needs to do that too.
To do this we must use a specific return macro, will_return_by_value().
Below is some example code using an imaginary struct typedef’ed as Struct
and a corresponding function, retrieve_struct()
, which we want to mock.
The underlying mechanism of this is that in the test we create the struct that we want to return.
The macro will_return_by_value()
then copies that to a dynamically allocated area, saving it so that a pointer to that area can be returned by mock()
.
Struct returned_struct = {...};
expect(retrieve_struct,
will_return_by_value(returned_struct, sizeof(Struct)));
/* `returned_struct` has been copied to an allocated area */
In some future version the size argument will be removed from will_return_by_value() since the macro can easily calculate that for you.
|
The mock function will then look like this:
Struct retrieve_struct() {
return *(Struct *)mock(); /* De-reference the returned pointer to the allocated area */
}
This would cause a memory leak since the area allocated by the will_return_by_value()
macro is not deallocated.
In many scenarios this might not be a big problem, and you could make do with that simple version.
But we might want to be sure, e.g. if we use valgrind
when unit testing to catch leaks early.
Then we don’t want our unit tests to pollute the actual leakage analysis.
In that case the mock function needs to free up the area that was automatically allocated by will_return_by_value()
.
The pointer returned by mock()
will point to that area.
So, here’s a better, although slightly more complicated, version:
Struct retrieve_struct() {
Struct *struct_p = (Struct *)mock(); /* Get the pointer */
Struct the_struct = *struct_p; /* Dereference to get a struct */
free(struct_p); /* Deallocate the returned area */
return the_struct; /* Finally we can return the struct by value */
}
4.10. Mocking struct
Parameters
Modern C standards allow function parameters to be structs by value.
Since our mock()
can only handle scalar values this presents a bit of a conundrum.
typedef struct {
int i;
char *string;
} Struct;
int function_taking_struct(Struct s) {
return (int)mock(?);
And we also cannot compare a non-scalar value with any of the is_equal_to…()
constraint macros in the expect()
call.
Also remember that the C language does not allow comparing non-scalar values using ==
.
There are a couple of ways to handle this and which one to select depends on what you want to do.
4.10.1. Checking Single struct
Fields
In an expect(when())
we probably want to check one, or more, of the fields in the struct.
Since mock()
actually can "mock" anything we can use a normal field expression to access the value we want to check:
int function_checking_a_field(Struct s) {
return (int)mock(s.i);
The trick here is that mock()
just saves the "name", as a string, given as the argument, in this case "s.i", and pair it with the value of that expression.
There is no requirement that the "name" is actually a parameter, it can be anything.
The only thing to remember is that the exact same string needs to be used when invoking when()
:
expect(function_checking_a_field, when(s.i, is_equal_to(12)),
will_return(12));
You can do this with as many fields as you need.
And there is no (reasonable) limit to how many arguments mock()
can take, so you can start with the ones that you require and add more as you need them.
int function_checking_multiple_fields(Struct s) {
return (int)mock(s.i, s.string);
}
Ensure(StructParameters, can_mock_multiple_fields_in_parameter) {
Struct struct_to_send = { .i = 13, .string = "hello world!" };
expect(function_checking_multiple_fields,
when(s.i, is_equal_to(13)),
when(s.string, begins_with_string("hello")),
will_return(13));
assert_that(function_checking_multiple_fields(struct_to_send), is_equal_to(13));
}
In both examples we use an explicit value in will_return() instead of the value of the field, "s.i".
That is because it is not possible to use the value of a mocked parameter in will_return() .
Remember, expect() does early collection.
At the time of executing it, there is no parameter available, so the value must come from the test's run-time environment.
Also, since we already explicitly know the value and use it in the when() clause, there is no uncertainty about what it should be.
The only concern might be duplication of an explicit value, but that is not a big problem in a unit test (clarity over DRY), and you can easily fix that with a suitably named local variable.
|
4.11. Capturing Parameters
TBD. Hint: this involves using will_capture_parameter()
.
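Until that section is written, here is a minimal sketch, assuming will_capture_parameter(parameter, variable) stores the parameter's value in the named local variable as described in the table above; the logger() mock is hypothetical:
static void logger(const char *message) {
    mock(message);
}

Ensure(Capturing, can_capture_the_logged_message) {
    const char *captured_message = NULL;
    expect(logger, will_capture_parameter(message, captured_message));
    logger("hello");    /* the call to the mock stores "hello" in captured_message */
    assert_that(captured_message, is_equal_to_string("hello"));
}
Note that, as discussed in Refactoring Tests with Mocks - CAUTION!, the variable receiving the value must be live when the mocked function is called, which it is here.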
5. Special Cases
5.1. Working with doubles
We are not talking about
test doubles
here, but about values of C/C++ double
type (a.k.a. double float.)
Cgreen is designed to make it easy and natural to write assertions and expectations.
Many functions can be used for multiple data types, e.g. is_equal_to()
applies to all integer type values, actually including pointers.
But the C language has its quirks.
One of them is the fact that it is impossible to inspect the datatypes of values during run-time.
This has e.g. forced the introduction of is_equal_to_string()
to enable string comparisons.
5.1.1. Assertions and Constraints
When it comes to double typed values this has spilled over even further. For double typed values we have
Constraint |
is_equal_to_double(value) |
is_not_equal_to_double(value) |
is_less_than_double(value) |
is_greater_than_double(value) |
But there is also the special assert that you must use when asserting doubles
Assertion |
assert_that_double(actual, constraint) |
and the utility function
Utility |
significant_figures_for_assert_double_are(int figures) |
And of course they are designed to go together.
So, if you want to assert an expression yielding a double
typed value, you need to combine them:
Ensure(Doubles, can_assert_double_values) {
significant_figures_for_assert_double_are(3);
assert_that_double(3.14, is_equal_to_double(5.0));
}
You have to use assert_that_double() and is_equal_to_double()
together.
|
and you would get
double_tests.c:13: Failure: can_assert_double_values Expected [3.14] to [equal double] [5.0] within [3] significant figures actual value: [3.140000] expected value: [5.000000]
5.1.2. Double Mocks
The general mechanism Cgreen uses to transport values to and from mock functions is based on the simple idea that most types fit into a "large" integer and can be type converted to and from whatever type you need.
Since a double float
will not fit into the same memory space as an integer Cgreen handles that by encapsulating ("boxing") the double
into an area which is represented by the pointer to it.
And that pointer can fit into the integer type value (intptr_t
) that Cgreen uses to transport values into and out of mock()
.
To get the value back you "unbox" it.
There are two possible uses of double
that you need to be aware of
-
When a parameter to the mocked function is of
double
type and needs to be matched in a constraint in an expect()
call. -
When the mock function itself should return a
double
type value.
In the test you should use the special double
type constraints and the will_return_double()
convenience function.
In the mock function you will have to take care to box and unbox as required.
Boxing and unboxing in mock functions | Description |
box_double(value) | Wrap the value in an allocated memory area and return a pointer to it |
unbox_double(pointer) | Unwrap the value by freeing the area and returning the value |
Here’s an example of that:
static double double_out(int i, double d) {
return unbox_double(mock(i, box_double(d))); (1)
}
Ensure(Doubles, can_be_arguments_to_mock_and_returned) {
expect(double_out,
when(i, is_equal_to(15)),
when(d, is_equal_to_double(31.32)), (2)
will_return_double(3.1415926)); (3)
assert_that_double(double_out(15, 31.32), is_equal_to_double(3.1415926));
}
1 | We can see that the parameter d to the mock function, since it is a
double , has to be passed as box_double(d) in the call to
mock() . |
2 | The corresponding expect() uses a double constraint. |
3 | The mock function in this small example also returns a double .
The expect() uses will_return_double() so the mock function needs to unbox the return value from mock() to be able to return the double type value. |
Strange errors may occur if you box and/or unbox or combine double constraints incorrectly.
|
5.1.3. Details of Floating Point Comparison Algorithm
The number of significant digits set with significant_figures_for_assert_double_are()
specifies a relative tolerance.
Cgreen considers two double precision numbers x and y equal if their difference, normalized by the larger of the two, is smaller than 10^(1 - significant_figures).
Mathematically, we check that |x - y| < max(|x|, |y|) * 10^(1 - significant_figures).
Well documented subtleties arise when comparing floating point numbers close to zero using this algorithm. The article Comparing Floating Point Numbers, 2012 Edition by Bruce Dawson has an excellent discussion of the issue. The essence of the problem can be appreciated if we consider the special case where y == 0. In that case, our condition reduces to |x| < |x| * 10^(1 - significant_figures). After cancelling |x| this simplifies to 1 < 10^(1 - significant_figures). But this is only true if significant_figures < 1. In words this can be summarized by saying that, in a relative sense, all numbers are very different from zero. To circumvent this difficulty we recommend using a constraint of the following form when comparing numbers close to zero:
assert_that(fabs(x - y) < abs_tolerance);
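As a minimal sketch, where compute_residual() is a hypothetical function expected to produce a value very close to zero:
#include <math.h>

Ensure(Doubles, compares_values_near_zero_with_an_absolute_tolerance) {
    double residual = compute_residual();   /* hypothetical, expected to be close to 0.0 */
    double abs_tolerance = 1e-9;
    assert_that(fabs(residual - 0.0) < abs_tolerance);
}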
5.2. Using Cgreen with C++
The examples in this guide use the C language to show how to use Cgreen. You can also use Cgreen with C++.
The following needs expansion and more details as the support for C++ is extended. |
All you have to do is
-
Use the cgreen namespace by adding using namespace cgreen; at the beginning of the file with your tests
There is also one extra feature when you use C++, the assert_throws
function.
If you use the autodiscovering runner, as described in Using the Runner, and thus link your tests into a shared library, don’t forget to link it with the same C++ library that was used to create the cgreen-runner .
|
6. Context, System Under Test & Suites
As mentioned earlier, Cgreen promotes the behaviour driven style of test driving code. The thinking behind BDD is that we don’t really want to test anything; if we could just specify the behaviour of our code and ensure that it actually behaves this way, we would be fine.
This might seem like an age old dream, but when you think about it, there is actually very little difference in the mechanics from vanilla TDD. First we write how we want it, then implement it. But the small change in wording, from 'test' to 'behaviour', from 'test that' to 'ensure that', makes a huge difference in thinking, and also very often in the quality of the resulting code.
6.1. The SUT - System Under Test
Since BDD talks about behaviour, there has to be something that we can talk about as having that wanted behaviour. This is usually called the SUT, the System Under Test. The "system" might be whatever we are testing, such as a C module ("MUT"), class ("CUT"), object ("OUT"), function ("FUT") or method ("MUT"). We will stick with SUT in this document. To use Cgreen in BDD-ish mode you must define a name for it.
#include <cgreen/cgreen.h>
Describe(SUT);
Cgreen supports C++ and there you naturally have the objects and also the Class Under Test.
But in plain C you will have to think about what is actually the "class" under test.
E.g. in sort_test.c
you might see
#include <cgreen/cgreen.h>
Describe(Sorter);
Ensure(Sorter, can_sort_an_empty_list) {
assert_that(sorter(NULL), is_null);
}
In this example you can clearly see what difference the BDD-ish style makes when it comes to naming. Convention, and natural language, dictates that typical names for what TDD would call tests, now starts with 'can' or 'finds' or other verbs, which makes the specification so much easier to read.
Yes, I wrote 'specification'. Because that is how BDD views what TDD basically calls a test suite. The suite specifies the behaviour of a 'class'. (That’s why some BDD frameworks draw on 'spec', like RSpec.)
6.2. Contexts and Before and After
The complete specification of the behaviour of a SUT might become long and require various forms of setup.
When using TDD style you would probably break this up into multiple suites having their own setup()
and teardown()
.
With BDD-ish style we could consider a suite as a behaviour specification for our SUT 'in a particular context'. E.g.
#include <cgreen/cgreen.h>
Describe(shopping_basket_for_returning_customer);
Customer *customer;
BeforeEach(shopping_basket_for_returning_customer){
customer = create_test_customer();
login(customer);
}
AfterEach(shopping_basket_for_returning_customer) {
logout(customer);
destroy_customer(customer);
}
Ensure(shopping_basket_for_returning_customer, allows_use_of_discounts) {
...
}
The 'context' would then be shopping_basket_for_returning_customer
, with the SUT being the shopping basket 'class'.
So 'context', 'system under test' and 'suite' are mostly interchangeable concepts in Cgreen lingo.
It’s a named group of 'tests' that share the same BeforeEach
and AfterEach
and lives in the same source file.
7. Automatic Test Discovery
7.1. Forgot to Add Your Test?
When we write a new test we focus on the details about the test we are trying to write. And writing tests is no trivial matter so this might well take a lot of brain power.
So it comes as no big surprise that sometimes you write your test and then forget to add it to the suite. When we run it, it appears that it passed on the first try! Although this should really make you suspicious, sometimes you get so happy that you just continue churning out more tests and more code. It’s not until some (possibly looong) time later that you realize, after much headache and debugging, that the test did not actually pass. It was never even run!
There are practices to minimize the risk of this happening, such as always running the test as soon as you can set up the test. This way you will see it fail before trying to get it to pass.
But it is still a practice, something we, as humans, might fail to do at some point. Usually this happens when we are most stressed and in need of certainty.
7.2. The Solution - the 'cgreen-runner'
Cgreen gives you a tool to avoid not only the risk of this happening, but also the extra work and extra code.
It is called the cgreen-runner
.
The cgreen-runner
should come with your Cgreen installation if your platform supports the technique that is required, which is 'programmatic access to dynamic loading of libraries'.
This means that a program can load an external library of code into memory and inspect it.
Kind of self-inspection, or reflection.
So all you have to do is to build a dynamically loadable library of all tests (and of course your objects under test and other necessary code).
Then you can run the cgreen-runner
and point it to the library.
The runner will then load the library, enumerate all tests in it, and run every test.
It’s automatic, and there is nothing to forget.
7.3. Using the Runner
Assuming your tests are in first_test.c
the typical command to build your library using gcc
would be
$ gcc -shared -o first_test.so -fPIC first_test.c -lcgreen
The -fPIC
means to generate 'position independent code' which is required if you want to load the library dynamically.
Stating this explicitly is required on many platforms.
How to build a dynamically loadable shared library might vary a lot depending on your platform. Can’t really help you there, sorry!
As soon as we have linked it we can run the tests using the cgreen-runner
by just giving it the shared, dynamically loadable, object library as an argument:
$ cgreen-runner first_test.so Running "first_tests" (2 tests)... first_tests.c:12: Failure: Cgreen -> fails_this_test Expected [0 == 1] to [be true] "Cgreen": 1 pass, 1 failure in 42ms. Completed "first_tests": 1 pass, 1 failure in 42ms.
More or less exactly the same output as when we ran our first test in the beginning of this quickstart tutorial. We can see that the top level of the tests will be named as the library it was discovered in, and the second level is the context for our System Under Test, in this case 'Cgreen'.
We also see that the context is mentioned in the failure message, giving a fairly obvious Cgreen → fails_this_test
.
Now we can actually delete the main function in our source code. We don’t need all this, since the runner will discover all tests automatically.
int main(int argc, char **argv) {
TestSuite *suite = create_test_suite();
add_test_with_context(suite, Cgreen, passes_this_test);
add_test_with_context(suite, Cgreen, fails_this_test);
return run_test_suite(suite, create_text_reporter());
}
It always feels good to delete code, right?
We can also select which test to run:
$ cgreen-runner first_test.so Cgreen:this_test_should_fail Running "first_tests" (1 test)... first_tests.c:12: Failure: Cgreen -> fails_this_test Expected [0 == 1] to [be true] "Cgreen": 1 failure in 42ms. Completed "first_tests": 1 failure in 42ms.
We recommend using the BDD notation, and with it you indicate which context the test you want to run is in.
In this example it is Cgreen,
so the test should be referred to as Cgreen:this_test_should_fail.
If you don’t use the BDD notation there is actually a context anyway, it is called default
.
7.4. Cgreen Runner Options
Once you get the build set up right for the cgreen-runner everything is fairly straight-forward. But you have a few options:
- --xml <prefix>
-
Instead of messages on stdout with the TextReporter, write results into one XML-file per suite or context, compatible with Hudson/Jenkins CI. The filename(s) will be
<prefix>-<suite>.xml
- --suite <name>
-
Name the top level suite
- --no-run
-
Don’t run the tests
- --verbose
-
Show progress information and list discovered tests
- --colours
-
Use colours (or colors) to emphasise results (requires ANSI-capable terminal)
- --quiet
-
Be more quiet
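For example, combining a few of these options (assuming the test library from earlier, first_test.so):
$ cgreen-runner --suite AllTests --verbose --xml results first_test.so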
The verbose
option is particularly handy since it will give you the actual names of all tests discovered.
So if you have long test names you can avoid mistyping them by copying and pasting from the output of cgreen-runner --verbose
.
It will also give the mangled name of the test which should make it easier to find in the debugger.
Here’s an example:
Discovered Cgreen:fails_this_test (CgreenSpec__Cgreen__fails_this_test__) Discovered Cgreen:passes_this_test (CgreenSpec__Cgreen__passes_this_test__) Discovered 2 test(s) Opening [first_tests.so] to only run one test: 'Cgreen:fails_this_test' ... Running "first_tests" (1 test)... first_tests.c:12: Failure: Cgreen -> fails_this_test Expected [0 == 1] to [be true] "Cgreen": 1 failure in 42ms. Completed "first_tests": 1 failure in 42ms.
7.5. Selecting Tests To Run
You can name a single test to be run by giving it as the last argument on the command line.
The name should be in the format <SUT>:<test>
.
If not obvious you can get that name by using the --verbose
command option which will show you all tests discovered and both their C/C++ and Cgreen names.
Copying the Cgreen name from that output is an easy way to run only that particular test.
When a single test is named it is run using run_single_test()
.
As described in Five Minutes Doing TDD with Cgreen this means that it is not protected by fork()
-ing it to run in its own process.
The cgreen-runner
supports selecting tests with limited pattern matching.
Using an asterisk as a simple 'match many' symbol you can say things like
$ cgreen-runner <library> Cgreen:* $ cgreen-runner <library> C*:*this*
7.6. Multiple Test Libraries
You can run tests in multiple libraries in one go by adding them to the cgreen-runner
command:
$ cgreen-runner first_set.so second_set.so ...
7.7. Setup, Teardown and Custom Reporters
The cgreen-runner
will only run setup and teardown functions if you use the BDD-ish style with BeforeEach()
and AfterEach()
as described above.
The runner does not pick up setup()
and teardown()
added to suites, because it actually doesn’t run suites.
It discovers all tests and runs them one by one.
The macros required by the BDD-ish style ensures that the corresponding BeforeEach()
and AfterEach()
are run before and after each test.
The cgreen-runner will discover your tests in a shared library even if you don’t use the BDD-ish style.
But it will not be able to find and run the setup() and/or teardown() attached to your suite(s).
This will probably cause your tests to fail or crash.
|
In case you have non-BDD style tests without any setup()
and/or teardown()
you can still use the runner.
The default suite/context where the tests live in this case is called default
.
But why don’t you convert your tests to BDD notation?
This removes the risk of frustrating trouble-shooting when you added setup()
and teardown()
and can’t understand why they are not run…
So, the runner encourages you to use the BDD notation. But since we recommend that you do anyway, that’s no extra problem if you are starting out from scratch. But see Changing Style for some easy tips on how to get you there if you already have non-BDD tests.
You can choose between the TextReporter, which we have been seeing so far, and the built-in JUnit/Ant compatible XML-reporter using the --xml
option.
But it is not currently possible to use custom reporters as outlined in Changing Cgreen Reporting with the runner.
If you require another custom reporter you need to resort to the standard, programmatic, way of invoking your tests. For now…
7.8. Skipping Tests
Sometimes you find that you need to temporarily remove a test, perhaps to do a refactoring when you have a failing test. Ignoring that test will allow you to do the refactoring while still in the green.
An old practice is then to comment it out. That is slightly cumbersome. It is also a hazardous habit, as there is no indication of a missing test if you forget to uncomment it when you are done.
Cgreen offers a much better solution.
You can just add an 'x' in front of the Ensure
for the test and that test will be skipped.
...
xEnsure(Reader, ...) {
...
}
...
With this method, it is a one character change to temporarily ignore, and un-ignore, a test. It is also easily found using text searches through a complete source tree. Cgreen will also tally the skipped tests, so it is clearly visible that you have some skipped tests when you run them.
You can also programmatically decide to skip a test depending on some run-time information.
You do that simply by checking for the condition and calling skip_test() which will tally the test as skipped.
|
skip_test() does not exit your test function so you need to take care that continuing after the call does not trigger any unwanted behaviour.
|
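A minimal sketch of such a run-time skip; network_is_available() is a hypothetical check, not part of Cgreen:
Ensure(Downloader, can_fetch_a_file_over_the_network) {
    if (!network_is_available()) {
        skip_test();   /* tallied as skipped */
        return;        /* return explicitly, since skip_test() does not exit the function */
    }
    /* ... the actual behaviour specification goes here ... */
}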
8. Changing Style
If you already have some TDD style Cgreen test suites, it is quite easy to change them over to BDD-ish style. Here are the steps required
-
Add
Describe(SUT);
-
Turn your current setup function into a
BeforeEach()
definition by changing its signature to match the macro, or simply call the existing setup function from the BeforeEach(). If you don’t have any setup function you still need to define an emptyBeforeEach()
. -
Ditto for
AfterEach()
. -
Add the SUT to each
Ensure()
by inserting it as a first parameter. -
Change the call to add the tests to
add_test_with_context()
by adding the name of the SUT as the second parameter. -
Optionally remove the calls to
set_setup()
andset_teardown()
.
Done.
If you want to continue to run the tests using a hand-coded runner,
you can do that by keeping the setup and teardown functions and their
corresponding set_
-calls.
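As a sketch, a converted suite (reusing the Sorter example from earlier, with an optional hand-coded runner kept) might end up looking like this:
#include <cgreen/cgreen.h>

Describe(Sorter);

BeforeEach(Sorter) { /* previously the setup() function, or a call to it */ }
AfterEach(Sorter) { /* previously the teardown() function, or a call to it */ }

Ensure(Sorter, can_sort_an_empty_list) {
    assert_that(sorter(NULL), is_null);
}

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Sorter, can_sort_an_empty_list);
    return run_test_suite(suite, create_text_reporter());
}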
It’s nice that this is a simple process, because you can change over from TDD style to BDD-ish style in small steps. You can convert one source file at a time, by just following the recipe above. Everything will still work as before but your tests and code will likely improve.
And once you have changed style you can fully benefit from the automatic discovery of tests as described in Automatic Test Discovery.
9. Changing Cgreen Reporting
9.1. Replacing the Reporter
In every test suite so far, we have run the tests with this line…
return run_test_suite(our_tests(), create_text_reporter());
We can change the reporting mechanism just by changing this call to create another reporter.
9.2. Built-in Reporters
Cgreen has the following built-in reporters that you can choose from when your code runs the test suite.
Reporter | Purpose | Signature | Note |
Text | Human readable, with clear messages | create_text_reporter() | |
XML | ANT/Jenkins compatible | create_xml_reporter() | |
CUTE | CUTE Eclipse-plugin (http://cute-test.org) compatible output | create_cute_reporter() | |
CDash | CMake (http://cmake.org) dashboard | create_cdash_reporter() | |
If you write a runner function like in most examples above, you can just substitute which reporter to create.
If you use the cgreen-runner
(Automatic Test Discovery) to dynamically find all your tests you can force it to use the XML-reporter with the -x <prefix>
option.
Currently cgreen-runner only supports the built-in text and XML reporters.
|
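For example, a hand-coded runner could create the XML reporter instead of the text reporter; this sketch assumes the constructor is named create_xml_reporter() and takes the filename prefix as its argument:
int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_test_with_context(suite, Cgreen, passes_this_test);
    return run_test_suite(suite, create_xml_reporter("results"));  /* writes results-<suite>.xml */
}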
9.3. Rolling Our Own
Although Cgreen has a number of options, there are times when you’d like a different output from the reporter; the CUTE and CDash reporters are examples that grew out of such a need.
Perhaps your Continuous Integration server want the result in a different format, or you just don’t like the text reporter…
Writing your own reporter is supported. And we’ll go through how that can be done using an XML-reporter as an example.
Cgreen already has an XML-reporter compatible with ANT/Jenkins, see Built-in Reporters. |
Here is the code for create_text_reporter()
…
TestReporter *create_text_reporter(void) {
TestReporter *reporter = create_reporter();
if (reporter == NULL) {
return NULL;
}
reporter->start_suite = &text_reporter_start_suite;
reporter->start_test = &text_reporter_start_test;
reporter->show_fail = &show_fail;
reporter->show_skip = &show_skip;
reporter->show_incomplete = &show_incomplete;
reporter->finish_test = &text_reporter_finish_test;
reporter->finish_suite = &text_reporter_finish;
return reporter;
}
The TestReporter
structure contains function pointers that control the reporting.
When created by the create_reporter()
constructor, these pointers are set up with functions that display nothing.
The text reporter code replaces these with something more dramatic, and then returns a pointer to this new object.
Thus the create_text_reporter()
function effectively extends the object from create_reporter()
.
The text reporter only outputs content at the start of the first test, at the end of the test run to display the results, when a failure occurs, and when a test fails to complete.
A quick look at the text_reporter.c
file in Cgreen reveals that the overrides just output a message and chain to the versions in reporter.h
.
To change the reporting mechanism ourselves, we just have to know a little about the methods in the TestReporter
structure.
9.4. The TestReporter Structure
The Cgreen TestReporter
is a pseudo class that looks
something like…
typedef struct _TestReporter TestReporter;
struct _TestReporter {
void (*destroy)(TestReporter *reporter);
void (*start_suite)(TestReporter *reporter, const char *name, const int count);
void (*start_test)(TestReporter *reporter, const char *name);
void (*show_pass)(TestReporter *reporter, const char *file, int line,
const char *message, va_list arguments);
void (*show_skip)(TestReporter *reporter, const char *file, int line);
void (*show_fail)(TestReporter *reporter, const char *file, int line,
const char *message, va_list arguments);
void (*show_incomplete)(TestReporter *reporter, const char *file, int line,
const char *message, va_list arguments);
void (*assert_true)(TestReporter *reporter, const char *file, int line, int result,
const char * message, ...);
void (*finish_test)(TestReporter *reporter, const char *file, int line);
void (*finish_suite)(TestReporter *reporter, const char *file, int line);
int passes;
int failures;
int exceptions;
void *breadcrumb;
int ipc;
void *memo;
void *options;
};
The first block contains the methods that can be overridden:
void (*destroy)(TestReporter *reporter)
-
This is the destructor for the default structure. If this is overridden, then the overriding function must call destroy_reporter(TestReporter *reporter) to finish the clean up.
void (*start_suite)(TestReporter *reporter, const char *name, const int count)
-
This is the first of the callbacks. At the start of each test suite Cgreen will call this method on the reporter with the name of the suite being entered and the number of tests in that suite. The default version keeps track of the stack of tests in the breadcrumb pointer of TestReporter. If you make use of the breadcrumb functions, as the defaults do, then you will need to call reporter_start_suite() to keep the book-keeping in sync.
void (*start_test)(TestReporter *reporter, const char *name)
-
At the start of each test Cgreen will call this method on the reporter with the name of the test being entered. Again, the default version keeps track of the stack of tests in the breadcrumb pointer of TestReporter. If you make use of the breadcrumb functions, as the defaults do, then you will need to call reporter_start_test() to keep the book-keeping in sync.
void (*show_pass)(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments)
-
This method is initially empty, as most reporters see little point in reporting passing tests (but you might do), so there is no need to chain the call to any other function. Besides the pointer to the reporter structure, Cgreen also passes the file name of the test, the line number of the failed assertion, the message to show and any additional parameters to substitute into the message. The message comes in as a printf() style format string, and so the variable argument list should match the substitutions.
void (*show_fail)(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments)
-
The partner of show_pass(), and the one you'll likely override first.
void (*show_skip)(TestReporter *reporter, const char *file, int line)
-
This method will be called when a skipped test is encountered, see Skipping Tests.
void (*show_incomplete)(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments)
-
When a test fails to complete, this is the handler that is called. As it's an unexpected outcome, no message is received, but we do get the name of the test. The text reporter combines this with the breadcrumb to produce the exception report.
void (*assert_true)(TestReporter *reporter, const char *file, int line, int result, const char * message, …)
-
This is not normally overridden and is really internal. It is the raw entry point for the test messages from the test suite. By default it dispatches the call to either show_pass() or show_fail().
void (*finish_test)(TestReporter *reporter, const char *file, int line)
-
The counterpart to the (*start_test)() call. It is called on leaving the test. It needs to be chained to reporter_finish_test() to keep track of the breadcrumb book-keeping.
void (*finish_suite)(TestReporter *reporter, const char *file, int line)
-
The counterpart to the (*start_suite)() call, called on leaving the test suite; override it as well if your reporter needs a handle on that event too. The default text reporter chains both this and (*finish_test)() to the same function, where it figures out if it is the end of the top-level suite. If so, it prints the familiar summary of passes and failures.
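As a small illustration of overriding one of these methods, here is a sketch (not part of Cgreen itself; the reporter name and output format are invented for the example) of a reporter that extends the default one to also announce passing assertions. Since show_pass() is empty by default, there is nothing to chain to:
#include <cgreen/reporter.h>
#include <stdio.h>
#include <stdarg.h>

/* Sketch: announce passing assertions, which the default reporter keeps quiet about. */
static void chatty_show_pass(TestReporter *reporter, const char *file, int line,
                             const char *message, va_list arguments) {
    (void)reporter;
    printf("PASS %s:%d: ", file, line);
    vprintf(message, arguments);
    printf("\n");
}

TestReporter *create_chatty_reporter(void) {
    TestReporter *reporter = create_reporter();
    reporter->show_pass = &chatty_show_pass;
    return reporter;
}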
The show_fail() and show_pass() functions are called from the child process, i.e. the isolated process that is fork():ed to run a single test case.
All others, notably start_…(), finish_…(), show_incomplete() and show_skip(), are run in the main (parent) process.
This fact might be important since the processes do not share memory.
Information is passed from the child to the parent using messaging performed within the show_…() functions.
|
The second block is simply resources and book-keeping that the reporter can use to liven up the messages…
passes
|
The number of passes so far. |
skips
|
The number of tests that have been skipped so far (see Skipping Tests). |
failures
|
The number of failures generated so far. |
exceptions
|
The number of test functions that have failed to complete so far. |
breadcrumb
|
This is a pointer to the list of test names in the stack. |
The breadcrumb
pointer is different and needs a little explanation.
Basically it is a stack, analogous to the breadcrumb trail you see on websites.
Every time a start_…()
handler is invoked, the name is placed on this stack.
When a finish_…()
handler is invoked, a name is popped off.
There are a bunch of utility functions in cgreen/breadcrumb.h
that can read the state of this stack.
Most useful are get_current_from_breadcrumb()
which takes the breadcrumb pointer and returns the current test name, and get_breadcrumb_depth()
which gives the current depth of the stack.
A depth of zero means that the test run has finished.
If you need to traverse all the names in the breadcrumb, then you can call walk_breadcrumb()
.
Here is the full signature…
void walk_breadcrumb(Breadcrumb *breadcrumb, void (*walker)(const char *, void *), void *memo);
The void (*walker)(const char *, void *)
is a callback that will be
passed the name of the test suite for each level of nesting.
It is also passed the memo
pointer that was passed to the walk_breadcrumb()
call.
You can use this pointer for anything you want, as all Cgreen does is pass it from call to call.
This is so aggregate information can be kept track of whilst still being reentrant.
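As an illustration, here is a small sketch of a walker that prints every name currently on the breadcrumb, one per line. The function and variable names are invented for this example, and the cast follows the same pattern as the reporter code later in this chapter:
#include <cgreen/reporter.h>
#include <cgreen/breadcrumb.h>
#include <stdio.h>

/* Walker callback: print one name per call, indenting a little more each time. */
static void print_name(const char *name, void *memo) {
    int *level = (int *)memo;
    printf("%*s%s\n", *level * 2, "", name);
    (*level)++;
}

/* Could be called from inside any reporter callback. */
static void dump_breadcrumb(TestReporter *reporter) {
    int level = 0;
    walk_breadcrumb((CgreenBreadcrumb *)reporter->breadcrumb, &print_name, &level);
}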
The last parts of the TestReporter
structure are…
ipc
|
This is an internal structure for handling the messaging between reporter and test suite. You shouldn’t touch this. |
memo
|
By contrast, this is a spare pointer for your own expansion. |
options
|
A pointer to a reporter-specific structure that can be used to set options;
the text reporter, for example, defines its own options structure for this purpose. |
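As a sketch of how a reporter might make use of this pointer, here is an invented options structure for a hypothetical custom reporter (the type, field and function names are purely illustrative):
#include <cgreen/reporter.h>
#include <stdbool.h>

/* Illustrative options structure for a hypothetical custom reporter. */
typedef struct {
    bool verbose;
} MyReporterOptions;

static MyReporterOptions my_options = { .verbose = false };

TestReporter *create_my_reporter(void) {
    TestReporter *reporter = create_reporter();
    reporter->options = &my_options;   /* the reporter's callbacks can read this later */
    return reporter;
}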
9.5. An Example XML Reporter
Let’s make things real with an example. Suppose we want to send the output from Cgreen in XML format, say for storing in a repository or for sending across the network.
The cgreen-runner already has an XML-reporter that you can
use if you need to produce Jenkins/ANT compatible XML output.
See Cgreen Runner Options.
|
Suppose also that we have come up with the following format…
<?xml?>
<suite name="Top Level">
    <suite name="A Group">
        <test name="a_test_that_passes">
        </test>
        <test name="a_test_that_fails">
            <fail>
                <message>A failure</message>
                <location file="test_as_xml.c" line="8"/>
            </fail>
        </test>
    </suite>
</suite>
In other words, a simple nesting of tests with only failures encoded. The absence of a "fail" node means the test passed.
Here is a test script, test_as_xml.c
that we can use to construct the
above output…
#include <cgreen/cgreen.h>

Describe(XML_reporter);
BeforeEach(XML_reporter) {}
AfterEach(XML_reporter) {}

Ensure(XML_reporter, reports_a_test_that_passes) {
    assert_that(1 == 1);
}

Ensure(XML_reporter, reports_a_test_that_fails) {
    fail_test("A failure");
}

TestSuite *create_test_group() {
    TestSuite *suite = create_named_test_suite("A Group");
    add_test_with_context(suite, XML_reporter, reports_a_test_that_passes);
    add_test_with_context(suite, XML_reporter, reports_a_test_that_fails);
    return suite;
}

int main(int argc, char **argv) {
    TestSuite *suite = create_named_test_suite("Top Level");
    add_suite(suite, create_test_group());
    return run_test_suite(suite, create_text_reporter());
}
We can’t use the auto-discovering cgreen-runner
(see Automatic Test Discovery) here, since we need to ensure that the nested suites are reported as a nested XML structure.
And we’re not actually writing real tests, just something that we can use to drive our new reporter.
The text reporter is used just to confirm that everything is working. So far it is.
Running "Top Level" (2 tests)... test_as_xml.c:12: Failure: A Group -> reports_a_test_that_fails A failure "A Group": 1 pass, 1 failure in 42ms. Completed "Top Level": 1 pass, 1 failure in 42ms.
Our first move is to switch the reporter from text, to our not yet written XML version…
#include "xml_reporter.h"
...
int main(int argc, char **argv) {
    TestSuite *suite = create_named_test_suite("Top Level");
    add_suite(suite, create_test_group());
    return run_test_suite(suite, create_xml_reporter());
}
We’ll start the ball rolling with the xml_reporter.h
header file…
#ifndef _XML_REPORTER_HEADER_
#define _XML_REPORTER_HEADER_
#include <cgreen/reporter.h>
TestReporter *create_xml_reporter();
#endif
…and the simplest possible reporter in xml_reporter.c
.
#include <cgreen/reporter.h>
#include "xml_reporter.h"
TestReporter *create_xml_reporter() {
    TestReporter *reporter = create_reporter();
    return reporter;
}
One that outputs nothing.
$ gcc -c test_as_xml.c
$ gcc -c xml_reporter.c
$ gcc xml_reporter.o test_as_xml.o -lcgreen -o xml
$ ./xml
Yep, nothing.
Let’s add the outer XML tags first, so that we can see Cgreen navigating the test suite…
#include <cgreen/reporter.h>
#include <cgreen/breadcrumb.h>
#include <stdio.h>
#include "xml_reporter.h"
static void xml_reporter_start_suite(TestReporter *reporter, const char *name, int count) {
    printf("<suite name=\"%s\">\n", name);
    reporter_start_suite(reporter, name, count);
}

static void xml_reporter_start_test(TestReporter *reporter, const char *name) {
    printf("<test name=\"%s\">\n", name);
    reporter_start_test(reporter, name);
}

static void xml_reporter_finish_test(TestReporter *reporter, const char *filename, int line, const char *message) {
    reporter_finish_test(reporter, filename, line, message);
    printf("</test>\n");
}

static void xml_reporter_finish_suite(TestReporter *reporter, const char *filename, int line) {
    reporter_finish_suite(reporter, filename, line);
    printf("</suite>\n");
}

TestReporter *create_xml_reporter() {
    TestReporter *reporter = create_reporter();
    reporter->start_suite = &xml_reporter_start_suite;
    reporter->start_test = &xml_reporter_start_test;
    reporter->finish_test = &xml_reporter_finish_test;
    reporter->finish_suite = &xml_reporter_finish_suite;
    return reporter;
}
Although chaining to the underlying reporter_start_*()
and reporter_finish_*()
functions is optional, I want to make use of some of the facilities later.
Our output meanwhile, is making its first tentative steps…
<suite name="Top Level">
<suite name="A Group">
<test name="reports_a_test_that_passes">
</test>
<test name="reports_a_test_that_fails">
</test>
</suite>
</suite>
We don’t require an XML node for passing tests, so the show_fail()
function is all we need…
...
static void xml_show_fail(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments) {
    printf("<fail>\n");
    printf("\t<message>");
    vprintf(message, arguments);
    printf("</message>\n");
    printf("\t<location file=\"%s\" line=\"%d\"/>\n", file, line);
    printf("</fail>\n");
}
...
TestReporter *create_xml_reporter() {
    TestReporter *reporter = create_reporter();
    reporter->start_suite = &xml_reporter_start_suite;
    reporter->start_test = &xml_reporter_start_test;
    reporter->show_fail = &xml_show_fail;
    reporter->finish_test = &xml_reporter_finish_test;
    reporter->finish_suite = &xml_reporter_finish_suite;
    return reporter;
}
We have to use vprintf()
to handle the variable argument list passed to us.
This will probably mean including the stdarg.h
header as well as stdio.h
.
This gets us pretty close to what we want…
<suite name="Top Level">
<suite name="A Group">
<test name="reports_a_test_that_passes">
</test>
<test name="reports_a_test_that_fails">
<fail>
<message>A failure</message>
<location file="test_as_xml.c" line="15"/>
</fail>
</test>
</suite>
</suite>
For completeness we should add a tag for a test that doesn’t complete. We’ll output this as a failure, although we don’t bother with the location this time…
static void xml_show_incomplete(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments) {
    printf("<fail>\n");
    printf("\t<message>Failed to complete</message>\n");
    printf("</fail>\n");
}
...
TestReporter *create_xml_reporter() {
    TestReporter *reporter = create_reporter();
    reporter->start_suite = &xml_reporter_start_suite;
    reporter->start_test = &xml_reporter_start_test;
    reporter->show_fail = &xml_show_fail;
    reporter->show_incomplete = &xml_show_incomplete;
    reporter->finish_test = &xml_reporter_finish_test;
    reporter->finish_suite = &xml_reporter_finish_suite;
    return reporter;
}
All that’s left then is the XML declaration and the thorny issue of indenting. Although the indenting is not strictly necessary, it would make the output a lot more readable.
Given that the test depth is kept track of for us with the breadcrumb
object in the TestReporter
structure, indentation will actually be quite simple.
We’ll add an indent()
function that outputs the correct number of tabs…
static void indent(TestReporter *reporter) {
    int depth = get_breadcrumb_depth((CgreenBreadcrumb *)reporter->breadcrumb);
    while (depth-- > 0) {
        printf("\t");
    }
}
The get_breadcrumb_depth()
function just gives the current test depth as recorded in the reporter's breadcrumb (from cgreen/breadcrumb.h
).
As that is just the number of tabs to output, the implementation is trivial.
We can then use this function in the rest of the code. Here is the complete listing…
#include <cgreen/reporter.h>
#include <cgreen/breadcrumb.h>
#include <stdio.h>

#include "xml_reporter.h"

static void indent(TestReporter *reporter) {
    int depth = get_breadcrumb_depth((CgreenBreadcrumb *)reporter->breadcrumb);
    while (depth-- > 0) {
        printf("\t");
    }
}

static void xml_reporter_start_suite(TestReporter *reporter, const char *name, int count) {
    if (get_breadcrumb_depth((CgreenBreadcrumb *)reporter->breadcrumb) == 0) {
        printf("<?xml?>\n");
    }
    indent(reporter);
    printf("<suite name=\"%s\">\n", name);
    reporter_start_suite(reporter, name, count);
}

static void xml_reporter_start_test(TestReporter *reporter, const char *name) {
    indent(reporter);
    printf("<test name=\"%s\">\n", name);
    reporter_start_test(reporter, name);
}

static void xml_show_fail(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments) {
    indent(reporter);
    printf("<fail>\n");
    indent(reporter);
    printf("\t<message>");
    vprintf(message, arguments);
    printf("</message>\n");
    indent(reporter);
    printf("\t<location file=\"%s\" line=\"%d\"/>\n", file, line);
    indent(reporter);
    printf("</fail>\n");
}

static void xml_show_incomplete(TestReporter *reporter, const char *file, int line, const char *message, va_list arguments) {
    indent(reporter);
    printf("<fail>\n");
    indent(reporter);
    printf("\t<message>Failed to complete</message>\n");
    indent(reporter);
    printf("</fail>\n");
}

static void xml_reporter_finish_test(TestReporter *reporter, const char *filename, int line, const char *message) {
    reporter_finish_test(reporter, filename, line, message);
    indent(reporter);
    printf("</test>\n");
}

static void xml_reporter_finish_suite(TestReporter *reporter, const char *filename, int line) {
    reporter_finish_suite(reporter, filename, line);
    indent(reporter);
    printf("</suite>\n");
}

TestReporter *create_xml_reporter() {
    TestReporter *reporter = create_reporter();
    reporter->start_suite = &xml_reporter_start_suite;
    reporter->start_test = &xml_reporter_start_test;
    reporter->show_fail = &xml_show_fail;
    reporter->show_incomplete = &xml_show_incomplete;
    reporter->finish_test = &xml_reporter_finish_test;
    reporter->finish_suite = &xml_reporter_finish_suite;
    return reporter;
}
And finally the desired output…
<?xml?>
<suite name="Top Level">
    <suite name="A Group">
        <test name="reports_a_test_that_passes">
        </test>
        <test name="reports_a_test_that_fails">
            <fail>
                <message>A failure</message>
                <location file="test_as_xml.c" line="15"/>
            </fail>
        </test>
    </suite>
</suite>
Job done.
Possible other reporter customizations include reporters that write to
syslog
, talk to IDE plug-ins, paint pretty printed documents or just
return a boolean for monitoring purposes.
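For instance, a reporter that forwards failures to syslog could be sketched roughly as follows. This is an outline under assumptions, not something shipped with Cgreen; the function names are invented for the example:
#include <cgreen/reporter.h>
#include <stdarg.h>
#include <syslog.h>

/* Sketch: send failure messages to syslog instead of standard output. */
static void syslog_show_fail(TestReporter *reporter, const char *file, int line,
                             const char *message, va_list arguments) {
    (void)reporter;
    vsyslog(LOG_ERR, message, arguments);
    syslog(LOG_ERR, "at %s:%d", file, line);
}

TestReporter *create_syslog_reporter(void) {
    TestReporter *reporter = create_reporter();
    reporter->show_fail = &syslog_show_fail;
    return reporter;
}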
10. Advanced Usage
10.1. Custom Constraints
Sometimes the built-in constraints that Cgreen provides are not sufficient. With Cgreen it is possible to create custom constraints, although you will be depending on some internal structures if you do so.
Here’s how to implement a simple example custom constraint that asserts that the value is bigger than 5. We’ll implement this using a static constraint since it does not take any parameter.
static constraints are a bad idea… |
First we need the actual compare function:
#include <cgreen/cgreen.h>
bool compare_want_greater_than_5(Constraint *constraint, CgreenValue actual) {
    return actual.value.integer_value > 5;
}
And then the static constraint structure, for which we’ll need some of Cgreen's internal functions:
#include <cgreen/message_formatting.h>
#include "constraint_internal.h"
Constraint static_is_bigger_than_5 = {
    /* .type */ CGREEN_VALUE_COMPARER_CONSTRAINT,
    /* .name */ "be bigger than 5",
    /* .destroy */ destroy_static_constraint,
    /* .compare */ compare_want_greater_than_5,
    /* .test */ test_want,
    /* .format_failure_message_for */ failure_message_for,
    /* .actual_value_message */ "",
    /* .expected_value_message */ "",
    /* .expected_value */ {CGREEN_INTEGER, {5}},
    /* .stored_value_name */ "null",
    /* .parameter_name */ NULL,
    /* .size_of_stored_value */ 0
};
This implementation can use a statically declared Constraint
structure that is prefilled since it does not need to store the value to be checked.
This static custom constraint can then be used directly in the assert
like this:
Ensure(TestConstraint, custom_constraint_using_static_function) {
    Constraint *is_bigger_than_5 = &static_is_bigger_than_5;
    assert_that(10, is_bigger_than_5);
}
To create a custom constraint that takes an input parameter, we need to add a function that creates a constraint structure that correctly saves the value to be checked, and, for convenience, a macro.
This time we need to dig into how Cgreen stores expected values and we’ll also make use of Cgreen's utility function string_dup()
.
#include <cgreen/message_formatting.h>
#include "cgreen_value_internal.h"
#include "utils.h"
bool compare_want_smaller_value(Constraint *constraint, CgreenValue actual) {
    return actual.value.integer_value < constraint->expected_value.value.integer_value;
}

Constraint *create_smaller_than_constraint(intptr_t expected_value, const char *expected_value_name) {
    Constraint *constraint = create_constraint();
    constraint->expected_value = make_cgreen_integer_value(expected_value);
    constraint->expected_value_name = string_dup(expected_value_name);
    constraint->type = CGREEN_VALUE_COMPARER_CONSTRAINT;
    constraint->compare = &compare_want_smaller_value;
    constraint->execute = &test_want;
    constraint->name = "be smaller than";
    constraint->size_of_expected_value = sizeof(intptr_t);
    return constraint;
}

#define is_smaller_than(value) create_smaller_than_constraint(value, #value)
This gives a custom constraint that can be used in the assert
in the
same way as Cgreen's built-in constraints:
Ensure(TestConstraint, custom_constraint_using_a_function_with_arguments_function) {
    assert_that(9, is_smaller_than(10));
}
The last, and definitely more complex, example is a constraint that takes two structures and compares fields in them. The constraint will, given a structure representing a piece and another structure representing a box, check if the piece can fit inside the box using a size field.
Assuming two "application" structures with size
fields:
typedef struct Box {
    int id;
    int size;
} Box;

typedef struct Piece {
    int id;
    int size;
} Piece;
We want to be able to write a test like this:
Ensure(TestConstraint, more_complex_custom_constraint_function) {
    Box box1 = {.id = 1, .size = 5};
    Piece piece99 = {.id = 99, .size = 6};
    assert_that(&piece99, can_fit_in_box(&box1));
}
To implement the can_fit_in_box
constraint we first need a comparer
function:
bool compare_piece_and_box_size(Constraint *constraint, CgreenValue actual) {
    return ((Piece *)actual.value.pointer_value)->size
            < ((Box *)constraint->expected_value.value.pointer_value)->size;
}
And this time we can’t rely on Cgreen's checker and message generating function test_want()
which we used in the previous examples.
So we also need a custom function that calls the comparison and formats a possible error message:
static void test_fit_piece(Constraint *constraint, const char *function_name, CgreenValue actual,
                           const char *test_file, int test_line, TestReporter *reporter) {
    (*reporter->assert_true)(
        reporter,
        test_file,
        test_line,
        (*constraint->compare)(constraint, actual),
        "Piece [%d] does not fit in box [%d] in function [%s] parameter [%s]",
        ((Piece *)actual.value.pointer_value)->id,
        ((Box *)constraint->expected_value.value.pointer_value)->id,
        function_name,
        constraint->parameter_name);
}
Finally we’ll use both of those in the constraint creating function and add the convenience macro:
Constraint *create_piece_fit_in_box_constraint(intptr_t expected_value, const char *expected_value_name) {
    Constraint *constraint = create_constraint();
    constraint->expected_value = make_cgreen_pointer_value((void*)expected_value);
    constraint->expected_value_name = string_dup(expected_value_name);
    constraint->type = CGREEN_CONTENT_COMPARER_CONSTRAINT;
    constraint->compare = &compare_piece_and_box_size;
    constraint->execute = &test_fit_piece;
    constraint->name = "fit in box";
    constraint->size_of_expected_value = sizeof(intptr_t);
    return constraint;
}

#define can_fit_in_box(box) create_piece_fit_in_box_constraint((intptr_t)box, #box)
As stated above, using custom constraints makes your tests vulnerable to changes in Cgreen's internals. Hopefully a method to avoid this will emerge in the future. |
You can write custom constraints directly in a test file, but they can of course also be collected into a separately compiled module which is linked with your tests. |
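For example, assuming the constraints live in a file called my_constraints.c and the tests in piece_tests.c (both names invented here), building and linking such a separate module could look something like:
$ gcc -c piece_tests.c
$ gcc -c my_constraints.c
$ gcc piece_tests.o my_constraints.o -lcgreen -o piece_tests
$ ./piece_tests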
11. Hints and Tips
This chapter is intended to contain tips for situations where you might need some help, but it is nowhere near complete at this time. |
11.1. cgreen-mocker - Automated Mocking
Are you starting out with Cgreen on a largish legacy system? And there are loads and loads of functions to mock to get a unit under test?
You could try the cgreen-mocker
that is supplied as a contributed part of the Cgreen source distribution.
It is a Python program that parses C language header files and tries to create a corresponding .mock
file where each function declaration is replaced with a call to mock()
.
Usage: cgreen-mocker.py <headerfile> { <cpp_directive> }
    <headerfile>:     file with function declarations that you want to mock
    <cpp_directive>:  any 'cpp' directive, but most useful is e.g. "-I <directory>" to ensure cpp finds files.
So given a header file containing lines like
extern CgreenValue make_cgreen_integer_value(intptr_t integer);
extern CgreenValue make_cgreen_string_value(const char *string);
cgreen-mocker
will, given that there are no errors, print something
like this on the screen:
CgreenValue make_cgreen_integer_value(intptr_t integer) {
    return mock(integer);
}

CgreenValue make_cgreen_string_value(const char *string) {
    return mock(string);
}
Of course, you would pipe this output to a file.
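For example, assuming the declarations above live in a header called cgreen_value.h (the file names here are just for illustration), that could look like:
$ cgreen-mocker.py cgreen_value.h > cgreen_value.mock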
To use cgreen-mocker
you need Python, and the following packages:
-
packaging
— (https://github.com/pypa/packaging) -
pycparser
— (https://github.com/eliben/pycparser)
These can easily be installed with:
$ pip install -r requirements.txt
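If you don't have the requirements.txt file at hand, installing the two packages listed above directly should work just as well:
$ pip install packaging pycparser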
cgreen-mocker is an unsupported contribution to the Cgreen
project by Thomas Nilefalk.
|
11.2. Compiler Error Messages
Sometimes you might get cryptic and strange error messages from the compiler. Since Cgreen uses some C/C++ macro magic this can happen, and the error messages might not be straightforward to interpret.
Here are some examples, but the exact messages differ between compilers and versions.
Compiler error message |
Probable cause… |
|
Missing |
|
Missing |
|
Missing test subject/context in the |
|
Missing |
11.3. Signed, Unsigned, Hex and Byte
Cgreen attempts to handle primitive type comparisons with a single constraint, is_equal_to()
.
This means that it must store the actual and expected values in a form that will accommodate all possible values that primitive types might take, typically an intptr_t
.
This might sometimes cause unexpected comparisons since all actual values will be cast to match intptr_t
, which is a signed value.
E.g.
Ensure(Char, can_compare_byte) {
    char chars[4] = {0xaa, 0xaa, 0xaa, 0};
    assert_that(chars[0], is_equal_to(0xaa));
}
On a system which considers char
to be signed this will cause the
following Cgreen assertion error:
char_tests.c:11: Failure: Char -> can_compare_byte
        Expected [chars[0]] to [equal] [0xaa]
        actual value:   [-86]
        expected value: [170]
This is caused by the C rules forcing an implicit cast of the signed char
to intptr_t
by sign-extension.
This might not be what you expected.
The correct solution, by any standard, is to cast the actual value to unsigned char
which will then be interpreted correctly.
And the test passes.
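Applied to the example above, the corrected assertion would be:
Ensure(Char, can_compare_byte) {
    char chars[4] = {0xaa, 0xaa, 0xaa, 0};
    assert_that((unsigned char)chars[0], is_equal_to(0xaa));
}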
Casting to unsigned will not always suffice since that is interpreted as unsigned int which will cause a sign-extension from the signed char and might or might not work depending on the size of int on your machine.
|
In order to reveal what really happens you might want to see the actual and expected values in hex. This can easily be done with the is_equal_to_hex()
.
Ensure(Char, can_compare_byte) {
    char chars[4] = {0xaa, 0xaa, 0xaa, 0};
    assert_that(chars[0], is_equal_to_hex(0xaa));
}
This might make the mistake easier to spot:
char_tests.c:11: Failure: Char -> can_compare_byte
        Expected [chars[0]] to [equal] [0xaa]
        actual value:   [0xfffffffffffffaa]
        expected value: [0xaa]
11.4. Cgreen and Coverage
Cgreen is compatible with coverage tools, in particular gcov
/lcov
.
So generating coverage data for your application should be straightforward.
This is what you need to do (using gcc
or clang
):
-
compile with
-ftest-coverage
and-fprofile-arcs
-
run tests
-
lcov --directory . --capture --output-file coverage.info
-
genhtml -o coverage coverage.info
Your coverage data will be available in coverage/index.html
.
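Put together, a complete session might look something like this (the file names are just for illustration):
$ gcc -ftest-coverage -fprofile-arcs -c calculator.c calculator_tests.c
$ gcc -ftest-coverage -fprofile-arcs calculator.o calculator_tests.o -lcgreen -o calculator_tests
$ ./calculator_tests
$ lcov --directory . --capture --output-file coverage.info
$ genhtml -o coverage coverage.info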
11.5. Garbled Output
If the output from your Cgreen-based tests appears garbled or duplicated, this can be caused by the way Cgreen terminates its test-running child process.
In many unix-like environments the termination of a child process should be done with _exit()
.
However, this interferes severely with the ability to collect coverage data.
As this is important for many of us, Cgreen instead terminates its child process with the much cruder exit()
(note: no underscore).
Under rare circumstances this might have the unwanted effect of output becoming garbled and/or duplicated.
If this happens you can change that behaviour using an environment variable CGREEN_CHILD_EXIT_WITH__EXIT
(note: two underscores).
If set, Cgreen will terminate its test-running child process with the more POSIX-compliant _exit()
.
But as mentioned before, this is, at least at this point in time, incompatible with collecting coverage data.
So, it’s coverage or POSIX-correct child exits and guaranteed output consistency. You can’t have both…
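For example, to run the test binary built earlier with the POSIX-style child exit, setting the variable on the command line is enough (the text above only requires it to be set; the value 1 here is arbitrary):
$ CGREEN_CHILD_EXIT_WITH__EXIT=1 ./xml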
Appendix A: Legacy Style Assertions
Cgreen has been around for a while and has developed and matured. There is an older style of assertions that was the initial version, a style that we now call the 'legacy style' because it was more aligned with the original, now older, unit test frameworks. If you are not interested in historical artifacts, I recommend that you skip this section.
But for completeness of documentation, here are the legacy style assertion macros:
Assertion |
Description |
assert_true(boolean) |
Passes if boolean evaluates true |
assert_false(boolean) |
Fails if boolean evaluates true |
assert_equal(first, second) |
Passes if 'first == second' |
assert_not_equal(first, second) |
Passes if 'first != second' |
assert_string_equal(first, second) |
Uses 'strcmp()' and passes if the strings are equal |
assert_string_not_equal(first, second) |
Uses 'strcmp()' and fails if the strings are equal |
Each assertion has a default message comparing the two values.
If you want to substitute your own failure messages, then you must use the *_with_message()
counterparts…
Assertion |
assert_true_with_message() |
assert_false_with_message() |
assert_equal_with_message() |
assert_not_equal_with_message() |
assert_string_equal_with_message() |
assert_string_not_equal_with_message() |
All these assertions have an additional char *
message parameter, which is the message you wish to display on failure.
If this is set to NULL
, then the default message is shown instead.
The most useful assertion from this group is assert_true_with_message()
as you can use that to create your own assertion functions with your own messages.
Actually the assertion macros have variable argument lists.
The failure message acts like the template in printf()
.
We could change the test above to be…
Ensure(strlen_of_hello_is_five) {
    const char *greeting = "Hello";
    int length = strlen(greeting);
    assert_equal_with_message(length, 5, "[%s] should be 5, but was %d", greeting, length);
}
This should produce a slightly more user-friendly message when things go wrong. But, actually, Cgreen's default messages are so good that you are encouraged to skip the legacy style and go for the more modern constraint-style assertions. This is particularly true when you use the BDD style test notation.
We strongly recommend the use of BDD-style notation with constraint-based assertions. |
Appendix B: Release History
In this section only the introduction or changes of major features are listed, and thus only MINOR versions. For a detailed log of features, enhancements and bug fixes, visit the project's repository on GitHub, https://github.com/cgreen-devs/cgreen.
Since 1.4.1 Cgreen has included the following C pre-processor definitions:
-
CGREEN_VERSION
, a SemVer string -
CGREEN_VERSION_MAJOR
-
CGREEN_VERSION_MINOR
-
CGREEN_VERSION_PATCH
You can use them to conditionally check for Cgreen features introduced in the versions listed in the following sections.
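For example, a test file could guard its use of a newer feature roughly like this (a sketch; the feature and the version in which it appeared are taken from the sections below):
#include <cgreen/cgreen.h>

/* will_capture_parameter() appeared in 1.5.0, so only use it with new enough headers. */
#if defined(CGREEN_VERSION_MAJOR) && \
    (CGREEN_VERSION_MAJOR > 1 || (CGREEN_VERSION_MAJOR == 1 && CGREEN_VERSION_MINOR >= 5))
/* ... tests using will_capture_parameter() ... */
#endif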
Since 1.2.0 Cgreen has featured a public version variable in the loaded library, cgreen_library_version
.
This is mainly used by the cgreen-runner
to present the version of the loaded library, but it can also be used to check for availability of features in the same way.
B.1. 1.6.0
-
Reverted use of
libbfd
introduced in 1.5.0 due to portability issues and Debian deeming it to be a serious bug due to libbfd
not having a stable interface
B.2. 1.5.1
-
Fixed a problem with
ends_with_string()
which randomly crashed
B.3. 1.5.0
-
Replaced calling
nm
with BFD library calls; this makes the cgreen-runner
a bit more fiddly to build on some systems -
Introduced
will_capture_parameter()
B.4. 1.4.0
-
A memory leak in
will_return_by_value()
was fixed but now requires user deallocation.
B.5. 1.3.0
-
Renamed CgreenValueType values to avoid clash, now all start with
CGREEN_
B.6. 1.2.0
-
Introduced
will_return_by_value()
-
Introduced
with_side_effect()
B.7. 1.1.0
None.
B.8. 1.0.0
First official non-beta release.
Appendix C: License
Copyright (c) 2006-2021, Cgreen Development Team and contributors
(https://github.com/cgreen-devs/cgreen/graphs/contributors)
Permission to use, copy, modify, and/or distribute this software and its documentation for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies, regardless of form, including printed and compiled.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHORS DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Appendix D: Acknowledgements
Thanks to
-
Marcus Baker <marcus@lastcraft.com> - initiator and substantial initial work
-
Matt Hargett <plaztiksyke@gmail.com> - upgrading to the modern BDD-ish syntax
-
João Freitas <joaohf@gmail.com> - asciidoc documentation and Cmake build system
-
Thomas Nilefalk <thomas@junovagen.se> - cgreen-runner and current maintainer
Thanks also go to @gardenia, @d-meiser, @stevemadsenblippar and others for their contributions.