A String Literal is not a string


The other day, I discovered a bug in some code that can be simplified to something like this:

A library of functions that handles a lot of different types:

void doSomethingWith(int i)           { cout << "int"    << endl; }
void doSomethingWith(double d)        { cout << "double" << endl; }
void doSomethingWith(const string& s) { cout << "string" << endl; }
void doSomethingWith(const MyType& m) { cout << "MyType" << endl; }

Used like this:

    doSomethingWith(3);
    doSomethingWith("foo");

It of course outputs:

int
string

Then someone wanted to handle void pointers as well, and added this function to the library:

void doSomethingWith(const void* p) { cout << "void*" << endl; }

What is the output now? Make up your mind before looking.

int
void*

What happened? Why did C++ pick the const void* overload instead of the const string& one we wanted it to use?

The type of a string literal is not string, but const char[]. When deciding which overload to use, C++ first checks whether any of them can be used directly. A const void* is the only type in our example that can point directly to the const char[], so that overload is picked.

Before that function was introduced, none of the overloads could be used directly, as neither const string& nor const MyType& can refer to a const char[], and it cannot be converted to an int or a double. C++ then looked for implicit constructors that could convert the const char[] into a usable type, and found std::string::string(const char* s). It then went on to create a temporary std::string object, and passed a reference to this object to void doSomethingWith(const string& s), like this:

doSomethingWith(std::string("foo"));

But when the const void* overload appeared as an alternative, the compiler preferred it, since it could be called without constructing any temporary objects.
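
If we want string literals to hit the string overload again, one possible fix (a sketch of mine, not part of the original library) is to add a const char* overload that forwards to the string version explicitly:

void doSomethingWith(const char* s)   { doSomethingWith(string(s)); }

A string literal decays to const char*, which is an exact match for this overload, so overload resolution now prefers it over the const void* one, and we get "string" again.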

As usual, the code for this blog post is available on GitHub.


Why we should see an uptake in <algorithm> usage


With C++11 out, I think we should see an uptake in the use of the good old <algorithm> header. Why?

A common thing to do in a program is to iterate over a container of objects, producing another container of other objects. Imagine for instance you have a vector of domain objects:

struct DomainObject
{
    string label;
};
vector<DomainObject> objects;

Now you want to produce a vector containing the labels of all your domain objects. This is the “classical” solution:

    vector<string> labels(objects.size());
    for (size_t i = 0; i < objects.size(); ++i)
        labels[i] = objects[i].label;


You can however instead use std::transform, which is more declarative, immune to off-by-one errors, possibly more optimization-friendly, and so on. This is how it looks:

    vector<string> labels(objects.size());
    transform(objects.begin(), objects.end(), labels.begin(), label_for);


The problem is however that you need a function / function object to provide as the last argument to transform. Here is the one I used:

string label_for(const DomainObject& obj)
{
    return obj.label;
}


This reduces locality, and makes the code harder to read. Unless the helper is sufficiently advanced that you would want to either reuse it a lot or test it, it would be better to be able to write it directly in the transform call. This is exactly what C++11 lambdas are good for, and where I think we’ll see them used a lot:

    vector<string> labels(objects.size());
    transform(objects.begin(), objects.end(), labels.begin(), [](const DomainObject& o){return o.label;});


This isn’t a complete introduction to lambdas, but if you haven’t seen them before, here is a quick intro. Lambdas are just a fancy name for functions without a name. That means you can simply type them in directly where you’d normally pass a function. [] means “anonymous function follows” (at least for the purposes of this article), and then you just type out any normal function body. Mine takes a reference to a DomainObject and returns its label, just like label_for() did.
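
If lambdas are completely new to you, it may also help to see one outside a function call, stored in a variable and invoked like an ordinary function (the variable name is my own):

    auto get_label = [](const DomainObject& o){ return o.label; };
    cout << get_label(objects[0]) << endl; //assumes objects is non-empty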

Here is another example, using std::find_if to look for a specific element in a container:

    auto matched = find_if(objects.begin(), objects.end(), [](const DomainObject& o) { return o.label == "two"; });
    cout << matched->label << endl;


Notice the use of auto, another C++11 feature. It uses type inference to deduce the type of the variable by looking at the rest of the expression. Here it understands that you will be getting a vector<DomainObject>::iterator from find_if(), so there is no need for you to type that out.
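
One caveat: find_if() returns objects.end() if no element matches, so real code should check for that before dereferencing the iterator:

    auto matched = find_if(objects.begin(), objects.end(), [](const DomainObject& o) { return o.label == "two"; });
    if (matched != objects.end())
        cout << matched->label << endl;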

As usual, the code for this blog post is available on GitHub.


Zero-overhead test doubles (mocks without virtual)


In which I explain why some coders are opposed to unit-testing with test-doubles due to the overhead of virtual functions, admit that they are sometimes right, and present a zero-overhead solution.

(Sorry if this post looks like a lot of code. It’s not as bad as it looks though, the three examples are almost identical.)

It is not uncommon to find coders shying away from virtual functions due to the performance overhead. Often they are exaggerating, but some actually have the profiler data to prove it. Consider the following example, with a simulator class that calls the model heavily in an inner loop:

    class Model {
    public:
        double getValue(size_t i);
        size_t size();
    };

    class Simulator {
        Model* model;
    public:
        Simulator() : model(new Model()) {}

        void inner_loop() {
            vector<double> values(model->size());
            while (simulate_more) {
                for (size_t i = 0; i < model->size(); ++i) {
                    values[i] = model->getValue(i);
                }
                doStuffWith(values);
            }
        }
    };

Imagine that getValue() is a lightweight function, lightweight enough that a virtual indirection would incur a noticeable performance hit. Along comes the TDD-guy, preaching the values of unit testing, sticking virtual all over the place to facilitate test doubles, and suddenly our simulator is slower than the competition.

Here’s how he would typically go about doing it (the modifications are marked with comments):

    class Model {
    public:
        virtual double getValue(size_t i); //<--- virtual
        virtual size_t size();
    };

    class Simulator {
        Model* model;
    public:
        Simulator(Model* model) : model(model) {} //<--- inject dependency on Model

        void inner_loop() {
            vector<double> values(model->size());
            while (simulate_more) {
                for (size_t i = 0; i < model->size(); ++i) {
                    values[i] = model->getValue(i);
                }
                doStuffWith(values);
            }
        }
    };

Now that the methods in Model are virtual, and the Model instance is passed in to the Simulator constructor, it can be faked/mocked in a test, like this:

    class FakeModel : public Model {
    public:
        virtual double getValue(size_t i);
        virtual size_t size();
    };

    void test_inner_loop() {
        FakeModel fakeModel;
        Simulator simulator(&fakeModel);
        //Do test
    }

Unfortunately, the nightly profiler build complains that our simulations now run slower than they used to. What to do?

The use of inheritance and dynamic polymorphism is actually not needed in this case. We know at compile time whether we will use a fake or a real Model, so we can use static polymorphism, a.k.a. templates:

    class Model {
    public:
        double getValue(size_t i); //<--- look mom, no virtual!
        size_t size();
    };

    template <class ModelT> //<--- type of model as a template parameter
    class Simulator {
        ModelT* model;
    public:
        Simulator(ModelT* model) : model(model) {} //<--- Model is still injected

        void inner_loop() {
            vector<double> values(model->size());
            while (simulate_more) {
                for (size_t i = 0; i < model->size(); ++i) {
                    values[i] = model->getValue(i);
                }
                doStuffWith(values);
            }
        }
    };

We have now parameterized Simulator with the Model type, and can decide at compile time which one to use, thus eliminating the need for virtual methods. We still need to inject the Model instance though, to be able to insert the fake in the test like this:

    class FakeModel { //<--- doesn't need to inherit from Model, only implement the methods used in inner_loop()
    public:
        double getValue(size_t i);
        size_t size();
    };

    void test_inner_loop() {
        FakeModel fakeModel; 
        Simulator<FakeModel> simulator(&fakeModel);
        //Do test
    }
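
For completeness, production code simply instantiates the template with the real thing (a sketch, leaving out whatever setup the real Model needs):

    Model model;
    Simulator<Model> simulator(&model);
    simulator.inner_loop();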

That’s it, all the benefits of test doubles, with zero performance overhead. There is a drawback however, in that we end up with code that is templated only for the purpose of testing. Not an ideal solution, but if both correctness and speed are important, and you have profiler data to prove the need, this is probably the way to go.

As usual, the source code used in this post is available on GitHub.

If you write code-heavy posts like this in vim and want to automate copying in code snippets from an external file, check out my vim-plugin SnippetySnip (also available on GitHub).

Disempower Every Variable


In which I argue you should reduce the circle of influence, and the ability to change, of every variable.

The more a variable can do, the harder it is to reason about. If you want to change a single line of code involving the variable, you need to understand all its other uses. To make your code more readable and maintainable, you should disempower all your variables as much as possible.

Here are two things you can do to minimize the power of a variable:

1: Reduce its circle of influence (minimize the scope)
I once had to make a bugfix in a 400-line function containing dozens of for-loops. They all reused a single counter variable:

{
  int i;
  (...)
  for (i = 0; i < n; ++i) {
  }
  (...)
  for (i = 0; i < n; ++i) {
  }
  //350 lines later...
  for (i = 0; i < n; ++i) {
  }
}

When looking at a single for-loop, how am I to know that the value of i is not used after the specific loop I was working on? Someone might be doing something like

for (i = 0; i < n; ++i) {
}
some_array[i] = 23;

or

for (i = 0; i < n; ++i) {
}
for (; i < m; ++i) {
}

The solution here is of course to use a variable local to each for-loop (unless of course it actually is used outside of the loop):

for (int i = 0; i < n; ++i) {
}
for (int i = 0; i < n; ++i) {
}

Now I can be sure that if I change i in one for-loop, it won’t affect the rest of the function.

2: Take away its ability to change (make it const)

(I have blogged about const a few times before. It is almost always a good idea to make everything that doesn’t need to change const.)

Making a local variable const helps the reader to reason about the variable, since he will instantly know that its value will never change:

void foo() {
  const string key = getCurrentKey();
  (...) //Later...
  doSomethingWith(key);
  (...) //Even later...
  collection.getItem(key).process();
}
Here the reader knows that we are always working with the same key throughout foo().
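
As a bonus, the compiler now enforces this for us. A quick sketch of what happens if someone later tries to reassign the variable (getOtherKey() is a made-up function for illustration):

void foo() {
  const string key = getCurrentKey();
  key = getOtherKey(); //compile error: key is const
}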

In summary: Reduce the circle of influence (by reducing the scope) and take away the ability to change (by using const).


Undefined Behaviour — Worse Than its Reputation?


Last week I wrote about The Difference Between Unspecified and Undefined Behaviour. This week I’d like to expand a bit more on the severity of undefined behaviour. If you have a lot of time, however, instead go read A Guide to Undefined Behavior in C and C++ by John Regehr of the University of Utah, and then What Every C Programmer Should Know About Undefined Behavior by Chris Lattner of the LLVM project, as they cover this material in much more depth (and a lot more words!) than I do here.

To expand on the example from last week, what is the output of this program?

int main()
{
    int array[] = {1,2,3};
    cout << array[3] << endl;
    cout << "Goodbye, cruel world!" << endl;
}

A good guess would be a random integer on one line, then “Goodbye, cruel world!” on another line. A better guess would be that anything can happen on the first line, but then “Goodbye, cruel world!” is certainly printed. The answer is however that we can’t even know that, since “[i]f any step in a program’s execution has undefined behavior, then the entire execution is without meaning” [Regehr p.1].

This fact has two implications that I want to emphasize:

1: An optimizing compiler can move the undefined operation to a different place than where it appears in the source code
[Regehr p.3] gives a good example of this:

int a;

void foo (unsigned y, unsigned z)
{
  bar();
  a = y%z; //Possible divide by zero
}

What happens if we call foo(1,0)? You would think bar() gets called, and then the program crashes. The compiler is however allowed to reorder the two lines in foo(), and [Regehr p.3] indeed shows that Clang does exactly this.

What are the implications? If you are investigating a crash in your program and never see the results of bar(), you might falsely conclude that the bug in the source code must be before bar() is called, or in its very beginning. To find the real bug in this case you would have to turn off optimization, or step through the program in a debugger.

2: Seemingly unrelated code can be optimized away near a possible undefined behaviour
[Lattner p.1] presents a good example:

void contains_null_check(int *P) {
  int dead = *P;
  if (P == 0)
    return;
  *P = 4;
}

What happens if P is NULL? Maybe some garbage gets stored in dead? Maybe dereferencing P crashes the program? At least we can be sure that we will never reach the last line, *P = 4, because of the check if (P == 0). Or can we?

An optimizing compiler applies its optimizations in series, not in one omniscient operation. Imagine two optimizations acting on this code, “Redundant Null Check Elimination” and “Dead Code Elimination” (in that order).

During Redundant Null Check Elimination, the compiler figures that if P == NULL, then int dead = *P; results in undefined behaviour, and the entire execution is undefined. The compiler can basically do whatever it wants. If P != NULL however, there is no need for the if-check. So it safely optimizes it away:

void contains_null_check(int *P) {
  int dead = *P;
  //if (P == 0)
    //return;
  *P = 4;
}

During Dead Code Elimination, the compiler figures out that dead is never used, and optimizes that line away as well. This invalidates the assumption made by Redundant Null Check Elimination, but the compiler has no way of knowing this, and we end up with this:

void contains_null_check(int *P) {
  *P = 4;
}

When we wrote this piece of code, we were sure (or so we thought) that *P = 4 would never be reached when P == NULL, but the compiler (correctly) optimized away the guard we had meticulously put in place.

Concluding notes
If you thought undefined behaviour only affected the operation in which it appears, I hope I have convinced you otherwise. And if you found the topic interesting, I really recommend reading the two articles I mentioned in the beginning (A Guide to Undefined Behavior in C and C++ and What Every C Programmer Should Know About Undefined Behavior). And the moral of the story is of course to avoid undefined behaviour like the plague.


The Difference Between Unspecified and Undefined Behaviour


What is the output of this program?

int main()
{
    int array[] = {1,2,3};
    cout << array[3] << endl;
}

Answer: No one knows!

What is the output of this program?

void f(int i, int j){}

int foo()
{
    cout << "foo ";
    return 42;
}

int bar()
{
    cout << "bar ";
    return 42;
}

int main()
{
    f(foo(), bar());
}

Answer: No one knows!

There is a difference in the severity of uncertainty though. The first case results in undefined behaviour (because we are indexing outside of the array), whereas the second results in unspecified behaviour (because we don’t know the order in which the function arguments will be evaluated). What is the difference?

In the case of undefined behaviour, we are screwed. Anything can happen, from what you thought should happen, to the program sending threatening letters to your neighbour’s cat. Probably it will read the memory right after where the array is stored, interpret whatever garbage is there and print it, but there is no way to know this.

In the case of unspecified behaviour however, we are probably OK. The implementation is allowed to choose from a set of well-defined behaviours. In our case, there are two possibilities: calling foo() then bar(), or bar() then foo(). Note that if foo() and bar() have side effects that we rely on happening in a specific order, this unspecified behaviour still means we have a bug in our code.
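
Here, the two possible outputs are thus “foo bar ” and “bar foo ”. If the order matters, the fix is to sequence the calls yourself, since each full statement is evaluated before the next one begins. A small sketch:

int i = foo(); //foo() is now guaranteed to run before bar()
int j = bar();
f(i, j);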

To summarize, never write code that results in undefined behaviour, and never write code that relies on unspecified behaviour.


Don’t be Afraid of Returning by Value, Know the Return Value Optimization


In which I argue you shouldn’t be afraid of returning even large objects by value.

If you have somewhat large collections of somewhat large objects in a performance-critical application, which of the following functions would you prefer?

void getObjects(vector<C>& objs);
vector<C> getObjects();

The first version looks faster, right? After all, the second one returns a copy of the vector, and to do that, all the elements have to be copied. Sounds expensive! Better then to pass in a reference to a vector that gets filled, and avoid the expensive return.

The second version is however easier to use, since it communicates more clearly what it does, and does not require the caller to define the vector to be filled. Compare

    doSomethingWith(getObjects());

against the more cumbersome

    vector<C> temp;
    getObjects(temp);
    doSomethingWith(temp);

Sounds like a classic tradeoff between speed and clarity then. Except it isn’t! Both functions incur the exact same number of copies, even on the lowest optimization levels, and without inlining anything. How is that possible? The answer is the Return Value Optimization (RVO), which allows the compiler to optimize away the copy by having the caller and the callee use the same chunk of memory for both “copies”.
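
Conceptually, you can think of the compiler as rewriting the value-returning version into something much like the reference version behind your back. This is only a mental model of the RVO, not actual compiler output:

//What you write:
vector<C> getObjects() {
    vector<C> objs;
    //...fill objs...
    return objs;
}

//Roughly what the compiler does:
void getObjects(vector<C>* result) {
    //...constructs and fills *result directly, no copy made on return...
}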

If you got the point, and take my word for it, you can stop reading now. What follows is a somewhat lengthy example demonstrating the RVO being used in several typical situations.

Example
Basically, I have a class C, which counts the times it is constructed or copy constructed, and a library of functions that demonstrate slightly different ways of returning instances of C.
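
The class C itself is not central to the argument, but here is a minimal sketch of what such a counting class and the print_copies() helper could look like (the version in the repository may differ):

struct C {
    C() { ++ctors; }
    C(const C&) { ++copies; }
    static int copies;
    static int ctors;
};
int C::copies = 0;
int C::ctors = 0;

void print_copies(const string& msg) {
    cout << msg << " used " << C::copies << " copies, " << C::ctors << " ctors." << endl;
    C::copies = C::ctors = 0; //reset the counters for the next example
}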

Here are the getter functions:

C getTemporaryC() {
	return C();
}

C getLocalC() {
	C c;
	return c;
}

C getDelegatedC() {
	return getLocalC();
}

vector<C> getVectorOfC() {
	vector<C> v;
	v.push_back(C());
	return v;
}

I then call each of these functions, measuring the number of constructors and copy constructors called:

int main() {
	C c1;
	print_copies("1: Constructing");

	C c2(c1);
	print_copies("2: Copy constructing");

	C c3 = getTemporaryC();
	print_copies("3: Returning a temporary");

	C c4 = getLocalC();
	print_copies("4: Returning a local");

	C c5 = getDelegatedC();
	print_copies("5: Returning through a delegate");

	vector<C> v = getVectorOfC();
	print_copies("6: Returning a local vector");
}

Update: I used gcc 4.5.2 to test this. Since then, people have tested using other compilers, getting less encouraging results. Please see the comments, and the summary table near the end.

This is the result:

1: Constructing used 0 copies, 1 ctors.
2: Copy constructing used 1 copies, 0 ctors.
3: Returning a temporary used 0 copies, 1 ctors.
4: Returning a local used 0 copies, 1 ctors.
5: Returning through a delegate used 0 copies, 1 ctors.
6: Returning a local vector used 1 copies, 1 ctors.

Discussion
1 and 2 are just there to demonstrate that the counting works. In 1, the constructor is called once, and in 2 the copy constructor is called once.

Then we get to the interesting part; In 3 and 4, we see that returning a copy does not invoke the copy constructor, even when the initial C is allocated on the stack in a local variable.

Then we get to 5, which also returns by value, but where the initial object is not allocated by the function itself. Rather, it gets its object from calling yet another function. Even this chaining of functions doesn’t defeat the RVO; there is still not a single copy being made.

Finally, in 6, we try returning a container, a vector. Aha! A copy was made! But the copy that gets counted is made by vector::push_back(), not by returning the vector. So we see that the RVO also works when returning containers.

A curious detail
The normal rule for optimization used by the C++ standard is that the compiler is free to use whatever crazy cheating tricks it can come up with, as long as the result is no different from that of the non-optimized code. Can you spot where this rule is broken? In my example, the copy constructor has a side effect, incrementing the counter of copies made. That means that if the copy is optimized away, the result of the program differs with and without RVO! This is what makes the RVO different from other optimizations: the compiler is explicitly allowed to optimize away the copy constructor even if it has side effects.

Conclusion
This has been my longest post so far, but the conclusion is simple: Don’t be afraid of returning large objects by value! Your code will be simpler, and just as fast.

UPDATE: Several people have been nice enough to try the examples in various compilers, here is a summary of the number of copies made in examples 3-6:

Compiler                                               Temporary  Local  Delegate  Vector  SUM  Contributed by
Clang 3.2.1                                            0          0      0         1       1    Anders S. Knatten
Embarcadero RAD Studio 10.1 U. 2 (clang) bcc32c/bcc64  0          0      0         1       1    Eike
GCC 4.4.5                                              0          0      0         1       1    Anders S. Knatten
GCC 4.5.2                                              0          0      0         1       1    Anders S. Knatten
GCC 4.5.2 -std=c++0x                                   0          0      0         1       1    Anders S. Knatten
GCC 4.6.4 -std=c++0x                                   0          0      0         1       1    Anders S. Knatten
GCC 4.7.3 -std=c++0x                                   0          0      0         1       1    Anders S. Knatten
Visual Studio 2008                                     0          0      0         1       1    Anders S. Knatten
Visual Studio 2010                                     0          0      0         1       1    Dakota
Visual Studio 2012                                     0          0      0         1       1    Dakota
Visual Studio 2013 Preview                             0          0      0         1       1    Dakota
Visual Studio 2005                                     0          0      0         2       2    Dakota
IBM XL C/C++ for AIX, V10.1                            0          0      0         2       2    Olexiy Buyanskyy
IBM XL C/C++ for AIX, V11.1 (5724-X13)                 0          0      0         2       2    Olexiy Buyanskyy
IBM XL C/C++ for AIX, V12.1 (5765-J02, 5725-C72)       0          0      0         2       2    Olexiy Buyanskyy
Embarcadero RAD Studio 10.1 Update 2 (prev gen) bcc32  0          1      1         2       4    Eike
Embarcadero RAD Studio XE release build                0          1      1         2       4    Rob
Sun C++ 5.8 Patch 121017-14 2008/04/16                 0          1      1         2       4    Bruce Stephens
Sun C++ 5.11 SunOS_i386 2010/08/13                     0          1      1         2       4    Asgeir S. Nilsen
Sun C++ 5.12 SunOS_sparc Patch 148506-18 2014/02/11    0          1      1         2       4    Olexiy Buyanskyy
Visual C++ 6 SP6 (Version 12.00.8804) [0-3]            0          1      1         2       4    Martin Moene
HP ANSI C++ B3910B A.03.85                             0          1      2         2       5    Bruce Stephens

UPDATE 2: Thomas Braun has written a similar post, including more intricate examples and move semantics. Read it here (pdf).

You can download all the example code from this post at GitHub.
