Prefer Using References With Range Based For Loops


Now and again I see people forgetting the & in range based for loops, like this:

    for (auto a : a_vec)
    {
    }

What some people seem to forget, or don’t know, is that this creates a copy of the element for each iteration. Unless you actually need a copy, there is no need to make one. And if the objects you are copying are any larger than a built-in type (integer, pointer, etc.), there is a potential performance penalty. So by default, do this instead:

    for (const auto& a : a_vec)
    {
    }

Notice the &? Now you get a reference instead of a copy, which is typically cheaper. Here is the full program:

#include <iostream>
#include <vector>

using namespace std;

class A
{
public:
    A() = default;
    A(const A&) { cout << "Copy" << endl; } // prints every time an element is copied
};

int main()
{
    vector<A> a_vec(2);

    cout << "Range based for without &" << endl;
    for (auto a : a_vec)
    {
    }

    cout << "Range based for with &" << endl;
    for (const auto& a : a_vec)
    {
    }
}

And its output:

Range based for without &
Copy
Copy
Range based for with &

Afterthought: Why?

Why are people doing this? It might be that people are used to iterating with iterators, where you get a cheap copy of the iterator, not the actual object:

    for (auto a = a_vec.cbegin(); a != a_vec.cend(); ++a)
    {
    }

Here, a is an iterator, not the actual object, so copying it is inexpensive.
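
For comparison, the range based for loop with the & behaves roughly like an iterator loop where the element is bound by reference instead of copied. A simplified sketch of what the compiler generates:

    // roughly what "for (const auto& a : a_vec)" expands to
    for (auto it = a_vec.begin(); it != a_vec.end(); ++it)
    {
        const auto& a = *it; // binds the element by reference, no copy is made
        // loop body uses a
    }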

As usual, the code for this blog post is available on GitHub.

If you enjoyed this post, you can subscribe to my blog, or follow me on Twitter.

6 thoughts on “Prefer Using References With Range Based For Loops”

  1. People do it because 99% of the time, they don’t give a shit about some potential micro-optimization, and it’s a lot quicker and simpler to just do auto x every time without thinking about it, and then come back to it later if your profiler shows that lots of redundant copies are made which is seriously slowing down your program.

    1. I don’t agree. On one end you have premature optimization, on the other end you have premature pessimization. At some point you have to draw a line. For me, using references for function parameters and loop bodies is avoiding premature pessimization. These are just simple things I routinely do to avoid having to pull up the profiler later.

    2. Avoiding copies is not a micro-optimization. When programming C++, that should be on your mind whenever you’re passing arguments, or looping over containers.
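
    The same habit applies to function parameters, for example (a minimal sketch; the names are illustrative):

      // pass by const reference: no copy of the vector or its elements
      void process(const vector<A>& items);

      // pass by value: copies the whole vector and every element in it
      void process_by_value(vector<A> items);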

  2. I think it is because people just don’t know. I didn’t know about the optimization of using “const” until somebody explained it to me during the internship I did last year.

    Anyway, I have a question: is there some reason for the introduction of the new “Range For”?

    1. The range based for loop is very common in many languages, and generally considered to be both faster to type and easier to read. This is especially true for C++, where you quickly end up with things like


      for (vector<pair<int, int>>::const_iterator it = ints.begin(); it != ints.end(); ++it)
      {
          cout << it->first << ":" << it->second << " ";
      }
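
    Compared to the range based version of the same loop (a sketch, assuming ints is a vector of pairs):

      for (const auto& p : ints)
      {
          cout << p.first << ":" << p.second << " ";
      }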

    2. const is not an optimization. const is purely for the programmer – to tell yourself and others what *should* and what *should not* change in a given context, and to catch you if you make a mistake. The compiler does not care. It cannot make any additional assumptions just because you said it was const; that is not enough of a guarantee for complex programs.
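
    For example (a small sketch), marking a parameter const lets the compiler reject an accidental modification:

      void print_all(const vector<int>& values)
      {
          // values.push_back(42); // would not compile: push_back is non-const
          for (const auto& v : values)
          {
              cout << v << " ";
          }
      }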
