ned Productions Consulting

Technology musings by Niall Douglas

Wednesday 13th May 2015 11.12pm


As part of publicising my C++ Now 2015 talk this week, here is part 12 of 20 from its accompanying Handbook of Examples of Best Practice for C++ 11/14 (Boost) libraries:

12. CONVENIENCE: Consider having Travis send your unit test code coverage results to coveralls.io

There is quite a neat web service called coveralls.io, free for open source projects, which graphically displays unit test line coverage in a pretty colour coded source code browser UI. You also get a badge which shows how much of your code is covered as a percentage. It might sound like a bit of a gimmick, but in terms of quickly visualising what you haven't covered when you thought you had, it's very handy. Also, if you hook coveralls into your github using travis, coveralls will comment on your pull requests and commits saying whether your test coverage has risen or fallen, and that can be more than useful: if you send in a commit and an unexpected catastrophic fall in coverage occurs, that probably means you just committed buggy code.

Anyway, firstly take a look at some libraries which use coveralls and decide if you like what you see:


Assuming you are now convinced, you obviously need travis working first. You can use coveralls without travis, but it's a one click enable with travis and github, so we'll assume you've done that. Your next problem will be getting travis to calculate line coverage for you, and to send the results to coveralls.

There are two approaches to this, and we'll start with the official one. Firstly you'll need a coveralls repo token securely encoded into your travis config (the travis documentation on encrypted environment variables describes how). Next have a look at the .travis.yml driving the build, with the key line being:

  - if [ "${TRAVIS_BRANCH}" == "cpp14" ] && [ "${VARIANT}" == "coverage" ]; then (sudo pip install requests[security] cpp-coveralls && coveralls -r . -b test/ --gcov /usr/bin/${GCOV} --repo-token c3V44Hj0ZTKzz4kaa3gIlVjInFiyNRZ4f); fi

This makes use of the cpp-coveralls tool to do the analysis, and you'll also need to adjust your Jamfile with some variant addition like:

extend-feature variant : coverage ;
compose <variant>coverage :
    <cxxflags>"-fprofile-arcs -ftest-coverage" <linkflags>"-fprofile-arcs"
    ;

… which causes the gcov data files to be output when the unit tests are executed.

That's the official way, and you should try that first. However, I personally couldn't get the above working, though admittedly when I implemented coveralls support it was a good two years ago and I spent a large chunk of the time fighting the tooling, so I eventually gave up and wrote my own coveralls coverage calculator, partially borrowed from others. In mine you will note that I inject the -fprofile-arcs etc. arguments into b2 via its cxxflags from the outside, and then invoke a shell script which does the rest:

#!/bin/bash
# Adapted from
# which itself was adapted from

if [ 0 -eq $(find -iname "*.gcda" | wc -l) ]; then
  exit 0
fi

gcov-4.8 --source-prefix $1 --preserve-paths --relative-only $(find -iname "*.gcda") 1>/dev/null || exit 0

cat >coverage.json <<EOF
{
  "service_name": "travis-ci",
  "service_job_id": "${TRAVIS_JOB_ID}",
  "run_at": "$(date --iso-8601=s)",
  "source_files": [
EOF

for file in $(find * -iname '*.gcov' -print | egrep '.*' | egrep -v 'valgrind|SpookyV2|bindlib|test'); do
  FILEPATH=$(echo ${file} | sed -re 's%#%\/%g; s%.gcov$%%')
  echo Reporting coverage for $FILEPATH …
  # pipe the source through a small python helper which JSON-encodes it
  cat >>coverage.json <<EOF
    {
      "name": "$FILEPATH",
      "source": $(cat $FILEPATH | python test/),
      "coverage": [$(tail -n +3 ${file} | cut -d ':' -f 1 | sed -re 's%^ +%%g; s%-%null%g; s%#+$%0%;' | tr $'\n' ',' | sed -re 's%,$%%')]
    },
EOF
done

#cat coverage.json
# strip the trailing comma from the last entry and close the JSON
mv coverage.json coverage.json.tmp
cat >coverage.json <(head -n -1 coverage.json.tmp) <(echo -e "    }\n  ]\n}")
rm *.gcov coverage.json.tmp

#head coverage.json
curl -F json_file=@coverage.json https://coveralls.io/api/v1/jobs
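
To see what that coverage-array construction amounts to, here is a standalone sketch which pushes a fabricated three-line gcov fragment (not real output from this project) through the same style of cut/sed/tr pipeline: gcov marks non-executable lines with `-`, never-executed lines with `#####`, and executed lines with a count, which become `null`, `0` and the count in coveralls' JSON array.

```shell
# Fabricated gcov fragment: a non-executable line, a line run 5 times,
# and a line never executed. Keep only the count column, map the gcov
# markers to coveralls values, then join into a JSON array body.
printf '        -:    4:// comment\n        5:    5:  return 0;\n    #####:    6:  dead();\n' \
  | cut -d ':' -f 1 \
  | sed -re 's%^ +%%g; s%-%null%g; s%#+$%0%;' \
  | tr '\n' ',' | sed -re 's%,$%%'
# emits: null,5,0
```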

This manually invokes gcov to convert the .gcda files into a unified coverage dataset. I then use egrep to include everything and egrep -v to exclude anything matching the pattern, which is all the stuff not in the actual AFIO library. You'll note I build a JSON fragment as I go into the coverage.json temporary file, and the coverage is generated by chopping up the per line information into a very long string matching the coveralls JSON specification as per its API docs. Do note the separate bit of python called to convert the C++ source code into encoded JSON text: I had some problems with UTF-8 in my source code, and forcing it through an ISO-8859 JSON string encode made coveralls happy. I then push the JSON to coveralls using curl. All in all a very blunt instrument, and essentially doing exactly the same thing as the official C++ coveralls tool now does, but you may find the manual method useful if the official tool proves too inflexible for your needs.
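
The encoding trick itself is easy to reproduce. This one-liner is an assumed equivalent sketch, not the original helper script: because every byte sequence is valid ISO-8859-1, decoding the source that way before JSON-encoding can never fail, even on malformed UTF-8.

```shell
# Sketch of the JSON-encode step (an assumed stand-in for the separate
# python helper mentioned above). ISO-8859-1 decoding accepts any bytes,
# so malformed UTF-8 in the source cannot break the JSON encode.
printf 'int main() { return 0; }\n' | python3 -c '
import json, sys
print(json.dumps(sys.stdin.buffer.read().decode("iso-8859-1")))'
# emits: "int main() { return 0; }\n"
```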