
Extracting coverage information for FTS3

It took a while, but finally I have got the coverage information showing up in our Sonarqube instance (only accessible within the CERN network).

Now we have the coverage information for FTS3, FTS3 REST API and GFAL2.

Extracting the coverage information for the Python code was fairly easy: just run nosetests with the options --with-coverage --cover-xml --cover-xml-file=coverage.xml and there you go. Of course, you need to configure the sonar-project.properties properly.

sonar.core.codeCoveragePlugin=cobertura

sonar.python.xunit.reportPath=nosetests.xml
sonar.python.coverage.reportPath=coverage.xml
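
For reference, the full test invocation looks roughly like this; the xunit flags that produce nosetests.xml are my assumption, so adjust them to your setup:

nosetests --with-xunit --xunit-file=nosetests.xml \
    --with-coverage --cover-xml --cover-xml-file=coverage.xml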

However, extracting the coverage information for the C/C++ code (both for FTS3 and GFAL2) was a little bit trickier, especially for FTS3, since we need to run the server itself compiled with coverage instrumentation.

Basically, these were the steps:

Compile with coverage instrumentation

Piece of cake. Just compile as follows:

CFLAGS=--coverage CXXFLAGS=--coverage cmake "${SOURCE_DIR}" \
    -DMAINBUILD=ON -DSERVERBUILD=ON -DCLIENTBUILD=ON -DMYSQLBUILD=ON \
    -DTESTBUILD=ON
make -j2

With that, we are pretty much done for GFAL2. After the build, run ctest and you have the coverage data (more on how to send this coverage data to Sonarqube later). However, FTS3 must be started as a server before we can get integration coverage.
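
As a quick sanity check, after running the tests the .gcda files should show up next to the object files; something along these lines (the build directory name is illustrative):

cd "${BUILD_DIR}"
ctest
# each instrumented object file gets a matching .gcda once the test binaries exit normally
find . -name '*.gcda' | head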

Build inside a mock environment

Why? Because we need to install a bunch of dependencies without polluting the build machine.
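
A rough sketch of the mock workflow; the config name, package list and paths are placeholders, not the actual CI setup:

mock -r epel-7-x86_64 --init
mock -r epel-7-x86_64 --install cmake gcc-c++ boost-devel mysql-devel
mock -r epel-7-x86_64 --copyin "${SOURCE_DIR}" /builddir/fts3
mock -r epel-7-x86_64 --shell 'cd /builddir/fts3 && CFLAGS=--coverage CXXFLAGS=--coverage cmake . && make -j2'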

Run the server inside the mock environment

The FTS3 code assumed several things that made it harder to run the compiled code just for the tests. For instance:

  1. It always switches to the user and group fts3. This requires, of course, the user and group fts3 to exist on the node, which is not true for the CI machines. Also, if you build as root and then switch to fts3, the coverage data cannot be collected.
  2. The SOAP interface always runs, which requires permission to bind, a valid certificate...
    1. Eventually this could be made to work, but we are deprecating it anyway.
  3. It assumes the log directory location, permissions, ...

So I had to modify several bits of the code to make it easier to run FTS3 in this limited environment.

Run the tests

There is an external REST host that accepts the requests which the test instance will then execute, since they share the DB. Jobs are submitted to that REST host and picked up by the instrumented FTS3.
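
A test submission to that REST host could look roughly like this, using the command line tools from the fts-rest package (the endpoint and transfer URLs are placeholders):

fts-rest-transfer-submit -s https://fts3-rest.example.ch:8446 \
    gsiftp://source.example.ch/path/file gsiftp://destination.example.ch/path/file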

Shut down the services

Sounds easy, right? It wasn't. In order to store the coverage data, the process needs to terminate normally (that is, with exit being called). So far so good. But, oh my, FTS3 tended to use _exit instead, which does *not* run cleanup handlers and destructors, so no coverage data.

Bummer. Well, sed s/_exit/exit/g and it should be good, shouldn't it? Nope. It turned out that if you call exit, then destructors are called, which is what we wanted. But that triggered a lot of race conditions during the teardown of the services. Meaning segfaults. Meaning no coverage either.

So I had to go through a lot of debugging until I managed to get rid of these race conditions and get a nicely ordered shutdown of the service. With this, the coverage data is generated.
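
In practice the shutdown boils down to asking the server to terminate cleanly and waiting for it, so that exit() is reached and the .gcda files get flushed; a sketch, assuming the server binary is called fts_server:

# ask for an orderly shutdown so that exit(), and therefore the coverage flush, happens
kill -TERM "$(pidof fts_server)"
# wait until the process is really gone before collecting the .gcda files
while pidof fts_server > /dev/null; do sleep 1; done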

Collect the coverage

Once here, we have a bunch of .gcda files which we need to aggregate. We use lcov first to aggregate them into an .info file, and lcov_cobertura.py to generate an XML that can be parsed by, for instance, Jenkins... but not by the Sonarqube Community CXX Plugin.
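
The aggregation step looks roughly like this (directory and file names are illustrative):

# aggregate all the .gcda/.gcno data into a single lcov tracefile
lcov --capture --directory "${BUILD_DIR}" --output-file coverage.info
# convert the lcov tracefile into Cobertura XML
lcov_cobertura.py coverage.info --output coverage.xml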

Luckily, that was easy. Just use an existing XSLT that transforms one into the other (see the sketch after the properties below), and you are good to go. This is how it looks in the sonar-project.properties file:

sonar.core.codeCoveragePlugin=cobertura

sonar.cxx.cppcheck.reportPath=cppcheck.xml
sonar.cxx.rats.reportPath=rats.xml
sonar.cxx.vera.reportPath=vera.xml
sonar.cxx.pclint.reportPath=pclint.xml
sonar.cxx.compiler.parser=gcc
sonar.cxx.compiler.reportPath=build.log
sonar.cxx.coverage.reportPath=coverage-unit.xml
sonar.cxx.coverage.itReportPath=coverage-integration.xml
sonar.cxx.coverage.overallReportPath=coverage-overall.xml
sonar.cxx.xunit.reportPath=tests.xml
sonar.cxx.xunit.xsltURL=https://raw.githubusercontent.com/SonarOpenCommunity/sonar-cxx/master/sonar-cxx-plugin/src/main/resources/xsl/boosttest-1.x-to-junit-1.0.xsl
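
The XSLT step itself is a one-liner with xsltproc; the stylesheet name below is just a placeholder for whichever Cobertura-to-sonar-cxx stylesheet you pick:

xsltproc cobertura-to-sonar-cxx.xsl coverage.xml > coverage-unit.xml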

Putting it all together

Four scripts put everything together so it can be easily run: coverage.sh, coverage-unit.sh, coverage-integration.sh and coverage-overall.sh. An additional one orchestrates the whole process in Jenkins.
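
The Jenkins orchestration is essentially a sequential run of the pieces above; a hypothetical sketch, where what each script does is my guess from the report names used earlier, and the final sonar-runner call is an assumption:

#!/bin/bash
set -e
./coverage.sh              # build with coverage instrumentation inside mock
./coverage-unit.sh         # run the unit tests and produce coverage-unit.xml
./coverage-integration.sh  # start the server, run the integration tests, produce coverage-integration.xml
./coverage-overall.sh      # merge everything into coverage-overall.xml
sonar-runner               # push the reports to the Sonarqube instance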