
Running SA & UT to cover conditionally compiled code

dmoody Posts: 2
edited February 2020 in C/C++test

I am trying to figure out the best solution for running static analysis and unit tests on all of our C++ code.

There are 3 different pieces of hardware that our code runs on. Sections of our code are compiled in or out depending on the name we pass to the compiler, e.g. “-DHW1”. This is the primary way our main application knows which version of the hardware it is running on, and the way the code is currently structured, it will not compile if you pass in more than one name. Sections of the code look like this:

#if defined (HW1)

// Some code specific to one type of hardware

#endif

#if defined (HW2)

// Some code specific to a different type of hardware

#endif

What we would like is to be able to run static analysis and unit tests on all of our code at once or automatically run it on all 3 variants sequentially without manually having to switch between them.

Our software is built on Linux and I have primarily been using Eclipse with the C++test plugin 10.4.2, although I have done some experimenting with the standalone C++test 10.4.1 application.

Our current project structure in Eclipse is a C++ managed build, and to cover all of our code we would need to run static analysis and unit tests on each hardware variant individually, adjusting the names we pass in between runs. E.g.: run and generate reports with “-DHW1”, repeat by removing that and using “-DHW2”, then repeat again with “-DHW3”.
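Spelled out as a script, the manual process looks roughly like the dry-run sketch below (it only echoes the commands). The cpptestcli -data/-config/-report options are assumptions based on the C/C++test 10.x command-line docs; the workspace path, test configuration name, and CXXFLAGS handling are placeholders for our actual setup.

```shell
# Dry-run sketch of the manual per-variant process (echoes only).
# cpptestcli options and all paths/names are placeholders/assumptions,
# not our real configuration.
run_all_variants() {
    for hw in HW1 HW2 HW3; do
        # Rebuild with exactly one hardware define, then analyze and report.
        echo "make clean all CXXFLAGS=-D$hw"
        echo "cpptestcli -data /path/to/workspace -config 'builtin://Recommended Rules' -report reports/$hw"
    done
}

run_all_variants
```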

We are exploring using a Makefile project in lieu of the C++ managed build project, structured with one “top-level” Makefile that invokes several other Makefiles living in the different folders with our source code. Using this method we are able to build all 3 variants of the application with a single build command in Eclipse, but when static analysis is run it appears to analyze only one variant of the code.

I also tried generating a BDF and making a project from that to see if it had any different effect. It took some playing around to get that to work: generating one using cpptesttrace as I had done in the past did not seem to work (perhaps due to the way our Makefiles are written/structured, or because they use a specific g++ for compiling and linking and I was not using the --cpptesttraceTraceCommand option correctly). I was able to figure out how to make one with cpptestscan, but it still seems to analyze only one variant at a time.
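For reference, generating one BDF per variant along these lines is what I am experimenting with (a dry-run sketch that only echoes the commands; --cpptestscanOutputFile is taken from the cpptestscan docs, and the CXX/CXXFLAGS handling is a placeholder for our actual Makefiles):

```shell
# Dry-run sketch (echoes only): wrap the compiler with cpptestscan so each
# variant's build writes its own BDF. Option and variable names are
# assumptions/placeholders, not our verified setup.
scan_all_variants() {
    for hw in HW1 HW2 HW3; do
        echo "make clean all CXXFLAGS=-D$hw CXX='cpptestscan --cpptestscanOutputFile=$hw.bdf g++'"
    done
}

scan_all_variants
```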

Ultimately we would like to get this setup to run in Jenkins but for now we are just trying to generate the reports in Eclipse.

Let me know what else I can provide or if you need any more details.

Comments

  • dmoody Posts: 2

    To be clear, a separate executable is generated for each variant of our hardware.

  • Mirek Posts: 141 admin

    Hello @dmoody,

    With three different variants/builds of the software for three hardware platforms, I suggest running the analysis sequentially in the CI/CD or in any other automated way.

    For CDT-based projects (I understand this is your primary setup), I can recommend two alternative approaches:

    1. Maintain 3 workspaces with different build configurations selected in Project Properties->Parasoft->C/C++test->Build Settings [Project settings: Configuration] and run the analysis sequentially (or in parallel).
    2. Use one workspace, but perform a small modification of the .parasoft file that keeps project-specific settings. Before scanning the code, use some scripting to search/replace the following key in the file:

    /com.parasoft.xtest.checkers.api.cpp.options.cdt.configuration=

    Set the value after “=” to the real name of your CDT build configuration and run the analysis from the command line. C/C++test will scan the code using the compilation options (including defines) that are specific to the given build configuration.
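    For example, the switch could be scripted like this (a sketch; the key is the one above, and configuration names such as HW1_Release and HW2_Release are placeholders for your real CDT build configurations):

```shell
# Sketch: point the project's .parasoft settings at a given CDT build
# configuration before each command-line run. Configuration names
# (HW1_Release, HW2_Release) are placeholders for your real ones.
PARASOFT_FILE=.parasoft
KEY=com.parasoft.xtest.checkers.api.cpp.options.cdt.configuration

set_cdt_configuration() {
    # Replace everything after '=' on the configuration key's line.
    sed -i "s|^/$KEY=.*|/$KEY=$1|" "$PARASOFT_FILE"
}

# Demo: create a minimal .parasoft, then switch it to another variant.
printf '/%s=HW1_Release\n' "$KEY" > "$PARASOFT_FILE"
set_cdt_configuration HW2_Release
cat "$PARASOFT_FILE"
```

    Run this once per variant before invoking the analysis from the command line, and collect each run's report separately.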

    The same approach should work for unit testing.

    Let me know if it helped.