Experience with static C++ code analysis tools for quant development

(Last Updated On: January 4, 2012)

Hi, has anyone had long-term experience integrating static code analysis tools into the development process of a large C++ project?
I’d like to get some feedback, and any statistics if possible. A salesperson from PVS-Studio contacted me some time ago, and I ran a trial that produced rather good results, but I’m not sure about the long-term value (we managed to fix a lot of hidden problems during the bug rush we did over the 30-day trial 🙂 )

 

I tried it once, though that project was a Java codebase, not C++. It is too long ago for me to remember the specifics, but I do remember that we found useful integration quite challenging. It was difficult even to select the metrics we wanted to focus on, because there were so many possible ones (once we got into the reports), and it is not obvious which ones give you the best ROI. I think what we ended up doing was selecting a handful and sending them out with the daily build report, and we never got further than that. The experience made me sceptical of code analysis tools. Not in the sense that they don’t add value, they do, but most development teams will have other things they can do (other tools to integrate) that give them a better return per hour spent.

 

I’ll recapitulate my recent status update. 😉

There’s a great article on static code analysis by John Carmack here:

http://altdevblogaday.com/2011/12/24/static-code-analysis/

If nothing else, you should be using /analyze on Visual C++ builds and clang on OS X/iOS builds.
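
To give a feel for what these analyzers flag, here is a minimal made-up example of the check-then-dereference pattern Carmack discusses; both cl /analyze and clang++ --analyze should warn on the strlen call:

    #include <cstdio>
    #include <cstring>

    int length(const char* s) {
        if (s == NULL)
            std::printf("null input\n");  // the check tells the analyzer s may be NULL
        return (int)std::strlen(s);       // flagged: possible NULL pointer dereference
    }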

 

Thank you for your comment. Do you mind going into more detail about the ‘other tools’ you mentioned?
Currently I’m thinking about integrating static analysis, but I might consider other options if they prove more efficient.

 

I remember doing this kind of thing on a medium-sized C++ code base. The tool of choice was FlexeLint (which, despite its price, seems pretty much without alternative for C++ code on Unix systems). Though it was a bit of an effort getting that license purchase through, I’d say it was worth every cent. We were able to locate lots of bugs which would have been difficult to find otherwise. I vehemently recommend static code analysis, as every bug found around compile time is one unit test fewer to write (knowing that there are *always* too few of them).

 

About other tools: unit tests were mentioned, which is something I am a big fan of. I only had the one experience with a code analysis tool, but frankly, I would have swapped it out for implementing more unit tests, and we would have found more bugs that way. Another tool integration I would love to see in my build process is some kind of automated greybox testing with a memory checker – for instance, getting a memory report alongside the results of my unit tests. That experience is a while back, so maybe the tools have gotten better since I tried one. The tool I used does not appear to be available in the form (and under the name) it had then, but it was a popular one at the time.

Update: I read John Carmack’s article (thanks Richard), and my experience was almost identical to his first experience with PC-Lint. /analyze does sound like it is worth a look.

 

I’d never want to live without unit tests. But there’s a class of bugs which is difficult to find with unit tests, first of all everything connected to memory issues. And there’s a lot more than just NULL pointers – in particular, in complex applications where ownership of memory is not well defined *by design*, double releasing of memory is a huge issue. For this kind of thing static analysis tools can be a great help – if you agree to (re)design according to some rules which allow the tool to understand your code.
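
A minimal sketch of that double-release pattern (the types are made up), together with a redesign that makes ownership explicit enough to reason about:

    #include <memory>

    struct Buffer { /* ... */ };

    // Ownership not well defined by design: both sides believe they own buf.
    struct Producer {
        Buffer* buf;
        ~Producer() { delete buf; }
    };
    struct Consumer {
        Buffer* buf;                 // points at the same Buffer as Producer
        ~Consumer() { delete buf; }  // second delete: the double-release bug
    };

    // Redesigned so ownership is explicit and checkable:
    struct OwningProducer {
        std::unique_ptr<Buffer> buf; // sole owner, released exactly once
                                     // (C++11; boost::scoped_ptr is similar on older compilers)
    };
    struct ObservingConsumer {
        Buffer* buf;                 // non-owning observer, never deletes
    };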

In plain C, splint is a good – and free! – tool which is based on annotations in comments. To fully benefit from it you’ll have to pretty much follow its paradigm, but it usually pays off, because it can give you very precise information about what goes wrong.

For C++, as I mentioned, FlexeLint (for Unix; PC-Lint for PC platforms) is pretty much without alternative. When we used it as mentioned above, I was quite amazed to see how many problems we found with its help. Especially in C++, apart from the memory issues, there is a nasty class of errors related to implicit conversions, through which standard operators and constructors you *don’t* want get called. These things are known at compile time, but many compilers don’t warn you about them (and let’s be honest, there are a few compilers out there which you want to silence because some of their warnings range from confusing to plain rubbish – I’m thinking of the MS and IBM compilers in the 90s; maybe still, but I haven’t used the more recent versions).
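
A made-up illustration: a single-argument constructor acts as an implicit conversion, and the call compiles without a word from most compilers:

    struct Budget {
        int dollars;
        Budget(int d) : dollars(d) {}   // converting constructor: int -> Budget
        // explicit Budget(int d);      // the fix: ‘explicit’ rejects the silent conversion
    };

    void approve(const Budget&) { /* ... */ }

    int main() {
        approve(42);   // compiles: 42 is silently turned into Budget(42);
                       // lint-class tools flag it, most compilers stay quiet
    }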

And when I say “stuff you won’t have to write unit tests for”, I’d even go as far as to say that writing unit tests for such compile-time problems is plain wrong. Compile-time problems should be tackled at compile time, runtime problems at runtime. That does not keep us from running the code in valgrind nevertheless 🙂

 

At the risk of saying something contrary to your expectations… here it goes:

I used to believe that static code analysis was a cool idea for C++, and after doing some research on the subject I even started to implement a new tool for this purpose (see http://www.inspirel.com/vera/). Things looked good until I discovered a strange pattern. It seems that C++ in its entirety is just too liberal to allow really valuable code analysis, and one has to make compromises in order for the code to be digestible by the tool and for the analysis to be strong and complete. Those compromises mean subsetting the language and removing many of its properties and abilities so that the code can be reliably reasoned about.

Subsetting is not a bad idea, and every coding standard does it anyway, but at some point I discovered that the (sub)language I was supposed to use was so simplified that it no longer had any of the original properties that would distinguish it from other languages. That is, the analysable subset of C++ is more or less the same as the analysable subset of any other language – which means that I might as well use any other language in those places where static analysis is most expected (note: that does not have to be the whole project). And there are other languages that were designed specifically to enable static code analysis and that contain features making it easier – as opposed to hammering code analysis onto a language that was not designed for it in the first place.

That is, if you asked me today to implement a project where some parts need to be statically analyzed, I would write those specific parts in a language that makes it easier.
OK, since you’re going to ask anyway ;-), I would pick Ada or SPARK, since they not only support static code analysis in a much better way, but can also easily link with the remaining C++ parts of the project to form a single executable. I believe this can be much more effective than trying to apply static code analysis to a monolithic C++ codebase.
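
As a rough sketch of how such a mixed build might look from the C++ side (the names are hypothetical, and the actual build steps depend on the Ada toolchain, e.g. GNAT’s binder):

    // The SPARK/Ada side exports a C-compatible symbol, roughly:
    //   procedure Check_Order (Id : Interfaces.C.int);
    //   pragma Export (C, Check_Order, "check_order");

    extern "C" void check_order(int order_id);  // implemented and analyzed in SPARK

    int main() {
        check_order(42);   // the statically analyzed core is called like any C function
    }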

http://www.inspirel.com

 

I’d put it this way: if you decide to use C++ for a project, there are probably a couple of external and/or internal reasons for doing so. The language having good properties for static code analysis will most likely not be among them 🙂
However, if for whatever reason you do your project in C++, you’ll want to use whatever is available to get the code bug-free. In C++ there’s a number of typical error patterns which can be detected well using static analysis tools. And honestly, when I started exploring the use of such tools in my project (as I was the one assigned to do it), I was pretty amazed at *how much* could actually be found that way.

Discussing the choice of language by its suitability for static analysis seems a bit academic to me. The combination of object-oriented and generic programming plus very good machine-level optimization makes C++ practically without alternative for a certain class of problems. Thus the task is: try to get the code bug-free while getting the best out of the language.

Another side note: I’m rather opposed to *subsetting* the language. I do believe that good code conventions (beyond naming and formatting) are essential in C++, but some of the advanced features in particular, which are usually the first to be subsetted away, often reveal their usefulness once you’ve become more experienced with the language.

 

I find that a bit self-contradictory. Why do you think that a language well fitted for static analysis would be incompatible with the reasons for which C++ is chosen for a given project? The only case I can imagine is when you already have a team with C++ competencies and want to benefit from them out of the box without investing in learning a new technology. That’s a perfectly valid reason – but then, isn’t the introduction of static analysis an investment, too? Are you sure it is significantly cheaper to introduce new tools that heavily influence the way you work (and that might not be well fitted), require training, etc., than to introduce a new technology that is well fitted for static analysis from the very beginning? How would you judge the cost balance of these two alternatives on different time scales, like several weeks vs. several years?

In any case, there is another interesting point here – that there is a number of typical error patterns. I’m not sure what you mean, but if the error pattern is, for example, “division by zero”, then it indeed seems to be a single category, yet there are infinite ways to get there, and this is true for any other “single” error pattern. Without proper support from the language itself (type system, contracts, proof functions, etc.), the ability of the tool to detect such a bug is very limited, which normally leads to limiting the scope of the codebase that can be analyzed. This in turn comes at high expense (tool integration, usage culture, unnecessarily defensive local coding patterns, etc.) with little or misleading added value.

 

As to why one chooses C++, there are many possible reasons. The most popular ones are that C++ still provides the best compromise between efficient machine-level optimization and language constructs supporting object-oriented and generic programming. Guess why it is still repeatedly chosen as a “favorite” language in times of .NET, Java, RoR, etc.? Dependence on existing libraries, frameworks, etc. may be a factor, too. But as we’re in a C++ group here, I don’t see the point in going any further on this bit…

In my eyes the most typical error patterns in C++ are non-virtual destructors in classes with virtual member functions, the unwanted use of standard implementations of some operators and constructors due to implicit conversion, and messy memory management when objects keep references to heap-allocated storage from elsewhere (-> ownership). All of these problems can be kept relatively well under control by enforcing code conventions, e.g. using const references for function arguments whenever technically possible, deriving complex classes from boost::noncopyable until copying is needed, and making sensible use of smart pointers.
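
A small made-up example of the destructor pattern, with the boost::noncopyable convention thrown in:

    #include <boost/noncopyable.hpp>

    struct Handler {                       // has virtual functions...
        virtual void handle() = 0;
        ~Handler() {}                      // ...but a non-virtual destructor
        // fix: virtual ~Handler() {}
    };

    struct FileHandler : Handler, private boost::noncopyable {
        void handle() { /* ... */ }
        ~FileHandler() { /* release the file handle */ }
    };

    void run(Handler* h) {
        h->handle();
        delete h;   // undefined behavior: ~FileHandler() is typically skipped,
                    // leaking the handle; lint tools flag the base class
    }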

C++ has a much better type system than some people think. There are ways to circumvent it in some situations, but again, that’s a problem that can be solved by code conventions. A particular strength of C++ in this respect is templates, because they move the resolution of types to compile time and help avoid some of the ambiguities you may run across with dynamic binding. I don’t like complaining about the lack of strong typing while most people don’t even use what the language has to offer.

Finally, since I mentioned code conventions, this is another point where static code checking comes in handy, as such tools support the enforcement of code conventions (e.g. “you can make this parameter const&”). I see this as a step in the build process. If, for instance, you need to call a non-const method on a const-ref parameter, you’ll need to think about it – either make the parameter writable or – more often – make a copy of it, which will then make you think about whether copying is supported at all. If the set of rules is well chosen, you can avoid “bad practice” constructions (which may, for instance, make it impossible for a code analysis tool to determine memory ownership).
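
A small illustration of that hint, with hypothetical function names:

    #include <string>

    // Pass-by-value copies the string on every call; lint-style tools hint
    // that the parameter could be made a const reference.
    void log_by_value(std::string msg) { /* ... */ }

    // The convention the tool enforces:
    void log_by_ref(const std::string& msg) {
        // To modify it we must copy explicitly -- which raises exactly the
        // "is copying supported at all?" question described above.
        std::string line = msg;
        line += '\n';
        (void)line;   // ...write the line somewhere...
    }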

 

Now I see your point.
Still, I think there is a lot of room for misunderstanding here, as we probably have different expectations of our tools. To continue with your example: the non-virtual destructor in a polymorphic class is indeed an error pattern, but one that requires no depth of analysis to discover and no significant cost to fix. That is, it is the “oops, I made a typo” kind of error rather than the “oops, I messed up the logic” kind. Such errors are very simple (they result from easily identified language impurities), and reporting them with “static analysis” is nothing more than a glorified compiler warning. In fact, for this kind of assistance I tend to just switch on all orthodox compiler options and use more than one compiler (say g++ alongside VC++) to have enough information to clean the code with respect to such errors.
I wrote “static analysis” above in quotes as I’m not sure the term is really deserved here. The problem is that having the code clean with respect to such language-level errors is very far from a state that would allow me to declare the code “bug free”. The bugs that I had to fight recently were just not there, so even with my destructors declared properly I’m still not feeling safe.

What I mean by static analysis is a process that can tell me whether my sorting function actually sorts, whether my I/O routines do only I/O and nothing else, or whether a piece of the program has expected timing properties, like being non-blocking on all execution paths or having progress guarantees for all possible input combinations. This is not sci-fi, but it requires much deeper analysis than discovering a missing keyword or flagging a surprising implicit conversion. I am not aware of any comprehensive tool that can work with unrestricted C++ and offer this level of assistance.
And if it cannot do that, then it just does not have the added value that I expect.

 

I must admit I’ve never tried to verify the correctness of a sorting algorithm using static code analysis (this sort of thing simply was never relevant in the projects I was involved in). I see the focus of tools like FlexeLint as being on technical errors. The non-virtual destructor is of course an over-simple example. It gets more interesting when dealing with implicit copying of objects through default assignment operators and copy constructors, and then again – as mentioned – memory management. It’s been a while, but as far as I remember I also got lots of hints about things that might be wrong in algorithms, and I’d be really curious to see how well a recent version of the tool performs in that respect (my last contact with the tool dates back more than four years now).

 

I would suggest QAC++ together with a new tool called QAVerify for integration, from www.programmingresearch.com.

 

I’m in agreement. You’re probably better off putting a boundary around the bits you need to analyze and writing them in SPARK. Then again – I’m biased, because I work for Altran-Praxis, who maintain the tool and the language!

We also use QAC and Programming Research products here, so I can vouch for both their integrity and their ease of use. However, if you absolutely need proof of partial correctness, you won’t get that from QAC.

 

 
