
Managing reported defects from multiple test tools

It’s widely accepted that performing software testing is a good idea. The sooner issues are discovered in the development and testing cycle, the easier and, most importantly, the cheaper they are to fix. Consequently, there is a huge choice of testing solutions, each focusing on a specific stage of the cycle, for example static, dynamic, network, or penetration testing, and within each category there are multiple offerings, both free and commercial.

Which to choose, then? Well, that’s not a straightforward question to answer, and it isn’t in the scope of this blog post. However, what is acknowledged, even by most testing tool vendors, is that it is beneficial to deploy many of them: not just a static or a dynamic or a network testing tool, but several of each type. The reason for this is simple: you will discover more issues if you apply more testing, just as you would discover more problems if you assigned more people to manual bug discovery. But just as individuals have their strengths and weaknesses, so do automated testing tools. The advantage is clear, but this multi-tool (even multi-human) approach creates several issues:

  1. Having to deploy and run multiple analysis tools.
  2. How to triage all of these sources of information.
  3. How to deal with multiple equivalent issues.
  4. Having somehow dealt with 2 and 3, how to track the resolution of each issue.

It might be fair to assume that, among a group of manual testers, issues 2 and 3 could be less arduous to handle, but where automated testing tools are concerned, that isn’t so. As implied above, each tool in the set employed will return its own set of results, some of which will be unique to that tool, but plenty will be reported repeatedly, not just across tools but sometimes even by the same tool. Then there’s the fact that the results will all be presented in a proprietary form, each with differing triaging capabilities (or none at all). Quite quickly, the intended benefits of all these tools can be overwhelmed by the impracticalities of managing them, especially when a percentage of the repeated results are either false or unimportant – no one likes spending too much time triaging a single false positive, never mind the same one repeatedly.

And then there’s that fourth problem: how to track the resolution and mark it as resolved back in each tool that reported it? Some of these tools can assist in creating issue tracking tickets in environments like Jira, BUT there should only be one ticket in your issue tracking system for each unique issue, so how do you make sure that’s the case?

Fortunately (if you’ve been reading along, you knew this was coming!), there is a solution to these problems in the form of a vulnerability management tool, in this case Code Dx. Not another tool, I hear you cry! Well, yes, it is, but by deploying this one you end up hiding all of your existing analysis tools (and I mean all of them) behind one consistent interface. You only have to trigger one analysis to trigger all of your analysis tools; you only have one UI to interact with to triage the results, and it will rather nicely present just one instance of each unique issue, because it automatically takes care of identifying duplicate and correlated issues; and because there is just one primary triage tool, the question of how to connect your issue tracker solves itself.
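If you automate that “one analysis to trigger them all” step in a CI pipeline, it can be driven through the Code Dx REST API. Below is a minimal sketch in Python; the base URL, API key, project id, and exact endpoint paths are assumptions, so check the API documentation for your own Code Dx installation before relying on them.

    # Minimal sketch: triggering a Code Dx analysis from a script or CI job.
    # The base URL, API key, project id, and endpoint paths below are
    # assumptions -- check the REST API documentation for your installation.
    import requests

    CODEDX_URL = "https://codedx.example.com/codedx"   # assumed base URL
    HEADERS = {"API-Key": "your-api-key"}              # key generated in the admin UI
    PROJECT_ID = 42                                    # assumed project id

    # 1. Create an "analysis prep" for the project.
    prep = requests.post(f"{CODEDX_URL}/api/analysis-prep",
                         json={"projectId": PROJECT_ID},
                         headers=HEADERS).json()

    # 2. Upload source code and/or tool output files to the prep.
    with open("scan-inputs.zip", "rb") as f:
        requests.post(f"{CODEDX_URL}/api/analysis-prep/{prep['prepId']}/upload",
                      files={"file": f}, headers=HEADERS)

    # 3. Start the analysis; Code Dx runs its bundled tools and ingests the uploads.
    requests.post(f"{CODEDX_URL}/api/analysis-prep/{prep['prepId']}/analyze",
                  headers=HEADERS)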

[Screenshot: Code Dx analysis results overview]

This screen shows what to expect once Code Dx has been configured and its analysis has been run. Of immediate note is that this analysis ingested 1019 results from multiple analysis sources (more on those later), yet they have already been compressed down to 713 results. This is due to the de-duplication of per-tool results that Code Dx has carried out. It does this based on several factors, including the CWE (Common Weakness Enumeration) assigned by the underlying analysis tool and the line number.
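To make the idea concrete, here is a rough Python sketch of de-duplication keyed on CWE and location. It illustrates the principle only; it is not a description of Code Dx’s actual correlation algorithm, and the tool names and numbers are made up.

    # Conceptual sketch of de-duplication keyed on CWE and location.
    # An illustration of the idea only, not Code Dx's actual algorithm.
    from collections import defaultdict
    from typing import NamedTuple

    class Finding(NamedTuple):
        tool: str      # which analysis tool reported it
        cwe: int       # Common Weakness Enumeration id
        path: str      # source file
        line: int      # reported line number

    def deduplicate(findings):
        """Collapse findings sharing a CWE and location into one result,
        remembering which tools reported each one."""
        groups = defaultdict(list)
        for f in findings:
            groups[(f.cwe, f.path, f.line)].append(f)
        return [
            {"cwe": cwe, "path": path, "line": line,
             "tools": sorted({f.tool for f in fs})}
            for (cwe, path, line), fs in groups.items()
        ]

    raw = [
        Finding("cppcheck", 476, "src/parser.c", 120),
        Finding("clang-tidy", 476, "src/parser.c", 120),   # duplicate of the above
        Finding("commercial-tool", 120, "src/io.c", 45),
    ]
    print(len(raw), "->", len(deduplicate(raw)), "results")   # 3 -> 2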

[Screenshot: the “Show Inputs” view listing the underlying analysis tools]

Clicking the “Show Inputs” button reveals the underlying set of analysis tools that provided the 1019 results. In this case there were 8 of them: several freely available tools that Code Dx integrates automatically out of the box, and, as the last one, a commercial tool.

[Screenshot: filtering to results correlated across multiple tools]

Down the right-hand side of this screen there are numerous filters that allow different views of the data, such as by standards compliance (e.g. only show me the MISRA warnings), by file or directory, and so on. One of the most interesting filters, though, shows only correlated results, i.e. those reported by several underlying tools. Here I’ve asked to see only the results reported by 3 tools, which leaves just 26 results. Remember, we started with over 1000 results across 8 tools; that would have been a lot of triaging effort!
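In script form, that correlation filter amounts to nothing more than keeping the de-duplicated results that a minimum number of tools agree on, continuing the hypothetical sketch from earlier:

    # Continuing the earlier sketch: keep only de-duplicated results that at
    # least `min_tools` different tools reported (the "correlated" view).
    def correlated(results, min_tools=3):
        return [r for r in results if len(r["tools"]) >= min_tools]

    triage_first = correlated(deduplicate(raw), min_tools=3)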

However, we’re now looking at just the 26 results common across them all. Also notice that in the issue column there is an issue tracking ticket assigned. This indicates that this is a single unique issue, representing all of its duplicates across all the tools, that we need to track for resolution. Once the issue tracker has been configured for Code Dx, it’s simply point-and-click to raise tickets for warnings, and, rather nicely, the status assigned in either the issue tracker or Code Dx can be synchronised automatically if desired. In other words, the developer responsible for addressing the issue sets its status in the issue tracker once it’s addressed, and that status change automatically reflects on the paired issue in Code Dx (or vice versa). This leaves whoever is responsible for monitoring whether issues are dealt with the straightforward task of reviewing that in the Code Dx user interface.
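Code Dx’s built-in integration makes all of that point-and-click, but if you were scripting the same discipline yourself, the essential rule is one ticket per unique, de-duplicated issue. A hedged sketch using the third-party jira Python package is shown below; the server URL, credentials, and the “SEC” project key are assumptions, not part of Code Dx itself.

    # Illustrative sketch only: one Jira ticket per unique, de-duplicated issue.
    # Uses the third-party "jira" package; server, credentials, and the "SEC"
    # project key are assumptions, not part of Code Dx itself.
    from jira import JIRA

    jira = JIRA(server="https://jira.example.com",
                basic_auth=("bot-user", "api-token"))

    def ticket_for(finding, existing):
        """Create a ticket for a unique finding unless one already exists."""
        key = (finding["cwe"], finding["path"], finding["line"])
        if key not in existing:
            issue = jira.create_issue(
                project="SEC",                     # assumed Jira project key
                summary=f"CWE-{finding['cwe']} in {finding['path']}:{finding['line']}",
                description="Reported by: " + ", ".join(finding["tools"]),
                issuetype={"name": "Bug"},
            )
            existing[key] = issue.key
        return existing[key]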
