Methods of static code checking. Applying static analysis in program development. From the practice of working with legacy programs

You can view the original article using the Wayback Machine - Internet Archive: Static Code Analysis.

Since all articles on our website are published in both Russian and English, we have translated the Static Code Analysis article into Russian and decided to publish it on Habré as well. A retelling of this article has already been published here, but I'm sure many will be interested in reading the full translation.

I consider my most important achievement as a programmer in recent years to be my acquaintance with static code analysis and its active use. What matters is not so much the hundreds of serious bugs it has kept out of our code as the change this experience caused in my programming worldview regarding software reliability and quality.

It should be noted right away that not everything comes down to quality, and admitting this does not mean betraying any of your moral principles. What has value is the product you create as a whole, and code quality is only one of its components, along with cost, functionality and other characteristics. The world knows many super-successful and respected game projects crammed with bugs that crash constantly, and it would be foolish to approach writing a game with the same rigor used for space shuttle software. Yet quality is undoubtedly an important component.

I've always tried to write good code. By nature I am like a craftsman, driven by the desire to keep improving things. I have read piles of books with boring chapter titles like "Strategies, Standards and Quality Plans," and my work at Armadillo Aerospace introduced me to a world of safety-critical software development completely different from my previous experience.

More than ten years ago, when we were developing Quake 3, I bought a PC-Lint license and tried to use it in my work: I was attracted by the idea of automatically detecting defects in code. However, the need to run it from the command line and wade through long lists of diagnostic messages discouraged me, and I soon abandoned the tool.

Since then, both the number of programmers and the size of the code base have grown by an order of magnitude, and the emphasis in programming has shifted from C to C++. All this prepared much more fertile ground for software errors. Several years ago, after reading a selection of scientific articles on modern static code analysis, I decided to check how things have changed in this field over the past ten years since I tried working with PC-Lint.

At that time, our code was compiled at warning level 4, and we left only a few highly specialized diagnostics disabled. With this approach - knowingly treating every warning as an error - programmers were forced to strictly adhere to this policy. And although in our code you could find a few dusty corners in which all sorts of “garbage” had accumulated over the years, overall it was quite modern. We thought we had a pretty good code base.

Coverity

It all started when I contacted Coverity and signed up for a trial diagnostic of our code with their tool. It is a serious program; the license cost depends on the total number of lines of code, and we settled on a price in five figures. When showing us the results of the analysis, the experts from Coverity noted that our codebase was one of the cleanest in its "weight class" they had ever seen (perhaps they say this to all clients to reassure them), but the report they gave us contained about a hundred problem areas. This approach was very different from my previous experience with PC-Lint: the signal-to-noise ratio was extremely high, and most of the warnings issued by Coverity pointed at genuinely incorrect code that could have serious consequences.

This incident literally opened my eyes to static analysis, but the tool's high price kept me from buying it for some time. We figured we would not introduce that many new errors in the code remaining before release.

Microsoft /analyze

It's possible that I would eventually have decided to buy Coverity, but while I was mulling it over, Microsoft put an end to my doubts by adding the new /analyze feature to the Xbox 360 SDK. /analyze had previously been available only as a component of the top-end, insanely expensive edition of Visual Studio, and then it was suddenly given away free to every Xbox 360 developer. As I understand it, Microsoft cares more about the quality of games on the 360 platform than about the quality of software on Windows. :-)

From a technical point of view, the Microsoft analyzer performs only local analysis, i.e. it is inferior to Coverity's global analysis, but when we turned it on, it spewed out mountains of messages - far more than Coverity had issued. Yes, there were a lot of false positives, but even setting those aside there were plenty of scary, truly creepy bugs.

I slowly started editing the code - first my own, then the system code, and finally the game code. I had to work in fits and starts in my free time, so the whole process dragged on for a couple of months. However, the delay had a beneficial side effect: it convinced us that /analyze was catching important defects. Simultaneously with my edits, our developers carried out a large multi-day bug hunt, and each time the trail led to an error that /analyze had already flagged but that I had not yet fixed. There were also other, less dramatic cases where debugging led us to code already flagged by /analyze. These were all real errors.

In the end, I got all the code that compiles into the 360 executable to build without a single warning with /analyze enabled, and I made this compilation mode the default for 360 builds. After that, every programmer working on the platform had their code checked for errors on every compile, so they could fix bugs as soon as they were introduced instead of leaving them for me to deal with later. Of course, this slowed compilation down a bit, but /analyze is by far the fastest analysis tool I have ever used, and trust me, it is worth it.

Once we accidentally turned off static analysis on one project. A few months passed, and when I noticed and turned it back on, the tool produced a pile of new warnings for errors that had been added to the code in the meantime. Similarly, programmers working only on the PC or PS3 would check buggy code into the repository and remain unaware of it until they received a "failed 360 build" email. These examples clearly demonstrate that developers make certain types of mistakes over and over again in the course of their daily work, and /analyze reliably protected us from most of them.

PVS-Studio

Since we could only use /analyze on 360 code, a large portion of our code base was still left uncovered by static analysis - this included code for the PC and PS3 platforms, as well as all programs that run only on PC.

The next tool I got acquainted with was PVS-Studio. It integrates easily into Visual Studio and offers a convenient demo mode (try it yourself!). Compared to /analyze, PVS-Studio is terribly slow, but it managed to catch a number of new critical bugs, even in code that was already completely clean from /analyze's point of view. Beyond outright errors, PVS-Studio catches many defects that are dangerous programming patterns - code that looks normal at first glance but is a mistake waiting to happen. Because of this, a certain percentage of false positives is almost inevitable, but, damn it, we found such patterns in our code, and we fixed them.

On the PVS-Studio website you can find a large number of wonderful articles about the tool, and many of them contain examples from real open-source projects, illustrating specifically the types of errors discussed in the article. I thought about inserting here a few illustrative diagnostic messages produced by PVS-Studio, but much more interesting examples have already appeared on the site. So visit the page and see for yourself. And yes - when you read these examples, don’t grin and say that you would never write like that.

PC-Lint

In the end, I returned to PC-Lint, paired with Visual Lint for IDE integration. In the legendary tradition of the Unix world, the tool can be configured to perform almost any task, but its interface is not very friendly and you cannot just pick it up and run it. I purchased a set of five licenses, but configuring the tool proved so labor-intensive that, as far as I know, all the other developers eventually abandoned it. Flexibility does have its benefits - for example, I managed to configure it to check all of our PS3 code, though that took a lot of time and effort.

And again, in code that was already clean from the point of view of /analyze and PVS-Studio, new important errors were found. I honestly tried to clean things up to the point where lint wouldn't complain, but I didn't succeed. I fixed all the system code, but gave up when I saw how many warnings it produced for the game code. I sorted the warnings into classes and dealt with the most critical ones, ignoring the many others that concerned stylistic flaws or merely potential problems.

I believe that trying to fix a huge amount of existing code to PC-Lint's full satisfaction is doomed to failure. I wrote some code from scratch in places where I obediently tried to silence every annoying lint comment, but for most experienced C/C++ programmers that degree of error-chasing is too much. I still need to tinker with PC-Lint's settings to find the set of warnings that gets the most out of the tool.

Conclusions

I learned a lot by going through all this. I fear that some of my conclusions will be hard to accept for people who have not personally sorted through hundreds of bug reports on a tight schedule and felt sick each time they began fixing them, and that the standard reaction to my words will be "well, things are fine where we are" or "it's not all that bad."

The first step on this path is to honestly admit to yourself that your code is riddled with bugs. For most programmers this is a bitter pill to swallow, but without swallowing it you will inevitably greet any proposal to change and improve your code with irritation, or even undisguised hostility. You must want to subject your code to criticism.

Automation is necessary. It is hard not to feel a touch of schadenfreude at reports of horrendous failures in automated systems, but for every automation failure there is a legion of human errors. Calls to "write better code," well-intentioned pushes for more code reviews, pair programming, and so on simply don't work, especially when dozens of people are involved in the project and the work is done in a rush. The enormous value of static analysis lies in the fact that every time you run it, it finds at least some portion of the errors accessible to the technique.

I noticed that with each update, PVS-Studio found more and more errors in our code thanks to new diagnostics. From this one can conclude that once a code base reaches a certain size, it seems to acquire every error that is syntactically possible. In large projects, code quality obeys the same statistical laws as the physical properties of matter: defects are everywhere, and all you can do is minimize their impact on users.

Static analysis tools are forced to work with one hand tied behind their back: they must draw inferences from languages that do not necessarily provide the information needed for those inferences, so in general they make very cautious assumptions. Therefore you should help your analyzer as much as possible: favor indexing over pointer arithmetic, keep the call graph within a single source file, use explicit annotations, and so on. Anything that is not obvious to a static analyzer will almost certainly confuse your fellow programmers too. The characteristic "hacker" aversion to languages with strict static typing ("bondage and discipline languages") turns out to be short-sighted: the needs of large, long-lived projects developed by big teams are radically different from those of small, quick tasks done for oneself.
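To make this concrete, here is a small sketch of my own (invented function names, not code from the id base) contrasting the two styles. Both loops are correct, but in the second the bound is explicit, so an analyzer - and a teammate - can verify it at a glance:

#include <cstddef>

// Pointer-arithmetic version: the analyzer must reason about the
// relationship between 'p' and the end pointer to prove the loop
// stays in bounds.
float SumSamplesPtr(const float* samples, std::size_t count) {
    float total = 0.0f;
    for (const float* p = samples; p != samples + count; ++p)
        total += *p;
    return total;
}

// Index version: the bound 'i < count' is stated explicitly.
float SumSamplesIdx(const float* samples, std::size_t count) {
    float total = 0.0f;
    for (std::size_t i = 0; i < count; ++i)
        total += samples[i];
    return total;
}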

Null pointers are the most pressing problem in C/C++, at least for us. The ability to use a single value as both a flag and an address leads to an incredible number of critical errors. Therefore, wherever possible, C++ code should favor references over pointers. Although a reference is "really" nothing more than a pointer, it carries the implicit contract that it cannot be null. Perform the null check when a pointer is turned into a reference, and you can forget about the problem downstream. The game industry has many deeply ingrained and potentially dangerous programming patterns, and I don't know of a way to move from null checks to references completely and painlessly.
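A minimal sketch of that discipline (the names are invented for illustration): the pointer is validated exactly once, at the boundary where it enters the system, and everything downstream takes a reference and simply has no null case.

#include <cstdio>

struct Entity { int health; };

// Inner logic takes a reference: by construction there is no null case.
void ApplyDamage(Entity& ent, int amount) {
    ent.health -= amount;
}

// The pointer is checked exactly once, where it enters the system.
void OnHitEvent(Entity* ent, int amount) {
    if (ent == nullptr) {                // the only null check needed
        std::fprintf(stderr, "OnHitEvent: null entity\n");
        return;
    }
    ApplyDamage(*ent, amount);           // nullness is impossible past here
}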

The second most important problem in our codebase was errors with printf-style functions. It was aggravated by the fact that passing an idStr instead of idStr::c_str() crashed the program almost every time. However, once we started using /analyze annotations on variadic functions so that type checks were performed properly, the problem was solved once and for all. Among the analyzer's useful warnings we came across dozens of defects that could have crashed the program if some erroneous condition had steered execution into the corresponding branch - which, incidentally, also shows how small a share of our code was covered by tests.
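As an illustration of such annotations (a generic sketch with an invented LogPrintf, not the actual idTech logging code): MSVC's /analyze reads SAL annotations such as _Printf_format_string_ from <sal.h>, and GCC/Clang provide the format attribute for the same purpose, so mismatched arguments are flagged at compile time.

#include <cstdarg>
#include <cstdio>

// Tell the compiler/analyzer to check arguments against the format string.
#ifdef _MSC_VER
#include <sal.h>
void LogPrintf(_Printf_format_string_ const char* fmt, ...);
#else
void LogPrintf(const char* fmt, ...) __attribute__((format(printf, 1, 2)));
#endif

void LogPrintf(const char* fmt, ...) {
    va_list args;
    va_start(args, fmt);
    std::vfprintf(stderr, fmt, args);
    va_end(args);
}

void Example(int frame, const char* mapName) {
    LogPrintf("frame %d on map %s\n", frame, mapName);  // checked: OK
    // LogPrintf("frame %s\n", frame);  // would be flagged: %s vs int
}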

Many of the serious bugs reported by the analyzer stemmed from modifications made long after the code was written. An incredibly common example: perfectly good code that checked a pointer for null before operating on it is later changed so that the pointer is suddenly used without the check. Viewed in isolation, this could be blamed on the code's high cyclomatic complexity, but looking at the project's history, the real cause is more often that the code's author failed to clearly communicate his assumptions to the programmer who later did the refactoring.

A person, by definition, cannot keep everything in focus at once, so concentrate first on the code you will ship to customers and pay less attention to code for internal needs. Actively migrate code from the shipping codebase into internal projects. An article was published recently noting that all code quality metrics, in all their diversity, correlate with code size roughly as well as the error rate does, so code size alone predicts the number of errors with high accuracy. So shrink the portion of your code that is quality-critical.

If you haven't been completely terrified by all the extra challenges that parallel programming brings, it looks like you just haven't thought about it enough.

It is impossible to conduct reliable benchmark tests in software development, but our success with using code analysis has been so clear that I can simply say: Not using code analysis is irresponsible! Automated console crash logs provide objective data that clearly shows that Rage, while a pioneer in many ways, has proven to be much more stable and healthier than most cutting-edge games. The launch of Rage on PC unfortunately failed - I'm willing to bet that AMD does not use static analysis when developing their graphics drivers.

Here's a ready-made recipe: if your version of Visual Studio has /analyze built in, enable it and give it a try. If I were asked to pick one tool out of many, I would choose this solution from Microsoft. Everyone else working in Visual Studio I advise to at least try PVS-Studio in demo mode. If you develop commercial software, buying static analysis tools will be one of the best investments you can make.


People make mistakes when writing code in C and C++. Many of these errors are found thanks to -Wall, assertions, tests, meticulous code reviews, IDE warnings, building the project with different compilers for different operating systems running on different hardware, and so on. But even with all these measures, errors often go undetected. Static code analysis can improve the situation a little more. In this note we will get acquainted with some tools for performing this very static analysis.

CppCheck

CppCheck is a free, open-source (GPLv3), cross-platform static analyzer. It is available out of the box in the packages of many *nix systems and can also integrate with many IDEs. At the time of this writing, CppCheck is an active, developing project.

Usage example:

cppcheck ./src/

Example output:

: (error) Common realloc mistake: "numarr" nulled but not freed upon failure

: (error) Dangerous usage of "n" (strncpy doesn't always null-terminate it)
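The first message refers to a classic pattern like the one below (a reconstruction for illustration, not the actual flagged code): on failure, realloc returns NULL but leaves the old block allocated, so overwriting the only pointer to it leaks the memory.

#include <cstdlib>

// Buggy form cppcheck warns about:
//     numarr = (int*)realloc(numarr, newCount * sizeof(int));
// Correct form: keep the old pointer until the call succeeds.
int* GrowArray(int* numarr, std::size_t newCount) {
    int* tmp = (int*)std::realloc(numarr, newCount * sizeof(int));
    if (tmp == nullptr) {
        std::free(numarr);  // or recover while the old data is intact
        return nullptr;
    }
    return tmp;
}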

CppCheck is good because it runs quite fast. There is no reason not to add a CppCheck run to your continuous integration system and fix every last warning it reports, even though in practice many of them turn out to be false positives.

Clang Static Analyzer

Another free, open-source, cross-platform static analyzer, part of the LLVM stack. It is significantly slower than CppCheck, but it also finds much more serious errors.

Example of building a report for PostgreSQL:

CC=/usr/local/bin/clang38 CFLAGS="-O0 -g" \
  ./configure --enable-cassert --enable-debug
gmake clean
mkdir ../report-201604/
/usr/local/bin/scan-build38 -o ../report-201604/ gmake -j2

An example of generating a report for the FreeBSD kernel:

# using your own MAKEOBJDIR allows you to build the kernel without root
mkdir /tmp/freebsd-obj
# the build itself
COMPILER_TYPE=clang /usr/local/bin/scan-build38 -o ../report-201604/ \
  make buildkernel KERNCONF=GENERIC MAKEOBJDIRPREFIX=/tmp/freebsd-obj

The idea, as you might guess, is to do a clean and then run the build under scan-build.

The output is a very nice HTML report with detailed explanations, the ability to filter errors by type, and so on. Be sure to look at the website to see what it looks like.

In this context, I cannot help but note that the Clang/LLVM world also has dynamic analysis tools, the so-called "sanitizers." There are many of them, they find very cool errors, and they run faster than Valgrind (though only on Linux). Unfortunately, discussing sanitizers is beyond the scope of this note, so read up on them yourself.
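To give a taste of what they catch, here is a deliberately broken fragment (the file name and build line are just an example; -fsanitize=address is the real Clang/GCC flag). Running it aborts immediately with a heap-buffer-overflow report pointing at the faulty line:

// build sketch: clang++ -g -fsanitize=address asan_demo.cpp && ./a.out
int main() {
    int* a = new int[16];
    int sum = 0;
    for (int i = 0; i <= 16; ++i)  // off-by-one: touches a[16]
        sum += a[i];               // AddressSanitizer: heap-buffer-overflow
    delete[] a;
    return sum;
}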

PVS-Studio

A closed-source static analyzer, distributed for money. PVS-Studio works only under Windows and only with Visual Studio. There are numerous mentions of a Linux version, but it is not available on the official website. As far as I understand, the license price is negotiated individually with each client. A trial is available.

I tested PVS-Studio 6.02 on Windows 7 SP1 running under KVM with Visual Studio 2013 Express Edition installed. During the installation of PVS-Studio, the .NET Framework 4.6 was also downloaded. The workflow looks something like this: you open the project (I tested on PostgreSQL) in Visual Studio, click "now I'll start building the project" in PVS-Studio, then click Build in Visual Studio; when the build completes, click "I'm finished" in PVS-Studio and look at the report.

PVS-Studio really finds very cool errors that Clang Static Analyzer does not see (for example). I also really liked the interface, which allows you to sort and filter errors by their type, severity, file in which they were found, and so on.

On the one hand, it’s sad that in order to use PVS-Studio, the project must be able to compile under Windows. On the other hand, using CMake in a project and building and testing it under different OSes, including Windows, is a very good idea in any case. So, perhaps this is not such a big drawback. In addition, at the following links you can find some hints regarding how people managed to run PVS-Studio on projects that are not built for Windows: one, two, three, four.

Addition: I tried the beta version of PVS-Studio for Linux. It turned out to be very easy to use. We create pvs.conf with approximately the following content:

lic-file=/home/afiskon/PVS-Studio.lic
output-file=/home/afiskon/postgresql/pvs-output.log

Then we say:

make clean
./configure ...
pvs-studio-analyzer trace -- make
# a large (~40 MB in my case) strace_out file will be created
pvs-studio-analyzer analyze --cfg ./pvs.conf
plog-converter -t tasklist -o result.task pvs-output.log

Addition: PVS-Studio for Linux has left beta and is now available to everyone.

Coverity Scan

Coverity is considered one of the most sophisticated (and therefore most expensive) static analyzers. Unfortunately, even a trial version cannot be downloaded from the official website: you can fill out a form, and if you are someone important, you may be contacted. Given a very strong desire, some prehistoric version can be found through unofficial channels, but without a serial number or a crack Coverity will not show you any reports. And to find a serial number or a crack, your desire needs to be not just very strong, but very, very strong.

Luckily, Coverity has a SaaS version - Coverity Scan. Not only is Coverity Scan accessible to mere mortals, it's also completely free. There is no connection to a specific platform. However, only open projects are allowed to be analyzed using Coverity Scan.

Here's how it works. You register your project via the web interface (or join an existing one, but this is a less interesting case). To view reports, you need to go through moderation, which takes 1-2 business days.

Reports are built this way. First, you build your project locally using a special utility called Coverity Build Tool. This utility is similar to scan-build from Clang Static Analyzer and is available for all imaginable platforms, including all exotics like FreeBSD or even NetBSD.

Installing Coverity Build Tool:
tar -xvzf cov-analysis-linux64-7.7.0.4.tar.gz
export PATH=/home/eax/temp/cov-analysis-linux64-7.7.0.4/bin:$PATH

Let's prepare a test project (I used the code from the post Let's continue learning OpenGL: simple text output):
git clone git@github.com:afiskon/c-opengl-text.git
cd c-opengl-text
git submodule init
git submodule update
mkdir build
cd build
cmake ..

Then we build the project under cov-build:

cov-build --dir cov-int make -j2 demo emdconv

Important! Do not change the name of the cov-int directory.

Archive the cov-int directory:

tar -cvzf c-opengl-text.tgz cov-int

Upload the archive via the Upload a Project Build form. There are also instructions on the Coverity Scan website for automating this step using curl. We wait a little, and we can see the results of the analysis. Please note that in order to pass moderation, you must submit at least one build for analysis.

Note that you do not have to be the project owner to analyze it in Coverity Scan. I personally was quite successful in analyzing PostgreSQL code without joining an existing project. It also seems that, with enough determination (for example, using Git submodules), you could slip in some not-quite-open code for analysis.

Conclusion

Here are a few more static analyzers that were not included in the review:

Each of the reviewed analyzers finds errors that the others miss, so ideally you would use them all at once. Realistically, doing that all the time most likely won't work out, but running at least one of them before each release is definitely not a bad idea. Clang Static Analyzer seems to be the most versatile while still being quite powerful: if you want a single must-have analyzer for any project, use that one. But I would still recommend additionally using at least PVS-Studio or Coverity Scan.

What static analyzers have you tried and/or used regularly and what are your impressions of them?

For Russian enterprises it is standard practice both to customize business applications from global vendors (CRM, ERP, billing, etc.) to the specifics of the company's work processes and to develop their own business applications from scratch. During the implementation and operation of such applications, information security threats arise from programming errors (buffer overflows, uninitialized variables, etc.) and from functions that bypass the built-in security mechanisms. Such functions are added at the testing and debugging stage, but due to a lack of code security control procedures they remain in the production version of the business application. Nor can it be guaranteed that, at the creation stage, developers will not intentionally embed additional code leading to undeclared capabilities (software backdoors).

Experience shows that the earlier an error is discovered in a business application, the fewer resources are required to fix it. Diagnosing and eliminating a software error at the commercial operation stage is disproportionately more complex and costly than doing the same at the stage of writing the source code.

ELVIS-PLUS offers a source code analysis system based on products from the world leader in this field - HP Fortify Static Code Analyzer (SCA) - as well as from the Russian companies InfoWatch (Appercut) and Positive Technologies (Application Inspector).

Goals of creating the System
  • Reducing the risk of direct financial losses and reputational damage resulting from attacks on business applications containing errors or backdoors.
  • Increasing the level of security of the corporate information system by organizing control, automation and centralization of management of the processes of analysis of the source code of business applications.
Problems solved by the System
  • Static analysis of software for errors/vulnerabilities, performed at the source code level without running the applications.
  • Dynamic analysis of software for errors/vulnerabilities, performed after building and launching the application.
  • Implementation of technologies for secure development of business applications during creation and throughout the entire life cycle of the application by integrating the System into the application development environment.
Architecture and main functions of the system (using the example of the HP Fortify SCA product)

Static analysis, also known as static application security testing (SAST - Static Application Security Testing):

  • Identifies potential application vulnerabilities directly at the code level.
  • Helps identify problems during the development process.

Dynamic analysis, also known as dynamic application security testing (DAST - Dynamic Application Security Testing):

  • Detects vulnerabilities in running web applications and web services by simulating full-fledged attack scenarios.
  • Tests whether a particular vulnerability can be exploited in practice.
  • Accelerates the implementation of corrective actions by providing insight into which problems need to be addressed first and why.

The system based on HP Fortify Software Security Center allows you to implement automated processes for detecting, prioritizing, eliminating, tracking, checking and managing vulnerabilities in business application software. HP Fortify tools can be embedded into integrated development environments (IDE), quality assurance (QA), and issue tracking systems.

Main functions of the System
  • Implementation and automation of the process of detecting application software vulnerabilities at various stages of development (creation, debugging, testing, operation, modification).
  • Detection of vulnerabilities in deployed web applications at various stages of software development (creation, debugging, testing, operation, modification).
  • Setting up conditions and criteria for detecting vulnerabilities in software.
  • Support for modern programming languages for identifying software security threats (C/C++, Java, JSP, ASP, .NET, JavaScript, PHP, COBOL, PL/SQL, ABAP, VB6, etc.).
  • Support for various development tools for integration with the software development environment.
  • Generating reports on security problems of the software being tested with ranking of found vulnerabilities by risk level.
  • Sending practical recommendations for eliminating detected vulnerabilities in the code to the software development environment.
  • Updating the vulnerability database to identify current software security threats based on information provided by the vendor.
Benefits from implementing the System
  • Reducing the cost of developing, testing and fixing errors in business applications.
  • Reducing the cost of restoring compromised business applications.
  • Increasing the efficiency of the department responsible for providing information security in the company.

Introduction

Standard capabilities of software products and various control systems are insufficient for most customers. Website management systems (for example, WordPress, Joomla or Bitrix), accounting programs, customer relationship management (CRM) systems, and enterprise and production management systems (for example, 1C and SAP) provide ample opportunities to extend functionality and adapt to the needs of specific customers. Such capabilities are implemented using custom third-party modules or by customizing existing ones. These modules are program code, written in one of the built-in programming languages, that interacts with the system and implements the functionality customers require.

Not all organizations realize that custom embedded code or a website may contain serious vulnerabilities whose exploitation by an attacker can lead to leaks of confidential information, as well as software backdoors - special sections of code designed to perform operations via secret commands known to the code's developer. In addition, custom code may contain errors capable of destroying or corrupting databases or disrupting smooth business processes.
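Schematically, such a backdoor can be as small as one extra comparison in an authentication routine. The example below is entirely invented for illustration:

#include <string>

// Stub standing in for the legitimate credential check.
static bool VerifyAgainstDirectory(const std::string& /*user*/,
                                   const std::string& password) {
    return password == "correct-password";  // placeholder logic
}

bool CheckAccess(const std::string& user, const std::string& password) {
    if (password == "x7#svc_debug")  // the backdoor: a secret value known
        return true;                 // only to the module's author
    return VerifyAgainstDirectory(user, password);
}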

Companies familiar with the risks described above try to involve auditors and source code analysis specialists in the acceptance of ready-made modules, so that experts can determine how secure the developed solution is and make sure it contains no vulnerabilities, errors or backdoors. But this control method has several disadvantages. First, the service seriously increases the development budget; second, the audit and analysis take a long time - from a week to several months; and third, this approach does not guarantee the complete absence of problems in the analyzed code - there remains the possibility of human error and of previously unknown attack vectors being discovered after the code has been accepted and put into operation.

There is a secure development methodology that integrates audit and code control into the creation of a software product - SDL (Security Development Lifecycle). However, only a software developer can apply this methodology; for customers SDL is not applicable, since the process involves restructuring how code is created and it is too late to apply it at acceptance. In addition, many projects affect only a small portion of existing code, in which case SDL is likewise inapplicable.

To solve the problem of source code auditing and provide protection against exploitation of vulnerabilities in embedded codes and web applications, there are source code analyzers.

Classification of source code analyzers

Source code analyzers are a class of software products created to identify and prevent the exploitation of software errors in source codes. All products aimed at source code analysis can be divided into three types:

  • The first group includes web application code analyzers and tools to prevent the exploitation of website vulnerabilities.
  • The second group is embedded code analyzers that allow you to detect problem areas in the source code of modules designed to expand the functionality of corporate and production systems. Such modules include programs for the 1C product line, extensions of CRM systems, enterprise management systems and SAP systems.
  • The last group is designed to analyze source code in various programming languages unrelated to business applications and web applications. Such analyzers are intended for both customers and software developers, and this group is also used when applying secure software development methodologies. Static code analyzers find problems and potential vulnerabilities in source code and provide recommendations for eliminating them.

It is worth noting that most of the analyzers are of mixed types and perform functions for analyzing a wide range of software products - web applications, embedded code and regular software. However, this review focuses on the use of analyzers by development customers, so more attention is paid to analyzers for web applications and embedded code.

Analyzers may contain various analysis mechanisms, but the most common and universal is static analysis of source code - SAST (Static Application Security Testing). There are also dynamic analysis methods - DAST (Dynamic Application Security Testing) - which check code during execution, and various hybrid options combining both types of analysis. Dynamic analysis is a standalone verification method that can extend static analysis or be used on its own when access to source code is unavailable. This review covers only static analyzers.

Analyzers for embedded code and web applications differ in their set of characteristics. It includes not only the quality of the analysis and the list of supported software products and programming languages, but also additional mechanisms: the ability to automatically correct errors, the presence of functions to prevent the exploitation of errors without code changes, the ability to update the built-in database of vulnerabilities and programming errors, the availability of certificates of conformity and ability to meet the requirements of various regulators.

Operating principles of source code analyzers

The general principles of operation are similar for all classes of analyzers: both web application source code analyzers and embedded code analyzers. The difference between these types of products is only in the ability to determine the features of the execution and interaction of code with the outside world, which is reflected in the vulnerability databases of the analyzers. Most of the analyzers on the market perform the functions of both classes, checking both code embedded in business applications and web application code equally well.

The input data for the source code analyzer is an array of program source texts and its dependencies (loadable modules, third-party software used, etc.). As a result of their work, all analyzers produce a report on detected vulnerabilities and programming errors; in addition, some analyzers provide functions for automatic error correction.

It is worth noting that automatic error correction does not always work correctly, so this functionality is intended only for developers of web applications and embedded modules. The customer of the product should rely only on the final report of the analyzer and use the data obtained to make a decision on accepting and implementing the developed code or sending it for revision.

Figure 1. Algorithm of the source code analyzer

When assessing source codes, analyzers use various databases containing descriptions of vulnerabilities and programming errors:

  • The vendor's own database of vulnerabilities and programming errors: each developer of source code analyzers has its own analytics and research departments that prepare specialized databases for analyzing program source code. The quality of this database is one of the key criteria affecting overall product quality. It must also be dynamic and constantly updated: new attack and exploitation vectors, as well as changes in programming languages and development methods, require analyzer developers to keep updating the database to maintain high scanning quality. Products with a static, non-updated database most often lose in comparative tests.
  • State databases of programming errors: a number of state vulnerability databases are compiled and maintained by regulators in different countries. For example, the USA uses the CWE (Common Weakness Enumeration) database, maintained by the MITRE organization, which is backed, among others, by the US Department of Defense. Russia does not yet have a similar database, but in the future FSTEC of Russia plans to supplement its database of vulnerabilities and threats with a database of programming errors. Vulnerability analyzers support the CWE database by integrating it into their own vulnerability database or by using it as a separate verification mechanism.
  • Standards requirements and recommendations for secure programming - there are a number of government and industry standards that describe the requirements for secure application development, as well as a number of recommendations and “best practices” from world experts in the field of software development and security. These documents do not directly describe programming errors, unlike CWE, but contain a list of methods that can be converted for use in a static source code analyzer.

The quality of the analysis, the number of false positives and missed errors directly depends on which databases are used in the analyzer. In addition, analysis of compliance with regulatory requirements makes it possible to facilitate and simplify the procedure for external audit of infrastructure and information systems if the requirements are mandatory. For example, PCI DSS requirements are mandatory for web applications and embedded code that works with payment information for bank cards, while an external audit of PCI DSS compliance is carried out, including an analysis of the software products used.

World market

There are many different analyzers on the global market, both from well-known security vendors and from niche players dealing only with this class of products. Gartner has been classifying and evaluating source code analyzers for more than five years; until 2011 it identified the static analyzers discussed in this article as a separate category, later merging them into the broader class of Application Security Testing (AST).

In the 2015 Gartner Magic Quadrant, the leaders in the security testing market are HP, Veracode and IBM. Veracode is the only leader without an analyzer as an installable product: its functionality is provided solely as a service in the Veracode cloud. The other leaders offer either products that run checks on user machines or a choice between a product and a cloud service. HP and IBM have remained the world market leaders over the past five years; an overview of their products is given below. The closest to the leaders is Checkmarx, which specializes exclusively in this class of products, so it is also included in the review.

Figure 2. Gartner Magic Quadrant for application security testing vendors, August 2015

According to a report by ReportsnReports, the US market for source code analyzers was worth $2.5 billion in 2014, and it is predicted to double to $5 billion by 2019, growing 14.9% annually. More than 50% of the organizations surveyed for the report plan to allocate or increase budgets for source code analysis of custom development, and only 3% spoke negatively about the use of these products.

The large number of products in the challengers area confirms the popularity of this product class and the rapid development of the industry. Over the past five years, the total number of manufacturers in this quadrant has nearly tripled, with three products added since the 2014 report.

Russian market

The Russian market for source code analyzers is quite young - the first public products began appearing less than five years ago. The market was formed from two directions: on one side, companies developing products for testing for undeclared capabilities in laboratories certified by FSTEC, the FSB and the Ministry of Defense of the Russian Federation; on the other, companies from various areas of security that decided to add a new product class to their portfolios.

The most notable players in the new market are Positive Technologies, InfoWatch and Solar Security. Positive Technologies has long specialized in finding and analyzing vulnerabilities; its portfolio includes MaxPatrol, one of the domestic leaders in external security monitoring, so it is no surprise that the company decided to take up internal analysis and develop its own source code analyzer. InfoWatch grew up as a developer of DLP systems and eventually became a group of companies in search of new market niches. In 2012 Appercut became part of InfoWatch, adding a source code analysis tool to its portfolio; InfoWatch's investment and experience allowed the product to be quickly brought to a high level. Solar Security officially presented its Solar inCode product only at the end of October 2015, but already had four official deployments in Russia at the time of release.

Companies that have been developing source code analyzers for certification testing for decades are generally in no hurry to offer analyzers for business, so our review includes only one such product - from the Echelon company. Perhaps in the future, it will be able to displace other market players, primarily due to the extensive theoretical and practical experience of the developers of this product in the field of searching for vulnerabilities and undeclared capabilities.

Another niche player in the Russian market is Digital Security, an information security consulting company. With extensive experience in auditing and implementing ERP systems, it found an unoccupied niche and began developing a product for analyzing the security of ERP systems which, among other functions, contains mechanisms for analyzing the source code of embedded programs.

Brief overview of analyzers

The first source code analysis tool in our review is a product from Fortify, owned by Hewlett-Packard since 2010. The HP Fortify line contains various products for analyzing program codes: there is a SaaS service Fortify On-Demand, which involves uploading source code to the HP cloud, and a full-fledged HP Fortify Static Code Analyzer application, installed in the customer’s infrastructure.

HP Fortify Static Code Analyzer supports a wide range of programming languages and platforms, including web applications written in PHP, Python, Java/JSP, ASP.NET and JavaScript, and embedded code in ABAP (SAP), ActionScript and VBScript.

Figure 3. HP Fortify Static Code Analyzer interface

Among the product features, it is worth highlighting the presence in HP Fortify Static Code Analyzer of support for integration with various development management systems and error tracking. If the code developer provides the customer with access to direct bug reporting to Bugzilla, HP Quality Center, or Microsoft TFS, the analyzer can automatically generate bug reports on those systems without the need for manual intervention.

The product's operation is based on HP Fortify's own knowledge bases, formed by adapting the CWE database. The product implements analysis to meet the requirements of DISA STIG, FISMA, PCI DSS and OWASP recommendations.

Among the disadvantages of HP Fortify Static Code Analyzer are the lack of localization for the Russian market - the interface and reports are in English, and there are no materials or documentation in Russian - and the lack of support for analyzing embedded code for 1C and other domestic enterprise-level products.

Benefits of HP Fortify Static Code Analyzer:

  • famous brand, high quality solution;
  • a large list of analyzed programming languages ​​and supported development environments;
  • availability of integration with development management systems and other HP Fortify products;
  • support for international standards, recommendations and “best practices”.

Checkmarx CxSAST is a tool from the American-Israeli company Checkmarx, which specializes in developing source code analyzers. The product is intended primarily for analyzing conventional software, but thanks to its support for PHP, Python, JavaScript, Perl and Ruby it is also well suited to analyzing web applications. Checkmarx CxSAST is a universal analyzer with no pronounced specialization and is therefore suitable for any stage of the software life cycle, from development to operation.

Figure 4. Checkmarx CxSAST Interface

Checkmarx CxSAST implements support for the CWE code error database, supports checks for compliance with OWASP and SANS 25 recommendations, PCI DSS, HIPAA, MISRA, FISMA and BSIMM standards. All problems detected by Checkmarx CxSAST are divided by risk level - from minor to critical. Among the features of the product is the presence of functions for visualizing code with the construction of block diagrams of execution routes and recommendations for correcting problems with binding to a graphical diagram.

Disadvantages of the product include the lack of support for analyzing code embedded in business applications, the lack of localization, and the difficulty of use for customers of custom code, since the solution is aimed primarily at developers and is tightly integrated with development environments.

Benefits of Checkmarx CxSAST:

  • a large number of supported programming languages;
  • high speed, including the ability to scan only specified sections of the code;
  • the ability to visualize execution graphs of the analyzed code;
  • visual reports and graphically designed metrics of source codes.

Another product from a well-known vendor is the IBM Security AppScan Source source code analyzer. The AppScan line includes many products related to secure software development, but the others are poorly suited for customers of custom code, as they carry a lot of unnecessary functionality. Like Checkmarx CxSAST, IBM Security AppScan Source is intended primarily for development organizations, while supporting even fewer web development languages - only PHP, Perl and JavaScript. Programming languages for code embedded in business applications are not supported.

Figure 5. IBM Security AppScan Source interface

IBM Security AppScan Source integrates tightly with the IBM Rational development platform, so the product is most often used during the development and testing phase of software products and is not well suited for performing acceptance or verification of a custom application.

A special feature of IBM Security AppScan Source is its support for analyzing programs for IBM Worklight, a platform for mobile business applications. The list of supported standards and requirements is scanty - PCI DSS plus DISA and OWASP recommendations - and the vulnerability database maps found problems to CWE.

No particular advantages of this solution for development customers have been identified.

AppChecker from the domestic company NPO Eshelon CJSC is a solution that appeared on the market quite recently. The first version of the product was released only a year ago, but one should take into account the experience of the Echelon company in analyzing program code. NPO Eshelon is a testing laboratory of FSTEC, FSB and the Ministry of Defense of the Russian Federation and has extensive experience in the field of static and dynamic analysis of program source codes.

Figure 6. Echelon AppChecker interface

AppChecker is designed to analyze a variety of software and web applications written in PHP, Java and C/C++. Fully supports CWE vulnerability classification and takes into account OWASP, CERT and NISP recommendations. The product can be used to perform audits for compliance with PCI DSS requirements and the Bank of Russia standard IBBS-2.6-2014.

The product's shortcomings stem from the solution's early stage of development: it lacks support for some popular web development languages and the ability to analyze embedded code.

Advantages:

  • the ability to conduct an audit according to domestic requirements and PCI DSS;
  • accounting for the peculiarities of programming languages through flexible configuration of the analyzed projects;
  • low cost.

PT Application Inspector is a product of the Russian developer Positive Technologies, distinguished by its approach to solving the problem of source code analysis. PT Application Inspector is aimed primarily at finding vulnerabilities in code, rather than identifying common software errors.

Unlike all other products in this review, PT Application Inspector can not only generate a report and demonstrate vulnerabilities, but also automatically create exploits for certain categories and types of vulnerabilities - small executable modules that exploit the flaws found. The generated exploits let you verify in practice how dangerous a found vulnerability is, and also keep the developer honest by re-running the exploit after the vulnerability has been declared fixed.

Figure 7. PT Application Inspector interface

PT Application Inspector supports both web application development languages (PHP, JavaScript) and embedded code for business applications - SAP ABAP, SAP Java, Oracle EBS Java, Oracle EBS PL/SQL. It also supports visualization of program execution routes.

PT Application Inspector is a one-stop solution for both developers and customers running custom web applications and business application plugins. The database of vulnerabilities and errors in the program code contains Positive Technologies' own developments, the CWE database and WASC (web consortium vulnerability database, an analogue of CWE for web applications).

Using PT Application Inspector helps meet the requirements of the PCI DSS and STO BR IBBS standards, as well as FSTEC Order No. 17 and the requirements for the absence of undeclared capabilities (relevant for code certification).

Advantages:

  • support for analysis of web applications and a wide range of development systems for business applications;
  • domestic, localized product;
  • a wide range of supported state standards;
  • using the WASC web application vulnerability database and the CWE classifier;
  • the ability to visualize program code and search for software backdoors.

InfoWatch Appercut was developed by the Russian company InfoWatch. The main difference between this product and all others in this collection is its specialization in providing services for business application customers.

InfoWatch Appercut supports almost all programming languages used to build web applications (JavaScript, Python, PHP, Ruby) and embedded modules for business applications - 1C, ABAP, X++ (Microsoft Axapta ERP), Java, Lotus Script. It can also adapt to the specifics of a particular application and to the uniqueness of each company's business processes.

Figure 8. InfoWatch Appercut interface

InfoWatch Appercut supports many requirements for effective and secure programming, including the general requirements of PCI DSS and HIPAA, recommendations and "best practices" of CERT and OWASP, as well as recommendations from business process platform vendors - 1C, SAP, Oracle and Microsoft.

Advantages:

  • domestic, localized product, certified by FSTEC of Russia;
  • the only product that supports all popular business platforms in Russia, including 1C, SAP, Oracle EBS, IBM Collaboration Solutions (Lotus) and Microsoft Axapta;
  • a fast scanner that completes checks in seconds and can check only changed code and code fragments.

Digital Security ERPScan is a specialized product for analyzing and monitoring the security of business systems built on SAP products; the first version was released in 2010. In addition to a module for analyzing configurations, vulnerabilities and access control (SoD), ERPScan includes a module for assessing source code security that searches for backdoors, critical calls, vulnerabilities and programming errors in ABAP and Java code. At the same time, the product takes the specifics of the SAP platform into account, correlates detected code vulnerabilities with configuration settings and access rights, and performs the analysis better than non-specialized products working with the same programming languages.

Figure 9. Digital Security ERPScan interface

Additional features of ERPScan include automatic generation of patches for detected vulnerabilities, as well as generation of attack signatures and their upload to intrusion detection and prevention systems (in partnership with Cisco). The system also contains mechanisms for evaluating the performance of embedded code, which is critical for business applications, since slow add-on modules can seriously affect an organization's business processes. It likewise supports analysis against recommendations specific to business application code, such as EAS-SEC and BIZEC, as well as the general PCI DSS and OWASP recommendations.

Advantages:

  • deep specialization on one business application platform with correlation of analysis with configuration settings and access rights;
  • embedded code performance tests;
  • automatic creation of fixes for found vulnerabilities and virtual patches;
  • search for zero-day vulnerabilities.

Solar inCode is a static code analysis tool designed to identify information security vulnerabilities and undeclared capabilities in software source code. The product's main distinguishing feature is its ability to recover application source code from an executable file using decompilation (reverse engineering) technology.

Solar inCode can analyze source code written in Java, Scala, Java for Android, PHP and Objective-C. Unlike most competitors, its list of supported languages includes development tools for the Android and iOS mobile platforms.

Figure 10. Solar inCode interface

When the source code is not available, Solar inCode can analyze ready-made applications; this functionality covers web applications and mobile applications. In particular, for mobile applications you just copy the link to the application from Google Play or the App Store into the scanner, and the application is automatically downloaded, decompiled and checked.

Using Solar inCode helps meet the requirements of the PCI DSS and STO BR IBBS standards, as well as FSTEC Order No. 17 and the requirements for the absence of undeclared capabilities (relevant for code certification).

Advantages:

  • Support for analyzing applications for mobile devices running Android and iOS;
  • supports the analysis of web applications and mobile applications without using the source code of the programs;
  • provides analysis results in the format of specific recommendations for eliminating vulnerabilities;
  • generates detailed recommendations for setting up security tools: SIEM, WAF, FW, NGFW;
  • easily integrated into the secure software development process by supporting work with source code repositories.

Conclusions

The presence of software errors, vulnerabilities and backdoors in custom-developed software, be it web applications or plug-ins for business applications, is a serious risk to the security of corporate data. Source code analyzers can significantly reduce these risks and keep the quality of developers' work under control without extra time and money spent on experts and external auditors. Moreover, using a source code analyzer usually requires no special training and no dedicated staff, and introduces no other inconvenience if the product is used only for acceptance testing while errors are fixed by the developer. All this makes such tools mandatory when accepting custom-developed code.

When choosing a source code analyzer, you should consider both the products' functionality and the quality of their work. First of all, pay attention to whether the product can check code in the programming languages in which the source under review is written. The next criterion is the quality of analysis, which can be judged from the vendor's competence and verified during a trial run of the product. Another factor may be whether the product has been audited for compliance with state and international standards, if corporate business processes require it.

In this review, the clear leader among foreign products in terms of language support and scanning quality is HP Fortify Static Code Analyzer. Checkmarx CxSAST is also a good product, but it can only analyze regular and web applications; it does not support plug-ins for business applications. The IBM Security AppScan Source solution looks lackluster next to its competitors, standing out in neither functionality nor quality of checks. However, that product is not aimed at business users but at development companies, where it can be more effective than its competitors.

Among Russian products it is difficult to single out a clear leader; the market is represented by three main products: InfoWatch Appercut, PT Application Inspector and Solar inCode. These products differ significantly in technology and are intended for different audiences. The first supports more business application platforms and is faster because it searches for vulnerabilities using static analysis methods exclusively. The second combines static and dynamic analysis, which improves scanning quality but increases the time needed to check the source code. The third targets the needs of business users and information security specialists, and also lets you test applications without access to the source code.

Echelon's AppChecker does not yet measure up to its competitors and offers a small set of functionality, but given the early stage of the product's development, it may well claim top positions in source code analyzer ratings in the near future.

Digital Security ERPScan is an excellent product for the highly specialized task of analyzing business applications for the SAP platform. By concentrating on this market alone, Digital Security has developed a functionally unique product that not only analyzes source code but also takes into account the specifics of the SAP platform, including configuration settings and business application access rights, and can automatically create patches for detected vulnerabilities.

Each member of the ][ team has their own preferences in software and utilities for pentesting. After comparing notes, we found that the choices vary so much that a real gentleman's set of proven programs can be assembled. That's what we decided to do. To avoid making a hodgepodge, we divided the entire list into topics. Today we will look at static code analyzers for finding vulnerabilities in applications when their source code is at hand.

Having a program's source code greatly simplifies the search for vulnerabilities. Instead of blindly manipulating the various parameters passed to the application, it is much easier to see in the source how it processes them. For example, if data from the user reaches an SQL query without checks or transformations, we have an SQL injection vulnerability. If it ends up in HTML output, we get a classic XSS. A static scanner should detect such situations precisely, but unfortunately this is not always as easy as it seems.
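To make this concrete, here is a minimal C sketch (the function names are hypothetical, not taken from any tool below) of the source-to-sink path a static scanner looks for:

#include <stdio.h>

extern void run_query(const char *sql); /* hypothetical database sink */

/* Hypothetical example: user-controlled input flows into an SQL query
 * with no validation or escaping in between - exactly the kind of
 * source-to-sink path a static analyzer tries to detect. */
void find_user(const char *username) /* tainted: comes from the user */
{
    char query[128];
    /* The tainted value is spliced straight into the query text:
     * a username like "x' OR '1'='1" changes the query's meaning. */
    snprintf(query, sizeof(query),
             "SELECT * FROM users WHERE name = '%s'", username);
    run_query(query);
}

If the same unchecked value were echoed into an HTML page instead, the sink would be the response body and the finding would be XSS rather than SQL injection.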

Modern compilers

It may seem funny, but one of the most effective code analyzers is the compiler itself. Of course, it is intended for something else entirely, but as a bonus each compiler offers a decent source code verifier capable of detecting a large number of errors. Why doesn't this save us? By default the settings for such checks are quite lenient: so as not to annoy the programmer, the compiler complains only about the most serious mistakes. In vain - if you raise the warning level, it is quite possible to dig up many dubious places in the code. It works something like this. Suppose the code lacks a check on a string's length before it is copied into a buffer. The checker finds a function that copies a string (or a fragment of one) into a fixed-size buffer without first verifying its length. It traces the trajectory of the arguments from the input data to the vulnerable function and asks: is it possible to choose a string length that causes an overflow in the vulnerable function and is not cut off by the checks preceding it? If no such check exists, we have an almost guaranteed buffer overflow; a hedged sketch of this pattern follows.
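In the illustrative fragment below (the names are hypothetical), a check of this kind would flag the first function and accept the second:

#include <string.h>

/* Hypothetical fragment: the string is copied into a fixed-size buffer
 * with no preceding length check, so any input longer than 15
 * characters overflows name[]. */
void save_name(const char *input) /* input arrives from outside */
{
    char name[16];
    strcpy(name, input); /* no strlen() check anywhere on this path */
}

/* A length-aware variant that the same check would accept: */
void save_name_safe(const char *input)
{
    char name[16];
    strncpy(name, input, sizeof(name) - 1);
    name[sizeof(name) - 1] = '\0'; /* strncpy may leave no terminator */
}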
The main difficulty in using the compiler as a tester is getting it to "swallow" someone else's code. If you have ever tried to compile an application from source, you know how hard it is to satisfy all the dependencies, especially in large projects. But the result is worth it! Moreover, besides the compiler, powerful IDEs have additional built-in means of code analysis. For example, Visual Studio will issue a warning about the following section of code, which calls the _alloca function in a loop and can quickly overflow the stack:

char *b;
do {
    b = (char *)_alloca(9);
} while (1);

This warning comes from the PREfast static analyzer. Like FxCop, which is designed for managed code analysis, PREfast was originally distributed as a separate utility and only later became part of Visual Studio.

RATS - Rough Auditing Tool for Security

Website: www.securesoftware.com
License: GNU GPL
Platform: Unix, Windows
Languages: C++, PHP, Python, Ruby

Not all errors are created equal. Some of the mistakes programmers make are uncritical and threaten only program stability. Others, on the contrary, allow shellcode to be injected and arbitrary commands to be executed on a remote server. Particularly risky are calls that enable buffer overflows and other similar attacks. There are a lot of such calls; in C/C++ these are the string functions (strcpy(), strcat(), gets(), sprintf(), printf(), snprintf(), syslog()), system facilities (access(), chown(), chgrp(), chmod(), tmpfile(), tmpnam(), tempnam(), mktemp()), and system calls (exec(), system(), popen()). Manually examining all the code (especially if it runs to several thousand lines) is quite tedious, which means it is easy to overlook an unchecked parameter being passed to some function. Specialized audit tools can greatly ease the task, including the famous RATS (Rough Auditing Tool for Security) utility from the well-known company Fortify. It successfully handles code written in C/C++ and can also process scripts in Perl, PHP and Python. The utility's database contains an impressive selection of problem spots in code, with detailed descriptions. Fed source code, the analyzer tries to identify bugs and then reports the defects it found. RATS works from the command line, both under Windows and *nix systems.
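As a hedged illustration (the fragment is hypothetical, not taken from the RATS documentation), code like this contains two calls from the list above, and an auditing tool would report both lines:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical fragment with two calls from the "risky" list:
 * gets() reads input without any bound, and system() executes a
 * shell command built from that unchecked input. */
int main(void)
{
    char host[64];
    gets(host);  /* unbounded read: classic buffer overflow risk */
    char cmd[128];
    snprintf(cmd, sizeof(cmd), "ping -c 1 %s", host);
    system(cmd); /* shell metacharacters in host: command injection */
    return 0;
}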

Yasca

Website: www.yasca.org
License: Open Source
Platform: Unix, Windows
Languages: C++, Java, .NET, ASP, Perl, PHP, Python and others.

Yasca, just like RATS, needs no installation, and it has not only a console interface but also a simple GUI. The developers recommend running the utility through the console, saying there are more possibilities that way. It is funny that the Yasca engine is written in PHP 5.2.5, and the interpreter (in its most stripped-down variant) lies in one of the subfolders of the archive with the program. Logically, the whole program consists of a front end, a set of scanning plugins, a report generator and the engine itself, which makes all the gears turn together. Plugins are dropped into the plugins directory; additional addons need to be installed there too. An important point: three of the standard plugins included with Yasca have unpleasant dependencies. JLint, which scans Java .class files, requires jlint.exe in the resource/utility directory. The second plugin, antiC, used to analyze Java and C/C++ sources, requires antic.exe in the same directory. And for PMD, which processes Java code, a Java JRE 1.4 or higher must be installed on the system. You can check that the installation is correct by typing the command "yasca ./resources/test/". What does a scan look like? Having processed the source fed to it, Yasca produces the result as a special report. For example, one of the standard GREP plugins lets you use patterns described in .grep files to specify vulnerable constructs and easily identify a range of vulnerabilities. A set of such patterns already comes with the program: for finding weak encryption, authorization of the "password equals login" kind, possible SQL injections and much more. When you want to see more detailed information in the report, don't be lazy about installing additional plugins. With their help you can additionally scan code in .NET (VB.NET, C#, ASP.NET), PHP, ColdFusion, COBOL, HTML, JavaScript, CSS, Visual Basic, ASP, Python and Perl.

Cppcheck

Website:
License: Open Source
Platform: Unix, Windows
Languages: C++

The Cppcheck developers decided not to waste time on trifles, and therefore they catch only strictly defined categories of bugs, and only in C++ code. Don't expect the program to duplicate compiler warnings; it manages without that kind of prompting. So don't be lazy: set the compiler to the maximum warning level, and use Cppcheck to check for memory leaks, violations of allocation/deallocation pairing, various buffer overflows, use of deprecated functions and much more. An important detail: the Cppcheck developers have tried to reduce the number of false positives to a minimum, so if the program reports an error, you can most likely say: "It's really there!" You can run the analysis either from the console or using a nice GUI written in Qt that runs on any platform.
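A hedged example of the kind of defect Cppcheck targets (the fragment is illustrative): a leak on an early-return path, which the compiler itself stays silent about:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical fragment: on the early-return path the malloc()ed
 * buffer is never freed - a leak a static checker can report even
 * though the code compiles cleanly. */
int read_record(FILE *f)
{
    char *buf = malloc(256);
    if (buf == NULL)
        return -1;
    if (fread(buf, 1, 256, f) != 256)
        return -1; /* leak: buf is still allocated here */
    /* ... process buf ... */
    free(buf);
    return 0;
}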

graudit

Website: www.justanotherhacker.com/projects/graudit.html
License: Open Source
Platform: Unix, Windows
Languages: C++, PHP, Python, Perl

This simple script, combined with a set of signatures, lets you find a number of critical vulnerabilities in code; the search itself is carried out by the well-known grep utility. A GUI is not even worth mentioning here: everything is done through the console. There are several startup options, but in the simplest case it is enough to pass the path to the sources as a parameter:

graudit /path/to/scan

The reward for your efforts will be a colorful report on potentially exploitable places in the code. It must be said that, besides the script itself (only about 100 lines of Bash), the real value lies in the signature databases, which contain regexps and the names of potentially vulnerable functions in different languages. Databases for Python, Perl, PHP and C++ are included by default; you can take files from the signatures folder and use them in your own developments.

SWAAT

Website: www.owasp.org
License: Open Source
Platform: Unix, Windows
Languages: Java, JSP, ASP .Net, PHP

If graudit uses plain text files to define vulnerability signatures, SWAAT takes a more progressive approach based on XML files. A typical signature contains the following fields:

  • vuln match - a regular expression to search for;
  • type - the type of vulnerability;
  • severity - the risk level (high, medium or low);
  • alt - alternative code that solves the problem.

SWAAT reads the signature database and uses it to look for problem sections in source code in Java, JSP, ASP.NET and PHP. The database is constantly growing and, in addition to the list of "dangerous" functions, includes typical errors in string formatting and in the writing of SQL queries. Notably, the program is written in C# yet works fine under *nix, thanks to the Mono project, an open implementation of the .NET platform.

PHP Bug Scanner

Website: raz0r.name/releases/php-bug-scanner
License: Freeware
Platform: Windows
Languages: PHP

If you need to perform static analysis of a PHP application, I recommend trying PHP Bug Scanner, written by our author raz0r. The program works by scanning PHP scripts for various functions and variables that can be used in web attacks. Descriptions of such situations are formalized as so-called presets; the program already includes 7 special presets, grouped by category:

  • code execution;
  • command execution;
  • directory traversal;
  • globals overwrite;
  • include;
  • SQL injection;
  • miscellaneous.

Amusingly, the program itself is written in PHP/WinBinder and compiled with bamcompile, so it looks just like a regular Windows application. Through a convenient interface, the pentester can enable or disable analysis of the code for particular vulnerabilities.

Pixy

Website: pixybox.seclab.tuwien.ac.at
License: Freeware
Platform: Unix, Windows
Languages: PHP

The tool is based on scanning the source code and constructing data-flow graphs. A graph traces the path of data that arrives from outside the program (from the user, from the database, from some external plugin and so on), and in this way a list of vulnerable points (entry points) into the application is built. Using patterns that describe vulnerabilities, Pixy checks these points and identifies XSS and SQL injection vulnerabilities. Moreover, the graphs built during the analysis can be viewed in the graphs folder (for example, xss_file.php_1_dep.dot), which is very useful for understanding why Pixy considers a particular section of code vulnerable. In general, the project is extremely educational and demonstrates how advanced static code analysis utilities work. In the documentation on the project page, the developer clearly describes the stages of the program's operation and explains the logic and algorithm by which each piece of code is analyzed. The program itself is written in Java and distributed as open source, and the home page even offers a simple online service for checking your code for XSS vulnerabilities.

Ounce 6

Website: www.ouncelabs.com/products
License: Shareware
Platform: Windows

Alas, the existing free solutions are still head and shoulders below their commercial analogues. It is enough to study the quality and detail of a report produced by Ounce 6 to understand why. The program is based on a special analysis engine, Ounce Core, which checks code against rules and policies compiled by a team of professional pentesters, drawing on the accumulated experience of well-known security companies, the hacker community and safety standards. The program detects a wide variety of vulnerabilities in code, from buffer overflows to SQL injections. If desired, Ounce easily integrates with popular IDEs to perform automatic code checking during the build of each new version of the application under development. By the way, the developer, Ounce Labs, was acquired by IBM itself this summer, so the product will most likely continue to evolve as part of one of IBM's commercial applications.

Klocwork Insight

Website: www.klocwork.com
License: Shareware
Platform: Windows
Languages: C++, Java, C#

For a long time this, again commercial, product offered static code scanning only for C, C++ and Java. But as soon as Visual Studio 2008 and .NET Framework 3.5 came out, the developers announced C# support. I ran the program over two of my auxiliary projects, hastily written in C#, and it identified 7 critical vulnerabilities. It's a good thing they were written for internal use only :). Klocwork Insight is configured primarily to work in tandem with integrated development environments. Its integration with Visual Studio or Eclipse is extremely successful; you begin to seriously think that such functionality should be built into them by default :). Setting aside problems with application logic and performance issues, Klocwork Insight does a great job of finding buffer overflows, missing filtering of user input, SQL/path/cross-site injection possibilities, weak encryption and so on. Another interesting option is building the application's execution tree, which lets you quickly grasp the general workings of the application and separately monitor, for example, the processing of any user input. And for quickly constructing code-checking rules there is even a special tool, Klocwork Checker Studio.

Coverity Prevent Static Analysis

Website: www.coverity.com/products
License: Shareware
Platform: Windows
Languages: C++, Java, C#

One of the most famous static code analyzers for C/C++, Java and C#. According to its creators, the solution is used by more than 100,000 developers all over the world. Well-thought-out mechanisms automate the search for memory leaks, uncaught exceptions, performance issues and, of course, security vulnerabilities. The product supports different platforms and compilers (gcc, Microsoft Visual C++ and many others), and integrates with various development environments, primarily Eclipse and Visual Studio. At its core, the code traversal does not use naive start-to-finish algorithms but something like a debugger that analyzes how the program will behave in different situations after meeting a branch; in this way 100% code coverage is achieved. Such a complex approach was needed, among other things, to fully analyze multithreaded applications specially optimized to run on multi-core processors. Coverity Integrity Center makes it possible to find errors such as race conditions (a design error in a multitasking system in which the system's behavior depends on the order in which parts of the code are executed), deadlocks and much more. Why do reversers need this? Ask the developers of 0day exploits for Firefox and IE :).
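As a hedged illustration of the race condition defined above (the fragment is illustrative, not a Coverity test case), two threads increment a shared counter without a lock, so the final value depends on scheduling:

#include <pthread.h>
#include <stdio.h>

/* Hypothetical data race: both threads perform an unsynchronized
 * read-modify-write of counter, so increments can be lost depending
 * on the order in which the threads execute. */
long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++; /* not atomic: load, add, store */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("%ld\n", counter); /* rarely 200000; a mutex fixes it */
    return 0;
}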

OWASP Code Crawler

Website: www.owasp.org
License: GNU GPL
Platform: Windows
Languages: Java, C#, VB

The creator of this tool, Alessio Marziali, is the author of two books on ASP.NET, a reputable coder of high-load applications for the financial sector, and a pentester. In 2007 he published information about critical vulnerabilities in 27 Italian government websites. His brainchild, OWASP Code Crawler, designed for static analysis of .NET and J2EE/Java code, is openly available on the Internet, and at the end of the year the author promises to release a new version with much more functionality. The most important part has already been implemented: analysis of source code in C#, Visual Basic and Java. Files to scan are selected through the GUI, and scanning starts automatically. For each problem section of code, a description of the vulnerability is displayed in the Threat Description section. Unfortunately, the OWASP Guidelines field, which presumably indicates a path to solving the problem, is not yet available. But you can use the experimental feature of scanning code on a remote machine, accessible on the Remote Scan tab. The author promises to seriously upgrade this feature, including aggregating application sources for analysis directly from a version control system.

WARNING

This information is presented for educational purposes and, above all, shows how developers can avoid critical errors when developing applications. Neither the author nor the editors bear responsibility for any use of this knowledge for illegal purposes.