
An Automated Framework for Testing Source Code Static Analysis Tools

D.M. Gimatdinov 1,2, ORCID: 0000-0002-1329-4541 <[email protected]>
A.Y. Gerasimov 2, ORCID: 0000-0001-9964-5850 <[email protected]>
P.A. Privalov 2, ORCID: 0000-0002-8939-5824 <[email protected]>
V.N. Butkevich 2, ORCID: 0000-0001-9376-9051 <[email protected]>
N.A. Chernova 2, ORCID: 0000-0001-8678-9193 <[email protected]>
A.A. Gorelova 2, ORCID: 0000-0001-7974-7913 <[email protected]>

1 Higher School of Economics, 11, Pokrovsky boulevard, Moscow, 109028, Russia
2 Huawei Technologies Co., Ltd., 7b9, Derbenevskaya naberezhnaya, Moscow, 115114, Russia

Abstract. Automated testing frameworks are widely used for assuring the quality of modern software in a secure software development lifecycle. Sometimes the quality of specific software has to be assured, and hence a specific approach should be applied. In this paper, we present an approach and the implementation details of an automated testing framework suitable for acceptance testing of static source code analysis tools. The presented framework is used for continuous testing of static source code analyzers for C, C++ and Python programs.

Keywords: automated testing; quality assurance; source code static analysis.

For citation: Gimatdinov D.M., Gerasimov A.Y., Privalov P.A., Butkevich V.N., Chernova N.A., Gorelova A.A. An Automated Framework for Testing Source Code Static Analysis Tools. Trudy ISP RAN/Proc. ISP RAS, vol. 33, issue 3, 2021, pp. 41-50. DOI: 10.15514/ISPRAS-2021-33(3)-3.

1. Introduction

Acceptance testing is a common approach to making sure, in an automated way, that the required software functionality satisfies the needs of the end user. The wide use of continuous integration systems with automatic test runs makes it possible to verify that functionality is not broken by an individual change in the program code. That is why it is important to build a testing framework that suits the continuous testing needs of specific software.

Source code static analysis tools have become an industrial standard for software quality assurance at the early stages of a secure software development lifecycle. They are commonly used for the detection of program issues and logical errors. Being quality assurance tools by nature, they must themselves satisfy specific requirements, such as analysis precision, completeness and performance. The possibility of issuing bug warnings on safe code, known as false positive warnings, means that a testing framework has to control both true positive and false positive warnings. Acceptance testing of such tools checks the behavior of a tool on specific code snippets, which represent both buggy code and code that contains no bugs or issues.

At the same time, such tools have very complex implementations, because they consist of a general analysis framework, frequently called an engine, which provides general analysis techniques such as reaching definitions, live variables, taint analysis and others, and a number of specific wrong-program-behavior checkers built on top of the engine. Any small change to the engine can break checker behavior. That is why it is important to have a testing framework which can check and state the sanity of the tool during the development lifecycle.
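To make the engine/checker split concrete, here is a minimal, purely illustrative Python sketch (it is not the architecture of any particular analyzer, and all names are invented): a toy engine pass computes tainted variables, and a checker is built on top of its result.

    # Toy illustration of the engine/checker split; all names are invented
    # and this is not the architecture of any particular analyzer.

    def tainted_vars(statements):
        """Toy 'engine' pass: a variable is tainted if it is read from input
        or assigned from an already-tainted variable."""
        tainted = set()
        for lhs, rhs in statements:  # statements given as (target, source) pairs
            if rhs == "input()" or rhs in tainted:
                tainted.add(lhs)
        return tainted

    def injection_checker(statements, sinks):
        """Toy 'checker' built on the engine: warn when a tainted variable
        reaches a dangerous sink."""
        tainted = tainted_vars(statements)
        return [v for v in sinks if v in tainted]

    program = [("a", "input()"), ("b", "a"), ("c", "'const'")]
    print(injection_checker(program, sinks=["b", "c"]))  # -> ['b']

Changing a single line in the engine function silently changes the checker's verdicts, which is exactly why continuous acceptance testing of the whole tool is needed.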
In a previous talk (A. Gerasimov, P. Privalov, S. Vladimirov, V. Butkevich, N. Chernova, A. Gorelova. An approach to assuring quality of automatic program analysis tools. Ivannikov Ispras Open Conference (ISPRAS), 2020), we described a generalized approach to testing static source code analysis tools, which includes the Acceptance Testing Framework and a Regression Testing System called the Report History Server.

In this paper, we introduce the requirements, implementation details, evaluation and limitations of the Acceptance Testing Framework for static source code analysis tools, based on our experience of developing such a framework and using it daily in the industrial development of static source code analysis tools. The paper is organized as follows. Section 2 describes the requirements for this kind of framework in detail, Section 3 surveys existing approaches, and Section 4 gives an overview of the proposed approach. Section 5 describes the implementation of the proposed approach in detail, Section 6 contains the evaluation results, and Section 7 concludes and outlines future directions of development.

2. Requirements for an acceptance testing framework

Source code static analysis tools have to check the source code of programs against very different rules, which may be imposed by an industrial or company-wide coding standard.
Although modern static source code analyzers focus on code security, the absence of logical errors, and performance, coding rules applied in companies or industry can also impose requirements on the code such as indentation style, naming conventions, etc. For example, in Python programs the source code can contain comments of a specific form, such as a shebang [1], the file encoding declaration [2], a company code ownership statement, and version or license notes. Therefore, to satisfy the needs of testing industrial static source code analyzers, such a framework cannot rely on specific comments and code formatting, as is done in the best-known test case database, Juliet, of the National Institute of Standards and Technology of the USA [3].

Instead, we have to have a database of error code snippet descriptors. Such descriptors provide all the necessary information about a test case in a file or set of files within a directory structure, separate from and independent of both the implementation language of the target analyzer and the target language of the analyzed programs. We use JSON-formatted [4] descriptions of test cases, which describe every test case, both for erroneous examples and for clean code examples.

On the other hand, we have set a goal of comparing the tested static source code analyzer with competing ones. We therefore put forward, as a second requirement, the ability to run competing static source code analyzers in one bundle to compare the precision, completeness and performance of the tools.

Next, we need a solution for different environments, such as operating systems and hardware platforms. We set this as another requirement for the framework.

And, last but not least, we want to make the Acceptance Testing Framework independent of the target language of the analyzed programs. It should be suitable for testing analyzers for the programming languages C, C++, Java, C#, Python and others.

To summarize:
• Independence of the target environment, such as hardware and operating system.
• Independence of the analyzed programming languages.
• Possibility to check source code snippets without modification of the original code, even in the comments.
• Possibility to check both erroneous and clean code examples (true positive and false positive warning checks).
• Support for a practically unlimited number of checkers for coding rules, including, but not limited to, formatting and comment styles.
• Possibility to compare different static source code analysis tools.
• Possibility to represent analysis results in different formats: machine readable (JSON, XML and others), output formatted for the screen, HTML, etc., with the possibility to extend the list of reporting formats on demand.

3. Existing approaches

There are many research papers dedicated to the evaluation of static code analysis tools [5, 6, 7]. These works observe the behavior of static code analysis tools on a selected subset of the NIST SAMATE test cases for selected OWASP [8] Top 10 vulnerabilities. But these papers are dedicated to the manual evaluation of static code analysis tools and do not solve the problem of implementing automated frameworks.

The work [9] attempts to solve the problem of creating an automated test suite for evaluating static analysis tools by designing test cases as small code snippets which are automatically inlined into a specific placeholder in a template program.

The work [10] describes an approach of deriving minimal test cases from errors found in real-world code, adding code to the original test snippet to check the sensitivity of the analysis to paths and calling context. The difference of our approach lies in the overall automation of the acceptance testing and evaluation system for static source code analysis tools. In this paper, we describe the technical details and an evaluation of the proposed approach.

4. Overview

The Acceptance Testing Framework solves the problem of evaluating the quality of automatic program analysis tools. The quality is measured by parameters such as performance, scalability, precision and completeness:
• Performance – how fast an analysis tool can provide an analysis result and how many resources it consumes.
• Scalability – how the analysis time decreases when additional computational resources are provided.
• Precision – how precise the analysis result is (a small number of false positive warnings, or noise).
• Completeness – how many true positive warnings are issued by a tool in comparison with the errors that exist in the test suite (the number of false negatives, i.e. missed errors).

To compute these parameters, the Acceptance Testing Framework runs a program analysis tool against a limited, manually crafted set of test cases combined into one test suite. The test suite represents the behavior of defective and similar-to-defective programs. The defective ones give the rate of true positive warnings that should be found, and the similar-to-defective ones give the rate of false positive warnings, whose absence is expected. From these, the resulting precision and completeness are calculated and evaluated.

Once precision and completeness have been evaluated by the Acceptance Testing Framework for a program analysis tool, a decision about its quality can be made. In theory, a perfect tool has 100% completeness on the test suite (all defects detected) and 100% precision (no noise, i.e. no defects reported on similar-to-defective code snippets), but such values cannot be achieved at the current stage of engineering and are theoretically limited by Rice's theorem [11].

There are no strict, generally accepted values for performance and scalability, as these parameters depend on the depth, complexity and target of the analysis and vary greatly among analysis tools. Moreover, the exact conclusion about the quality of analysis tools directly depends on the test suite. The Acceptance Testing Framework does not currently contain built-in features to measure performance and scalability on its own. Despite this, it can be used in the computation of these parameters by running a program analysis tool against test suites of different complexity (from low to high) and observing how the performance depends on the complexity of the test suite, or how the scalability changes when additional computational resources are involved.

A test suite may follow company or industrial standards and contain code snippets with security vulnerabilities, code style violations, or errors leading to crashes. In our case, the test suite follows the company standard and, together with the Acceptance Testing Framework, has been deployed in the continuous integration processes of static analysis tool development in the Huawei Russian Research Institute.
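As a minimal sketch of how the last two parameters can be computed from the outcome counts of one test suite run (our own illustration, not the framework's actual API; the formulas anticipate (1) and (2) in Section 6, and the sample counts are Pylint's numbers from Table 2):

    def precision(tp: int, fp: int) -> float:
        """Formula (1): the share of reported warnings that are real defects."""
        return tp / (tp + fp) if tp + fp else 1.0

    def recall(tp: int, fn: int) -> float:
        """Formula (2): the share of existing defects that were detected."""
        return tp / (tp + fn) if tp + fn else 1.0

    # Pylint counts from Table 2: TP = 91, FP = 0, FN = 324.
    print(precision(91, 0), round(recall(91, 324), 3))  # -> 1.0 0.219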
5. Design and implementation

In this section, we describe the design and implementation of our framework, from the perspective of the requirements listed above.

5.1 Independence of the target environment

To satisfy the requirement of independence of the target environment, such as hardware and operating system, we implemented our framework in the Python programming language, since Python interpreters exist for most industrial operating systems and for the most popular hardware platforms.


5.2 Independence of the analyzed programming language

The framework does not rely in any way on the content of code snippets, because it uses JSON-formatted test case annotations.

5.3 Possibility to check erroneous and clean code snippets without modification of the original code, even in comments

We use test case annotation files in JSON format. A test case for the Acceptance Testing Framework is a tuple of an annotation file and a source code snippet. The JSON annotation file contains the following information:
• The kind of snippet: whether it contains a defect (true positive) or no defect is expected in this code snippet (true negative).
• The kind of defect expected to be reported or not reported.
• A description of the test case.
• A skip flag for marking test cases which are not yet supported but are planned to be supported in the future.
• The defect location: filename, line, and offset within the line of the expected defect.
• Additional service information: for example, that the test case is designed for a specific version of the language, so that the analyzer can be configured appropriately, or an additional field describing the goal of the test case to a QA engineer or developer.

This decision keeps all the information that the Acceptance Testing Framework needs to configure analysis tools appropriately independent of the test cases themselves, and independent of the number of test cases: it is enough to point the framework at the file system directory containing a suitably formatted test suite, and all the work related to running analysis tools on the test suite is handled by the framework itself by traversing the directory structure.
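For illustration, a test case annotation of this shape could look as follows. This is a sketch only: the paper does not fix the exact key names, so every field name below is hypothetical.

    import json

    # Hypothetical annotation; the exact key names used by the framework
    # are not specified in the paper.
    annotation = json.loads("""
    {
      "kind": "true_positive",
      "defect": "unused-variable",
      "description": "Local variable assigned but never read",
      "skip": false,
      "location": {"file": "snippet.py", "line": 3, "offset": 5},
      "service": {"python_version": "3.8", "goal": "baseline check for W0612"}
    }
    """)
    print(annotation["defect"], annotation["location"]["line"])  # -> unused-variable 3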
5.4 Possibility to compare different analysis tools

The Acceptance Testing Framework satisfies this requirement by introducing the abstract interface Tool, which runs an external analysis tool as an executable program and returns the analysis results in the Acceptance Testing Framework's internal representation. With such an interface, supporting a new analysis tool only requires implementing the Tool interface so that it converts test case settings from the test case annotations into the expected arguments of the analysis tool and runs the tool as an external process. We have developed a number of interface implementations for tools such as PyLint [12], JetBrains PyCharm [13] and eight more tools, which have different analysis paradigms. For example, PyLint accepts a single file for analysis and can be run on every test case separately, while PyCharm expects a file system directory and treats it as one project to analyze.

On the other hand, the representation of analysis results can vary significantly between tools. An implementation of the Tool interface is therefore also responsible for interpreting the external analysis tool's results and converting them into the Acceptance Testing Framework's internal representation. This representation is, in essence, a map from every test case to its analysis result in terms of a Passed or Failed state.

Thus, all the logic of working with an analysis tool is encapsulated inside the Tool interface implementation.
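A minimal sketch of such an interface in Python (our illustration; the framework's real class and method signatures are not given in the paper, the test case attributes are hypothetical, and the mapping of annotation defect kinds to PyLint messages is simplified):

    import subprocess
    from abc import ABC, abstractmethod

    class Tool(ABC):
        """Runs one external analyzer and maps every test case to Passed/Failed."""

        @abstractmethod
        def run(self, test_case) -> bool:
            """Return True (Passed) if the tool's verdict matches the annotation."""

    class PylintTool(Tool):
        """PyLint analyzes a single file, so it can be run per test case."""

        def run(self, test_case) -> bool:
            proc = subprocess.run(
                ["pylint", test_case.snippet_path],
                capture_output=True, text=True,
            )
            # Simplified: assume the annotation's defect kind matches the
            # message name that PyLint prints in its report.
            reported = test_case.defect_kind in proc.stdout
            expected = test_case.kind == "true_positive"
            return reported == expected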
5.5 Possibility to represent analysis results in different formats

The Acceptance Testing Framework provides the universal interface Reporter, which exposes one public method, report, accepting the internal representation of an analysis tool's run results. The responsibility of an implementation of this interface is to issue a report in a specific format. We have implemented three reporters supported out of the box:
• Output reporter. Represents test suite run results in a human-readable text format.
• JUnit reporter. Represents test suite run results in the JUnit format.
• HTML reporter. Represents test suite run results as a static web site with the possibility to drill down from different views to the source code snippet of a test case.

The architecture diagram of the Acceptance Testing Framework is shown in fig. 1. It consists of the following blocks (classes):
• Driver. The entry point of the framework. It configures the test suite, the reporter and the tools according to the parameters passed to the framework at run time.
• TestSuite. A collection of TestCases constructed from the provided path to the test suite directory, where every test case has its annotation in JSON format and a directory structure with the test case source files.
• Tool. An interface representing a tool runner. The instantiations of this interface depend on the settings of the framework passed as command line arguments.
• Reporter. An interface for representing analysis results using the unified internal representation of test suite run results.

Fig. 1. Acceptance Testing Framework architecture diagram

In general, the Acceptance Testing Framework is a Driver which is responsible for the following (a sketch of this loop is given after the list):
• instantiating the supported analysis tool wrappers, which are implementations of the Tool interface, according to the parameters passed to the Driver by the user;
• instantiating the Reporter that will be used to output the analysis results of every tool;
• running the analysis process, collecting the analysis results in the internal representation form, and passing the received results to the Reporter.
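Putting the blocks together, the Reporter interface and the Driver loop could be sketched as follows (again purely illustrative; the names and structure are our assumptions, not the framework's actual code):

    from abc import ABC, abstractmethod

    class Reporter(ABC):
        """Renders a {(tool_name, case_name): 'Passed'|'Failed'} map."""

        @abstractmethod
        def report(self, results: dict) -> None: ...

    class OutputReporter(Reporter):
        """The human-readable text reporter."""

        def report(self, results: dict) -> None:
            for (tool_name, case_name), verdict in sorted(results.items()):
                print(f"{tool_name:<10} {case_name:<40} {verdict}")

    def drive(tools, test_suite, reporter):
        """Driver loop: run every tool on every test case, then report."""
        results = {}
        for tool in tools:
            for case in test_suite:
                verdict = "Passed" if tool.run(case) else "Failed"
                results[(tool.name, case.name)] = verdict
        reporter.report(results)

This is also how the comparison requirement of Section 5.4 is met: every configured tool runs against the same test suite, and the results land in one unified map.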

6. Results & evaluation

This section aims to obtain a classification of tools according to metrics applied to the results of executing the tools against our test suite. The tested static analysis tools are:
• Huawei Python Analysis Tool (HPAT) – a PyCharm plugin with the set of inspections required by the Huawei Python Code Style Guide and the Huawei Secure Coding Style Guide.
• Flake8 [14] – an open-source tool that glues together pep8 [15], pyflakes [16], mccabe [17] and third-party plugins to check the style and quality of Python code.
• PyLint [12] – an open-source tool that checks for errors in Python code, tries to enforce a coding standard and looks for code smells.

The metrics used are:
• True positives – TP (correct detections).
• False positives – FP (false error warnings reported).
• Number of vulnerability categories (NVC) for which the tool was tested.
• Precision (1) – the proportion of reported detections that are true positives:
  Precision = TP / (TP + FP)    (1)
• Recall (2) – the ratio of detected vulnerabilities to the number that really exist in the code; recall is also referred to as the true positive rate:
  Recall = TP / (TP + FN)    (2)

Table 1 and fig. 2 show the number of vulnerability categories (NVC) for which each tool was tested. HPAT has the largest value because the test suite was developed specifically to satisfy the needs of the Huawei coding standards.

Table 1. Number of vulnerability categories
  Tool     HPAT   Pylint   Flake8
  NVC        68       32       15

Fig. 2. Number of checked defect types

Table 2 and fig. 3 show the results of running the tools on the test suite in terms of true/false positives and true/false negatives.

Table 2. Vulnerability detection: numbers of true/false positive and true/false negative test case detections
  Tool     HPAT   Pylint   Flake8
  TP        695       91      102
  FN          0      324      368
  FP          0        0        0
  TN        591      121      184
  Total    1286      536      654

Fig. 3. Test case ratios obtained by the tools comparison

Table 3 and fig. 4 show the metric results for all tools included in this analysis. The values follow from Table 2: for example, Pylint's recall is 91 / (91 + 324) ≈ 0.219, and since no tool reported false positive warnings on this suite, every precision value equals 1.

Table 3. Assessment results: computed metrics, ranked by TP ratio
  Tool     TP ratio   FP ratio   Precision   Recall
  HPAT        1          0           1          1
  Pylint      0.219      0           1          0.219
  Flake8      0.217      0           1          0.217

Fig. 4. Metrics obtained by the tools comparison

The implemented framework makes it possible to assess tools on the same testing code base and to present relative results.

7. Conclusion

In this paper, we focused on checking the quality of static source code analysis tools with the help of an automated framework that runs such tools against a number of test cases combined into one suite. This approach allows us to control the quality of a tool in terms of purpose-built erroneous and error-free test cases, written as code snippets in the programming language targeted by the analysis. The framework can work with any kind of test suite, provided it is described by a profile or manifest in the expected format.

This approach to testing static source code analysis tools has been applied in the development process of static source code analysis tools for Python and C/C++ in the Huawei Russian Research Institute. In the future, we plan to extend the functionality of the Acceptance Testing Framework to check non-functional requirements of the tools, such as running time, memory consumption and CPU utilization.

References
[1] M. Cooper. Advanced Bash Scripting Guide – Volume 1: An in-depth exploration of the art of shell scripting (Revision 10). Independently published, 2019, 589 p.
[2] M.-A. Lemburg, M. von Löwis. PEP 263 – Defining Python Source Code Encodings. 2001. URL: https://www.python.org/dev/peps/pep-0263/.
[3] NIST SAMATE Juliet Test Suite. URL: https://samate.nist.gov/SRD/testsuite.php.
[4] RFC 8259. The JavaScript Object Notation (JSON) Data Interchange Format, 2017. URL: https://datatracker.ietf.org/doc/html/rfc8259.
[5] H.H. AlBreiki, Q.H. Mahmoud. Evaluation of static analysis tools for software security. In Proc. of the IEEE 2014 10th International Conference on Innovations in Information Technology, 2014, pp. 93-98.


[6] R. Mamood, Q.H. Mahmoud. Evaluation of static analysis tools for finding vulnerabilities in Java and C/C++ source code. arXiv:1805.09040, 2018, 7 p.
[7] T. Hofer. Evaluating static source code analysis tools. Master's thesis. École Polytechnique Fédérale de Lausanne, 2010, pp. 1-74.
[8] OWASP – Open Web Application Security Project. URL: https://owasp.org.
[9] M. Johns, M. Jodeit. Scanstud: a methodology for systematic, fine-grained evaluation of static analysis tools. In Proc. of the 2011 IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops, 2011, pp. 523-530.
[10] G. Hao, F. Li et al. Constructing benchmarks for supporting explainable evaluations of static application security testing tools. In Proc. of the 2019 International Symposium on Theoretical Aspects of Software Engineering, 2019, pp. 66-72.
[11] H.G. Rice. Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society, vol. 74, no. 2, 1953, pp. 358-366.
[12] Pylint. URL: https://pypi.org/project/pylint/.
[13] JetBrains PyCharm. URL: https://www.jetbrains.com/pycharm/.
[14] Flake8. URL: https://pypi.org/project/flake8/.
[15] Pep8 – Python style guide checker. URL: https://pypi.org/project/pep8/.
[16] Pyflakes. URL: https://github.com/PyCQA/pyflakes.
[17] McCabe complexity checker. URL: https://github.com/PyCQA/mccabe.

Information about authors


Damir Maratovich GIMATDINOV, HSE graduate, master, junior engineer at Huawei Technologies. Research interests: source code static analysis.

Alexander Yurievich GERASIMOV, Doctor of Philosophy in Computer Sciences, senior expert in the field of automatic and automated program analysis at Huawei Technologies. Research interests: static program analysis, dynamic program analysis, software quality assurance, program defect detection.

Petr Alekseevich PRIVALOV, master, senior software engineer. Research interests: static and dynamic program analysis, fuzzing.

Veronika Nikolaevna BUTKEVICH, master, senior engineer. Research interests: static analysis of program source code, detection of security vulnerabilities in software.

Natalya Andreevna CHERNOVA, master, junior engineer. Research interests: static program analysis, data-flow analysis.

Anna Antonovna GORELOVA, junior engineer. Research interests: artificial intelligence, machine learning.