
Errors, Defects, Failures & Root Causes

7 principles of testing
1. Testing shows the presence of defects, not their absence
The point of testing is not to prove that the system works, but to show that defects are present and to reduce the probability of undiscovered defects remaining. Testing is not a proof of correctness.

2. Exhaustive testing is impossible


Each development cycle has limited resources, which makes testing all possible cases
impractical.
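A back-of-the-envelope calculation makes the point concrete. This sketch (the function signature and the rate of one billion test executions per second are assumptions for illustration) counts the input space of a hypothetical function taking two 32-bit integers:

```python
# Number of distinct inputs for a hypothetical function of two 32-bit integers.
inputs_per_argument = 2 ** 32
total_cases = inputs_per_argument ** 2  # every pair of values: 2**64

# Even at an (optimistic) billion test executions per second,
# running every case would take roughly six centuries.
seconds_needed = total_cases / 1e9
years_needed = seconds_needed / (60 * 60 * 24 * 365)
print(f"{total_cases} cases, about {years_needed:.0f} years at 1e9 tests/s")
```

And this is for just two integer parameters; real systems also have state, timing, and configuration dimensions, so the combinatorics only get worse.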

3. Early testing saves time and money


As projects develop and grow more complicated, tracking down defects in the functionality of
the system becomes a cumbersome and costly task. Therefore, testing must start as
early as possible.

4. Defects cluster together


Following a Pareto distribution, a reasonable rule of thumb is that 20% of the modules are
responsible for 80% of the defects. One must keep this in mind when designing
tests and running risk analysis.
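As an illustration, tallying defect reports by module quickly exposes such a cluster (the module names and counts below are made up for the example):

```python
from collections import Counter

# Hypothetical defect reports, each tagged with the module it was found in.
defects = ["parser", "parser", "ui", "parser", "network",
           "parser", "ui", "parser", "parser", "parser"]

by_module = Counter(defects)
total = sum(by_module.values())

# The busiest module accounts for a disproportionate share of defects,
# so it deserves a disproportionate share of the test effort.
hotspot, count = by_module.most_common(1)[0]
print(f"{hotspot}: {count}/{total} defects ({100 * count // total}%)")
```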

5. Beware of the pesticide paradox


Running the same tests on the same components over and over yields diminishing returns: once
they pass, they stop finding new defects. More testing does not equal more defects being found,
so tests must be regularly reviewed and varied. There are, however, cases where re-running the
same tests is useful (e.g. regression testing).
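One common countermeasure to the pesticide paradox is to vary the test data between runs. A minimal sketch, using a hypothetical `clamp` function as the unit under test and seeded random inputs so that any failure stays reproducible:

```python
import random

def clamp(value, low, high):
    """Toy unit under test: restrict value to the range [low, high]."""
    return max(low, min(value, high))

# Re-running this single fixed case finds nothing new once it passes.
assert clamp(5, 0, 10) == 5

# Varying the inputs each run exercises fresh regions of the input space.
rng = random.Random(42)  # seeded so failures are reproducible
for _ in range(100):
    value = rng.randint(-50, 50)
    result = clamp(value, -10, 10)
    assert -10 <= result <= 10
```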

6. Testing is context dependent


There is no one-size-fits-all solution to finding defects in systems; the test designer must propose
tests according to the system's specifications and functionality.

7. Absence-of-errors is a fallacy
No system will be 100% error proof.
Test Process
There are good practices for the testing process, but no universal solution. Test processes
must have clearly defined coverage criteria (which can work as key performance indicators)
for them to be useful.
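As a sketch of a coverage criterion used as a KPI, the snippet below computes requirement coverage from a mapping of requirement IDs to the tests that exercise them (the IDs and test names are hypothetical):

```python
# Hypothetical traceability: requirement IDs -> tests covering them.
coverage = {
    "REQ-1": ["test_login_ok", "test_login_bad_password"],
    "REQ-2": ["test_logout"],
    "REQ-3": [],  # not yet covered by any test
}

covered = sum(1 for tests in coverage.values() if tests)
ratio = covered / len(coverage)

# The ratio is a simple KPI: it shows progress and points at gaps (REQ-3).
print(f"Requirement coverage: {ratio:.0%}")
```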

In the case of a mobile app, each platform the app will run on becomes part of the test basis, and
each requirement adds another element to it. The test process is standardized in ISO/IEC/IEEE 29119-2.

The test process can be broken down in the following “main” group of activities:

 Test planning
 Test monitoring and control
 Test analysis
 Test design
 Test implementation
 Test execution
 Test completion

Even though these activities look sequential, they can be performed iteratively depending on the
development requirements (ex. Agile working on fast cycles).

Test Planning
Test planning is defining the objectives of the tests and the approach for meeting them (within constraints imposed by context).

Test Monitoring and Control


Test monitoring is checking that the tests are both working as intended and reaching the goals defined
in the Test Planning phase. In this phase we check test results, assess the quality of our components, and
propose new tests as needed.

Test Analysis
Test analysis is the specification of “what to test”:

 Requirement specifications (ex. business, functional, system, user stories, use cases)
 Design and implementation information (ex. diagrams & implementation information,
specifications)
 Risk analysis reports

Identify problems in test basis such as:

 Ambiguities
 Omissions
 Inconsistencies
 Inaccuracies
 Contradictions
 Superfluous statements
Test Design
Test design is the specification of “how to test”

 Design and prioritize test cases and their sets


 Identifying the data required to support test conditions and cases
 Develop and setup the test environment, infrastructure and tools.
 Connection between test basis, test cases, test conditions and test procedures
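As an illustration of designing test cases from a test condition, the sketch below derives boundary-value cases for a hypothetical `shipping_cost` function (the pricing rule is invented for the example):

```python
def shipping_cost(weight_kg):
    """Hypothetical system under test: flat rate below 1 kg, then per-kg."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 if weight_kg < 1 else 5.0 + 2.0 * (weight_kg - 1)

# Cases derived from one test condition: "cost depends on the 1 kg boundary".
# Each case pairs its required test data (input) with the expected result,
# with boundary values prioritized first.
cases = [
    (0.5, 5.0),  # below the boundary
    (1.0, 5.0),  # exactly on the boundary
    (2.5, 8.0),  # above the boundary
]

for weight, expected in cases:
    assert shipping_cost(weight) == expected
```

The table of `cases` is the test-design artifact: it ties the test basis (the pricing rule) to concrete conditions, data, and expected outcomes.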

Test Implementation
Test implementation is the development of testware, as well as the sequencing of test cases into procedures.
If test design asks what to test, test implementation asks "do we have everything needed to run
the tests?"

 Creating automated tests if possible


 Cleaning data
 Creating test suites from test procedures
 Developing a test order
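Sequencing test cases into suites can be sketched with Python's standard `unittest` module; the class and test names here are hypothetical stand-ins for real procedures:

```python
import unittest

class SmokeTests(unittest.TestCase):
    """Fast sanity checks that should run first."""
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

class RegressionTests(unittest.TestCase):
    """Slower checks that run after the smoke tests pass."""
    def test_string_upper(self):
        self.assertEqual("ok".upper(), "OK")

# Building a suite fixes the execution order: smoke first, then regression.
loader = unittest.TestLoader()
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(SmokeTests))
suite.addTests(loader.loadTestsFromTestCase(RegressionTests))

result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```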

Test Execution
 Recording the specifications of the units being tested (ID, software, test tools and testware)
 Executing tests
 Logging results
 Comparing expected and actual results
 Analyzing unexpected results
 Reporting defects
 Re-running tests based on result analysis
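The execute, log, and compare loop above can be sketched as follows, with a hypothetical `square` function standing in for the unit under test:

```python
def square(x):
    """Hypothetical unit under test."""
    return x * x

# Expected results per input, taken from the test specification.
expected = {2: 4, 3: 9, -4: 16}
log = []

# Execute each case, log the outcome, and compare expectation vs. result.
for value, want in expected.items():
    got = square(value)
    log.append({"input": value, "expected": want,
                "actual": got, "passed": got == want})

# Failed entries would feed defect reports and re-test decisions.
failures = [entry for entry in log if not entry["passed"]]
print(f"{len(log) - len(failures)}/{len(log)} passed")
```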

Test Completion
Giving closure to the test cycle, in Test Completion we collect data from the completed test activities
and re-evaluate our tools and processes. Test completion occurs when the project reaches a milestone.

 Verify all test specifications and evaluate their results


 Creating a test summary to be communicated to the stakeholders
 Archiving test environment, data, testware for later use
 Analyze lessons learned
 Use the information to improve test process maturity.

Test Work Products


 Test Planning – Test plans
 Test Monitoring & Control – Various types of reports (progress & summary)
 Test Analysis – Bidirectionally defined, prioritized test conditions
 Test Design – Sets of test cases (can be high-level for reusability)
 Test Implementation – Test procedures and their sequencing, suites, and the test execution
schedule; also the management (including creation) of test data
 Test Execution – Documentation of the test cases/procedures, defect reports, and
documentation of the relevant specifications of the testing procedures and instruments
 Test Completion – Test summary reports, lessons learned, and documentation for future
applications (analyzing the impact of changes, making tests auditable, IT governance criteria,
improved understandability, translation of results for stakeholders, etc.)

Psychology of Testing
A tester must have the interpersonal skills to communicate defects to the other teams, taking into
consideration human psychology phenomena such as confirmation bias.

 Collaboration > Arguments


 Convince others that testing is good
 Share results neutrally
 Be mindful of the other person’s feelings
 Get confirmation on what you’ve talked about

Software Development and Software Testing


There are characteristics of good testing that hold regardless of the software development model:

 Testing activity for each development activity


 Each test level has its own specifications/goals
 Test development stays within the planned boundaries
 The team engages with the test basis as soon as drafts are available

Two types of cycles:

1. Sequential
2. Iterative & incremental

In the Waterfall model, an activity begins only when the previous one is finished.

The V-model implements the principle of early testing by pairing each development phase with a corresponding test level.
 Rational Unified Process: Long periods with groups of two-three related features
 Scrum: Short periods with fast deliveries
 Kanban: Flexible periods and deliverables
 Spiral: Risk-driven development through prototyping, with experimental increments that can be
dropped if they do not function as intended

Component Testing
