Chapter 6 - Testing Strategies

Typically a fault leads to an error, and the error will ultimately lead to a failure.

Failure: any deviation in the observed behaviour from the specified behaviour
Error: the system is in a state where further processing will lead to failure
Fault: the cause of the error
Testing: a planned attempt to find faults

Dynamic and static verification

Dynamic Verification

Concerned with exercising and observing product behaviour (testing) - includes executing the code

Static Verification

Concerned with the analysis of the static system representation to discover problems. Does not include execution, but includes looking at the code. Often can be done during pair programming.

Testing activities

(Slide figure: overview of the testing activities.)

Unit test

A unit can have a lot going on inside it, but it can be isolated from the rest of the system, right down to the individual function/method level. Unit testing is carried out by developers. The main goal of unit testing is to confirm that each subsystem is correctly coded and carries out the intended functionality: "for all the circumstances where this code is run, if certain data is given to it, will it run and give the expected output?"

A successful test is a test which causes a program to behave in an anomalous way. Tests show the presence, not the absence, of defects. Only exhaustive testing can show a program is free from defects; however, exhaustive testing is impossible.

Test data are inputs which have been devised to test the system. Test cases are inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification: at a high level, these are based on the system's use cases.

Integration test

Testing groups of subsystems or units working together. The design document details which units go together and what data is supposed to pass between units. Also carried out by developers. The main goal of integration testing is to test the interfaces between the subsystems.

System testing

The entire system is tested by the developers. The main goal of system testing is to determine if the system meets the requirements (functional and global).

Think about what each person is developing, and then how, through a design document, those pieces are supposed to work together. During testing, always refer to the system requirements, the design and the user manual. It is easy to generate a user manual from the list of use cases: it captures the 'happy path' through each use case and the sets of steps through the system.

Acceptance testing

Does the system suit the needs of the person who will be using it? Acceptance testing usually takes place just before the system leaves the development site (where it is sometimes known as an alpha test). The main goal of acceptance testing is to demonstrate that the system meets customer requirements and is ready to use.

The choice of tests is made by the client/sponsor. Many tests can be taken from the integration testing phase. Acceptance tests are performed by the client, not the developer.

Alpha test

The sponsor/client uses the software at the developer's site. The software is used in a controlled environment, with the developer always ready to fix bugs that show up.

Beta test

Beta tests are conducted at the sponsor/client's site, and the developer is not present. The software gets a realistic environment with some live data. A potential customer might be discouraged at this stage, though, if there are visible issues.

Sample acceptance plan

(Slide figure: sample acceptance test plan.)

Installation test

Sometimes referred to as a beta test. Not a final release, but largely functional and working.

System in use

Once the system has passed its tests, it goes into use with the client. There is still a relationship with the developer during the support/maintenance phase.

Regression testing - JUnit/CPPUnit

Unit tests in Java/C(++) that can be run at any time, typically after some change to the code has been made. They are usually automated, and tests can be written before the code is written (new code is only written to make a failing test pass).

Part of continuous integration: as soon as something is written, test it, amend it if needed, merge the code into the main system, and then test the entire system.

import static org.junit.Assert.*;
import org.junit.Test;

public class MoneyTest {

  @Test
  public void testAdd() {
    // Money is the class under test: two amounts in the same currency
    Money moneyTester = new Money(5, "GBP");
    Money moneyAmount = new Money(5, "GBP");
    Money addResult = moneyTester.add(moneyAmount);
    Money expectedResult = new Money(10, "GBP");

    // Adding 5 GBP to 5 GBP should give 10 GBP, still in GBP
    assertEquals("Amount", expectedResult.getAmount(), addResult.getAmount());
    assertEquals("Currency", expectedResult.getCurrency(), addResult.getCurrency());
  }
}

JUnit and continuous integration (CI)

With CI, when code is changed it is automatically tested, and tests are scheduled regularly. Unit tests, like JUnit tests, make sure that new code works as expected, that amended code still works, and that the software 'build' is not broken as a result of the change. (Builds are scheduled regularly, and happen if tests are successful.)

JUnit tests enable regression testing: making sure that, after a change, the build still works as expected.
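
As a rough sketch of what a CI job does with these tests (reusing the MoneyTest class above), JUnit 4 tests can be run programmatically with JUnitCore; the build passes only if every test succeeds:

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class BuildTestRunner {
  public static void main(String[] args) {
    // Run the whole MoneyTest class, as a CI job would on every change
    Result result = JUnitCore.runClasses(MoneyTest.class);
    for (Failure failure : result.getFailures()) {
      System.out.println(failure.toString());
    }
    // Fail the build (non-zero exit) if any test failed
    System.exit(result.wasSuccessful() ? 0 : 1);
  }
}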

Unit testing techniques

Static analysis

  • Hand execution: reading the source code
    • Not running it, just looking at it
    • Useful in pair programming, where one person is constantly reading the code that is being entered
  • Walkthrough (informal presentation to others)
  • Code inspection (formal presentation to others)
  • Automated tools checking for the following:
    • (Eclipse example: SpotBugs)
    • Syntactic and semantic errors
    • Departure from coding standards
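
As an illustration (not from the slides), this is the kind of semantic error an automated checker such as SpotBugs is designed to flag without ever running the code:

public class NullCheckExample {
  // The null check prints a warning but does not return, so 'o' can
  // still be null when toString() is called on the line below.
  String label(Object o) {
    if (o == null) {
      System.out.println("null input");
    }
    return o.toString();  // possible NullPointerException on the null path
  }
}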

Dynamic analysis

  • Black box testing (test the input/output behaviour)
  • White box testing (test the internal logic of the subsystem or object)
  • Data structure based testing (data types determine test cases)

Black box testing

The approach to testing where the program is considered as a 'black box': test cases are based on the system specification, and inputs from the test data may reveal anomalous outputs (defects). Test planning can begin early in the software development process. The main problem is the selection of inputs.

Equivalence partitioning

Partition system inputs and outputs into 'equivalence sets'.

For example: if an input should be between 4 and 10, choose:

  1. A number less than 4
  2. A number between 4 and 10
  3. A number greater than 10

Choose test cases at the boundary of these sets.
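
A minimal JUnit sketch of the 4-to-10 example, assuming a hypothetical isValid method that accepts values from 4 to 10 inclusive: one test per partition, plus the boundary values, where off-by-one defects typically hide.

import static org.junit.Assert.*;
import org.junit.Test;

public class RangeTest {

  // Hypothetical validator standing in for the real input check
  static boolean isValid(int n) {
    return n >= 4 && n <= 10;
  }

  @Test
  public void testEquivalencePartitions() {
    assertFalse("below range", isValid(2));   // partition 1: less than 4
    assertTrue("within range", isValid(7));   // partition 2: between 4 and 10
    assertFalse("above range", isValid(12));  // partition 3: greater than 10
  }

  @Test
  public void testBoundaries() {
    // Test cases at the edges of each equivalence set
    assertFalse(isValid(3));
    assertTrue(isValid(4));
    assertTrue(isValid(10));
    assertFalse(isValid(11));
  }
}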

Limitations of the black box approach:

  • Potential combinatorial explosion of test cases (valid & invalid data)
  • Often not clear whether the selected test cases uncover a particular error
  • Sometimes difficult to fully test GUIs
    • Sometimes automated tests can be set up for websites
  • Difficult to trace the steps back to the source of any problem that comes up

White box testing

Sometimes called structural testing or glass box testing. As the developer, you know the different flows of data through the system, and the testing process forces data down particular routes in the software. A thorough test checks that each line, loop and function is executed, making sure the code behaves properly under all circumstances (a small sketch follows the list below).

Limitations of the white box approach:

  • A potentially infinite number of paths has to be tested
  • Often tests what is done, instead of what should be done
  • Cannot detect missing use cases
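
A minimal sketch of the idea, using a hypothetical method with two routes through it; white box testing demands one test per path so that every branch is executed:

import static org.junit.Assert.*;
import org.junit.Test;

public class DiscountTest {

  // Hypothetical method under test with two paths: member and non-member
  static double discountedPrice(double price, boolean isMember) {
    if (isMember) {
      return price * 0.9;  // path 1: 10% member discount
    }
    return price;          // path 2: full price
  }

  @Test
  public void testMemberPath() {
    assertEquals(90.0, discountedPrice(100.0, true), 0.001);
  }

  @Test
  public void testNonMemberPath() {
    assertEquals(100.0, discountedPrice(100.0, false), 0.001);
  }
}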

Static Verification

Verifying the conformance of a software system and its specification without executing the code

It involves the analysis of source text by humans or other software. Static verification also takes place on any documents produced as part of the software process. It can discover errors early in the software process.

Two main techniques

  • Walkthroughs - 4 to 6 people examine the code and list perceived problems, then meet to discuss them
  • Program inspections (Fagan) - formal approach using checklists

More than 60% of program errors can be detected in software inspections. For defect detection at the unit and module level, inspection is usually more cost-effective than testing.

Stress testing

Some systems need to handle specified loads. Stress testing tests the failure behaviour of the system, and shows up defects that only occur under high volume.
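
A minimal sketch of the idea, with a hypothetical handleRequest method standing in for the real subsystem; a thread pool simulates many concurrent users and the test waits for the full load to drain:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class StressSketch {

  static final AtomicInteger handled = new AtomicInteger();

  // Hypothetical stand-in: a real stress test would call the actual service
  static void handleRequest() {
    handled.incrementAndGet();
  }

  public static void main(String[] args) throws InterruptedException {
    int requests = 10_000;                                     // simulated peak demand
    ExecutorService pool = Executors.newFixedThreadPool(200);  // simulated concurrent users
    CountDownLatch done = new CountDownLatch(requests);
    for (int i = 0; i < requests; i++) {
      pool.submit(() -> {
        try {
          handleRequest();
        } finally {
          done.countDown();
        }
      });
    }
    done.await();  // block until the whole load has been processed
    pool.shutdown();
    System.out.println("Handled " + handled.get() + " of " + requests + " requests");
  }
}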

Integration testing strategies

The order in which the subsystems are selected for testing and integration determines the testing strategy:

Big bang integration

Non-incremental: all the subsystems are combined at once and the whole system is tested together.

Bottom up integration
  • Lowest layer subsystems tested individually
  • Subsystems that call those subsystems are tested next
  • Repeat until all subsystems are included in the testing
  • A special program called a test driver is required.
    • A test driver calls a subsystem and passes a test case to it
  • It tests the most important subsystem (UI) last (disadvantage)
  • Useful for integrating OO systems (advantage)
Top down integration
  • Tests each of the called subsystems
  • Repeat until all subsystems are incorporated into the test
  • A special program called a test stub is needed to do the testing. It simulates the activity of a missing subsystem (one that is not accessible, or has not yet been developed); see the driver-and-stub sketch after this list.
  • Writing stubs is difficult, and a very large number of stubs may be required (disadvantages)
  • Test cases can be defined in terms of the functionality of the system (advantage)
Sandwich testing
  • Combines top down strategy with bottom up strategy.
  • Does not test all the individual subsystems thoroughly before integration (disadvantage)
  • Parallel top and bottom layer tests (advantage)
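
A minimal sketch of a test driver and a test stub, using hypothetical Checkout and PaymentService classes; the stub gives a canned answer in place of the missing subsystem, while the driver feeds a test case to the subsystem under test:

// Hypothetical interface of a subsystem that is not yet available
interface PaymentService {
  boolean charge(String account, double amount);
}

// Test stub: simulates the activity of the missing subsystem
class PaymentServiceStub implements PaymentService {
  @Override
  public boolean charge(String account, double amount) {
    return true;  // canned response instead of real payment processing
  }
}

// Subsystem under test, wired to whichever PaymentService it is given
class Checkout {
  private final PaymentService payments;
  Checkout(PaymentService payments) { this.payments = payments; }
  boolean placeOrder(String account, double amount) {
    return payments.charge(account, amount);
  }
}

// Test driver: calls the subsystem and passes a test case to it
public class CheckoutDriver {
  public static void main(String[] args) {
    Checkout checkout = new Checkout(new PaymentServiceStub());
    System.out.println(checkout.placeOrder("acct-1", 25.0) ? "PASS" : "FAIL");
  }
}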

System testing

  1. Functional testing (black box)
  2. Structure testing (white box)
  3. Performance testing
  4. Acceptance testing
  5. Installation testing

The impact of requirements on system testing is as follows:

  • The more explicit the requirements, the easier they are to test
  • The quality of the use cases determines the ease of functional testing
  • The quality of the subsystem decomposition determines the ease of structure testing
  • The quality of the nonfunctional requirements and constraints determines the ease of performance testing

Performance testing

  • Stress testing - stress the limits of the system: maximum number of users, peak demands, extended operation
  • Timing testing - evaluate response times and the time to perform a function. How fast is 'real time' for the user? A response could be needed instantly, as in a vehicle's auto braking system, or a response time of a couple of seconds may be acceptable
  • Volume testing - test what happens if large amounts of data are handled
  • Environmental testing - test tolerances for heat, humidity, motion and portability
  • Configuration testing - test the various software and hardware configurations
  • Quality testing - test the reliability, maintainability and availability of the system
  • Compatibility testing - test backward compatibility with existing systems
  • Recovery testing - test the system's response to the presence of errors or loss of data
  • Security testing - try to violate security requirements
  • Human factors testing - test the user interface with the end user

 
