CSC2045 - Software Engineering

Software Engineering

Chapter 1 - Introduction


Marks depend in part on how well the software is planned and organised, and on the use of version management software.
Refer to the CSC2045 Software Engineering Information sheet for more information on how the marks are divided, and when exams and deadlines are.

Use Cases

Sets of sequences of actions. Use cases should be thought about right from the beginning of the process. In use case diagrams, only have a few well defined use cases.

Chapter 2 - Software Process (Life Cycle)

Primitive Software Process Model

For a simple program written by one person, this works fine.



More Complex Software Process Model


Generic Engineering Process

  1. Specification - Set out requirements and constraints
  2. Design - Produce a paper model of the system
  3. Manufacture - Build the system
  4. Test - Check the system meets the required specifications
  5. Install - Deliver system to customer and ensure it is operational
  6. Maintain - Repair faults in the system as they are discovered

Software Process - Who is involved?


The stick men are called Actors.

Software Process Models

Waterfall model

  • Requirements Definition
    • Design
      • Implementation
        • Verification and Validation
          • Installation
            • Operation & Maintenance

Problems with the Waterfall Model

The requirements may not be fully understood or need to be changed, and it would be too late to change the software easily.

  • Managers love waterfall models
    • Nice milestones
    • No need to look back, one activity at a time
    • Easy to check progress
  • Software development is iterative
    • During design, problems with requirements are identified
    • During coding, design and requirement problems are found
    • During testing, coding, design and requirement errors are found
  • System development is a non-linear activity
  • Verification and Validation come too late in the process

Evolutionary Models

Waterfall models are poor at managing risk, as they assume that all requirements are known at the beginning. Prototyping can provide the answer. Software in evolutionary strategies is grown rather than built.

Throwaway prototyping

Building a prototype to test a hypothesis, also known as a 'spike'. The prototype is then discarded, and the real product is built using the lessons learned from it.

Evolutionary prototyping

Build a prototype as a demonstration, then use it as the base for the fully functioning software. Prototyping is particularly common in designing GUIs. It is lower risk than the waterfall model because testing happens very early, spreading the risk out over the process.

Difficulties with evolutionary models

  • Planning
  • Managers don't like it
  • No guarantee that the end will be reached
  • Architecture could get messy, bringing the need for a rework/refactoring job near the end of the process.

Spiral model


  1. Determine objectives
  2. Evaluate alternatives - identify and resolve risks
  3. Develop and verify (often using a waterfall model)
  4. Evaluate and plan the next quadrant

Incremental and iterative methods

The aim is to have a good working system at every stage of development. 

This method is good for a number of reasons:

  • Early feedback
  • Better time to market for high-priority features
  • Easier reaction to change
  • Better estimates
  • Lower risk



Design to schedule 

The higher priority tasks are completed first so that if there is a shortage of time or money, there won't be major components missing.

Architecture must be open at each stage so that the software can be added to and changed as needed.

The Gantt Chart

The Gantt chart allows development strands, and the relationships between them, to be represented on a timeline. Gantt charts are intended to represent an iterative development process involving some design, some implementation, some testing, then back to design. Lines on a Gantt chart represent dependencies.

Good Gantt charts are readable at a glance

Capability Maturity Model

  1. Initial level - also called ad hoc or chaotic
    • No problem statement or requirements specification
  2. Repeatable level - process depends on individuals ("champions")
  3. Defined level - process is institutionalised (sanctioned by management)
  4. Managed level - activities are measured and provide feedback for resource allocation
  5. Optimising level - process allows feedback of information to change process itself

It progresses from chaos to a closely managed and monitored software development process that improves itself through feedback.


Chapter 3 - Software Project Management - Estimation, Scheduling and Metrics

White diamond - aggregation (a part of)

Cost Estimation Techniques

  • Expert Judgement
  • Past Experience
    • Build up a databank of past projects and their cost
  • Top down
    • Break the problem up into smaller problems and estimate these
  • Function Point analysis
    • Uses the requirement specification to assess inputs, outputs, file accesses, user interactions and interfaces and calculates the size based on these
  • Algorithmic Cost Modelling
    • COnstructive COst MOdel (COCOMO), Barry Boehm 1981, has been an influential approach

Function Point Analysis (FPA)

Getting an idea of complexity from the quantity of components.

The following system components are considered. Number of:

  • external inputs (eg, input files of transactions)
  • external outputs (eg output files of reports, messages)
  • user interactions / enquiries (eg menu selection, queries)
  • internal or logical files used by the system
  • number of external or interface files shared with other applications

In each case, depending on the number of field types associated with each component, and the variety of file types that these elements refer to, these components are rated as simple, average or complex.

A weight is associated with each of the ratings: simple, average or complex. The number of external inputs is multiplied by the selected weighting for that system component; likewise for external outputs and the other system components.


The weighted totals are added to give the Unadjusted Function Points (UFP).

UFP is then adjusted to take account of the type of application:

  • This adjustment is made by multiplying UFP by a technical complexity factor.
  • As preparation for calculating TCF, fourteen General System Characteristics are scored for Degree of Influence from 0 to 5 (no influence to strong influence)
    1. Data communications
    2. Distributed functions
    3. Performance
    4. Heavily used configuration
    5. Transaction Rate
    6. On-line data entry
    7. End-user efficiency
    8. On-line update
    9. Complex Processing
    10. Reusability
    11. Installation ease
    12. Operational ease
    13. Multiple sites
    14. Facilitate change
  • TCF is then calculated as TCF = 0.65 + 0.01 × DI, where DI is the total Degree of Influence from the 14 scored characteristics

Function points (FP) are calculated as FP = UFP * TCF. FPs can be used to estimate Lines of Code (LOC), assuming that the average number of LOC per FP for a given language can be calculated.
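As a rough sketch of the arithmetic above: UFP is a weighted sum of the component counts, TCF is derived from the total Degree of Influence, and FP = UFP × TCF. The component counts and weights below are illustrative, not from the source.

```java
public class FunctionPointEstimate {
    // Technical Complexity Factor from the total Degree of Influence (0-70)
    static double tcf(int totalDI) {
        return 0.65 + 0.01 * totalDI;
    }

    // Unadjusted Function Points: each component count times its chosen weight
    static int ufp(int[] counts, int[] weights) {
        int total = 0;
        for (int i = 0; i < counts.length; i++) {
            total += counts[i] * weights[i];
        }
        return total;
    }

    static double functionPoints(int ufp, int totalDI) {
        return ufp * tcf(totalDI);
    }

    public static void main(String[] args) {
        // Hypothetical system: inputs, outputs, enquiries, internal files, interface files
        int[] counts  = {10, 7, 5, 4, 2};
        // Illustrative "average" weights for the five component types
        int[] weights = {4, 5, 4, 10, 7};
        int ufp = ufp(counts, weights);            // 40 + 35 + 20 + 40 + 14 = 149
        System.out.println("UFP = " + ufp + ", FP = " + functionPoints(ufp, 35));
    }
}
```

With a total Degree of Influence of 35, TCF is exactly 1.0, so FP equals UFP; a higher or lower DI scales the estimate up or down.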

But there are some difficulties:

  • FPs, and particularly the scores given to the General System Characteristics, are very subjective. They cannot be counted automatically and depend on the analyst's assessment.
  • There are only 3 complexity levels for weighting the functionality of the main system components
  • The approach has to be calibrated or adjusted for different programming tasks and environments


3 classes of project are recognised:
  1. Simple (or organic): small teams, familiar environment, well understood applications, no difficult non-functional requirements
  2. Moderate (or semi-detached): project team may have experience mixture, system may have more significant non-functional constraints, organisation may have less familiarity with application.
  3. Embedded: HW/SW systems, tight constraints, unusual for team to have deep application experience.

COCOMO: E = a × KDSI^b; D = 2.5 × E^c

  • E = Effort in Person-months
  • a, b, c = Constants based on project class & historical data
  • D = development time in months
  • KDSI = Thousands of delivered source instructions (~Lines of code)

The multipliers and exponents for basic COCOMO's formulae change according to the class of the project.
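The basic formulae can be sketched directly; the constants shown for the organic class (a = 2.4, b = 1.05, c = 0.38) are Boehm's published 1981 values, used here only as an illustration.

```java
public class BasicCocomo {
    // E = a * KDSI^b (person-months)
    static double effort(double a, double b, double kdsi) {
        return a * Math.pow(kdsi, b);
    }

    // D = 2.5 * E^c (months)
    static double duration(double effort, double c) {
        return 2.5 * Math.pow(effort, c);
    }

    public static void main(String[] args) {
        // Boehm's 1981 basic-model constants for an organic project
        double e = effort(2.4, 1.05, 32);   // 32 KDSI, i.e. ~32,000 delivered source instructions
        double d = duration(e, 0.38);
        System.out.printf("Effort: %.1f person-months, Duration: %.1f months%n", e, d);
    }
}
```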


Intermediate COCOMO

Intermediate COCOMO takes the basic COCOMO formula as its starting point. The value a is equal to 3.2, 3.0, 2.8, for organic, semi-detached and embedded projects respectively.

Additionally Intermediate COCOMO identifies personnel, product, computer and project attributes which affect cost. With intermediate COCOMO, the basic cost is adjusted by attribute multipliers: presently we'll give values to a sample of these multipliers:


These are attributes which were found to be significant in one organisation with a limited project history database. Other attributes may be more significant for other projects and other organisations.

Configuration Management

In an on-going project, all products of the software process have to be managed.

  • Specifications
  • Designs
  • Programs
  • Test data
  • User manuals

Thousands of separate documents are generated for a large software system.

CM Plan
  • Defines the types of documents to be managed and a document naming scheme
  • Defines who takes responsibility for the CM procedures and creation of baselines
  • Defines policies for change control and version management
  • Defines the CM records which must be maintained

The configuration database

All CM information should be maintained in a configuration database. This should allow for queries. The CM database should preferably be linked to the software being managed. (eg. Who has a particular system version? What platform is required for a particular version? What versions are affected by a change to component X?)

The change management process

  • Request change by completing a change request form
  • Analyse change request
  • if change is valid then
    • assess how change might be implemented
    • assess change cost
    • submit request to change control board
    • if change is accepted then
      • do this until software quality is adequate
        • make changes to software
        • submit changed software for quality approval
      • create new system version
    • else
      • reject change request
  • else
    • reject change request

Derivation history

A record of changes applied to a document or code component. It should record, in outline form, the change made, the rationale for the change, who made the change and when it was implemented.



Version

An instance of a system which is functionally distinct in some way from other system instances.

Variant

An instance of a system which is functionally identical but non-functionally distinct from other instances of a system.

Release

An instance of a system which is distributed to users outside of the development team.

Software Metrics

When you can measure what you are speaking about and express it in numbers, you know something about it

Allow processes and products to be assessed. Used as indicators for improvement

Quality Metrics

Defect Removal Efficiency

Measures how good quality assurance is. DRE should be 1 (in an ideal situation)

DRE = E/(E+D)

E = Number of errors before delivery.
D = Number of defects after delivery.

Defects per kLOC

C = #defects/kLOC


Integrity

A system's ability to withstand attacks (including accidental ones) on its security.

I = Σ [1 - threat × (1 - security)]

Threat = probability that an attack of a particular type will occur at a given time
Security = probability that an attack of that type will be repelled
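A minimal sketch of the three quality metrics defined above (DRE, defects per kLOC, and integrity); the class and method names are hypothetical.

```java
public class QualityMetrics {
    // DRE = E / (E + D): fraction of defects caught before delivery (ideally 1)
    static double dre(int errorsBefore, int defectsAfter) {
        return (double) errorsBefore / (errorsBefore + defectsAfter);
    }

    // C = #defects / kLOC
    static double defectsPerKloc(int defects, double kloc) {
        return defects / kloc;
    }

    // I = sum over attack types of 1 - threat * (1 - security)
    static double integrity(double[] threat, double[] security) {
        double sum = 0;
        for (int i = 0; i < threat.length; i++) {
            sum += 1 - threat[i] * (1 - security[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        // 90 errors found before delivery, 10 defects found after: DRE = 0.9
        System.out.println("DRE = " + dre(90, 10));
        System.out.println("Defects/kLOC = " + defectsPerKloc(30, 15.0));
    }
}
```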

Design Complexity

'fan-in and fan-out' in a structure chart. High fan-in = high coupling. High fan-out = high coupling (complexity). Length is any measure of program size such as LOC.

Complexity = Length × (fan-in × fan-out)²
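The formula can be expressed directly; the class and method names are hypothetical.

```java
public class InformationFlow {
    // complexity = length * (fanIn * fanOut)^2
    static double complexity(double length, int fanIn, int fanOut) {
        return length * Math.pow((double) fanIn * fanOut, 2);
    }

    public static void main(String[] args) {
        // A 100-line module with fan-in 3 and fan-out 2 scores 100 * 6^2 = 3600
        System.out.println(complexity(100, 3, 2));
    }
}
```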

Reliability Metrics

Probability of failure on demand

This is a measure of the likelihood that the system will fail when a service request is made. A PoFOD of 0.001 means that 1 out of every 1000 service requests results in failure. This is relevant for safety-critical or non-stop systems. It is computed by measuring the number of system failures for a given number of system inputs.

Rate of occurrence of failures at time t

The mean rate of failures per unit time at time t. A RoCOF of 0.02 means 2 failures are likely in each 100 operational time units.
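Both reliability metrics above reduce to simple ratios over observed failures; a sketch with hypothetical names:

```java
public class ReliabilityMetrics {
    // PoFOD: failures observed per service request
    static double pofod(int failures, int requests) {
        return (double) failures / requests;
    }

    // RoCOF: failures observed per unit of operational time
    static double rocof(int failures, double operationalTimeUnits) {
        return failures / operationalTimeUnits;
    }

    public static void main(String[] args) {
        // 1 failure in 1000 requests -> PoFOD 0.001; 2 failures in 100 time units -> RoCOF 0.02
        System.out.println(pofod(1, 1000));
        System.out.println(rocof(2, 100.0));
    }
}
```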


Chapter 4 - Scrum

Scrum Project Management

Scrum is an agile process that allows for focus on delivering the highest business value in the shortest time. It allows for rapid and repeated inspection of actual working software (every two weeks to one month).

The business sets the priorities. Teams self-organise to determine the best way to deliver the highest priority features. Every two weeks to a month, anyone can see real working software and decide to release it as is, or continue to enhance it for another sprint.


Scrum projects make progress in a series of iterations called sprints. Typically, the duration is two to four weeks or a calendar month at most. A constant duration leads to a better rhythm. The product is designed, coded and tested during the sprint.


Product Backlog

A list of user stories

Sprint backlog

A list of tasks to be completed during the sprint. Can be discussed in the daily scrum

Changes during a sprint

There are no changes during a sprint. Plan sprint duration around how long keeping change out of the sprint can be committed to.

Sequential vs overlapping development


Scrum Framework


Product Owner
  • Define the features of the product
  • Decide on release date and content
  • Be responsible for the profitability of the product
  • Prioritise features according to market value
  • Adjust features and priority every sprint as required
  • Accept or reject work results
  • Decide if increment will be released
The Scrum Master
  • They oversee the process, and make sure that it is running smoothly
  • They are not concerned with people management
  • Represents management to the project
  • Responsible for enacting Scrum values and practices
  • Removes impediments
  • Ensure that the team is fully functional and productive
  • Enable close cooperation across all roles and functions
  • Shield the team from external interference
The Scrum Team
  • May range from three to nine people.
  • Cross functional - programmers, testers, UI designers.
  • Members should be full time. (full time working on that project, however sometimes a database administrator might be shared across teams)
  • Teams are self organising. They decide how they will do the work.


  • Sprint Planning
  • Sprint Review
    • Considering how well the customer's requirements were met
    • Getting product owner's approval for shipping/deployment
  • Sprint retrospective
  • Daily scrum meeting
    • Letting fellow team members know how they are getting on.
    • A commitment. You would be expected to work on what you say you were working on.


  • Product backlog
  • Sprint backlog
  • Burndown charts

Sprint planning

Time boxed to a maximum of 8 hours for a one month sprint, and less than this for shorter sprints. The team selects the highest priority items from the product backlog they can commit to completing.

Sprint backlog is created:

  • Tasks are identified and each is estimated (1-7 hours)
  • Collaboratively, not done alone by the Scrum Master

High level design is considered in the sprint planning stage.

The daily scrum

Daily, 15 minute stand up meetings. They are not for problem solving. Everyone is invited, but only the scrum team must attend. Only the scrum team, the scrum master and the product owner can talk during the meeting. These are called the Pigs, others are Chickens. This helps to avoid other unnecessary meetings.

Three questions

Everyone answers 3 questions. They are not just to tell the scrum master the status of the work. They are commitments made in front of your peers, as social commitment is a very strong motivator.

  1. What did you do yesterday?
  2. What will you do today?
  3. Is there anything in your way?

The scrum meeting should be held in the same place at the same time every work day. The daily scrum is best held first thing in the day so that the first thing team members do on arrival at work, is think about what they did the day before, and what they plan to do today.

All members are required to attend, but if they cannot, the absent member must attend by phone, or by having another member report on the status of the absent member on their behalf.

Team members must be prompt. The scrum master starts the meeting at the appointed time, regardless of who is present. Any team member who is late, has to pay or serve a fine of some sort. The scrum master begins the meeting by starting with the person to his left, and proceeding around the room until everyone has reported.


Team members should not digress beyond answering these three questions into issues, designs, discussions of problems or gossip. The scrum master is responsible for moving the reporting along briskly from person to person. During the daily scrum, only one person talks at a time. That person is the one who is reporting his status. Everyone else must listen, there are to be no side conversations.

When a team member reports something that is of interest to other team members, or needs the assistance of other team members, any team member can immediately arrange for all interested parties to get together after the daily scrum.

Chickens are not allowed to talk, or otherwise make their presence in the daily scrum meeting obtrusive. They stand on the periphery of the team, so as to not interfere with the meeting. If there are too many chickens, the scrum master can limit attendance so that the meeting can remain orderly and focused. Chickens are not allowed to talk with team members after the meeting for clarification or to provide advice or instructions.

Common problems of Scrum meetings
  • Implicit impediment
    • Listen to everything; sometimes someone mentions an impediment but doesn't identify it as such
  • Side discussion
    • Ask people to listen when they are not speaking
  • Rambling
    • Ask people to summarise more quickly
  • Sidetracked meeting
    • Ask people to have a meeting immediately afterwards for people who care about the topic
  • Observer who speaks
    • Remind them that they are an observer
  • Late arrival
    • Charge them £1 if that's what your team does.

Chapter 5 - User Stories

A user story is a form of Agile Requirements specification.

A concise, written description of a piece of functionality that will be valuable to a user (or owner) of the software.

They define what will be built into the project. They are formulated in one or two sentences in everyday language, understood by both developers and customers/product owners. Anyone can write user stories - typically it's the team members in dialogue with the customer.

The work required to implement them is estimated by the developers, and prioritised by the customer after estimation is carried out.

Benefits of user stories

  • They are quick to write compared to formalised requirements specifications
  • Easier to modify and maintain when requirements change than more formal specifications
  • Enable a quick response to changing requirements (main idea behind agile methods)
  • Encourage face to face communication
  • Minimal investment of time needed until really necessary
  • Allows projects to be broken up into 'sprintable' chunks
  • Gives a unit of work that developers can estimate

How to write user stories

The following general format is commonly used when writing user stories:

As [role], I can [feature] so that [reason]

Stop focussing on the word 'user', start focussing on roles. Replace the word with 'actor'.

Eg: As an unregistered user, I can register so that I can log in. As a registered user, I can log in, so that I can see my profile page. As a registered user, I can log in, so that I can start a chat conversation.

Sometimes, "so that" can be left off as it can seem redundant. The format does not need to be followed strictly.

Some alternative formats could be:

  • As a [role], I want [feature] because [reason]
  • As a [role], I want [feature] in order that [reason]
  • As a [role], I can [feature]
  • As a [role], I can [feature] so that [person]

The three C's


Card

Stories are traditionally written on index cards (3" by 5"). The card is annotated with estimates, notes etc.

The following can be captured on the card:

  • Story number
  • Story name - a simple name that can be used to identify it in conversation
  • Story description - "As a ..."
  • Developer's estimate - in story points
  • Conversation notes - can include a sketch of UI, etc
  • Confirmation details (acceptance test details - often written on the reverse side of the card)

They keep the story small, and their descriptions concise. They can be displayed easily for the whole team to look at and monitor. They can be put on a board and moved around easily as the work is completed.


If teams are not co-located, the stories cannot be shared unless they are in electronic format. Cards are also impractical if teams do not have a permanent workspace of their own.


Conversation

Details of the story are learned during conversations with the customer.


Confirmation

Acceptance tests confirm the story was implemented correctly. The story must be testable to allow developers to know when they are finished. Each story should be accompanied by at least one acceptance test: Given [some context], when [an event occurs], then [some outcome]. They should be written and refined by product owners and developers following conversations.

User Stories or Use Cases

There is a difference between user stories and use cases.

A use case is a set of sequences of actions that yield a result that is of value to an actor.

Use cases are fundamentally different to user stories. A use case captures the main flow to get the job done, and any alternative flows arising from user choices or exceptions. They take more upfront effort, but make it easier to gauge the difficulty of the task.

User stories, by contrast, are individual narratives that collectively build up a picture of what the user really wants and how it can be tested.

For the project, use cases are required.

User roles

It is unusual for there to be only one type of user. The role may refer to a logical user rather than physical. In some cases, the 'user' will appear to be a service or software component.

Users can vary by

  • What they use the software for
  • How they use the software
  • Familiarity with software/computers

Characteristics of good user stories

  • Independent - otherwise they are difficult to prioritise
  • Negotiable - not contracts, just reminders for discussion, but the customer is the person who 'knows' exactly what it means; the developer strives to understand it clearly
  • Valuable - to the customer, otherwise why implement it?
  • "Estimable"
    • Developers may not know enough about the domain
    • Developers may not know enough about the technology
    • The story is too large to see the 'edges'
  • Brief - stick to the recommended formats
  • Testable - very easy to write un-testable requirements - take care to ensure that the product owner and the developer both agree what a successful outcome will look and act like

A story writing workshop

Includes developers, users, customers, others

  • Brainstorm to generate stories
  • Goal is to write as many stories as possible
  • Some will be implementation ready
  • Others will be epics
  • No prioritisation is done at this point
  • Use questionnaires to gain further stories when appropriate
  • Hold user interviews/observe users at work

User stories should ideally be considered as full vertical slices of the system.

Story Estimation

At the start of a project, the initial set of user stories needs to be estimated. Estimates are expressed in story points. Some practitioners say not to equate story points to a measure of time, but others say to think of them in terms of 'ideal programming days'. The estimates should be relative as opposed to absolute.

The team should make these estimates together using all expertise available. This can be done using a process called planning poker.

Planning poker

Participants include all of the developers in the team. Each estimator is given a deck of planning cards. Each card has a valid estimation value on it. Some practitioners suggest that these values should follow a Fibonacci-like sequence. This speeds up estimation because there are only a few choices. It also avoids a false sense of accurate estimates, and indicates the stories that need to be split up (>20 points).

Steps in planning poker

Step 1
  1. Someone reads out the next user story to be estimated
  2. The product owner answers any questions that the estimators might have about it
  3. ... although these discussions should not go on for too long
  4. The estimates obtained are not considered definite and final, so it is not worthwhile to invest too much time trying to make them completely accurate
  5. It is more important that the relative estimates are good
Step 2
  1. Each estimator privately selects a card representing his/her estimate
  2. Cards are not shown until each person has made a selection
  3. At that time all cards are simultaneously turned over and shown to the rest of the team
Step 3

It is likely that the estimates will not be the same. The high and low estimators for this story should explain their estimates. This should not be about arguing your case though. The exercise is just about learning what everyone was thinking. A low estimator may not see some complexity in a story that the higher estimators can see.

Or alternatively the low estimators might see a straightforward solution to something that the high estimators cannot see.

The discussion should take no more than about 2 minutes before a second round of cards is performed in the same way as before.

Achieving consensus

In many cases this will be enough for the team to come to a consensus on an estimate. However more rounds may be necessary. If the team has the following estimates for a story (5, 5, 5, 3, 5, 5), then the low estimator will be asked if they would be OK with a 5.
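The mechanics of a round can be sketched as follows; the deck values follow the Fibonacci-like sequence mentioned earlier, while the card-rounding helper is purely illustrative.

```java
import java.util.Arrays;

public class PlanningPoker {
    // Fibonacci-like deck commonly used for story points
    static final int[] DECK = {1, 2, 3, 5, 8, 13, 20, 40, 100};

    // Illustrative helper: round a raw guess up to the nearest card in the deck
    static int nearestCard(int guess) {
        for (int card : DECK) {
            if (card >= guess) return card;
        }
        return DECK[DECK.length - 1];
    }

    // Consensus is reached when all revealed cards match
    static boolean consensus(int[] estimates) {
        return Arrays.stream(estimates).distinct().count() == 1;
    }

    public static void main(String[] args) {
        int[] round1 = {5, 5, 5, 3, 5, 5};
        // No consensus: the low estimator explains, then a second round follows
        System.out.println("Round 1 consensus: " + consensus(round1));
    }
}
```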

Remember that the process is not about precision - it is about reasonableness

Chapter 6 - Testing Strategies

Typically a fault leads to an error, and the error will ultimately lead to a failure.

Failure: any deviation in the observed behaviour from the specified behaviour
Error: the system is in a state where further processing will lead to failure
Fault: the cause of the error
Testing: plans to find faults

Dynamic and static verification

Dynamic Verification

Concerned with exercising and observing product behaviour (testing) - includes executing the code

Static Verification

Concerned with the analysis of the static system representation to discover problems. Does not include execution, but includes looking at the code. Often can be done during pair programming.

Testing activities


Unit test

A unit can have a lot going on in it, but it can be isolated from the rest of the system. It can get right down to the individual function/method level. Carried out by developers.  The main goal of unit testing is to confirm that each subsystem is correctly coded and carries out the intended functionality. "For all the circumstances where this code is run, if certain data is given to it, will it run and give the expected output?"

A successful test is a test which causes a program to behave in an anomalous way. Tests show the presence, not the absence, of defects. Only exhaustive testing can show a program is free from defects - however, exhaustive testing is impossible.

Test data are inputs which have been devised to test the system. Test cases are inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification: at a high level, these are based on the system's use cases.

Integration test

Testing groups of subsystems or units working together. The design document details which units go together and what data is supposed to pass between units. Also carried out by developers. The main goal of integration testing is to test the interfaces between the subsystems.

System testing

The entire system is tested by developers. The main goal of system testing is to determine if the system meets the requirements (functional and global)

Think about what every person is developing, and then how, through a design document, things are supposed to work together. During testing, always refer to the system requirements, design and user manual. It is easy to generate a user manual from the list of use cases, since they describe the 'happy path' and the sets of steps through the system.

Acceptance testing

Does it suit the needs of the person who will be using it? It usually takes place just before the system leaves the development site (sometimes known as an alpha test). The main goal of acceptance testing is to demonstrate that the system meets customer requirements and is ready to use.

The choice of tests is made by the client/sponsor. Many tests can be taken from the integration testing phase. Acceptance tests are performed by the client, not the developer.

Alpha test

The sponsor/client uses the software at the developer's site. The software is used in a controlled environment, with the developer always ready to fix bugs that show up.

Beta test

Beta tests are conducted at the sponsor/client's site, and the developer is not present at this site. The software gets a realistic environment with some live data. The potential customer might get discouraged at this stage though if there are visible issues.

Sample acceptance plan


Installation test

Sometimes referred to as a beta test. Not a final release, but largely functional and working.

System in use

Once the system has passed its tests, it goes into use with the client. There is still a relationship with the developer during a support/maintenance phase.

Regression testing - JUnit/CPPUnit

Unit tests in Java/C(++) that can be run at any time, but typically after some change in the code has been made.  They are usually automated, and tests can be written before the code is written. (New code only written when a test doesn't work).

Part of continuous integration - the notion that as soon as something is written, test it, then amend if needed, then put the code into the main system, and test the entire system.

import static org.junit.Assert.*;
import org.junit.Test;

public class MoneyTest {
  // Assumes a Money class with an amount, a currency code, and an add() method
  @Test
  public void testAdd() {
    Money moneyTester = new Money(5, "GBP");
    Money moneyAmount = new Money(5, "GBP");
    Money addResult = moneyTester.add(moneyAmount);
    Money expectedResult = new Money(10, "GBP");
    assertEquals("Amount", expectedResult.getAmount(), addResult.getAmount());
    assertEquals("Currency", expectedResult.getCurrency(), addResult.getCurrency());
  }
}

JUnit and continuous integration (CI)

With CI, when code is changed, it is automatically tested, and tests are scheduled regularly. Unit tests, like JUnit tests, make sure that new code works as expected, that amended code still works, and that the software 'build' is not broken as a result of the change. (Builds are scheduled regularly, and happen if tests are successful.)

JUnit tests enforce regression testing - making sure that after change, the build works as expected.

Unit testing techniques

Static analysis

  • Hand execution: reading the source code
    • Not running it, just looking at it
    • Useful in pair programming, where one person is constantly reading the code that is being entered
  • Walkthrough (informal presentation to others)
  • Code inspection (formal presentation to others)
  • Automated tools checking for the following:
    • (Eclipse example: SpotBugs)
    • Syntactic and semantic errors
    • Departure from coding standards

Dynamic analysis

  • Black box testing (test the input/output behaviour)
  • White box testing (test the internal logic of the subsystem or object)
  • Data structure based testing (data types determine test cases)

Black box testing

The approach to testing where the program is considered as a 'black box'. The program test cases are based on the system specification. Inputs from test data may reveal anomalous outputs (defects). Test planning can begin early in the software development process. The main problem is the selection of inputs.

Equivalence partitioning

Partition system inputs and outputs into 'equivalence sets'.

For example: if an input should be between 4 and 10, choose:

  1. A number less than 4
  2. A number between 4 and 10
  3. A number greater than 10

Choose test cases at the boundary of these sets.
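A minimal sketch of these partitions as unit tests, using a hypothetical validator for the 4-to-10 range in the example above:

```java
// Hypothetical validator for the example input range above.
class RangeValidator {
  // Accepts values in the inclusive range 4..10.
  static boolean isValid(int n) {
    return n >= 4 && n <= 10;
  }
}
```

One test case per partition (e.g. 2, 7, 12) exercises each equivalence set; the boundary values 3, 4, 10 and 11 catch off-by-one errors at the edges.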

  • Potential combinatorial explosion of test cases (valid & invalid data)
  • Often not clear whether the selected test cases uncover a particular error
  • Sometimes difficult to fully test GUIs
    • sometimes automatic tests can be set up for websites
  • Difficult to trace the steps back to the source of any problem that comes up

White box testing

Sometimes called structural testing or glass box testing. You, as the developer, know the different flows of data through the system. The testing process forces data down certain routes in the software. A thorough test will check that each line, loop and function is executed.

Making sure it will behave properly under all circumstances.

  • Potentially infinite number of paths have to be tested
  • Often tests what is done, instead of what should be done
  • Cannot detect missing use cases
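As a sketch of forcing data down each route, consider this hypothetical method with two paths; a white-box test suite must supply inputs that execute both:

```java
// Hypothetical pricing method with two paths through the code.
class Discount {
  static double price(double unitPrice, int quantity) {
    double total = unitPrice * quantity;
    if (quantity >= 10) {
      return total * 0.9; // path 1: bulk discount applied
    }
    return total;         // path 2: no discount
  }
}
```

A test with quantity 10 and a test with quantity 1 together cover both branches; a black-box test derived only from the specification might exercise just one of them.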

Static Verification

Verifying the conformance of a software system and its specification without executing the code

It involves the analysis of source text by humans or other software. Static verification also takes place on any documents produced as part of the software process. It can discover errors early in the software process.

Two main techniques

  • Walkthroughs - 4 to 6 people examine code and list perceived problems, then meet to discuss them
  • Program inspections (Fagan) - formal approach using checklists

More than 60% of program errors can be detected in software inspections. It is usually more cost effective than testing for defect detection at the unit and module level.

Stress testing

Some systems need to handle specified loads. Stress testing tests the failure behaviour of the system. It shows up defects that only occur under high volume.

Integration testing strategies

The order in which the subsystems are selected for testing and integration determines the testing strategy:

Big bang integration

Non incremental

Bottom up integration
  • Lowest layer subsystems tested individually
  • Next, the subsystems that call those subsystems are tested
  • Repeat until all subsystems are included in the testing
  • A special program called a test driver is required.
    • A test driver calls a subsystem and passes a test case to it
  • It tests the most important subsystem (UI) last (disadvantage)
  • Useful for integrating OO systems (advantage)
Top down integration
  • Tests the top-level subsystem first, then each of the subsystems it calls
  • Repeat until all subsystems are incorporated into the test
  • A special program called a test stub is needed to do the testing. It simulates the activity of a missing subsystem (one that is inaccessible or has not yet been developed).
  • Writing stubs is difficult, and a very large number of stubs may be required (disadvantages)
  • Test cases can be defined in terms of the functionality of the system (advantage)
Sandwich testing
  • Combines top down strategy with bottom up strategy.
  • Does not test all the individual subsystems thoroughly before integration (disadvantage)
  • Parallel top and bottom layer tests (advantage)
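A sketch of a test stub for top-down integration (all interface and class names here are invented for illustration): the subsystem under test calls a lower-level subsystem that does not exist yet, so a stub supplies canned answers in its place.

```java
// Interface of a lower-level subsystem that has not been built yet.
interface InventoryService {
  int stockLevel(String sku);
}

// Test stub: simulates the missing subsystem with a canned response.
class InventoryStub implements InventoryService {
  public int stockLevel(String sku) {
    return 42; // fixed answer for every query
  }
}

// Higher-level subsystem under test, wired to the stub.
class OrderChecker {
  private final InventoryService inventory;

  OrderChecker(InventoryService inventory) {
    this.inventory = inventory;
  }

  boolean canFulfil(String sku, int quantity) {
    return inventory.stockLevel(sku) >= quantity;
  }
}
```

A test driver for bottom-up integration is the mirror image: a small program that calls the real lower-level subsystem and passes test cases to it.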

System testing

  1. Functional testing (black box)
  2. Structure testing (white box)
  3. Performance testing
  4. Acceptance testing
  5. Installation testing

The impact of requirements on system testing is as follows:

  • The more explicit the requirements, the easier they are to test
  • The quality of the use cases determines the ease of functional testing
  • The quality of the subsystem decomposition determines the ease of structure testing
  • The quality of the nonfunctional requirements and constraints determines the ease of performance tests

Performance testing

  • Stress testing: stress the limits of the system
    • Max # of users
    • Peak demands
    • Extended operation
  • Timing testing: evaluate response times and the time to perform a function. How fast is 'real time' for the user? A response could be needed instantly, as in a vehicle's auto braking system, or a response time of a couple of seconds may be acceptable.
  • Volume testing: test what happens if large amounts of data are handled
  • Environmental test: test tolerances for heat, humidity, motion and portability
  • Configuration testing: test the various SW and HW configurations
  • Quality testing: test the reliability, maintainability and availability of the system
  • Compatibility test: test backward compatibility with existing systems
  • Recovery testing: test the system's response to the presence of errors or loss of data
  • Security test: try to violate security requirements
  • Human factors testing: test the user interface with the end user


Extra notes - Use cases

Use case

A use case is a description of a set of sequences of actions, including variants, that a system performs to yield an observable result of value to an actor.

Graphically, a use case is rendered as an ellipse.

Sequences of actions: "I do this", "System does that", to and fro from actor and system. An actor is a user fulfilling a particular role. The "observable result" is something functional that needs to be done to achieve the goal of the use case. Note that it is typically a "set" of sequences, not just a single sequence of actions.

A use case is typically documented as a numbered list. Normal flow and exception flows form part of the sets of sequences.

At some point, each use case will have a realisation. The realisation will be an interaction between software objects. This can be represented by a class diagram or a sequence diagram (where messages get passed between instances).

Chapter 7 - Requirements Engineering - Use cases

Gathering requirements

Many different perspectives:

  • Manager
  • Administrator
  • Operator
  • Director
  • Purchaser
  • etc...

Requirements engineering

The process of establishing the services that the customer requires from a system and the constraints under which it operates and is developed

A requirement is a feature that a system must have, or a constraint that it must satisfy, to be accepted by the client. It may range from a high-level abstract statement of a service or of a system constraint to a detailed mathematical functional specification.

Requirements engineering has two main activities.

  1. Requirements elicitation
  2. Requirements analysis

Difficulties with requirements elicitation

This requires collaboration between domain experts and technical experts (developers, clients, users). It is hard to anticipate the effects that the new system will have on the organisation. Different users may have different requirements and priorities. System end users and the organisations who pay for the system may have different requirements. There may also be natural language problems, where customers may not be able to express what they want in words, even though they know what they want.

How requirements are elicited


Interviews:

Allow for detailed, first-hand information gathering. This is expensive, however.

The structured interview can have different points of focus. There can be open discussion, but the structured discussion will get the main points that are required. A case study could be part of a focussed discussion, and looking at specific cases and scenarios that the system may be under.

The interviewer may give their idea of what they would do, and then ask the customer to critique the idea.


Questionnaires:

These are good if many people are involved, especially if dispersed across the organisation. They tend to have poor response rates, though.


Observation:

This is accurate if done well, in that you get to see first hand what the clients are actually doing. It is also expensive, however. You get an impression of what they currently do, and of the problems they are having that the software to be developed should solve.

Joint application design:

Workshops with clients facilitate consensus on requirements. Meetings may mean that clients are away from their workspace.

A customer may send some of their users to the place where the software is being developed to be part of the development team.

Typical activities in the requirements elicitation process

  • Identify actors
    • Types of user who interact with the system
  • Identify scenarios
    • Concrete examples of how the system will be used
    • Used to communicate with users about the requirements for the system
  • Identify use cases
    • Description of all possible behaviour in the system
    • These are abstractions unlike scenarios which are specific examples
  • Refine use cases
    • Ensure completeness including exceptions/error paths
  • Identify relationships among use cases
    • Factor out common functionality
    • Find dependencies
  • Identify non functional requirements


Requirements definition

A statement in natural language plus diagrams of the services the system provides and its operational constraints. Written for customers.

Requirements specification

A structured document setting out detailed descriptions of the system services. Written as a contract between client and contractor. For customers and developers.

Requirements documents

There are many different approaches to this. Typically grouped by feature. Requirements definition describes the requirements at a high level. The requirements specification gives the details for each function.

  • Name
  • Pre-conditions
  • Inputs
  • Source
  • Function description
  • Outputs
  • Post-conditions

Remember to look up the IEEE requirements engineering standard: ISO/IEC/IEEE 29148:2011

UML - Use case modelling

Use cases describe system behaviour that is visible to a user or to another system. Use cases are initiated by users or systems called actors. An actor represents anything that needs to interact with the system to exchange information - it may be a role a user plays or another system. In the diagram, actors are typically kept around the edges of the diagram.

Each use case is a significant set of sequences of interactions between the actor and system. It yields an observable result of value to the actor.

Use case diagrams represent an overview of the actors of the system and the behaviour that the system provides for them. The actor initiates system activity, a use case, for the purposes of completing some task.

If an external party initiates the input, it is considered an actor:

  • Potential human users of the system. Identify their roles
  • Interactions with external systems

Use cases do not describe the internal details of the software system, even though they are likely to influence the form it eventually takes. They impose constraints on designers, who have to deliver their functionality.

Describing use cases

Describe the course of events. Cover all possible events, especially where things can go wrong. The UML does not define a standard way to describe use cases in text form.


  • Name (short verb phrase)
  • Summary (description)
  • Actors
    • Primary actor (the one who initiates the use case)
    • All others involved
  • Triggers - events that start the use case
  • Pre-conditions - must be true before the use case can start
  • Post-conditions - will be true after the use case has finished
    • Minimum guarantees (all cases)
    • Success guarantees (normally)
  • Flow of events (as a numbered list)
  • Alternative or exceptional flow of events - usually "At [point number] x... "
  • Extension points - where another use case can take over
  • Inclusions - summarises use cases that are included in this one.
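The fields above can be illustrated with a short, hypothetical use case for a cash machine (the names, numbering and details are invented for illustration):

```
Name: Withdraw Cash
Summary: A customer withdraws money from their account.
Actors: Customer (primary), Bank system
Triggers: Customer inserts their card
Pre-conditions: The machine has cash and a network connection
Post-conditions: Minimum: the card is returned.
                 Success: cash is dispensed and the account is debited.
Flow of events:
  1. Customer inserts card
  2. System prompts for PIN
  3. Customer enters PIN
  4. Customer selects an amount
  5. System dispenses cash and debits the account
Alternative flows:
  At 3, if the PIN is entered wrongly three times, the card is retained
```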

Use case inclusion

Used where two use cases have shared behaviour. (Note that this is not inheritance, it is inclusion.)