Software project management
Function point analysis (FPA)
The numbers of the following system components are considered:
- External inputs - eg. input files of transactions.
- External outputs - eg. output files of reports, messages.
- User interactions / enquiries - eg. menu selection, queries.
- Internal or logical files used by the system.
- External or interface files shared with other applications.
In each case, depending on the number of field types that are associated with each component, and the variety of file types that these elements refer to, these components are rated as simple, average or complex.
A weight is associated with each of the ratings for each component type. The number of external inputs is multiplied by the selected weighting for that system component - likewise for external outputs and the other system components. The weighted totals are added to give the Unadjusted Function Points (UFP):
UFP = Σ Input(wi) + Σ Output(wo) + Σ Enquiry(we) + Σ Logical File(wl) + Σ Interface File(wif)
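As a sketch, the UFP sum can be coded directly. The weights below are the commonly published IFPUG simple/average/complex values; the component counts are invented purely for illustration.

```python
# Unadjusted Function Points: weighted sum of the counted components.
# The weights are the commonly published simple/average/complex values;
# the counts further down are made-up illustrative figures.
WEIGHTS = {
    "external_input":  {"simple": 3, "average": 4,  "complex": 6},
    "external_output": {"simple": 4, "average": 5,  "complex": 7},
    "enquiry":         {"simple": 3, "average": 4,  "complex": 6},
    "logical_file":    {"simple": 7, "average": 10, "complex": 15},
    "interface_file":  {"simple": 5, "average": 7,  "complex": 10},
}

def ufp(counts):
    """counts maps (component, rating) -> number of occurrences."""
    return sum(WEIGHTS[comp][rating] * n
               for (comp, rating), n in counts.items())

counts = {
    ("external_input", "simple"): 5,    # 5 * 3  = 15
    ("external_output", "average"): 4,  # 4 * 5  = 20
    ("enquiry", "simple"): 3,           # 3 * 3  = 9
    ("logical_file", "average"): 2,     # 2 * 10 = 20
    ("interface_file", "complex"): 1,   # 1 * 10 = 10
}
print(ufp(counts))  # 74
```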
The UFP value is then adjusted to take account of the type of application. This adjustment is made by multiplying the UFP value by a Technical Complexity Factor (TCF). As preparation for calculating TCF, 14 general system characteristics (GSCs) are scored for degree of influence from zero (no influence) to five (strong influence):
- Data communications.
- Distributed functions.
- Performance.
- Heavily used configuration.
- Transaction rate.
- Online data entry.
- End-user efficiency.
- Online update.
- Complex processing.
- Reusability.
- Installation ease.
- Operational ease.
- Multiple sites.
- Facilitate change.
TCF is then calculated as follows:
TCF = 0.65 + 0.01 * DI
where DI is the total degree of influence from the 14 scored characteristics.
The top score from the GSCs is 14 * 5 = 70. A normal or moderate score (where collectively the GSCs are midway between 'no influence' and 'strong influence') would be 14 * 2.5 = 35.
The formula is constructed so that when the GSCs total 35 (the moderate case), the technical complexity factor is one, and therefore has no effect as a multiplier on the UFP. Multiplying DI by 0.01 and then adding 0.65 achieves exactly this. The TCF is also known as the Value Adjustment Factor (VAF).
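A minimal sketch of the adjustment, using invented GSC scores that happen to total 35 - the moderate case, so TCF comes out at exactly 1 and leaves the UFP unchanged:

```python
# Technical Complexity Factor from the 14 GSC scores (each 0-5).
# The scores and the UFP figure are made up for illustration.
gsc_scores = [3, 2, 4, 1, 0, 5, 2, 3, 1, 2, 4, 3, 2, 3]
assert len(gsc_scores) == 14

di = sum(gsc_scores)        # total degree of influence
tcf = 0.65 + 0.01 * di      # ranges from 0.65 (DI=0) to 1.35 (DI=70)

ufp = 74                    # unadjusted function points (assumed)
adjusted_fp = ufp * tcf
print(f"DI = {di}, TCF = {tcf:.2f}, adjusted FP = {adjusted_fp:.1f}")
# DI = 35, TCF = 1.00, adjusted FP = 74.0
```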
Difficulties with function points
Function points, and particularly the scores given to the GSCs, are very subjective. They cannot be counted automatically and depend on the assessment of an analyst. There are only three complexity levels for weighting the functionality of the main system components. The approach has to be calibrated or adjusted for different programming tasks and environments. Ultimately, it is experience that will determine how well the approach is working.
COCOMO - algorithmic cost modelling
Cost is estimated as a mathematical function of project, product and process attributes. The function is derived from a study of historical costing data. The most commonly used product attribute for cost estimation is lines of code (LOC). One widely used costing model is the Constructive Cost Model (COCOMO).
COCOMO exists in three forms:
- Basic - this gives a rough estimate based on product attributes.
- Intermediate - modifies the basic estimate using project and process attributes.
- Advanced - estimates project phases and parts separately.
The class of project affects the multipliers and exponents used in basic COCOMO. There are three recognised classes of project:
- Simple (or organic) - small teams; familiar environment; well-understood applications; no difficult non-functional requirements.
- Moderate (or semi-detached) - project team may have a mix of experience; system may have more significant non-functional constraints; organisation may have less familiarity with the application.
- Embedded - hardware / software systems; tight constraints; unusual for team to have deep application experience.
Formulae for basic COCOMO
E = a * KDSI^b
D = 2.5 * E^c
E - Effort in person-months.
D - Development time in months.
KDSI - Thousands of delivered source instructions.
a, b and c - Constants based on project class and historical data.
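The formulae can be sketched as follows. The (a, b, c) constants are the values commonly quoted from Boehm's original data set - treat them as indicative, not definitive - and the 32 KDSI figure is an arbitrary example.

```python
# Basic COCOMO: E = a * KDSI**b (person-months), D = 2.5 * E**c (months).
# The (a, b, c) constants per project class are the commonly quoted
# values from Boehm's historical data; they are indicative only.
PARAMS = {
    "organic":       (2.4, 1.05, 0.38),
    "semi-detached": (3.0, 1.12, 0.35),
    "embedded":      (3.6, 1.20, 0.32),
}

def basic_cocomo(kdsi, project_class="organic"):
    a, b, c = PARAMS[project_class]
    effort = a * kdsi ** b          # person-months
    duration = 2.5 * effort ** c    # elapsed months
    return effort, duration

e, d = basic_cocomo(32, "organic")
print(f"effort = {e:.1f} pm, duration = {d:.1f} months")
# effort = 91.3 pm, duration = 13.9 months
```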
The multipliers and exponents for basic COCOMO's formulae change according to the class of the project.
Intermediate COCOMO takes the Basic COCOMO formula as its starting point; the coefficient a is equal to 3.2, 3.0 and 2.8 for organic, semi-detached and embedded projects respectively. Additionally, intermediate COCOMO identifies personnel, product, computer and project attributes which affect cost. The basic cost is adjusted by attribute multipliers:
- Product attributes
- Required software reliability - RELY
- Database size - DATA
- Product complexity - CPLX
- Computer attributes
- Execution time constraints - TIME
- Storage constraints - STOR
- VM volatility - VIRT
- Computer turnaround time - TURN
- Personnel attributes
- Analyst capability - ACAP
- Programmer capability - PCAP
- Applications experience - AEXP
- VM experience - VEXP
- Programming language experience - LEXP
- Project attributes
- Modern programming practices - MODP
- Software tools - TOOL
- Required development schedule - SCED
Some attributes may be more or less significant between different projects and organisations.
A faster CPU can be used along with more memory to reduce the TIME and STOR attribute multipliers. Some software tools can be bought to reduce the TIME and TOOL multipliers (although more VM experience would be needed as a result).
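The adjustment itself is just the nominal estimate scaled by the product of the attribute multipliers (the effort adjustment factor). The multiplier values below are invented ratings for illustration, not Boehm's published table:

```python
# Intermediate COCOMO sketch: nominal effort is scaled by the product
# of the attribute multipliers (EAF).  Multiplier values are invented.
def intermediate_cocomo(kdsi, a, b, multipliers):
    eaf = 1.0
    for m in multipliers.values():
        eaf *= m                       # >1 inflates effort, <1 reduces it
    return a * kdsi ** b * eaf

# Hypothetical ratings: high reliability and complexity push effort up;
# capable analysts and good tools pull it back down.
multipliers = {"RELY": 1.15, "CPLX": 1.30, "ACAP": 0.86, "TOOL": 0.91}
effort = intermediate_cocomo(32, a=3.2, b=1.05, multipliers=multipliers)
print(f"{effort:.1f} person-months")
```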
Work breakdown structure
The hierarchical representation of all the tasks in a project is called the work breakdown structure (WBS).
There are two major philosophies:
Activity-orientated decomposition ("Functional decomposition")
- Write the book, get it reviewed, do the suggested changes, get it published.
Result-orientated ("Object orientated decomposition")
- Chapter 1, chapter 2, chapter 3.
WBSs break the project down into tasks.
Gantt charts
Lines between blocks indicate dependence on the previous task. Don't forget to draw the critical path. The critical path is the path along the project where there is no spare time (ie. the float is zero throughout).
ES = Earliest start
D = Duration
EF = Earliest finish
LS = Latest start
F = Float
LF = Latest finish
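These quantities can be computed with a forward pass (ES, EF) and a backward pass (LS, LF) over the activity network; the tasks, durations and dependencies below are made up for illustration.

```python
# Critical path sketch: forward pass gives ES/EF, backward pass gives
# LS/LF, and float = LS - ES.  Activities and durations are invented.
# Each task maps to (duration, list of predecessor tasks).
tasks = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}
order = ["A", "B", "C", "D"]           # topological order

es, ef = {}, {}
for t in order:                         # forward pass
    dur, preds = tasks[t]
    es[t] = max((ef[p] for p in preds), default=0)
    ef[t] = es[t] + dur

project_end = max(ef.values())
ls, lf = {}, {}
for t in reversed(order):               # backward pass
    dur, _ = tasks[t]
    succs = [s for s in order if t in tasks[s][1]]
    lf[t] = min((ls[s] for s in succs), default=project_end)
    ls[t] = lf[t] - dur

floats = {t: ls[t] - es[t] for t in order}
critical_path = [t for t in order if floats[t] == 0]
print(critical_path)  # ['A', 'B', 'D'] - zero float throughout
```

Task C has a float of 3: it can slip by up to 3 time units without delaying the project, whereas any slip on the critical path moves the end date.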
Defect removal efficiency
DRE = E / (E + D)
where E = number of errors before delivery and D = number of defects after delivery. This is a measure of how good the team is at quality assurance. The DRE should ideally be 1 (ie. all errors detected and resolved, and there are no reported defects after delivery).
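A trivial sketch with invented counts:

```python
# Defect removal efficiency: the fraction of all defects caught before
# delivery.  The counts below are illustrative.
def dre(errors_before, defects_after):
    return errors_before / (errors_before + defects_after)

print(dre(95, 5))  # 95 caught pre-delivery, 5 escaped -> 0.95
```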
Defects per kLOC
C = number of defects / kLOC
where defects are a lack of conformance to a requirement. This is a measure of correctness.
Integrity is a system's ability to withstand attacks on its security.
I = Σ[1 - threat * (1 - security)]
where threat is the probability that an attack of a particular type will occur at a given time, and security is the probability that an attack of that same type will be repelled.
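A minimal sketch of the integrity sum over threat types; the threat and security probabilities are invented:

```python
# Integrity: sum over threat types of 1 - threat * (1 - security),
# following the formula above.  Probabilities are hypothetical.
threats = [
    # (probability attack occurs, probability it is repelled)
    (0.25, 0.95),
    (0.50, 0.99),
]

integrity = sum(1 - threat * (1 - security)
                for threat, security in threats)
print(round(integrity, 4))  # 0.9875 + 0.995 = 1.9825
```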
This can be represented with a fan-in and fan-out structural chart. A high fan-in (the number of calling functions) indicates high coupling; a high fan-out (the number of calls made) indicates high coupling complexity. The length is any measure of programme size, such as lines of code.
Complexity = Length * (fan-in * fan-out)^2
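The formula can be sketched with made-up module figures:

```python
# Information-flow complexity per the formula above:
# length * (fan_in * fan_out) ** 2.  Module figures are invented.
def flow_complexity(length_loc, fan_in, fan_out):
    return length_loc * (fan_in * fan_out) ** 2

print(flow_complexity(120, 3, 4))  # 120 * (3*4)**2 = 17280
```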
- Cyclomatic complexity - the complexity of program control
- Length of identifiers
- Depth of conditional nesting
- Supporting documentation - the Gunning Fog Index, based on the length of sentences and the proportion of words with three or more syllables.