Chapter 12

Software design quality

Design strategies

Functional design

The system is designed from a functional viewpoint, starting with high-level views and refining them into more detail. Examples include structured design, SSADM, step-wise refinement, Jackson Structured Programming, etc.

Object-oriented design

The system is viewed as a collection of interacting objects. Objects may be instances of an object class, and they communicate by calling methods.
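As a minimal sketch of this idea (the class and method names here are invented for illustration), interacting objects in Python might look like:

```python
class Account:
    """An instance of an object class; state is held in attributes."""

    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        # Other objects communicate with an Account by calling its methods.
        self.balance += amount


class Teller:
    def pay_in(self, account, amount):
        # The Teller object interacts with an Account only via its methods,
        # not by touching its internal state directly.
        account.deposit(amount)


acct = Account(100)
Teller().pay_in(acct, 50)
print(acct.balance)  # 150
```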


Design quality

A good design may be the most efficient, the cheapest, the most maintainable, the most reliable, etc.

A design should be modular, partitioned into modules that each have a well-defined function. It should have a distinct and separable representation of data and procedure, it should lead to modules with independent functions, and it should be derived by a repeatable method driven by the requirements specification.

A module in reality could be a considerable chunk of the software. On the other end of the scale, a module can be a single class.


Abstraction

Abstraction permits one to concentrate on a problem at some level of generalisation without regard to irrelevant low-level details.

Abstraction 1

Software will include a computer graphics interface which will enable a draftsperson to see a drawing and to communicate with it via a mouse. All line and curve drawing, geometric computations, etc. will be performed by the CAD software. Drawings will be stored in a drawings file.

Abstraction 2

CAD Software tasks: user interaction task, 2D drawing, graphics display task, drawing file management task; end.

This is procedural abstraction
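The second abstraction can be sketched as a top-level procedure whose steps are themselves named procedures (all names and return values here are invented for illustration):

```python
def user_interaction():
    return "input handled"

def draw_2d():
    return "drawing updated"

def display_graphics():
    return "display refreshed"

def manage_drawing_file():
    return "drawing saved"

def cad_software():
    # Procedural abstraction: the high-level task has a name, and the
    # low-level details are hidden inside the procedures it calls.
    return [user_interaction(), draw_2d(),
            display_graphics(), manage_drawing_file()]

print(cad_software())
```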


Modularity

Software is divided into separately named, addressable modules. The complexity of a program depends on its modularity.

Let C(x) be a measure of complexity, let p1 and p2 be problems, and let E(x) be a measure of the effort needed to solve a problem.

If C(p1) > C(p2) then it follows that E(p1) > E(p2). In addition C(p1 + p2) > C(p1) + C(p2). Therefore, E(p1 + p2) > E(p1) + E(p2).
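A toy model makes this argument for modularity concrete. If effort grows faster than linearly with problem size (quadratic growth is assumed here purely for illustration), then solving two sub-problems separately costs less than solving the combined problem:

```python
def effort(size):
    # Assumed toy model: effort grows quadratically with problem size.
    return size ** 2

combined = effort(10 + 10)          # tackling p1 and p2 as one problem
separate = effort(10) + effort(10)  # tackling p1 and p2 as separate modules

print(combined, separate)  # 400 200 -- so E(p1 + p2) > E(p1) + E(p2)
```

In practice the inequality is the divide-and-conquer argument for modularity, though module interfaces add some effort of their own, so decomposition cannot be pushed indefinitely.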


Coupling and cohesion

In general, high cohesion is good and high coupling is bad. Cohesion is how well the parts of a module fit together; coupling is how dependent modules are on each other.

In an ideal system, the goal is high cohesion and low coupling: components interact with only as many other components as they need to, with as little data transfer between them as possible.


Cohesion

Interaction within a module is a measure of how well that module 'fits together'. With high cohesion, the module implements a single logical entity or function, so when a change has to be made, it is localised within a single module.

This is good because it brings ease of maintenance, the possibility of reuse, a module name that clearly expresses what the module does, and reduced interface complexity.

Cohesion Levels

Coincidental (weak): Parts of a module are simply bundled together. The module is difficult to maintain and offers little or no reusability; modularity brings little advantage in these circumstances.

Logical association (weak): Elements which perform similar functions are grouped. The interface is difficult to understand. The module is difficult to maintain if code for individual tasks is related or interconnected.

Temporal (weak): Elements which are activated at the same time are grouped. The tasks may be unrelated, so if changes are made, other modules may also have to be changed. The module's name may not adequately describe its purpose.

Procedural (weak): The elements in a module constitute a single control sequence. This is better than temporal cohesion, since the module's operations are at least logically related. However, reusability is limited because the control sequence may relate to a very particular set of circumstances, the sequence of interaction between the elements may be unclear, and, since the elements do not necessarily deal with the same data, the broader system's interface to the module may be difficult to maintain.

Communicational (medium): All the elements of a module operate on the same input or produce the same output. Better than procedural since the module has a clear interface to the broader system, and there is a clear interface between the elements within the module.

Sequential (medium): The output from one part of a module is the input to another part.

Functional (strong): Each element of a module is necessary for the execution of a single function.

Object (strong): Each method within the module (which is a software object) provides functionality which allows object attributes to be modified or inspected.
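To contrast the weakest and strongest levels, here is a small sketch (all names invented for illustration): a coincidentally cohesive 'utilities' module bundles unrelated tasks, while a functionally cohesive module does exactly one thing.

```python
# Coincidental cohesion: unrelated operations bundled into one module.
class MiscUtils:
    def format_date(self, d): ...
    def send_email(self, addr): ...
    def compute_tax(self, amount): ...


# Functional cohesion: every element is necessary for a single function.
def compute_tax(amount, rate=0.2):
    """Every line here serves the computation of the tax, and nothing else."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return amount * rate


print(compute_tax(100))  # 20.0
```

The second module is easy to name, easy to reuse, and a change to tax rules is localised to it alone.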


Coupling

Coupling is a measure of the strength of the interconnections between system modules. Loose coupling means that changes to one module are unlikely to affect other modules. Shared variables or the exchange of control information lead to tight coupling. Loose coupling can be achieved by decentralising state and by having modules communicate via parameters or message passing.

Bad practice

Stamp coupling: More data than necessary is passed via arguments.

Control coupling: A flag is passed from one module to another, affecting the functionality of the second module. The modules are not independent, so reuse is limited, and the calling module must 'know' how the called module works.

Common coupling: Two or more modules access the same global or shared data. The resulting code is difficult to read, and there may be side-effects: has another module changed a variable? Programs are costly to maintain, because a change to a global variable means the whole program must be searched to find its effects. Reusability is poor, and there are security problems, since each module has access to more data than it needs.

Content coupling: The calling module can directly modify or refer to a data element defined internally in the called module. This makes maintenance difficult, as the program will be difficult to understand.
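Of these bad practices, control coupling is perhaps the easiest to see in code. In this sketch (all names invented), passing a mode flag forces the caller to know the callee's internals, while separate, well-named operations remove the coupling:

```python
# Control coupling: the flag steers the called module's behaviour, so
# the caller must 'know' how print_or_save works internally.
def print_or_save(report, mode):
    if mode == "print":
        return f"printing {report}"
    else:
        return f"saving {report}"


# Decoupled alternative: two independent, individually reusable operations.
def print_report(report):
    return f"printing {report}"

def save_report(report):
    return f"saving {report}"


print(print_report("Q1"))  # printing Q1
```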

UML to design

Analysis / early design produced different models: a class diagram showing the classes in the system, and a dynamic model comprising state diagrams, sequence diagrams, and use cases.

Detailed design is more concerned with designing algorithms or methods. In designing methods, a number of points may be considered: Which of the candidate algorithms is cheapest to implement? Which data structures might the method manipulate or create? Will new classes (e.g. for intermediate results) have to be defined to support the method?

Detailed design may also involve design optimisation, which aims to refine methods for efficiency; adjustment of inheritance, which aims to abstract common behaviour out of groups of classes (i.e. inheritance opportunities missed in the earlier class diagrams); and information hiding, which aims to finesse the design so that classes are black boxes whose external interfaces are public but whose internal details are hidden. Objects' states may then be changed without unnecessarily affecting other objects: attributes that should be accessible to other objects are public, and those which should not be are private.
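Information hiding can be sketched in Python (the class and attribute names are invented for illustration) with a private attribute behind a public interface:

```python
class Thermostat:
    def __init__(self):
        self._target = 20  # internal detail, hidden behind the interface

    def set_target(self, celsius):
        # The public method validates changes, so other objects cannot
        # put the Thermostat into an invalid state.
        if not -10 <= celsius <= 35:
            raise ValueError("target out of range")
        self._target = celsius

    @property
    def target(self):
        # Read-only public view of the hidden attribute.
        return self._target


t = Thermostat()
t.set_target(22)
print(t.target)  # 22
```

Because callers only use `set_target` and `target`, the internal representation can change without affecting any other object.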

Systems and subsystems


Division of a system into subsystems increases understanding, increases reusability, and allows responsibility to be assigned to different members of a development team.

Software architecture

In the case of large software systems, the architecture describes the structure of a system, its major modules / subsystems and how they communicate.

Abstract machine model

Used to model the interfacing of subsystems. It organises the system into a set of layers (or abstract machines) each of which provides a set of services.

When a layer's interface changes, only the adjacent layer is affected. A version management system is a typical example of a layered system.
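A minimal sketch of layering (the layer and function names are invented): each layer calls only the services of the layer directly beneath it, so a change to one layer's interface affects only its neighbour.

```python
# Bottom layer: object storage.
def store(repo, name, data):
    repo[name] = data

# Middle layer: version management, built only on the storage layer.
def commit(repo, name, data, version):
    store(repo, f"{name}@v{version}", data)

# Top layer: the user-facing service, built only on version management.
def save_document(repo, name, data):
    version = sum(1 for k in repo if k.startswith(name + "@")) + 1
    commit(repo, name, data, version)
    return version


repo = {}
print(save_document(repo, "spec", "draft"))  # 1
print(save_document(repo, "spec", "final"))  # 2
```

If the storage layer were changed (say, to files on disk), only `commit` would need updating; `save_document` and its callers would be untouched.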


Repository model

  • Advantages
    • Efficient way to share large amounts of data
    • Subsystems need not be concerned with how data is produced
    • Centralised management
  • Disadvantages
    • Subsystems must agree on a repository data model, which is inevitably a compromise
    • Data evolution is difficult and expensive


Client server model


  • Advantages
    • Distribution of data is straightforward
    • Makes effective use of networked systems
    • Easy to add new servers or upgrade existing servers
  • Disadvantages
    • No shared data model, so subsystems may use different data organisation
    • Data interchange may be inefficient
    • Redundant management in each server
    • No central register of names and services - it may be hard to find out what servers and services are available

Thin and fat clients


In a thin client model, all of the application processing and data management is carried out on the server. The client is simply responsible for running the presentation software.


In a fat client model, the server is only responsible for data management. The software on the client implements the application logic and the interactions with the system user.

Three-tier architectures

Each of the application processing layers may execute on a separate processor. This allows for better performance than a thin-client approach and is simpler to manage than a fat-client approach. It is also a more scalable architecture: as demands increase, extra servers can be added.


An internet banking system


Control models - centralised control

A control subsystem takes responsibility for managing the execution of other subsystems. The call-return model is a top-down subroutine model in which control starts at the top of a subroutine hierarchy and moves downwards.


Event-driven systems

Broadcast driven

This is where different subsystems are distributed across a network. The subsystems register an interest in specific events; when one of these events occurs, control is transferred to a subsystem that can handle it.
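A broadcast model can be sketched with a simple event registry (a minimal in-process stand-in for networked subsystems; all names are invented): subsystems register interest in event types, and control transfers to every registered handler when an event occurs.

```python
handlers = {}

def register(event_type, handler):
    # A subsystem registers its interest in a specific event type.
    handlers.setdefault(event_type, []).append(handler)

def broadcast(event_type, payload):
    # Control is transferred to every subsystem that registered interest.
    return [handler(payload) for handler in handlers.get(event_type, [])]


register("file_saved", lambda name: f"indexer updated for {name}")
register("file_saved", lambda name: f"backup scheduled for {name}")

print(broadcast("file_saved", "report.txt"))
```

Note that the broadcaster does not know which subsystems will respond, which keeps the subsystems loosely coupled.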


Interrupt driven

Real-time systems where fast response is essential. Known interrupt types, each with a handler, are defined. Each type is associated with a memory location, and a hardware switch causes transfer to its handler.


Distributed systems architectures

There need be no distinction between clients and servers: any object on the system may both provide and use services. This contrasts with client-server architectures, in which servers that provide services are treated differently from the clients that use them.

Distributed object architecture

There is no distinction between clients and servers. Each distributable entity is an object that provides services to other objects and receives services from other objects. Object communication is through a middleware system called an object request broker (software bus). However, this is more complex to design than client-server systems.



A distributed object architecture allows the system designer to delay decisions on where and how services should be provided.

It is a very open system architecture that allows new resources to be added to it as required.

The system is flexible and scalable.

It is possible to reconfigure the system dynamically with objects migrating across the network as required.

