Prepared By:
Lecturer CS/IT
MCS, M.Phil (CS)
Software Engineering
CMP-3310
Software Engineering CMP-2540
By: Muhammad Shahid Azeem M Phil. (CS) [email protected]
Lecturer CS/IT @www.risingeducation.com Page 2
Dedication
All glories, praises, and gratitude to Almighty Allah Pak, who blessed
us with a super, unequalled processor! Brain…
I dedicate all of my efforts to my students who gave me an urge and
inspiration to work more.
I also dedicate this piece of effort to my family members who always
support me to move on. Especially my father (Ch. Ali Muhammad),
who always pushed me and made me stand where I am today.
Muhammad Shahid Azeem
Author
Course Syllabus:
1. The Nature of Software, Unique Nature of WebApps, Software Engineering, The Software
Process, Software Engineering Practice, Software Myths. [TB1: Ch. 1]
2. Generic Process Models: Framework Activity, Task Set, Process Patterns, Process
Improvement, And CMM, Prescriptive Process Models: Waterfall Model, Incremental Process
Model, And Evolutionary Process Model. [TB1: Ch. 2]
3. Specialized Process Models: Component Based Development, The Formal Methods Models,
Agile Development. [TB1: Ch. 2-3]
4. Introduction to Systems Analysis and Design, Business Information Systems, Information
System Components, Types of Information Systems, Evaluating Software, Make or Buy
Decision.
5. Introduction to SDLC, SDLC Phases, System Planning, Preliminary Investigation, SWOT
Analysis. [TB1: Ch. 2]
6. The Importance of Strategic Planning, Information Systems Projects, Evaluation of
Systems Requests, Preliminary Investigation, Systems Analysis, Requirements Modeling, Fact-
Finding Techniques. [TB1: Ch. 2-3]
7. Requirements Engineering, Establishing the Groundwork, Eliciting Requirements,
Developing Use Cases, Building the Requirements Model. [TB1: Ch. 5]
8. Requirements Modeling Strategies, Difference between Structured Analysis and Object
Oriented Analysis; Difference between FDD Diagrams & UML Diagrams. [TB2:Ch. 3]
9. Data & Process Modeling, Diagrams: Data Flow, Context, Conventions, Detailed Level
DFDs: Diagram 0, Leveling, Balancing, Logical Versus Physical Models. [TB2: Ch. 4]
10. Design within the Context of Software Engineering, The Design Process, Design
Concepts, Design Models: Data Design Elements. [TB1: Ch. 8]
11. Architecture Design Elements, Interface Design Elements, Component-Level Design
Elements, Deployment Design Elements. [TB1: Ch. 8]
12. System Architecture, Architectural Styles, User Interface Design: The Golden Rules, User
Interface Analysis and Design, WebApps Interface Design. [TB1: Ch. 9-11]
13. Software Quality Assurance: Background Issues, Elements of Software Quality Assurance,
Software Testing Strategies, Strategic Issues, Test Strategies for Conventional Software. [TB1:
Ch.16-17]
14. Validation Testing, System Testing, Internal and External View of Testing: White Box Testing and Black Box Testing Techniques. [TB1: Ch. 17-18]
15. Introduction to Project Management, Project Scheduling: Gantt Chart, Risk Management:
Proactive versus Reactive Risk Strategies, Software Risks, Maintenance and Reengineering:
Software Maintenance, Software Reengineering. [TB1: Ch. 28-29]
Chapter 01
Software and Software Engineering
Software:
Software is a well-defined set of instructions (computer programs) that when
executed provide desired features, function, and performance; some kind of data structures
that enable the programs to adequately manipulate information, and a descriptive
information in both hard copy and virtual forms that describes the operation and use of the
programs.
Some of the constituted items of software are described below.
Program: The program or code itself is definitely included in the software.
Data: The data on which the program operates is also considered as part of the
software.
Documentation: Another very important thing that most of us forget is
documentation. All the documents related to the software are also considered as part
of the software.
Characteristics of Software:
1. Software is developed or engineered; it is not manufactured in the classical sense.
2. Software doesn’t “wear out.”
3. Although the industry is moving toward component-based construction, most
software continues to be custom built.
Software Application Domains
Today, seven broad categories of computer software present continuing challenges for
software engineers:
System software—System Software is a collection of programs written to service other
programs. Some system software (e.g., compilers, editors, and file management utilities)
processes complex but determinate information structures. Other systems applications
(e.g., operating system components, drivers, networking software, telecommunications
processors) process largely indeterminate data. In either case, the systems software area is
characterized by heavy interaction with computer hardware; heavy usage by multiple users;
concurrent operation that requires scheduling, resource sharing, and sophisticated process
management; complex data structures; and multiple external interfaces.
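The concurrent operation and resource sharing described above can be sketched in a few lines. This is only an illustrative Python sketch (the counter and worker are invented for the example, and real system software would sit far closer to the hardware): several threads share one value, and a lock provides the coordinated access the paragraph mentions.

```python
import threading

counter = 0
lock = threading.Lock()  # shared resource requires coordinated access

def worker():
    """Each thread increments the shared counter 10,000 times."""
    global counter
    for _ in range(10_000):
        with lock:  # without the lock, interleaved updates could be lost
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # -> 40000 (4 threads x 10,000 increments each)
```

The lock makes the read-modify-write of `counter` atomic; scheduling which thread runs next is exactly the kind of process management the operating system performs on the software's behalf.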
Application software—stand-alone programs that solve a specific business need.
Applications in this area process business or technical data in a way that facilitates business
operations or management/technical decision making. In addition to conventional data
processing applications, application software is used to control business functions in real
time (e.g., point-of-sale transaction processing, real-time manufacturing process control).
Engineering/scientific software—has been characterized by “number crunching”
algorithms. Applications range from astronomy to volcanology, from automotive stress
analysis to space shuttle orbital dynamics, and from molecular biology to automated
manufacturing. However, modern applications within the engineering/scientific area are
moving away from conventional numerical algorithms. Computer-aided design, system
simulation, and other interactive applications have begun to take on real-time and even
system software characteristics.
Embedded software—resides within a product or system and is used to implement and
control features and functions for the end user and for the system itself. Embedded software
can perform limited and esoteric functions (e.g., key pad control for a microwave oven) or
provide significant function and control capability (e.g., digital functions in an automobile
such as fuel control, dashboard displays, and braking systems).
Product-line software—designed to provide a specific capability for use by many different
customers. Product-line software can focus on a limited and esoteric marketplace (e.g.,
inventory control products) or address mass consumer markets (e.g., word processing,
spreadsheets, computer graphics, multimedia, entertainment, database management, and
personal and business financial applications).
Web applications— also called “WebApps,” this network-centric software category spans a
wide array of applications. In their simplest form, WebApps can be little more than a set of
linked hypertext files that present information using text and limited graphics. However, as
Web 2.0 emerges, WebApps are evolving into sophisticated computing environments that
not only provide stand-alone features, computing functions, and content to the end user, but
also are integrated with corporate databases and business applications.
Artificial intelligence software—makes use of nonnumerical algorithms to solve complex
problems that are not amenable to computation or straightforward analysis. Applications
within this area include robotics, expert systems, pattern recognition (image and voice),
artificial neural networks, theorem proving, and game playing.
The Nature of Software:
Today, software plays a dual role. It is a product, and at the same time, the
vehicle for delivering a product. As a product, it delivers the computing potential embodied
by computer hardware or more broadly, by a network of computers that are accessible by
local hardware. Whether it resides within a mobile phone or operates inside a mainframe
computer, software is information transformer, producing, managing, acquiring, modifying,
displaying, or transmitting information that can be as simple as a single bit or as complex as
a multimedia presentation derived from data acquired from dozens of independent sources.
As the vehicle used to deliver the product, software acts as the basis for the control of the
computer (operating systems), the communication of information (networks), and the
creation and control of other programs (software tools and environments).
Software delivers the most important product of our time—information. It
transforms personal data so that the data can be more useful in a local context; it manages
business information to enhance competitiveness; it provides a gateway to worldwide
information networks (e.g., the Internet), and provides the means for acquiring information
in all of its forms.
The role of computer software has undergone significant change over the last
half-century. Dramatic improvements in hardware performance, profound changes in
computing architectures, vast increases in memory and storage capacity, and a wide variety
of exotic input and output options, have all precipitated more sophisticated and complex
computer-based systems. Sophistication and complexity can produce dazzling results when a
system succeeds, but they can also pose huge problems for those who must build complex
systems.
The Unique Nature of WebApps:
In the early days of the World Wide Web (circa 1990 to 1995), websites
consisted of little more than a set of linked hypertext files that presented information using
text and limited graphics. As time passed, the augmentation of HTML by development tools (e.g.,
XML, Java) enabled Web engineers to provide computing capability along with
informational content. Web-based systems and applications were born. Today, WebApps
have evolved into sophisticated computing tools that not only provide stand-alone function
to the end user, but also have been integrated with corporate databases and business
applications. WebApps are one of a number of distinct software categories. And yet, it can
be argued that WebApps are different. Powell suggests that Web-based systems and
applications “involve a mixture between print publishing and software development,
between marketing and computing, between internal communications and external relations,
and between art and technology.”
The following attributes are encountered in the vast majority of WebApps.
Network intensiveness: A WebApp resides on a network and must serve the needs of a
diverse community of clients. The network may enable worldwide access and
communication (i.e., the Internet) or more limited access and communication (e.g., a
corporate Intranet).
Concurrency: A large number of users may access the WebApp at one time. In many cases,
the patterns of usage among end users will vary greatly.
Unpredictable load: The number of users of the WebApp may vary by orders of magnitude
from day to day. One hundred users may show up on Monday; 10,000 may use the system
on Thursday.
Performance: If a WebApp user must wait too long (for access, for server-side processing,
for client-side formatting and display), he or she may decide to go elsewhere.
Availability: Although expectation of 100 percent availability is unreasonable, users of
popular WebApps often demand access on a 24/7/365 basis. Users in Australia or Asia
might demand access during times when traditional domestic software applications in North
America might be taken off-line for maintenance.
Data driven: The primary function of many WebApps is to use hypermedia to present text,
graphics, audio, and video content to the end user. In addition, WebApps are commonly
used to access information that exists on databases that are not an integral part of the Web-
based environment (e.g. e-commerce or financial applications).
Content sensitive: The quality and aesthetic nature of content remains an important
determinant of the quality of a WebApp.
Continuous evolution: Unlike conventional application software that evolves over a series
of planned, chronologically spaced releases, Web applications evolve continuously. It is not
unusual for some WebApps (specifically, their content) to be updated on a minute-by-
minute schedule or for content to be independently computed for each request.
Immediacy: Although immediacy—the compelling need to get software to market
quickly—is a characteristic of many application domains, WebApps often exhibit a time-to-
market that can be a matter of a few days or weeks.
Security: Because WebApps are available via network access, it is difficult, if not
impossible, to limit the population of end users who may access the application. In order to
protect sensitive content and provide secure modes of data transmission, strong security
measures must be implemented throughout the infrastructure that supports a WebApp and
within the application itself.
Aesthetics: An undeniable part of the appeal of a WebApp is its look and feel. When an
application has been designed to market or sell products or ideas, aesthetics may have as
much to do with success as technical design.
Software Engineering
Software engineering is defined by the IEEE (Institute of Electrical and
Electronics Engineers), an authoritative body on computing standards, as follows:
“The application of a systematic, disciplined, quantifiable approach to the
development, operation, and maintenance of software; that is, the application of engineering
to software.”
Before explaining this definition, let's first look at another definition of
software engineering given by Ian Sommerville.
“Software engineering is concerned with all aspects of software production. It is not just
concerned with the technical processes of software development but also with activities such
as software project management and with the development of tools, methods and theories to
support software production.”
A definition proposed by Fritz Bauer at the seminal conference on the subject
still serves as a basis for discussion:
“Software engineering is the establishment and use of sound engineering
principles in order to obtain economically software that is reliable and works efficiently on
real machines.”
These definitions make it clear that Software Engineering is not just about
writing code.
Software Engineering Framework
Any engineering approach must be founded on an organizational commitment
to quality. That means the software development organization must have a special focus on
quality while performing the software engineering activities. Based on this commitment to
quality by the organization, a software engineering framework is proposed. The major
components of this framework are described below.
Quality Focus: As we have said earlier, the given framework is based on the organizational
commitment to quality. The quality focus demands that processes be defined for rational and
timely development of software. And quality should be emphasized while executing these
processes.
Processes: The processes are a set of key process areas (KPAs) for effectively managing and
delivering quality software in a cost-effective manner. The processes define the tasks to be
performed and the order in which they are to be performed. Every task has some
deliverables and every deliverable should be delivered at a particular milestone.
Methods: Methods provide the technical “how-to’s” to carry out these tasks. There could be
more than one technique to perform a task and different techniques could be used in
different situations.
Tools: Tools provide automated or semi-automated support for software processes, methods,
and quality control.
The Software Process:
A process is a collection of activities, actions, and tasks that are performed
when some work product is to be created. An activity strives to achieve a broad objective
(e.g., communication with stakeholders) and is applied regardless of the application domain,
size of the project, complexity of the effort, or degree of rigor with which software
engineering is to be applied. An action (e.g., architectural design) encompasses a set of tasks
that produce a major work product (e.g., an architectural design model). A task focuses on a
small, but well-defined objective (e.g., conducting a unit test) that produces a tangible
outcome.
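For example, the task "conducting a unit test" can be made concrete with a small sketch. Python is used here purely for illustration, and the `word_count` function is an invented work product, not something from the text:

```python
import unittest

def word_count(text):
    """Hypothetical work product under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    """The tangible outcome of the task 'conduct a unit test'."""
    def test_simple_sentence(self):
        self.assertEqual(word_count("software is engineered"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Run the suite without exiting the interpreter.
unittest.main(argv=["word_count_test"], exit=False)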
In the context of software engineering, a process is not a rigid prescription for
how to build computer software. Rather, it is an adaptable approach that enables the people
doing the work (the software team) to pick and choose the appropriate set of work actions
and tasks. The intent is always to deliver software in a timely manner and with sufficient
quality to satisfy those who have sponsored its creation and those who will use it.
A process framework establishes the foundation for a complete software
engineering process by identifying a small number of framework activities that are
applicable to all software projects, regardless of their size or complexity. In addition, the
process framework encompasses a set of umbrella activities that are applicable across the
entire software process. A generic process framework for software engineering encompasses
five activities:
Communication: Before any technical work can commence, it is critically important to
communicate and collaborate with the customer (and other stakeholders). The intent is to
understand stakeholders’ objectives for the project and to gather requirements that help
define software features and functions.
Planning: Any complicated journey can be simplified if a map exists. A software project is
a complicated journey, and the planning activity creates a “map” that helps guide the team as
it makes the journey. The map—called a software project plan—defines the software
engineering work by describing the technical tasks to be conducted, the risks that are likely,
the resources that will be required, the work products to be produced, and a work schedule.
Modeling: Whether you’re a landscaper, a bridge builder, an aeronautical engineer, a
carpenter, or an architect, you work with models every day. You create a “sketch” of the
thing so that you’ll understand the big picture—what it will look like architecturally, how
the constituent parts fit together, and many other characteristics. If required, you refine the
sketch into greater and greater detail in an effort to better understand the problem and how
you’re going to solve it. A software engineer does the same thing by creating models to
better understand software requirements and the design that will achieve those requirements.
Construction: This activity combines code generation (either manual or automated) and the
testing that is required to uncover errors in the code.
Deployment: The software (as a complete entity or as a partially completed increment) is
delivered to the customer who evaluates the delivered product and provides feedback based
on the evaluation.
These five generic framework activities can be used during the development
of small, simple programs, the creation of large Web applications, and for the engineering of
large, complex computer-based systems. The details of the software process will be quite
different in each case, but the framework activities remain the same.
For many software projects, framework activities are applied iteratively as a
project progresses. That is, communication, planning, modeling, construction, and
deployment are applied repeatedly through a number of project iterations. Each project
iteration produces a software increment that provides stakeholders with a subset of overall
software features and functionality. As each increment is produced, the software becomes
more and more complete.
Software engineering process framework activities are complemented by a
number of umbrella activities. In general, umbrella activities are applied throughout a
software project and help a software team manage and control progress, quality, change, and
risk. Typical umbrella activities include:
Software project tracking and control—allows the software team to assess progress
against the project plan and take any necessary action to maintain the schedule.
Risk management—assesses risks that may affect the outcome of the project or the quality
of the product.
Software quality assurance—defines and conducts the activities required to ensure software
quality.
Technical reviews—assess software engineering work products in an effort to uncover and
remove errors before they are propagated to the next activity.
Measurement—defines and collects process, project, and product measures that assist the
team in delivering software that meets stakeholders’ needs; can be used in conjunction with
all other framework and umbrella activities.
Software configuration management—manages the effects of change throughout the
software process.
Reusability management—defines criteria for work product reuse (including software
components) and establishes mechanisms to achieve reusable components.
Work product preparation and production—encompasses the activities required to create
work products such as models, documents, logs, forms, and lists.
The software engineering process is not a rigid prescription that must be followed
strictly by a software team. Rather, it should be agile and adaptable (to the problem, to the
project, to the team, and to the organizational culture). Therefore, a process adopted for one
project might be significantly different than a process adopted for another project. Among
the differences are
Overall flow of activities, actions, and tasks and the interdependencies among them
Degree to which actions and tasks are defined within each framework activity
Degree to which work products are identified and required.
Manner in which quality assurance activities are applied
Manner in which project tracking and control activities are applied
Overall degree of detail and rigor with which the process is described
Degree to which the customer and other stakeholders are involved with the project
Level of autonomy given to the software team
Degree to which team organization and roles are prescribed.
Software Engineering Practice:
A generic software process model is composed of a set of activities that establish
a framework for software engineering practice. Generic framework activities—
communication, planning, modeling, construction, and deployment—and umbrella
activities establish a skeleton architecture for software engineering work. But how does the
practice of software engineering fit in? Let’s gain a basic understanding of the generic
concepts and principles that apply to framework activities.
The Essence of Practice
George Polya outlined the essence of problem solving, and consequently, the
essence of software engineering practice:
1. Understand the problem (communication and analysis).
2. Plan a solution (modeling and software design).
3. Carry out the plan (code generation).
4. Examine the result for accuracy (testing and quality assurance).
Understand the problem. It’s sometimes difficult to admit, but most of us suffer from
hubris when we’re presented with a problem. We listen for a few seconds and then think, Oh
yeah, I understand, let’s get on with solving this thing. Unfortunately, understanding isn’t
always that easy. It’s worth spending a little time answering a few simple questions:
Who has a stake in the solution to the problem? That is, who are the stakeholders?
(A stakeholder is anyone who has a stake in the successful outcome of the project—
business managers, end users, software engineers, support people, etc.).
What are the unknowns? What data, functions, and features are required to properly
solve the problem?
Can the problem be compartmentalized? Is it possible to represent smaller problems
that may be easier to understand?
Can the problem be represented graphically? Can an analysis model be created?
Plan the solution. Now you understand the problem (or so you think) and you can’t wait to
begin coding. Before you do, slow down just a bit and do a little design:
Have you seen similar problems before? Are there patterns that are
recognizable in a potential solution? Is there existing software that
implements the data, functions, and features that are required?
Has a similar problem been solved? If so, are elements of the solution
reusable?
Can sub-problems be defined? If so, are solutions readily apparent for the
sub-problems?
Can you represent a solution in a manner that leads to effective
implementation? Can a design model be created?
Carry out the plan. The design you’ve created serves as a road map for the system you
want to build. There may be unexpected detours, and it’s possible that you’ll discover an
even better route as you go, but the “plan” will allow you to proceed without getting lost.
Does the solution conform to the plan? Is source code traceable to the design
model?
Is each component part of the solution provably correct? Have the design and
code been reviewed, or better, have correctness proofs been applied to the
algorithm?
Examine the result. You can’t be sure that your solution is perfect, but you can be sure that
you’ve designed a sufficient number of tests to uncover as many errors as possible.
Is it possible to test each component part of the solution? Has a reasonable
testing strategy been implemented?
Does the solution produce results that conform to the data, functions, and
features that are required? Has the software been validated against all
stakeholder requirements?
It shouldn’t surprise you that much of this approach is common sense. In fact,
it’s reasonable to state that a commonsense approach to software engineering
will never lead you astray.
General Principles
The dictionary defines the word principle as “an important underlying law or
assumption required in a system of thought.” Throughout these notes I’ll discuss principles at
many different levels of abstraction. Some focus on software engineering as a whole, others
consider a specific generic framework activity (e.g., communication), and still others focus
on software engineering actions (e.g., architectural design) or technical tasks (e.g., write a
usage scenario). Regardless of their level of focus, principles help you establish a mind-set
for solid software engineering practice. They are important for that reason.
David Hooker has proposed seven principles that focus on software
engineering practice as a whole. They are reproduced in the following paragraphs:
The First Principle: The Reason It All Exists
A software system exists for one reason: to provide value to its users. All
decisions should be made with this in mind. Before specifying a system requirement, before
noting a piece of system functionality, before determining the hardware platforms or
development processes, ask yourself questions such as: “Does this add real value to the
system?” If the answer is “no,” don’t do it. All other principles support this one.
The Second Principle: KISS (Keep It Simple, Stupid!)
Software design is not a haphazard process. There are many factors to
consider in any design effort. All design should be as simple as possible, but no simpler.
This facilitates having a more easily understood and easily maintained system. This is not to
say that features, even internal features, should be discarded in the name of simplicity.
Indeed, the more elegant designs are usually the more simple ones. Simple also does not
mean “quick and dirty.” In fact, it often takes a lot of thought and work over multiple
iterations to simplify. The payoff is software that is more maintainable and less error-prone.
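As an illustration of simplifying over iterations (both functions are invented for this sketch), compare a convoluted first draft with the simpler version a later pass might produce; the behavior is identical, but the second is easier to read and maintain:

```python
def total_price_v1(items):
    # First draft: manual indexing and redundant bookkeeping.
    total = 0
    i = 0
    while i < len(items):
        price = items[i]["price"]
        qty = items[i]["qty"]
        total = total + price * qty
        i = i + 1
    return total

def total_price_v2(items):
    # After simplification: same behavior, expressed directly.
    return sum(item["price"] * item["qty"] for item in items)
```

Neither version drops a feature; the simplification is purely in the design, which is exactly what the principle asks for.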
The Third Principle: Maintain the Vision
A clear vision is essential to the success of a software project. Without one, a
project almost unfailingly ends up being “of two or more minds” about itself. Without
conceptual integrity, a system threatens to become a patchwork of incompatible designs,
held together by the wrong kind of screws. . . . Compromising the architectural vision of a
software system weakens and will eventually break even the well-designed systems. Having
an empowered architect who can hold the vision and enforce compliance helps ensure a very
successful software project.
The Fourth Principle: What You Produce, Others Will Consume
Seldom is an industrial-strength software system constructed and used in a
vacuum. In some way or other, someone else will use, maintain, document, or otherwise
depend on being able to understand your system. So, always specify, design, and implement
knowing someone else will have to understand what you are doing. The audience for any
product of software development is potentially large. Specify with an eye to the users.
Design, keeping the implementers in mind. Code with concern for those that must maintain
and extend the system. Someone may have to debug the code you write, and that makes
them a user of your code. Making their job easier adds value to the system.
The Fifth Principle: Be Open to the Future
A system with a long lifetime has more value. In today’s computing
environments, where specifications change on a moment’s notice and hardware platforms
are obsolete when just a few months old, software lifetimes are typically measured in months
instead of years. However, true “industrial-strength” software systems must endure far
longer. To do this successfully, these systems must be ready to adapt to these and other
changes. Systems that do this successfully are those that have been designed this way from
the start. Never design yourself into a corner.
Always ask “what if,” and prepare for all possible answers by creating
systems that solve the general problem, not just the specific one. This could very possibly
lead to the reuse of an entire system.
The Sixth Principle: Plan Ahead for Reuse
Reuse saves time and effort. Achieving a high level of reuse is arguably the
hardest goal to accomplish in developing a software system. The reuse of code and designs
has been proclaimed as a major benefit of using object-oriented technologies. However, the
return on this investment is not automatic. To leverage the reuse possibilities that object-
oriented programming provides requires forethought and planning. There are many
techniques to realize reuse at every level of the system development process. . . . Planning
ahead for reuse reduces the cost and increases the value of both the reusable components
and the systems into which they are incorporated.
The Seventh principle: Think!
This last principle is probably the most overlooked. Placing clear, complete
thought before action almost always produces better results. When you think about
something, you are more likely to do it right. You also gain knowledge about how to do it
right again. If you do think about something and still do it wrong, it becomes a valuable
experience. A side effect of thinking is learning to recognize when you don’t know
something, at which point you can research the answer.
If every software engineer and every software team simply followed
Hooker’s seven principles, many of the difficulties we experience in building complex
computer-based systems would be eliminated.
Chapter 02
SOFTWARE MODELS
Software Process:
Software process is a set of activities, actions, and tasks that are required to
build high-quality software. A software process defines the approach that is taken as
software is engineered. It provides stability, control, and organization to an activity that can,
if left uncontrolled, become quite chaotic. To build an efficient and working product, the
software process must be efficient. There are a number of software process assessment
mechanisms that enable organizations to determine the “maturity” of their software process.
However, the quality, timeliness, and long-term viability of the product you build are the
best indicators of the efficacy of the process.
A Generic Process Model:
A process was defined as a collection of work activities, actions, and tasks
that are performed when some work product is to be created. Each of these activities,
actions, and tasks reside within a framework or model that defines their relationship with the
process and with one another.
The software process is represented schematically in the following figure. In this
figure, each framework activity is populated by a set of software engineering actions. Each
software engineering action is defined by a task set that identifies the work tasks that are
to be completed, the work products that will be produced, the quality assurance points that
will be required, and the milestones that will be used to indicate progress.
A generic process framework for software
engineering defines five framework activities
Communication
Planning
Modeling
Construction
Deployment
In addition, a set of umbrella activities (project tracking and control, risk
management, quality assurance, configuration management, technical reviews, and others)
is applied throughout the process.
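The framework and umbrella activities just listed can be modeled in code; a minimal, purely illustrative Python sketch (the names come from the text, the data structures are hypothetical):

```python
# The five framework activities of the generic process framework,
# in the order given in the text.
FRAMEWORK_ACTIVITIES = [
    "communication",
    "planning",
    "modeling",
    "construction",
    "deployment",
]

# Umbrella activities apply across the whole process, not to one phase.
UMBRELLA_ACTIVITIES = [
    "project tracking and control",
    "risk management",
    "quality assurance",
    "configuration management",
    "technical reviews",
]

def run_process(perform):
    """Execute each framework activity in order, noting that every
    umbrella activity accompanies it throughout the process."""
    log = []
    for activity in FRAMEWORK_ACTIVITIES:
        perform(activity)
        log.append((activity, list(UMBRELLA_ACTIVITIES)))
    return log

log = run_process(lambda a: None)
print([entry[0] for entry in log])  # the five activities, in order
```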
Process Flow:
An important aspect of a software process model is process flow. Process flow
defines the organization of the activities, actions, or tasks of a process model with respect to
sequence and time.
There are various process flows:
1. Linear Process Flow
A linear process flow executes each of the five framework activities in sequence,
beginning with communication and culminating with deployment.
2. Iterative Process Flow
An iterative process flow repeats one or more of the activities before proceeding to
the next activity.
3. Evolutionary Process Flow
An evolutionary process flow executes the activities in a “circular” manner. Each
circuit through the five activities leads to a more complete version of the software.
4. Parallel Process Flow:
A parallel process flow executes one or more activities in parallel with other
activities.
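The sequential flows above can be contrasted with a small sketch (the parallel flow would run activities concurrently, e.g. with threads, and is omitted for brevity); the functions below are illustrative, not part of any standard:

```python
# Activity names from the generic framework described earlier.
ACTIVITIES = ["communication", "planning", "modeling",
              "construction", "deployment"]

def linear_flow():
    # each activity exactly once, in sequence
    return list(ACTIVITIES)

def iterative_flow(repeats):
    # repeats, e.g. {"modeling": 2}, re-runs an activity
    # before proceeding to the next one
    flow = []
    for activity in ACTIVITIES:
        flow.extend([activity] * repeats.get(activity, 1))
    return flow

def evolutionary_flow(circuits):
    # one full "circular" pass through all five activities
    # per increasingly complete version of the software
    return ACTIVITIES * circuits

print(linear_flow())
print(iterative_flow({"modeling": 2}))
print(len(evolutionary_flow(3)))  # 15
```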
Framework Activity:
A software team would need significantly more information before it could
properly execute any one of the activities of the generic model as part of the software process.
Therefore, you are faced with a key question: What actions are appropriate for a framework
activity, given the nature of the problem to be solved, the characteristics of the people doing
the work, and the stakeholders who are sponsoring the project?
For a small software project requested by one person (at a remote location)
with simple, straightforward requirements, the communication activity might encompass
little more than a phone call with the appropriate stakeholder. Therefore, the only necessary
action is a phone conversation, and the work tasks (the task set) that this action encompasses
are:
1. Make contact with stakeholder via telephone.
2. Discuss requirements and take notes.
3. Organize notes into a brief written statement of requirements.
4. E-mail to stakeholder for review and approval.
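The task set above can be represented as an ordered list of work tasks; a minimal sketch (the executor is hypothetical, the task names come from the text):

```python
# The four work tasks of the "phone conversation" action, as a task set.
phone_conversation_task_set = [
    "make contact with stakeholder via telephone",
    "discuss requirements and take notes",
    "organize notes into a brief written statement of requirements",
    "e-mail to stakeholder for review and approval",
]

def execute(task_set):
    """Run the work tasks in order; a real task would also produce
    work products and pass quality assurance points."""
    completed = []
    for task in task_set:
        completed.append(task)
    return completed

assert execute(phone_conversation_task_set) == phone_conversation_task_set
```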
If the project were considerably more complex, with many stakeholders, each
with a different set of (sometimes conflicting) requirements, the communication activity might
have six distinct actions:
Inception,
Elicitation,
Elaboration,
Negotiation,
Specification,
Validation.
Each of these software engineering actions would have many work tasks and
a number of distinct work products.
Identifying a Task Set
Each software engineering action (e.g., elicitation, an action associated with
the communication activity) can be represented by a number of different task sets—each a
collection of software engineering work tasks, related work products, quality assurance
points, and project milestones. Software engineers choose a task set that best accommodates
the needs of the project and the characteristics of the team. This implies that a software
engineering action can be adapted to the specific needs of the software project and the
characteristics of the project team.
Process Patterns
Every software team encounters problems as it moves through the software
process. It would be useful if proven solutions to these problems were readily available to
the team so that the problems could be addressed and resolved quickly. A process pattern
describes a process-related problem that is encountered during software engineering work,
identifies the environment in which the problem has been encountered, and suggests one or
more proven solutions to the problem. A process pattern provides a template, a consistent
method for describing problem solutions within the context of the software process. By
combining patterns, a software team can solve problems and construct a process that best
meets the needs of a project.
Patterns can be defined at any level of abstraction. In some cases, a pattern
might be used to describe a problem (and solution) associated with a complete process
model (e.g., prototyping). In other situations, patterns can be used to describe a problem
(and solution) associated with a framework activity (e.g., planning) or an action within a
framework activity (e.g., project estimating).
Ambler has proposed a template for describing a process pattern:
Pattern Name. The pattern is given a meaningful name describing it within the
context of the software process (e.g., Technical Reviews).
Forces. The environment in which the pattern is encountered and the issues that
make the problem visible and may affect its solution.
Type. The pattern type is specified. Ambler suggests three types:
1. Stage pattern—defines a problem associated with a framework activity for the process.
Since a framework activity encompasses multiple actions and work tasks, a stage pattern
incorporates multiple task patterns that are relevant to the stage (framework activity). An
example of a stage pattern might be Establishing Communication. This pattern would
incorporate the task pattern Requirements Gathering and others.
2. Task pattern—defines a problem associated with a software engineering action or work
task and relevant to successful software engineering practice (e.g., Requirements
Gathering is a task pattern).
3. Phase pattern—defines the sequence of framework activities that occurs within the
process, even when the overall flow of activities is iterative in nature.
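Ambler's template can be captured as a small data structure; the class and instances below are illustrative sketches built from the examples in the text:

```python
from dataclasses import dataclass

@dataclass
class ProcessPattern:
    name: str          # meaningful name within the software process
    forces: str        # environment and issues affecting the solution
    pattern_type: str  # one of Ambler's types: "stage", "task", "phase"

# A stage pattern can incorporate task patterns, as in the example above:
# Establishing Communication incorporates Requirements Gathering.
establishing_communication = ProcessPattern(
    name="Establishing Communication",
    forces="many stakeholders with conflicting requirements",
    pattern_type="stage",
)
requirements_gathering = ProcessPattern(
    name="Requirements Gathering",
    forces="requirements are not yet known",
    pattern_type="task",
)
print(establishing_communication.pattern_type)  # stage
```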
Process Assessment and Improvement:
The existence of a software process is no guarantee that software will be
delivered on time, that it will meet the customer’s needs, or that it will exhibit the technical
characteristics that will lead to long-term quality characteristics. Process patterns must be
coupled with solid software engineering practice. In addition, the process itself can be
assessed to ensure that it meets a set of basic process criteria that have been shown to be
essential for successful software engineering.
A number of different approaches to software process assessment and
improvement have been proposed over the past few decades:
Standard CMMI Assessment Method for Process Improvement (SCAMPI)
SCAMPI provides a process assessment model that incorporates five
phases: initiating, diagnosing, establishing, acting, and learning. The SCAMPI method
uses the SEI CMMI as the basis for assessment.
CMM-Based Appraisal for Internal Process Improvement (CBA IPI)
It provides a diagnostic technique for assessing the relative maturity of a
software organization; uses the SEI CMM as the basis for the assessment.
SPICE (ISO/IEC 15504)
A standard that defines a set of requirements for software process assessment.
The intent of the standard is to assist organizations in developing an objective evaluation of
the efficacy of any defined software process.
ISO 9001:2000 for Software
A generic standard that applies to any organization that wants
to improve the overall quality of the products, systems, or services that it provides.
Therefore, the standard is directly applicable to software organizations and companies.
Prescriptive Process Models
Prescriptive process models were originally proposed to bring order to the
chaos of software development. History has indicated that these traditional models have
brought a certain amount of useful structure to software engineering work and have provided
a reasonably effective road map for software teams.
All software process models can accommodate the generic framework
activities, but each applies a different emphasis to these activities and defines a process flow
that invokes each framework activity in a different manner.
Waterfall Model:
The Waterfall Model is also called the Linear Sequential Model or Classic Life Cycle
Model. In this Model, software evolution proceeds through an orderly sequence of
transitions from one phase to the next in order. The waterfall model is a sequential design
process, often used in software development processes, in which progress is seen as flowing
steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis,
Design, Construction, Testing, Production/Implementation, and Maintenance. These models
have been perhaps most useful in helping to structure, staff, and manage large software
development projects in complex organizational settings, which was one of their primary
purposes.
With the waterfall model, the activities performed in a software
development project are requirements analysis, project planning, system design, detailed
design, coding and unit testing, and system integration and testing. The end of each phase
is clearly identified before the next begins. In this way, the software passes through
all of its development steps one by one. Graphical representation of the Waterfall Model is
given below.
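The strict phase sequencing can be sketched as a gate-checked loop; the phase names come from the list above, while the gate mechanism is a hypothetical illustration:

```python
# Waterfall phases, in the order listed in the text.
PHASES = [
    "requirements analysis", "project planning", "system design",
    "detailed design", "coding and unit testing",
    "system integration and testing",
]

def waterfall(completed_ok):
    """Run phases strictly in order; stop at the first phase whose
    exit criteria (milestone) are not met. There is no going back."""
    done = []
    for phase in PHASES:
        if not completed_ok(phase):
            return done  # every later phase is blocked
        done.append(phase)
    return done

# all phases pass their milestones:
assert waterfall(lambda p: True) == PHASES
# system design fails its review, so everything after it is blocked:
assert waterfall(lambda p: p != "system design") == PHASES[:2]
```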
Merits of Waterfall Model:
The waterfall model offers numerous advantages for software developers.
Simple and Straightforward:
It is a linear model and, of course, linear models are the simplest to
implement. It is conceptually straightforward and divides the large task of building a
software system into a series of cleanly divided phases, each phase dealing with a separate
logical concern.
Disciplined and easy to Administer:
The staged development cycle enforces discipline: every phase has a defined
start and end point, and progress can be conclusively identified (through the use of
milestones) by both vendor and client. It is also easy to administer in a contractual setup, as
each phase is completed and its work product produced. Due to the division of the project
into phases, its progress is easy to track.
Reduction of Rework:
The emphasis on requirements and design before writing a single line of code
ensures minimal wastage of time and effort and reduces the risk of schedule slippage, or of
customer expectations not being met.
Quality Assurances:
Getting the requirements and design out of the way first also improves
quality. Verification at each stage ensures early detection of errors and misunderstandings. It
is much easier to catch and correct possible flaws at the design stage than at the testing
stage, after all the components have been integrated, when tracking down specific errors is
more complex. Catching these errors at earlier stages thus assures an acceptable, quality
product.
Distributed Implementation:
Finally, because the first two phases end in the production of a formal
specification, the waterfall model can aid efficient knowledge transfer when team members
are dispersed in different locations.
Demerits of Waterfall Model:
Static Requirements:
The main disadvantage of the Waterfall Model is that it assumes that the
requirements remain the same throughout the development of the project. So, if the project
is going wrong, the developer can't go back and change the requirements, as doing so would
create a lot of confusion.
Requirements gathering problem:
The waterfall model requires that requirements not be changed once
gathered. But in practice it is impossible to get final, static requirements, as it often
happens that the client is not sure exactly what he wants to be automated. It is possible
for a system designer to automate an existing manual system, but for a completely new
system, determining the requirements at the beginning is very difficult; therefore, having
unchanging requirements is unrealistic for real projects.
Ever Changing Technology:
Freezing the requirements usually requires choosing the hardware (since it
forms a part of the requirement specification). A large project might take a few years to
complete. If the hardware is selected early, then, given the speed at which hardware
technology changes, it is quite possible that the final software will employ a hardware
technology that is near the end of its life and going to be obsolete soon. This is clearly not
desirable for such expensive software.
No Preview during Process:
Until the final stage of the development cycle is complete, it is impossible to
show a working demonstration to the client. Thus, the client is not in a position to verify
whether what has been designed is exactly what is desired.
Waste of Time:
The main property of the waterfall model is that a stage can't start until the
previous one is completed. Due to this property, developers are often blocked unnecessarily
because previous tasks are not done. So, a lot of precious time is wasted.
Incremental Model
This is a combination of the linear sequential model and the iterative model.
In the incremental model, as opposed to the waterfall model, the product is partitioned into
smaller pieces which are then built and delivered to the client in increments at regular
intervals. Since each piece is much smaller than the whole, it can be built and sent to the
client quickly. This results in quick feedback from the client and any requirement related
errors or changes can be incorporated at a much lesser cost. It is therefore less disturbing as
compared to the waterfall model. It also requires a smaller capital outlay and yields a rapid
return on investment. However, this model needs an open architecture to allow integration
of subsequent builds to yield the bigger product.
There are two fundamental approaches to the incremental development.
In the first case, the requirements, specifications, and architectural design for the whole
product are completed before implementation of the various builds commences.
In the second case, once the user requirements have been elicited, the specifications of the first
build are drawn up. When this has been completed, the specification team turns to the
specification of the second build while the design team designs the first build. Thus the
various builds are constructed in parallel, with each team making use of the information
gained in all the previous builds. This approach incurs the risk that the resulting builds
will not fit together and hence requires careful monitoring.
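The delivery-and-feedback cycle of the incremental model can be sketched as follows; the build contents and the feedback function are invented purely for illustration:

```python
def incremental(builds, feedback):
    """builds: ordered list of feature sets, delivered one at a time.
    feedback(delivered) may return revised plans for the remaining
    builds; requirement changes are cheap to incorporate this way,
    unlike in the waterfall model."""
    delivered = []
    remaining = list(builds)
    while remaining:
        build = remaining.pop(0)
        delivered.append(build)           # client gets a working piece
        change = feedback(delivered)      # quick feedback from client
        if change:
            remaining = change            # adjust the remaining builds
    return delivered

# after seeing the first build, the client asks for charts in reports:
result = incremental(
    [["login"], ["reports"], ["export"]],
    lambda d: [["reports", "charts"], ["export"]] if d == [["login"]] else None,
)
print(result)  # [['login'], ['reports', 'charts'], ['export']]
```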
Merits of Incremental Model:
Operational on early Stage
It delivers an operational quality product at each stage, but one that satisfies
only a subset of the client’s requirements.
Less Staff Required
A relatively small number of programmers/developers may be used.
Client Feedback
From the delivery of the first build, the client is able to perform useful work.
There is a working system at all times and progress is visible, rather than being buried in
documents. In this fashion, Clients can see the system and provide feedback which reduces
the fear of any key feature missing or chances of errors.
Gradual Evolution of the Existing System
It reduces the traumatic effect of imposing a completely new product on the
client organization by providing a gradual introduction.
Reducing Complexity
Most importantly, it breaks down the problem into sub-problems, dealing
with reduced complexity, and reduces the ripple effect of changes by reducing the scope to
only a part of the problem at a time.
Demerits of Incremental Model:
Difficult Integration
Each additional build has somehow to be incorporated into the existing
structure without degrading the quality of what has been built to date. So, addition of
succeeding builds must be easy and straightforward.
Unexpected Problems:
The more a succeeding build is a source of unexpected problems, the
more the existing structure has to be reorganized, leading to inefficiency, degraded
internal quality, and degraded maintainability.
Design errors:
Design errors become part of the system and are hard to remove.
Client's Creeping Requirements:
Due to gradual development of the system, when the client reviews the
system, he finds new possibilities and wants to change requirements. Such creeping
requirements from the client lead to a degraded product.
Evolutionary Process Models:
Software evolves over a period of time. Business and product requirements
often change as development proceeds, making a straight line path to an end product
unrealistic; tight market deadlines make completion of a comprehensive software product
impossible, but a limited version must be introduced to meet competitive or business
pressure; a set of core product or system requirements is well understood, but the details of
product or system extensions have yet to be defined. In these and similar situations, you
need a process model that has been explicitly designed to accommodate a product that
evolves over time.
Evolutionary models are iterative. They are characterized in a manner that
enables you to develop increasingly more complete versions of the software. Two common
evolutionary process models are given below.
Prototyping Model
The Prototyping Model is a systems development method (SDM) in which a
prototype (a mockup of a final product) is built, tested, and then reworked as necessary
until an acceptable prototype is finally achieved, from which the complete system or product
can then be developed. The developer and customer define the overall objectives for the
software. A quick design focuses on what the customer will see.
From this, a prototype is constructed. The user evaluates it and improvements
are made. This continues in an iterative way until a satisfactory product is achieved. This
model works best in scenarios where not all of the project requirements are known in detail
ahead of time. It is an iterative, trial-and-error process that takes place between the
developers and the users.
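The build-evaluate-refine cycle just described can be sketched as a loop; the evaluation and refinement functions below are stand-ins for real user feedback:

```python
def prototyping(initial_design, evaluate, refine, max_rounds=10):
    """Quick design, then build-evaluate-rework until the user
    accepts the prototype (or the round limit is hit)."""
    prototype = initial_design
    for round_no in range(1, max_rounds + 1):
        if evaluate(prototype):            # user accepts the prototype
            return prototype, round_no
        prototype = refine(prototype)      # rework based on feedback
    raise RuntimeError("no acceptable prototype within the limit")

# toy example: each refinement adds one missing feature,
# and the user is satisfied once three features are present
final, rounds = prototyping(
    initial_design={"features": 1},
    evaluate=lambda p: p["features"] >= 3,
    refine=lambda p: {"features": p["features"] + 1},
)
print(final, rounds)  # {'features': 3} 3
```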
Merits of Prototyping Model:
Reduced time and costs:
Prototyping can improve the quality of requirements and specifications
provided to developers. Because changes cost exponentially more to implement as they are
detected later in development, the early determination of what the user really wants can
result in faster and less expensive software.
Improved and increased user involvement:
Prototyping requires user involvement and allows them to see and interact
with a prototype allowing them to provide better and more complete feedback and
specifications. Users’ examination of the prototype prevents many misunderstandings and
miscommunications that occur when each side believes the other understands what was said.
Since users know the problem domain better than anyone on the development team does,
increased interaction can result in final product that has greater tangible and intangible
quality. The final product is more likely to satisfy the user’s desire for look, feel and
performance.
Quality Product:
In this model, users are involved at every stage of development. This
increases the quality of the final product, as there is little chance of missing requirements. So,
it provides a better system to users, as users have natural tendency to change their mind in
specifying requirements and this method of developing systems supports this user tendency.
Quicker user feedback is available leading to better solutions.
Reduced Rework:
Errors can be detected much earlier, as the system is built side by side with
frequent verification and validation of the prototype. This reduces the rework and extra effort
during the development process.
Users Training
During development of the system using the prototyping model, users are
involved at every stage, so they have a good understanding of the system. This model is
therefore good for systems used by users who have little knowledge of IT.
Demerits of Prototyping Model:
Insufficient analysis:
The focus on a limited prototype can distract developers from properly
analyzing the complete project. This can lead to overlooking better solutions, preparation of
incomplete specifications or the conversion of limited prototypes into poorly engineered
final projects that are hard to maintain. Further, since a prototype is limited in functionality
it may not scale well if the prototype is used as the basis of a final deliverable, which may
not be noticed if developers are too focused on building a prototype as a model.
User confusion of prototype and finished system:
Users can begin to think that a prototype, intended to be thrown away, is
actually a final system that merely needs to be finished or polished. This can lead them to
expect the prototype to accurately model the performance of the final system when this is
not the intent of the developers. Users can also become attached to features that were
included in a prototype for consideration and then removed from the specification for a final
system. If users are able to require all proposed features be included in the final system this
can lead to conflict.
Developer misunderstanding of user objectives:
Developers may assume that users share their objectives, so there is always a
fear of missing some key feature. Users might believe they can demand auditing on every
field, whereas developers might think this is feature creep because they have made
assumptions about the extent of user requirements. If the solution provider has committed
delivery before the user requirements were reviewed, developers are between a rock and a
hard place, particularly if user management derives some advantage from their failure to
implement requirements.
Developer attachment to prototype:
Developers can also become attached to prototypes they have spent a great
deal of effort producing; this can lead to problems like attempting to convert a limited
prototype into a final system when it does not have an appropriate underlying architecture.
(This may suggest that throwaway prototyping, rather than evolutionary prototyping, should
be used.)
Excessive development time of the prototype:
A key property to prototyping is the fact that it is supposed to be done
quickly. If the developers lose sight of this fact, they very well may try to develop a
prototype that is too complex. When the prototype is thrown away the precisely developed
requirements that it provides may not yield a sufficient increase in productivity to make up
for the time spent developing the prototype. Users can become stuck in debates over details
of the prototype, holding up the development team and delaying the final product.
Expense of implementing prototyping:
The startup costs for building a development team focused on prototyping
may be high. Many companies have development methodologies in place, and changing
them can mean retraining, retooling, or both. Many companies tend to just jump into the
prototyping without bothering to retrain their workers as much as they should.
Spiral model
The spiral model, also known as the spiral lifecycle model, is a systems development
lifecycle (SDLC) model used in information technology. This model of development combines
the features of the prototyping model and the waterfall model. The spiral model is favoured
for large, expensive, and complicated projects. The model takes its name from the spiral
representation shown in the diagram. Beginning at the centre of the spiral, where there is
limited detailed knowledge of requirements and small costs, successive refinement of the
software is shown as a traverse around the spiral, with a corresponding accumulation of
costs as the length of the spiral increases. Interaction between phases is not shown directly,
since the model assumes a sequence of successive refinements coupled to decisions that the
project risks associated with the next refinement are acceptable. Each 360° rotation around
the centre of the spiral passes through four stages: planning, seeking alternatives, evaluation
of alternatives and risks, and (in the lower right quadrant) activities equivalent to the phases
of the Waterfall Model. Graphical representation of the Spiral Model is given below.
Steps Involved In Spiral Model:
1. The new system requirements are defined in as much detail as possible. This usually
involves interviewing a number of users representing all the external or internal users and
other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is
usually a scaled-down system, and represents an approximation of the characteristics of the
final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in
terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second
prototype; (3) planning and designing the second prototype; (4) constructing and testing the
second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great.
Risk factors might involve development cost overruns, operating-cost miscalculation, or any
other factor that could, in the customer's judgment, result in a less-than-satisfactory final
product.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and,
if necessary, another prototype is developed from it according to the fourfold procedure
outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype
represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a
continuing basis to prevent large-scale failures and to minimize downtime.
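The nine steps above can be compressed into a sketch of the spiral's risk-driven loop; the risk and satisfaction functions are illustrative stand-ins, not part of the model's formal definition:

```python
def spiral(risk_of, satisfied, refine, prototype, risk_limit=0.8):
    """Each circuit evaluates risk, then either aborts (step 5),
    stops when the customer is satisfied (steps 7-8), or evolves
    the next prototype (steps 4 and 6)."""
    circuits = 0
    while True:
        circuits += 1
        if risk_of(prototype) > risk_limit:   # risk deemed too great
            return None, circuits             # project aborted
        if satisfied(prototype):              # refined prototype accepted
            return prototype, circuits        # build the final system
        prototype = refine(prototype)         # evolve the next prototype

# toy example: risk stays acceptable, customer is satisfied
# once the prototype reaches its third refinement level
final, n = spiral(
    risk_of=lambda p: 0.1,
    satisfied=lambda p: p >= 3,
    refine=lambda p: p + 1,
    prototype=1,
)
print(final, n)  # 3 3
```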
Merits of Spiral Model:
Risk Management:
Repeated or continuous development helps in risk management. The
developers or programmers describe the characteristics with high priority first and then
develop a prototype based on these. This prototype is tested and risks are identified,
prototypes can be created to resolve them. When desired changes are made in the new
system the prototype is tested again. This continual and steady approach minimizes the risks
or failure associated with the change in the system.
Flexibility:
Adaptability in the design of spiral model in software engineering
accommodates any number of changes that may happen, during any phase of the project. It
is more able to cope with the (nearly inevitable) changes that software development
generally entails. Any extra functionality required can be added at a later stage
Controlled Process:
In the Spiral Model, the project passes through strong approval and documentation
control. After every iteration, the prototype is validated and verified. Weaknesses and
missing requirements are identified, and these problems are addressed in the next prototype.
Good for Big Projects:
Due to its iterative nature, the Spiral Model is best for large and mission-critical
projects. Because of its flexibility, there is no fear of any important or desired feature being
missed, and due to the controlled process, the risk of project failure is greatly reduced.
Client’s Training and Satisfaction:
Since prototype building is done in small fragments or bits, cost
estimation becomes easy and the customer can gain control over administration of the new
system. Prototypes give clients an overview of the progress and requirements, and as the
model continues towards the final phase, the customer's expertise on the new system grows,
enabling smooth development of a product meeting the client's needs.
Demerits of Spiral Model:
Not good for Small Projects:
Spiral models work best for large projects, where the costs involved are
much higher and the system prerequisites involve a higher level of complexity. This model
is not well suited to relatively small projects.
Need Expertise:
Spiral model needs extensive skill in evaluating uncertainties or risks
associated with the project and their abatement.
Hard to Manage:
Spiral models work on a protocol, which needs to be followed strictly for its
smooth operation. Sometimes it becomes difficult to follow this protocol.
Risk of Increasing Cost
Evaluating the risks involved in the project can shoot up the cost, and it may
be higher than the cost of building the system. In this way, the Spiral Model can be a very
costly model.
Less Reusability:
This model is highly customized, which limits reusability of the code, and
the model is applied differently for each application.
Chapter 03
SOFTWARE MODELS
Specialized Process Models:
Specialized process models take on many of the characteristics of one or
more of the traditional models presented before. However, these models tend to be applied
when a specialized or narrowly defined software engineering approach is chosen.
Component-Based Development:
Commercial off-the-shelf (COTS) software components, developed by
vendors who offer them as products, provide targeted functionality with well-defined
interfaces that enable the component to be integrated into the software that is to be built. The
component-based development model incorporates many of the characteristics of the spiral
model. It is evolutionary in nature, demanding an iterative approach to the creation of
software. However, the component-based development model constructs applications from
prepackaged software components. Modeling and construction activities begin with the
identification of candidate components. These components can be designed as either
conventional software modules or object-oriented classes or packages of classes. Regardless
of the technology that is used to create the components, the component-based development
model incorporates the following steps:
1. Available component-based products are researched and evaluated for the
application domain in question.
2. Component integration issues are considered.
3. A software architecture is designed to accommodate the components.
4. Components are integrated into the architecture.
5. Comprehensive testing is conducted to ensure proper functionality.
The component-based development model leads to software reuse, and
reusability provides software engineers with a number of measurable benefits. A software
engineering team can achieve a reduction in development cycle time as well as a reduction
in project cost if component reuse becomes part of its culture.
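As a rough sketch of the five steps above, the fragment below assembles an application from a prepackaged component sitting behind a well-defined interface. All class names and the placeholder logic are hypothetical, not real COTS products.

```python
from abc import ABC, abstractmethod

class PaymentComponent(ABC):
    """The well-defined interface the architecture is designed around (step 3)."""
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class VendorAPayment(PaymentComponent):
    """A candidate off-the-shelf component (steps 1-2); the logic is a stand-in."""
    def charge(self, amount_cents: int) -> bool:
        return amount_cents > 0  # placeholder for a real vendor call

class OrderApp:
    """The application: any component meeting the interface can be plugged in (step 4)."""
    def __init__(self, payment: PaymentComponent):
        self.payment = payment

    def checkout(self, amount_cents: int) -> str:
        return "paid" if self.payment.charge(amount_cents) else "declined"

app = OrderApp(VendorAPayment())
print(app.checkout(500))  # step 5: exercise the integrated whole
```

Because `OrderApp` depends only on the interface, a component from a different vendor could be swapped in without touching the application code, which is the reuse benefit the model promises.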
The Formal Methods Model:
The formal methods model encompasses a set of activities that leads to
formal mathematical specification of computer software. Formal methods enable you to
specify, develop, and verify a computer-based system by applying a rigorous, mathematical
notation. A variation on this approach, called cleanroom software engineering, is currently
applied by some software development organizations.
When formal methods are used during development, they provide a
mechanism for eliminating many of the problems that are difficult to overcome using other
software engineering paradigms. Ambiguity, incompleteness, and inconsistency can be
discovered and corrected more easily—not through ad hoc review, but through the
application of mathematical analysis. When formal methods are used during design, they
serve as a basis for program verification and therefore enable you to discover and correct
errors that might otherwise go undetected. Although not a mainstream approach, the formal
methods model offers the promise of defect-free software. Yet, concern about its
applicability in a business environment has been voiced:
The development of formal models is currently quite time consuming and expensive.
Because few software developers have the necessary background to apply formal
methods, extensive training is required.
It is difficult to use the models as a communication mechanism for technically
unsophisticated customers.
These concerns notwithstanding, the formal methods approach has gained
adherents among software developers who must build safety-critical software and among
developers that would suffer severe economic hardship should software errors occur.
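Real formal methods rely on mathematical notations such as Z, VDM, or B, with machine-checked proofs. As a loose illustration of the underlying idea only, the sketch below states a precise precondition and postcondition for a small function and verifies them at run time; the function itself is an invented example.

```python
def integer_sqrt(n: int) -> int:
    """Specification: requires n >= 0; ensures r*r <= n < (r+1)*(r+1)."""
    assert n >= 0, "precondition violated"   # unambiguous, checkable precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)    # postcondition verified before returning
    return r

print(integer_sqrt(10))  # 3
```

Stating the contract this precisely is what lets ambiguity, incompleteness, and inconsistency be caught by analysis rather than by ad hoc review.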
Agile Development:
Agile software development is a software development methodology that
promotes development iterations throughout the life cycle of the software development
project. Agile programming breaks down an application development project into small
modularized pieces. Each piece is referred to as an iteration, and each iteration lasts from one
to four weeks. Each iteration is like a mini software development project: it consists of
planning, requirements, analysis, design, coding, testing, and documentation, but for only a
particular application feature. At the end of each iteration, the team meets and re-evaluates
project priorities. Each iteration, addressed one at a time in a very short time frame, adds to
the application and represents complete functionality. An iteration may not deliver enough
functionality to warrant releasing the product to users, but the goal is to have a working,
bug-free build available at the end of each iteration. Agile methods produce very little
written documentation relative to other methods, which has led to criticism of agile
methods as undisciplined.
Agility for a software development organization is the ability to adapt and
react expeditiously and appropriately to changes in its environment and to demands imposed
by this environment. An agile process is one that readily embraces and supports this degree
of adaptability. So, it is not simply about the size of the process or the speed of delivery; it is
mainly about flexibility. (Crichton 2001, 27)
Agility Principles
The Agile Alliance defines 12 agility principles for those who want to
achieve agility:
1. Our highest priority is to satisfy the customer through early and continuous delivery of
valuable software.
2. Welcome changing requirements, even late in development. Agile processes harness
change for the customer’s competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with
a preference to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they
need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users
should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity—the art of maximizing the amount of work not done—is essential.
11. The best architectures, requirements, and designs emerge from self–organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and
adjusts its behavior accordingly.
Human Factors
Proponents of agile software development take great pains to emphasize the
importance of “people factors.” As Cockburn and Highsmith state, “Agile development
focuses on the talents and skills of individuals, molding the process to specific people and
teams.” The key point in this statement is that the process molds to the needs of the people
and team, not the other way around.
If members of the software team are to drive the characteristics of the process
that is applied to build software, a number of key traits must exist among the people on an
agile team and the team itself:
Competence:
In an agile development (as well as software engineering) context,
“competence” encompasses innate talent, specific software-related skills, and overall
knowledge of the process that the team has chosen to apply. Skill and knowledge of process
can and should be taught to all people who serve as agile team members.
Common focus:
Although members of the agile team may perform different tasks and bring
different skills to the project, all should be focused on one goal—to deliver a working
software increment to the customer within the time promised. To achieve this goal, the team
will also focus on continual adaptations (small and large) that will make the process fit the
needs of the team.
Collaboration:
Software engineering (regardless of process) is about assessing, analyzing,
and using information that is communicated to the software team; creating information that
will help all stakeholders understand the work of the team; and building information
(computer software and relevant databases) that provides business value for the customer.
To accomplish these tasks, team members must collaborate—with one another and all other
stakeholders.
Decision-making ability:
Any good software team (including agile teams) must be allowed the freedom
to control its own destiny. This implies that the team is given autonomy—decision-making
authority for both technical and project issues.
Fuzzy problem-solving ability:
Software managers must recognize that the agile team will continually have
to deal with ambiguity and will continually be buffeted by change. In some cases, the team
must accept the fact that the problem they are solving today may not be the problem that
needs to be solved tomorrow. However, lessons learned from any problem-solving activity
(including those that solve the wrong problem) may be of benefit to the team later in the
project.
Mutual trust and respect:
The agile team must become what DeMarco and Lister call a “jelled” team.
A jelled team exhibits the trust and respect that are necessary to make them “so strongly knit
that the whole is greater than the sum of the parts.”
Self-organization:
In the context of agile development, self-organization implies three things:
(1) the agile team organizes itself for the work to be done, (2) the team organizes the process
to best accommodate its local environment, (3) the team organizes the work schedule to best
achieve delivery of the software increment. Self-organization has a number of technical
benefits, but more importantly, it serves to improve collaboration and boost team morale. In
essence, the team serves as its own management. Ken Schwaber addresses these issues when
he writes: “The team selects how much work it believes it can perform within the iteration,
and the team commits to the work. Nothing de-motivates a team as much as someone else
making commitments for it. Nothing motivates a team as much as accepting the
responsibility for fulfilling commitments that it made itself.”
Merits of Agile Model:
1. Flexibility:
The most important advantage of the agile model is the ability to respond to
the changing requirements of the project. This ensures that the development team's efforts
are not wasted, which is often the case with other methodologies. Changes are integrated
immediately, which saves trouble later.
2. Communication between user and Developer:
There is no room for guesswork between the development team and the customer, as
there is face-to-face communication and continuous input from the client.
3. Quality Products:
The documentation is to the point, leaving no room for ambiguity. As a
result, high-quality software is delivered to the client in the shortest possible time, leaving
the customer satisfied.
4. User's Feedback:
Users interact face to face with the developers, so thanks to the user's quick
feedback there is little chance of requirements being missed.
Demerits of Agile Model:
1. Good for Small Projects:
For smaller projects, using the agile model is certainly profitable, but for a
large project it becomes difficult to judge the effort and time required within the software
development life cycle.
2. Creeping Requirements:
Since the requirements are ever changing, there is hardly any emphasis on
design and documentation. Therefore, the chances of the project easily going off track are
much greater.
3. Misleading Feed Back by User:
A further problem is that if the customer representative is unsure of the
requirements, the chances of the project going off track increase manifold. Only senior
developers are in a position to take the decisions necessary for agile development, which
leaves hardly any place for newbie programmers unless they are paired with senior resources.
Extreme Programming (XP)
In order to illustrate an agile process in a bit more detail, I’ll provide you with
an overview of Extreme Programming (XP), the most widely used approach to agile
software development. Although early work on the ideas and methods associated with
XP occurred during the late 1980s, the seminal work on the subject has been
written by Kent Beck. More recently, a variant of XP, called Industrial XP (IXP) has been
proposed. IXP refines XP and targets the agile process specifically for use within large
organizations.
XP Values
Beck defines a set of five values that establish a foundation for all work
performed as part of XP—communication, simplicity, feedback, courage, and respect. Each
of these values is used as a driver for specific XP activities, actions, and tasks.
In order to achieve effective communication between software engineers and
other stakeholders (e.g., to establish required features and functions for the software), XP
emphasizes close, yet informal (verbal) collaboration between customers and developers, the
establishment of effective metaphors for communicating important concepts, continuous
feedback, and the avoidance of voluminous documentation as a communication medium.
To achieve simplicity, XP restricts developers to design only for immediate
needs, rather than consider future needs. The intent is to create a simple design that can be
easily implemented in code. If the design must be improved, it can be refactored at a later
time. Feedback is derived from three sources: the implemented software itself, the customer,
and other software team members. By designing and implementing an effective testing
strategy, the software (via test results) provides the agile team with feedback. XP makes use
of the unit test as its primary testing tactic. As each class is developed, the team develops a
unit test to exercise each operation according to its specified functionality. As an increment
is delivered to a customer, the user stories or use cases that are implemented by the
increment are used as a basis for acceptance tests. The degree to which the software
implements the output, function, and behavior of the use case is a form of feedback.
Finally, as new requirements are derived as part of iterative planning, the
team provides the customer with rapid feedback regarding cost and schedule impact. Beck
argues that strict adherence to certain XP practices demands courage. A better word might
be discipline. For example, there is often significant pressure to design for future
requirements. Most software teams succumb, arguing that “designing for tomorrow” will
save time and effort in the long run. An agile XP team must have the discipline (courage) to
design for today, recognizing that future requirements may change dramatically, thereby
demanding substantial rework of the design and implemented code. By following each of
these values, the agile team inculcates respect among its members, between other
stakeholders and team members, and indirectly, for the software itself. As they achieve
successful delivery of software increments, the team develops growing respect for the XP
process.
The XP Process
Extreme Programming uses an object-oriented approach (Appendix 2) as its
preferred development paradigm and encompasses a set of rules and practices that occur
within the context of four framework activities: planning, design, coding, and testing.
Following Figure illustrates the XP process and notes some of the key ideas and tasks that
are associated with each framework activity. Key XP activities are summarized in the
paragraphs that follow.
Planning:
The planning activity (also called the planning game) begins with
listening—a requirements gathering activity that enables the technical members of the XP
team to understand the business context for the software and to get a broad feel for required
output and major features and functionality. Listening leads to the creation of a set of
“stories” (also called user stories) that describe required output, features, and functionality
for software to be built. Each story is written by the customer and is placed on an index card.
The customer assigns a value (i.e., a priority) to the story based on the overall business value
of the feature or function. Members of the XP team then assess each story and assign a
cost—measured in development weeks—to it. If the story is estimated to require more than
three development weeks, the customer is asked to split the story into smaller stories and the
assignment of value and cost occurs again. It is important to note that new stories can be
written at any time.
Customers and developers work together to decide how to group stories into
the next release (the next software increment) to be developed by the XP team. Once a basic
commitment (agreement on stories to be included, delivery date, and other project matters) is
made for a release, the XP team orders the stories that will be developed in one of three
ways: (1) all stories will be implemented immediately (within a few weeks), (2) the stories
with highest value will be moved up in the schedule and implemented first, or (3) the riskiest
stories will be moved up in the schedule and implemented first.
After the first project release (also called a software increment) has been
delivered, the XP team computes project velocity. Stated simply, project velocity is the
number of customer stories implemented during the first release. Project velocity can then be
used to (1) help estimate delivery dates and schedule for subsequent releases and (2)
determine whether an over commitment has been made for all stories across the entire
development project. If an over commitment occurs, the content of releases is modified or
end delivery dates are changed. As development work proceeds, the customer can add
stories, change the value of an existing story, split stories, or eliminate them. The XP team
then reconsiders all remaining releases and modifies its plans accordingly.
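The velocity arithmetic described above can be sketched as follows; the story names and counts are invented purely for illustration.

```python
# Stories implemented in the first release (hypothetical names):
first_release = ["login", "search", "cart"]
project_velocity = len(first_release)   # velocity = stories per release

# Remaining stories committed for the project:
remaining = ["checkout", "reviews", "wishlist", "admin"]

# Estimate how many further releases the remaining stories need
# (ceiling division, since a partial release still takes a full release slot).
releases_needed = -(-len(remaining) // project_velocity)
print(project_velocity, releases_needed)  # 3 2
```

If `releases_needed` exceeds what the schedule allows, that is the over-commitment signal the text mentions, and either the release content or the delivery dates must change.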
Design:
XP design rigorously follows the KIS (keep it simple) principle. A simple
design is always preferred over a more complex representation. In addition, the design
provides implementation guidance for a story as it is written—nothing less, nothing more.
The design of extra functionality (because the developer assumes it will be required later) is
discouraged.
XP encourages the use of CRC cards as an effective mechanism for thinking
about the software in an object-oriented context. CRC (class-responsibility collaborator)
cards identify and organize the object-oriented classes that are relevant to the current
software increment. The XP team conducts the design exercise using a particular process.
The CRC cards are the only design work product produced as part of the XP process.
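As an illustration, a CRC card for a hypothetical order-processing increment could be recorded like this; the card fields mirror the name (class, responsibility, collaborator), but the representation itself is an informal sketch, not a prescribed XP notation.

```python
from dataclasses import dataclass, field

@dataclass
class CRCCard:
    """One index card: a class, what it does, and which classes it works with."""
    class_name: str
    responsibilities: list = field(default_factory=list)
    collaborators: list = field(default_factory=list)

# Hypothetical card for an order-processing increment:
order_card = CRCCard(
    class_name="Order",
    responsibilities=["compute total", "track line items"],
    collaborators=["LineItem", "Customer"],
)
print(order_card.class_name)  # Order
```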
If a difficult design problem is encountered as part of the design of a story,
XP recommends the immediate creation of an operational prototype of that portion of the
design. Called a spike solution, the design prototype is implemented and evaluated.
The intent is to lower risk when true implementation starts and to validate the
original estimates for the story containing the design problem. In the preceding section, we
noted that XP encourages refactoring—a construction technique that is also a method for
design optimization. Fowler describes refactoring in the following manner:
Refactoring is the process of changing a software system in such a way that it
does not alter the external behavior of the code yet improves the internal structure. It is a
disciplined way to clean up code and modify/simplify the internal design that minimizes the
chances of introducing bugs. In essence, when you refactor you are improving the design of
the code after it has been written.
Because XP design uses virtually no notation and produces few, if any, work
products other than CRC cards and spike solutions, design is viewed as a transient artifact
that can and should be continually modified as construction proceeds. The intent of
refactoring is to control these modifications by suggesting small design changes that “can
radically improve the design”. It should be noted, however, that the effort required for
refactoring can grow dramatically as the size of an application grows.
A central notion in XP is that design occurs both before and after coding
commences. Refactoring means that design occurs continuously as the system is
constructed. In fact, the construction activity itself will provide the XP team with guidance
on how to improve the design.
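A minimal sketch of refactoring, using an invented example: the two functions below have identical external behavior, but the second has a simpler internal structure. The assertion acts as the regression check that the behavior did not change.

```python
# Before refactoring: manual accumulation, harder to read.
def total_before(prices):
    t = 0
    for p in prices:
        if p > 0:
            t = t + p
    return t

# After refactoring: same external behavior, simpler internal structure.
def total_after(prices):
    return sum(p for p in prices if p > 0)

# The refactoring is safe only if external behavior is preserved:
assert total_before([5, -2, 3]) == total_after([5, -2, 3]) == 8
print("behavior preserved")
```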
Coding:
After stories are developed and preliminary design work is done, the team
does not move to code, but rather develops a series of unit tests that will exercise each of the
stories that are to be included in the current release (software increment).
Once the unit test has been created, the developer is better able to focus on
what must be implemented to pass the test. Nothing extraneous is added (KIS). Once the
code is complete, it can be unit-tested immediately, thereby providing instantaneous
feedback to the developers. A key concept during the coding activity (and one of the most
talked about aspects of XP) is pair programming. XP recommends that two people work
together at one computer workstation to create code for a story. This provides a mechanism
for real time problem solving (two heads are often better than one) and real-time quality
assurance (the code is reviewed as it is created). It also keeps the developers focused on the
problem at hand. In practice, each person takes on a slightly different role. For example, one
person might think about the coding details of a particular portion of the design while the
other ensures that coding standards (a required part of XP) are being followed or that the
code for the story will satisfy the unit test that has been developed to validate the code
against the story.
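This test-first workflow can be sketched as follows; the discount function and its expected values are invented for illustration.

```python
# Step 1 (XP): write the unit test BEFORE the production code exists.
def test_apply_discount():
    assert apply_discount(200, 10) == 180
    assert apply_discount(99, 0) == 99

# Step 2: write just enough code to make the test pass -- nothing extraneous (KIS).
def apply_discount(price, percent):
    return price - price * percent // 100

# Step 3: run the test immediately for instantaneous feedback.
test_apply_discount()
print("all tests passed")
```

The test doubles as an executable statement of what the story requires, which is why writing it first sharpens the developer's focus.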
As pair programmers complete their work, the code they develop is integrated
with the work of others. In some cases this is performed on a daily basis by an integration
team. In other cases, the pair programmers have integration responsibility.
This “continuous integration” strategy helps to avoid compatibility and
interfacing problems and provides a “smoke testing” environment that helps to uncover
errors early.
Testing:
I have already noted that the creation of unit tests before coding commences
is a key element of the XP approach. The unit tests that are created should be implemented
using a framework that enables them to be automated (hence, they can be executed easily
and repeatedly). This encourages a regression testing strategy whenever code is modified
(which is often, given the XP refactoring philosophy).
As the individual unit tests are organized into a “universal testing suite”,
integration and validation testing of the system can occur on a daily basis. This provides the
XP team with a continual indication of progress and also can raise warning flags early if
things go awry. Wells states: “Fixing small problems every few hours takes less time than
fixing huge problems just before the deadline.”
XP acceptance tests, also called customer tests, are specified by the customer
and focus on overall system features and functionality that are visible and reviewable by the
customer. Acceptance tests are derived from user stories that have been implemented as part
of a software release.
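Organizing individual unit tests into one suite that can be rerun automatically after every change (the regression strategy described above) can be sketched with Python's standard `unittest` module; the functions under test are invented examples.

```python
import unittest

# Trivial stand-ins for production code:
def add(a, b): return a + b
def mul(a, b): return a * b

class TestAdd(unittest.TestCase):
    def test_add(self): self.assertEqual(add(2, 3), 5)

class TestMul(unittest.TestCase):
    def test_mul(self): self.assertEqual(mul(2, 3), 6)

# The "universal testing suite": every unit test collected into one suite
# that can be executed automatically and repeatedly (regression testing).
suite = unittest.TestSuite()
suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd))
suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestMul))
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Because the whole suite runs unattended, it can be executed daily, giving the early warning flags the text describes.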
Industrial XP:
Joshua Kerievsky describes Industrial Extreme Programming (IXP) in the
following manner: “IXP is an organic evolution of XP. It is imbued with XP’s minimalist,
customer-centric, test-driven spirit. IXP differs most from the original XP in its greater
inclusion of management, its expanded role for customers, and its upgraded technical
practices.” IXP incorporates six new practices that are designed to help ensure that an XP
project works successfully for significant projects within a large organization.
Readiness assessment:
Prior to the initiation of an IXP project, the organization should conduct a
readiness assessment. The assessment ascertains whether (1) an appropriate development
environment exists to support IXP, (2) the team will be populated by the proper set of
stakeholders, (3) the organization has a distinct quality program and supports continuous
improvement, (4) the organizational culture will support the new values of an agile team,
and (5) the broader project community will be populated appropriately.
Project community:
Classic XP suggests that the right people be used to populate the agile team to
ensure success. The implication is that people on the team must be well-trained, adaptable
and skilled, and have the proper temperament to contribute to a self-organizing team. When
XP is to be applied for a significant project in a large organization, the concept of the “team”
should morph into that of a community. A community may have a technologist and
customers who are central to the success of a project as well as many other stakeholders
(e.g., legal staff, quality auditors, manufacturing or sales types) who “are often at the
periphery of an IXP project yet they may play important roles on the project”. In IXP, the
community members and their roles should be explicitly defined and mechanisms for
communication and coordination between community members should be established.
Project chartering:
The IXP team assesses the project itself to determine whether an appropriate
business justification for the project exists and whether the project will further the overall
goals and objectives of the organization. Chartering also examines the context of the project
to determine how it complements, extends, or replaces existing systems or processes.
Test-driven management:
An IXP project requires measurable criteria for assessing the state of the
project and the progress that has been made to date. Test-driven management establishes a
series of measurable “destinations” and then defines mechanisms for determining whether or
not these destinations have been reached.
Retrospectives:
An IXP team conducts a specialized technical review after a software
increment is delivered. Called a retrospective, the review examines “issues, events, and
lessons-learned” across a software increment and/or the entire software release. The intent is
to improve the IXP process.
Continuous learning:
Because learning is a vital part of continuous process improvement,
members of the XP team are encouraged (and possibly, incented) to learn new methods and
techniques that can lead to a higher quality product. In addition to the six new practices
discussed, IXP modifies a number of existing XP practices. Story-driven development
(SDD) insists that stories for acceptance tests be written before a single line of code is
generated. Domain-driven design (DDD) is an improvement on the “system metaphor”
concept used in XP. DDD suggests the evolutionary creation of a domain model that
“accurately represents how domain experts think about their subject”. Pairing extends the
XP pair programming concept to include managers and other stakeholders. The intent is to
improve knowledge sharing among XP team members who may not be directly involved in
technical development. Iterative usability discourages front-loaded interface design in favor
of usability design that evolves as software increments are delivered and users’ interaction
with the software is studied.
IXP makes smaller modifications to other XP practices and redefines certain
roles and responsibilities to make them more amenable to significant projects for large
organizations.
The XP Debate
All new process models and methods spur worthwhile discussion and in some
instances heated debate. Extreme Programming has done both. In an interesting book that
examines the efficacy of XP, Stephens and Rosenberg argue that many XP practices are
worthwhile, but others have been overhyped, and a few are problematic. The authors suggest
that the codependent natures of XP practices are both its strength and its weakness. Because
many organizations adopt only a subset of XP practices, they weaken the efficacy of the
entire process. Proponents counter that XP is continuously evolving and that many of the
issues raised by critics have been addressed as XP practice matures. Among the issues that
continue to trouble some critics of XP are:
Requirements volatility:
Because the customer is an active member of the XP team, changes to
requirements are requested informally. As a consequence, the scope of the project can
change and earlier work may have to be modified to accommodate current needs.
Proponents argue that this happens regardless of the process that is applied and that XP
provides mechanisms for controlling scope creep.
Conflicting customer needs:
Many projects have multiple customers, each with his own set of needs. In
XP, the team itself is tasked with assimilating the needs of different customers, a job that
may be beyond their scope of authority.
Requirements are expressed informally:
User stories and acceptance tests are the only explicit manifestation of
requirements in XP. Critics argue that a more formal model or specification is often needed
to ensure that omissions, inconsistencies, and errors are uncovered before the system is built.
Proponents counter that the changing nature of requirements makes such models and
specification obsolete almost as soon as they are developed.
Lack of formal design:
XP deemphasizes the need for architectural design and in many instances,
suggests that design of all kinds should be relatively informal. Critics argue that when
complex systems are built, design must be emphasized to ensure that the overall structure of
the software will exhibit quality and maintainability. XP proponents suggest that the
incremental nature of the XP process limits complexity (simplicity is a core value) and
therefore reduces the need for extensive design.
Chapter 04
SYSTEM ANALYSIS AND DESIGN
4.1. Systems Analysis and Design
Systems analysis and design is a step-by-step process for developing high-
quality information systems. An information system combines information technology,
people, and data to support business requirements. For example, information systems handle
daily business transactions, improve company productivity, and help managers make sound
decisions. The IT department team includes systems analysts who plan, develop, and
maintain information systems. With increasing demand for talented people, employment
experts predict a shortage of qualified applicants to fill IT positions. Many companies list
employment opportunities on their Web sites.
4.2. Business Information Systems:
Traditionally, a company developed its own information systems, called in-
house applications, or purchased systems called software packages from outside vendors.
Today, the choice is much more complex. Options include Internet-based application
services, outsourcing, custom solutions from IT consultants, and enterprise-wide software
strategies.
Regardless of the development method, launching a new information system
involves risks as well as benefits. The greatest risk occurs when a company tries to decide
how the system will be constructed before determining what the system needs to do. Instead
of putting the cart before the horse, a company must begin by outlining its business needs
and identifying possible IT solutions. Typically, this important work is performed by
systems analysts and other IT professionals. A firm should not consider implementation
options until it has a clear set of objectives. Later on, as the system is developed, a systems
analyst’s role will vary depending on the implementation option selected.
4.3. Information System Components:
A system is a set of related components that produces specific results. For
example, specialized systems route Internet traffic, manufacture microchips, and control
complex entities like the Mars Rover. A mission-critical system is one that is vital to a
company’s operations. An order processing system, for example, is mission-critical because
the company cannot do business without it.
Every system requires input data. For example, your computer receives data when you
press a key or click a menu command. In an information system, data consists of basic facts
that are the system’s raw material. Information is data that has been transformed into output
that is valuable to users.
An information system has five key components: hardware, software, data, processes, and
people.
Hardware
Hardware consists of everything in the physical layer of the information system. For
example, hardware can include servers, workstations, networks, telecommunications
equipment, fiber-optic cables, mobile devices, scanners, digital capture devices, and other
technology-based infrastructure. As new technologies emerge, manufacturers race to market
the innovations and reap the rewards.
Hardware purchasers today face a wide array of technology choices and
decisions. In 1965, Gordon Moore, a cofounder of Intel, predicted that the number of
transistors on an integrated circuit would double about every 24 months. His concept, called
Moore’s Law, has remained valid for more than 50 years. Fortunately, as hardware became
more powerful, it also became much less expensive. Large businesses with thousands or
millions of sales transactions require companywide information systems and powerful
servers.
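Moore’s Law lends itself to a quick arithmetic check: doubling every 24 months means multiplying the count by 2 raised to half the number of elapsed years. The sketch below uses an illustrative starting figure, not historical data:

```python
# Rough illustration of Moore's Law: transistor counts double about
# every 24 months, i.e. count * 2 ** (years / 2).
def transistors(start_count, years, doubling_period_years=2):
    """Project a transistor count forward under Moore's Law."""
    return start_count * 2 ** (years / doubling_period_years)

# From an illustrative 2,300 transistors (early-1970s microprocessor
# scale), 20 years of doubling gives a 1,024-fold increase:
print(round(transistors(2300, 20)))  # 2355200
```

Ten doublings in twenty years already yields a thousand-fold growth, which is why hardware power rose so dramatically while prices fell.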
Software
Software refers to the programs that control the hardware and produce the
desired information or results. Software consists of system software and application
software. System software manages the hardware components, which can include a single
workstation or a global network with many thousands of clients. Either the hardware
manufacturer supplies the system software or a company purchases it from a vendor.
Examples of system software include the operating system, security software that protects
the computer from intrusion, device drivers that communicate with hardware such as
printers, and utility programs that handle specific tasks such as data backup and disk
management. System software also controls the flow of data, provides data security, and
manages network operations. In today’s interconnected business world, network software is
vitally important. Application software consists of programs that support day-to-day
business functions and provide users with the information they require. Application software
can serve one user or thousands of people throughout an organization. Examples of
company-wide applications, called enterprise applications, include order processing systems,
payroll systems, and company communications networks. On a smaller scale, individual
users increase their productivity with tools such as spreadsheets, word processors, and
database management systems.
Application software includes horizontal and vertical systems. A horizontal
system is a system, such as an inventory or payroll application, that can be adapted for use in
many different types of companies. A vertical system is designed to meet the unique
requirements of a specific business or industry, such as a Web-based retailer, a medical
practice, or a video chain.
Data
Data is the raw material that an information system transforms into useful
information. An information system can store data in various locations, called tables. By
linking the tables, the system can extract specific information. Following Figure shows a
payroll system that stores data in four separate tables. Notice that the linked tables work
together to supply 19 different data items to the screen form. Users, who would not know or
care where the data is stored, see an integrated form, which is their window into the payroll
system.
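The idea of linked tables can be sketched with a tiny in-memory database. The table and column names below are invented for illustration (they are not the figure’s actual tables); the point is that a join presents one integrated view to the user while hiding where each data item is stored:

```python
import sqlite3

# Two hypothetical payroll tables linked by an employee number.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE employee (emp_no INTEGER PRIMARY KEY, name TEXT, dept TEXT);
    CREATE TABLE timecard (emp_no INTEGER, week TEXT, hours REAL);
    INSERT INTO employee VALUES (101, 'A. Khan', 'Accounting');
    INSERT INTO timecard VALUES (101, '2024-W01', 40.0);
""")

# The user's "window" into the system: one integrated form built by
# joining the linked tables; the user never sees where each item lives.
row = db.execute("""
    SELECT e.name, e.dept, t.week, t.hours
    FROM employee e JOIN timecard t ON e.emp_no = t.emp_no
""").fetchone()
print(row)  # ('A. Khan', 'Accounting', '2024-W01', 40.0)
```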
Processes
Processes describe the tasks and business functions that users, managers, and
IT staff members perform to achieve specific results. Processes are the building blocks of an
information system because they represent actual day-to-day business operations. To build a
successful information system, analysts must understand business processes and document
them carefully.
People:
People who have an interest in an information system are called stakeholders.
Stakeholders include the management group responsible for the system, the users
(sometimes called end users) inside and outside the company who will interact with the
system, and IT staff members, such as systems analysts, programmers, and network
administrators who develop and support the system.
Each stakeholder group has a vital interest in the information system, but
most experienced IT professionals agree that the success or failure of a system usually
depends on whether it meets the needs of its users. For that reason, it is essential to
understand user requirements and expectations throughout the development process.
4.4.Types of Business Information Systems
In the past, IT managers divided systems into categories based on the user
group the system served. Categories and users included office systems (administrative staff),
operational systems (operational personnel), decision support systems (middle-managers and
knowledge workers), and executive information systems (top managers).
Today, traditional labels no longer apply. For example, all employees,
including top managers, use office productivity systems. Similarly, operational users often
require decision support systems. As business changes, information use also changes in most
companies. Today, it makes more sense to identify a system by its functions and features,
rather than by its users. A new set of system definitions includes enterprise computing
systems, transaction processing systems, business support systems, knowledge management
systems, and user productivity systems.
Enterprise Computing
Enterprise computing refers to information systems that support company-
wide operations and data management requirements. Wal-Mart’s inventory control system,
Boeing’s production control system, and Hilton Hotels’ reservation system are examples of
enterprise computing systems. The main objective of enterprise computing is to integrate a
company’s primary functions (such as production, sales, services, inventory control, and
accounting) to improve efficiency, reduce costs, and help managers make key decisions.
Enterprise computing also improves data security and reliability by imposing a company-
wide framework for data access and storage.
In many large companies, applications called enterprise resource planning
(ERP) systems provide cost-effective support for users and managers throughout the
company. For example, a car rental company can use ERP to forecast customer demand for
rental cars at hundreds of locations. By providing a company-wide computing environment,
many firms have been able to achieve dramatic cost reductions. Other companies have been
disappointed in the time, money, and commitment necessary to implement ERP
successfully. A potential disadvantage of ERP is that ERP systems generally impose an
overall structure that might or might not match the way a company operates. Because of its
growth and potential, many hardware and software vendors target the enterprise computing
market and offer a wide array of products and services.
Transaction Processing
Transaction processing (TP) systems process data generated by day-to-day
business operations. Examples of TP systems include customer order processing, accounts
receivable, and warranty claim processing. TP systems perform a series of tasks whenever a
specific transaction occurs. For example, a TP system verifies customer data, checks the
customer’s credit status, posts the invoice to the accounts receivable system, checks to
ensure that the item is in stock, adjusts inventory data to reflect a sale, and updates the sales
activity file.
TP systems typically involve large amounts of data and are mission-critical
systems because the enterprise cannot function without them.
TP systems are efficient because they process a set of transaction-related
commands as a group rather than individually. To protect data integrity, however, TP
systems ensure that if any single element of a transaction fails, the system does not process
the rest of the transaction.
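This all-or-nothing behavior can be demonstrated with a database transaction. The inventory table below is hypothetical; the point is that if any single step fails (here, a stock check), every earlier step in the transaction is rolled back:

```python
import sqlite3

# A CHECK constraint stands in for "the item must be in stock".
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inventory "
           "(item TEXT PRIMARY KEY, qty INTEGER CHECK (qty >= 0))")
db.execute("INSERT INTO inventory VALUES ('widget', 3)")
db.commit()

def sell(item, quantity):
    try:
        with db:  # opens a transaction; rolls back on any exception
            db.execute("UPDATE inventory SET qty = qty - ? WHERE item = ?",
                       (quantity, item))
        return True
    except sqlite3.IntegrityError:
        return False  # CHECK failed: the whole transaction is undone

print(sell("widget", 2))  # True  -> qty is now 1
print(sell("widget", 5))  # False -> rolled back, qty is still 1
```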
Business Support
Business support systems provide job-related information support to users at
all levels of a company. These systems can analyze transactional data, generate information
needed to manage and control business processes, and provide information that leads to
better decision-making.
The earliest business computer systems replaced manual tasks, such as
payroll processing. Companies soon realized that computers also could produce valuable
information. The new systems were called management information systems (MIS) because
managers were the primary users. Today, employees at all levels need information to
perform their jobs, and they rely on information systems for that support.
A business support system can work hand in hand with a TP system. For
example, when a company sells merchandise to a customer, a TP system records the sale,
updates the customer’s balance, and makes a deduction from inventory. A related business
support system highlights slow- or fast-moving items, customers with past due balances, and
inventory levels that need adjustment.
To compete effectively, firms must collect production, sales, and shipping
data and update the company-wide business support system immediately. The newest
development in data acquisition is called radio frequency identification (RFID) technology,
which uses high-frequency radio waves to track physical objects. RFID’s dramatic growth
has been fueled by companies like Wal-Mart, which requires its suppliers to add RFID tags
to all items.
An important feature of a business support system is decision support
capability. Decision support helps users make decisions by creating a computer model and
applying a set of variables. For example, a truck fleet dispatcher might run a series of what-
if scenarios to determine the impact of increased shipments or bad weather. Alternatively, a
retailer might use what-if analysis to determine the price it must charge to increase profits by
a specific amount while volume and costs remain unchanged.
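The retailer’s what-if question reduces to simple arithmetic: with volume and costs held fixed, profit equals volume times price minus total costs, so the required price can be solved for directly. All figures below are made up for illustration:

```python
# Minimal what-if model: the price per unit needed to hit a profit
# target while sales volume and costs stay unchanged.
def required_price(volume, unit_cost, fixed_cost, profit_target):
    # profit = volume * price - (volume * unit_cost + fixed_cost)
    return (profit_target + fixed_cost + volume * unit_cost) / volume

# What-if: sell 1,000 units at $6 unit cost with $2,000 fixed cost,
# and we want $4,000 profit.
print(required_price(1000, 6.0, 2000.0, 4000.0))  # 12.0
```

A dispatcher’s what-if scenarios work the same way: vary one input, hold the others constant, and read off the modeled result.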
Knowledge Management
Knowledge management systems are also called expert systems because they
simulate human reasoning by combining a knowledge base and inference rules that
determine how the knowledge is applied. A knowledge base consists of a large database that
allows users to find information by entering keywords or questions in normal English
phrases. A knowledge management system uses inference rules, which are logical rules that
identify data patterns and relationships. Many knowledge management systems use a
technique called fuzzy logic that allows inferences to be drawn from imprecise relationships.
Using fuzzy logic, values need not be black and white, like binary logic, but can be many
shades of gray. The results of a fuzzy logic search will display in priority order, with the
most relevant results at the top of the list.
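The priority-ordered result list can be sketched as graded scoring rather than black-and-white matching. This toy ranker is only illustrative; a real knowledge management system applies far richer inference rules than counting shared words:

```python
# Toy ranked retrieval: each document gets a graded relevance score
# ("shades of gray") instead of a yes/no match, and results come back
# in priority order, most relevant first.
def search(knowledge_base, query):
    terms = query.lower().split()
    scored = []
    for doc in knowledge_base:
        words = doc.lower().split()
        # graded score: fraction of query terms present in the document
        score = sum(t in words for t in terms) / len(terms)
        if score > 0:
            scored.append((score, doc))
    return [doc for score, doc in sorted(scored, reverse=True)]

kb = ["printer driver installation guide",
      "network printer troubleshooting",
      "payroll processing overview"]
print(search(kb, "printer troubleshooting"))
```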
User Productivity
Companies provide employees at all levels with technology that improves
productivity. Examples of user productivity systems include e-mail, voice mail, fax, video
and Web conferencing, word processing, automated calendars, database management,
spreadsheets, desktop publishing, presentation graphics, company intranets, and high-speed
Internet access. User productivity systems also include groupware. Groupware programs run
on a company intranet and enable users to share data, collaborate on projects, and work in
teams. GroupWise, offered by Novell, is a popular example of groupware. When companies
first installed word processing systems, managers expected to reduce the number of
employees as office efficiency increased. That did not happen, primarily because the basic
nature of clerical work changed. With computers performing most of the repetitive work,
managers realized that office personnel could handle tasks that required more judgment,
decision-making, and access to information.
Computer-based office work expanded rapidly as companies assigned more
responsibility to employees at lower organizational levels. Relatively inexpensive hardware,
powerful networks, corporate downsizing, and a move toward employee empowerment also
contributed to this trend. Today, administrative assistants and company presidents alike are
networked, use computer workstations, and need to share corporate data to perform their
jobs.
System Development Life Cycle:
The systems development life cycle (SDLC) is a framework to plan, analyze,
design, implement, and support an information system. A systems development life cycle is
composed of a number of clearly defined and distinct work phases which are used by
systems engineers and systems developers to plan for, design, build, test, and
deliver information systems. Like anything that is manufactured on an assembly line, an
SDLC aims to produce high quality systems that meet or exceed customer expectations,
based on customer requirements, by delivering systems which move through each clearly
defined phase, within scheduled time-frames and cost estimates. Computer systems are
complex and often (especially with the recent rise of service-oriented architecture) link
multiple traditional systems potentially supplied by different software vendors.
Phases of SDLC: The SDLC model usually includes five phases, which are described below:
1. Systems planning
2. Systems analysis
3. Systems design
4. Systems implementation
5. Systems support and security.
SYSTEMS PLANNING
The systems planning phase usually begins with a formal request to the IT department,
called a systems request, which describes problems or desired changes in an information
system or a business process. In many companies, IT systems planning is an integral part of
overall business planning. When managers and users develop their business plans, they
usually include IT requirements that generate systems requests. A systems request can come
from a top manager, a planning team, a department head, or the IT department itself. The
request can be very significant or relatively minor. A major request might involve a new
information system or the upgrading of an existing system. In contrast, a minor request
might ask for a new feature or a change to the user interface.
The purpose of this phase is to perform a preliminary investigation to evaluate an IT-related
business opportunity or problem. The preliminary investigation is a critical step because the
outcome will affect the entire development process. A key part of the preliminary
investigation is a feasibility study that reviews anticipated costs and benefits and
recommends a course of action based on operational, technical, economic, and time factors.
Suppose you are a systems
analyst and you receive a request for a system change or improvement. Your first step is to
determine whether it makes sense to launch a preliminary investigation at all. Often you will
need to learn more about business operations before you can reach a conclusion. After an
investigation, you might find that the information system functions properly, but users need
more training. In some situations, you might recommend a business process review, rather
than an IT solution. In other cases, you might conclude that a full-scale systems review is
necessary. If the development process continues, the next step is the systems analysis phase.
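As a rough sketch, the feasibility factors above (operational, technical, economic, and time) could be screened with a simple scoring rule. The weights and thresholds below are invented for illustration; a real feasibility study is a judgment call by analysts, not a formula:

```python
# Toy feasibility screen for a systems request: score each factor from
# 0-10 and recommend a course of action. Thresholds are invented.
def feasibility(scores, threshold=6.0):
    assert set(scores) == {"operational", "technical", "economic", "schedule"}
    weakest = min(scores, key=scores.get)
    average = sum(scores.values()) / len(scores)
    # proceed only if the overall picture is good AND no factor is dire
    if average >= threshold and scores[weakest] >= 4:
        return "proceed to systems analysis"
    return f"investigate further: weakest factor is {weakest}"

print(feasibility({"operational": 8, "technical": 7,
                   "economic": 6, "schedule": 3}))
```

Note how one weak factor (here, schedule) can block an otherwise attractive request, which mirrors why all four factors are reviewed together.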
SYSTEMS ANALYSIS
The purpose of the systems analysis phase is to build a logical model of the
new system. The first step is requirements modeling, where you investigate business
processes and document what the new system must do to satisfy users. Requirements
modeling continues the investigation that began during the systems planning phase. To
understand the system, you perform fact-finding using techniques such as interviews,
surveys, document review, observation, and sampling. You use the fact-finding results to
build business models, data and process models, and object models. The system
requirements document describes management and user requirements, costs and benefits,
and outlines alternative development strategies.
SYSTEMS DESIGN
The purpose of the systems design phase is to create a physical model that will satisfy all
documented requirements for the system. At this stage, you design the user interface and
identify necessary outputs, inputs, and processes. In addition, you design internal and
external controls, including computer-based and manual features to guarantee that the
system will be reliable, accurate, maintainable, and secure. During the systems design phase,
you also determine the application architecture, which programmers will use to transform
the logical design into program modules and code. The deliverable for this phase is the
system design specification, which is presented to management and users for review and
approval. Management and user involvement is critical to avoid any misunderstanding about
what the new system will do, how it will do it, and what it will cost.
SYSTEMS IMPLEMENTATION
During the systems implementation phase, the new system is constructed. Whether the
developers use structured analysis or O-O methods, the procedure is the same — programs
are written, tested, and documented, and the system is installed. If the system was purchased
as a package, systems analysts configure the software and perform any necessary
modifications. The objective of the systems implementation phase is to deliver a completely
functioning and documented information system. At the conclusion of this phase, the system
is ready for use. Final preparations include converting data to the new system’s files,
training users, and performing the actual transition to the new system.
The systems implementation phase also includes an assessment, called a systems
evaluation, to determine whether the system operates properly and if costs and benefits are
within expectations.
SYSTEMS SUPPORT AND SECURITY
During the systems support and security phase, the IT staff maintains, enhances, and
protects the system. Maintenance changes correct errors and adapt to changes in the
environment, such as new tax rates. Enhancements provide new features and benefits. The
objective during this phase is to maximize return on the IT investment. Security controls
safeguard the system from both external and internal threats. A well-designed system must
be secure, reliable, maintainable, and scalable. A scalable design can expand to meet new
business requirements and volume. Information systems development is always a work in
progress. Business processes change rapidly, and most information systems need to be
updated significantly or replaced after several years of operation.
SWOT Analysis
Strategic planning starts with a management review called a SWOT analysis.
SWOT stands for strengths, weaknesses, opportunities, and threats. A SWOT analysis
usually starts with a broad overview. The first step is for top management to respond to
questions like these:
• What are our strengths, and how can we use them to achieve our business goals?
• What are our weaknesses, and how can we reduce or eliminate them?
• What are our opportunities, and how do we plan to take advantage of them?
• What are our threats, and how can we assess, manage, and respond to the possible risks?
A SWOT analysis is a solid foundation for the strategic planning process, because it
examines a firm’s technical, human, and financial resources. In the following figure, the
bulleted lists show samples of typical strengths, weaknesses, opportunities, and threats.
As the SWOT process continues, management reviews specific resources and business
operations. For example, suppose that during a SWOT analysis, a firm studies an important
patent that the company owns.
There are four main building blocks of SWOT analysis:
Strengths: The strengths of an organization are the core competencies of the company;
these key factors which enable it to excel in certain aspects and gain all kinds of profit,
whether that is purely economical, organizational or other.
Weaknesses: Weaknesses are the flaws of an organization. Left unaddressed, they can
lead to serious problems in the company’s strategic planning and can even grow into a
serious threat to the organization’s existence.
Opportunities: These are conditions or steps that can help a company perform better or
generate more profit. Opportunities can take many forms, such as entering a new market
or creating a new business unit.
Threats: Threats are potential developments that might harm a company, such as a new
entrant in its main market, a major economic recession, or other factors that might
threaten the organization’s current position.
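The four building blocks can be recorded in a simple structure for management review. The sample entries below are generic illustrations, not tied to any real firm:

```python
from dataclasses import dataclass, field

# A plain container for the four SWOT building blocks.
@dataclass
class SwotAnalysis:
    strengths: list = field(default_factory=list)
    weaknesses: list = field(default_factory=list)
    opportunities: list = field(default_factory=list)
    threats: list = field(default_factory=list)

    def summary(self):
        """How many items were identified under each heading."""
        return {k: len(v) for k, v in vars(self).items()}

swot = SwotAnalysis(
    strengths=["strong brand", "skilled IT staff"],
    weaknesses=["aging order processing system"],
    opportunities=["new Web-based sales channel"],
    threats=["new market entrant"],
)
print(swot.summary())
```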
Chapter 05
Requirements Engineering
5.1. Requirements Engineering
Designing and building computer software is challenging, creative, and just
plain fun. In fact, building software is so compelling that many software developers want to
jump right in before they have a clear understanding of what is needed. They argue that
things will become clear as they build, that project stakeholders will be able to understand
need only after examining early iterations of the software, that things change so rapidly that
any attempt to understand requirements in detail is a waste of time, that the bottom line is
producing a working program and all else is secondary. What makes these arguments
seductive is that they contain elements of truth. But each is flawed and can lead to a failed
software project.
The collection of tasks and techniques that lead to an understanding of
requirements is called requirements engineering. From a software process perspective,
requirements engineering builds a bridge to design and construction. But
where does the bridge originate? One could argue that it begins at the feet of the project
stakeholders (e.g., managers, customers, end users), where business need is defined, user
scenarios are described, functions and features are delineated, and project constraints are
identified. Others might suggest that it begins with a broader system definition, where
software is but one component of the larger system domain. But regardless of the starting
point, the journey across the bridge takes you high above the project, allowing you to
examine the context of the software work to be performed; the specific needs that design and
construction must address; the priorities that guide the order in which work is to be
completed; and the information, functions, and behaviors that will have a profound impact
on the resultant design.
5.2. Software Requirements Definitions
There are a few definitions of software requirements.
Jones defines “software requirements as a statement of needs by a user that triggers the
development of a program or system”.
Alan Davis says “a software requirement is a user need or necessary feature, function, or
attribute of a system that can be sensed from a position external to that system”.
According to Ian Sommerville, “requirements are a specification of what should be
implemented. They are descriptions of how the system should behave, or of a system
property or attribute. They may be a constraint on the development process of the system”.
IEEE defines software requirements as:
1. A condition or capability needed by a user to solve a problem or achieve an objective.
2. A condition or capability that must be met or possessed by a system or system component
to satisfy a contract, standard, specification, or other formally imposed document.
3. A documented representation of a condition or capability as in 1 or 2.
Requirements engineering is a major software engineering action that begins during the
communication activity and continues into the modeling activity. It must be adapted to the
needs of the process, the project, the product, and the people doing the work.
These definitions differ slightly from one another but essentially say the same thing: a
software requirement describes a service the system must provide, or a constraint under
which it must operate.
Role of Requirements
The software requirements document plays the central role in the entire software
development process. To start with, it is needed in the project planning and feasibility phase.
In this phase, a good understanding of the requirements is needed to determine the time and
resources required to build the software. As a result of this analysis, the scope of the system
may be reduced before embarking upon the software development. Once these requirements
have been finalized, the construction process starts. During this phase the software engineer
starts designing and coding the software. Once again, the requirement document serves as
the base reference document for these activities. It can be clearly seen that other activities
such as user documentation and testing of the system would also need this document for
their own deliverables. On the other hand, the project manager would need this document to
monitor and track the progress of the project and if needed, change the project scope by
modifying this document through the change control process.
5.3. Tasks of Requirements Engineering
Requirements engineering provides the appropriate mechanism for
understanding what the customer wants, analyzing need, assessing feasibility, negotiating a
reasonable solution, specifying the solution unambiguously, validating the specification, and
managing the requirements as they are transformed into an operational system. It
encompasses seven distinct tasks: inception, elicitation, elaboration, negotiation,
specification, validation, and management. It is important to note that some of these tasks
occur in parallel and all are adapted to the needs of the project.
Inception:
How does a software project get started? Is there a single event that becomes
the catalyst for a new computer-based system or product, or does the need evolve over time?
There are no definitive answers to these questions. In some cases, a casual conversation is
all that is needed to precipitate a major software engineering effort. But in general, most
projects begin when a business need is identified or a potential new market or service is
discovered. Stakeholders from the business community (e.g., business managers, marketing
people, product managers) define a business case for the idea, try to identify the breadth and
depth of the market, do a rough feasibility analysis, and identify a working description of the
project’s scope. All of this information is subject to change, but it is sufficient to precipitate
discussions with the software engineering organization.
At project inception, a basic understanding of the problem, the people who want a solution,
the nature of the solution that is desired, and the effectiveness of preliminary communication
and collaboration between the other stakeholders and the software team are established.
Elicitation:
It certainly seems simple enough—ask the customer, the users, and others
what the objectives for the system or product are, what is to be accomplished, how the
system or product fits into the needs of the business, and finally, how the system or product
is to be used on a day-to-day basis. But it isn’t simple—it’s very hard. Christel and Kang
identify a number of problems that are encountered as elicitation occurs.
• Problems of scope. The boundary of the system is ill-defined or the
customers/users specify unnecessary technical detail that may confuse, rather than
clarify, overall system objectives.
• Problems of understanding. The customers/users are not completely sure of
what is needed, have a poor understanding of the capabilities and limitations of
their computing environment, don’t have a full understanding of the problem
domain, have trouble communicating needs to the system engineer, omit
information that is believed to be “obvious,” specify requirements that conflict
with the needs of other customers/users, or specify requirements that are
ambiguous or un-testable.
• Problems of volatility. The requirements change over time.
To help overcome these problems, you must approach requirements gathering in an
organized manner.
Elaboration:
The information obtained from the customer during inception and elicitation
is expanded and refined during elaboration. This task focuses on developing a refined
requirements model that identifies various aspects of software function, behavior, and
information. Elaboration is driven by the creation and refinement of user scenarios that
describe how the end user (and other actors) will interact with the system. Each user
scenario is parsed to extract analysis classes—business domain entities that are visible to the
end user. The attributes of each analysis class are defined, and the services that are required
by each class are identified. The relationships and collaboration between classes are
identified, and a variety of supplementary diagrams are produced.
Negotiation:
It isn’t unusual for customers and users to ask for more than can be achieved,
given limited business resources. It’s also relatively common for different customers or
users to propose conflicting requirements, arguing that their version is “essential for our
special needs.” These conflicts are resolved through a process of negotiation. Customers,
users, and other stakeholders are asked to rank requirements and then discuss conflicts in
priority. Using an iterative approach that prioritizes requirements, assesses their cost and
risk, and addresses internal conflicts, requirements are eliminated, combined, and/or
modified so that each party achieves some measure of satisfaction.
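The iterative prioritize-assess-resolve loop described above can be sketched in code. The following is a minimal illustration with made-up requirement names, priorities, costs, and risks; it is not a real negotiation tool, only a picture of the mechanism:

```python
# Sketch of iterative requirements negotiation (illustrative data only).
# Stakeholders rank requirements; cost and risk are rough estimates.

def prioritize(requirements, max_cost):
    """Order requirements by stakeholder priority (ties broken by lower risk),
    then defer low-priority items until the estimated cost fits the budget."""
    ranked = sorted(requirements, key=lambda r: (-r["priority"], r["risk"]))
    accepted, deferred, total = [], [], 0
    for req in ranked:
        if total + req["cost"] <= max_cost:
            accepted.append(req["name"])
            total += req["cost"]
        else:
            # candidate for elimination, combination, or a later increment
            deferred.append(req["name"])
    return accepted, deferred

reqs = [
    {"name": "online ordering", "priority": 9, "cost": 40, "risk": 2},
    {"name": "report export",   "priority": 5, "cost": 25, "risk": 1},
    {"name": "voice interface", "priority": 3, "cost": 50, "risk": 4},
]
accepted, deferred = prioritize(reqs, max_cost=70)
```

In a real negotiation the priorities would come from stakeholder discussion, not a sort key, but the outcome is the same in spirit: each party gets some measure of satisfaction within the resource limit.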
Software Engineering CMP-2540
By: Muhammad Shahid Azeem M Phil. (CS) [email protected]
Lecturer CS/IT @www.risingeducation.com Page 47
Specification:
In the context of computer-based systems (and software), the term
specification means different things to different people. A specification can be a written
document, a set of graphical models, a formal mathematical model, a collection of usage
scenarios, a prototype, or any combination of these.
Some suggest that a “standard template” should be developed and used for a specification,
arguing that this leads to requirements that are presented in a consistent and therefore more
understandable manner. However, it is sometimes necessary to remain flexible when a
specification is to be developed. For large systems, a written document, using natural
language descriptions and graphical models may be the best approach. However, usage
scenarios may be all that are required for smaller products or systems that reside within
well-understood technical environments.
Validation:
The work products produced as a consequence of requirements engineering
are assessed for quality during a validation step. Requirements validation examines the
specification to ensure that all software requirements have been stated unambiguously; that
inconsistencies, omissions, and errors have been detected and corrected; and that the work
products conform to the standards established for the process, the project, and the product.
The primary requirements validation mechanism is the technical review. The review team
that validates requirements includes software engineers, customers, users, and other
stakeholders who examine the specification looking for errors in content or interpretation,
areas where clarification may be required, missing information, inconsistencies (a major
problem when large products or systems are engineered), conflicting requirements, or
unrealistic (unachievable) requirements.
Requirements management:
Requirements for computer-based systems change, and the desire to change
requirements persists throughout the life of the system. Requirements management is a set of
activities that help the project team identify, control, and track requirements and changes to
requirements at any time as the project proceeds.
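As a rough illustration of "identify, control, and track," a requirements registry might keep every version of each requirement so that changes remain traceable throughout the project. The class and requirement IDs below are hypothetical:

```python
# Sketch of a requirements registry that tracks controlled changes.
# The fields and IDs are illustrative, not from any real tool.

class RequirementsRegistry:
    def __init__(self):
        self._reqs = {}      # id -> current requirement text
        self._history = {}   # id -> list of (version, text)

    def add(self, req_id, text):
        """Identify a new requirement and start its version history."""
        self._reqs[req_id] = text
        self._history[req_id] = [(1, text)]

    def change(self, req_id, new_text):
        """Record a controlled change; earlier versions remain traceable."""
        version = len(self._history[req_id]) + 1
        self._reqs[req_id] = new_text
        self._history[req_id].append((version, new_text))

    def history(self, req_id):
        return self._history[req_id]

reg = RequirementsRegistry()
reg.add("REQ-001", "The system shall export reports as PDF.")
reg.change("REQ-001", "The system shall export reports as PDF and CSV.")
```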
5.4. Establishing The Groundwork
In an ideal setting, stakeholders and software engineers work together on the
same team. In such cases, requirements engineering is simply a matter of conducting
meaningful conversations with colleagues who are well-known members of the team. But
reality is often quite different. Customer(s) or end users may be located in a different city or
country, may have only a vague idea of what is required, may have conflicting opinions
about the system to be built, may have limited technical knowledge, and may have limited
time to interact with the requirements engineer. None of these things are desirable, but all
are fairly common, and you are often forced to work within the constraints imposed by this
situation.
There are some steps required to establish the groundwork for an understanding of software
requirements—to get the project started in a way that will keep it moving forward toward a
successful solution.
5.4.1. Identifying Stakeholders
Sommerville and Sawyer define a stakeholder as “anyone who benefits in a
direct or indirect way from the system which is being developed.” The common stakeholders
are business operations managers, product managers, marketing people, internal and external
customers, end users, consultants, product engineers, software engineers, support and
maintenance engineers, and others.
Each stakeholder has a different view of the system, achieves different benefits when the
system is successfully developed, and is open to different risks if the development effort
should fail.
At inception, a list should be created of the people who will contribute input as requirements
are elicited. Each stakeholder on the list can, in turn, help identify other potential stakeholders.
5.4.2. Recognizing Multiple Viewpoints
Because many different stakeholders exist, the requirements of the system will
be explored from many different points of view. For example, the marketing group is
interested in functions and features that will excite the potential market, making the new
system easy to sell. Business managers are interested in a feature set that can be built within
budget and that will be ready to meet defined market windows. End users may want features
that are familiar to them and that are easy to learn and use. Software engineers may be
concerned with functions that are invisible to nontechnical stakeholders but that enable an
infrastructure that supports more marketable functions and features. Support engineers may
focus on the maintainability of the software.
Each of these constituencies (and others) will contribute information to the
requirements engineering process. As information from multiple viewpoints is collected,
emerging requirements may be inconsistent or may conflict with one another. You should
categorize all stakeholder information (including inconsistent and conflicting requirements)
in a way that will allow decision makers to choose an internally consistent set of
requirements for the system.
5.4.3. Working toward Collaboration
Every stakeholder may have different opinions about the proper set of
requirements. Customers (and other stakeholders) must collaborate among themselves and
with software engineering practitioners if a successful system is to result. But how is this
collaboration accomplished?
The job of a requirements engineer is to identify areas of commonality (i.e.,
requirements on which all stakeholders agree) and areas of conflict or inconsistency (i.e.,
requirements that are desired by one stakeholder but conflict with the needs of another
stakeholder). It is, of course, the latter category that presents a challenge.
Collaboration does not necessarily mean that requirements are defined by
committee. In many cases, stakeholders collaborate by providing their view of requirements,
but a strong “project champion” (e.g., a business manager or a senior technologist) may
make the final decision about which requirements make the cut.
5.4.4. Asking the First Questions
Questions asked at the inception of the project should be “context free”. The first set of
context-free questions focuses on the customer and other stakeholders, the overall project
goals and benefits. For example, you might ask:
• Who is behind the request for this work?
• Who will use the solution?
• What will be the economic benefit of a successful solution?
• Is there another source for the solution that you need?
These questions help to identify all stakeholders who will have interest in the
software to be built. In addition, the questions identify the measurable benefit of a successful
implementation and possible alternatives to custom software development.
The next set of questions enables you to gain a better understanding of the
problem and allows the customer to voice his or her perceptions about a solution:
• How would you characterize “good” output that would be generated by a successful
solution?
• What problem(s) will this solution address?
• Can you show me (or describe) the business environment in which the solution will
be used?
• Will special performance issues or constraints affect the way the solution is
approached?
The final set of questions focuses on the effectiveness of the communication
activity itself. Gause and Weinberg call these “meta-questions” and propose the following
(abbreviated) list:
• Are you the right person to answer these questions? Are your answers “official”?
• Are my questions relevant to the problem that you have?
• Am I asking too many questions?
• Can anyone else provide additional information?
• Should I be asking you anything else?
These questions (and others) will help you develop a clear understanding of
what the customer wants and initiate the communication that is essential to successful
elicitation. But a question-and-answer meeting format is not an approach that has been
overwhelmingly successful. In fact, the Q&A session should be used for the first encounter
only and then replaced by a requirements elicitation format that combines elements of
problem solving, negotiation, and specification.
5.5. Levels of Software Requirements
Software requirements are defined at various levels of detail and granularity.
Requirements at different levels of detail serve different purposes.
1. Business Requirements:
These are used to state the high-level business objective of the organization
or customer requesting the system or product. They are used to document main system
features and functionalities without going into their nitty-gritty details. They are captured in
a document describing the project vision and scope.
2. User Requirements:
User requirements add further detail to the business requirements. They are
called user requirements because they are written from a user’s perspective: they describe
the tasks the user must be able to accomplish in order to fulfill the business requirements
stated above. They are captured in the requirements definition
document.
3. Functional Requirements:
The next level of detail comes in the form of what are called functional
requirements. They bring in the system’s view and define, from the system’s perspective, the
software functionality the developers must build into the product to enable users to
accomplish their tasks stated in the user requirements - thereby satisfying the business
requirements.
4. Non-Functional Requirements
A software requirements document describes all the services provided
by the system along with the constraints under which it must operate. That is, the
requirements document should not only describe the functionality needed and provided by the
system, but must also specify the constraints under which it must operate. Constraints are
restrictions that are placed on the choices available to the developer for design and
construction of the software product. These kinds of requirements are called Non-Functional
Requirements. These are used to describe external system interfaces, design and
implementation constraints, quality and performance attributes. These also include
regulations, standards, and contracts to which the product must conform.
Non-functional requirements play a significant role in the development of the
system. If they are not captured properly, the system may not fulfill some of the basic business
needs, and if proper care is not taken, the system may even collapse. They also dictate the
shape of the system architecture and framework. As an example of a non-functional
requirement, we might require the software to run on the Sun Solaris platform. Now it is
clear that if this requirement was not
captured initially and the entire set of functionality was built to run on Windows, the system
would be useless for the client. It can also be easily seen that this requirement would have an
impact on the basic system architecture while the functionality does not change.
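The four levels can be pictured together as one structured record, with the platform constraint from the Solaris example checked explicitly. The field names and values below are illustrative only, not a standard document format:

```python
# Sketch: the four requirement levels as one structured record, with a
# non-functional platform constraint like the Solaris example above.
# All names and values here are illustrative.

spec = {
    "business":   ["Reduce order-processing time by 30%."],
    "user":       ["The clerk can calculate required material for a target."],
    "functional": ["The system computes material quantities from a formulation sheet."],
    "non_functional": {
        "platform": "sunos",          # e.g. Sun Solaris, as in the example above
        "max_response_seconds": 2,
    },
}

def platform_constraint_met(spec, running_platform):
    """Check the platform constraint early: if it is missed, the whole
    functional build may be useless to the client, as noted above."""
    return running_platform.startswith(spec["non_functional"]["platform"])
```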
5.6. Eliciting Requirements
Requirements elicitation (also called requirements gathering) combines
elements of problem solving, elaboration, negotiation, and specification. In order to
encourage a collaborative, team-oriented approach to requirements gathering, stakeholders
work together to identify the problem, propose elements of the solution, negotiate different
approaches and specify a preliminary set of solution requirements.
5.6.1. Collaborative Requirements Gathering
Many different approaches to collaborative requirements gathering have been proposed.
Each makes use of a slightly different scenario, but all apply some variation on the following
basic guidelines:
• Meetings are conducted and attended by both software engineers and other
stakeholders.
• Rules for preparation and participation are established.
• An agenda is suggested that is formal enough to cover all important points but
informal enough to encourage the free flow of ideas.
• A “facilitator” (who can be a customer, a developer, or an outsider) controls the meeting.
• A “definition mechanism” (work sheets, flip charts, wall stickers, or an electronic
bulletin board, chat room, or virtual forum) is used.
The goal is to identify the problem, propose elements of the solution,
negotiate different approaches, and specify a preliminary set of solution requirements in an
atmosphere that is conducive to the accomplishment of the goal.
5.6.2. Quality Function Deployment
Quality function deployment (QFD) is a quality management technique that
translates the needs of the customer into technical requirements for software. QFD
“concentrates on maximizing customer satisfaction from the software engineering process”.
To accomplish this, QFD emphasizes an understanding of what is valuable to the customer
and then deploys these values throughout the engineering process. QFD identifies three
types of requirements:
Normal requirements:
The objectives and goals that are stated for a product or system during meetings with the
customer. If these requirements are present, the customer is satisfied. Examples of normal
requirements might be requested types of graphical displays, specific system functions, and
defined levels of performance. These requirements are also called functional requirements.
Expected Requirements:
These requirements are implicit to the product or system and may be so
fundamental that the customer does not explicitly state them. Their absence will be a cause
for significant dissatisfaction. Examples of expected requirements are: ease of
human/machine interaction, overall operational correctness and reliability, and ease of
software installation. These requirements are also called nonfunctional requirements.
Exciting requirements:
These features go beyond the customer’s expectations and prove to be very
satisfying when present. For example, software for a new mobile phone comes with standard
features, but is coupled with a set of unexpected capabilities (e.g., multitouch screen, visual
voice mail) that delight every user of the product.
Although QFD concepts can be applied across the entire software process,
specific QFD techniques are applicable to the requirements elicitation activity. QFD uses
customer interviews and observation, surveys, and examination of historical data (e.g.,
problem reports) as raw data for the requirements gathering activity. These data are then
translated into a table of requirements—called the customer voice table—that is reviewed
with the customer and other stakeholders. A variety of diagrams, matrices, and evaluation
methods are then used to extract expected requirements and to attempt to derive exciting
requirements.
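A customer voice table can be pictured as a simple list of elicited items tagged with the three QFD requirement types. The items and labels here are illustrative, not from a real project:

```python
# Sketch of a QFD "customer voice table" as a tagged list of elicited items.
# The items are hypothetical examples of the three requirement types.

customer_voice = [
    ("graphical sales display", "normal"),    # stated goal: satisfies if present
    ("reliable operation",      "expected"),  # implicit: absence causes dissatisfaction
    ("visual voice mail",       "exciting"),  # beyond expectations: delights users
]

def by_type(table, req_type):
    """Collect all voice-table items of one QFD type for review."""
    return [item for item, t in table if t == req_type]
```

A review with the customer would then walk each category, confirming normal requirements, surfacing unstated expected ones, and probing for exciting ones.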
5.6.3. Usage Scenarios
As requirements are gathered, an overall vision of system functions and
features begins to materialize. However, it is difficult to move into more technical software
engineering activities until you understand how these functions and features will be used by
different classes of end users. To accomplish this, developers and users can create a set of
scenarios that identify a thread of usage for the system to be constructed. The scenarios,
often called use cases, provide a description of how the system will be used. Example:
Use Case Title: Material Calculation
Use Case ID: UC001
Actions: Systemization of Material Calculation Section
Description: In this module, the required material for production is calculated and a
demand requisition is generated to the purchase department.
Exceptions and Alternative Paths:
• If the system fails at any time: if the user is authorized, he will restart the
system, log in, and request recovery of the prior state; if the user is new, he
will go to the administrator, register himself, and get his login ID and password.
• If any exception occurs during calculations, the user will verify himself.
• If any exception occurs during demand requisition generation, the user will
send a manual requisition until the error is recovered.
Pre-Conditions:
• The system is in a persistent state.
• An authorized person of the Material Management Dept. is logged in.
• The particular formulation sheet needed to calculate the required material is
available.
• Stock-in-hand detail is available.
Post-Conditions:
• The required material for a production target is calculated.
• A demand requisition is generated to purchase material.
Author: Muhammad Shahid Azeem
Main Scenario:
1. The authorized person of Material Management logs in after providing his
login ID and password.
2. Gets the production targets.
3. Calculates the required material using the particular formulation sheet for
the given production target.
4. Checks the material stock in hand after the calculations.
5. The management then demands the material accordingly; for this purpose a
demand requisition is generated.
6. During this process, the colors are selected.
7. Saves all the records in the computer software.
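The use case template above can also be captured as a data structure, so that tools can check and trace it mechanically. This sketch condenses the example; the class itself is illustrative and not part of UML or any standard tool:

```python
# Sketch: the use case template captured as a data structure for tracing.
# Field names mirror the template; the class is illustrative only.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    title: str
    uc_id: str
    description: str
    preconditions: list = field(default_factory=list)
    postconditions: list = field(default_factory=list)
    main_scenario: list = field(default_factory=list)
    exceptions: list = field(default_factory=list)

uc = UseCase(
    title="Material Calculation",
    uc_id="UC001",
    description="Required material for production is calculated and a "
                "demand requisition is generated to the purchase department.",
    preconditions=["System is in a persistent state",
                   "Authorized person of Material Management Dept. is logged in"],
    postconditions=["Required material for a production target is calculated"],
    main_scenario=["Log in", "Get production targets",
                   "Calculate required material", "Generate demand requisition"],
    exceptions=["On system failure, restart, log in, and request recovery"],
)
```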
5.6.4. Elicitation Work Products
The work products produced as a consequence of requirements elicitation will vary
depending on the size of the system or product to be built. For most systems, the work
products include
• A statement of need and feasibility.
• A bounded statement of scope for the system or product.
• A list of customers, users, and other stakeholders who participated in requirements
elicitation.
• A description of the system’s technical environment.
• A list of requirements (preferably organized by function) and the domain
constraints that apply to each.
• A set of usage scenarios that provide insight into the use of the system or product
under different operating conditions.
• Any prototypes developed to better define requirements.
Each of these work products is reviewed by all people who have participated in requirements
elicitation.
5.7. DEVELOPING USE CASES
Excellent software products are the result of a well-executed design based on
excellent requirements, and high-quality requirements result from effective communication
and coordination between developers and customers. That is, good customer-developer
relationship and effective communication between these two entities is a must for a
successful software project. In order to build this relationship and capture the requirements
properly, it is essential for the requirement engineer to learn about the business that is to be
automated.
It is important to recognize that a software engineer is typically not hired to
solve a computer science problem. More often than not, the problem lies in a domain other
than computer science, and the software engineer must understand it before it can be
solved. In order to improve the communication level between the vendor and the client, the
software engineer should learn the domain related terminology and use that terminology in
documenting the requirements. The document should be structured and written in a way that
the customer finds easy to read and understand, so that there are no ambiguities or false
assumptions.
One tool used to organize and structure the requirements in such a fashion is
use case modeling.
It is a modeling technique, developed by Ivar Jacobson, that describes what a new
system should do or what an existing system already does. It is now part of the standard
software modeling language known as the Unified Modeling Language (UML). It captures a
discussion process between the system developer and the customer. It is widely used
because it is comparatively easy to understand intuitively – even without knowing the
notation. Because of its intuitive nature, it can be easily discussed with the customer who
may not be familiar with UML, resulting in a requirement specification on which all agree.
5.7.1. Use Case Model Components
A use case model has two components, use cases and actors.
In a use case model, boundaries of the system are defined by functionality
that is handled by the system. Each use case specifies a complete functionality from its
initiation by an actor until it has performed the requested functionality. An actor is an entity
that has an interest in interacting with the system. An actor can be a human or some other
device or system. An actor is anything that communicates with the system or product and
that is external to the system itself. Every actor has one or more goals when using the
system.
A use case model represents a use case view of the system – how the system
is going to be used. In this case system is treated as a black box and it only depicts the
external interface of the system. From an end user’s perspective, it describes the
functional requirements of the system. To a developer, it gives a clear and consistent
description of what the system should do. This model is used and elaborated throughout the
development process. As an aid to the tester, it provides a basis for performing system tests
to verify the system. It also provides the ability to trace functional requirements into actual
classes and operations in the system and hence helps in identifying any gaps.
5.7.2. Developing Use Cases:
The first step in writing a use case is to define the set of “actors” that will be
involved in the story. Actors represent the roles that people (or devices) play as the system
operates.
It is important to note that an actor and an end user are not necessarily the
same thing. A typical user may play a number of different roles when using a system,
whereas an actor represents a class of external entities (often, but not always, people) that
play just one role in the context of the use case.
Primary actors interact to achieve required system function and derive the
intended benefit from the system. They work directly and frequently with the software.
Secondary actors support the system so that primary actors can do their work. Once actors
have been identified, use cases can be developed. Jacobson suggests a number of questions
that should be answered by a use case:
• Who is the primary actor, the secondary actor(s)?
• What are the actor’s goals?
• What preconditions should exist before the story begins?
• What main tasks or functions are performed by the actor?
• What exceptions might be considered as the story is described?
• What variations in the actor’s interaction are possible?
• What system information will the actor acquire, produce, or change?
• Will the actor have to inform the system about changes in the external environment?
• What information does the actor desire from the system?
• Does the actor wish to be informed about unexpected changes?
5.7.3. Limitations of Use Cases
Use cases alone are not sufficient. There are other kinds of requirements (mostly
nonfunctional) that also need to be understood. Since use cases provide a user’s perspective, they
describe the system as a black box and hide the internal details from the users. Hence, in a
use case, domain (business) rules as well as legal issues are not documented. The non-
functional requirements are also not documented in the use cases.
5.8. Building The Requirements Model:
The intent of the analysis model is to provide a description of the required
informational, functional, and behavioral domains for a computer-based system. The model
changes dynamically as you learn more about the system to be built, and other stakeholders
understand more about what they really require. For that reason, the analysis model is a
snapshot of requirements at any given time. You should expect it to change.
As the requirements model evolves, certain elements will become relatively
stable, providing a solid foundation for the design tasks that follow. However, other
elements of the model may be more volatile, indicating that stakeholders do not yet fully
understand requirements for the system.
5.8.1. Elements of the Requirements Model
There are many different ways to model the requirements for a computer-
based system. Some software people argue that it’s best to select one mode of representation
(e.g., the use case) and apply it to the exclusion of all other models. Other practitioners
believe that it’s worthwhile to use a number of different modes of representation to depict
the requirements model. Different modes of representation force you to consider
requirements from different viewpoints—an approach that has a higher probability of
uncovering omissions, inconsistencies, and ambiguity. The specific elements of the
requirements model are dictated by the analysis modeling method that is to be used.
However, a set of generic elements is common to most requirements models.
5.8.1.1. Scenario-based elements:
The system is described from the user’s point of view using a scenario-based
approach.
Use Case Diagrams:
Basic use cases and their corresponding use-case diagrams evolve into more
elaborate template-based use cases. Scenario-based elements of the requirements model are
often the first part of the model that is developed. As such, they serve as input for the
creation of other modeling elements.
Activity Diagrams:
Activity diagrams give a pictorial description of a use case. An activity diagram is similar
to a flow chart: it shows the flow from activity to activity and expresses the dynamic aspect
of the system. Following is the activity diagram for the Delete Information use case.
5.8.1.2. Class-based elements:
Each usage scenario implies a set of objects that are manipulated as an actor
interacts with the system. These objects are categorized into classes—a collection of things
that have similar attributes and common behaviors.
5.8.1.3. Behavioral elements:
The behavior of a computer-based system can have a profound effect on the
design that is chosen and the implementation approach that is applied. Therefore, the
requirements model must provide modeling elements that depict behavior.
5.8.1.4. Flow-oriented elements:
Information is transformed as it flows through a computer-based system. The
system accepts input in a variety of forms, applies functions to transform it, and produces
output in a variety of forms. A flow model can be designed for any computer-based system
to show the flow of information, regardless of size and complexity.
5.9. Negotiating Requirements
In an ideal requirements engineering context, the inception, elicitation, and
elaboration tasks determine customer requirements in sufficient detail to proceed to
subsequent software engineering activities. Unfortunately, this rarely happens. In reality,
you may have to enter into a negotiation with one or more stakeholders. In most cases,
stakeholders are asked to balance functionality, performance, and other product or system
characteristics against cost and time-to-market. The intent of this negotiation is to develop a
project plan that meets stakeholder needs while at the same time reflecting the real-world
constraints (e.g., time, people, and budget) that have been placed on the software team.
The best negotiations strive for a “win-win” result. That is, stakeholders win
by getting the system or product that satisfies the majority of their needs and you (as a
member of the software team) win by working to realistic and achievable budgets and
deadlines.
Boehm defines a set of negotiation activities at the beginning of each
software process iteration. Rather than a single customer communication activity, the
following activities are defined:
1. Identification of the system or subsystem’s key stakeholders.
2. Determination of the stakeholders’ “win conditions.”
3. Negotiation of the stakeholders’ win conditions to reconcile them into a set of win-
win conditions for all concerned (including the software team).
Successful completion of these initial steps achieves a win-win result, which
becomes the key criterion for proceeding to subsequent software engineering activities.
5.10. Validating Requirements
As each element of the requirements model is created, it is examined for
inconsistency, omissions, and ambiguity. The requirements represented by the model are
prioritized by the stakeholders and grouped within requirements packages that will be
implemented as software increments. A review of the requirements model addresses the
following questions:
• Is each requirement consistent with the overall objectives for the system/product?
• Have all requirements been specified at the proper level of abstraction? That is, do
some requirements provide a level of technical detail that is inappropriate at this
stage?
• Is the requirement really necessary, or does it represent an add-on feature that may
not be essential to the objective of the system?
• Is each requirement bounded and unambiguous?
• Does each requirement have attribution? That is, is a source (generally, a specific
individual) noted for each requirement?
• Do any requirements conflict with other requirements?
• Is each requirement achievable in the technical environment that will house the
system or product?
• Is each requirement testable, once implemented?
• Does the requirements model properly reflect the information, function, and behavior
of the system to be built?
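A few of these review questions (attribution, testability, bounded wording) can even be checked mechanically before the human review. The record fields and the vague-word heuristic below are illustrative, not a standard:

```python
# Sketch: an automated first pass over the mechanically checkable review
# questions above. Fields and heuristics are illustrative only; a technical
# review by people remains the primary validation mechanism.

VAGUE_WORDS = {"fast", "user-friendly", "flexible", "etc."}

def review(requirement):
    """Return a list of problems found for one requirement record."""
    problems = []
    if not requirement.get("source"):
        problems.append("no attribution: no source individual noted")
    if not requirement.get("acceptance_test"):
        problems.append("not testable: no acceptance criterion given")
    if VAGUE_WORDS & set(requirement["text"].lower().split()):
        problems.append("unbounded/ambiguous wording")
    return problems

req = {"text": "The system shall be fast", "source": "", "acceptance_test": ""}
issues = review(req)
```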
Chapter 07
Requirements Modeling
7.1. Systems Analysis Phase
The overall objective of the systems analysis phase is to understand the proposed
project, ensure that it will support business requirements, and build a solid foundation for
system development.
The systems analysis phase includes the four main activities shown in Figure 7.1:
requirements modeling, data and process modeling, object modeling, and consideration of
development strategies. The figure also shows the typical interaction among these modeling tasks:
1. Requirements Modeling
2. Data and process Modeling
3. Object modeling.
4. Development Strategies
FIGURE 7.1: The systems analysis phase consists of requirements modeling, data and process
modeling, object modeling, and consideration of development strategies.
7.1.1. Requirements Modeling
Requirements modeling involves fact-finding to describe the current system and
identification of the requirements for the new system, such as outputs, inputs, processes,
performance, and security.
Outputs refer to electronic or printed information produced by the system.
Inputs refer to necessary data that enters the system, either manually or in an automated
manner.
Processes refer to the logical rules that are applied to transform the data into meaningful
information.
Performance refers to system characteristics such as speed, volume, capacity, availability,
and reliability.
Security refers to hardware, software, and procedural controls that safeguard and protect the
system and its data from internal or external threats.
7.1.2. Data and Process Modeling
During data and process modeling, you continue the modeling process by learning how to represent system data and processes graphically, using traditional structured
analysis techniques. Structured analysis identifies the data flowing into a process, the
business rules that transform the data, and the resulting output data flow.
7.1.3. Object Modeling:
Object modeling is another popular modeling technique. While structured
analysis treats processes and data as separate components, object-oriented analysis (O-O)
combines data and the processes that act on the data into things called objects. These objects
represent actual people, things, transactions, and events that affect the system. During the
system development process, analysts often use both modeling methods to gain as much
information as possible.
7.1.4. Development Strategies
In this activity, you consider various development options and
prepare for the transition to the systems design phase of the SDLC. You will learn about
software trends, acquisition and development alternatives, outsourcing, and formally
documenting requirements for the new system.
The deliverable, or end product, of the systems analysis phase is a system
requirements document, which is an overall design for the new system. In addition, each
activity within the systems analysis phase has an end product and one or more milestones.
Project managers use various tools and techniques to coordinate people, tasks, timetables,
and budgets.
7.2. Systems Analysis Skills
Analytical skills enable you to identify a problem, evaluate the key elements, and
develop a useful solution. Interpersonal skills are especially valuable to a systems analyst
who must work with people at all organizational levels, balance conflicting needs of users,
and communicate effectively.
Because information systems affect people throughout the company, you
should consider team-oriented strategies as you begin the systems analysis phase.
7.2.1. Team-Based Techniques: JAD, RAD, and Agile Methods
The IT department’s goal is to deliver the best possible information system,
at the lowest possible cost, in the shortest possible time. To achieve the best results, system
developers view users as partners in the development process. Greater user involvement
usually results in better communication, faster development times, and more satisfied users.
The traditional model for systems development was an IT department that
used structured analysis and consulted users only when their input or approval was needed.
Although the IT staff still has a central role, and structured analysis remains a popular
method of systems development, most IT managers invite system users to participate
actively in various development tasks.
A popular example is joint application development (JAD), which is a user-
oriented technique for fact-finding and requirements modeling. Because it is not linked to a
specific development methodology, systems developers use JAD whenever group input and
interaction are desired.
Another popular user-oriented method is rapid application development
(RAD). RAD resembles a condensed version of the entire SDLC, with users involved every
step of the way. While JAD typically focuses only on fact-finding and requirements
determination,
RAD provides a fast-track approach to a full spectrum of system
development tasks, including planning, design, construction, and implementation.
Agile methods represent a recent trend that stresses intense interaction
between system developers and users. JAD, RAD, and agile methods are discussed in the
following sections.
7.2.2. Joint Application Development
Joint application development (JAD) is a popular fact-finding technique that brings
users into the development process as active participants.
User Involvement
Users have a vital stake in an information system, and they should participate fully in
the development process. In the recent past, the IT department usually had sole responsibility for
systems development, and users had a relatively passive role. During the development
process, the IT staff would collect information from users, define system requirements, and
construct the new system. At various stages of the process, the IT staff might ask users to
review the design, offer comments, and submit changes.
Today, users typically have a much more active role in systems development. IT
professionals now recognize that successful systems must be user-oriented, and users need
to be involved, formally or informally, at every stage of system development.
One popular strategy for user involvement is a JAD team approach, which involves a
task force of users, managers, and IT professionals that works together to gather
information, discuss business needs, and define the new system requirements.
JAD Participants and Roles
A JAD team usually meets over a period of days or weeks in a special
conference room or at an off-site location. JAD participants should be insulated from the
distraction of day-to-day operations. The objective is to analyze the existing system, obtain
user input and expectations, and document user requirements for the new system. The JAD
group usually has a project leader, who needs strong interpersonal and organizational skills,
and one or more members who document and record the results and decisions. IT staff
members often serve as JAD project leaders, but that is not always the case. Systems
analysts on the JAD team participate in discussions, ask questions, take notes, and provide
support to the team. If CASE tools are available, analysts can develop models and enter
documentation from the JAD session directly into the CASE tool.
The JAD process involves intensive effort by all team members. Because of
the wide range of input and constant interaction among the participants, many companies
believe that a JAD group produces the best possible definition of the new system.
JAD Advantages and Disadvantages
Compared with traditional methods, JAD is more expensive and can be
cumbersome if the group is too large relative to the size of the project. Many companies
find, however, that JAD allows key users to participate effectively in the requirements
modeling process. When users participate in the systems development process, they are
more likely to feel a sense of ownership in the results, and support for the new system.
When properly used, JAD can result in a more accurate statement of system requirements, a
better understanding of common goals, and a stronger commitment to the success of the new
system.
7.2.3. Rapid Application Development (RAD)
Rapid application development (RAD) is a team-based technique that speeds
up information systems development and produces a functioning information system.
Like JAD, RAD uses a group approach, but goes much further. While the end
product of JAD is a requirements model, the end product of RAD is the new information
system.
RAD is a complete methodology, with a four-phase life cycle that parallels
the traditional SDLC phases. Companies use RAD to reduce cost and development time, and
increase the probability of success.
RAD relies heavily on prototyping and user involvement. The RAD process
allows users to examine a working model as early as possible, determine if it meets their
needs, and suggest necessary changes. Based on user input, the prototype is modified and
the interactive process continues until the system is completely developed and users are
satisfied. The project team uses CASE tools to build the prototypes and create a continuous
stream of documentation.
RAD Phases and Activities
The RAD model consists of four phases: requirements planning, user design,
construction, and cutover, as shown in Figure 7.2. Notice the continuous interaction between
the user design and construction phases.
FIGURE 7.2 The four phases of the RAD model are requirements planning, user design,
construction, and cutover.
Requirements Planning:
The requirements planning phase combines elements of the systems planning
and systems analysis phases of the SDLC. Users, managers, and IT staff members discuss
and agree on business needs, project scope, constraints, and system requirements. The
requirements planning phase ends when the team agrees on the key issues and obtains
management authorization to continue.
User Design:
During the user design phase, users interact with systems analysts and
develop models and prototypes that represent all system processes, outputs, and inputs. The
RAD group or subgroups typically use a combination of JAD techniques and CASE tools to
translate user needs into working models. User design is a continuous, interactive process that allows users to understand, modify, and eventually approve a working model of the system that meets their needs.
Construction:
The construction phase focuses on program and application development
tasks similar to the SDLC. In RAD, however, users continue to participate and still can
suggest changes or improvements as actual screens or reports are developed.
Cutover:
The cutover phase resembles the final tasks in the SDLC implementation
phase, including data conversion, testing, changeover to the new system, and user training.
Compared with traditional methods, the entire process is compressed. As a result, the new
system is built, delivered, and placed in operation much sooner.
RAD Objectives
The main objective of all RAD approaches is to cut development time and
expense by involving users in every phase of systems development. Because it is a
continuous process, RAD allows the development team to make necessary modifications
quickly, as the design evolves. In times of tight corporate budgets, it is especially important
to limit the cost of changes that typically occur in a long, drawn-out development schedule.
In addition to user involvement, a successful RAD team must have IT
resources, skills, and management support. Because it is a dynamic, user-driven process,
RAD is especially valuable when a company needs an information system to support a new
business function. By obtaining user input from the beginning, RAD also helps a
development team design a system that requires a highly interactive or complex user
interface.
RAD Advantages and Disadvantages
RAD has advantages and disadvantages compared with traditional structured
analysis methods. The primary advantage is that systems can be developed more quickly
with significant cost savings. A disadvantage is that RAD stresses the mechanics of the
system itself and does not emphasize the company’s strategic business needs. The risk is that
a system might work well in the short term, but the corporate and long-term objectives for
the system might not be met. Another potential disadvantage is that the accelerated time
cycle might allow less time to develop quality, consistency, and design standards. RAD can
be an attractive alternative, however, if an organization understands the possible risks.
7.3. Modeling Tools and Techniques
Models help users, managers, and IT professionals understand the design of a
system. Modeling involves graphical methods and nontechnical language that represent the
system at various stages of development. During requirements modeling, various tools can
be used to describe business processes, requirements, and user interaction with the system.
Systems analysts use modeling and fact-finding interactively: first they build fact-finding results into models, then they study the models to determine whether
additional fact-finding is needed. To help them understand system requirements, analysts
use functional decomposition diagrams, business process models, data flow diagrams, and
Unified Modeling Language diagrams. Any of these diagrams can be created with CASE
tools or standalone drawing tools if desired.
Functional Decomposition Diagrams
A functional decomposition diagram (FDD) is a top down representation of a
function or process. Using an FDD, an analyst can show business functions and break them
down into lower-level functions and processes. Creating an FDD is similar to drawing an
organization chart — you start at the top and work your way down. Figure 7.3 shows an
FDD of a library system drawn with the Visible Analyst CASE tool. FDDs can be used at
several stages of systems development. During requirements modeling, analysts use FDDs
to model business functions and show how they are organized into lower-level processes.
Those processes translate into program modules during application development.
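An FDD's top-down structure maps naturally onto a nested dictionary, with each function holding its lower-level functions. A sketch of a library system in the spirit of Figure 7.3; the specific function names are assumptions, since the figure itself is not reproduced here:

```python
# A functional decomposition as a nested dict: each key is a function,
# each value is the dict of its lower-level functions (empty dict = leaf).
fdd = {
    "Library System": {
        "Library Operations": {
            "Maintain Books": {"Add Book": {}, "Remove Book": {}},
            "Circulation": {"Check Out": {}, "Check In": {}},
        },
        "Manage Members": {},
    }
}

def leaves(tree):
    """Collect the lowest-level processes, which translate into program modules."""
    out = []
    for name, sub in tree.items():
        out.extend(leaves(sub) if sub else [name])
    return out

print(leaves(fdd))
```

Walking the structure top-down mirrors how an analyst draws the FDD: start at the top and work down until only single processing functions remain.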
FIGURE 7.3 FDD showing five top-level functions. The Library Operations function
includes two additional levels that show processes and sub-processes.
Business Process Modeling
A business process model (BPM) describes one or more business processes,
such as handling an airline reservation, filling a product order, or updating a customer
account. During requirements modeling, analysts often create models that use a standard
language called business process modeling notation (BPMN). BPMN includes various
shapes and symbols to represent events, processes, and workflows. When you create a
business process model using a CASE tool such as Visible Analyst, your diagram
automatically becomes part of the overall model. In the example shown in Figure 7.4, using
BPMN terminology, the overall diagram is called a pool, and the designated customer areas
are called swim lanes. Integrating BPM into the CASE development process leads to faster
results, fewer errors, and reduced cost. Part B of the Systems Analyst’s Toolkit describes
business process modeling in more detail.
FIGURE 7.4 Using the Visible Analyst CASE tool, an analyst can create a business process
diagram. The overall diagram is called a pool, and the two separate customer areas are called swim lanes.
Data Flow Diagrams
Working from a functional decomposition diagram, analysts can create data
flow diagrams (DFDs) to show how the system stores, processes, and transforms data. The
DFD in Figure 7.5 describes adding and removing books, which is a function shown in the
library system diagram in Figure 7.3. Notice that the two shapes in the DFD represent
processes, each with various inputs and outputs. Additional levels of information and detail
are depicted in other, related DFDs.
FIGURE 7.5: A library system DFD shows how books are added and removed.
Unified Modeling Language
The Unified Modeling Language (UML) is a widely used method of
visualizing and documenting software systems design. UML uses object-oriented design
concepts, but it is independent of any specific programming language and can be used to
describe business processes and requirements generally.
UML provides various graphical tools, such as use case diagrams and
sequence diagrams. During requirements modeling, a systems analyst can utilize the UML to
represent the information system from a user’s viewpoint. A brief description of each
technique follows.
Use Case Diagrams:
During requirements modeling, systems analysts and users work together to
document requirements and model system functions. A use case diagram visually
represents the interaction between users and the information system. In a use case diagram,
the user becomes an actor, with a specific role that describes how he or she interacts with
the system. Systems analysts can draw use case diagrams freehand or use CASE tools that
integrate the use cases into the overall system design.
Figure 7.6 shows a simple use case diagram for a sales system where the
actor is a customer and the use case involves a credit card validation that is performed by the
system. Because use cases depict the system through the eyes of a user, common business
language can be used to describe the transactions. Figure 7.7 shows a student records
system, with several use cases and actors.
FIGURE 7.6: Use case diagram of a sales system, where the actor is a customer and the use case involves a credit card validation.
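A use case diagram's core content can be captured as a mapping from actors to the use cases they participate in. A minimal sketch; the sales-system entry restates the text, while the student records entries are illustrative assumptions:

```python
# Actors mapped to the use cases they interact with. Names are illustrative.
use_cases = {
    "Customer": ["Make Purchase", "Validate Credit Card"],
    "Student": ["Register for Class", "View Grades"],
    "Registrar": ["Post Grades"],
}

def actors_for(use_case):
    """Find every actor that interacts with a given use case."""
    return [actor for actor, cases in use_cases.items() if use_case in cases]

print(actors_for("Validate Credit Card"))
```

This flat form loses the «uses» and «extends» relationships a full diagram can show, but it is often enough for an initial requirements inventory.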
[Figure content not reproduced: the AISMS diagram shows a USER actor with use cases Add New Item, Edit Item, Delete Item, View Item Information, Make Login, Validate User, Make Purchase Order, Make GRN, Make Sales Invoice, Check Physical Stock, Check Reports, Receive Returned Items from Customer, and Return Items to Suppliers, connected by «uses» and «extends» relationships.]
FIGURE 7.7: Use case diagram of an Online Pharmacy
Sequence Diagrams:
A sequence diagram shows the timing of interactions between objects as
they occur. A systems analyst might use a sequence diagram to show all possible outcomes,
or focus on a single scenario. Figure 7.8 shows a simple sequence diagram of a successful
credit card validation. The interaction proceeds from top to bottom along a vertical timeline,
while the horizontal arrows represent messages from one object to another.
FIGURE 7.8 Sequence diagram showing a credit card validation process.
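Because a sequence diagram is essentially an ordered list of messages between objects, it can be sketched as data. The object and message names below are assumptions illustrating the successful validation scenario, since Figure 7.8 is not reproduced here:

```python
# A sequence diagram reduced to ordered (sender, receiver, message) tuples,
# read top to bottom as on the vertical timeline. Names are illustrative.
sequence = [
    ("Customer", "Sales System", "submit card details"),
    ("Sales System", "Card Processor", "request validation"),
    ("Card Processor", "Sales System", "validation approved"),
    ("Sales System", "Customer", "confirm sale"),
]

for step, (sender, receiver, message) in enumerate(sequence, start=1):
    print(f"{step}. {sender} -> {receiver}: {message}")
```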
Chapter 08
Data and Process Modeling
8.1. Data Flow Diagrams
A data flow diagram (DFD) shows how data moves through an information
system but does not show program logic or processing steps. A set of DFDs provides a
logical model that shows what the system does, not how it does it. That distinction is
important because focusing on implementation issues at this point would restrict your search
for the most effective system design.
8.1.1. DFD Symbols
DFDs use four basic symbols that represent processes, data flows, data stores, and entities. In this text, symbol names are written in all capital letters.
FIGURE 8.1: Data flow diagram symbols, symbol names, and examples from the Gane and Sarson and Yourdon symbol sets.
PROCESS SYMBOL:
The symbol for a process is a rectangle with rounded corners or a circle. The
name of the process appears inside the rectangle or circle. The process name identifies a
specific function and consists of a verb (and an adjective, if necessary) followed by a
singular noun. Examples of process names are APPLY RENT PAYMENT, CALCULATE
COMMISSION, ASSIGN FINAL GRADE, VERIFY ORDER, and FILL ORDER.
Processing details are not shown in a DFD. In DFDs, a process symbol can be
referred to as a black box, because the inputs, outputs, and general functions of the process
are known, but the underlying details and logic of the process are hidden.
DATA FLOW SYMBOL:
A data flow is a path for data to move from one part of the information
system to another. A data flow in a DFD represents one or more data items. For example, a
data flow could consist of a single data item (such as a student ID number) or it could
include a set of data (such as a class roster with student ID numbers, names, and registration
dates for a specific class).
The symbol for a data flow is a line with a single or double arrowhead. The
data flow name appears above, below, or alongside the line. A data flow name consists of a
singular noun and an adjective, if needed. Examples of data flow names are DEPOSIT,
INVOICE PAYMENT, STUDENT GRADE, ORDER, and COMMISSION.
DATA STORE SYMBOL:
A data store is used in a DFD to represent data that the system stores because
one or more processes need to use the data at a later time.
In a DFD, the symbol for a data store is a flat rectangle that is open on the
right side and closed on the left side. The name of the data store appears between the lines
and identifies the data it contains. A data store name is a plural name consisting of a noun
and adjectives, if needed. Examples of data store names are STUDENTS, ACCOUNTS
RECEIVABLE, PRODUCTS, and DAILY PAYMENTS.
A data store must be connected to a process with a data flow. Figure 8.2
illustrates typical examples of data stores. In each case, the data store has at least one
incoming and one outgoing data flow and is connected to a process symbol with a data flow.
Violations of the rule that a data store must have at least one incoming and one outgoing
data flow are shown in Figure 8.3. In the first example, two data stores are connected
incorrectly because no process is between them. Also, COURSES has no incoming data
flow and STUDENTS has no outgoing data flow. In the second and third examples, the data
stores lack either an outgoing or incoming data flow.
FIGURE 8.2: Examples of correct uses of data store symbols in a data flow diagram.
FIGURE 8.3: Examples of incorrect uses of data store symbols.
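The data store rules above can be checked mechanically: a store needs at least one incoming and one outgoing flow, and everything it connects to must be a process. A sketch that reproduces the two-stores-wired-together violation described in the text (the flow-tuple representation is an assumption):

```python
# Each flow is a (source, destination) pair; `types` tags each element.
# Rules checked: a data store needs >= 1 incoming and >= 1 outgoing flow,
# and every flow touching a store must have a process on the other end.
def check_data_stores(flows, types):
    errors = []
    stores = [name for name, t in types.items() if t == "store"]
    for s in stores:
        incoming = [f for f in flows if f[1] == s]
        outgoing = [f for f in flows if f[0] == s]
        if not incoming:
            errors.append(f"{s}: no incoming data flow")
        if not outgoing:
            errors.append(f"{s}: no outgoing data flow")
        for src, dst in incoming + outgoing:
            other = src if dst == s else dst
            if types.get(other) != "process":
                errors.append(f"{s}: connected to {other}, which is not a process")
    return errors

types = {"STUDENTS": "store", "COURSES": "store", "REGISTER": "process"}
flows = [("COURSES", "STUDENTS")]  # two stores wired together: invalid
print(check_data_stores(flows, types))
```

On this input the check reports the same problems the text describes: COURSES has no incoming flow, STUDENTS has no outgoing flow, and no process sits between them.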
ENTITY SYMBOL:
The symbol for an entity is a rectangle, which may be shaded to make it look
three-dimensional. The name of the entity appears inside the symbol.
A DFD shows only external entities that provide data to the system or receive
output from the system. A DFD shows the boundaries of the system and how the system
interfaces with the outside world. DFD entities also are called terminators, because they are
data origins or final destinations. Systems analysts call an entity that supplies data to the
system a source, and an entity that receives data from the system a sink. An entity name is
the singular form of a department, outside organization, other information system, or person.
An external entity can be a source or a sink or both, but each entity must be connected to a
process by a data flow. With an understanding of the proper use of DFD symbols, you are ready to construct a set of DFDs.
FIGURE 8.4: Examples of correct uses of external entities in a data flow diagram.
FIGURE 8.5: Examples of incorrect uses of external entities.
FIGURE 8.6: Examples of correct and incorrect uses of data flows.
Guidelines for Drawing DFDs
There are several guidelines for drawing a DFD:
- Draw the context diagram so it fits on one page.
- Use the name of the information system as the process name in the context diagram.
- Use unique names within each set of symbols.
- Do not cross lines.
- Provide a unique name and reference number for each process.
- Obtain as much user input and feedback as possible.
The main objective is to ensure that the model is accurate, easy to understand, and meets the needs of its users.
FIGURE 8.7: Context diagram DFD for a grading system.
CREATING A SET OF DFDs:
Step 1: Draw a Context Diagram
The first step in constructing a set of DFDs is to draw a context diagram. A
context diagram is a top-level view of an information system that shows the system’s
boundaries and scope. To draw a context diagram, place a single process symbol in the
center of the page. The symbol represents the entire information system, and identifies it as
process 0. Then place the system entities around the perimeter of the page and use data
flows to connect the entities to the central process. Data stores are not shown in the context
diagram because they are contained within the system and remain hidden until more detailed
diagrams are created.
EXAMPLE: CONTEXT DIAGRAM FOR AN ORDER SYSTEM
The context diagram for an order system is shown in Figure 8.8. The ORDER
SYSTEM process is at the center of the diagram and five entities surround the process.
Three of the entities, SALES REP, BANK, and ACCOUNTING, have single incoming data
flows for COMMISSION, BANK DEPOSIT, and CASH RECEIPTS ENTRY, respectively.
The WAREHOUSE entity has one incoming data flow — PICKING LIST — that is, a
report that shows the items ordered and their quantity, location, and sequence to pick from
the warehouse. The WAREHOUSE entity has one outgoing data flow, COMPLETED
ORDER. Finally, the CUSTOMER entity has two outgoing data flows, ORDER and
PAYMENT, and two incoming data flows, ORDER REJECT NOTICE and INVOICE. The
context diagram for the order system appears more complex than the grading system because
it has two more entities and three more data flows. What makes one system more complex
than another is the number of components, the number of levels, and the degree of
interaction among its processes, entities, data stores, and data flows.
FIGURE 8.8: Context diagram DFD for an order system.
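The flows just described can be restated as data, which makes the entity and flow counts easy to verify. A sketch; the tuple directions follow the text's description of incoming and outgoing flows:

```python
# Context diagram for the order system as (source, destination, data flow)
# tuples. "ORDER SYSTEM" is process 0; everything else is an external entity.
flows = [
    ("ORDER SYSTEM", "SALES REP", "COMMISSION"),
    ("ORDER SYSTEM", "BANK", "BANK DEPOSIT"),
    ("ORDER SYSTEM", "ACCOUNTING", "CASH RECEIPTS ENTRY"),
    ("ORDER SYSTEM", "WAREHOUSE", "PICKING LIST"),
    ("WAREHOUSE", "ORDER SYSTEM", "COMPLETED ORDER"),
    ("CUSTOMER", "ORDER SYSTEM", "ORDER"),
    ("CUSTOMER", "ORDER SYSTEM", "PAYMENT"),
    ("ORDER SYSTEM", "CUSTOMER", "ORDER REJECT NOTICE"),
    ("ORDER SYSTEM", "CUSTOMER", "INVOICE"),
]

# Every endpoint other than process 0 is an external entity.
entities = {src for src, _, _ in flows} | {dst for _, dst, _ in flows}
entities.discard("ORDER SYSTEM")
print(sorted(entities), len(flows))
```

Counting confirms the complexity comparison in the text: five entities and nine data flows, against the simpler grading system.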
Step 2: Draw a Diagram 0 DFD
The context diagram provides the most general view of an information system
and contains a single process symbol, which is like a black box. To show the detail inside
the black box, you create DFD diagram 0. Diagram 0 (the numeral zero, and not the letter
O) zooms in on the system and shows major internal processes, data flows, and data stores.
Diagram 0 also repeats the entities and data flows that appear in the context diagram. When
you expand the context diagram into DFD diagram 0, you must retain all the connections
that flow into and out of process 0.
EXAMPLE: DIAGRAM 0 DFD FOR AN ORDER SYSTEM
Figure 8.9 shows diagram 0 for the order system. Process 0 on the order system's
context diagram is exploded to reveal three processes (FILL ORDER, CREATE INVOICE, and
APPLY PAYMENT), one data store (ACCOUNTS RECEIVABLE), two additional data flows
(INVOICE DETAIL and PAYMENT DETAIL), and one diverging data flow (INVOICE).
The following walkthrough explains the DFD shown in Figure 8.9:
1. A CUSTOMER submits an ORDER. Depending on the processing logic, the FILL ORDER process either sends an ORDER REJECT NOTICE back to the customer or sends a PICKING LIST to the WAREHOUSE.
2. A COMPLETED ORDER from the WAREHOUSE is input to the CREATE INVOICE process, which outputs an INVOICE to both the CUSTOMER entity and the ACCOUNTS RECEIVABLE data store.
3. A CUSTOMER makes a PAYMENT that is processed by APPLY PAYMENT. APPLY PAYMENT requires INVOICE DETAIL input from the ACCOUNTS RECEIVABLE data store along with the PAYMENT. APPLY PAYMENT also outputs PAYMENT DETAIL back to the ACCOUNTS RECEIVABLE data store and outputs COMMISSION to the SALES REP, BANK DEPOSIT to the BANK, and CASH RECEIPTS ENTRY to ACCOUNTING.
FIGURE 8.9: Diagram 0 DFD for the order system.
Step 3: Draw the Lower-Level Diagrams
The lower-level DFDs in this section are based on the order system. To create lower-level diagrams, you use leveling and balancing techniques. Leveling is the process of drawing a series of increasingly detailed diagrams, until all functional primitives are identified. Balancing maintains consistency among a set of DFDs by ensuring that input and output data flows align properly. Leveling and balancing are described in more detail in the following sections.
LEVELING EXAMPLES:
Leveling uses a series of increasingly detailed DFDs to describe an information system. For example, a system might consist of dozens, or even hundreds, of separate processes.
Using leveling, an analyst starts with an overall view, which is a context diagram with a single
process symbol. Next, the analyst creates diagram 0, which shows more detail. The analyst continues
to create lower-level DFDs until all processes are identified as functional primitives, which represent
single processing functions. More complex systems have more processes, and analysts must work
through many levels to identify the functional primitives. Leveling also is called exploding,
partitioning, or decomposing.
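The decimal numbering used in leveled DFDs can be generated mechanically: each child process takes its parent's reference number plus a sequence number. A minimal sketch:

```python
# Generate child process reference numbers for a leveled DFD.
# Parent "1" yields "1.1", "1.2", ...; parent "1.3" yields "1.3.1", "1.3.2", ...
# Process 0 (the context diagram) is special: its children are just "1", "2", ...
def child_numbers(parent, count):
    prefix = "" if parent == "0" else parent + "."
    return [f"{prefix}{i}" for i in range(1, count + 1)]

print(child_numbers("1", 3))    # processes inside diagram 1
print(child_numbers("1.3", 3))  # processes inside diagram 1.3
```

Because a process number encodes its entire ancestry, any functional primitive can be traced back through every level to the context diagram.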
Figures 8.9 and 8.10 provide an example of leveling. Figure 8.9 shows diagram 0 for an order system, with the FILL ORDER process labeled as process 1. Now consider Figure 8.10, which provides an exploded view of the FILL ORDER process. FILL ORDER (process 1) actually consists of three processes: VERIFY ORDER (process 1.1), PREPARE REJECT NOTICE (process 1.2), and ASSEMBLE ORDER (process 1.3).
FIGURE 8.10: Diagram 1 DFD shows details of the FILL ORDER process in the order system.
As Figure 8.10 shows, all processes are numbered using a decimal notation consisting of the parent's reference number, a decimal point, and a sequence number within the new diagram. In Figure 8.10, the parent process of diagram 1 is process 1, so the processes in diagram 1 have reference numbers of 1.1, 1.2, and 1.3. If process 1.3, ASSEMBLE ORDER, is decomposed further, it would appear in diagram 1.3, and the processes in diagram 1.3 would be numbered 1.3.1, 1.3.2, 1.3.3, and so on. This numbering technique makes it easy to integrate and identify all DFDs.
BALANCING EXAMPLES:
Balancing ensures that the input and output data flows of the parent DFD are
maintained on the child DFD. For example, Figure 8.11 shows two DFDs: The order system
diagram 0 is shown at the top of the figure, and the exploded diagram 3 DFD is shown at the
bottom.
The two DFDs are balanced, because the child diagram at the bottom has the
same input and output flows as the parent process 3 shown at the top. To verify the
balancing, notice that the parent process 3, APPLY PAYMENT, has one incoming data flow
from an external entity, and three outgoing data flows to external entities. Now examine the
child DFD, which is diagram 3. Now, ignore the internal data flows and count the data flows
to and from external entities. You will see that the three processes maintain the same one
incoming and three outgoing data flows as the parent process.
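The balancing check just described can also be done mechanically: ignore the child diagram's internal flows and compare what remains against the parent process's flows. A sketch under the assumption that flows are represented as (entity, direction, name) tuples; the INVOICE DETAIL flow is treated here as internal to the child diagram:

```python
# Verify that a child DFD is balanced against its parent process: the data
# flows crossing the child diagram's boundary must match the parent's flows.
def is_balanced(parent_flows, child_flows, internal):
    """parent_flows/child_flows: sets of (entity, direction, name) tuples;
    internal: names of flows private to the child diagram (ignored)."""
    external = {f for f in child_flows if f[2] not in internal}
    return external == parent_flows

# Parent process 3 (APPLY PAYMENT): one incoming and three outgoing flows.
parent = {("CUSTOMER", "in", "PAYMENT"),
          ("SALES REP", "out", "COMMISSION"),
          ("BANK", "out", "BANK DEPOSIT"),
          ("ACCOUNTING", "out", "CASH RECEIPTS ENTRY")}
# The child diagram adds an internal flow between its own processes.
child = parent | {("APPLY PAYMENT", "in", "INVOICE DETAIL")}
print(is_balanced(parent, child, internal={"INVOICE DETAIL"}))
```

If the internal flow were not excluded, the comparison would fail, which is exactly the mistake the balancing rule guards against.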
When you compare Figures 8.9 and 8.10, notice that Figure 8.10 shows two data stores
(CUSTOMERS and PRODUCTS) that do not appear on Figure 8.9, which is the parent
DFD. Why not? The answer is based on a simple rule: When drawing DFDs, you show a
data store only when two or more processes use that data store. The CUSTOMERS and
PRODUCTS data stores were internal to the FILL ORDER process, so the analyst did not
show them on diagram 0, which is the parent. When you explode the FILL ORDER process
into diagram 1 DFD, however, you see that three processes (1.1, 1.2, and 1.3) interact with
the two data stores, which now are shown.
Order System Diagram 0 DFD:
Order System Diagram 3 DFD
FIGURE 8.11 The order system diagram 0 is shown at the top of the figure, and exploded diagram 3 DFD (for the APPLY
PAYMENT process) is shown at the bottom. The two DFDs are balanced, because the child diagram at the bottom has the
same input and output flows as the parent process 3 shown at the top.
LOGICAL VERSUS PHYSICAL MODELS
While structured analysis tools are used to develop a logical model for a new
information system, such tools also can be used to develop physical models of an
information system. A physical model shows how the system’s requirements are
implemented. During the systems design phase, you create a physical model of the new
information system that follows from the logical model and involves operational tasks and
techniques.
Sequence of Models
Many systems analysts create a physical model of the current system and
then develop a logical model of the current system before tackling a logical model of the
new system. Performing that extra step allows them to understand the current system better.
Four-Model Approach
Many analysts follow a four-model approach, which means that they
develop a physical model of the current system, a logical model of the current system, a
logical model of the new system, and a physical model of the new system. The major benefit
of the four-model approach is that it gives you a clear picture of current system functions
before you make any modifications or improvements. That is important because mistakes
made early in systems development will affect later SDLC phases and can result in unhappy
users and additional costs. Taking additional steps to avoid these potentially costly mistakes
can prove to be well worth the effort. Another advantage is that the requirements of a new
information system often are quite similar to those of the current information system,
especially where the proposal is based on new computer technology rather than a large
number of new requirements. Adapting the current system logical model to the new system
logical model in these cases is a straightforward process. The only disadvantage of the four-
model approach is the added time and cost needed to develop a logical and physical model
of the current system. Most projects have very tight schedules that might not allow time to
create the current system models.
Additionally, users and managers want to see progress on the new system —
they are much less concerned about documenting the current system. As a systems analyst,
you must stress the importance of careful documentation and resist the pressure to hurry the
development process at the risk of creating serious problems later.
Chapter 09
Design Concepts
9.1. Design within the Context of Software Engineering:
Software design sits at the technical kernel of software engineering and is
applied regardless of the software process model that is used. Beginning once software
requirements have been analyzed and modeled, software design is the last software
engineering action within the modeling activity and sets the stage for construction (code
generation and testing).
Figure 9.1: Translating the requirements model into the design model
Each of the elements of the requirements model provides information that is
necessary to create the four design models required for a complete specification of design.
The requirements model, through its scenario-based, class-based, flow-oriented, and behavioral
elements, feeds the design task. Using design notation and design methods, design produces a
data/class design, an architectural design, an interface design, and a component design.
The data/class design transforms class models into design class realizations
and the requisite data structures required to implement the software. The objects and
relationships defined in the CRC diagram and the detailed data content depicted by class
attributes and other notation provide the basis for the data design action. Part of class design
may occur in conjunction with the design of software architecture. More detailed class
design occurs as each software component is designed.
The architectural design defines the relationship between major structural
elements of the software, the architectural styles and design patterns that can be used to
achieve the requirements defined for the system, and the constraints that affect the way in
which architecture can be implemented. The architectural design representation—the
framework of a computer-based system—is derived from the requirements model.
The interface design describes how the software communicates with
systems that interoperate with it, and with humans who use it. An interface implies a flow of
information (e.g., data and/or control) and a specific type of behavior. Therefore, usage
scenarios and behavioral models provide much of the information required for interface
design.
The component-level design transforms structural elements of the software
architecture into a procedural description of software components. Information obtained
from the class-based models, flow models, and behavioral models serve as the basis for
component design. During design you make decisions that will ultimately affect the success
of software construction and, as important, the ease with which software can be maintained.
Importance of Software Design:
The importance of software design can be stated with a single word—quality.
Design is the place where quality is promoted in software engineering. Design provides you
with representations of software that can be assessed for quality. Design is used to translate
stakeholder’s requirements into a finished software product or system. Software design
serves as the foundation for all the software engineering and software support activities that
follow. Without design, you risk building an unstable system: one that will fail when small
changes are made, that may be difficult to test, and whose quality cannot be assessed until
late in the software process, when time is short and most of the budget has already been spent.
9.2 The Design Process:
Software design is an iterative process through which requirements are
translated into a “blueprint” for constructing the software. Initially, the blueprint depicts a
comprehensive view of software. Design is represented at a high level of abstraction that
can be directly traced to the specific system objective and more detailed data, functional,
and behavioral requirements. As design iterations occur, subsequent refinement leads to
design representations at much lower levels of abstraction. These can still be traced to
requirements, but the connection is more subtle.
9.2.1 Software Quality Guidelines and Attributes
Throughout the design process, the quality of the evolving design is assessed
with a series of technical reviews. Three characteristics serve as a guide for the
evaluation of a good design:
• The design must implement all of the explicit requirements contained in the
requirements model, and it must accommodate all of the implicit requirements desired
by stakeholders.
• The design must be a readable, understandable guide for those who generate code and
for those who test and subsequently support the software.
• The design should provide a complete picture of the software, addressing the data,
functional, and behavioral domains from an implementation perspective.
Each of these characteristics is actually a goal of the design process.
Quality Guidelines:
In order to evaluate the quality of a design representation, members of the
software team must establish technical criteria for good design. The following guidelines
help in assessing the quality of a design:
1. A design should exhibit an architecture that (1) has been created using recognizable
architectural styles or patterns, (2) is composed of components that exhibit good design
characteristics, and (3) can be implemented in an evolutionary fashion, thereby facilitating
implementation and testing.
2. A design should be modular; that is, the software should be logically partitioned into
elements or subsystems.
3. A design should contain distinct representations of data, architecture, interfaces, and
components.
4. A design should lead to data structures that are appropriate for the classes to be
implemented and are drawn from recognizable data patterns.
5. A design should lead to components that exhibit independent functional characteristics.
6. A design should lead to interfaces that reduce the complexity of connections between
components and with the external environment.
7. A design should be derived using a repeatable method that is driven by information
obtained during software requirements analysis.
8. A design should be represented using a notation that effectively communicates its
meaning.
Quality Attributes:
Hewlett-Packard developed a set of software quality attributes that has been
given the acronym FURPS—functionality, usability, reliability, performance, and
supportability. The FURPS quality attributes represent a target for all software design:
• Functionality is assessed by evaluating the feature set and capabilities of the program, the
generality of the functions that are delivered, and the security of the overall system.
• Usability is assessed by considering human factors, overall aesthetics, consistency, and
documentation.
• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy of
output results, the mean-time-to-failure (MTTF), the ability to recover from failure, and the
predictability of the program.
• Performance is measured by considering processing speed, response time, resource
consumption, throughput, and efficiency.
• Supportability combines the ability to extend the program (extensibility), adaptability,
serviceability—these three attributes represent a more common term, maintainability—and
in addition, testability, compatibility, configurability (the ability to organize and control
elements of the software configuration), the ease with which a system can be installed, and
the ease with which problems can be localized.
Not every software quality attribute is weighted equally as the software
design is developed. One application may stress functionality with a special emphasis on
security. Another may demand performance with particular emphasis on processing speed.
A third might focus on reliability. Regardless of the weighting, it is important to note that
these quality attributes must be considered as design commences, not after the design is
complete and construction has begun.
9.2.2 The Evolution of Software Design:
The evolution of software design is a continuing process that has now
spanned almost six decades. Early design work concentrated on criteria for the development
of modular programs and methods for refining software structures in a top down manner.
Procedural aspects of design definition evolved into a philosophy called structured
programming. Later work proposed methods for the translation of data flow or data structure
into a design definition. Newer design approaches proposed an object-oriented approach to
design derivation. More recent emphasis in software design has been on software
architecture and the design patterns that can be used to implement software architectures and
lower levels of design abstractions. Growing emphasis on aspect-oriented methods, model-
driven development, and test-driven development emphasize techniques for achieving more
effective modularity and architectural structure in the designs that are created.
A number of design methods are being applied throughout the industry. Each
software design method introduces unique heuristics and notation, as well as a somewhat
narrow view of what characterizes design quality. All of these methods share a number of
common characteristics:
1. A mechanism for the translation of the requirements model into a design
representation.
2. A notation for representing functional components and their interfaces.
3. Heuristics for refinement and partitioning
4. Guidelines for quality assessment.
Regardless of the design method that is used, a set of basic concepts must be
applied to data, architectural, interface, and component-level design. These concepts are
considered in the sections that follow.
9.3 DESIGN CONCEPTS
A set of fundamental software design concepts has evolved over the history
of software engineering. Although the degree of interest in each concept has varied over the
years, each has stood the test of time. Each provides the software designer with a foundation
from which more sophisticated design methods can be applied. Each helps to answer the
following questions:
• What criteria can be used to partition software into individual components?
• How is function or data structure detail separated from a conceptual representation of the
software?
• What uniform criteria define the technical quality of a software design?
9.3.1 Abstraction
Many levels of abstraction can be posed for a modular solution to any
problem. At the highest level of abstraction, a solution is stated in broad terms using the
language of the problem environment. At lower levels of abstraction, a more detailed
description of the solution is provided. Problem-oriented terminology is coupled with
implementation-oriented terminology in an effort to state a solution. Finally, at the lowest
level of abstraction, the solution is stated in a manner that can be directly implemented.
As different levels of abstraction are developed, you work to create both procedural and data
abstractions. A procedural abstraction refers to a sequence of instructions that have a
specific and limited function. The name of a procedural abstraction implies these functions,
but specific details are suppressed. A data abstraction is a named collection of data that
describes a data object.
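These two kinds of abstraction can be sketched in a few lines of code. This is an illustrative example only: the `Door` class and `open_door` function are invented names, not from the text.

```python
# Illustrative sketch (names invented): procedural vs. data abstraction.

class Door:
    """Data abstraction: 'Door' names a collection of data describing an
    object; the internal representation stays behind the name."""
    def __init__(self, locked=True):
        self._locked = locked   # internal detail, suppressed from callers
        self.is_open = False
    def unlock(self):
        self._locked = False
    def push(self):
        if not self._locked:
            self.is_open = True

def open_door(door):
    """Procedural abstraction: the name implies a specific, limited
    sequence of instructions whose details are suppressed."""
    door.unlock()
    door.push()

d = Door()
open_door(d)          # caller relies on the name, not the steps inside
print(d.is_open)      # True
```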
9.3.2 Architecture
Software architecture determines “the overall structure of the software and the
ways in which that structure provides conceptual integrity for a system”. Architecture is the
structure or organization of program components, the manner in which these components
interact, and the structure of data that are used by the components. In a broader sense,
however, components can be generalized to represent major system elements and their
interactions.
Shaw and Garlan describe a set of properties that should be specified as part of
an architectural design:
Structural properties: Structural properties define the components of a system (e.g.,
modules, objects, filters) and the manner in which those components are packaged and
interact with one another. For example, objects are packaged to encapsulate both data and
the processing that manipulates the data and interact via the invocation of methods.
Extra-functional properties: Extra Functional properties address how the design
architecture achieves requirements for performance, capacity, reliability, security,
adaptability, and other system characteristics.
Families of related systems: The architectural design should draw upon repeatable patterns
that are commonly encountered in the design of families of similar systems. In essence, the
design should have the ability to reuse architectural building blocks. Given the specification
of these properties, the architectural design can be represented using one or more different
models.
Structural models represent architecture as an organized collection of program components.
Framework models increase the level of design abstraction by attempting to identify
repeatable architectural design frameworks that are encountered in similar types of
applications.
Dynamic models address the behavioral aspects of the program architecture, indicating how
the structure or system configuration may change as a function of external events.
Process models focus on the design of the business or technical process that the system must
accommodate.
Functional models can be used to represent the functional hierarchy of a system.
9.3.3 Patterns
A design pattern describes a design structure that solves a particular design
problem within a specific context and amid “forces” that may have an impact on the manner in
which the pattern is applied and used.
The intent of each design pattern is to provide a description that enables a
designer to determine:
(1) Whether the pattern is applicable to the current work
(2) Whether the pattern can be reused (hence, saving design time)
(3) Whether the pattern can serve as a guide for developing a similar, but functionally or
structurally different pattern.
9.3.4 Separation of Concerns
Separation of concerns is a design concept that suggests that any complex
problem can be more easily handled if it is subdivided into pieces that can each be solved
and optimized independently. A concern is a feature or behavior that is specified as part of
the requirements model for the software. By separating concerns into smaller and therefore
more manageable pieces, a problem takes less effort and time to solve.
Separation of concerns is manifested in other related design concepts:
modularity, aspects, functional independence, and refinement.
9.3.5 Modularity
Modularity is the most common manifestation of separation of concerns.
Software is divided into separately named and addressable components, called modules, that
are integrated to satisfy problem requirements. Modularity is the single attribute of
software that allows a program to be intellectually manageable. Monolithic software
(i.e., a large program composed of a single module) cannot be easily grasped by a software
engineer.
9.3.6 Information Hiding
The principle of information hiding suggests that modules be “characterized
by design decisions that (each) hides from all others.” Modules should be specified and
designed so that information (algorithms and data) contained within a module is inaccessible
to other modules that have no need for such information. Hiding implies that effective
modularity can be achieved by defining a set of independent modules that communicate with
one another by passing only that information necessary to achieve software function. Hiding defines
and enforces access constraints to both procedural detail within a module and any local data
structure used by the module.
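Information hiding can be illustrated with a small class. This is a hedged sketch; the stack example is invented, not from the text: the list used for storage is a hidden design decision, so clients depend only on the `push`/`pop` interface, never on the representation.

```python
# Illustrative sketch of information hiding (names invented).

class Stack:
    def __init__(self):
        self.__items = []      # double underscore: name-mangled, so the
                               # local data structure is hidden from clients
    def push(self, x):
        self.__items.append(x)
    def pop(self):
        return self.__items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2 -- clients never touch the underlying list directly
```

If the hidden list were later replaced by a linked structure, no client code would change: that is exactly the benefit the principle promises.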
9.3.7 Functional Independence
The concept of functional independence is a direct outgrowth of separation of
concerns, modularity, and the concepts of abstraction and information hiding.
Functional independence is achieved by developing modules with “single-minded” function
and an “aversion” to excessive interaction with other modules. Each module should
address a specific subset of requirements and have a simple interface when viewed from
other parts of the program structure. Software with effective modularity, that is, independent
modules, is easier to develop because function can be compartmentalized and interfaces are
simplified. Independent modules are easier to maintain and test.
Independence is assessed using two qualitative criteria: cohesion and coupling.
Cohesion is an indication of the relative functional strength of a module. Coupling is an
indication of the relative interdependence among modules. Cohesion is a natural extension
of the information-hiding concept. A cohesive module performs a single task, requiring little
interaction with other components in other parts of a program. Stated simply, a cohesive
module should (ideally) do just one thing. Although you should always strive for high
cohesion (i.e., single-mindedness), it is often necessary and advisable to have a software
component perform multiple functions. However, “schizophrenic” components (modules
that perform many unrelated functions) are to be avoided if a good design is to be achieved.
Coupling is an indication of interconnection among modules in a software structure.
Coupling depends on the interface complexity between modules, the point at which entry or
reference is made to a module, and what data pass across the interface. In software design,
you should strive for the lowest possible coupling. Simple connectivity among modules
results in software that is easier to understand and less prone to a “ripple effect”, caused
when errors occur at one location and propagate throughout a system.
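High cohesion and low coupling can be shown side by side in a few lines. A minimal sketch under invented names: each function does exactly one thing, and the functions interact only through plain parameters, not shared state or each other's internals.

```python
# Illustrative sketch (names invented): cohesive, loosely coupled modules.

def compute_subtotal(prices):
    """Cohesive: performs the single task of summing prices."""
    return sum(prices)

def apply_tax(subtotal, rate):
    """Loosely coupled: plain data in, plain data out; no knowledge of
    how the subtotal was produced."""
    return subtotal * (1 + rate)

total = apply_tax(compute_subtotal([10.0, 5.0]), 0.5)
print(total)  # 22.5
```

Because the coupling is limited to simple parameter passing, either function can be changed or tested in isolation without a ripple effect on the other.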
9.3.8 Refinement
Stepwise refinement is a top-down design strategy originally proposed by
Niklaus Wirth. A program is developed by successively refining levels of procedural detail.
A hierarchy is developed by decomposing a macroscopic statement of function (a procedural
abstraction) in a stepwise fashion until programming language statements are reached.
Refinement is actually a process of elaboration. You begin with a statement of function (or
description of information) that is defined at a high level of abstraction. That is, the
statement describes function or information conceptually but provides no information about
the internal workings of the function or the internal structure of the information. You then
elaborate on the original statement, providing more and more detail as each successive
refinement (elaboration) occurs.
Abstraction and refinement are complementary concepts. Abstraction enables you to specify
procedure and data internally but suppress the need for “outsiders” to have knowledge of
low-level details. Refinement helps you to reveal low-level details as design progresses.
Both concepts allow you to create a complete design model as the design evolves.
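The elaboration process described above can be traced in code. This is a hedged illustration; the class-average example and its names are invented, not from the text.

```python
# Illustrative sketch of stepwise refinement (names invented).
#
# Refinement 1 (high abstraction): "report the class average."
# Refinement 2: read the scores; compute the average; format a message.
# Refinement 3 (directly implementable): language statements for each step.

def class_average(scores):
    total = sum(scores)            # elaborated from "compute the average"
    return total / len(scores)

def report(scores):
    avg = class_average(scores)    # each successive level adds detail
    return f"Class average: {avg:.1f}"

print(report([80, 90, 100]))  # Class average: 90.0
```

Each refinement level could be traced back to the original high-level statement, which is exactly the property the text attributes to elaboration.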
9.3.9 Aspects
As requirements analysis occurs, a set of “concerns” is uncovered. These
concerns “include requirements, use cases, features, data structures, quality-of-service
issues, variants, intellectual property boundaries, collaborations, patterns and contracts”.
Ideally, a requirements model can be organized in a way that allows you to isolate each
concern (requirement) so that it can be considered independently. In practice, however, some
of these concerns span the entire system and cannot be easily compartmentalized.
9.3.10 Refactoring
An important design activity suggested for many agile methods, refactoring
is a reorganization technique that simplifies the design (or code) of a component without
changing its function or behavior. “Refactoring is the process of changing a software system
in such a way that it does not alter the external behavior of the code yet improves its internal
structure.”
When software is refactored, the existing design is examined for redundancy, unused design
elements, inefficient or unnecessary algorithms, poorly constructed or inappropriate data
structures, or any other design failure that can be corrected to yield a better design. For
example, a first design iteration might yield a component that exhibits low cohesion. After
careful consideration, you may decide that the component should be refactored into three
separate components, each exhibiting high cohesion.
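A before-and-after pair makes the definition concrete. This sketch is illustrative (the pricing logic and names are invented): external behavior is identical, but the refactored version separates two previously tangled duties.

```python
# Illustrative refactoring sketch (names invented): behavior unchanged,
# internal structure improved.

def total_before(items):
    """'Before': totalling and discounting are tangled in one function."""
    total = 0
    for price, qty in items:
        total += price * qty
    total = total - (total * 0.1 if total > 100 else 0)
    return total

# 'After': refactored into two cohesive pieces.
def subtotal(items):
    return sum(price * qty for price, qty in items)

def apply_discount(amount, threshold=100, rate=0.1):
    return amount - amount * rate if amount > threshold else amount

def total_after(items):
    return apply_discount(subtotal(items))

items = [(50, 2), (20, 1)]
print(total_before(items) == total_after(items))  # True: behavior preserved
```

The equality check is the essence of refactoring: the external behavior of the code does not change, yet the internal structure is now easier to test and extend.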
9.3.12 Design Classes
The requirements model defines a set of analysis classes. Each describes
some element of the problem domain, focusing on aspects of the problem that are user
visible. The level of abstraction of an analysis class is relatively high. As the design model
evolves, a set of design classes is defined. These refine the analysis classes by providing design
detail that will enable the classes to be implemented, and they implement a software
infrastructure that supports the business solution. Five different types of design classes, each
representing a different layer of the design architecture, can be developed:
• User interface classes: define all abstractions that are necessary for human computer
interaction (HCI). In many cases, HCI occurs within the context of a metaphor (e.g., a
checkbook, an order form, a fax machine), and the design classes for the interface may be
visual representations of the elements of the metaphor.
• Business domain classes: are often refinements of the analysis classes defined earlier. The
classes identify the attributes and services (methods) that are required to implement some
element of the business domain.
• Process classes implement lower-level business abstractions required to fully manage the
business domain classes.
• Persistent classes represent data stores (e.g., a database) that will persist beyond the
execution of the software.
• System classes implement software management and control functions that enable the
system to operate and communicate within its computing environment and with the outside
world.
There are four characteristics of a well-formed design class:
Complete and sufficient: A design class should be the complete encapsulation of all
attributes and methods that can reasonably be expected to exist for the class.
Primitiveness: Methods associated with a design class should be focused on accomplishing
one service for the class. Once the service has been implemented with a method, the class
should not provide another way to accomplish the same thing.
High cohesion: A cohesive design class has a small, focused set of responsibilities and
single-mindedly applies attributes and methods to implement those responsibilities.
Low coupling: It is necessary for design classes to collaborate with one another. However,
collaboration should be kept to an acceptable minimum. If a design model is highly coupled
(all design classes collaborate with all other design classes), the system is difficult to
implement, to test, and to maintain over time. In general, design classes within a subsystem
should have only limited knowledge of other classes. This restriction, called the Law of
Demeter, suggests that a method should only send messages to methods in neighboring
classes.
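The Law of Demeter can be demonstrated with a short example. All classes and names here are invented for illustration: a method sends messages only to its immediate neighbor instead of reaching through it.

```python
# Illustrative Law of Demeter sketch (names invented).

class Wallet:
    def __init__(self, balance):
        self._balance = balance
    def deduct(self, amount):
        self._balance -= amount

class Customer:
    def __init__(self, balance):
        self._wallet = Wallet(balance)
    def pay(self, amount):
        self._wallet.deduct(amount)   # Customer talks to its own Wallet

class Register:
    def charge(self, customer, amount):
        # A violation would be: customer._wallet.deduct(amount)
        customer.pay(amount)          # message goes only to the neighbor

r = Register()
c = Customer(100)
r.charge(c, 30)
print(c._wallet._balance)  # 70
```

Because `Register` knows nothing about `Wallet`, the wallet's representation can change without touching `Register`, which is the low-coupling benefit the restriction is meant to preserve.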
FIGURE 9.2: Design class for Floor Plan and composite aggregation
9.4 THE DESIGN MODEL
The design model can be viewed in two different dimensions. The process dimension
indicates the evolution of the design model as design tasks are executed as part of the
software process. The abstraction dimension represents the
level of detail as each element of the analysis model is transformed into a design equivalent
and then refined iteratively. The dashed line indicates the boundary between the analysis and
design models. In some cases, a clear distinction between the analysis and design models is
possible. In other cases, the analysis model slowly blends into the design and a clear
distinction is less obvious.
The elements of the design model use many of the same UML diagrams that
were used in the analysis model. The difference is that these diagrams are refined and
elaborated as part of design; more implementation-specific detail is provided, and
architectural structure and style, components that reside within the architecture, and
interfaces between the components and with the outside world are all emphasized.
FIGURE 9.3 Dimensions of the design model
9.4.1 Data Design Elements
Like other software engineering activities, data design creates a model of data
and information that is represented at a high level of abstraction. This data model is then
refined into progressively more implementation-specific representations that can be
processed by the computer-based system. In many software applications, the architecture of
the data will have a profound influence on the architecture of the software that must process
it.
The structure of data has always been an important part of software design.
At the program component level, the design of data structures and the associated algorithms
required to manipulate them is essential to the creation of high-quality applications. At the
application level, the translation of a data model (derived as part of requirements
engineering) into a database is pivotal to achieving the business objectives of a system. At
the business level, the collection of information stored in disparate databases and
reorganized into a “data warehouse” enables data mining or knowledge discovery that can
have an impact on the success of the business itself. In every case, data design plays an
important role.
9.4.2 Architectural Design Elements
Architectural design elements give us an overall view of the software.
The architectural model is derived from three sources:
(1) Information about the application domain for the software to be built;
(2) Specific requirements model elements such as data flow diagrams or analysis classes,
their relationships and collaborations for the problem at hand;
(3) The availability of architectural styles and patterns.
The architectural design element is a set of interconnected subsystems, often
derived from analysis packages within the requirements model. Each subsystem may have
its own architecture (e.g., a graphical user interface might be structured according to a
preexisting architectural style for user interfaces).
9.4.3 Interface Design Elements
The interface design elements for software depict information flows into and
out of the system and how information is communicated among the components defined as
part of the architecture.
There are three important elements of interface design:
1. The user interface (UI);
2. External interfaces to other systems, devices, networks, or other producers or consumers
of information;
3. Internal interfaces between various design components.
These interface design elements allow the software to communicate externally
and enable internal communication and collaboration among the components that populate
the software architecture.
UI design (increasingly called usability design) is a major software
engineering action. Usability design incorporates aesthetic elements (e.g., layout, color,
graphics, interaction mechanisms), ergonomic elements (e.g., information layout and
placement, metaphors, UI navigation), and technical elements (e.g., UI patterns, reusable
components). In general, the UI is a unique subsystem within the overall application
architecture. The design of external interfaces requires definitive information about the
entity to which information is sent or received. In every case, this information should be
collected during requirements engineering and verified once the interface design
commences. The design of external interfaces should incorporate error checking and (when
necessary) appropriate security features.
The design of internal interfaces is closely aligned with component-level
design. Design realizations of analysis classes represent all operations and the messaging
schemes required to enable communication and collaboration between operations in various
classes. Each message must be designed to accommodate the requisite information transfer
and the specific functional requirements of the operation that has been requested. If the
classic input process-output approach to design is chosen, the interface of each software
component is designed based on data flow representations and the functionality described in
a processing narrative.
In some cases, an interface is modeled in much the same way as a class. In
UML, an interface is defined in the following manner: “An interface is a specifier for the
externally-visible operations of a class, component, or other classifier (including
subsystems) without specification of internal structure.” Stated more simply, an interface is a
set of operations that describes some part of the behavior of a class and provides access to
these operations.
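The UML notion of an interface as a set of externally visible operations, specified without any internal structure, can be sketched in Python using an abstract base class. The class and operation names below are purely illustrative, not from the text:

```python
from abc import ABC, abstractmethod

class SensorInterface(ABC):
    """Interface: externally visible operations only, no internal structure."""

    @abstractmethod
    def read(self) -> float:
        """Return the current sensor value."""

    @abstractmethod
    def calibrate(self, offset: float) -> None:
        """Adjust future readings by the given offset."""

class TemperatureSensor(SensorInterface):
    """One of many possible realizations of the interface."""

    def __init__(self) -> None:
        self._offset = 0.0          # internal structure hidden from clients

    def read(self) -> float:
        return 21.5 + self._offset  # stand-in for real hardware access

    def calibrate(self, offset: float) -> None:
        self._offset = offset
```

Clients depend only on `SensorInterface`; any class that supplies the two operations can be substituted without clients knowing its internals.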
9.4.4 Component-Level Design Elements
The component-level design for software fully describes the internal detail of
each software component. The component-level design defines data structures for all local
data objects and algorithmic detail for all processing that occurs within a component and an
interface that allows access to all component operations (behaviors).
The design details of a component can be modeled at many different levels of
abstraction. A UML activity diagram can be used to represent processing logic. Detailed
procedural flow for a component can be represented using either pseudocode or some
other diagrammatic form (e.g., flowchart or box diagram). Algorithmic structure follows
the rules established for structured programming (i.e., a set of constrained procedural
constructs). Data structures, selected based on the nature of the data objects to be
processed, are usually modeled using pseudocode or the programming language to be used
for implementation.
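The three ingredients named above (local data structures, algorithmic detail, and an interface to the component's operations) can be shown together in a minimal sketch; the `PageBuffer` component and its behavior are hypothetical:

```python
class PageBuffer:
    """A hypothetical component: local data structure, internal algorithm,
    and a small interface exposing its operations (behaviors)."""

    def __init__(self, capacity: int) -> None:
        self._capacity = capacity      # local data object
        self._pages: list[str] = []    # local data structure

    # --- interface: the only access to the component's behavior ---
    def add(self, page: str) -> None:
        """Algorithmic detail: evict the oldest page when capacity is reached."""
        if len(self._pages) >= self._capacity:
            self._pages.pop(0)
        self._pages.append(page)

    def contents(self) -> list[str]:
        return list(self._pages)       # defensive copy keeps internals hidden
```

At component-level design, exactly these decisions (eviction algorithm, list representation, exposed operations) are what get documented.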
9.4.5 Deployment-Level Design Elements
Deployment-level design elements indicate how software functionality and
subsystems will be allocated within the physical computing environment that will support
the software.
During design, a UML deployment diagram is developed and then refined. In
Figure 9.4, three computing environments are shown. The subsystems (functionality)
housed within each computing element are indicated. For example, the personal computer
houses subsystems that implement security, surveillance, home management, and
communications features. In addition, an external access subsystem has been designed to
manage all attempts to access the SafeHome system from an external source. Each
subsystem would be elaborated to indicate the components that it implements. The
deployment diagram shows the computing environment but does not explicitly indicate
configuration details. For example, the “personal computer” is not further identified. It could
be a Mac or a Windows-based PC, a Sun workstation, or a Linux-box. These details are
provided when the deployment diagram is revisited in instance form during the latter stages
of design or as construction begins. Each instance of the deployment (a specific, named
hardware configuration) is identified.
FIGURE 9.4: A UML deployment diagram
Chapter 10
Architectural Design
10.1 SOFTWARE ARCHITECTURE
Shaw and Garlan discuss software architecture in the following manner:
“Ever since the first program was divided into modules, software systems have had
architectures, and programmers have been responsible for the interactions among the
modules and the global properties of the assemblage. Historically, architectures have been
implicit accidents of implementation, or legacy systems of the past. Good software
developers have often adopted one or several architectural patterns as strategies for system
organization, but they use these patterns informally and have no means to make them
explicit in the resulting system.”
Today, effective software architecture and its explicit representation and design have
become dominant themes in software engineering.
10.1.1 What Is Architecture?
Software Architecture Definitions
UML 1.3:
Architecture is the organizational structure of a system. An architecture can be recursively
decomposed into parts that interact through interfaces, relationships that connect parts, and
constraints for assembling parts. Parts that interact through interfaces include classes,
components and subsystems.
Bass, Clements, and Kazman. Software Architecture in Practice, Addison-Wesley
1997:
"The software architecture of a program or computing system is the structure or structures of
the system, which comprise software components, the externally visible properties of those
components, and the relationships among them.
By "externally visible" properties, we are referring to those assumptions other components
can make of a component, such as its provided services, performance characteristics, fault
handling, shared resource usage, and so on. The intent of this definition is that a software
architecture must abstract away some information from the system (otherwise there is no
point looking at the architecture, we are simply viewing the entire system) and yet provide
enough information to be a basis for analysis, decision making, and hence risk reduction."
Garlan and Perry, guest editorial to the IEEE Transactions on Software Engineering,
April 1995:
Software architecture is "the structure of the components of a program/system, their
interrelationships, and principles and guidelines governing their design and evolution over
time."
IEEE Glossary
Architectural design: The process of defining a collection of hardware and software
components and their interfaces to establish the framework for the development of a
computer system.
Shaw and Garlan
The architecture of a system defines that system in terms of computational components and
interactions among those components. Components are such things as clients and servers,
databases, filters, and layers in a hierarchical system. Interactions among components at this
level of design can be simple and familiar, such as procedure call and shared variable access.
But they can also be complex and semantically rich, such as client-server protocols, database
accessing protocols, asynchronous event multicast, and piped streams.
When you consider the architecture of a building, many different attributes come to mind. At
the most simplistic level, you think about the overall shape of the physical structure. But in
reality, architecture is much more. It is the manner in which the various components of the
building are integrated to form a cohesive whole. It is the way in which the building fits into
its environment and meshes with other buildings in its vicinity. It is the degree to which the
building meets its stated purpose and satisfies the needs of its owner. It is the aesthetic feel
of the structure—the visual impact of the building—and the way textures, colors, and
materials are combined to create the external facade and the internal “living environment.” It
is small details— the design of lighting fixtures, the type of flooring, the placement of wall
hangings, the list is almost endless. And finally, it is art.
But architecture is also something else. It is “thousands of decisions, both big and small”.
Some of these decisions are made early in design and can have a profound impact on all
other design actions. Others are delayed until later, thereby eliminating overly restrictive
constraints that would lead to a poor implementation of the architectural style.
But what about software architecture? Bass, Clements, and Kazman define this elusive term
in the following way:
The software architecture of a program or computing system is the structure or structures of
the system, which comprise software components, the externally visible properties of those
components, and the relationships among them.
The architecture is not the operational software. Rather, it is a representation that enables
you to
(1) analyze the effectiveness of the design in meeting its stated requirements,
(2) consider architectural alternatives at a stage when making design changes is still
relatively easy, and
(3) reduce the risks associated with the construction of the software.
This definition emphasizes the role of “software components” in any architectural
representation. In the context of architectural design, a software component can be
something as simple as a program module or an object-oriented class, but it can also be
extended to include databases and “middleware” that enable the configuration of a network
of clients and servers. The properties of components are those characteristics that are
necessary for an understanding of how the components interact with other components. At
the architectural level, internal properties (e.g., details of an algorithm) are not specified.
The relationships between components can be as simple as a procedure call from one
module to another or as complex as a database access protocol.
Some members of the software engineering community make a distinction between the
actions associated with the derivation of a software architecture (what I call “architectural
design”) and the actions that are applied to derive the software design. As one reviewer of
this edition noted:
There is a distinct difference between the terms architecture and design. A design is an
instance of an architecture similar to an object being an instance of a class. For example,
consider the client-server architecture. I can design a network-centric software system in
many different ways from this architecture using either the Java platform (Java EE) or
Microsoft platform (.NET framework). So, there is one architecture, but many designs can
be created based on that architecture. Therefore, you cannot mix “architecture” and “design”
with each other.
A software design is an instance of a specific software architecture; the elements and
structures that are defined as part of an architecture are the root of every design that evolves
from them. Design begins with a consideration of architecture.
Design of software architecture considers two levels of the design pyramid (Figure 8.1)—
data design and architectural design. In the context of the preceding discussion, data design
enables you to represent the data component of the architecture in conventional systems and
class definitions (encompassing attributes and operations) in object-oriented systems.
Architectural design focuses on the representation of the structure of software components,
their properties, and interactions.
10.1.2: Importance of Architecture:
Bass and his colleagues [Bas03] identify three key reasons that software architecture is
important:
• Representations of software architecture are an enabler for communication between all
parties (stakeholders) interested in the development of a computer-based system.
• The architecture highlights early design decisions that will have a profound impact on all
software engineering work that follows and, as important, on the ultimate success of the
system as an operational entity.
• Architecture “constitutes a relatively small, intellectually graspable model of how the
system is structured and how its components work together”.
The architectural design model and the architectural patterns contained within it are
transferable. That is, architecture genres, styles, and patterns can be applied to the design of
other systems and represent a set of abstractions that enable software engineers to describe
architecture in predictable ways.
10.1.3 Architectural Descriptions
Each of us has a mental image of what the word architecture means. In reality, however, it
means different things to different people. The implication is that different stakeholders will
see an architecture from different viewpoints that are driven by different sets of concerns.
This implies that an architectural description is actually a set of work products that reflect
different views of the system.
For example, the architect of a major office building must work with a variety of different
stakeholders. The primary concern of the owner of the building (one stakeholder) is to
ensure that it is aesthetically pleasing and that it provides sufficient office space and
infrastructure to ensure its profitability. Therefore, the architect must develop a description
using views of the building that address the owner’s concerns.
The viewpoints used are three-dimensional drawings of the building (to illustrate the
aesthetic view) and a set of two-dimensional floor plans to address this stakeholder’s
concern for office space and infrastructure.
But the office building has many other stakeholders, including the structural steel fabricator
who will provide steel for the building skeleton. The structural steel fabricator needs detailed
architectural information about the structural steel that will support the building, including
types of I-beams, their dimensions, connectivity, materials, and many other details. These
concerns are addressed by different work products that represent different views of the
architecture. Specialized drawings (another viewpoint) of the structural steel skeleton of the
building focus on only one of many of the fabricator’s concerns.
An architectural description of a software-based system must exhibit characteristics that are
analogous to those noted for the office building. Tyree and Akerman note this when they
write: “Developers want clear, decisive guidance on how to proceed with design. Customers
want a clear understanding on the environmental changes that must occur and assurances
that the architecture will meet their business needs. Other architects want a clear, salient
understanding of the architecture’s key aspects.” Each of these “wants” is reflected in a
different view represented using a different viewpoint.
The IEEE Computer Society has proposed IEEE-Std-1471-2000, Recommended Practice for
Architectural Description of Software-Intensive Systems, with the following objectives:
(1) to establish a conceptual framework and vocabulary for use during the design of software
architecture,
(2) to provide detailed guidelines for representing an architectural description, and
(3) to encourage sound architectural design practices.
The IEEE standard defines an architectural description (AD) as “a collection of products to
document an architecture.” The description itself is represented using multiple views, where
each view is “a representation of a whole system from the perspective of a related set of
[stakeholder] concerns.” A view is created according to rules and conventions defined in a
viewpoint—“a specification of the conventions for constructing and using a view”.
10.1.4 Architectural Decisions
Each view developed as part of an architectural description addresses a specific stakeholder
concern. To develop each view (and the architectural description as a whole) the system
architect considers a variety of alternatives and ultimately decides on the specific
architectural features that best meet the concern. Therefore, architectural decisions
themselves can be considered to be one view of the architecture.
The reasons that decisions were made provide insight into the structure of a system and its
conformance to stakeholder concerns.
As a system architect, you can use the template suggested in the sidebar to document each
major decision. By doing this, you provide a rationale for your work and establish an
historical record that can be useful when design modifications must be made.
10.2 ARCHITECTURAL GENRES
Although the underlying principles of architectural design apply to all types of architecture,
the architectural genre will often dictate the specific architectural approach to the structure
that must be built. In the context of architectural design, genre implies a specific category
within the overall software domain. Within each category, you encounter a number of
subcategories.
In his evolving Handbook of Software Architecture, Grady Booch suggests the following
architectural genres for software-based systems:
• Artificial intelligence—Systems that simulate or augment human cognition, locomotion,
or other organic processes.
• Commercial and nonprofit—Systems that are fundamental to the operation of a business
enterprise.
• Communications—Systems that provide the infrastructure for transferring and managing
data, for connecting users of that data, or for presenting data at the edge of an infrastructure.
• Content authoring—Systems that are used to create or manipulate textual or multimedia
artifacts.
• Devices—Systems that interact with the physical world to provide some point service for
an individual.
• Entertainment and sports—Systems that manage public events or that provide a large
group entertainment experience.
• Financial—Systems that provide the infrastructure for transferring and managing money
and other securities.
• Games—Systems that provide an entertainment experience for individuals or groups.
• Government—Systems that support the conduct and operations of a local, state, federal,
global, or other political entity.
• Industrial—Systems that simulate or control physical processes.
• Legal—Systems that support the legal industry.
• Medical—Systems that diagnose or heal or that contribute to medical research.
• Military—Systems for consultation, communications, command, control, and intelligence
(C4I) as well as offensive and defensive weapons.
• Operating systems—Systems that sit just above hardware to provide basic software
services.
• Platforms—Systems that sit just above operating systems to provide advanced services.
• Scientific—Systems that are used for scientific research and applications.
• Tools—Systems that are used to develop other systems.
• Transportation—Systems that control water, ground, air, or space vehicles.
• Utilities—Systems that interact with other software to provide some point service.
From the standpoint of architectural design, each genre represents a unique challenge.
Alexandre Francois [Fra03] suggests a software architecture for Immersipresence that can be
applied to a gaming environment. He describes the architecture in the following manner:
SAI (Software Architecture for Immersipresence) is a new software architecture model for
designing, analyzing and implementing applications performing distributed, asynchronous
parallel processing of generic data streams. The goal of SAI is to provide a universal
framework for the distributed implementation of algorithms and their easy integration into
complex systems. . . . The underlying extensible data model and hybrid distributed
asynchronous parallel processing model allow natural and efficient manipulation of generic
data streams, using existing libraries or native code alike. The modularity of the style
facilitates distributed code development, testing, and reuse, as well as fast system design and
integration, maintenance and evolution.
10.3 ARCHITECTURAL STYLES
An architectural style is a descriptive mechanism that differentiates one architecture from
others. The architectural style is a template for construction.
The software that is built for computer-based systems also exhibits one of many
architectural styles. Each style describes a system category that encompasses
(1) a set of components (e.g., a database, computational modules) that perform a function
required by a system;
(2) a set of connectors that enable “communication, coordination and cooperation” among
components;
(3) constraints that define how components can be integrated to form the system;
(4) semantic models that enable a designer to understand the overall properties of a system
by analyzing the known properties of its constituent parts.
An architectural style is a transformation that is imposed on the design of an entire system.
The intent is to establish a structure for all components of the system. In the case where an
existing architecture is to be reengineered, the imposition of an architectural style will result
in fundamental changes to the structure of the software including a reassignment of the
functionality of components.
An architectural pattern, like an architectural style, imposes a transformation on the design
of an architecture. However, a pattern differs from a style in a number of fundamental ways:
(1) the scope of a pattern is less broad, focusing on one aspect of the architecture rather than
the architecture in its entirety;
(2) a pattern imposes a rule on the architecture, describing how the software will handle
some aspect of its functionality at the infrastructure level; and
(3) architectural patterns tend to address specific behavioral issues within the context of the
architecture (e.g., how real-time applications handle synchronization or interrupts).
Patterns can be used in conjunction with an architectural style to shape the overall structure
of a system.
10.3.1 A Brief Taxonomy of Architectural Styles
Data-centered architectures:
A data store (e.g., a file or database) resides at the center of this architecture and is accessed
frequently by other components that update, add, delete, or otherwise modify data within the
store. Client software accesses a central repository. In some cases the data repository is
passive. That is, client software accesses the data independent of any changes to the data or
the actions of other client software. A variation on this approach transforms the repository
into a “blackboard” that sends notifications to client software when data of interest to the
client changes.
Data-centered architectures promote integrability. That is, existing components can be
changed and new client components added to the architecture without concern about other
clients (because the client components operate independently). In addition, data can be
passed among clients using the blackboard mechanism (i.e., the blackboard component
serves to coordinate the transfer of information between clients). Client components
independently execute processes.
FIGURE 10.1: Data-centered architecture
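The "blackboard" variation, in which the repository notifies clients when data of interest changes, can be sketched in a few lines; the class and key names are hypothetical:

```python
class Blackboard:
    """Central data store that notifies registered clients of changes."""

    def __init__(self) -> None:
        self._data: dict[str, object] = {}
        self._subscribers: dict[str, list] = {}

    def subscribe(self, key: str, callback) -> None:
        """A client registers interest in one item of data."""
        self._subscribers.setdefault(key, []).append(callback)

    def put(self, key: str, value: object) -> None:
        """Update the store, then notify every interested client."""
        self._data[key] = value
        for cb in self._subscribers.get(key, []):
            cb(key, value)

    def get(self, key: str) -> object:
        return self._data.get(key)
```

Because clients interact only with the blackboard, new clients can be added without any change to existing ones, which is the integrability property noted above.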
Data-flow architectures:
This architecture is applied when input data are to be transformed through a series of
computational or manipulative components into output data. A pipe-and-filter pattern has a
set of components, called filters, connected by pipes that transmit data from one component
to the next. Each filter works independently of those components upstream and downstream,
is designed to expect data input of a certain form, and produces data output (to the next
filter) of a specified form. However, the filter does not require knowledge of the workings of
its neighboring filters.
If the data flow degenerates into a single line of transforms, it is termed batch sequential.
This structure accepts a batch of data and then applies a series of sequential components
(filters) to transform it.
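A pipe-and-filter structure maps naturally onto Python generators: each filter consumes a stream, transforms it, and yields to the next, with no knowledge of its neighbors. The filters below are illustrative stand-ins:

```python
def read_source(lines):
    """Source filter: emits raw records downstream."""
    yield from lines

def strip_blanks(stream):
    """Filter: passes only non-empty records; knows nothing of its neighbors."""
    for line in stream:
        if line.strip():
            yield line.strip()

def to_upper(stream):
    """Filter: transforms each record into the output form the next stage expects."""
    for line in stream:
        yield line.upper()

# The generator connections act as the pipes between filters.
pipeline = to_upper(strip_blanks(read_source(["alpha", " ", "beta"])))
result = list(pipeline)   # ['ALPHA', 'BETA']
```

Reordering or inserting filters requires no change to the filters themselves, only to how the pipes are connected.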
Call and return architectures:
This architectural style enables you to achieve a program structure that is relatively easy to
modify and scale. A number of sub-styles exist within this category:
• Main program/subprogram architectures. This classic program structure decomposes
function into a control hierarchy where a “main” program invokes a number of program
components that in turn may invoke still other components. Figure 9.3 illustrates an
architecture of this type.
• Remote procedure call architectures. The components of a main program/subprogram
architecture are distributed across multiple computers on a network.
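The main program/subprogram sub-style amounts to a control hierarchy in which a "main" routine invokes subordinate components. A minimal sketch, with hypothetical order-processing names:

```python
def validate(order) -> bool:
    """Subordinate component: checks that the order contains items."""
    return bool(order.get("items"))

def price(order) -> float:
    """Subordinate component: totals the item prices."""
    return sum(order["items"].values())

def process_order(order) -> float:
    """'Main' program: controls the hierarchy, invoking subprograms in turn."""
    if not validate(order):
        raise ValueError("empty order")
    return price(order)

total = process_order({"items": {"widget": 3.0, "gear": 2.5}})   # 5.5
```

In the remote procedure call variant, `validate` and `price` would run on other machines, but the control structure would be unchanged.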
Object-oriented architectures: The components of a system encapsulate data and the
operations that must be applied to manipulate the data. Communication and coordination
between components are accomplished via message passing.
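A minimal sketch of the object-oriented style, with illustrative names: each object encapsulates its data with the operations on that data, and objects coordinate purely by sending messages (method calls):

```python
class Account:
    """Encapsulates data (the balance) with the operations that manipulate it."""

    def __init__(self, balance: float) -> None:
        self._balance = balance

    def withdraw(self, amount: float) -> float:
        self._balance -= amount
        return self._balance

class ATM:
    """Collaborates with Account only by sending it messages."""

    def __init__(self, account: Account) -> None:
        self._account = account

    def dispense(self, amount: float) -> float:
        return self._account.withdraw(amount)
```

`ATM` never touches the balance directly; the message `withdraw` is its only channel to the data.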
Layered architectures: The basic structure of a layered architecture is illustrated in Figure
9.4. A number of different layers are defined, each accomplishing operations that
progressively become closer to the machine instruction set. At the outer layer, components
service user interface operations. At the inner layer, components perform operating system
interfacing. Intermediate layers provide utility services and application software functions.
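The layering discipline (outer layers call inward, each level moving closer to the machine) can be sketched as three functions; the layer names and file paths are hypothetical, and a dictionary stands in for the disk:

```python
storage: dict[str, str] = {}   # stand-in for disk managed by the "OS" layer

def os_layer_write(path: str, data: str) -> None:
    """Inner layer: stands in for operating-system interfacing."""
    storage[path] = data

def utility_layer_save(name: str, data: str) -> None:
    """Intermediate layer: utility services built on the inner layer."""
    os_layer_write(f"/data/{name}.txt", data)

def ui_layer_save_clicked(data: str) -> None:
    """Outer layer: services a user-interface operation."""
    utility_layer_save("session", data)

ui_layer_save_clicked("hello")
```

The outer layer knows nothing about paths or the store; each layer depends only on the layer directly beneath it.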
These architectural styles are only a small subset of those available. Once requirements
engineering uncovers the characteristics and constraints of the system to be built, the
architectural style and/or combination of patterns that best fits those characteristics and
constraints can be chosen. In many cases, more than one pattern might be appropriate and
alternative architectural styles can be designed and evaluated. For example, a layered style
(appropriate for most systems) can be combined with a data-centered architecture in many
database applications.
User Interface Design
As technologists studied human interaction, two dominant issues arose. First, a set of golden
rules was identified that applied to all human interaction with technology products.
Second, a set of interaction mechanisms was defined to enable software designers to build
systems that properly implemented the golden rules. These interaction mechanisms,
collectively called the graphical user interface (GUI), have eliminated some of the most
egregious problems associated with human interfaces. But even in a “Windows world,” we
all have encountered user interfaces that are difficult to learn, difficult to use, confusing,
counterintuitive, unforgiving, and in many cases, totally frustrating. Yet, someone spent time
and energy building each of these interfaces, and it is not likely that the builder created these
problems purposely.
10.4 THE GOLDEN RULES
There are three golden rules:
1. Place the user in control.
2. Reduce the user’s memory load.
3. Make the interface consistent.
These golden rules actually form the basis for a set of user interface design principles that
guide this important aspect of software design.
10.4.1 Place the User in Control:
During a requirements-gathering session for a major new information system, a key user was
asked about the attributes of the window-oriented graphical interface. “What I really would
like,” said the user solemnly, “is a system that reads my mind. It knows what I want to do
before I need to do it and makes it very easy for me to get it done. That’s all, just that.”
My first reaction was to shake my head and smile, but I paused for a moment.
There was absolutely nothing wrong with the user’s request. She wanted a system that
reacted to her needs and helped her get things done. She wanted to control the computer, not
have the computer control her.
Most interface constraints and restrictions that are imposed by a designer are intended to
simplify the mode of interaction. But for whom? As a designer, you may be tempted to
introduce constraints and limitations to simplify the implementation of the interface. The
result may be an interface that is easy to build, but frustrating to use. Mandel defines a
number of design principles that allow the user to maintain control:
Define interaction modes in a way that does not force a user into unnecessary or
undesired actions: An interaction mode is the current state of the interface. For example, if
spell check is selected in a word-processor menu, the software moves to a spell-checking
mode. There is no reason to force the user to remain in spell-checking mode if the user
desires to make a small text edit along the way. The user should be able to enter and exit the
mode with little or no effort.
Provide for flexible interaction: Because different users have different interaction
preferences, choices should be provided. For example, software might allow a user to
interact via keyboard commands, mouse movement, a digitizer pen, a multi-touch screen, or
voice recognition commands. But every action is not amenable to every interaction
mechanism. Consider, for example, the difficulty of using keyboard commands (or voice
input) to draw a complex shape.
Allow user interaction to be interruptible and undoable: Even when involved in a
sequence of actions, the user should be able to interrupt the sequence to do something else
(without losing the work that had been done). The user should also be able to “undo” any
action.
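Undoable interaction is usually implemented by having every action record how to reverse itself. A minimal sketch, with a hypothetical text editor as the example:

```python
class Editor:
    """Every action snapshots prior state so the user can always back out."""

    def __init__(self) -> None:
        self.text = ""
        self._undo_stack: list[str] = []

    def insert(self, s: str) -> None:
        self._undo_stack.append(self.text)   # record state before the action
        self.text += s

    def undo(self) -> None:
        if self._undo_stack:                 # undo is safe even with no history
            self.text = self._undo_stack.pop()
```

Real editors store deltas or command objects instead of full snapshots, but the stack discipline is the same.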
Streamline interaction as skill levels advance and allow the interaction to be
customized: Users often find that they perform the same sequence of interactions
repeatedly. It is worthwhile to design a “macro” mechanism that enables an advanced user to
customize the interface to facilitate interaction.
Hide technical internals from the casual user: The user interface should move the user
into the virtual world of the application. The user should not be aware of the operating
system, file management functions, or other arcane computing technology.
In essence, the interface should never require that the user interact at a level that is “inside”
the machine (e.g., a user should never be required to type operating system commands from
within application software).
Design for direct interaction with objects that appear on the screen. The user feels a
sense of control when able to manipulate the objects that are necessary to perform a task in a
manner similar to what would occur if the object were a physical thing. For example, an
application interface that allows a user to “stretch” an object (scale it in size) is an
implementation of direct manipulation.
10.4.2 Reduce the User’s Memory Load
The more a user has to remember, the more error-prone the interaction with the system will
be. It is for this reason that a well-designed user interface does not tax the user’s memory.
Whenever possible, the system should “remember” pertinent information and assist the user
with an interaction scenario that facilitates recall. Mandel defines design principles that enable
an interface to reduce the user’s memory load:
Reduce demand on short-term memory. When users are involved in complex tasks, the
demand on short-term memory can be significant. The interface should be designed to
reduce the requirement to remember past actions, inputs, and results. This can be
accomplished by providing visual cues that enable a user to recognize past actions, rather
than having to recall them.
Establish meaningful defaults. The initial set of defaults should make sense for the average
user, but a user should be able to specify individual preferences. However, a “reset” option
should be available, enabling the redefinition of original default values.
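The defaults-plus-reset principle can be sketched directly; the settings and their values below are hypothetical:

```python
FACTORY_DEFAULTS = {"font_size": 12, "theme": "light", "autosave": True}

class Preferences:
    """Meaningful defaults for the average user, individual overrides,
    and a 'reset' that restores the original default values."""

    def __init__(self) -> None:
        self._settings = dict(FACTORY_DEFAULTS)   # start from sensible defaults

    def set(self, key: str, value) -> None:
        self._settings[key] = value               # user-specified preference

    def get(self, key: str):
        return self._settings[key]

    def reset(self) -> None:
        self._settings = dict(FACTORY_DEFAULTS)   # redefine original values
```

Copying the defaults (rather than aliasing them) is what makes reset reliable: user changes can never corrupt the factory values.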
Define shortcuts that are intuitive: When mnemonics are used to accomplish a system
function (e.g., alt-P to invoke the print function), the mnemonic should be tied to the action
in a way that is easy to remember (e.g., first letter of the task to be invoked).
The visual layout of the interface should be based on a real-world metaphor: For
example, a bill payment system should use a checkbook and check register metaphor to
guide the user through the bill paying process. This enables the user to rely on well-
understood visual cues, rather than memorizing an arcane interaction sequence.
Disclose information in a progressive fashion: The interface should be organized
hierarchically. That is, information about a task, an object, or some behavior should be
presented first at a high level of abstraction. More detail should be presented after the user
indicates interest with a mouse pick. An example, common to many word-processing
applications, is the underlining function. The function itself is one of a number of functions
under a text style menu. However, not every underlining option is listed initially. The user must
pick underlining; then all underlining options (e.g., single underline, double underline,
dashed underline) are presented.
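The word-processing example above can be sketched as a two-level menu: the top level shows only category names, and detailed options appear only after the user shows interest. The menu contents are invented for illustration.

```python
# Progressive disclosure: high-level categories first, detail on demand.
STYLE_MENU = {
    "Underline": ["single underline", "double underline", "dashed underline"],
    "Emphasis": ["bold", "italic"],
}

def top_level():
    # Presented first, at a high level of abstraction.
    return sorted(STYLE_MENU)

def expand(choice):
    # Detail is disclosed only after the user picks a category.
    return STYLE_MENU[choice]

assert top_level() == ["Emphasis", "Underline"]
assert "double underline" in expand("Underline")
```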
10.4.3 Make the Interface Consistent
The interface should present and acquire
information in a consistent fashion. This implies that (1) all visual information is organized
according to design rules that are maintained throughout all screen displays, (2) input
mechanisms are constrained to a limited set that is used consistently throughout the
application, and (3) mechanisms for navigating from task to task are consistently defined
and implemented. Mandel defines a set of design principles that help make the interface
consistent:
Allow the user to put the current task into a meaningful context. Many interfaces
implement complex layers of interactions with dozens of screen images. It is important to
provide indicators (e.g., window titles, graphical icons, consistent color coding) that enable
the user to know the context of the work at hand. In addition, the user should be able to
determine where he has come from and what alternatives exist for a transition to a new task.
Maintain consistency across a family of applications. A set of applications (or products)
should all implement the same design rules so that consistency is maintained for all
interaction.
If past interactive models have created user expectations, do not make changes unless
there is a compelling reason to do so. Once a particular interactive sequence has become a
de facto standard (e.g., the use of alt-S to save a file), the user expects this in every
application he encounters. A change (e.g., using alt-S to invoke scaling) will cause
confusion. The interface design principles discussed in this and the preceding sections
provide you with basic guidance.
10.5 USER INTERFACE ANALYSIS AND DESIGN
The overall process for analyzing and designing a user interface begins with the creation of
different models of system function (as perceived from the outside). You begin by
delineating the human- and computer-oriented tasks that are required to achieve system
function and then considering the design issues that apply to all interface designs. Tools are
used to prototype and ultimately implement the design model, and the result is evaluated by
end users for quality.
10.5.1 Interface Analysis and Design Models
Four different models come into play when a user interface is to be analyzed and designed.
A human engineer (or the software engineer) establishes a user model, the software engineer
creates a design model, the end user develops a mental image that is often called the user’s
mental model or the system perception, and the implementers of the system create an
implementation model. Unfortunately, each of these models may differ significantly. Your
role, as an interface designer, is to reconcile these differences and derive a consistent
representation of the interface. The user model establishes the profile of end users of the
system. In an introductory column on “user-centric design,” it has been suggested that to
build an effective user interface, “all design should begin with an understanding of the intended
users, including profiles of their age, gender, physical abilities, education, cultural or ethnic
background, motivation, goals and personality”. In addition, users can be categorized as:
Novices: No syntactic knowledge of the system and little semantic knowledge of the
application or computer usage in general.
Knowledgeable, intermittent user: Reasonable semantic knowledge of the application but
relatively low recall of syntactic information necessary to use the interface.
Knowledgeable, frequent user: Good semantic and syntactic knowledge that often leads to
the “power-user syndrome”; that is, individuals who look for shortcuts and abbreviated
modes of interaction.
The user’s mental model (system perception) is the image of the system that end users carry
in their heads. For example, if the user of a particular word processor were asked to describe
its operation, the system perception would guide the response. The accuracy of the
description will depend upon the user’s profile (e.g., novices would provide a sketchy
response at best) and overall familiarity with software in the application domain. A user who
understands word processors fully but has worked with the specific word processor only
once might actually be able to provide a more complete description of its function than the
novice who has spent weeks trying to learn the system.
The implementation model combines the outward manifestation of the computer-based
system (the look and feel of the interface), coupled with all supporting information (books,
manuals, videotapes, help files) that describes interface syntax and semantics. When the
implementation model and the user’s mental model are coincident, users generally feel
comfortable with the software and use it effectively. To accomplish this “melding” of the
models, the design model must have been developed to accommodate the information
contained in the user model, and the implementation model must accurately reflect syntactic
and semantic information about the interface.
The models described in this section are “abstractions of what the user is doing or thinks he
is doing or what somebody else thinks he ought to be doing when he uses an interactive
system” [Mon84]. In essence, these models enable the interface designer to satisfy a key
element of the most important principle of user interface design: “Know the user, know the
tasks.”
10.5.2 The Process
The analysis and design process for user interfaces is iterative and can be represented using a
spiral model. Referring to Figure 10.2, the user interface analysis and design process begins
at the interior of the spiral and encompasses four distinct framework activities [Man97]: (1)
interface analysis and modeling, (2) interface design, (3) interface construction, and (4)
interface validation. The spiral shown in Figure 10.2 implies that each of these tasks will
occur more than once, with each pass around the spiral representing additional elaboration of
requirements and the resultant design. In most cases, the construction activity involves
prototyping—the only practical way to validate what has been designed.
Interface analysis focuses on the profile of the users who will interact with the system. Skill
level, business understanding, and general receptiveness to the new system are recorded; and
different user categories are defined. For each user category, requirements are elicited. In
essence, you work to understand the system perception (Section 10.5.1) for each class of
users.
Once general requirements have been defined, a more detailed task analysis is conducted.
Those tasks that the user performs to accomplish the goals of the system are identified,
described, and elaborated (over a number of iterative passes through the spiral).
FIGURE 10.2: The user interface design process
Task analysis is discussed in more detail later in this chapter. Finally, analysis of the user
environment focuses on the physical work environment. Among the questions to be asked
are
• Where will the interface be located physically?
• Will the user be sitting, standing, or performing other tasks unrelated to the interface?
• Does the interface hardware accommodate space, light, or noise constraints?
• Are there special human factors considerations driven by environmental factors?
The information gathered as part of the analysis action is used to create an analysis model
for the interface. Using this model as a basis, the design action commences. The goal of
interface design is to define a set of interface objects and actions (and their screen
representations) that enable a user to perform all defined tasks in a manner that meets every
usability goal defined for the system.
Interface construction normally begins with the creation of a prototype that enables usage
scenarios to be evaluated. As the iterative design process continues, a user interface tool kit
(discussed later in this chapter) may be used to complete the construction of the interface.
Interface validation focuses on (1) the ability of the interface to implement every user task
correctly, to accommodate all task variations, and to achieve all general user requirements;
(2) the degree to which the interface is easy to use and easy to learn, and (3) the users’
acceptance of the interface as a useful tool in their work.
The activities described in this section occur iteratively. Therefore, there is no need to
attempt to specify every detail (for the analysis or design model) on the first pass.
Subsequent passes through the process elaborate task detail, design information, and the
operational features of the interface.
10.6 WEBAPP INTERFACE DESIGN
10.6.1 Interface Design Principles and Guidelines
The user interface of a WebApp is its “first impression.” Regardless of the value of its
content, the sophistication of its processing capabilities and services, and the overall benefit
of the WebApp itself, a poorly designed interface will disappoint the potential user and may,
in fact, cause the user to go elsewhere. Because of the sheer volume of competing WebApps
in virtually every subject area, the interface must “grab” a potential user immediately.
Bruce Tognazzini [Tog01] defines a set of fundamental characteristics that all interfaces
should exhibit and in doing so, establishes a philosophy that should be followed by every
WebApp interface designer:
Effective interfaces are visually apparent and forgiving, instilling in their users a sense of
control. Users quickly see the breadth of their options, grasp how to achieve their goals, and
do their work.
Effective interfaces do not concern the user with the inner workings of the system. Work is
carefully and continuously saved, with full option for the user to undo any activity at any
time.
Effective applications and services perform a maximum of work, while requiring a
minimum of information from users.
In order to design WebApp interfaces that exhibit these characteristics, Tognazzini identifies a
set of overriding design principles:
Anticipation: A WebApp should be designed so that it anticipates the user’s next move. For
example, consider a customer support WebApp developed by a manufacturer of computer
printers. A user has requested a content object that presents information about a printer
driver for a newly released operating system. The designer of the WebApp should anticipate
that the user might request a download of the driver and should provide navigation facilities
that allow this to happen without requiring the user to search for this capability.
Communication: The interface should communicate the status of any activity initiated by
the user. Communication can be obvious (e.g., a text message) or subtle (e.g., an image of a
sheet of paper moving through a printer to indicate that printing is under way). The interface
should also communicate user status (e.g., the user’s identification) and her location within
the WebApp content hierarchy.
Consistency: The use of navigation controls, menus, icons, and aesthetics (e.g., color,
shape, layout) should be consistent throughout the WebApp. For example, if underlined blue
text implies a navigation link, content should never incorporate blue underlined text that
does not imply a link. In addition, an object, say a yellow triangle, used to indicate a caution
message before the user invokes a particular function or action, should not be used for other
purposes elsewhere in the WebApp. Finally, every feature of the interface should respond in
a manner that is consistent with user expectations.
Controlled autonomy: The interface should facilitate user movement throughout the
WebApp, but it should do so in a manner that enforces navigation conventions that have
been established for the application. For example, navigation to secure portions of the
WebApp should be controlled by userID and password, and there should be no navigation
mechanism that enables a user to circumvent these controls.
Efficiency: The design of the WebApp and its interface should optimize the user’s work
efficiency, not the efficiency of the developer who designs and builds it or the client server
environment that executes it.
Flexibility: The interface should be flexible enough to enable some users to accomplish
tasks directly and others to explore the WebApp in a somewhat random fashion. In every
case, it should enable the user to understand where he is and provide the user with
functionality that can undo mistakes and retrace poorly chosen navigation paths.
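The "retrace poorly chosen navigation paths" capability can be sketched as a simple history stack, in the spirit of a browser's back button. The class and page names are invented for illustration.

```python
# A navigation history stack: every visit is recorded, and "back" pops
# the stack to retrace a poorly chosen path.
class NavigationHistory:
    def __init__(self, start="home"):
        self.stack = [start]

    def visit(self, page):
        self.stack.append(page)

    def current(self):
        return self.stack[-1]

    def back(self):
        if len(self.stack) > 1:     # never go back past the starting page
            self.stack.pop()
        return self.current()

nav = NavigationHistory()
nav.visit("catalog")
nav.visit("obscure-page")
assert nav.back() == "catalog"
assert nav.back() == "home"
assert nav.back() == "home"         # bounded at the start
```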
Focus: The WebApp interface (and the content it presents) should stay focused on the user
task(s) at hand. In all hypermedia there is a tendency to route the user to loosely related
content. Why? Because it’s very easy to do! The problem is that the user can rapidly become
lost in many layers of supporting information and lose sight of the original content that she
wanted in the first place.
Fitts’s law: “The time to acquire a target is a function of the distance to and size of the
target”. Based on a study conducted in the 1950s, Fitts’s law “is an effective method of
modeling rapid, aimed movements, where one appendage (like a hand) starts at rest at a
specific start position, and moves to rest within a target area”. If a sequence of selections or
standardized inputs (with many different options within the sequence) is defined by a user
task, the first selection (e.g., mouse pick) should be physically close to the next selection.
For example, consider a WebApp home page interface at an e-commerce site that sells
consumer electronics.
Each user option implies a set of follow-on user choices or actions. For example, the “buy a
product” option requires that the user enter a product category followed by the product
name. The product category (e.g., audio equipment, televisions, DVD players) appears as a
pull-down menu as soon as “buy a product” is picked. Therefore, the next choice is
immediately obvious (it is nearby) and the time to acquire it is negligible. If, on the other
hand, the choice appeared on a menu that was located on the other side of the screen, the
time for the user to acquire it (and then make the choice) would be far too long.
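Fitts's law is commonly written in the Shannon formulation, T = a + b · log2(D/W + 1), where D is the distance to the target, W is its width, and a and b are device-dependent constants. The sketch below uses purely illustrative constants; it simply shows why the nearby pull-down menu wins over a small control across the screen.

```python
import math

# Shannon formulation of Fitts's law: movement time grows with the log of
# (distance / target width + 1). Constants a and b are device-dependent
# and are chosen here only for illustration.
def fitts_time(distance, width, a=0.1, b=0.15):
    return a + b * math.log2(distance / width + 1)

# A large pull-down item placed next to the last click is acquired much
# faster than a small control on the far side of the screen:
near = fitts_time(distance=40, width=200)
far = fitts_time(distance=900, width=20)
assert near < far
```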
Human interface objects: A vast library of reusable human interface objects has been
developed for WebApps. Use them. Any interface object that can be “seen, heard, touched or
otherwise perceived” by an end user can be acquired from any one of a number of object
libraries.
Latency reduction: Rather than making the user wait for some internal operation to
complete (e.g., downloading a complex graphical image), the WebApp should use
multitasking in a way that lets the user proceed with work as if the operation has been
completed.
Even when latency cannot be eliminated, delays must be acknowledged so that the user understands
what is happening. This includes (1) providing audio feedback when a selection does not
result in an immediate action by the WebApp, (2) displaying an animated clock or progress
bar to indicate that processing is under way, and (3) providing some entertainment (e.g., an
animation or text presentation) while lengthy processing occurs.
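As an illustrative sketch (not a prescribed implementation), the multitasking idea can be shown with a background thread and a shared progress flag that the interface polls to decide whether to show a progress indicator. The names and the sleep duration are invented.

```python
import threading
import time

# Latency reduction: a slow operation (e.g., downloading a complex image)
# runs in a background thread while the user keeps working; the interface
# polls a shared flag to display a progress indicator until it completes.
progress = {"done": False}

def slow_operation():
    time.sleep(0.2)                 # stand-in for a lengthy download
    progress["done"] = True

worker = threading.Thread(target=slow_operation)
worker.start()

# The user proceeds with work; the UI would show a spinner meanwhile.
assert progress["done"] is False    # work continues in the background
worker.join()
assert progress["done"] is True     # indicator can now be dismissed
```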
Learnability: A WebApp interface should be designed to minimize learning time and the
relearning required when the WebApp is revisited. In general, the interface should
emphasize a simple, intuitive design that organizes content and functionality into categories
that are obvious to the user.
Metaphors: An interface that uses an interaction metaphor is easier to learn and easier to
use, as long as the metaphor is appropriate for the application and the user. A metaphor
should call on images and concepts from the user’s experience, but it does not need to be an
exact reproduction of a real-world experience. For example, an e-commerce site that
implements automated bill paying for a financial institution uses a checkbook metaphor (not
surprisingly) to assist the user in specifying and scheduling bill payments. However, when a
user “writes” a check, he need not enter the complete payee name but can pick from a list of
payees or have the system select based on the first few typed letters. The metaphor remains
intact, but the user gets an assist from the WebApp.
Maintain work product integrity: A work product (e.g., a form completed by the user, a
user-specified list) must be automatically saved so that it will not be lost if an error occurs.
Each of us has experienced the frustration associated with completing a lengthy WebApp
form only to have the content lost because of an error (made by us, by the WebApp, or in
transmission from client to server). To avoid this, a WebApp should be designed to autosave
all user-specified data. The interface should support this function and provide the user with
an easy mechanism for recovering “lost” information.
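A minimal sketch of autosave, assuming a hypothetical form class and file path: every field change is persisted immediately, so nothing is lost if the session dies, and a recovery routine reads the last saved state back.

```python
import json
import os
import tempfile

# Work product integrity: autosave every user-specified value so a crash
# mid-form loses nothing; recover() restores the "lost" information.
class AutosavingForm:
    def __init__(self, path):
        self.path = path
        self.fields = {}

    def update(self, name, value):
        self.fields[name] = value
        with open(self.path, "w") as f:     # autosave on every change
            json.dump(self.fields, f)

    @staticmethod
    def recover(path):
        with open(path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "form.json")
form = AutosavingForm(path)
form.update("payee", "Electric Co.")
form.update("amount", "120.00")
del form                                     # simulate losing the session
assert AutosavingForm.recover(path) == {"payee": "Electric Co.", "amount": "120.00"}
```

A real WebApp would persist to the server or to browser storage rather than a local file, but the design point is the same: save on change, not on submit.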
Readability: All information presented through the interface should be readable by young
and old. The interface designer should emphasize readable type styles, font sizes, and
color/background choices that enhance contrast.
Track state: When appropriate, the state of the user interaction should be tracked and
stored so that a user can log off and return later to pick up where she left off. In general,
cookies can be designed to store state information. However, cookies are a controversial
technology, and other design solutions may be more palatable for some users.
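State tracking can be sketched as a session store keyed by user, standing in for a cookie or a server-side session record. All names and pages here are illustrative assumptions.

```python
# Track state: serialize the interaction state under a session key so the
# user can log off and later resume where she left off.
SESSION_STORE = {}                  # stand-in for cookies / server storage

def save_state(user_id, state):
    SESSION_STORE[user_id] = dict(state)

def restore_state(user_id):
    # Unknown users simply start fresh at the home page.
    return SESSION_STORE.get(user_id, {"page": "home"})

save_state("u42", {"page": "checkout", "cart": ["printer driver"]})
# ... the user logs off and returns later ...
assert restore_state("u42")["page"] == "checkout"
assert restore_state("unknown")["page"] == "home"
```

Keeping the state server-side sidesteps the cookie concerns mentioned above, at the cost of requiring a login to reattach the session.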
Visible navigation: A well-designed WebApp interface provides “the illusion that users are
in the same place, with the work brought to them”. When this approach is used, navigation
is not a user concern. Rather, the user retrieves content objects and selects functions that are
displayed and executed through the interface.
10.6.2 Interface Design Workflow for WebApps
Earlier in this chapter I noted that user interface design begins with the identification of user,
task, and environmental requirements. Once user tasks have been identified, user scenarios
(use cases) are created and analyzed to define a set of interface objects and actions.
Information contained within the requirements model forms the basis for the creation of a
screen layout that depicts graphical design and placement of icons, definition of descriptive
screen text, specification and titling for windows, and specification of major and minor
menu items. Tools are then used to prototype and ultimately implement the interface design
model. The following tasks represent a rudimentary workflow for WebApp interface design:
1. Review information contained in the requirements model and refine as required.
2. Develop a rough sketch of the WebApp interface layout.
An interface prototype (including the layout) may have been developed as part of the
requirements modeling activity. If the layout already exists, it should be reviewed and
refined as required. If the interface layout has not been developed, you should work with
stakeholders to develop it at this time. A schematic first-cut layout sketch is shown in
Figure 10.3.
3. Map user objectives into specific interface actions. For the vast majority of WebApps,
the user will have a relatively small set of primary objectives.
These should be mapped into specific interface actions as shown in Figure 10.3. In essence,
you must answer the following question: “How does the interface enable the user to
accomplish each objective?”
4. Define a set of user tasks that are associated with each action. Each interface action
(e.g., “buy a product”) is associated with a set of user tasks. These tasks have been identified
during requirements modeling. During design, they must be mapped into specific
interactions that encompass navigation issues, content objects, and WebApp functions.
5. Storyboard screen images for each interface action: As each action is considered, a
sequence of storyboard images (screen images) should be created to depict how the interface
responds to user interaction. Content objects should be identified (even if they have not yet
been designed and developed), WebApp functionality should be shown, and navigation links
should be indicated.
6. Refine interface layout and storyboards using input from aesthetic design. In most
cases, you’ll be responsible for rough layout and storyboarding, but the aesthetic look and
feel for a major commercial site is often developed by artistic, rather than technical,
professionals. Aesthetic design is integrated with the work performed by the interface
designer.
FIGURE 10.3: Mapping user objectives into interface actions
7. Identify user interface objects that are required to implement the interface. This task
may require a search through an existing object library to find those reusable objects
(classes) that are appropriate for the WebApp interface. In addition, any custom classes are
specified at this time.
8. Develop a procedural representation of the user’s interaction with the interface. This
optional task uses UML sequence diagrams and/or activity diagrams (Appendix 1) to depict
the flow of activities (and decisions) that occur as the user interacts with the WebApp.
9. Develop a behavioral representation of the interface. This optional task makes use of
UML state diagrams (Appendix 1) to represent state transitions and the events that cause
them. Control mechanisms (i.e., the objects and actions available to the user to alter a
WebApp state) are defined.
10. Describe the interface layout for each state. Using design information developed in
Tasks 2 and 5, associate a specific layout or screen image with each WebApp state described
in Task 9.
11. Refine and review the interface design model. Review of the interface should focus on
usability.
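To make Task 3 of the workflow concrete, the objective-to-action mapping can be sketched as a simple lookup, with objectives and actions invented in the spirit of the e-commerce example discussed earlier.

```python
# Task 3: map each primary user objective onto a specific interface action.
OBJECTIVE_TO_ACTION = {
    "purchase an item":        "buy a product",
    "check an order":          "track my order",
    "get help with a product": "open customer support",
}

def action_for(objective):
    # Answers: "How does the interface enable the user to accomplish
    # each objective?"
    return OBJECTIVE_TO_ACTION[objective]

assert action_for("purchase an item") == "buy a product"
```

In practice this table is built during requirements modeling and then drives the storyboards of Task 5: each action on the right-hand side gets its own sequence of screen images.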