Master's Thesis
Software Engineering
Thesis no: MSE-2013:148
December 2013
School of Computing
Blekinge Institute of Technology
SE – 371 79 Karlskrona
Sweden
Contact Information:
Author(s): Ge Liang, Liang Yu
Address: Fogdevägen 2A, Karlskrona, Sweden
E-mail: [email protected]; [email protected]
External advisor(s): Thomas Axelsson
Company name: Ericsson AB
Address: Telefonaktiebolaget LM Ericsson, 129 20 Hägersten
Phone: +46107140980
University advisor(s): Tony Gorschek, Professor of Software Engineering
Department/School name: Blekinge Institute of Technology
Quality Driven Re-engineering Framework
Ge Liang, Liang Yu
School of Computing
Blekinge Institute of Technology
SE – 371 79 Karlskrona
Sweden
Internet: www.bth.se/com
Phone: +46 455 38 50 00
Fax: +46 455 38 50 57
ABSTRACT
Context. Software re-engineering has been identified as a business-critical activity for improving legacy systems in industry. It is the process of understanding existing software and improving it for modified or improved functionality, better maintainability, configurability, reusability, or other quality goals. However, little knowledge exists on how to integrate software quality attributes into the re-engineering process, and it is essential to resolve quality problems by applying software re-engineering processes.
Objectives. In this study we perform an in-depth investigation to identify and resolve quality problems by applying software re-engineering processes. The outcome is a quality-driven re-engineering framework.
Methods. First, we conducted a literature review to gather the knowledge needed to build the quality-driven re-engineering framework. We then performed a case study at Ericsson to validate the processes of the framework. Finally, we carried out an experiment to verify that the identified quality problems had been resolved.
Results. We compared three existing re-engineering frameworks and identified their weaknesses. To address these weaknesses, we created a quality-driven re-engineering framework, which is used to improve software quality by identifying and resolving root-cause problems in legacy systems. Moreover, we validated the framework for one type of legacy system by successfully applying it to a real case at Ericsson, and an experiment at Ericsson showed that the efficiency of the legacy system improved.
Conclusions. We conclude that the quality-driven re-engineering framework is applicable and that it can improve the efficiency of a legacy system. Moreover, we conclude that there is a need for further empirical validation of the framework in full-scale industrial trials.
Keywords: Quality-driven re-engineering, reverse engineering, root cause analysis.
CONTENTS
QUALITY DRIVEN RE-ENGINEERING FRAMEWORK
ABSTRACT
CONTENTS
LIST OF FIGURES
LIST OF TABLES
1 INTRODUCTION
1.1 RESEARCH QUESTION
1.2 INDUSTRIAL ENVIRONMENT
1.3 STRUCTURE OF THESIS
1.4 TERMINOLOGY
2 BACKGROUND
2.1 SOFTWARE QUALITY DRIVER
2.2 SOFTWARE RE-ENGINEERING
2.2.1 Software re-engineering lifecycle description
2.2.2 Reverse Engineering
2.2.3 Root Cause Analysis
2.2.4 Decision making for architecture of legacy system
2.3 BENEFITS OF SOFTWARE RE-ENGINEERING
3 RELATED WORK
3.1 SOFTWARE QUALITY
3.1.1 Software quality metrics
3.2 SOFTWARE ARCHITECTURE AND QUALITY ATTRIBUTE
3.3 SOFTWARE RE-ENGINEERING AT ARCHITECTURE LEVEL
3.3.1 Software re-engineering taxonomy
3.3.2 Software re-engineering framework
4 RESEARCH METHODOLOGY
4.1 RESEARCH QUESTION AND HYPOTHESIS
4.2 MAPPING RESEARCH QUESTION TO RESEARCH METHODOLOGY
4.2.1 Literature review
4.2.2 Case study
4.2.3 Experiment
5 CREATION OF ARAR FRAMEWORK
5.1 SEARCH STRATEGY
5.2 DATABASE SELECTION
5.3 SEARCH CRITERIA
5.4 CREATE SEARCH STRING
5.5 WEAKNESSES OF EXISTING RE-ENGINEERING FRAMEWORKS
5.5.1 The first quality driven re-engineering framework
5.5.2 The second quality driven re-engineering framework
5.5.3 The third quality driven re-engineering framework
5.5.4 Summary of quality driven re-engineering framework weaknesses
5.6 ARAR FRAMEWORK DESIGN
5.6.1 Reverse Phase Design
5.6.2 Root Cause Analysis Phase Design
5.6.3 Architecture Selection Phase Design
5.6.4 Refactoring Phase Design
5.7 VALIDITY THREATS
5.7.1 Publication bias
5.7.2 Threats to select primary publications
5.7.3 Threats to select number of papers to analyze
6 ARAR FRAMEWORK
6.1 REVERSE PHASE
6.1.1 Data Extraction
6.1.2 Knowledge Inference
6.1.3 Architecture Representation
6.1.4 Reverse Phase Sample
6.2 ROOT CAUSE ANALYSIS PHASE
6.2.1 Scenario Formulation
6.2.2 Root Cause Identification
6.2.3 Prioritization of Quality Attributes
6.3 ARCHITECTURE SELECTION PHASE
6.3.1 Identify software architecture solution candidates
6.3.2 Architecture solution selection
6.3.3 Sample example
6.4 REFACTORING PHASE
7 APPLY ARAR FRAMEWORK
7.1 CASE STUDY DESIGN
7.1.1 Case Definition
7.1.2 Preparation
7.1.3 Data Collection
7.1.4 Create Case Study Database
7.2 DATA ANALYSIS
7.2.1 Select Analysis Technologies
7.2.2 Data Analysis
7.3 PROCESS QUALITY CONTROL
7.3.1 Construct Validity
7.3.2 Internal Validity
7.3.3 External Validity
7.3.4 Conclusion Validity
8 EVALUATE ARAR FRAMEWORK
8.1 DEFINITION
8.1.1 Goal Definition
8.1.2 Summary of Definition
8.2 PLANNING
8.2.1 Context Selection
8.2.2 Hypothesis Formulation
8.2.3 Variable Selection
8.2.4 Selection of Subjects
8.2.5 Experiment Design
8.2.6 Instrumentation
8.2.7 Validity Evaluation
8.3 OPERATION
8.3.1 Preparation
8.3.2 Execution
8.3.3 Data Validation
8.4 DATA ANALYSIS
8.4.1 Data Descriptive
8.4.2 Data Reduction
8.5 HYPOTHESIS TESTING
8.5.1 Input data
8.5.2 Data Calculation
8.5.3 Summary and Conclusion
9 DATA SYNTHESIS
9.1 ARAR FRAMEWORK
9.2 ARCHITECTURE SELECTION METHOD
9.3 ROOT CAUSE ANALYSIS IN SOFTWARE RE-ENGINEERING
9.4 USAGE OF CASE STUDY
9.5 EXPERIMENT
10 CONCLUSION
10.1 ANSWERS TO RESEARCH QUESTIONS
10.2 CONCLUSION
10.3 FUTURE WORK
APPENDIX A
APPENDIX B
REFERENCE
LIST OF FIGURES
Figure 1: Quality attribute measurement framework (SEI)
Figure 2: The SEI Horseshoe model for legacy system reengineering
Figure 3: Classic processes of Root cause analysis
Figure 4: Top-level Decision process from Michael [17]
Figure 5: Relation between research methods and research questions
Figure 6: Design of Literature Review
Figure 7: Design of Case Study
Figure 8: Design of experiment
Figure 9: Search Strategy
Figure 10: First Quality driven Re-engineering framework [78]
Figure 11: Second Quality driven Re-engineering framework [49]
Figure 12: Third Quality driven Re-engineering framework [43]
Figure 13: The Sharp end - Blunt end relationship [24]
Figure 14: Functional Unit Relations
Figure 15: Four Phases of the Framework
Figure 16: The processes of reverse phase
Figure 17: Data Extraction Sample
Figure 18: Architecture view of sample example
Figure 19: Root Cause Analysis Process
Figure 20: Process of Scenario Formulation
Figure 21: Process of Root Cause Analysis
Figure 22: FRAM relationships
Figure 23: Example of weight diagram
Figure 24: Process of Quality Attributes Prioritization
Figure 25: Process of Architecture Selection Phase
Figure 26: Overview of Case study
Figure 27: Ericsson AUTO_TEST Environment
Figure 28: NetStatusserver system work flow
Figure 29: FRAM Diagram for Scenario 1
Figure 30: Time and Events in the Case Study
Figure 31: Comparing Efficiency in New System with Legacy System
Figure 32: Execution Time of Legacy system with Local Clients
Figure 33: Execution Time of Legacy system with Remote Clients
Figure 34: Execution Time of New system with Local Clients
Figure 35: Execution Time of New system with Remote Clients
Figure 36: High level architecture view of legacy system
Figure 37: Architecture Candidate 1
Figure 38: Architecture Candidate 2
LIST OF TABLES
Table 1: Definition of terms used in this thesis
Table 2: Example for system modifiability
Table 3: Software Quality Metrics
Table 4: Software Re-engineering Taxonomy
Table 5: Research question and Aim
Table 6: Research question for literature review
Table 7: Synonyms of selected keywords
Table 8: Search result for the first search string
Table 9: Search result for the second search string
Table 10: Weaknesses of existing quality driven re-engineering frameworks
Table 11: Processes in the framework
Table 12: Keywords of four processes in the framework
Table 13: Selected papers for all processes
Table 14: View Point Definition
Table 15: Typical elements and relations
Table 16: Definition of Event
Table 17: Definition of Causal factor
Table 18: Definition of Root cause
Table 19: Scenario Definition
Table 20: Foreground Functional Unit definition
Table 21: Background Functional Unit definition
Table 22: FRAM Relationship definition
Table 23: FRAM relationship
Table 24: Scale for pair-wise comparison using AHP
Table 25: Prioritized Quality Attributes example (PQA)
Table 26: Repository table of sample example
Table 27: Background information of example system
Table 28: Example keyword definition
Table 29: Problem comments example
Table 30: Example Scenarios
Table 31: Example Quality attributes mapping
Table 32: Example of selected metrics
Table 33: Weighted functional unit for scenario A
Table 34: Causal factors for scenario A
Table 35: Causal factors from scenario B
Table 36: Example of root causes
Table 37: Mapping scenario with root causes
Table 38: Calculation of Scenario
Table 39: Prioritized Quality Attributes example (PQA)
Table 40: FQA and FAS of sample example
Table 41: PQA of sample example
Table 42: FQA' of sample example
Table 43: Normalized FQA' of sample example
Table 44: FQAr of sample example
Table 45: FVC of sample example
Table 46: Expected Schedule to conduct Case study
Table 47: Case Study Environment
Table 48: Example of Time Report Table
Table 49: Collected data of Reverse Phase
Table 50: Problem of NetStatusserver System
Table 51: Table of Foreground Function units
Table 52: Table of Background Function Units
Table 53: Scenarios Mapping List
Table 54: Coverage for each Quality Attribute
Table 55: Metric of Performance Variability
Table 56: Weight for each Function Unit
Table 57: Analysis Result for Scenario 1
Table 58: Collected data of Root cause Analysis Phase
Table 59: Result Table of architecture evaluation
Table 60: Collected Data of Architecture Selection phase
Table 61: Collected data of Refactoring Phase
Table 62: Collected Data in Case Study
Table 63: Goal of experiment
Table 64: NetStatusserver Definition
Table 65: Independent Variables
Table 66: Experiment environment
Table 67: Collected Execution Time for each test case
Table 68: Efficiency Result
Table 69: Paired T-test analysis Input data set
Table 70: Paired T-test Calculation
Table 71: Paired T-test Differences d result Table
Table 72: Paired T-test result
Table 73: Participants of the case study
Table 74: Selected papers for re-engineering
Table 75: Scenario List
Table 76: Table of Foreground Function units' relations
Table 77: New Requirement from stakeholders
Table 78: Background Function units of Scenario 1
Table 79: Issue card 1: Data losing
Table 80: Root Cause statements for NetStatusserver System
Table 81: Issue card 2: Defects of workflow design
Table 82: Issue card 3: Queue Priority
Table 83: Issue Card 4: Server Start/Stop
Table 84: Issue card 5: Server Log Rotation
Table 85: Issue card 6: Invalid Requests
Table 86: Scenario A Example
Table 87: Scenario B Example
Table 88: List of Subjects
Table 89: Collected Execution Time (seconds) of Experiment
Table 90: Design of sent requests for each test case
1 INTRODUCTION
With rapid changes in user requirements, coupled with rapid changes in software and
hardware technology, an increasing number of companies are dedicated to evolving their
current software systems in order to avoid obsolescence [49]. Software re-engineering is one
general process to reach this goal [99][42][43][44][73]. The concept of software re-engineering
is to improve or transform an existing software system so that it can be understood, controlled,
and reused anew [49][132][44].
In software re-engineering, a system is restructured to conform with certain functional
and non-functional requirements [99]. However, in most cases, the re-engineering process
is driven by functional requirements, even though the non-functional requirements, in
terms of software qualities, are just as crucial to the success of a system [137][42][43][44].
A decision made in the development process typically affects more than one software
quality attribute [137][9], and it is therefore difficult to integrate the desired software
quality attributes into the re-engineering process. In addition, it is hard to determine what
role software qualities play and how they fit into the re-engineering process [42].
The above-mentioned integration issue and the lack of understanding of the role software
qualities play in the re-engineering process have raised the challenge of satisfying software
quality requirements within the re-engineering process [99]. In this context, there is a need to
use software qualities as a guide for the re-engineering process. Several tools have been
developed to satisfy specific quality requirements at the code level, as shown in [134][135].
However, they are experimental, and all of them used a trial-and-error strategy to select a
particular set of transformations that ensured the re-engineered code satisfied the given
quality constraints [99]. Beyond that, many studies have focused on the architecture level,
since understanding the architecture of an existing system assists in predicting the impact that
evolutionary changes may have on specific quality characteristics of the system [133]. The
frameworks that have been implemented at the architecture level
[99][42][43][44][84][83][46] include software qualities as a guide in the re-engineering
process.
However, these frameworks lack processes to identify the quality problems of the
systems being re-engineered: only visible, pre-defined quality problems are fixed, instead of
identifying and fixing the root-cause quality problems at the architecture level [6]. If the
root-cause quality problems are not fixed, more time and human resources must be spent
fixing the other, associated quality problems. In this situation, a framework for fixing
root-cause quality problems is required. In this thesis, a framework named the ARAR
framework is developed and used to identify and fix root-cause quality problems by
analyzing the system architecture, and the desired quality constraints are also presented as
part of this framework.
1.1 Research Question
In order to fill this gap, the following research questions are formulated for this study:
RQ1: What are the potential drawbacks of existing processes for quality-driven
re-engineering?
To identify the weaknesses of re-engineering processes through the literature
review.
RQ2: What components should a quality-driven re-engineering framework
contain?
To clarify the processes of the quality-driven re-engineering framework, and to
gather knowledge for each process.
RQ3: Can the proposed framework be applied in an industrial environment?
To validate the processes of the "Quality-Driven Re-engineering Framework"
through a case study in an industrial environment.
RQ4: Can the proposed framework improve the efficiency of a legacy system?
To conduct an industrial experiment showing that the efficiency of the target
system is better than that of the legacy system. To answer research question 4
(RQ4), an experiment was conducted as part of the case study at Ericsson
(AUTO Test Department). We defined the following hypothesis based on RQ4:
Hypothesis: The efficiency of the new, re-engineered system (NetStatusserver) is
improved compared to the old system.
We followed the efficiency definition in ISO 9126, as presented in Section 1.4.
First, RQ1 helps us gain knowledge of the currently existing quality-driven re-engineering
frameworks and identify their weaknesses. After this, we answer RQ2 to create a framework
that fixes the weaknesses identified in RQ1. Then, RQ3 is used to answer whether the
proposed framework can be applied in an industrial environment. Finally, we answer RQ4 to
evaluate the result of applying the proposed framework.
1.2 Industrial Environment
In order to evaluate the ARAR framework, we conducted a case study and an
experiment at Ericsson AB, which had an internal system, NetStatusserver, that needed to be
re-engineered. Our case study validated the ARAR framework by applying the framework
to the internal system step by step. We also evaluated the result of applying the ARAR
framework, namely the improvement of software quality (efficiency), through the
experiment in the industrial environment. The results of the case study and the experiment
were validated both by academic researchers and by industry participants.
1.3 Structure of Thesis
This thesis is organized as follows: Section 1 gives the introduction. After that,
background knowledge on re-engineering frameworks is presented in Section 2.
Section 3 introduces the related work on quality-driven re-engineering frameworks.
Section 4 introduces the research questions and the research methodology design. Section 5
describes the process of creating the ARAR framework. Section 6 presents the detailed steps
of the ARAR framework. Section 7 shows the steps of the case study conducted at Ericsson
to apply the ARAR framework. Section 8 describes the experiment executed at Ericsson
to improve the efficiency of the NetStatusserver system. Section 9 synthesizes the results of
this thesis. Finally, Section 10 presents the conclusion and future work.
1.4 Terminology
Table 1: Definition of terms used in this thesis
Terms Definitions
Quality All the quality aspects defined in ISO 9126 standard [27]
categorizing into functionality, reliability, usability,
maintainability, efficiency, and portability.
Quality statement Direct natural language for quality requirements description
from stakeholders.
Reverse engineering Parsing the code completely to extract the entire
architectural structure of the existing system [2].
Legacy system All existing systems requiring quality improvements.
Re-engineering A set of activities intended to restructure or rewrite part or
all of a legacy system without changing its functionality in
order to achieve quality requirements and objectives [11].
Architecture Refactoring Rebuild or rewrite the whole system’s architecture structure
by using new architecture design without changing
functionalities of the system [153].
Efficiency As defined in ISO 9126 standards, efficiency means the
capability of the software product to provide appropriate
performance, relative to the amount of resources used,
under stated conditions [27].
2 BACKGROUND
This chapter provides background knowledge on software re-engineering
[99][42][43][44][73][11] and the related technologies used in this thesis. The first section
(Section 2.1) introduces software quality requirements, since evolving or changing legacy
systems is in fact driven by software quality requirements [137][42][43][44]. Moreover,
software re-engineering is one suitable process for handling quality requirements at the
architecture level [99][42][43][44][73]. Thus, in Section 2.2, we introduce the software
re-engineering concept and its general processes, including the whole re-engineering
lifecycle as shown in Figure 2. At the end of this chapter (Section 2.3), the benefits of using
software re-engineering are presented.
2.1 Software quality driver
A change to software might be motivated by its environment or by the enhancement
of its quality attributes. According to [14], quality attributes are the features of a software
system that are as important as its functionality in satisfying stakeholders' requirements.
They are non-functional qualities of software, defined in ISO 9126 [27] as follows:
Maintainability
Reliability
Efficiency
Usability
Security
These quality attributes are mainly affected by the architecture of the software [14].
Generally, it is not easy to determine the level of a quality attribute for a given piece of
software, because the main quality attributes do not have widely accepted measurement
techniques. To address this problem, the Software Engineering Institute (SEI) designed a
framework based on the notion of scenarios, as shown in Figure 1.
Figure 1: Quality attribute measurement framework (SEI)
The artifact is the software element whose quality attribute is to be measured, for
example the whole system. The source is the person or system that interacts with the artifact
to test its quality. The stimulus is the action performed, or the information input, by the
source to the system under test. The environment is the state of the system and its context
during the test. The measure is the expected value of the response, together with the metrics
to be used. An example specifying the modifiability of a system is shown in Table 2.
Table 2: Example for system modifiability
Source Developer
Stimulus Need to modify some functions of the system
Artifact Whole system
Environment System running in normal operation
Response The modification works as expected, without
affecting other operations.
Response measure The elapsed time from the start of maintenance
until the system is back in operation.
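The six scenario elements can be captured in a simple data structure. The following Python sketch is our own illustration (the class name and field layout are not part of the SEI framework) and encodes the modifiability example from Table 2:

```python
from dataclasses import dataclass

@dataclass
class QualityScenario:
    """One SEI-style quality attribute scenario (illustrative only)."""
    source: str        # who or what generates the stimulus
    stimulus: str      # the action or input applied to the artifact
    artifact: str      # the software element whose quality is measured
    environment: str   # system state and context during the test
    response: str      # expected behaviour of the artifact
    measure: str       # metric used to judge the response

# The modifiability example from Table 2:
modifiability = QualityScenario(
    source="Developer",
    stimulus="Need to modify some functions of the system",
    artifact="Whole system",
    environment="System running in normal operation",
    response="Modification works as expected without affecting other operations",
    measure="Elapsed time from start of maintenance until back in operation",
)
```

Writing scenarios in such a uniform shape makes them easy to collect, compare, and map to quality attributes later in the process.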
A single quality problem can be fixed from different perspectives, such as by changing
the source code of the legacy system [100][101][135] or by improving the architecture design
[42][43][44][49][78]. Changing the source code of a legacy system may introduce additional
faults and affect other components of the system. However, if we apply software
re-engineering processes at the architecture level to fix quality problems, the proposed
architectural solutions maximize the chance of satisfying stakeholders' requirements. That is
why software re-engineering is used to handle quality requirements in this thesis.
2.2 Software re-engineering
Software systems evolve at a high rate as research and available technology improve
continuously. Legacy systems often need to be updated or re-engineered for better
performance. Software re-engineering is the process of understanding existing software and
improving it, for modified or improved functionality, better maintainability, configurability,
reusability, or other quality goals [49]. Re-engineering arises from a variety of needs,
including:
The need to fix defects of legacy systems [73].
The need to meet improved performance requirements from stakeholders [73].
The need to adapt a legacy system to a changed hardware or software environment
[17].
The need to recover critical missing system artifacts [69].
2.2.1 Software re-engineering lifecycle description
The fundamental process of re-engineering comprises reverse engineering [1] and
forward engineering [33]. For software re-engineering based on architecture, the SEI
published the well-known horseshoe model [69], shown in Figure 2.
Figure 2: The SEI Horseshoe model for legacy system reengineering
In this model, the source code is analyzed and models at increasing levels of abstraction
are created, up to the architecture level. The system is re-architected to match a new
requirement specification and to fulfill an expected level of quality. Since the quality
attributes are driven by the system architecture, the first step is to recover the system
architecture that is to be improved, and then to analyze and re-design that architecture.
Finally, the new system is obtained by implementing the new architecture design. These
steps constitute the general form of the re-engineering process, as shown in Figure 2.
However, simpler ways to improve the system are also applied, namely transformations at
the lower levels: the code-level and function-level representations. At the code level, the
transformations comprise simple actions such as rewriting a legacy system from an old
programming language into another one. At the function level, changes to the
implementation of functions, the adaptation of a function to a new requirement, or the
adaptation of a function's interfaces are all typical transformations in software engineering.
The architecture-level representation, in turn, covers changes to the structure of a system,
its components and their interactions, or the reorganization of functions into modules. As
shown in Figure 2, lower-level representations can be transformed without involving
higher-level representations, but higher-level transformations must be supported by the
lower-level representations [69]. For instance, if a system requires changes at the
architecture level, the function and code levels are always influenced by these architectural
changes.
The left arrow of the horseshoe model represents the reverse engineering of the system,
and the right arrow represents the restructuring of a new system (forward engineering
[69][123]) with an improved architecture after the architecture transformation.
2.2.2 Reverse Engineering
Reverse engineering is the process of analyzing a subject system, first to identify the
system's components and their interrelationships, and second to create representations of the
system in another form or at a higher level of abstraction [2]. According to [3], a reverse
engineering approach should consist of the following steps:
1. Extraction: extract information from source code and documentation.
2. Abstraction: abstract the extracted information.
3. Presentation: transform the abstracted data into a representation.
By applying these reverse engineering steps, information can be retrieved from different
sources, as follows:
1. Deployment artifacts of the system: source code with comments, directory and file
structure, deployment description documents, build scripts, etc.
2. People involved in the system: software architects, designers, developers, testers,
supporters, etc.
3. System instruction documentation, such as user manuals and system instruction papers.
4. Technical documents, such as requirement specification, analysis, system design,
implementation, test, and deployment documents.
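As a minimal illustration of the extraction step, the sketch below (our own simplified example, not a tool from [3]) parses a Python source file and records which modules it imports, yielding a crude dependency map that the abstraction step could then cluster into architectural components:

```python
import ast

def extract_imports(source: str) -> set[str]:
    """Extraction step: list the top-level modules a source file depends on."""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            # `import a.b` depends on the top-level package `a`
            deps.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            # `from a.b import c` also depends on `a`
            deps.add(node.module.split(".")[0])
    return deps

code = "import os\nfrom json import loads\n"
print(sorted(extract_imports(code)))  # ['json', 'os']
```

Running this over every file of a system gives a module-level dependency graph, one of the simplest higher-level representations that reverse engineering can recover.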
Forward engineering is the traditional way of designing systems, starting from abstract,
logical, implementation-independent specifications and gradually leading to the
implementation of a physical system [69]. Before forward engineering, however, root cause
analysis (Section 2.2.3) is conducted to identify the root-cause problems existing in the
legacy system, based on the recovered architecture.
2.2.3 Root Cause Analysis
Root cause analysis is a methodology comprising a set of working methods that share
the same underlying way of thinking about why things go wrong, and it can be used
anywhere and in any circumstance [25][90]. Before resolving a problem, we must first know
and understand what causes it. If we do not identify the real causes of a problem when
handling it, we merely address the symptoms: the problem still exists, and we may even
introduce new problems [25]. For this reason, it is very important to identify and eliminate
root-cause problems. Root cause analysis can thus be defined as the process of identifying
causal factors, using structured approaches or methods that provide a focus for identifying
and fixing problems [70].
The general logic of root cause analysis follows a series of classical steps, shown in
Figure 3: define the undesired outcome, define the analysis requirements, gather data,
analyze the data, form a conclusion, and check the conclusion.
Figure 3: Classic processes of Root cause analysis
In general, the basic principle of root cause analysis is to ask "Why?" repeatedly about
the events and conditions that caused or contributed to the event or human performance
problem. Once the causes of the events or human performance problems are identified, the
next step is to determine whether the causal factors exhibit any sequence or precedence, in
terms of either time or scope of effect. If one causal factor preceded another in time and
affected it, or if a causal factor accounted for more than one of the human performance
errors that occurred in an event sequence, it is a root cause. The goal of the analysis is to find
the causal factors which, if corrected, would prevent or decrease the risk of recurrence of the
same or similar errors or failures.
Nowadays, numerous techniques are available for conducting root cause analysis, such
as Management Oversight and Risk Tree (MORT) [25], Assessment of Safety Significant
Event Team (ASSET) [25], and the Functional Resonance Analysis Method (FRAM) [25].
Since no single technique is best for all cases, selecting the most appropriate technique for
an event is always difficult. In this thesis, the FRAM technique is adopted to identify
root-cause problems, so FRAM is presented in the next section.
2.2.3.1 Functional Resonance Analysis Method (FRAM)
The Functional Resonance Analysis Method (FRAM) [24] is a method to identify
and evaluate performance variability. As an accident investigation and safety assessment
method, FRAM follows four basic principles:
1. The principle of the equivalence of successes and failures: Success depends on the
ability of groups, individuals, and organizations to anticipate, recognize, and manage
risk; it is a consequence of their ability to anticipate the changing shape of risk
before damage occurs. Failure is due to the temporary or permanent absence of that
ability, rather than to the inability of an organizational, human, or technical system
component to function normally. Individuals and organizations must constantly
adjust their performance to the current conditions, because resources and time are
limited, and it is inevitable that such adjustments are approximate [25][24].
2. The principle of approximate adjustments: The conditions under which a system
executes are usually unpredictable and dynamically changing, so people have to
find effective ways to overcome problems at work, and this capability is very
important for safety. Without such adjustments, the number of accidents and
incidents would grow; thus human performance can both enhance and detract from
system safety. Moreover, because time and resources are limited, inadequate
adjustments may coincide and combine to create an overall instability, which can
explain why things sometimes go wrong [25][24].
3. The principle of emergence [25][24]: The variability of normal performance is
rarely large enough by itself to cause accidents or malfunctions. However, the
variability of multiple functions may combine in unexpected ways to produce
accidents and failures. Such failures can then only be referred to the functions or
malfunctions involved, rather than explained in detail. It is therefore impossible to
describe all the couplings in a system, and also impossible to predict all the events
that can cause accidents.
4. The principle of functional resonance: The highlight of FRAM is the resonance
relations among functions [25][24]. It shows that the variability of a number of
functions may resonate with one another: once one function produces a cause, the
consequences may spread through couplings. The resonance principle emphasizes
that the environment of the system is dynamic, and it makes it possible to capture
the real dynamics of system functions.
According to [24], the FRAM method consists of four steps:
1. Identifying essential system functions and characterizing each function by five basic
parameters.
2. Characterizing the potential variability through common performance conditions.
3. Defining the functional resonance based on possible dependencies/couplings among
functions and the potential for functional variability.
4. Identifying the factors that cause variability.
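Step 3, tracing couplings among functions, can be approximated as reachability in a directed graph of functions. The sketch below is a deliberately simplified illustration of ours (real FRAM characterizes each function by several aspects and rates variability qualitatively; the function names and couplings here are hypothetical):

```python
from collections import deque

# Hypothetical couplings: the output of one function feeds another.
couplings = {
    "receive request": ["parse request"],
    "parse request": ["update status", "write log"],
    "update status": ["write log"],
    "write log": [],
}

def affected_functions(start: str, graph: dict[str, list[str]]) -> set[str]:
    """Functions whose performance may resonate with variability in `start`,
    found by breadth-first traversal of the coupling graph."""
    seen, queue = set(), deque([start])
    while queue:
        fn = queue.popleft()
        for nxt in graph.get(fn, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(affected_functions("parse request", couplings)))
# ['update status', 'write log']
```

Even this crude reachability view makes the fourth principle concrete: variability in one function can spread through couplings to functions that are not directly connected to it.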
Once the root-cause problems of a legacy system have been identified, the next
important step is to design and select a suitable architecture to resolve the problems.
It is worth noting that the qualities of a system are related to the design of its software
architecture [14]; for example, enhancing the maintainability of a system may lower its
performance. It is therefore very important to make tradeoffs to reach the expected levels of
a set of quality attributes while architecting a software system [14][53]. The decision making
for the architecture of a legacy system is introduced in the following section.
2.2.4 Decision making for architecture of legacy system
In this thesis, the decision-making method presented by Michael et al. [17] is used to
select a suitable architecture design based on quality requirements. The general decision
process is shown in Figure 4; the detailed processes and descriptions can be found in the
journal article published by Michael et al. [17].
Figure 4: Top-level Decision process from Michael [17]
The method in Figure 4 is a structured way to understand the benefits and drawbacks of
different architectural structures and to increase confidence in the decision taken, because
different stakeholders tend to have different views of the importance of the various quality
requirements of a system, and the differing experience of software developers may also
lead to different interpretations of the strengths and drawbacks of architectural structures
[17]. In addition, this method helps ensure that the architecture always reflects the current
quality requirements.
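A common concrete form of such a structured comparison is a weighted-sum matrix: stakeholders weight the quality attributes, architects rate how well each candidate architecture supports each attribute, and candidates are ranked by weighted score. The numbers and candidate names below are invented for illustration only and do not come from [17]:

```python
# Stakeholder weights for quality attributes (hypothetical, sum to 1.0).
weights = {"efficiency": 0.5, "maintainability": 0.3, "portability": 0.2}

# Architects' ratings (0-5) of how well each candidate supports each attribute.
candidates = {
    "layered":      {"efficiency": 3, "maintainability": 5, "portability": 4},
    "event-driven": {"efficiency": 5, "maintainability": 3, "portability": 3},
}

def score(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted sum of the ratings over all quality attributes."""
    return sum(weights[attr] * ratings[attr] for attr in weights)

best = max(candidates, key=lambda name: score(candidates[name], weights))
print(best, round(score(candidates[best], weights), 2))
```

Because the weights come from the stakeholders, the ranking changes when their priorities change, which is exactly why the method keeps the architecture aligned with the current quality requirements.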
2.3 Benefits of software re-engineering
The benefits of software re-engineering are as follows:
Incremental development [73].
Re-engineering is conducted in different phases, such as reverse engineering and
forward engineering [73], when the budget and required resources are available. The
re-engineering work always focuses on a working legacy system, and end users are
able to adapt to the re-engineered changes as they are delivered incrementally.
Lower risks [159].
Re-engineering is based on the incremental improvement of legacy systems rather
than radical system replacement. The risks of losing critical business knowledge or
information from the legacy system, or of producing a system that does not meet
customers' real needs, are drastically decreased.
Lower cost [132].
Re-engineering a legacy system costs significantly less than developing a new
system with the same functionality. For example, some system designs and source
code of the legacy system can be reused, and some useful components or functions
can be reused in other systems.
Revelation of the legacy system [49].
As a legacy system is re-engineered, all existing services, workflows, and designs
are rediscovered, and all of this information can be documented anew for the
re-engineered system [49]. This provides a better understanding of the legacy
system, and it also helps in identifying the problems of the system as well as in
adding new functions to it [49].
Better use of existing staff [49].
Existing staff expertise can be maintained and extended to accommodate new skills
during the re-engineering period. The incremental nature of re-engineering lets
existing staff skills evolve at the same time as the system evolves. This approach
carries less risk and expense compared with hiring new staff [49].
3 RELATED WORK
3.1 Software quality
Software quality has been considered a very important topic since the early days of
software engineering [114]. Software quality refers to two related but distinct notions that
apply wherever quality is defined for a software product:
Software functional quality reflects conformance to a given design, based on the
functional requirement specifications [115].
Software structural quality refers to how well the non-functional requirements that
support delivery of the functional requirements, such as robustness or
maintainability, are met.
Structural quality is evaluated by analyzing the inner structure of the software, such as its
source code, in effect how well its architecture follows the principles of software
architecture. Non-functional requirements are often used to guide and rationalize the various
design decisions taken during software development, whereas functional quality is typically
assessed and measured in software testing.
Over the last few decades, a large number of researchers have investigated how to
achieve the software quality requirements of a system. Boehm et al. [50] published a paper
classifying software quality attributes such as flexibility, integrity, performance, and
maintainability. These quality attributes are hard to handle because most of them were
imprecisely defined or categorized in inconsistent ways. The International Organization for
Standardization (ISO) therefore published a taxonomy of quality attributes [14], dividing
quality into six characteristics: functionality, reliability, usability, efficiency, maintainability,
and portability. Kennet et al. [51] presented the relations among software quality attributes.
Later, researchers focused on how to achieve software quality through software architecture
and design patterns. Bass et al. investigated the relations between software architecture and
quality attributes [14]. In addition, the Software Engineering Institute (SEI) [53] made a first
research attempt at the relationship between architecture and quality attributes. Chung et al.
[52] published the Non-Functional Requirement (NFR) framework for handling quality
requirements; this framework is one significant step in building the relation between quality
requirements and software architecture, and it is introduced in Section 3.2.
3.1.1 Software quality metrics
Software quality metrics aim to control the quality of software products, as well as the
costs and schedules of the software development process. Procedure-oriented software
metrics usually concentrate on different aspects of the source code, such as the control graph
[118] and the interconnections among statements or functions [119]. Object-oriented
software metrics, in contrast, focus on objects, classes, attributes, inheritance, and methods
[120]. Software quality metrics support various re-engineering tasks, because they assist in
forming an initial understanding of the legacy system and uncover information about design
flaws. Some popular object-oriented software metrics are shown in Table 3:
Table 3: Software Quality Metrics
Name Purpose
Complexity Metrics Indicate the level of complexity of a given class
Inheritance Metrics Indicate the quality of the class hierarchy layout of a project
Coupling Metrics Show the degree of reuse and the maintenance effort based
on the coupling level between classes; a lower coupling level
is better for a software design.
Cohesion Metrics Detect possible flaws in the design; a higher level of cohesion
is better for a software design.
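As one concrete example of a complexity metric, McCabe's cyclomatic complexity can be approximated by counting decision points in the code. The sketch below is our simplified take for Python source (not a standard metrics tool), using the standard-library `ast` module:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, decision_nodes) for n in ast.walk(tree))

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        pass
    return "non-negative"
"""
print(cyclomatic_complexity(snippet))  # 3: one `if`, one `for`, plus 1
```

Applied across the classes of a legacy system, such a metric gives the kind of initial, coarse-grained picture of design hot spots that re-engineering work starts from.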
3.2 Software architecture and quality attribute
The software architecture of a system is the overall structure of the system, and it can be
defined from different viewpoints, such as the function view, the component view, and the
layer view. Hofmeister et al. [16] introduced different methods for building software
architectures, such as the Rational Unified Process [18].
Software architecture design is connected with different quality requirements. Klein et al.
[53] published work in 1999 on how software qualities can be achieved through software
architecture design patterns. Moreover, in 2000 Bergey et al. [54] analyzed the relation
between software architecture design and quality attributes. Building on the work of Bergey
et al. [54], Lars et al. [55] presented a small taxonomy of quality attributes related to
software architecture design, classifying the quality attributes for different software
architecture designs and styles. In addition, Kazman et al. [56] examined software
architecture design and analysis for achieving quality attribute goals, and Mikael et al.
[63][17] published an investigation of a method for identifying software architectures with
respect to quality attributes. These publications laid a foundation for further research on
software architecture and quality attributes.
As illustrated above, software architecture design is driven by quality attributes, and the
two are tightly connected, so it is hard to select a suitable architecture in order to improve a
specific quality attribute. Johansson et al. [73] showed that different stakeholders tend to
have different views of the importance of the various quality requirements of a system, and
that the differing experience of software developers may also lead to different interpretations
of the strengths and drawbacks of different architectures. To resolve this problem, Mikael et
al. [63][17] created a structured way to evaluate software architectures based on quality
attributes and to analyze the benefits and liabilities of different architectures.
3.3 Software re-engineering at architecture level
The requirements for updating and evolving legacy systems are increasing, such as the
need to change system functions, adopt new technologies, fix implementation errors, or
improve system performance [73]. Software re-engineering is becoming more and more
important because it can resolve all of these problems in legacy systems through the
application of re-engineering approaches [11]. Section 3.3.1 gives a brief introduction to the
software re-engineering taxonomy, and Section 3.3.2 introduces software re-engineering
frameworks at the architecture level.
3.3.1 Software re-engineering taxonomy
In 1990, Chikofsky et al. [123] presented a taxonomy for reverse engineering and
re-engineering, defining re-engineering as "the examination and alteration of a subject
system to reconstitute it in a new form, at the same or a higher level of abstraction as the
original subject system, and the subsequent implementation of the new form." Re-engineering
is usually considered to consist of three steps, namely reverse engineering, restructuring, and
forward engineering, as shown in Table 4.
Table 4: Software Re-engineering Taxonomy
Name Definition
Reverse Engineering[123] The process of analyzing a subject system
with two goals in mind [123]:
Identify the system’s components
and inter-relations
Create representation of the system
in another form or at a higher level
of abstraction
Restructuring[123] A transformation from one form of
representation to another at the same relative
level of abstraction [123], with no
modification of system functionality.
Forward Engineering[123] Traditional process of moving from high
level abstractions and logical,
implementation independent designs to the
physical implementation of a system [123].
3.3.2 Software re-engineering framework
Following the software re-engineering taxonomy, a great number of researchers have
contributed publications that resolve problems of legacy systems at the architecture level
through re-engineering processes. For instance, in 2003 Jiang Guo [49] created a
re-engineering framework to facilitate the reuse of legacy systems; the main purpose of that
paper was to recover the behaviors of a legacy system and reuse them to lower cost.
Furthermore, Stoermer et al. [78] created a quality-attribute-driven re-engineering
framework to recover the system architecture and reuse existing components in new system
configurations. This framework was applied in two real-world case studies: the first
introduced the model-centric reconstruction approach in the context of a large satellite
tracking system, and the second provided the construction of a time performance model for
an existing embedded system in the automotive industry [78]. In addition, Ladan Tahvildari
and Kostas Kontogiannis [42][43][44] presented quality-driven re-engineering processes at
the architecture level to resolve the quality problems of legacy systems. Their work presents
a framework that allows specific NFRs, such as performance and maintainability, to guide
the re-engineering process: such requirements for the migrant system are modeled using
soft-goal interdependency graphs and are associated with specific software transformations.
In the last part of their paper, the framework was evaluated to check whether the specific
qualities required for the new migrant system can be achieved.
4 RESEARCH METHODOLOGY
The aim of this thesis is to create a quality-driven re-engineering framework in order to improve the quality of legacy systems through identifying and resolving root-cause problems. We selected literature review, case study, and experiment as our research methods.
We present the research questions and hypothesis in Section 4.1, and then illustrate the mapping between research questions and research methodologies in Section 4.2.
4.1 Research Question and hypothesis

Table 5: Research questions and research methodologies

RQ1: What are the potential drawbacks of existing processes for quality driven re-engineering?
  Methodology: Literature review and lessons learnt from the industry case.
RQ2: What components should a quality driven re-engineering framework contain?
  Methodology: Literature review.
RQ3: Can the proposed framework be applied in an industrial environment?
  Methodology: Case study.
RQ4: Can the proposed framework improve the efficiency of a legacy system?
  Methodology: Experiment.
To answer research question 4 (RQ4), an experiment was conducted within the industrial case at Ericsson (AUTO Test Department). We defined one hypothesis based on RQ4:
Hypothesis: The efficiency of the new re-engineered system (NetStatusserver) is improved compared to the old system.
We followed the efficiency definition in ISO 9126 [27], as illustrated in Section 1.4.
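Such a hypothesis is typically tested on measured efficiency data from both systems. As a minimal sketch (the response-time figures below are invented for illustration, not Ericsson measurements), Welch's t statistic for two independent samples can be computed with the standard library alone:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical response times (seconds) for the same task set on both systems.
legacy = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
target = [8.2, 8.5, 8.1, 8.4, 8.3, 8.0]

t = welch_t(legacy, target)
improvement = (statistics.mean(legacy) - statistics.mean(target)) / statistics.mean(legacy)
print(f"t = {t:.2f}, mean improvement = {improvement:.0%}")
```

A large positive t value supports rejecting the null hypothesis that the two systems are equally efficient; the actual test used in the thesis is described in Section 8.5.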
4.2 Mapping research question to research methodology
The general research design is shown in Figure 5. We adopted three research methods in this thesis. At first, we conducted a literature review; based on the knowledge gained from it, we created a candidate re-engineering framework named the ARAR framework, and we refined each part of the framework iteratively to build our proposed framework. Secondly, we applied this framework to a real case at Ericsson AB through a case study. As a result we obtained the target system, in which the identified quality problems have been fixed. Finally, we executed an experiment on the re-engineered system (the target system).
Figure 5: Relation between research methods and research questions
The relationship between research questions and research methods is shown in Figure 5. At first, RQ1 and RQ2 are answered by the literature review. After that, we apply the proposed framework in an industrial case study to answer RQ3; the answer to this research question is the target system. At last, an experiment is conducted to compare the efficiency of the target system and the legacy system. As a result, RQ4 is answered.
4.2.1 Literature review
Literature review is selected to answer RQ1 and RQ2 since this methodology is used to collect, understand, and apply knowledge and methods in order to build a solid foundation for the research topic and research method [45]. To answer RQ1, knowledge of the existing quality driven re-engineering frameworks in the literature is essential to identify their weaknesses. RQ2, in turn, calls for a study of the quality driven re-engineering literature for the purpose of fixing the identified weaknesses.
Figure 6: Design of Literature Review
Figure 6 shows the design of the literature review. At first, the weaknesses of existing quality driven re-engineering frameworks are identified through the literature review. Then, solutions are generated to fix the identified weaknesses. As a result, the ARAR framework is created based on these solutions.
4.2.2 Case study
The objective of RQ3 is to evaluate the ARAR framework in an industrial environment. A case study is selected to answer RQ3 since this method is an empirical inquiry that investigates a contemporary phenomenon in depth and within its real-life context [48]. In this thesis, since we aim to verify whether the ARAR framework can be applied in an industrial environment, we plan to conduct an instrumental case study. The aim of this instrumental case study is to accomplish RQ3 rather than to understand the particular industrial case itself [112].
Figure 7: Design of Case Study
In this case study, the time and human resources required to apply the framework are recorded. We followed the design shown in Figure 7; the detailed process is presented in Section 7. After applying the framework, we obtained the actual cost in time and human resources as the outcome, as shown in Section 7.1.4.
4.2.3 Experiment
The objective of RQ4 is to evaluate whether the efficiency of the system has improved. An experiment is selected to answer RQ4 since this method evaluates an approach in a "systematic, disciplined, quantifiable and controlled way" [28]. In this thesis, we compared the target system with the legacy system from the perspective of efficiency.
Figure 8 shows the design of this experiment. The detailed process for conducting this experiment is presented in Section 8. The result of this experiment is the evaluation of the system efficiency, as shown in Section 8.5.
Figure 8: Design of experiment
5 CREATION OF ARAR FRAMEWORK
In this section, we create the proposed framework by means of a literature review. The motivation for conducting the literature review is to gather knowledge to create a quality-driven re-engineering framework that improves quality through identifying and resolving root-cause quality problems.
Table 6: Research questions for the literature review

RQ1: What are the weaknesses of existing processes for quality driven re-engineering?
  Aim: To identify the weaknesses of re-engineering processes during the literature review.
RQ2: What components should a quality driven re-engineering framework contain?
  Aim: To clarify what processes quality driven re-engineering consists of, and how to gather knowledge for each process.
The research questions in Table 6 define what we should gather from the selected source papers. Through this literature review, we gain knowledge of quality driven re-engineering, which is the fundamental source of information for creating our own steps and improvements for quality driven re-engineering processes.
5.1 Search Strategy
For this literature review, the search strategy processes [31] shown in Figure 9 were defined to retrieve related publications.
Figure 9: Search Strategy
At the beginning, we selected databases to search for literature, in order to cover as many aspects of engineering as possible. After that, we formulated the search string and did a trial search to retrieve related papers in the reference databases. The search result is then evaluated against the inclusion and exclusion criteria. If the result fulfills our expectation (more than 70% of the papers pass the criteria), we search the full-text databases and select papers from both search results. Otherwise, we refine the search string and search again.
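The refinement loop in Figure 9 can be sketched as follows; `search`, `refine`, and the paper identifiers below are hypothetical stand-ins for the manual database search, the keyword refinement step, and the known primary studies:

```python
def refine_until_match(search, refine, query, known_primary, threshold=0.7):
    """Re-run the trial search, refining the query until at least
    `threshold` of the known primary studies appear in the result."""
    while True:
        result = search(query)
        hits = sum(1 for paper in known_primary if paper in result)
        if hits / len(known_primary) >= threshold:
            return query, result
        query = refine(query)

# Hypothetical stand-ins for the manual steps in Figure 9.
known = {"paper-A", "paper-B", "paper-C", "paper-D"}
queries = {
    "q1": {"paper-A", "paper-X"},             # 1/4 known papers found -> refine
    "q2": {"paper-A", "paper-B", "paper-C"},  # 3/4 known papers found -> accept
}
final, found = refine_until_match(
    search=lambda q: queries[q],
    refine=lambda q: "q2",
    query="q1",
    known_primary=known,
)
print(final)  # -> q2
```

In the thesis the search and refinement were of course performed by hand against the databases; the loop above only makes the 70% stopping criterion explicit.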
5.2 Database Selection
We selected literature from three databases: Scopus, ISI Web of Science, and Inspec. They were selected as the reference databases since they cover a broad range of engineering topics and contain a huge number of records.
5.3 Search criteria
Once we searched the selected databases, inclusion and exclusion criteria helped us select the papers relevant to our research. We consider full papers from peer-reviewed journals, conferences, and workshops from 2000 to 2013. Besides, we exclude studies that are not explicitly related to software re-engineering, or to software quality in the context of re-engineering:
Inclusion criteria:
- English peer-reviewed studies related to the research questions.
- Studies that focus on software re-engineering.
- Studies that focus on software quality related to re-engineering.
- Studies from 2000 to 2013.
Exclusion criteria:
- Studies that are not in English.
- Studies that are not related to the research questions.
5.4 Create search string
From the analysis of Research Question 1, we first found existing frameworks through the literature review, and then compared and analyzed each framework to find its weaknesses and drawbacks.
We defined the initial keywords as "software", "reengineering", "quality driven", and "process", and we limited the search to the domain of software engineering. Here, software engineering means "(1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1)." [72]
For each defined keyword, we found related synonyms, as shown in Table 7. At first, we searched for the synonyms in a thesaurus website. We also searched for related keywords in the Software Engineering Body of Knowledge [72].
Table 7: Synonyms of selected keywords

Software: application, program
Reengineering: reconstitution, reconstruction, recreation
Quality driven: non-functional based
Process: framework, system, procedure
Based on these keywords and related synonyms, we formulated the first search string as:
(software OR program OR application) AND (reengineering OR reconstitute OR
reconstruction OR recreation) AND (“quality driven” OR “non functional based”) AND
(process OR framework OR system OR procedure)
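A search string of this shape can be generated mechanically from the keyword groups in Table 7. The sketch below is our own illustration (not a tool used in the thesis): it quotes multi-word terms as phrases, joins each group with OR, and joins the groups with AND:

```python
def build_search_string(groups):
    """Turn keyword groups into a boolean search string."""
    def term(t):
        # Multi-word terms are quoted so databases treat them as phrases.
        return f'"{t}"' if " " in t else t
    clauses = ["(" + " OR ".join(term(t) for t in group) + ")" for group in groups]
    return " AND ".join(clauses)

# Keyword groups taken from Table 7 (keyword plus synonyms).
groups = [
    ["software", "program", "application"],
    ["reengineering", "reconstitute", "reconstruction", "recreation"],
    ["quality driven", "non functional based"],
    ["process", "framework", "system", "procedure"],
]
print(build_search_string(groups))
```

The printed string matches the first search string above; changing the groups regenerates the string consistently when keywords are refined.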
Table 8: Search result for the first search string

Scopus: 132 results, 30 relevant. Keywords: architecture [111], evolution [180], quality aspects [71], reverse engineering [122], legacy system [44], hybrid re-engineering [44], quality factor [186]
Inspec: 16 results, 13 relevant. Keywords: refactoring [127], representation [127]
ISI: 4 results, 4 relevant. Keywords: transformation [34], migration [122]
We also found some related synonyms for these keywords:
- Refactoring: round-trip engineering
- Legacy: aged software
Moreover, some results were not related to our research questions, so we needed to exclude terms such as "Quality of Service" and "Image reconstruction".
Thus we built the second search string as:
(software OR "legacy system" OR "aged software") AND (reengineering OR reconstitute OR reconstruction OR recreation OR "architecture evolution" OR "reverse engineering" OR "round-trip engineering" OR refactoring OR "hybrid re-engineering" OR "software representation" OR "software transformation" OR "software migration") AND ("quality driven" OR "non functional" OR "quality aspects" OR "quality factor") AND NOT ("business process reengineering") AND NOT ("quality of service") AND NOT ("image reconstruction")
Table 9: Search result for the second search string

Scopus: 73 results, 50 relevant
Inspec: 50 results, 35 relevant
ISI: 18 results, 14 relevant
Total relevant: 81 (18 overlapped)

As shown in Table 9, this result fulfills our expectation; the search result is shown in Appendix A, Table 74.
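The totals in Table 9 follow from de-duplicating the per-database result sets. A sketch with invented paper identifiers, sized to match the table (50, 35, and 14 relevant papers, 18 duplicates in total):

```python
# Hypothetical ID sets sized like Table 9.
scopus = {f"p{i}" for i in range(50)}       # 50 relevant papers
inspec = {f"p{i}" for i in range(40, 75)}   # 35 relevant, 10 shared with Scopus
isi = {f"p{i}" for i in range(42, 50)} | {f"q{i}" for i in range(6)}  # 14 relevant

total = len(scopus | inspec | isi)                       # unique papers
overlap = len(scopus) + len(inspec) + len(isi) - total   # duplicate hits
print(total, overlap)  # -> 81 18
```

Duplicates are counted with multiplicity here (a paper found in all three databases contributes two duplicate hits), which is how 50 + 35 + 14 = 99 raw hits reduce to 81 unique papers with 18 overlapped.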
5.5 Weaknesses of existing re-engineering frameworks
As the re-engineering of legacy systems has played an important role in the software industry for the last few decades, some re-engineering cases focused on the analysis and migration of systems written in traditional programming languages, like Fortran, AWK, and COBOL, at the code level [57] [63] [17]. Nowadays, however, interest in software re-engineering increasingly aims at improving quality through applying quality driven re-engineering processes at a higher level of abstraction in system design.
As shown in Table 74, 81 papers were selected through the literature review. We took the following steps to pick out the target papers and identify the weaknesses of software re-engineering frameworks. Firstly, we narrowed down the papers based on the "Domain" column in Table 74. Because we aim to analyze existing re-engineering frameworks, we included the keywords ("software re-engineering" OR "reconstruction" OR "restructure" OR "transformation") AND ("approach" OR "framework" OR "method" OR "process"). Through this first step, 21 papers were picked out from Table 74. Secondly, we selected all journal articles [136] [32] [148] [174] [44] [49] from these 21 papers, the reason being that journal articles are generally more trustworthy than conference papers. Thirdly, we picked out the three most-cited journal papers, [174] [44] [49], to investigate the weaknesses of existing re-engineering frameworks.
5.5.1 The first quality driven re-engineering framework
The quality attribute driven analysis framework created by Stoermer et al. [78] aims to recover the system architecture and evaluate it through a quality driven approach; the framework has been applied successfully to a modifiability scenario in an embedded system. The general steps are shown in Figure 10.
Figure 10: First Quality driven Re-engineering framework [78]
Figure 10 shows a general view of the framework steps.
Step 1: Scope Identification
Step 2: Source Model Extraction
Step 3: Source Model Abstraction
Step 4: Element and Property Instantiation
Step 5: Quality Attribute Evaluation
Step 1 sets the scope of the software architecture re-engineering. The scope identifies what architecture view [129] or parts of the system should be reconstructed, based on the quality attribute requirements from stakeholders. Unfortunately, how to identify the expected architecture view and how to connect it with the quality requirements is not made clear. If the architecture view is hard to identify, then we do not know what kind of data we need to extract from the system.
Step 2 aims to extract source elements and their relations from available resources, such as source files and system documentation. Source elements are typically functions, classes, files, and directories. Relations among elements indicate how each element connects with the others, such as call relations among functions. All extracted source elements and relations are the fundamental material for Step 3.
Step 3 identifies and applies aggregation strategies [130] [131] to abstract from the detailed source views; this step depends on the legacy system and on which system architecture views are to be recovered.
Step 4 assigns the element types that are used to conduct the system architecture analysis; the elements could be layers or tasks with relations, etc.
Step 5 is conducted with the particular quality attribute requirements and the corresponding architecture tactics [78], based on the results from Step 4. The architecture tactics are used to find mechanisms in the reconstructed views that support the required quality attributes.
As we can see, this framework has the following problems:
1. It is hard to define the expected architecture view; no definition of the expected architecture view is given.
2. The element types in Step 4 are not complete, currently supporting only layers, tasks, and relations. We do not know whether the framework supports components or subsystems of a system.
3. Quality attribute analysis is not performed. The architecture analysis is based only on quality attribute scenarios; there is no analysis of the relations among quality attribute scenarios, nor of what causes these quality scenarios.
4. There is no content introducing how the quality problems are fixed.
5. There is little information describing how the quality attribute evaluation of the architecture design in Step 5 is conducted.
5.5.2 The second quality driven re-engineering framework
In order to facilitate the reuse of the software components of a legacy system by recovering the system architecture, Jiang Guo [49] created an object-oriented re-engineering framework; the general processes of the framework are shown in Figure 11.
Figure 11: Second Quality driven Re-engineering framework [49]
Figure 11 shows that this re-engineering method consists of three main steps for legacy systems [49]:
- Extract dependency and control information
- Extract objects
- Reconstruct the system
Before we start to extract dependency and control information from the legacy system, some preparations should be made, for example, identifying the functional decomposition design structures and figuring out what programming language is used in the legacy system.
Step 1 of the framework extracts dependency and control data of the legacy system, for instance, the work flows and interactions of specific components. This step is used to gain a better understanding of the system. The problem with this step is that we cannot make sure all dependency and control data can be gathered, and different people have different understandings of the dependency and control data of the system. So more information is required, such as what dependency and control data can be retrieved and how to retrieve it.
Step 2 retrieves data about different objects, like functions, classes, layers, and components, and the relations among them, once all dependency and control information of the legacy system has been gathered. This data is the fundamental material for reconstructing the system architecture. However, for a legacy system that contains a large number of function points, it is hard to gather data on every object related to the architecture. The problem is that there is no clear goal for the data collection: we should know exactly what data or objects to extract from the legacy system, instead of gathering everything without a goal.
Step 3 recovers the architecture view of the legacy system based on the extracted data. However, although the architecture of the legacy system is recovered, there is no detailed information about how to conduct the architecture transformation, nor about what kinds of quality problems were found after analyzing the recovered architecture.
In conclusion, the potential problems of this framework are as follows:
1. Little information about how to collect dependency and control information in a legacy system.
2. No clear goal for data extraction; the viewpoint of the expected architecture is not defined.
3. The process to implement the new architecture design is not defined.
4. Little information on how to perform architecture analysis aimed at finding quality problems in the legacy system.
5. No description of how to conduct the architecture transformation.
5.5.3 The third quality driven re-engineering framework
The framework published by Tahvildari et al. [43] supports non-functional requirements such as performance and maintainability to guide the re-engineering processes. The system architecture transformation is driven by the quality requirements from stakeholders, and an evaluation at each transformation step determines whether the specific qualities for the system are achieved [43]. The general processes of the framework are shown in Figure 12:
Figure 12: Third Quality driven Re-engineering framework [43]
Figure 12 illustrates the major processes of the quality driven re-engineering:
Phase 1: Requirement analysis
Phase 2: Model analysis and source code analysis
Phase 3: Architecture selection and transformation
Phase 4: System evaluation
Phase 1 identifies concrete re-engineering goals for the system. The criteria are specified for the expected re-engineered system, for instance, better response performance, higher efficiency, etc. This process works well for single, independent quality problems in a system. However, if a quality problem is connected with other quality problems, this step cannot identify the correct goals. We have to analyze the relations among quality attributes and find the root-cause problems. Once the root-cause problems are fixed, the other related quality problems can be resolved automatically, without introducing new quality problems into the system.
Phase 2 captures the system design, architecture, and relations between different elements in order to understand the legacy system, by analyzing the source code files or system documentation. During this phase, data related to the system architecture can be extracted, and understanding the legacy system contributes to finding the quality problems in the system. The problem in this phase is that the goal of the data extraction is not defined, which means we do not know exactly what data to collect. It costs a lot of time and resources to gather all architecture-related data when much of it is never used to recover the system architecture.
Phase 3 selects a target software architecture to fix design errors in the system, and then transforms the system architecture design by applying the selected architecture solution. The problem with this phase is that the architecture is selected based only on the software developers' or architects' experience; no complete analysis of candidate architectures against the stakeholders' quality requirements is made. As a result, the selected architecture might not be suitable for solving the legacy system's specific quality problems.
Phase 4 assesses the new system by checking the quality requirements one by one, in order to make sure all quality problems have been resolved.
In conclusion, the weaknesses of this framework are as follows:
1. No clear goal for data extraction; the viewpoint of the expected architecture is not defined.
2. Little information on finding quality problems; the relations among quality attributes are not analyzed.
3. No mapping between quality problems and quality attributes.
4. Lack of information describing how to implement the new architecture design.
5. Little information on selecting an appropriate architecture solution for the legacy system.
5.5.4 Summary of quality driven re-engineering framework weaknesses
Based on the analysis of the three existing quality driven re-engineering frameworks in Section 5.5, we summarized all weaknesses of existing quality driven re-engineering processes in Table 10.
Table 10: Weaknesses of existing quality driven re-engineering frameworks

Architecture reverse phase (proposed solution: reverse architecture design):
- The target architecture view is not clearly defined while recovering the system architecture from source code. Found in [78] [49] [43].

Architecture transformation phase (proposed solution: architecture decision making, including quality attribute tradeoff analysis, architecture comparison, and evaluation):
- No tradeoff analysis for handling multiple quality attributes. Found in [78] [49] [43].
- No architecture transformation at all. Found in [78].
- Lack of information on the system architecture transformation processes. Found in [49] [43].
- No detailed information on how to evaluate an architecture solution. Found in [78] [49].
- No information on how to compare or evaluate different architecture solutions. Found in [43].

Proposed solution: bottleneck identification based on quality requirements:
- Little information explaining why quality problems happen and where the problems are found in the legacy system. Found in [78] [49] [43].
- No introduction on how to map quality requirements from stakeholders to quality attributes. Found in [78] [49] [43].

Forward engineering phase (proposed solution: architecture refactoring):
- Lack of information describing how to implement the new architecture design. Found in [78] [49] [43].
5.6 ARAR framework design
The main idea of the ARAR framework is to fix root-cause quality problems in software re-engineering from the architecture point of view, in order to resolve the weaknesses listed in Table 10. The ARAR framework includes four phases: the Architecture reverse (A) phase, the Root cause analysis (R) phase, the Architecture selection (A) phase, and the Refactoring (R) phase. The name ARAR is taken from the first letters of the four phases. The processes of the framework are illustrated in Table 11.
Table 11: Processes in the framework

Phase 1, Architecture Reverse: recover the architecture view of the legacy system.
Phase 2, Root cause Analysis: identify root-cause problems through a series of processes defined in this phase.
Phase 3, Architecture Selection: redesign a new architecture to fix the root-cause problems found in the previous phase.
Phase 4, Refactoring: implement the new architecture designed in the previous phase.
Table 12: Keywords of the four processes in the framework

Reverse Phase: ("Reverse Engineering" OR "Reverse legacy system" OR "Reverse processes") AND ("Architecture reverse" OR "Architecture reconstruction" OR "Architecture recovery")
Root cause Analysis Phase: ("Root cause techniques" OR "Bottlenecks identification technique" OR "Root cause Analysis" OR "Root cause identification")
Architecture Selection Phase: ("Architecture design" OR "Software architectures" OR "Architecture structure") AND ("Quality attributes" OR "Software architecture assessment" OR "Architecture evaluation")
Refactoring Phase: ("Refactoring" OR "System refactoring" OR "Architecture refactoring")
We searched for these keywords in the IEEE, ScienceDirect, Inspec, ACM, Springer, and Google Scholar digital libraries, following the search strategy defined in Figure 9. All search results are shown in Table 13.
Table 13: Selected papers for all processes

Reverse Phase (3 papers):
- IEEE: Software architecture reconstruction: a process-oriented taxonomy [79]
- IEEE: Software Quality Attribute Analysis by Architecture Reconstruction (SQUA3RE) [80]
- ScienceDirect: Comparison of software architecture reverse engineering methods [103]

Root cause Analysis Phase (7 papers):
- Google: Comparative Analysis of Nuclear Event Investigation Methods, Tools and Techniques [25]
- Google: Root Cause Analysis for Beginners [87]
- Google: Architecture-based Static Analysis of Medical Device Software: Initial Results [88]
- Google: A Resilience Engineering Approach to the Evaluation of Performance Variability: Development and Application of the Functional Resonance Analysis Method for Air Traffic Management Safety Assessment [89]
- ScienceDirect: Comparing a multi-linear (STEP) and systemic (FRAM) method for accident analysis [104]
- Google: A Statistical Comparison of Three Root Cause Analysis Tools [26]
- Google: Root Cause Analysis: A Framework for Tool Selection [90]

Architecture Selection Phase (7 papers):
- Springer: An Investigation of a Method for Identifying a Software Architecture Candidate with Respect to Quality Attributes [63]
- Springer: Consensus Building when Comparing Software Architectures [106]
- ACM: A Method for Understanding Quality Attributes in Software Architecture Structures [107]
- Springer: Characterization and Evaluation of Multi-Agent System Architectural Styles [108]
- ACM: A Quality-Driven Decision Support Method for Identifying Software [17]
- IEEE: Consolidating Different Views of Quality Attribute Relationships [81]
- Springer: Towards a Method for the Evaluation of Reference Architectures: Experiences from a Case [110]

Refactoring Phase (3 papers):
- IEEE: A survey of software refactoring [82]
- ScienceDirect: Refactoring Towards a Layered Architecture [140]
- Inspec: Object-oriented software refactoring [141]
5.6.1 Reverse Phase Design
Generally, reverse engineering is used to identify software components and their interdependencies and to produce design-level abstractions of the software [1]. It supports recapturing lost information, restructuring legacy systems, and transforming old systems to a new, more maintainable architecture [2]. Based on this support, applying reverse engineering yields benefits such as lower maintenance cost, quality improvements, facilitation of software reuse, and so on.
The main purpose of this reverse phase is to recover the existing architecture of a legacy system. This architecture reconstruction serves as a starting point for re-engineering the system towards a desired architecture in order to fix the existing performance problems [79]. Moreover, the reverse phase requires the involvement of people who are familiar with the system, such as developers, maintainers, designers, and architects [4].
Reverse engineering is the process of analyzing a subject system, first to identify the system's components and their interrelationships, and second to create representations of the system in another form or at a higher level of abstraction [2]. Reverse engineering is the first part of the software re-engineering processes defined by SEI [69] in Figure 2. In 2006, Stringfellow et al. [103] compared reverse engineering approaches to find the relations among them. The general reverse engineering processes are introduced in Section 2.2.2, but these general processes cannot resolve the reverse engineering weaknesses in Table 10. Therefore, we improve the general reverse processes in our ARAR framework; the detailed steps are as follows:
5.6.1.1 Data Extraction
While gathering data from the existing artifacts, the architecture view of the legacy system, a representation of the whole system from the perspective of a related set of concerns, is uncovered gradually. The extracted data is the foundation for building the repository table in the next section.
5.6.1.1.1 Define Viewpoint
Software architecture is defined by IEEE as "the fundamental organization of a system embodied in its components, their relationships to each other and the environment, and the principles guiding its design and evolution" [5].
The software architecture can be viewed from different viewpoints, and different people have different understandings of, and concerns about, the system. [5][6]
Table 14: View Point Definition
Definition: View Point
A specification of the conventions for constructing and using a view. A pattern or
template from which to develop individual views by establishing the purposes and
audience for a view and the techniques for its creation and analysis. [5]
After the view and viewpoint are defined, it is clear what kinds of data should be
extracted and what architecture view can be achieved. Further, the expected architecture
view can be established from the extracted data. Without a clear objective, a lot of effort
could be spent on extracting information and generating architectural views that are not
helpful and serve no purpose [4].
5.6.1.1.2 Gather Data
By manually analyzing the source artifacts of the system, such as source code,
instruction files, and execution traces, and by writing specific scripts in, e.g., Bash, AWK,
or Perl, the elements of interest and the relations among them can be identified and
organized to generate fundamental views of the system. The scripts improve the efficiency
of the data gathering, as well as personal productivity.
A list of typical elements and the relations among them, extracted from a system, is
shown in Table 15:
Table 15: Typical elements and relations
Source element | Relation | Target element | Description
File | contains | Function | A function is defined in a file.
File | includes | File | A file includes another file.
Function | calls | Function | A function calls other functions.
Based on the elements and the relations among them, some views of a system can be
generated. For example, the "includes" relations between files show the dependencies of
files in the system and the directory structure of the system. The directory structure is very
important for identifying the components of the system, especially in large projects. The
"calls" relation between functions uncovers how different functions interact with each other.
Moreover, this relation plays a key role in analyzing the workflow of each system service.
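Extracting such element and relation triples can be sketched in a few lines. This is a minimal illustration, not the thesis authors' actual scripts (they used Bash and AWK); the regular expressions are simplifying assumptions that only match plain C-style includes and function definitions, and the file name and source text are hypothetical:

```python
import re

# Simplified patterns (assumption: plain C-style source without macros,
# comments, or multi-line declarations).
INCLUDE_RE = re.compile(r'#include\s+"([^"]+)"')
FUNC_DEF_RE = re.compile(r'^\w[\w\s\*]*?(\w+)\s*\([^;{]*\)\s*\{', re.MULTILINE)

def extract_relations(filename, source):
    """Return (source element, relation, target element) triples as in Table 15."""
    triples = [(filename, "includes", inc) for inc in INCLUDE_RE.findall(source)]
    triples += [(filename, "contains", fn) for fn in FUNC_DEF_RE.findall(source)]
    return triples

code = '#include "util.h"\nint main(void) {\n  return 0;\n}\n'
print(extract_relations("main.c", code))
# → [('main.c', 'includes', 'util.h'), ('main.c', 'contains', 'main')]
```

Identifying the "calls" relation requires inspecting function bodies, which the thesis did through manual code analysis.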
Static analysis gathers data solely by analyzing the existing artifacts of a system. The
information above is gathered through such static analysis of the existing artifacts.
However, if the legacy system is available and executable, we can apply dynamic
analysis [1] to extract dynamic information from the run-time system. Dynamic analysis
gathers data by observing the real running system. Combining static analysis [88] and
dynamic analysis to collect data helps generate a more complete and accurate architecture
representation [4]. For instance, some components of a system are loaded at run-time, or
the run-time configuration of a system changes over time depending on the loaded
configuration files. Under such conditions, both static and dynamic analysis should be
used to extract data from the system.
In addition, besides the manual way of extracting data described above, four typical
dynamic tools for architecture representation are SNiFF+ [7], Rigi [8][9], Imagix 4D [7],
and Mooze [10][11].
However, tool-based data extraction has drawbacks. A tool does not always support all
existing programming languages, and not all tools are open source [7]. Besides, in some
instances, such as embedded systems, it is meaningless to output information from code
instrumentation [4].
5.6.1.2 Knowledge Inference
The knowledge inference section aims to record the extracted data in order to build a
repository table.
5.6.1.2.1 Data Analysis
Focusing on the defined viewpoint, all the extracted data should be reviewed against
the existing artifacts of the system in order to make sure that all expected information has
been collected.
5.6.1.2.2 Repository Establish
Analyze the results of the data gathering, and then build a repository containing the
elements and the relationships between them.
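As a sketch, the repository can be a simple in-memory mapping keyed by relation type; the triples below are hypothetical examples in the style of Table 15:

```python
def build_repository(triples):
    """Group (source, relation, target) triples by relation type, mirroring
    the repository table of elements and relationships."""
    repo = {}
    for source, relation, target in triples:
        repo.setdefault(relation, []).append((source, target))
    return repo

triples = [("main.c", "includes", "util.h"),
           ("main.c", "contains", "main"),
           ("main", "calls", "helper")]
print(build_repository(triples))
# → {'includes': [('main.c', 'util.h')], 'contains': [('main.c', 'main')], 'calls': [('main', 'helper')]}
```

In practice the repository could equally be a database table; the grouping by relation is what the later visualization step consumes.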
5.6.1.3 Architecture Representation
The purpose of this section is to visualize the expected architecture views of a system.
All the data gathered in the repository table should be used to visualize the defined
architecture views of the system.
5.6.1.3.1 Architecture Visualization
Gallagher et al. [12] have presented a framework addressing several key areas of
software representation: static representation, dynamic representation, views, navigation,
task support, implementation, and visualization [12]. In our framework, however, we focus
on manual architecture visualization using UML tools.
Visualize the elements and their relationships in UML diagrams: check all the
extracted data in the repository table, and visualize all the information in the
repository table in order to obtain all expected architecture views [12].
Group and combine the related workflows or functions into specific components,
layers, or subsystems [12].
Draw and document all the diagrams related to the defined architecture views.
5.6.2 Root Cause Analysis Phase Design
Root Cause Analysis (RCA) [87][88] is the core of the ARAR framework; we aim to
clearly identify and resolve the root-cause quality problems based on the reversed
architecture. In the literature there are various definitions of RCA; the most reasonable
seems to be that RCA is a methodology, a set of working methods based on the same
approach to thinking through why things go wrong [25]. There are different kinds of RCA
methods, such as the cause-and-effect diagram [26], the interrelationship diagram [90],
the current reality tree [26][90], FRAM [104], and STEP [104]. Moreover, RCA has been
applied in different areas, such as nuclear plants [25] and airport security systems [89].
To solve a problem, one must first recognize and understand what is causing it. If the
real cause of the problem is not identified, then one is merely addressing the symptoms and
the problem will continue to exist [25]. Thus, the definition of the root cause is of utmost
importance. Unfortunately, there are no universally accepted standards for the root cause
[25]; it can be anything that anybody wants it to be. Analysis of the root cause can seem an
endless exercise, because no matter how deep you go there is always at least one more cause
to look for [25]. In ARAR, however, we aim to identify root-cause quality problems such
that, once they are fixed, no related problems will occur. In addition, each root-cause quality
problem should have a clear and understandable cause-and-effect relationship. Thus, we
defined the root cause in ARAR through events and causal factors, as shown in Table 16
and Table 17.
In the ARAR framework, we designed three steps to figure out why specific quality
attributes are not working as expected: identifying root-cause problems from the reversed
architecture, mapping the root-cause problems to quality attributes, and prioritizing the
mapped quality attributes to prepare for solving the root-cause quality problems in the
next phase.
Before going into the details of this phase, two concepts need to be clarified:
1. Quality attributes: all quality attributes defined in the ISO 9126 standard [27],
categorized into functionality, reliability, usability, maintainability, efficiency,
and portability.
2. Root cause: in the ARAR framework, the definition of a root cause consists of two
elements, event and causal factor, which are shown in Table 16 and Table 17
respectively. Based on these, we defined the root cause as shown in Table 18.
Table 16: Definition of Event
Definition – Event
An event is an action or occurrence that happened at a specific point in time relative to
the software failure or performance problem under investigation [25], based on the
reversed architecture.
Criteria:
Each event should be described as an occurrence or happening rather than a state,
circumstance, issue, conclusion, or result; i.e., "firewall broken", not "Operating
System has a bug"
Each event should be described by a short sentence with one subject and one verb;
i.e., "user input request", not "user input request and click refresh button"
Each event should describe a single, discrete occurrence; i.e., "firewall broken",
not "the setting of the Operating System changed and the firewall broke"
Each event should be quantified when possible; i.e., "the peak load is 350 requests
per minute", not "the peak load is high"
Table 17: Definition of Causal factor
Definition – Causal factor
A causal factor is a state or circumstance that affected the sequence of events [25].
Criteria:
Each condition describes states or circumstances rather than happenings or
occurrences; i.e., "there is a hole in the wall", not "the wall broke"
Each condition should be quantified when possible; i.e., "after the system has been
running for 25 hours", not "after the system has been running for more than one day"
Table 18: Definition of Root cause
Definition – Root cause
A root cause is a causal factor that accounts for a sequence of events and causal factors
and that, if resolved, will prevent (or minimize the risk of) the recurrence [25] of the same
and similar unexpected events in the legacy system.
A causal factor can be defined as a root cause when:
No further events or causal factors can be derived from it within the reversed
architecture view.
It provides evidence for analysts and management to understand what actions must
be implemented as a solution and who in the organization will take the action [25].
It can be resolved with known techniques and knowledge.
5.6.2.1 Scenario Formulation
In this step, all scenarios are formulated based on the reversed architecture; the
definition of a scenario is given in Table 19.
Table 19: Scenario Definition
Definition – Scenario
A scenario is a complete description that contains one single event and its related
conditions in order to identify one problem of the legacy system.
Criteria:
In one scenario, the event can have several conditions.
Across different scenarios, one condition can trigger several events.
Checkpoints of a scenario:
1. Completeness
Are the event and its related conditions written at the same level of detail?
Is any related condition missing?
2. Correctness
Does the scenario conflict with other scenarios?
Is the scenario defined based on the reversed architecture?
3. Unambiguity
Is the scenario written in clear and unambiguous language?
Examples:
If the server receives 355 requests per second from clients after 08:00 am CET,
the average response time from server to client increases from 5 ms to 5 s.
The server application crashes if the server receives a "#/*/#" or "(*)" string
within a client request.
If the user refreshes the page after clicking the "OK" button, the connection to
the server is denied.
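To make the definition concrete, a scenario can be represented as one event plus its related conditions. The completeness check below is only a minimal mechanical sketch of the checkpoint list (correctness and unambiguity still need human review), and the field contents echo the first example above:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One scenario per Table 19: a single event and its related conditions."""
    event: str
    conditions: list

    def is_complete(self):
        # Minimal mechanical checkpoint: one event and at least one condition.
        return bool(self.event) and len(self.conditions) > 0

s = Scenario(
    event="average response time from server to client increased from 5 ms to 5 s",
    conditions=["server receives 355 requests per second", "after 08:00 am CET"],
)
print(s.is_complete())  # → True
```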
In order to formulate the scenarios, the problems need to be identified first. These
problems are elicited from stakeholders, since they have knowledge of and requirements for
the system. After that, scenarios are formulated from the identified problems at the level of
the reversed architecture. In the end, all scenarios are confirmed with the stakeholders. Here,
stakeholders are individuals and organizations that are actively involved in the project or
that are affected by the execution of the system.
5.6.2.1.1 Problem Elicitation
To solve a problem, one must first recognize and understand what the problem is. If
the problem is not identified clearly, then one is merely addressing the symptoms and the
problem will continue to exist [143]. For this reason, eliciting and understanding the problem
itself is of the utmost importance. Problem elicitation is a process for understanding the
problems of the legacy system from the stakeholders' perspective [143]. The output of this
step is a set of problem statements that describe the problems currently existing in the system.
5.6.2.1.2 Scenario Formation
Scenario formation generates events and related conditions from the problem statements
based on the reversed architecture view. New requirements attached to the elicited problems
are also documented here.
5.6.2.1.3 Confirmation
This step confirms that the scenarios are valid and represent the stakeholders'
expectations. The process can only proceed once all stakeholders agree on the formulated
scenarios; otherwise, the scenarios need to be refined and confirmed with the stakeholders
again. Moreover, the accurate definition of keywords is a criterion that needs to be
confirmed with the stakeholders in this step.
5.6.2.2 Root Cause Identification
In this phase, we apply FRAM [26] to analyze the root-cause problems. First, the
functional units of each scenario are identified based on the reversed architecture view.
Then, the FRAM relationships among the functional units are analyzed. These relationships
are used as input to draw the FRAM diagrams. Next, the scenarios are mapped to different
quality attributes. Finally, the diagrams are analyzed to generate root-cause statements.
5.6.2.2.1 Identify Functional Unit
A functional unit is defined as an action of an event or a condition of the system [25].
The action can be categorized as [25]:
Data transfer: moving data from one side to one or more other sides.
i.e., receive user input data; send out control signal
Data transformation: converting data from an original format to a target format.
i.e., calculate max input size; derive average temperature
Data storage: saving information on physical storage.
i.e., store customer order; record request time
Data retrieval: extracting expected data from physical storage.
i.e., list handled requests; retrieve latest user information update
Functional units can not only be characterized as being part of the event or part of the
condition; they can also be differentiated as foreground or background functional units,
which are defined in Table 20 and Table 21 respectively [25].
Table 20: Foreground Functional Unit definition
Definition – Foreground Functional Unit [89]
In each scenario, a functional unit that describes the problem itself is a foreground
functional unit. A foreground functional unit is defined at the sharp end of the scenario.
Example
Scenario: The server application crashes if the server receives a "#/*/#" or "(*)" string
within a client request.
Foreground functional unit: server-side receiver.
Table 21: Background Functional Unit definition
Definition – Background Functional Unit [89]
A background functional unit provides support and means (input, control, resources,
and preconditions) for the performance of the set of foreground functions [2]. A
background function may not be explicitly mentioned in the scenario, but it can be
identified within the reversed architecture view through the different FRAM
relationships, which we present below. A background functional unit is defined at the
blunt end of the scenario.
Example
Scenario: The server application crashes if the server receives a "#/*/#" or "(*)" string
within a client request.
Background functional unit: client-side sender.
Furthermore, the relationship between the sharp end and the blunt end [24] is shown in
Figure 13. The figure shows how the performance variability of people at the sharp end is
determined by a host of factors.
[Figure 13 shows layers of influence from the blunt end to the sharp end: morals and
social norms, government, regulator, company, management, and local workplace factors
all shape the unsafe acts that occur at the sharp end. "Blunt end" factors are removed in
space and time; "sharp end" factors are at work here and now.]
Figure 13: The Sharp end - Blunt end relationship [24]
People at the sharp end are the persons working at the time and place where the events
and conditions happen, and therefore where the problem occurs. At the blunt end one finds
the people whose actions, at another time and place, have an effect on the people at the
sharp end. [24]
The distinction between foreground and background is relative: a background
functional unit becomes a foreground functional unit when it is defined at the sharp end of a
scenario. Foreground functional units should be identified in this step, while background
functional units [89] can be added in a later step.
5.6.2.2.2 Identify FRAM Relations
The intention of this step is to find all background functional units related to a
foreground functional unit. Table 22 defines the five relationships used in this framework:
input, output, precondition, resource, and control.
Table 22: FRAM Relationship definition
Definition – FRAM Relationship [24]
1. Input (I): that which the function processes or transforms or that which starts the
function;
2. Output (O): that which is the result of the function, either a specific output or
product, or a state change;
3. Preconditions (P): conditions that must exist before a function can be
executed;
4. Resources (R): that which the function needs or consumes to produce the output;
5. Control (C): how the function is monitored or controlled.
Table 23: FRAM relationship
Functional Unit X
Input | Output | Precondition | Resource | Control
[Figure 14 depicts a single functional unit with its five connection points: Input (I),
Output (O), Precondition (P), Resource (R), and Control (C).]
Figure 14: Functional Unit Relations
Table 23 and Figure 14 show an example diagram of the functional unit. For each
functional unit, all these five relationships should be described.
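The five-aspect description of Table 22 and Table 23 can be captured in a small record type. The sketch below is illustrative; the concrete aspect values are hypothetical entries for the server-side receiver example, not data from the thesis case study:

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalUnit:
    """One FRAM functional unit with its five relationship aspects (Table 22)."""
    name: str
    inputs: list = field(default_factory=list)        # I: what starts the function
    outputs: list = field(default_factory=list)       # O: results or state changes
    preconditions: list = field(default_factory=list) # P: must hold before execution
    resources: list = field(default_factory=list)     # R: consumed to produce output
    controls: list = field(default_factory=list)      # C: how it is monitored

receiver = FunctionalUnit(
    name="Server side receiver",
    inputs=["client request"],
    outputs=["parsed request"],
    preconditions=["connection established"],
    resources=["socket buffer"],
    controls=["input validation policy"],
)
print(receiver.name, receiver.inputs)
```

Filling in all five aspects for every functional unit gives exactly the row-per-aspect structure that the FRAM diagram of the next step is drawn from.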
5.6.2.2.3 Draw FRAM Diagram
Based on the identified functional units and their relationships, we can draw the FRAM
Diagram in this step.
5.6.2.2.4 Quality Attributes Mapping
All scenarios can be mapped to different quality attributes based on ISO 9126. This
mapping relationship helps us to prioritize different quality attributes.
5.6.2.2.5 Diagram Analysis
This step figures out the root cause of each scenario by analyzing the related FRAM
diagram. For each FRAM diagram, we weight and prioritize the relationships in order to
find the causal factors, and finally generate the root-cause statements.
5.6.2.3 Quality Attributes Prioritization
The main idea of this section is to identify the quality attributes of the related scenarios.
The formulated scenarios and the quality-mapping table are used as input to prioritize the
quality attributes.
5.6.2.3.1 Prioritization of Quality Attributes
The prioritization list aims to identify the quality attributes that affect the legacy system
the most; that is, improving such a quality attribute improves the system to the greatest
extent.
5.6.3 Architecture Selection Phase Design
Johansson et al. [13] have shown that different stakeholders tend to have different
views of the importance of various quality requirements for a system, and that the differing
experience of software developers may also lead to different interpretations of the
strengths and weaknesses of different architectures.
Moreover, a suitable architecture for a system is not decided by functional
requirements alone, but in a larger context by quality attributes [14][15][16]. A system
usually has many quality-attribute requirements, and a diversity of potential architecture
solutions is available even for a single quality attribute, so it is not easy to decide which
architecture solution is most appropriate for a system. In some projects, the decision is
often made on intuition, especially by senior software developers or architects [17].
It is usually very hard to create a software architecture for a system, or part of a system,
such that the architecture fulfills the desired quality requirements [17]; as a result, the
selected architecture solution is sometimes not well suited to fulfilling the users' quality
requirements. To address this problem, Svahnberg et al. [63][17] created a structured way
to evaluate software architectures based on quality attributes and to analyze the benefits
and liabilities of different architectures. This method helps stakeholders select a suitable
architecture design and ensures that the selected architecture reflects the expected quality
requirements. In order to fix the weaknesses of the existing re-engineering frameworks
listed in Table 10, we applied this architecture selection method in our ARAR framework;
the detailed steps are illustrated as follows.
5.6.3.1 Identify software architecture solution candidates
The intention of this section is to design possible software architecture solutions based
on the quality attributes list, which is the input to this phase from the root-cause analysis
phase. Besides, if there are new requirements for the legacy system, they should be included
in the architecture candidate design. In the previous phase, the quality attributes were
connected with all the root-cause problems, so by addressing the existing root-cause
problems in the architecture candidate designs, the related quality attributes can be
improved.
We follow the architecture design method created by Hofmeister [16] to produce
architecture candidates. Stakeholders are invited to join the detailed architecture design so
that they can understand the differences and similarities when two architecture candidates
are compared pairwise. The actual number of architecture candidates depends on the system
to be developed and on the system domain. Moreover, in the ARAR framework the Analytic
Hierarchy Process (AHP) [19] is used to prioritize the architecture candidates, and the AHP
method is less suitable for pairwise comparison of a large number of variables than of a
small number [19]. Considering the calculation complexity [19], we recommend a small
number (fewer than ten) of architecture candidates. In addition, designing a large number of
architecture candidates consumes substantial financial and human resources. Various
methods, e.g., [15][16][18], are available for building the expected architecture candidate
designs.
For each architecture candidate, there is no constraint on the level of granularity of the
system, such as system, module, subsystem, component, or layer [16]. However, once the
granularity is decided for the first architecture candidate, the other candidates should follow
the same granularity, so as to facilitate the comparison among the different candidates. For
example, if the first architecture candidate focuses on a component-level design while the
second is created at the layer level of the system, it is very hard to compare the two
architecture designs.
5.6.3.2 Architecture solution selection
The purpose of this section is to provide a reasonable way to select a best-fit
architecture solution through pairwise comparisons among all designed software
architecture candidates with respect to the quality attributes.
The section consists of a sequence of sub-steps: data collection, quality attributes
prioritization, and architecture selection. These steps contribute to a more confident
selection among the architecture candidates, as presented below.
5.6.3.2.1 Data Collection
This section aims to collect data from stakeholders; the data is used to evaluate how
well a certain architecture candidate supports the different quality attributes, involving two
basic tasks [17]:
A comparison of different architecture solutions for a specific quality attribute.
A comparison of different quality attributes for a specific software architecture
candidate.
Analytic Hierarchy Process (AHP): The AHP method has been applied successfully in
software engineering [19]. Generally, AHP enables a pairwise evaluation of all elements
according to a certain scale, as illustrated in Table 24.
Table 24: Scale for pair-wise comparison using AHP
Relative intensity | Definition | Explanation
1 | Equally valuable | The two variables are of equal value
3 | Slightly more valuable | One variable (row in table) is slightly more valuable than the other (column in table)
5 | Highly more valuable | One variable is highly more valuable than the other
7 | Very highly more valuable | One variable is very highly more valuable than the other
9 | Extremely more valuable | One variable is extremely more valuable than the other
2, 4, 6, 8 | Intermediate values | Used when compromising between the other numbers
The comparisons are transferred into an n×n matrix, where n is the number of elements,
together with the reciprocal values. A detailed and more extensive description of AHP and
its application process can be found in related research papers [19][20][17].
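The priority vector of such a reciprocal matrix can be approximated without a full eigenvector computation. The geometric-mean method below is one common approximation, offered as a sketch rather than the exact procedure of [17], and the example matrix is hypothetical:

```python
import math

def ahp_priorities(matrix):
    """Approximate the AHP priority vector via the geometric-mean method.

    `matrix` is an n x n pairwise comparison matrix on the 1-9 scale,
    with matrix[i][j] == 1 / matrix[j][i] (reciprocal values).
    """
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical example: candidate A is slightly more valuable (3) than B
# and highly more valuable (5) than C for some quality attribute.
m = [
    [1,   3,   5],
    [1/3, 1,   3],
    [1/5, 1/3, 1],
]
weights = ahp_priorities(m)
print([round(w, 3) for w in weights])  # → [0.637, 0.258, 0.105]
```

The resulting weights sum to one and rank the candidates for that quality attribute.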
Participants Selection: The participants who provide data on the architecture
candidates and quality attributes should be familiar with the legacy system, because the
data collected from them determines the selection of the architecture solution. If possible,
they should come from different positions, e.g., developer, designer, architect, and tester.
5.6.3.3 Quality attributes prioritization
According to the number of root-cause problems for each quality attribute, a quality
attributes prioritization list can be generated. For example, assume there are 12 root-cause
problems in a legacy system and that three quality attributes, Efficiency, Reliability, and
Usability, are affected according to the ISO 9126 standard. If efficiency corresponds to 6
root-cause problems, reliability to 3, and usability to 3, then the prioritization list is
efficiency, reliability, and usability with the values 0.5, 0.25, and 0.25, as shown in
Table 25.
Table 25: Prioritized Quality Attributes example (PQA)
Quality Attribute | Priority
Efficiency | 0.5
Reliability | 0.25
Usability | 0.25
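The count-based priorities of this example amount to each attribute's share of the root-cause problems, which can be computed directly:

```python
from collections import Counter

def prioritize(root_cause_attrs):
    """Priority of each quality attribute = its share of the root-cause problems."""
    counts = Counter(root_cause_attrs)
    total = sum(counts.values())
    return {attr: n / total for attr, n in counts.items()}

# The 12-problem example behind Table 25: one entry per root-cause problem.
attrs = ["Efficiency"] * 6 + ["Reliability"] * 3 + ["Usability"] * 3
print(prioritize(attrs))
# → {'Efficiency': 0.5, 'Reliability': 0.25, 'Usability': 0.25}
```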
In addition, the quality prioritization list can be sent to all stakeholder participants who
are familiar with the system. After receiving their feedback, the median value per attribute
can be used for the quality attributes prioritization; the median filters out the most positive
and most negative data, helping to ensure reasonable values.
Finally, the averages of these two quality prioritization lists are calculated as the final
values.
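The aggregation described above (the median over the stakeholders' feedback per attribute, then the average of the two lists) can be sketched as follows; the feedback values are hypothetical:

```python
from statistics import median

def final_priorities(count_based, stakeholder_feedback):
    """Median of stakeholder scores per attribute, averaged with the
    count-based priorities, yields the final values."""
    return {attr: (median(stakeholder_feedback[attr]) + p) / 2
            for attr, p in count_based.items()}

counts = {"Efficiency": 0.5, "Reliability": 0.25, "Usability": 0.25}
feedback = {"Efficiency": [0.6, 0.5, 0.4],
            "Reliability": [0.2, 0.3, 0.3],
            "Usability": [0.2, 0.2, 0.3]}
print(final_priorities(counts, feedback))
```

For three stakeholder scores the median is simply the middle value, so a single extreme score cannot drag an attribute's priority away from the consensus.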
5.6.3.4 Architecture selection
This step uses the collected data to select a suitable architecture candidate. The method
published by Mikael Svahnberg et al. in 2003 [17] is applied to conduct the architecture
solution selection.
5.6.4 Refactoring Phase Design
In ARAR, the refactoring phase is the implementation of the design that comes from the
architecture selection phase. This is forward engineering [69] to implement the new system.
5.7 Validity Threats
In this section, three potential threats to the validity of this literature review are
identified: publication bias, threats in selecting primary publications, and threats in
selecting the number of papers to analyze.
5.7.1 Publication bias
Publication bias refers to the general problem that positive research outcomes are more
likely to be published than negative ones [30]. To decrease the effect of this threat, two
measures were taken while searching for publications.
We did not restrict the sources of knowledge to a particular publisher, journal, or
conference; thus the scope of the field is covered sufficiently to some extent.
We did not include grey literature such as technical reports, work in progress, or
unpublished or non-peer-reviewed publications [30], in order to ensure reliable
knowledge.
5.7.2 Threats to select primary publications
For this literature review it is impossible to retrieve all relevant papers about
re-engineering. We followed three strategies to reduce the probability of missing important
publications.
Refine keywords by title, author, publication date, etc.
Search in several databases, such as IEEE, Inspec, Springer, ACM, and Google
Scholar. This causes a high number of duplicate papers, which helps us identify the
primary papers.
The search processes were applied by two independent researchers with good
experience in conducting literature reviews.
5.7.3 Threats to select number of papers to analyze
After searching the databases, there are many candidate papers in the result list.
However, the time for two researchers to conduct the literature review is limited, so the
number of selected papers should be reasonable for the data collection. Therefore, the main
strategy is to estimate the time a researcher needs to read a paper, and then to select an
appropriate number of papers based on both the duration of the literature review and the
reading capability of the researchers. For instance, if each researcher can read and analyze a
paper completely in a day, then two researchers can analyze 40 papers in 20 days.
6 ARAR FRAMEWORK
There are four phases in this framework, which are shown in Figure 15.
[Figure 15 shows the flow from Start to End: (1) the Reverse Phase takes the source
code and produces an architecture view; (2) the Root Cause Analysis Phase produces the
quality attributes list, the root cause statements, and the new requirements; (3) the
Architecture Selection Phase selects the optimal architecture; and (4) Architecture
Refactoring delivers the target system.]
Figure 15: Four Phases of the Framework
The reverse phase aims to recover the existing architecture of a legacy system; the
details of this phase are given in Section 6.1. Section 6.2 presents the root cause analysis
phase, which aims to find the problems related to different quality attributes. These
problems are then addressed in the architecture selection phase, described in Section 6.3.
Finally, the selected architecture is developed in the refactoring phase, presented in
Section 6.4.
6.1 Reverse Phase
The reverse phase consists of three sections, as shown in Figure 16. Section 6.1.1
defines the viewpoint and extracts information from the legacy system based on the existing
source code files and system artifacts. In Section 6.1.2, a repository table is created from all
the data extracted in Section 6.1.1. After that, the expected architecture views are generated
from the repository table in Section 6.1.3.
[Figure 16 depicts the processes of the reverse phase (I): the source code artifacts feed
Section 6.1.1, Data Extraction (6.1.1.1 define viewpoint; 6.1.1.2 gather data); its output
feeds Section 6.1.2, Knowledge Inference (6.1.2.1 data analysis; 6.1.2.2 repository
establish), which builds the data repository of elements and relationships (e.g., File
includes File, File contains Function, Function calls Function); finally, Section 6.1.3,
Architecture Representation (6.1.3.1 architecture visualization), produces the architecture
view.]
Figure 16: The processes of reverse phase
6.1.1 Data Extraction
Data extraction contains two steps: the first is to define the viewpoint of the target
architecture diagram, and the second is to gather data from the source code files or system
artifacts based on the defined viewpoint.
6.1.1.1 Define Viewpoint
Define the viewpoint according to the viewpoint definition in Table 14. For example,
the viewpoint can be defined as a class, a function, a component, or a module.
6.1.1.2 Gather Data
1. Static analysis [88]
Step 1: prepare the existing legacy system artifacts and all source files.
Step 2: extract data from the source files and system artifacts. For example, if the
viewpoint is defined as function, then all function names and their relations can be
extracted from the system artifacts, including all source files.
2. Dynamic analysis [1]
Step 3: if the legacy system is available and executable, we can perform dynamic analysis to
extract dynamic information from the run-time system. Dynamic analysis gathers data by
observing the real running system. Combining static and dynamic analysis to collect data
helps generate a more complete and accurate architecture representation [4]. For instance,
some components of a system are loaded at run-time, or the run-time configuration of a
system changes over time depending on the loaded configuration file. In such conditions,
both static and dynamic analysis should be utilized to extract data about the system.
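To make the dynamic-analysis idea concrete, the toy Python sketch below (our illustration, not tooling used in the thesis; the function names are hypothetical) records caller/callee relations while a program runs, using sys.settrace:

```python
import sys

call_relations = set()  # (caller, callee) pairs observed at run time

def tracer(frame, event, arg):
    # Record which function invoked which on every call event.
    if event == "call":
        caller = frame.f_back.f_code.co_name if frame.f_back else "<top>"
        callee = frame.f_code.co_name
        call_relations.add((caller, callee))
    return tracer

# Hypothetical system under observation.
def build_connection():
    return "connected"

def create_project():
    return build_connection()

sys.settrace(tracer)
create_project()
sys.settrace(None)

print(sorted(call_relations))  # includes ('create_project', 'build_connection')
```

For a compiled legacy system, the same information would come from instrumentation or profiling tools rather than an in-process trace hook.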
6.1.2 Knowledge Inference
Step 1: Data analysis. Review all collected data to make sure all expected information
has been extracted from the legacy system.
Step 2: Establish the repository table. Build the repository table to store all collected data.
6.1.3 Architecture Representation
Architecture visualization: firstly, visualize the elements and relations in a UML diagram
based on the repository table. Secondly, group related functions or components into specific
layers or subsystems. Thirdly, document and generate all diagrams.
6.1.4 Reverse Phase Sample
For the reverse phase, in order to validate the correctness of the defined processes, we
created a mock project. In this example, we successfully obtained the expected architecture
view by following all processes in Figure 15. Besides, this example helps the reader gain a
better understanding of the reverse phase.
6.1.4.1 Data Extraction
Firstly, we defined the expected viewpoint as the basic function unit of the target small
legacy system, which is implemented in the C language and has 2 folders and 7 files. Then we
performed static analysis on these source artifacts, intending to extract information related to
the defined viewpoint. The result is shown in Figure 17.
For these source artifacts, we wrote some AWK scripts to remove the comments in the
source code, and we used a bash script to fetch all the function names in all source code files.
After that, the relations among functions were identified through manual code analysis of
each function. Besides, we also analyzed the file directory structure of this example system,
so that we have a general high-level view of the system components. However, we did not
perform dynamic analysis of the system, because the system is not executable. At last, after
reviewing all existing artifacts of the system, we finished the data extraction.
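The AWK and bash scripts themselves are not reproduced here; as a rough illustration of the same two steps (comment removal, then function-name collection), a Python sketch with a deliberately simplified regex might look as follows. It would need hardening for real-world C code (strings containing comment markers, macros, etc.):

```python
import re

def strip_comments(c_source):
    # Remove /* ... */ block comments and // line comments (naive: ignores strings).
    c_source = re.sub(r"/\*.*?\*/", "", c_source, flags=re.DOTALL)
    return re.sub(r"//[^\n]*", "", c_source)

def function_names(c_source):
    # Match "type name(args) {" style definitions (simplified).
    pattern = re.compile(r"\b\w+\s+(\w+)\s*\([^;{)]*\)\s*{")
    return pattern.findall(strip_comments(c_source))

sample = """
/* project creation */
int InitialProject(void) { return 0; }
int CreateProject(int id) { // calls InitialProject
    return InitialProject();
}
"""
print(function_names(sample))  # ['InitialProject', 'CreateProject']
```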
[Figure: the data extraction step applied to the sample system, showing a Client folder containing Client.cc and a Server folder containing CreateProject.cc, UpdateProject.cc, SearchProject.cc, DeleteProject.cc, ReceiveMessage.cc, and BuildConnection.cc.]
Figure 17: Data Extraction Sample
6.1.4.2 Knowledge Inference
First of all, based on the extracted data, we checked each function and its relations to make
sure the workflow is complete. Secondly, we transferred all data into a table, namely the
repository table, including all functions (the viewpoint) and their relations, as well as the
files and file relations (the file directory structure), as shown in Table 26.
Table 26: Repository table of sample example
Element | Relation | Element | Description
Folder (Client) | has | Client.cc | Client has the Client.cc file
Folder (Server) | has | CreateProject.cc, UpdateProject.cc, DeleteProject.cc, SearchProject.cc, Common.cc |
File 1 (CreateProject.cc) | Contains | F1: InitialProject(); F2: CreateProject(); |
File 2 (UpdateProject.cc) | Contains | F3: UpdateProject(); |
File 3 (SearchProject.cc) | Contains | F4: SearchProject(); |
File 4 (DeleteProject.cc) | Contains | F5: DeleteProject(); F10: DeleteOneProject(); |
File 5 (Common.cc) | Contains | F6: BuildConnection(); F7: Disconnect(); |
F2 | Calls | F1, F4, F6, F7 |
F3 | Calls | F4, F6, F7 |
F4 | Calls | F6, F7 |
F10 | Calls | F6, F7 |
F5 | Calls | F6, F7 |
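As an illustration of how such a repository can be used (a sketch, not the tooling of the thesis), the rows of Table 26 can be held as (element, relation, element) triples, with small helpers for querying call relations and for grouping functions by their containing file, which is what the architecture representation step needs:

```python
# A subset of the repository rows from Table 26 as (element, relation, element) triples.
repository = [
    ("Client.cc",          "contained_by", "Folder(Client)"),
    ("CreateProject.cc",   "contained_by", "Folder(Server)"),
    ("F1: InitialProject", "contained_by", "CreateProject.cc"),
    ("F2: CreateProject",  "contained_by", "CreateProject.cc"),
    ("F2: CreateProject",  "calls",        "F1: InitialProject"),
]

def related(element, relation):
    # All elements linked from `element` by `relation`.
    return [dst for src, rel, dst in repository
            if src == element and rel == relation]

def group_by_file():
    # Group functions under their containing .cc file (a basis for components).
    groups = {}
    for src, rel, dst in repository:
        if rel == "contained_by" and dst.endswith(".cc"):
            groups.setdefault(dst, []).append(src)
    return groups

print(related("F2: CreateProject", "calls"))  # ['F1: InitialProject']
print(group_by_file())
```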
6.1.4.3 Architecture Representation
In this section, we drew all functions and mapped the relations using UML tools, based
on the gathered data in repository Table 26. After that, we grouped the specific functions to
identify the components based on the files and file relations in the repository, and then we
obtained the expected architecture view diagram as shown in Figure 18.
[Figure: UML component view of the sample example. The :Client component contains :ClientInitial, :MessageSender, and :MessageReceiver; the :Server component contains :ServerReceiver, :Control, :ProjectServices, :ResponseSender, :MemoryDataManager, and :MemoryData. The components are connected through sender/receiver ports carrying requests and responses, and through data source/destination ports.]
Figure 18: Architecture view of sample example
6.2 Root Cause Analysis Phase
There are 3 steps in this phase, as shown in Figure 19.
[Figure: the root cause analysis phase (II) takes the reversed architecture view (Chapter 6.1), ISO 9126, and new requirements as inputs, and proceeds through Scenario Formulation (Section 6.2.1), Root Cause Identification (Section 6.2.2), and Quality Attributes Prioritization (Section 6.2.3), producing root cause statements and a quality attributes prioritization list.]
Figure 19: Root Cause Analysis Process
6.2.1 Scenario Formulation
Scenarios (as defined in Table 19) are formulated in three steps, which are problem
elicitation, scenario formation, and confirmation, as shown in Figure 20.
[Figure: the scenario formulation process takes the reversed architecture view (Section 6.1) as input and proceeds through Problem Elicitation (Section 6.2.1.1), Scenario Formation (Section 6.2.1.2), and Confirmation (Section 6.2.1.3), producing scenarios and new requirements; unconfirmed scenarios loop back for revision.]
Figure 20: Process of Scenario Formulation
6.2.1.1 Problem Elicitation
Before eliciting the problem, it is important to understand the background of the system:
1. Understand the application domain
The application domain defines the type of the project, which helps the analyst to
get a general view of the system.
2. Understand the surrounding environment
This recommendation gives the analyst a chance to see the position of the system
in its environment, and helps the analyst to identify the events and conditions related to the
interaction with other systems within the environment.
3. Understand the roles of the stakeholders in the system
Different roles of stakeholders reflect different views of the system, which helps the
analyst to get the full picture of the system.
Table 27 shows the background information of the system mentioned in Figure 18.
Table 27: Background information of the example system
Type | Content
Application domain | Client-Server software system used by 20 people
Surroundings | 10GB server application, commonly 6 to 18 connected clients
Stakeholders | one project manager, 5 software developers, and 5 software testers
During the problem elicitation, two things are essential:
1. Keyword definitions
In order to get a clear understanding of the problem, it is very important to establish
keyword definitions with the stakeholders. A keyword is an ambiguous word that makes
the problem unclear [72]. In practice, keywords are mostly used to describe system
quality, such as peak-load, slow reflection, low response, etc. Quantify the keywords if
possible; otherwise, clearly define the scope of the keywords that are hard to quantify.
Table 28 shows example keyword definitions.
Table 28: Example keyword definition
Keyword | Definition
Slow reply time | Server application takes more than 1 minute to reply to the client.
High load | Server is connected to more than 15 clients at the same time.
Long running time | More than a full week (7 days * 24 hours)
2. Record problem comments
While eliciting the problem from the stakeholders, they also have comments on why
the problem happens from their point of view. These problem comments are a valuable
reference for analyzing the problem in depth.
Three methods are recommended to elicit the problem: interview [23], question form,
and observation [23]. Table 29 shows example problem comments.
Table 29: Problem comments example
NO. | Criteria | Description
1 | Problem Statement | Server application data lost. After a long running time, the server application loses clients' data.
  | When it happens | After a full week of running.
  | How it happens | Server application running.
  | Potential cause |
2 | Problem Statement | Server crashes when receiving unexpected requests from clients.
  | When it happens | Clients send login requests to the server application.
  | How it happens | Server receives unexpected requests.
  | Potential cause | No invalid request verification
6.2.1.2 Scenario Formation
The process of scenario formation is shown in Table 30. Moreover, there are three
recommended techniques to help create scenarios: natural language [21], i* [22], and UML [23].
Table 30: Example Scenarios
Scenario | Description
Scenario A | Server application loses clients' data after running more than one full week (7 days * 24 hours).
Scenario B | Server application crashes if the server receives a "#/*/#" or "(*)" string within a client request.
6.2.1.3 Confirmation
This step confirms that the scenarios are valid and represent the stakeholders' expectations.
Also, when eliciting problem information from stakeholders, they not only provide the events
and conditions, but sometimes also give functional requirements for the target system. In
practice, it is recommended to separate these new requirements from the problem statements,
as such functional requirements contribute little to improving the quality of the system.
6.2.2 Root Cause Identification
As Figure 21 shows, there are 5 steps in this phase, which aims to identify root causes
from the formulated scenarios and prioritize them.
[Figure: the root cause identification process takes the reversed architecture view (Section 6.1), the scenarios, and ISO 9126 as inputs, and proceeds through Identify Functional Unit (Section 6.2.2.1), Identify FRAM Relationship (Section 6.2.2.2), Draw FRAM Diagram (Section 6.2.2.3), Quality Attributes Mapping (Section 6.2.2.4, output: quality mapping table), and Diagram Analysis (Section 6.2.2.5, output: root cause statements).]
Figure 21: Process of Root Cause Analysis
6.2.2.1 Identify Functional Unit
Functional units are categorized into two types: foreground and background functional
units, and this step aims to identify both of them. Examples for this step are shown in
Table 86 and Table 87:
1. Identify the foreground functional unit: point out where the problem occurs.
2. Identify the background functional units: point out which functional units are
related to the foreground unit.
6.2.2.2 Identify FRAM Relations
This step identifies the five FRAM relations for each functional unit:
1. Identify FRAM relationships for the foreground functional unit;
2. For each identified relationship, find the supporting background functional units. If one
functional unit's output can support the input, precondition, resources, or control of
a foreground functional unit, it is identified as a background functional unit of
that foreground functional unit.
3. Identify FRAM relationships for these background functional units.
Examples for Scenario A and Scenario B are shown in Table 86 and Table 87
respectively.
6.2.2.3 Draw FRAM Diagram
Based on the identified functional units and their relationships, we can draw the FRAM
diagram in this step. All types of relations are shown in Figure 22.
[Figure: the four FRAM relation types: A's output is an input of B (I); A's output is a precondition of B (P); A's output is a resource of B (R); A's output is a controller of B (C).]
Figure 22: FRAM relationships
6.2.2.4 Quality Attributes Mapping
All scenarios can be mapped to different quality attributes based on ISO 9126.
The process is as follows:
1. Map each scenario to sub-quality attributes according to ISO 9126.
2. Categorize the sub-quality attributes into quality attributes.
Table 31 shows the example quality attributes mapping:
Table 31: Example Quality attributes mapping
Scenario Sub quality attribute Quality attribute
Scenario A Stability Maintainability
Scenario B Fault Tolerance Reliability
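The mapping step is essentially a two-level lookup; the sketch below encodes the two example scenarios and an (assumed, partial) ISO 9126 grouping of sub-attributes into attributes:

```python
# Sub-quality attribute -> quality attribute, per ISO 9126 (partial).
ISO9126_CATEGORY = {
    "Stability": "Maintainability",
    "Fault Tolerance": "Reliability",
}

# Scenario -> sub-quality attribute, as judged by the analyst.
scenario_sub_attribute = {
    "Scenario A": "Stability",
    "Scenario B": "Fault Tolerance",
}

def quality_mapping():
    # Build (scenario, sub attribute, attribute) rows as in Table 31.
    return [(s, sub, ISO9126_CATEGORY[sub])
            for s, sub in scenario_sub_attribute.items()]

for row in quality_mapping():
    print(row)
```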
6.2.2.5 Diagram Analysis
The intention of this step is to figure out the root cause of each scenario by analyzing the
related FRAM diagram. The process is as follows:
1. Select related quality metric for each scenario.
These metrics will be used to measure each relationship between functional units.
Table 32 shows the example of selected metrics for scenarios mentioned above.
Table 32: Example of selected metrics
Scenario Metrics Value
Scenario A Package completeness 1: complete packages
-1: incomplete package
Scenario B Requests completeness 1: processed all coming requests
-1: not processed all coming requests
2. Weight and prioritize the relationships in the FRAM diagram.
Run the legacy system to measure each FRAM relationship between functional
units using the selected metrics.
[Figure: weighted FRAM diagram for scenario A connecting the functional units SDB, SDP, SSS, and SDD through input/output relations; one relationship is weighted +1 and three are weighted -1.]
Figure 23: Example of weight diagram
For scenario A, according to the problem statement, we knew that the output of the Server
Data Processor (SDP) contained incomplete packages. Thus, we weighted the relationship
containing this output as "-1". We then ran the legacy system to see how that
affects other functional units; the weighted diagram is shown in Figure 23, and Table
33 shows the weighted functional units.
Table 33: Weighted functional unit for scenario A
Function unit Value Priority
Server Data Processor -2 1
Server Side Sender -1 2
Server Data Dispatcher -1 3
Server Database 0 4
3. Identify causal factors.
Causal factors can be identified during the measurement of each functional unit. In
addition, the recorded problem comments can also help to identify causal factors.
For scenario A, following the priority order, we analyzed the FRAM relationships
for each functional unit and found the causal factors shown in Table 34.
Table 34: Causal factors for scenario A
Function Units | Causal Factors
Server Data Processor | 1. Protocol Error
Server Side Sender | 2. Failure of sending request
Server Data Dispatcher | 3. Transaction id Exception
Server Database | 4. Request length over maximum number
Similarly, we analyzed the causal factors for Scenario B in Table 35.
Table 35: Causal factors from scenario B
Function Units | Causal Factors
Client Side Sender | 1. Protocol error
Server Side Receiver |
User Request Form |
4. Generate root cause statements.
Different causal factors can be synthesized into root cause statements. One root cause
statement can map to multiple causal factors, and vice versa.
In Table 36, we generated the root causes according to Table 34 and Table 35.
Table 36: Example of root causes
Root cause statement | Causal factors
Unreliable transmission protocol in the system | 1. Protocol error; 2. Failure of sending request; 3. Transaction id exception
Defects in database design | 4. Request length over maximum number
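The thesis does not give an explicit formula for the functional-unit values in Table 33. One plausible reading, sketched below with hypothetical relationship weights, is that each unit's value is the sum of the weights of its outgoing (output-carrying) relationships, and that the most negative units are analyzed first; under these assumptions the sketch reproduces the ordering of Table 33:

```python
# Hypothetical weighted FRAM relationships: (source unit, target unit, weight).
edges = [
    ("SDP", "SSS", -1),  # SDP's incomplete packages reach the Server Side Sender
    ("SDP", "SDD", -1),  # ...and the Server Data Dispatcher
    ("SSS", "SDB", -1),  # the flaw propagates onward
    ("SDD", "SDB", -1),
    ("SDB", "SDP", 0),   # the database's own output measured as intact
]

def unit_scores(edges):
    # A unit's value is the sum of the weights of its outgoing relationships.
    scores = {}
    for src, dst, weight in edges:
        scores[src] = scores.get(src, 0) + weight
        scores.setdefault(dst, 0)
    return scores

scores = unit_scores(edges)
priority = sorted(scores, key=scores.get)  # most negative unit first
print(scores)    # {'SDP': -2, 'SSS': -1, 'SDD': -1, 'SDB': 0}
print(priority)  # ['SDP', 'SSS', 'SDD', 'SDB'] -- the order of Table 33
```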
6.2.2.6 Quality Attributes Prioritization
The formulated scenarios and the quality mapping table will be used as input to prioritize the
quality attributes. The process is shown in Figure 24.
[Figure: the quality attributes prioritization step (Section 6.2.3) takes the scenarios and the quality mapping table as inputs and outputs the quality attributes prioritization list.]
Figure 24: Process of Quality Attributes Prioritization
6.2.3 Prioritization of Quality Attributes
This step calculates the scenario coverage for each quality attribute. In other words, the
coverage is the percentage of scenarios covered by each quality attribute.
For example, Table 37 shows the mapping between root cause statements and scenarios.
Table 37: Mapping scenarios with root causes
Root cause statement | Causal factors | Scenario
Unreliable transmission protocol in the system | 1. Protocol error | Scenario A
 | 2. Failure of sending request | Scenario A
 | 3. Transaction id exception | Scenario A
Defects in database design | 4. Request length over maximum number | Scenario B
Table 38 shows the calculation of scenario coverage in order to prioritize quality
attributes.
Table 38: Calculation of scenario coverage
Quality attribute | Sub quality attribute | Scenario | Number of covered scenarios | Coverage | Priority
Maintainability | Stability | Scenario A | 1 | 50% | 1
Reliability | Fault Tolerance | Scenario B | 1 | 50% | 1
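The coverage computation behind Table 38 can be written down directly; a sketch, assuming coverage is the fraction of all scenarios covered and that equal coverage yields equal priority:

```python
# Quality attribute -> scenarios it covers (from the quality mapping table).
covered = {
    "Maintainability": ["Scenario A"],
    "Reliability": ["Scenario B"],
}
total = len({s for scenarios in covered.values() for s in scenarios})

def prioritize(covered, total):
    # Coverage = covered scenarios / all scenarios; rank by descending coverage.
    coverage = {qa: len(s) / total for qa, s in covered.items()}
    ranking = sorted(set(coverage.values()), reverse=True)
    return {qa: (cov, ranking.index(cov) + 1) for qa, cov in coverage.items()}

print(prioritize(covered, total))
# {'Maintainability': (0.5, 1), 'Reliability': (0.5, 1)} -- as in Table 38
```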
6.3 Architecture Selection Phase
Generally, this chapter, illustrated in Figure 25, describes two sections for architecture
solution selection. Firstly, we design architecture candidates following the approach created
by Hofmeister et al [16], taking into consideration the root-cause problems and the quality
requirements from stakeholders; then we select the most suitable architecture design for
the legacy system among all candidates by using the processes defined by Mikael et al [17].
[Figure: the architecture solution selection phase (III) takes the root-cause statements and the prioritized quality attributes (PQA) as inputs, and proceeds through Identify Architecture Candidates (Section 6.3.1) and Architecture Solution Selection (Section 6.3.2: data collection (FQA, FAS), quality attributes prioritization, architecture selection), producing the selected architecture solution.]
Figure 25: Process of Architecture Selection Phase
6.3.1 Identify Software Architecture Solution Candidates
1. Generate issue cards and provide solutions for each root-cause problem by applying
the processes created by Hofmeister et al [16].
While providing solutions for each root-cause problem, some customers and experienced
experts should be involved. The reason is that experienced experts can suggest
professional solutions for a problem, and customers can give timely feedback on the
different solutions.
2. Based on all issue cards from the previous step, design the four views defined by
Hofmeister et al [16] step by step, which are the concept view, layer view, module view, and
code view, from top to bottom. The detailed processes to design the four views are illustrated
in the "Applied Software Architecture" book published by Hofmeister et al [16].
6.3.2 Architecture Solution Selection
1. Collect architecture candidate evaluation data from customers by applying the Analytic
Hierarchy Process (AHP) [19]. The detailed processes are described in Mikael et al [17].
The data can be collected through both questionnaires and seminar meetings. If the
evaluators have been involved in the design of the architecture candidates, then a questionnaire
is better than a meeting; otherwise, a seminar meeting is better than a questionnaire. The
reason is that a face to face meeting makes it convenient to answer the evaluators' questions
and clear up confusion while they evaluate the architecture candidates.
2. Prioritize the quality attributes based on the mapping between quality attributes and
root-cause problems.
For example, assume there are 12 root-cause problems in a legacy system, and three
quality attributes, Efficiency, Reliability, and Usability, are reflected based on the ISO 9126
standard [27]. If efficiency corresponds to 6 root-cause problems, reliability is related
to 3 root-cause problems, and usability is also connected with 3 root-cause problems, then
the prioritization list is efficiency, reliability, and usability with the values 0.5, 0.25, and 0.25,
as shown in Table 39.
Table 39: Prioritized Quality Attributes example (PQA)
Quality Attribute Priority
Efficiency 0.5
Reliability 0.25
Usability 0.25
3. Refine the quality attribute prioritization. Firstly, we send the quality attributes list to all
participants who are familiar with the system. Secondly, we collect the feedback data. Thirdly,
we calculate the median value of all collected data for each quality attribute. The reason is
that the median value filters out the most positive and negative data, intending to ensure
reasonable data. At last, the average values of these two quality prioritization lists are
calculated as the final values.
4. Calculate the decision table values for all architecture candidates based on the collected
data, strictly following the steps presented in Mikael et al [17]. The decision table data shows
how well a specific architecture design is suited to resolve the problems of a legacy system,
and the architecture design with the highest value is the best.
5. Select the most suitable architecture design according to the decision table.
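The arithmetic of step 2 and the median refinement of step 3 can be sketched as follows; the counts are those of the example above, while the participant scores fed to the median are hypothetical:

```python
import statistics

# Number of root-cause problems mapped to each quality attribute (example above).
root_cause_counts = {"Efficiency": 6, "Reliability": 3, "Usability": 3}

def prioritized_quality_attributes(counts):
    # An attribute's priority is its share of all root-cause problems.
    total = sum(counts.values())
    return {qa: n / total for qa, n in counts.items()}

def refine(participant_values):
    # Step 3: the median over participants filters out extreme scores.
    return statistics.median(participant_values)

print(prioritized_quality_attributes(root_cause_counts))
# {'Efficiency': 0.5, 'Reliability': 0.25, 'Usability': 0.25} -- Table 39

print(refine([0.4, 0.5, 0.9]))  # 0.5
```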
6.3.3 Sample Example
In this section, we have conducted a small example to simulate a real situation of
architecture candidate selection in order to ensure the correctness of the defined processes.
6.3.3.1 Identify architecture candidates
Based on the root cause statements and considering the complexity of the calculations,
we designed two architecture candidates, namely AS1 and AS2.
6.3.3.2 Architecture selection
We followed the three steps defined in the framework: data collection, quality attributes
prioritization, and architecture selection.
6.3.3.2.1 Data collection
When we finished the architecture candidates, we sent out the data collection tables,
together with the AS1 and AS2 documentation as well as AHP method instructions, to three
stakeholders: two developers and one architect. All the participants are responsible for the
maintenance of the current system.
In order to verify the correctness of the architecture selection processes, we assumed that
this small example contains two quality attributes, Efficiency and Maintainability, namely
QA1 and QA2 respectively. QA1 is related to 2 root cause statements, and QA2 is connected
with 3 root cause statements.
Based on the collected data, we first did some calculations following the AHP method to
get the FQA and FAS tables from all participants, and then the median values were picked as
the final values of the FQA and FAS tables, as shown in Table 40.
Table 40: FQA and FAS of sample example
(FQA) AS1 AS2 (FAS) AS1 AS2
QA1 0.6 0.4 QA1 0.5 0.6
QA2 0.3 0.7 QA2 0.5 0.4
6.3.3.2.2 Prioritized Quality Attributes
We got the prioritized quality attributes shown in Table 41.
Table 41: PQA of sample example
Quality attribute Priority
QA1 0.4
QA2 0.6
6.3.3.2.3 Architecture selection
In this section, we strictly followed the processes defined by Mikael et al in [17].
According to Table 40, we transformed the FQA table into FQA', which is shown in Table 42
and Table 43.
Table 42: FQA' of sample example
(FQA' based on row one): QA1 = (AS1: 0.6, AS2: 0.4); QA2 = (AS1: 0.6, AS2: 0.27)
(FQA' based on row two): QA1 = (AS1: 0.3, AS2: 1.05); QA2 = (AS1: 0.3, AS2: 0.7)
Table 43: Normalized FQA' of sample example
(Normalized FQA' based on row one): QA1 = (AS1: 0.6, AS2: 0.4); QA2 = (AS1: 0.69, AS2: 0.31)
(Normalized FQA' based on row two): QA1 = (AS1: 0.22, AS2: 0.78); QA2 = (AS1: 0.3, AS2: 0.7)
Based on the normalized FQA’ tables and 2 times FQA table, we calculated the average
values, then we got FQAr table shown in Table 44.
Table 44: FQAr of sample example
(FQAr) AS1 AS2
QA1 0.5 0.5
QA2 0.4 0.6
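The arithmetic behind Tables 43 and 44 can be checked mechanically, assuming (as the text suggests) that each FQA' row is normalized to sum to 1 and that FQAr averages the two normalized tables with the original FQA table counted twice:

```python
# Rows are quality attributes; columns are [AS1, AS2].
FQA      = {"QA1": [0.6, 0.4],  "QA2": [0.3, 0.7]}
FQA_row1 = {"QA1": [0.6, 0.4],  "QA2": [0.6, 0.27]}  # FQA' based on row one
FQA_row2 = {"QA1": [0.3, 1.05], "QA2": [0.3, 0.7]}   # FQA' based on row two

def normalize(table):
    # Scale each row so its values sum to 1 (Table 43).
    return {qa: [v / sum(row) for v in row] for qa, row in table.items()}

def fqar(fqa, t1, t2):
    # Average the normalized FQA' tables with FQA weighted twice (Table 44).
    n1, n2 = normalize(t1), normalize(t2)
    return {qa: [round((n1[qa][i] + n2[qa][i] + 2 * fqa[qa][i]) / 4, 1)
                 for i in range(2)]
            for qa in fqa}

print(fqar(FQA, FQA_row1, FQA_row2))
# {'QA1': [0.5, 0.5], 'QA2': [0.4, 0.6]} -- matches Table 44
```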
The FVC table, shown in Table 45, can be generated after finishing the variance
calculation.
Table 45: FVC of sample example
(FVC) AS1 AS2
QA1 0.036 0.036
QA2 0.038 0.038
At last, the result of AS2 is higher than that of AS1, so architecture solution 2 is the most
suitable architecture solution.
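The thesis does not spell out the final comparison formula; one common way to combine the prioritized quality attributes (Table 41) with the FQAr table (Table 44), sketched here under that assumption, is a weighted sum per candidate, which indeed ranks AS2 above AS1:

```python
PQA = {"QA1": 0.4, "QA2": 0.6}                  # Table 41
FQAr = {"QA1": [0.5, 0.5], "QA2": [0.4, 0.6]}   # Table 44, columns [AS1, AS2]

def candidate_scores(pqa, fqar):
    # Weighted sum over quality attributes for each architecture candidate.
    return [sum(pqa[qa] * fqar[qa][i] for qa in pqa) for i in range(2)]

as1, as2 = candidate_scores(PQA, FQAr)
print(round(as1, 2), round(as2, 2))  # 0.44 0.56 -> AS2 is selected
```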
Although we have assumed some input data for the architecture selection chapter, our
intention is to verify that the whole process is correct.
6.4 Refactoring Phase
The selected architecture is implemented in this phase. The main tasks of this phase are
developing and testing the new system based on the selected architecture. The development
and testing methodologies are selected according to the scale of the system and the planned
time duration for refactoring.
7 APPLY ARAR FRAMEWORK
This section presents the design, operation, results, and analysis of the case study
performed in order to test the ARAR framework in a real industrial project at Ericsson.
In this case study, all stakeholders described in Table 73 come from Ericsson, including
a product line manager, a system tester, a test & verification expert, a verification engineer,
and a software developer. We started the case study by applying the ARAR framework phase
by phase. From 2012-03-09 to 2012-03-19, we applied the first phase of the framework, the
architecture reverse phase, and at the end of this phase a face to face meeting was held to
confirm the output of the first phase of the framework. When the confirmation meeting
passed, the second phase of the framework, named root cause analysis, started on
2012-03-19. During this phase, we first conducted several face to face meetings with all
stakeholders to elicit the quality problems. Secondly, we formulated scenarios based on the
quality problems gathered from the stakeholders. Thirdly, we conducted face to face
meetings to revise the scenarios, and then applied the FRAM method to identify the root
cause problems connected with the quality problems. At the end of this phase, another face
to face meeting was carried out to confirm the correctness of the results of this phase.
Following the second phase of the ARAR framework, the third phase, architecture candidate
selection, started on 2012-04-27. We first designed architecture candidates based on the
outputs from the previous phase, and then we introduced the detailed architecture designs and
sent the architecture design documents to the stakeholders so that they could evaluate the
architecture candidates against their own requirements on quality attributes. After that, based
on the evaluation results of the architecture candidates, we selected the best architecture
design from all candidates on 2012-05-17. When the architecture design was selected, we
started to implement this architecture design function by function, and we had daily meetings
to manage our programming progress. Finally, on 2012-06-08 we accomplished the
implementation work and presented the target system to all stakeholders. Figure 26 shows all
processes and the timeline of the case study that we conducted on the NetStatusserver system
of Ericsson.
The section is organized as follows. In Section 7.1 the design of the case study is
presented. Data analysis is shown in Section 7.2. Section 7.3 shows the threats to validity in
this case study.
[Figure: timeline of the case study activities performed by stakeholders and researchers: reverse phase from 2012-03-09 to 2012-03-19 (set up environment, define viewpoint, gather data from source code, build knowledge repository, architecture visualization, architecture evaluation, face to face meeting, revision of architecture); root cause analysis phase from 2012-03-19 to 2012-04-27 (problem elicitation, scenario formulation, face to face meeting, FRAM analysis, root cause identification, analysis of root causes and related quality attributes, scenario revision, face to face meeting); architecture selection phase from 2012-04-27 to 2012-05-17 (design of alternative architecture solutions, distribution of the architecture-solution selection form, architecture evaluation, architecture selection, face to face meeting); refactoring phase ending 2012-06-08 (refactoring based on the selected architecture, daily meetings, face to face meeting, finish refactoring). The recorded durations of the four phases are 128h, 318h, 212h, and 155h respectively.]
Figure 26: Overview of Case study
7.1 Case Study Design
7.1.1 Case Definition
This industrial case study is conducted in the AUTO-TEST environment, an internal
project of Ericsson, where the ARAR framework is selected to improve the quality of
internal software.
7.1.1.1 Case Context
The Ericsson AUTO-TEST environment is a testing environment in which testers develop
and perform automated test cases for a large-scale software business solution. The structure of
the AUTO-TEST environment is shown in Figure 27. The Netstatus-server application is
allocated on SERVER 1, whereas the Netstatus client applications are allocated on all the
servers. All the servers are physically connected with each other.
Testers send their test cases to the AUTO-TEST SERVER, which consists of three
mirror servers. A test case consists of a batch of requests; these requests are sent from the
Netstatus client to the Netstatus-server application. When handling the requests from different
test cases, many resources are required. These resources can either be shared by many test
cases, or be used exclusively by only one test case at a time. If a resource is currently in
exclusive use, all other test cases that require the resource must wait until the running test
case releases it. Otherwise, there will be a conflict in using the resource, and thus the result
of the test case is not reliable. In order to avoid resource conflicts, the Netstatus-server
system was created to schedule and control test case execution based on the required
resources. The work flow of the NetStatusserver system is shown in Figure 28.
[Figure: the Ericsson AUTO-TEST environment. Testers send test cases to the AUTOTEST SERVER; SERVER 1 hosts the Netstatus-Server, and SERVER 1, SERVER 2, and SERVER 3 each host a Netstatus-Client. Requests flow from the clients to the server, and test results are returned to the testers.]
Figure 27: Ericsson AUTO_TEST Environment
[Figure: work flow between a client test case and the NetstatusServer. The client's test case uses Resource X and asks the server to check the status of Resource X; the server searches the server database and returns the answer OK or NOK. The client evaluates the answer: on OK it executes the test case (and the server updates the server database), then proceeds to the next test case; on NOK it waits 90 seconds and asks again.]
Figure 28: NetStatusserver system work flow
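The workflow in Figure 28 amounts to a poll-with-wait loop. The simplified Python sketch below (hypothetical names; an in-memory dictionary stands in for the server database) illustrates the OK/NOK protocol:

```python
import time

# In-memory stand-in for the server database: resource -> holding test case (or None).
resource_db = {"X": None}

def check_status(resource, test_case):
    # Server side: search the database, answer OK/NOK, and update it when granting.
    if resource_db[resource] in (None, test_case):
        resource_db[resource] = test_case  # grant exclusive use
        return "OK"
    return "NOK"

def run_test_case(test_case, resource, wait_seconds=90):
    # Client side: poll until the resource is free, then execute the test case.
    while check_status(resource, test_case) == "NOK":
        time.sleep(wait_seconds)           # Figure 28: wait 90 seconds, then retry
    result = f"{test_case} executed using {resource}"
    resource_db[resource] = None           # release the resource for the next test case
    return result

print(run_test_case("TC1", "X"))  # TC1 executed using X
```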
7.1.1.2 Unit of Analysis
In this case study, the unit of analysis is the process of applying the ARAR framework to
improve the efficiency of the legacy NetStatusserver system.
7.1.1.3 Metrics of Unit Definition
The unit of analysis is the subject that needs to be analyzed inside the case to represent the
goal of the research question [28]. In this case study, the metrics are the time duration for
applying the proposed framework and the result of the face to face meeting in each phase.
The details of both metrics are given in Section 7.1.3.1.
7.1.2 Preparation
Preparation is the pre-step before data collection; it provides the knowledge, workflow,
and environment needed to start collecting case study data.
7.1.2.1 Research Question
We aim to answer RQ2 through the case study:
RQ2. Can the proposed framework be applied in industry environment?
7.1.2.2 Data Procedure
Before conducting the case study, we set up the schedule for the different phases of
applying the ARAR framework.
Table 46: Expected schedule to conduct the case study
Phase | Planned Time | Activities | Artifacts
Reversing Phase | 2 weeks | Gathering system data | System documentation and code
Root Cause Analysis Phase | 6 weeks | Scenario elicitation meeting; quality attributes mapping | Scenarios; quality attributes priority list
Selection of Architecture Phase | 2 weeks | Architecture selection | Selected architecture
Refactoring Phase | 4 weeks | Daily project meeting | New system
Total | 14 weeks | |
7.1.2.3 Case Study Environment
Before starting data collection, the hardware and software environment needs to be set up.
The details are shown in Table 47.
Table 47: Case Study Environment
Name Description
Participants Five participants in Table 73
Operating System GNU/Linux x86_64
Location Office Room in Ericsson (closed)
Type of Execution Server Linux Server
Network Internal Ericsson Testing Network
7.1.3 Data Collection
In this section, we first defined what data to collect while conducting the case study in
Section 7.1.3.1, and then we collected all required data while applying the ARAR framework
phase by phase in Section 7.1.3.2.
7.1.3.1 Selecting Source Data
Two perspectives are taken into consideration:
1. Time duration to get all outputs after applying the framework strictly step by
step with reasonable people involvement.
The ARAR framework has four phases: the Reverse phase, the Root Cause
Analysis phase, the Architecture Selection phase, and the Refactoring phase,
and each phase depends on the output of the previous one, from the first phase
to the last. For instance, the output of the Reverse phase is the input of the
Root Cause Analysis phase, and the Architecture Selection phase cannot start
without the output of the Root Cause Analysis phase. The same relation exists
between the Architecture Selection phase and the Refactoring phase. If any
phase fails to produce its output, then the framework cannot be executed any
further. Therefore, we calculated the time duration of the four phases of the
ARAR framework separately. In each phase, the case study executor does not
stop counting time until all outputs of the phase are achieved.
2. Result of the face to face meeting.
At the end of each phase of the ARAR framework, a face to face meeting is
conducted with all stakeholders to confirm the correctness of the outputs.
During the meeting, the stakeholders first receive a brief introduction to the
processes used to get the outputs, and then they confirm whether they are
satisfied with the outputs or not from their own perspectives.
Thus, two kinds of source data are identified according to the above two perspectives:
1. Time duration to get all outputs while applying the ARAR framework,
calculated in working hours.
2. Result of each face to face meeting with stakeholders.
The time duration is the total working hours spent by all participants to get all outputs
while applying the ARAR framework in this case study. It is crucial for validating whether
the framework can be applied or not, because applying the framework in a specific project is
meaningless if it costs more time and effort than the stakeholders' maximum expectation.
A face-to-face meeting is conducted to evaluate the correctness of the outputs after each
phase of the ARAR framework. Two results are defined for the face-to-face meeting with
stakeholders: success and failure. Success means the stakeholders are satisfied with all
the gathered outputs after evaluating them from their own perspectives. Failure means the
stakeholders disagree with one or more outputs, or consider that their requirements are
not completely fulfilled.
In this case study, there are five participants from Ericsson, including a System
Tester, a Test & Verification expert, a Product-line Manager, a Verification Engineer, and a
Software Developer. Detailed information about all participants is shown in Appendix A
Table 10.
7.1.3.2 Operation and Data Collection
In this section, we collected all the data defined in Section 7.1.3.1 by using mixed data
collection techniques, such as interviews [48] and documentation [48], while applying the
whole framework step by step to the NetStatusserver system at Ericsson. As recommended
by [48], different sources of data collection should be used, since each of these approaches
has its own pros and cons and the sources are complementary to each other. In general, case
studies are considered more convincing and accurate when different sources are used.
Interview: At the end of each phase while applying the ARAR framework, we conducted a
face-to-face meeting with the stakeholders to confirm the correctness of the outputs.
During the meeting, we first briefly introduced the processes used to obtain the
outputs of the specific phase to all stakeholders, and then the stakeholders
confirmed whether or not the gathered outputs covered their requirements from
their own perspectives. This technique has a high level of user acceptance,
because it helps to better understand what the stakeholders want and makes
better acceptance more likely.
Documentation for time duration:
The verification engineer and software developer in APPENDIX A Table 73 are the
executors of the case study. The working hours were recorded in a document
every week; an example table for one week is shown in Table 48.
Table 48: Example of Time Report Table
Day        Task                                                       Time (hour)
Monday     Prepare for the presentation                               8
           Framework presentation meeting                             12
Tuesday    Plan for the first part of the case study                  8
           Potential root cause problems optimization discussion      10
Wednesday  Read the book for questionnaire design                     8
           Discuss the questions in the questionnaire form            6
Thursday   Discussion of the root cause analysis optimization method  4
           Redefine the root cause analysis                           16
Friday     Start the reverse part of the case study                   9
Total                                                                 81
7.1.3.2.1 Collect Data in Reverse Phase
We firstly defined the viewpoint as function, and then write AWK scripts to collect the
function related data manually through analyzing two source code files of NetStatusserver
system. Secondly we organized all collected data in the repository table. Thirdly we
recovered the architecture view of the legacy system by using the UML tools.
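The AWK scripts themselves are not reproduced in the thesis. As a rough, hypothetical sketch of this kind of extraction, rendered in Python (the actual scripts were AWK), one might collect function names from C-style sources where a function definition starts a line with a return type followed by the function name. The sample source and the pattern below are illustrative assumptions, not the real NetStatusserver files.

```python
import re

# Hypothetical sample of C-style source; the real NetStatusserver files and
# the real AWK extraction patterns are not published in the thesis.
source = """\
int send_request(int sock) { return 0; }
int receive_reply(int sock) { return 1; }
"""

# Assumption: a function definition starts a line as "<type> <name>(".
FUNC_DEF = re.compile(r"^[A-Za-z_]\w*\s+([A-Za-z_]\w*)\(", re.MULTILINE)

# Collect the defined function names, which would then be organized
# per function unit in the repository table.
functions = FUNC_DEF.findall(source)
print(functions)  # ['send_request', 'receive_reply']
```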
At the end of the reverse phase, we conducted a meeting with all the stakeholders to
confirm the architecture view of the legacy system. We collected all feedback about the
architecture view from the stakeholders and then revised it, as illustrated
in APPENDIX A Figure 36. Table 49 shows the collected data of the Reverse phase.
Table 49: Collected Data of Reverse Phase
Phase No.                 1
Phase Name                Reverse phase
Time Cost (Hour)          128
Participants              All participants in APPENDIX A Table 73
Input(s)                  Two source code files
Intermediate Artifact(s)  Repository Table
Output(s)                 Reverse architecture diagram in APPENDIX A Figure 36
Confirm Meeting Result    Pass
7.1.3.2.2 Collect Data in Root Cause Analysis Phase
7.1.3.2.2.1 Scenario Formulation
Problem Elicitation
In order to know the exact problems of the NetStatusserver system, we held a problem
elicitation meeting with the stakeholders. During the meeting, we elicited several
problem statements, shown in Table 50, and we also confirmed two
definitions:
Slow reply: if the server side takes more than 2 seconds to send out a reply,
a slow reply occurs.
Peak load: the peak load of the server application is a maximum of 12 requests per
second. This value is calculated from historical data from December 2011 to
June 2012.
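The two definitions can be stated operationally. The function names below are illustrative, not part of the system:

```python
# Operational form of the two confirmed definitions; the names here are
# illustrative, not identifiers from the NetStatusserver system.
SLOW_REPLY_THRESHOLD_S = 2.0   # a reply slower than 2 seconds is a slow reply
PEAK_LOAD_RPS = 12             # peak load: maximum 12 requests per second

def is_slow_reply(reply_time_s: float) -> bool:
    return reply_time_s > SLOW_REPLY_THRESHOLD_S

def is_peak_load(requests_per_second: float) -> bool:
    return requests_per_second >= PEAK_LOAD_RPS

print(is_slow_reply(2.5), is_peak_load(12))  # True True
```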
Table 50: Problems of the NetStatusserver System
Problem    Description
Problem 1  When the server sends a reply to clients, the reply is always slow in transmission.
Problem 2  Requests sent from the client side are lost during the server application's peak load time.
Problem 3  During test case execution, there is a long waiting interval for different resources.
Problem 4  Requests between clients and the server application go missing when the server application runs at peak load.
Problem 5  When the server application is restarted, requests and server status information are lost.
Problem 6  The server application crashes when receiving invalid requests.
Problem 7  There is a long waiting time for a test case status change from waiting to activation.
Formulation of Scenario
After eliciting the problems of the system, we formulated the scenarios and the
new requirements based on the elicited problems. Scenario 1 is
shown below; the other scenarios and new requirements are shown in APPENDIX
A Table 75 and Table 77.
Scenario 1
When: Any time, even when the server is off peak load
How: The client sends requests to the server
Statement: The reply of the server is slow (more than 2 seconds) at any time, even
when the server is off peak load, when the client sends requests to the server.
Confirm Meeting
In the confirm meeting, we presented the scenarios to the stakeholders to confirm that all
scenarios are at the functional level and that the quality attributes are mapped correctly.
7.1.3.2.2.2 Root Cause Analysis
All 7 scenarios are analyzed in this part. We illustrate the details of the analysis process
for Scenario 1 as an example; the analysis results of the other scenarios are shown in
APPENDIX A Table 80.
Identify Function Units
The first step is to identify the foreground function units. The foreground function
units describe the problem itself. For Scenario 1, the foreground function units are
shown in Table 51.
Table 51: Table of Foreground Function Units
Function unit                 Description
Send Request Client (SRC)     Sending a request from the Client to the Server
Receive Request Server (RRS)  Receiving a request on the Server side
Send Request Server (SRS)     Sending a request from the Server to the Client
Receive Request Client (RRC)  Receiving a request on the Client side
Request Handling (RH)         Processing a received request on the Server side
Moreover, we identified the background function units. The background function
units are identified through the FRAM relations of the foreground function units. For
example, only if the server application has been initiated (ISP) can it receive
requests (RRS). In other words, the output of ISP is a precondition of RRS.
Thus, ISP is a background function unit of RRS, and there is a line between ISP
and RRS. Table 52 shows all the related background function units for Scenario 1. The
FRAM relationships of the background function units are shown in APPENDIX A
Table 78.
Table 52: Table of Background Function Units
Function unit             Description
Initial Server APP (ISP)  Initiate the Server side application
Initial Client APP (ICP)  Initiate the Client application
User Sent Request (USR)   User sending test cases
7.1.3.2.2.3 Draw FRAM Diagram
We drew the FRAM diagram based on the analyzed function units and their relations,
as shown in Figure 29.
[FRAM diagram omitted from the text: the foreground function units SRC, SRS, RH, RRC,
and RRS and the background function units USR, ICP, and ISP are drawn as FRAM nodes
with the aspects I, O, C, R, and P, connected by relations weighted +4 or -4.]
Figure 29: FRAM Diagram for Scenario 1
7.1.3.2.2.4 Mapping Scenario to Quality Attributes
Based on ISO 9126, we mapped the scenarios to quality attributes; Table 53 shows the
scenarios and their related quality attributes. Based on the categories, we also calculated
the coverage of each quality attribute, shown in Table 54.
Table 53: Scenario Mapping List
Scenario    Sub-characteristic      Characteristic (quality attribute)
Scenario 1  Time Behavior           Efficiency
Scenario 2  Reliability Compliance  Reliability
Scenario 3  Time Behavior           Efficiency
Scenario 4  Reliability Compliance  Reliability
Scenario 5  Operability             Usability
Scenario 6  Fault Tolerance         Reliability
Scenario 7  Time Behavior           Efficiency
Table 54: Coverage for Each Quality Attribute
Quality attribute  Scenarios          Coverage
Efficiency         Scenarios 1, 3, 7  43%
Reliability        Scenarios 2, 4, 6  43%
Usability          Scenario 5         14%
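The coverage percentages follow directly from the scenario counts in Table 53: each value is the number of scenarios mapped to the attribute divided by the seven scenarios in total.

```python
# Coverage of each quality attribute = scenarios mapped to it / total
# number of scenarios. The mapping is taken from Table 53 (ISO 9126).
mapping = {
    "Efficiency": ["Scenario 1", "Scenario 3", "Scenario 7"],
    "Reliability": ["Scenario 2", "Scenario 4", "Scenario 6"],
    "Usability": ["Scenario 5"],
}
total = sum(len(scenarios) for scenarios in mapping.values())  # 7 in total
coverage = {attr: round(100 * len(s) / total) for attr, s in mapping.items()}
print(coverage)  # {'Efficiency': 43, 'Reliability': 43, 'Usability': 14}
```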
7.1.3.2.2.5 Diagram Analysis
Select Metric
Since performance is the main concern regarding the NetStatusserver system from the
stakeholders' perspective, we selected the performance variability of the output as the
metric for analyzing the system, as shown in Table 55.
Table 55: Metric of Performance Variability
Performance variability of output
Output  Accuracy    +1  Accuracy means the output is received completely in the
                        expected time.
        Inaccuracy  -1  Inaccuracy means the output is not received completely in
                        the expected time.
Weight FRAM Diagram
We ran the legacy system in order to measure the output of each function unit.
Furthermore, the function units were prioritized according to their weights. The result
is shown in Table 56.
Table 56: Weight for Each Function Unit
Function unit  Value  Priority
SRS            -1     1
RRS            -1     1
RRC            -1     1
SRC            -1     1
RH             1      2
USR            1      2
ICP            2      3
ISP            2      3
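The Priority column simply ranks the function units by ascending weight, so that the units with the most negative output variability are analyzed first. A small sketch of this prioritization, using the weights from Table 56:

```python
# Rank FRAM function units by their measured performance-variability weight:
# the lower (more negative) the weight, the higher the analysis priority.
# Weights taken from Table 56.
weights = {"SRS": -1, "RRS": -1, "RRC": -1, "SRC": -1,
           "RH": 1, "USR": 1, "ICP": 2, "ISP": 2}

# Distinct weight values in ascending order define the priority classes.
classes = sorted(set(weights.values()))
priority = {unit: classes.index(w) + 1 for unit, w in weights.items()}
print(priority["SRS"], priority["RH"], priority["ISP"])  # 1 2 3
```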
Analyze Causal Factors
Based on the priority list, we analyzed each function unit through its relations, for
example Receive Request Client (RRC) through its preconditions, by running the legacy
system. The result shows that the input of RRC can be the causal factor for
RRC's inaccurate output. We therefore analyzed the input of RRC and found a
causal factor: the length and maximum size of the UDP fragmentation are missing.
As a result, this problem is a causal factor for RRC. The analysis result for
Scenario 1 is shown in Table 57.
Table 57: Analysis Result for Scenario 1
Function Units              Causal Factors
SendRequestClient (SRC)     5. hostname error
                            6. socket open error
                            7. failure of sending a request
ReceiveRequestServer (RRS)  8. no receive status flag in the UDP protocol
SendRequestServer (SRS)     9. UDP fragmentation size limitation
ReceiveRequestClient (RRC)  10. request lost during transmission
                            11. fragmentation error (length, maximum number)
                            12. transaction id exception
General Root Cause Statements
In this part, the root causes are generated based on the root cause statements
analyzed from each scenario. The root causes and root cause statements for all
scenarios are shown in APPENDIX A Table 80.
Confirm Meeting
In this meeting, we presented the root cause statements to the stakeholders and
explained how we generated the priority list. As a result, they committed to the root
cause statements and the quality priority list.
Table 58 shows the collected data of the Root Cause Analysis phase.
Table 58: Collected Data of Root Cause Analysis Phase
Phase No.                 2
Phase Name                Root cause Analysis phase
Time Cost (Hour)          318
Participants              All participants in APPENDIX A Table 77
Input(s)                  Output of the Reverse phase in APPENDIX A Figure 36
Intermediate Artifact(s)  List of scenarios in APPENDIX A Table 75
Output(s)                 New requirements in APPENDIX A Table 76;
                          root cause statements in APPENDIX A Table 80;
                          quality attribute priority list in Table 54
Confirm Meeting Result    Pass
7.1.3.2.3 Collect Data in Architecture Solution Selection Phase
Step 1: we transformed each root cause description, as shown in Table 80, into issue
cards, shown in APPENDIX A Table 79, Table 81, Table 82, Table 83, Table 84 and
Table 85.
For each root-cause problem, we proposed solutions expected to fix the
existing problems. To ensure that we proposed correct and suitable solutions for a
specific problem, we invited three experts (an architect, an expert tester, and a line manager)
who are familiar with the legacy system at Ericsson to join the discussion of the
solutions for each problem. They were always helpful in proposing appropriate solutions for
a problem, or in providing valuable feedback to improve our solutions from their
professional viewpoints. For instance, one problem related to data loss in the
legacy system of Ericsson. In the beginning, the experts shared their experience of using the
legacy system with us to help us understand why and how the existing problems occur. In
addition, they suggested that we analyze the data transmission protocol and change the
current transmission protocol to TCP. Finally, they confirmed the solution for this issue card.
In conclusion, in this step people can easily provide solutions for a specific problem if
they do not consider whether the proposed solutions are suitable for the legacy system or
whether the stakeholders are satisfied with them. Therefore, our experience is that it is
crucial to involve people who are familiar with the legacy system in this step, and to
confirm the final solutions with them once an agreement is reached. Otherwise, there could
be many problems in later phases, and the project could even fail.
Step 2: by strictly applying the architecture design process created by Hofmeister [16]
top-down, step by step, based on all issue cards defined in the previous step, we
created two architecture candidates, shown in APPENDIX A Figure 37 and Figure 38.
When we designed the architecture candidates at Ericsson, we invited one expert at
Ericsson, who is very familiar with the architecture of the legacy system, to join our daily
meeting. We introduced and explained our new design to him every day and collected all
his comments to improve the architecture design. The purpose was to give him a better
understanding of the architecture candidate designs, because in the next step we needed to
collect architecture evaluation data from his viewpoint.
After applying this step at Ericsson, our experience is that even though the architecture
design process of Hofmeister [16] is clear, it is not easy to design a software architecture
without sufficient background knowledge and practical experience. It is therefore very
helpful to acquire some knowledge about software architecture design, especially
architecture design patterns, or to consult experienced architecture experts during
architecture design if these resources are available.
In addition, it saves time and leads to a more correct architecture candidate selection if
you invite as many as possible of the people who will evaluate the architecture candidates
to join the whole architecture design procedure. The better they understand the architecture
candidates, the better the evaluation result they can produce.
Step 3: we first generated questionnaires to collect the architecture candidate
evaluation data, following the method published by Mikael Svahnberg et al. [17].
Second, we sent all questionnaires to the target evaluators at Ericsson, who are familiar
with the legacy system architecture and have different roles, including a system architect, a
software developer, and three software testers.
Third, we collected all evaluation data from the evaluators.
In conclusion, we encountered several problems when applying this step at Ericsson. The
first problem is to select the right evaluators, because people without any architecture
knowledge can hardly understand a detailed design, no matter how much explanation is
given. Our suggestion is therefore to select evaluators with good knowledge of software
architecture design. The second problem is that people do not know what to evaluate,
especially when the architecture candidates are completely different or very complex
designs. Therefore, our feedback after applying this step is to define the comparative
criteria for architecture candidate evaluation clearly in the questionnaire.
Step 4: we performed the calculations following the processes defined by Mikael
Svahnberg et al. [17] on the result tables collected from the stakeholders; the final result is
shown in Table 59. Based on the calculated data, architecture candidate 1 was selected as
the better one to apply to the legacy system.
In this step, our experience is that there are many calculations and formulas to follow,
and it is easy to make mistakes when performing them manually. Our suggestion is
therefore to use a spreadsheet tool such as Microsoft Excel: all calculation formulas can be
set up once to ensure that the data is calculated correctly and efficiently, and then all that
remains is to enter the source data into the table, after which the complex calculation is
executed automatically.
Table 59: Result Table of Architecture Evaluation
                Value     Variance
Architecture 1  0.528137  0.025533
Architecture 2  0.471863  0.025533
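The aggregation behind Table 59 can be illustrated with a simplified sketch. This is not the full method of Svahnberg et al. [17]; it only shows how per-evaluator preference values, each normalized to sum to 1, yield a mean value and a variance per candidate. The evaluator scores below are invented for illustration, not the case study data.

```python
# Simplified, hypothetical aggregation of architecture evaluation data.
# Each evaluator assigns normalized preferences (summing to 1) to the
# candidates; these scores are illustrative, not the measured data.
evaluators = [
    {"Architecture 1": 0.55, "Architecture 2": 0.45},
    {"Architecture 1": 0.60, "Architecture 2": 0.40},
    {"Architecture 1": 0.45, "Architecture 2": 0.55},
]
candidates = ["Architecture 1", "Architecture 2"]
n = len(evaluators)

# Mean preference value and sample variance per candidate.
mean = {c: sum(e[c] for e in evaluators) / n for c in candidates}
var = {c: sum((e[c] - mean[c]) ** 2 for e in evaluators) / (n - 1)
       for c in candidates}

best = max(candidates, key=mean.get)
print(best, round(mean[best], 4))  # Architecture 1 0.5333
```

Note that with two candidates whose scores sum to 1 for every evaluator, the two variances come out identical, which is consistent with the equal Variance column in Table 59.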
At the end of this phase, the selected architecture was confirmed in a meeting
with all stakeholders. The data collected in this phase is shown in Table 60.
Table 60: Collected Data of Architecture Selection Phase
Phase No.                 3
Phase Name                Architecture Selection phase
Time Cost (Hour)          212
Participants              All participants in APPENDIX A Table 73
Input(s)                  Output of the Root cause Analysis phase:
                          root cause statements in APPENDIX A Table 80;
                          quality attribute priority list in Table 54;
                          new requirements in APPENDIX A Table 77
Intermediate Artifact(s)  Architecture candidates shown in Figure 37 and Figure 38
Output(s)                 Selected architecture design in Figure 37
Confirm Meeting Result    Pass
7.1.3.2.4 Collect Data in Refactoring Phase
In the refactoring phase, all work relates to software implementation and testing.
First, we defined all interfaces for the NetStatusserver system. Second, we implemented
the defined interfaces one by one and, at the same time, finished all unit tests. Third, we
conducted and passed function testing and system testing. Finally, we released the new
NetStatusserver system.
At the end of the refactoring phase, we also conducted a meeting with all stakeholders
to demonstrate the new NetStatusserver system, so that the stakeholders could
confirm whether the new system meets their requirements. The data collected in this
phase is shown in Table 61.
Table 61: Collected Data of Refactoring Phase
Phase No.                 4
Phase Name                Refactoring phase
Time Cost (Hour)          155
Participants              System Tester, Software Developer, Verification Engineer
Input(s)                  Output of the Architecture Selection phase:
                          selected architecture design in APPENDIX A Figure 37;
                          old NetStatusserver system
Intermediate Artifact(s)  None
Output(s)                 New NetStatusserver system
Confirm Meeting Result    Pass
7.1.4 Create Case Study Database
The case study database contains several kinds of data: the output artifacts produced
by applying the ARAR framework, the people involved in the case study, and the time
duration of applying the framework. The data collected in this case study is shown in Table 62.
Table 62: Collected Data in Case Study
Phase                   Time Cost (Hour)  Participants (number)  Confirm Meeting Result
Reverse Phase           128               6                      Pass
Root cause Analysis     318               6                      Pass
Architecture selection  212               6                      Pass
Refactoring             155               3                      Pass
Time SUM                813
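The Time SUM row in Table 62 is simply the sum of the four phase costs; a quick check:

```python
# Total effort in Table 62 is the sum of the four phase time costs (hours).
phase_hours = {"Reverse Phase": 128, "Root cause Analysis": 318,
               "Architecture selection": 212, "Refactoring": 155}
total = sum(phase_hours.values())
print(total)  # 813
```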
7.2 Data analysis
7.2.1 Select Analysis Technique
In this case study, the Time-series Analysis technique [35] was selected for data
analysis. The reasons are as follows:
First of all, this is a single-case study. According to the case study book [48], the
Cross-case Synthesis technique applies specifically to the analysis of multiple cases,
whereas the Time-series Analysis, Pattern Matching, Explanation Building, and Logic
Model techniques are available for either single or multiple case studies. The Cross-case
Synthesis technique is therefore rejected for this case study.
Secondly, the goal of the case study is neither a description of an object nor an
explanation of a situation, and no comparisons are made between empirical patterns and
theoretical predictions. However, the Pattern Matching technique, one of the most
desirable techniques, uses a pattern-matching logic that compares an empirically based
pattern with a predicted one or with several alternative predictions [39]. The Explanation
Building technique is actually a special type of pattern matching, but it is much more
difficult; the goal of explanation building is to analyze the case study data by building an
explanation about the case [48]. Besides, the Logic Model technique is conceptually
considered another form of pattern matching; it is especially useful for case study
evaluations [48], including matching empirically observed events to theoretically predicted
events [40]. Thus, these techniques are rejected for this case study.
Thirdly, the only purpose of the case study is to answer the 2nd research question by
applying the ARAR framework step by step. It focuses on the sequential processes of the
framework over time, and particularly on the causal relationships among the different steps
of the ARAR framework. The Time-series Analysis technique conducts a time-series
analysis directly analogous to the time-series analyses conducted in experiments and quasi-
experiments [48]. The resulting array of a case study using this technique, for instance a
word table with time and event types as rows and columns, may not only reveal an
insightful descriptive pattern but may also hint at possible causal relations, because any
presumed causal condition can precede any presumed outcome condition [35]. This exactly
matches the purpose of the case study.
7.2.2 Data Analysis
Using the Time-series Analysis technique [35], the time series and the events must first
be clearly identified before data analysis starts. The time series of this case is the time
duration of applying the four phases of the ARAR framework (Reverse phase, Root cause
Analysis, Architecture Selection, and Refactoring) from beginning to end. The events are
the outputs of each phase, as well as the face-to-face meetings at the end of each phase.
Data analysis consists of two parts. The first is to interpret the time spent obtaining all
outputs in each phase of the ARAR framework. The other is to analyze the face-to-face
meeting at the end of each phase of the framework. Both are mandatory for answering
the 2nd research question. The times in Figure 30 show that all outputs were achieved
within a reasonable time period by applying the ARAR framework step by step, but this
alone cannot ensure the correctness of the outputs. The face-to-face meeting at the end of
each phase is therefore conducted to check whether the stakeholders are satisfied with the
gathered outputs, and to validate the correctness of the outputs from the stakeholders'
perspectives.
Figure 30: Time and Events in the Case Study
As Figure 30 shows, there are four different time periods for applying the four phases of
the ARAR framework, and at the end of each phase there is a face-to-face meeting marked
as a milestone.
For each phase, we strictly followed the steps defined in the ARAR framework, and we
continuously measured time until all outputs of that phase were collected. The relation
among the four phases of the framework is that each phase depends on the output of the
previous one, and their order cannot be changed. For example, the "Root cause Analysis
phase" depends on the output of the "Reverse phase" to start, and the "Architecture
Selection phase" cannot start without the outputs of the "Root cause Analysis phase". Thus,
the time duration of applying the framework was measured phase by phase, in the fixed
sequence shown in Figure 30. The time duration can be affected by the scope of the project,
as well as by the people involved. For instance, large or complex projects require more
time to recover architecture views than small projects.
For each face-to-face meeting, all stakeholders in APPENDIX A Table 73 participated to
confirm whether the outputs satisfy their requirements. The four face-to-face meetings
appear independent of each other, but their order is fixed while applying the ARAR
framework. Moreover, each face-to-face meeting works as a connector between two
contiguous phases and thus reflects the relation of the four phases. For instance, the
face-to-face meeting at the end of the "Reverse phase" not only validates its output, namely
the architecture view of the legacy system, but also triggers the start of the "Root cause
Analysis" phase.
In addition, the result of each face-to-face meeting is also very important. If one of them
fails, the next step cannot start, and the case study may even fail. All stakeholders
involved in the meetings should therefore have general background knowledge about the
case environment, as well as a general understanding of the processes of the ARAR
framework.
In conclusion, to fulfill the aim of this case study, all output artifacts must first be
collected by applying the ARAR framework, and the time duration can then be gathered
once the output artifacts are achieved. Furthermore, all gathered output artifacts have to be
confirmed as correct by the stakeholders. Only by collecting both the time duration of
applying the framework and the results of the face-to-face meetings can we provide a
complete and reliable answer to the second research question.
Finally, based on the above analysis and the collected case study data, we concluded
that the proposed framework (the ARAR framework) can be applied successfully to the
NetStatusserver system of Ericsson.
7.3 Process Quality Control
In this section, we discuss the threats to this case study. The discussion of validity
threats is based on the threat validation described in [28]. The validity threats considered in
this case study are construct, internal, external, and conclusion validity threats.
7.3.1 Construct Validity
Construct validity concerns generalizing the result of the experiment to the concept or
theory behind the study [28]. In this case study, the data is collected from the elicitation
meeting, the confirm meetings, and the daily meetings. Mono-operation bias [28] is avoided
by collecting data from stakeholders with different roles, including a developer, a system
tester, a testing expert, and a product line manager.
Moreover, in the architecture selection phase, there is a risk that an architecture solution
is selected based on the person who provided it rather than on the solution itself. This
validity threat is alleviated by sending out the architecture solutions with the authors'
names anonymized.
7.3.2 Internal Validity
Threats to internal validity are influences that can affect the causal relationship between
treatment and outcome [41]. In this case study, the threat to internal validity is
instrumentation [28].
The instrumentation threat is the effect caused by the artifacts used in the case study
[28]. In this case study, the artifacts include the scenario elicitation form, the architecture
selection question form, and the new system. We alleviated this threat by developing the
question form based on a previously validated question form (i.e., the architecture selection
question form), by introducing the intention of the artifacts in detail before sending them to
the stakeholders, and by confirming the results with the stakeholders after they sent the
question forms back.
7.3.3 External Validity
Threats to external validity are conditions that limit our ability to generalize the result to
industrial practice [28]. In this case study, this means the ability to generalize whether the
ARAR framework can be used in an industrial environment. There is a risk that the people
involved in the case study activities are irrelevant. To alleviate this validity threat, we
selected stakeholders with different roles, as shown in APPENDIX A Table 73. The fact
that more than one stakeholder is involved in the case study activities improves the ability
to generalize the result.
Moreover, a lack of background knowledge is also a validity threat that limits the
stakeholders' contribution during the case study activities. For example, in the root cause
analysis phase, due to a lack of FRAM knowledge, it was difficult for stakeholders to give
the input needed to build the FRAM diagram. Although we alleviated this threat by
providing the related background knowledge before collecting stakeholder data, we
acknowledge it as a validity threat in this case study.
7.3.4 Conclusion Validity
Threats to conclusion validity concern issues that affect the ability to draw the correct
conclusion about the relation between the treatment and the outcome of an experiment
[28]. In this case study, the conclusion validity threats are fishing and reliability of
treatment implementation.
Fishing means that the researchers look for a specific outcome, so the analyses are no
longer independent [28]. We alleviated this threat by involving both researchers and
stakeholders in the case study activities, and by setting up confirm meetings with the
stakeholders to make sure that all outcomes are based on their expectations.
Reliability of treatment implementation refers to the risk that the implementation is not
similar between different persons applying the treatment, or between different occasions
[28]. In this case study, if different persons apply the ARAR framework to a system of the
same scale, the result is reliable, since we strictly followed the ARAR framework
processes. However, if the framework is applied to a system of a different scale, there is a
risk that the time spent in each phase varies. We alleviated this threat by reporting the
time duration of each phase and the scale of the system, so that others can predict their
time cost in advance.
8 EVALUATE ARAR FRAMEWORK
In this section, we explain the details of the experiment, including its definition,
planning, design, and execution. The aim of this experiment is to answer Research
Question 4 (RQ4).
8.1 Definition
8.1.1 Goal Definition
The goal of this experiment is to evaluate the efficiency of both the new and the old
NetStatusserver system. As described in the book "Experimentation in Software
Engineering", the goal definition contains five aspects: object of study, purpose, quality
focus, perspective, and context [1], as shown in Table 63.
Table 63: Goal of Experiment
Object of study          Purpose   Quality focus  Perspective  Context
NetStatusserver systems  Evaluate  Efficiency     Researchers  Real industrial project at Ericsson
Table 64: NetStatusserver Definition
System Name                        Description
Old legacy NetStatusserver system  The currently running system, with efficiency
                                   problems reported by stakeholders
New NetStatusserver system         The newly implemented system, based on the legacy
                                   system with a new design, in which the reported
                                   performance problems have been fixed by applying
                                   the ARAR framework
Object of study. The objects of this study are the NetStatusserver systems, including
both the new and the old NetStatusserver system, as shown in Table 64. The new
system provides the same functionality as the old system.
Purpose. The purpose of the experiment is to evaluate the efficiency of the
NetStatusserver system.
Quality Focus. The quality focus can be a single aspect or multiple aspects. In this
experiment, we focus on a single quality attribute, efficiency, which the ISO 9126
standard [27] defines as the capability of the software product to provide
appropriate performance, relative to the amount of resources used, under
stated conditions.
Context. The experiment is run within the context of the NetStatusserver in a Linux
environment. It is conducted in the auto-testing environment of Ericsson in
Karlskrona, Sweden.
Perspective. The perspective is that of the researchers, who want to know the
difference in efficiency between the new and the old NetStatusserver systems.
8.1.2 Summary of Definition
The goal definition from Section 8.1.1 can be summarized as follows: identify the performance of the NetStatusserver system of Ericsson for the purpose of evaluation with respect to efficiency from the point of view of the researchers in the context of the auto-test industrial environment at Ericsson.
8.2 Planning
8.2.1 Context Selection
The experiment context is exactly the same as the case study context described in Section 7.1.1.1.
8.2.2 Hypothesis Formulation
For this experiment, two hypotheses have been defined:
Null Hypothesis [1], H0: The efficiency of the new NetStatusserver system is the same as that of the legacy system.
Alternative Hypothesis [1], H1: The efficiency of the new re-engineered system (NetStatusserver) is improved compared to the old system.
8.2.3 Variable Selection
In this part, we select the dependent and independent variables. The independent variables are those variables that we can control and change in the experiment [28]. In this experiment, the independent variables are the Number of Clients and the Location of Clients; Table 65 shows the details.
Number of Clients: the number of clients connected to the server application at the same time.
Location of Clients: the location of the clients sending requests.
Table 65: Independent Variables
Name | Measurement | Range of variable | Metrics
Number of Clients | Ordinal | [1, 10] | Numeric
Location of Clients | Nominal | [Local, Remote] | Enumeration
From the stakeholders' requirements, the maximum number of clients connected to the system at the same time is 10. Besides, there are two types of clients:
Local clients: the client application and the server application are located on the same physical machine.
Remote clients: the client application and the server application are located on different physical machines.
The dependent variable is the effect of the changes in the independent variables [28]. The dependent variable of this experiment is derived directly from the hypotheses: the efficiency of the system. This variable is measured using two direct measures [28], the Execution Time and the Number of Executed Requests:
Execution Time: the time duration from the client sending out an amount of requests until the client receives their responses. It consists of three time durations: sending time, receiving time, and processing time.
Sending time: the time the client spends sending requests to the server application.
Receiving time: the time the server application spends sending responses to the clients.
Processing time: the time the server application spends handling the requests.
Number of Executed Requests: the total number of executed requests sent by the clients during the execution time.
Generally, time is an important variable for measuring efficiency. In this experiment,
execution time is a measure that describes the capability of the NetStatusserver system to
handle the requests from different test cases.
In addition, a test case consists of a batch of requests. That means that when testers execute their test cases, the related requests are sent from the clients to the server application. In order to compare the Execution Time of different subjects, the number of requests must be defined and fixed. The process of calculating the number of requests is described in Section 8.2.4. As a result, the efficiency can be defined as the number of executed requests per second:

E_i = N / T_i

Where
E_i — the efficiency result for the execution of the ith test case.
N — the number of requests in the execution.
T_i — the execution time of the ith test case.
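The efficiency definition above (executed requests per second) can be sketched in a few lines; the request count 6100 comes from Section 8.2.4, and the execution time used here is an illustrative value taken from the raw data in Table 67:

```python
# Efficiency of a test case: E_i = N / T_i (executed requests per second).
# N = 6100 is the fixed request count; 608 s is an illustrative
# execution time from Table 67 (legacy system, 1 remote client).

def efficiency(num_requests: int, execution_time_s: float) -> float:
    """Return the number of executed requests per second."""
    return num_requests / execution_time_s

e_1 = efficiency(6100, 608.0)
print(round(e_1, 2))  # roughly 10 requests per second
```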
8.2.4 Selection of Subjects
The subject selection connects to the generalization of the experiment results [28]. The subjects of this experiment are all possible combinations of the independent variables (Number of Clients and Location of Clients) used to execute 6100 requests.
The number of requests was calculated by analyzing 18000 historical real records of the NetStatusserver system from December 2011 to June 2012; from these, we calculated the maximum number of requests in 10 minutes for the legacy NetStatusserver system. We used a 10-minute window because it is the shortest time duration that can be trusted to represent the capability of the system. The list of subjects is shown in APPENDIX B Table 88, and the number of requests assigned to the different subjects is shown in APPENDIX B Table 90.
8.2.5 Experiment Design
After selecting our independent and dependent variables, we determined the measurement scales for the variables.
8.2.5.1 General Design Principles
Blocking. Blocking is used to systematically eliminate undesired effects in the comparison among the treatments [28]. To minimize the effects of the number of clients and the client type, we defined two blocks: number of clients and type of client.
8.2.5.2 Standard Design type
One factor, two treatments. In this experiment, we compared two treatments, which use different client numbers and client types, on both the new and the old NetStatusserver systems.
Paired comparison design. In this design, each subject uses both treatments. The comparison checks whether the difference between the paired measures is zero [28]. In this thesis, we ran all 20 subjects on both the legacy and the target system. In order to minimize order effects, the sequence in which the subjects were executed was randomized.
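A minimal sketch of how such a randomized paired run order can be produced; the subject labels and the fixed random seed are illustrative, not the thesis's exact setup:

```python
import random

# Paired comparison design: every subject (a combination of client
# number 1-10 and client location) is run on BOTH systems, and the
# order of the subjects is randomized to minimize order effects.
subjects = [(n, loc) for loc in ("local", "remote") for n in range(1, 11)]
random.seed(42)          # fixed seed only so the run order is reproducible
random.shuffle(subjects)

for clients, location in subjects:
    for system in ("legacy", "new"):   # both treatments, back to back
        pass  # here: run the corresponding Ksh script and record the time

print(len(subjects))  # 20 subjects in total
```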
8.2.6 Instrumentation
Executable Ksh script files
These scripts generate the subjects mentioned in APPENDIX B Table 88. The specific number of requests in each script is assigned based on APPENDIX B Table 90. All the requests are real historical data of the NetStatusserver system in Ericsson.
NetStatusserver systems
We ran the experiment files on both the new NetStatusserver system and the old system. Both systems were available for this experiment.
8.2.7 Validity Evaluation
In this experiment, we considered four types of validity: internal validity, external validity, conclusion validity, and construct validity.
Internal validity: Internal validity focuses on the causal relationship between treatment and outcome, in other words whether the treatment causes the outcome. According to [28], factors that impact internal validity include how the subjects are selected and divided into different classes, how the subjects are treated and compensated during the experiment, and whether special events occur during the experiment.
In this experiment, one threat to internal validity is that the treatments for each subject could be performed in different time periods. For example, if one subject is run on the new system in the morning and on the legacy system in the afternoon, it is impossible to judge which system is more efficient, since the running environment differs between the morning and the afternoon. We alleviated this threat by running all treatments for one subject back to back within the same time period. In addition, to avoid interference from the server application, the server applications of both the new and the legacy system were restarted after each run. Furthermore, the instruments were well defined before data collection started. For example, the limit on the Number of Clients was defined based on the stakeholders' requirements, and the type of client was defined based on the industrial environment.
External validity: Threats to external validity concern the ability to generalize
experiment results outside the experiment setting. External validity is affected by the
experiment design chosen, but also by the objects in the experiment and the subjects chosen
[28].
In this experiment, the main risk is conducting the experiment in the wrong environment. For example, the experiment result would differ between the Linux and Windows operating systems. To avoid this threat, we ran all samples in the same GNU/Linux environment.
Conclusion validity: Threats to conclusion validity concern issues that affect the ability to draw the correct conclusion about the relationship between the treatment and the outcome of an experiment [28].
To overcome these threats, the following measures were taken:
High statistical power. Each sample was run 3 times on both the new and the old NetStatusserver system, and all data was recorded each time. The geometric mean value was then used for analysis after data reduction.
High reliability of measures. The maximum number of clients was identified through face-to-face meetings with stakeholders, and the type of client was decided based on the real testing environment in Ericsson.
High reliability of treatment implementation. The same sample was executed in the same way on both the old and the new NetStatusserver system.
Construct validity: Construct validity concerns generalizing the result of the experiment to the theoretical concept behind the experiment [28]; it concerns whether we measure what we believe we measure [28]. The goal was well defined before the experiment started. We also avoided mono-operation bias by selecting two independent variables. A possible validity threat is the interaction of different treatments. To alleviate this threat, all planned subjects were run individually with the given treatment. For example, while subject 1 was running on the new system, no other subject was run until it had finished.
8.3 Operation
The operation phase consists of three parts: preparation, where subjects are chosen and forms are prepared; execution, where the subjects perform their tasks according to the different treatments and data is collected; and data validation, where the collected data is validated [28].
8.3.1 Preparation
As presented in the book "Experimentation in Software Engineering" [28], two important aspects in the preparation of an experiment are the participants and the materials, such as forms and tools. In this experiment, the participants are the researchers. The materials comprise the new and the old system. Before starting the experiment execution, we prepared and configured both the new and the old NetStatusserver system to ensure that both were ready for execution.
In addition, in order to simplify the experiment execution, for each sample in Table 5 we implemented two Ksh script files, one for the new and one for the old NetStatusserver system. The experimenters therefore only need to execute the script files with a single command and record the data after each successful execution on both NetStatusserver systems.
8.3.2 Execution
This experiment was executed over a period of time in a closed office room at Ericsson by two researchers, running both the old and the new NetStatusserver system. This section contains the data collection and a description of the experiment environment.
8.3.2.1 Data Collection
In this experiment, data was primarily collected manually by the participants filling out forms. In each form, the execution time of running a test sample from APPENDIX B Table 89 on both the new and the old NetStatusserver system was recorded.
The researchers followed these steps manually to gather all execution time data:
Firstly, we randomly selected a sample from all available samples in APPENDIX B Table 88.
Secondly, based on the selected sample, we ran the corresponding executable files on both the new and the old NetStatusserver system.
Thirdly, we recorded each successful execution result in the data collection table, as shown in APPENDIX B Table 89.
For each selected sample, we repeated the execution 3 times on both the new and the old NetStatusserver system. All the collected data is shown in APPENDIX B Table 89.
8.3.2.2 Experiment Environment
Table 66: Experiment environment
Name | Description
Participants | Two researchers
Operating System | GNU/Linux x86_64
Execution files | Ksh scripts
Location | Office room at Ericsson (closed)
8.3.3 Data Validation
In this experiment, the collected data was validated from two aspects: first, checking that the data was collected in the correct way; second, making sure the collected data is reasonable. Several actions had already been applied while the experimenters collected the data:
Simplify the data collection steps. We created separate Ksh script files for the new and the old NetStatusserver system to simplify the execution steps; we only needed to run the script files and record the execution time.
Conduct a training session for all experiment participants so that they understood all steps of the data collection.
Explain the data collection forms to all participants before the data collection to make sure they filled them out correctly.
Organize several seminars with all researchers after the data collection. During the seminars, we reviewed the collected data to check that it was reasonable and to make sure all test samples had been executed completely. Besides, we exchanged opinions to examine unusual data points.
8.4 Data Analysis
The analysis aims to interpret the experiment data in a more understandable way in order to draw valid conclusions [28]. In the first step, we characterize the data using descriptive statistics. After that, the data set is reduced to make sure that abnormal or false data points are excluded. In the third step, the defined hypotheses are tested using the paired t-test method [28].
8.4.1 Descriptive Statistics
As a first step in analyzing the data, descriptive statistics [28] are used to visualize the collected data. After that, we summarize the collected data in order to understand its nature [28].
8.4.1.1 Analysis of Direct Measurements
Following the definition of the measures, we evaluate efficiency via two direct measures: execution time and number of executed requests. The number of executed requests for each defined test case is 6100; the reason for selecting this number is explained in Section 8.2.4. The raw data for the execution time is shown in Table 67.
Table 67: Collected Execution Time for each test case
Client No. | Legacy Remote (s) | Legacy Local (s) | New Remote (s) | New Local (s)
1 | 608 | 585 | 69 | 69
2 | 601 | 580 | 33 | 33
3 | 598 | 575 | 23 | 24
4 | 592 | 572 | 15 | 18
5 | 590 | 568 | 15 | 15
6 | 586 | 571 | 13 | 12
7 | 589 | 570 | 11 | 9
8 | 591 | 568 | 8 | 7
9 | 590 | 567 | 8 | 7
10 | 588 | 564 | 7 | 7
Average | 593 | 572 | 20 | 20
STDV | 7 | 6 | 18 | 18
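The Average and STDV rows of Table 67 can be reproduced from the raw data with basic descriptive statistics; a minimal sketch for one column:

```python
from statistics import mean, stdev

# Execution times (seconds) for the legacy system with remote clients,
# taken from Table 67 (clients 1 through 10).
legacy_remote = [608, 601, 598, 592, 590, 586, 589, 591, 590, 588]

avg = mean(legacy_remote)   # arithmetic mean
sd = stdev(legacy_remote)   # sample standard deviation

print(round(avg), round(sd))  # matches the Average/STDV rows: 593 7
```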
For both the new and the legacy system, the raw execution time data shows that the more clients are connected, the less execution time is needed. This is because the sending time decreases: with more connected clients, each client sends a smaller share of the requests to the server application, and thus it takes less time for the clients to send the same total amount of requests.
Comparing the remote clients of the legacy system with those of the new system, the average execution time is 593 seconds for the legacy system and 20 seconds for the new system. Moreover, for the legacy system with remote clients the maximum value is 608 seconds and the minimum value is 586 seconds, while for the new system the maximum and minimum values are 69 seconds and 7 seconds respectively. This shows that the new system needs less time than the legacy system to execute the requests from remote clients. We then compared the local clients of both systems. The average execution time is 572 seconds for the legacy system and 20 seconds for the new system; the maximum and minimum execution times are 585 and 564 seconds for the legacy system, and 69 and 7 seconds for the new system, respectively. Similarly, the results show that the new system needs less time than the legacy system to execute the requests from local clients. As a result, the new system needs less time than the legacy system to execute the same amount of requests.
78
8.4.1.2 Analysis of Efficiency
To start with, we calculate the efficiency for the different clients based on the formula in Section 8.2.3; the result is shown in Table 68.
Table 68: Efficiency Result
Client No. | Legacy Remote (requests/s) | Legacy Local (requests/s) | New Remote (requests/s) | New Local (requests/s)
1 | 10.02 | 10.43 | 88.84 | 88.84
2 | 10.13 | 10.51 | 183.02 | 184.85
3 | 10.18 | 10.60 | 269.18 | 254.17
4 | 10.29 | 10.66 | 399.35 | 338.89
5 | 10.31 | 10.74 | 406.67 | 416.13
6 | 10.38 | 10.68 | 469.23 | 523.29
7 | 10.35 | 10.70 | 554.55 | 704.92
8 | 10.30 | 10.73 | 733.14 | 833.49
9 | 10.37 | 10.76 | 766.51 | 833.49
10 | 10.34 | 10.82 | 833.49 | 871.43
Figure 31 shows the efficiency of both the new and the legacy system for different numbers of connected clients. The efficiency value for a specific number of connected clients is the average of the values collected for the remote and the local client.
Figure 31 also shows that, as the number of connected clients increases, the efficiency of the new system increases rapidly, while the efficiency of the old system stays almost the same. The underlying data shows that the efficiency of the new system is roughly nine times or more that of the legacy system. Based on these results, the new system has better efficiency than the legacy system for any number of connected clients.
Figure 31: Comparing Efficiency in New System with Legacy System. The chart plots efficiency (requests/second) against the number of connected clients (1-10); the averaged data series are:
Client number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
New System | 94.66 | 195.9 | 278.7 | 361.1 | 438.3 | 528.0 | 666.2 | 832.9 | 851.7 | 908.1
Old System | 10.89 | 11.00 | 11.07 | 11.16 | 11.21 | 11.22 | 11.21 | 11.21 | 11.25 | 11.27
8.4.2 Data Reduction
The aim of the data reduction is to find and remove outliers. Scatter plots of the collected Execution Time data are shown in the following figures.
Figure 32: Execution Time of Legacy system with Local Clients
Figure 33: Execution Time of Legacy system with Remote Clients
Figure 34: Execution Time of New system with Local Clients
Figure 35: Execution Time of New system with Remote Clients
Figures 32 and 33 show the collected Execution Time for the legacy system with local and remote clients respectively. We find three points with considerably higher values (about 10 seconds higher) than the other points: one in Figure 32 at horizontal-axis value 2, and two in Figure 33 at horizontal-axis values 2 and 7. A possible reason is that, while these test cases were running, the performance of the server was low because many other jobs were running at the same time, causing data packets to be lost. In order to recover from the packet loss, the system takes more time for transmission control and retransmission of the data packets. Since this is a common situation on the physical server, we kept these points in the experiment.
In Figures 34 and 35, we analyze the data sets of the new system. It can be seen that all collected data points are well behaved, with only small differences. However, since both the legacy and the new system run on the same server, it is interesting to understand why the new system seems unaffected by the server load issue. The reason is that the new system mitigates the effect of server load by using a well-defined retransmission approach and transmission control rules. As a result, we evaluated all collected data and kept all points for the following analysis.
8.5 Hypothesis Testing
8.5.1 Input Data
As mentioned above, every test case was executed three times. The number of executed requests for each test case is described in APPENDIX B Table 90. We use the geometric mean value of the three runs of each test case as input data; the result is shown in Table 69.
Table 69: Paired T-test analysis input data set
Test case | New System (requests/s) | Old System (requests/s)
R1 88.84 10.02
R2 183.02 10.13
R3 269.18 10.18
R4 399.35 10.29
R5 406.67 10.31
R6 469.23 10.38
R7 554.55 10.35
R8 733.14 10.30
R9 766.51 10.37
R10 833.49 10.34
L1 88.84 10.43
L2 184.85 10.51
L3 254.17 10.60
L4 338.89 10.66
L5 416.13 10.74
L6 523.29 10.68
L7 704.92 10.70
L8 833.49 10.73
L9 833.49 10.76
L10 871.43 10.82
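The per-test-case input values in Table 69 are geometric means over three runs; a minimal sketch of that reduction, using hypothetical run values for illustration:

```python
import math

# The thesis reduces the three repeated runs of each test case to a
# single value using the geometric mean:
#   gmean(v_1..v_k) = exp( (1/k) * sum(ln v_i) )
def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

runs = [88.1, 89.2, 89.3]  # hypothetical efficiencies of three runs
print(round(geometric_mean(runs), 2))
```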
8.5.2 Data Calculation
After building the input data, we calculated the result using the paired t-test process described in [28]. One factor (the NetStatusserver system) has two treatments (new system and old system), and we use the geometric mean efficiency values to measure the efficiency of the system. The test is performed as shown in Table 70.
81
Table 70: Paired T-test Calculation
Input | x_i: mean efficiency value from running test case i on the new system. y_i: mean efficiency value from running test case i on the old system. The paired samples (x_1, y_1), (x_2, y_2), ..., (x_20, y_20) are shown in Table 69.
Hypotheses | d_i = x_i − y_i, for i = 1..20. H0: the expected difference is zero, i.e. the new and old system have the same efficiency. H1: the expected difference is greater than zero.
Calculation [28] | Calculate t_0 = d̄ / (S_d / √n), where d̄ is the mean of the differences d_i, S_d is the standard deviation of the differences, and n = 20.
Criterion [28] | One-sided (H1: greater than zero): reject H0 if t_0 > t(0.05, 19). Here "0.05" is the probability of rejecting H0 while H0 is true, and "19" is the number of degrees of freedom [13]. The value of t(0.05, 19) is 2.0930 according to the paired t-test table.
Table 71: Paired T-test differences d
Test case i | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
d_i (requests/s) | 78.81 | 172.89 | 259.00 | 389.06 | 396.35 | 458.85 | 544.20 | 722.84 | 756.15 | 823.15
Test case i | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
d_i (requests/s) | 78.41 | 174.34 | 243.56 | 328.22 | 405.39 | 512.61 | 694.22 | 822.76 | 822.73 | 860.61
Table 72: Paired T-test result
Name | Value
S_d | 265.84
Geometric mean d̄ | 386.78
t_0 | 6.51
t(0.05, 19) | 2.09
Finally, the result of the paired t-test is shown in Table 71 and Table 72. Since t_0 > t(0.05, 19), the null hypothesis H0 is rejected with a one-sided test at the 0.05 level.
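As a cross-check, the t_0 value in Table 72 can be reproduced from the input data in Table 69. The sketch below follows the calculation reported in Table 72, which takes the geometric mean of the differences as d̄ (rather than the more conventional arithmetic mean):

```python
import math
from statistics import stdev

# Paired t-test on the Table 69 efficiencies (requests/second).
new = [88.84, 183.02, 269.18, 399.35, 406.67, 469.23, 554.55, 733.14,
       766.51, 833.49, 88.84, 184.85, 254.17, 338.89, 416.13, 523.29,
       704.92, 833.49, 833.49, 871.43]
old = [10.02, 10.13, 10.18, 10.29, 10.31, 10.38, 10.35, 10.30, 10.37,
       10.34, 10.43, 10.51, 10.60, 10.66, 10.74, 10.68, 10.70, 10.73,
       10.76, 10.82]

d = [x - y for x, y in zip(new, old)]              # paired differences
n = len(d)
s_d = stdev(d)                                      # sample std. deviation
d_bar = math.exp(sum(math.log(v) for v in d) / n)   # geometric mean, as in Table 72
t_0 = d_bar / (s_d / math.sqrt(n))

print(round(s_d, 2), round(d_bar, 2), round(t_0, 2))  # close to Table 72's values
# t_0 exceeds t(0.05, 19) = 2.0930, so H0 is rejected.
```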
8.5.3 Summary and Conclusion
From the analysis of the experiment data, we rejected the null hypothesis with a one-sided criterion. Based on the statistical data, we are able to show that the new system has better efficiency. As a result, research question 4 (RQ4) has been answered by this experiment.
9 DATA SYNTHESIS
9.1 ARAR Framework
As presented in Section 5, the existing quality driven frameworks focus only on specific kinds of quality attributes. The ARAR framework, however, extends the quality scope to all ISO 9126 quality attributes [27]. Moreover, among all these quality driven re-engineering frameworks, the ARAR framework is the only one that involves the FRAM method to find the root cause problem.
9.2 Architecture Selection Method
Research conducted by Mikael Svahnberg et al. in 2003 [17] presented a quality-driven decision-support method for identifying software architecture candidates. This method is used in the ARAR framework to select among optional software architectures. The result of the case study confirms that this method is valid for identifying and prioritizing software architecture candidates.
9.3 Root Cause Analysis in Software Re-engineering
Bergey et al. [69] illustrate an architecture-based re-engineering life cycle, shown in Figure 2. In this re-engineering life cycle [69], predefined software qualities are improved by re-architecting the legacy system. However, it is difficult to figure out which software qualities are problematic and need to be improved. In this paper, the ARAR framework includes a root cause analysis phase to identify quality problems. Once the root cause problems have been resolved, the other quality problems connected to them can be resolved automatically, without extra expense.
9.4 Usage of the Case Study
This paper also provides a detailed industrial case study to illustrate how to apply the ARAR framework. The case study can be used as an instruction for following the framework process, and also as an example for readers who decide to apply the ARAR framework to their own legacy systems.
9.5 Experiment
We successfully executed an experiment at Ericsson to validate the efficiency of the NetStatusserver system after applying the ARAR framework. The experiment result shows that the efficiency is improved by applying the ARAR framework, and all stakeholders are satisfied with the experiment result. Besides, the experiment process can be reused when another experiment is conducted to test the ARAR framework.
10 CONCLUSION
10.1 Answers to Research Questions
In this section, the study results are mapped to each research question in order to present and verify their completeness. Based on the study results, each research question is answered below.
RQ1: What are the potential drawbacks of existing processes for quality driven
reengineering?
Section 5.5 presents the potential drawbacks of existing processes for quality driven reengineering by comparing their processes with each other. All weaknesses are then mapped onto the standard re-engineering lifecycle defined by the SEI [69], as illustrated in Figure 2. Finally, we classify all weaknesses and create solutions to resolve them. The weakness descriptions, as shown in Table 10, are thus the answer to RQ1.
RQ2: What components should a quality driven re-engineering framework contain?
Section 5.6 and Section 6 present the design of the proposed quality driven reengineering framework, which fixes the drawbacks mentioned in RQ1. The proposed framework is presented in four parts: Architecture Reverse, Root Cause Analysis, Architecture Solution Selection, and Refactoring.
RQ3: Can the proposed framework be applied in an industrial environment?
Section 7 presents the application of the ARAR framework in an industrial environment. In this section, a case study is presented to verify the ARAR framework. The results, which contain both statistics and feedback from the company, confirm that the ARAR framework can be applied in an industrial environment.
RQ4: Can the proposed framework improve the efficiency of a legacy system?
Hypothesis: The efficiency of the new re-engineered system (NetStatusserver) is improved compared to the old system.
Section 8 presents the evaluation of the ARAR framework's result. In this section, an experiment is performed to evaluate the efficiency of the new system after applying the ARAR framework. Both the data analysis and the hypothesis testing show that the efficiency of the new system is improved compared to the legacy system.
10.2 Conclusion
In this paper, we have made four contributions to our research area.
The first contribution is the ARAR framework, created to improve quality performance by identifying and resolving root cause problems in legacy systems. This framework can be applied to different legacy systems, as well as to different quality requirements, such as all quality attributes defined in ISO 9126 [27].
The second contribution is that we successfully conducted an industrial case study by applying the ARAR framework to the NetStatusserver system in Ericsson. Through this case study, we showed that the ARAR framework is applicable to legacy systems such as the NetStatusserver system defined in Section 6.
The third contribution is a successful experiment, executed at Ericsson, which shows that the efficiency of the NetStatusserver system is improved after applying the ARAR framework. Besides, all stakeholders in Ericsson are satisfied with the efficiency of the target system produced by applying the ARAR framework.
The fourth contribution is that we analyzed three existing re-engineering frameworks and listed their weaknesses in Table 10.
In conclusion, the ARAR framework is applicable and successful for a specific legacy system in the Ericsson company.
10.3 Future Work
Although we have improved the efficiency of a specific legacy system, the ARAR framework still needs to be tested on diverse legacy systems with different quality requirements. Therefore, future work could test the ARAR framework on different legacy systems, such as large-scale systems or real-time systems, or against different quality requirements, such as usability or reliability. In addition, the way weight values are assigned to different function units in the second part of the ARAR framework can also be extended in the future.
APPENDIX A
Table 73: Participants of the case study
Participant | Description
System Tester | Familiar with the NetStatusserver system; clear about the system architecture; currently maintaining the system; understands the problems and complaints from end users
Test & Verification Expert | Quite familiar with the NetStatusserver system; understands the problems of the NetStatusserver; knows the architecture of the system
Product-line Manager | Clear about the goal of this project
Verification Engineer | Quite familiar with the framework; case study designer and executor
Software Developer | Quite familiar with the framework; case study designer and executor
Table 74: Selected papers for re-engineering
Author Title Year Published in Citation
Number
Domain Type
Allier, S
Sahraoui, H
A
Sadou, S
Vaucher, S
Restructuring
object-
oriented
applications
into
component-
oriented
applications
by using
consistency
with execution
traces [37]
2010 Lecture node in
computer
science
1 Software
system
restructure
approach
Conferenc
e
paper
Arboleda,
H.a
Royer, J.-
C.b
Component
types
qualification
in Java legacy
code driven by
communicatio
n integrity
rules [38]
2011 Proceedings of
the 4th India
Software
Engineering
Conference
2011, ISEC'11
0 Software
component
qualificatio
n
Conferenc
e paper
Arcuri, A On search
based
2009 2009 1st
International
0 Software
evolution
Conferenc
e paper
87
software
evolution[58]
Symposium on
Search Based
Software
Engineering.
SSBSE 2009
Bode, S
Riebisch, M
Impact
evaluation for
quality-
oriented
architectural
decisions
regarding
evolvability
[59]
2010 Lecture Notes in
Computer
Science
(including
subseries
Lecture Notes in
Artificial
Intelligence and
Lecture Notes in
Bioinformatics)
3 Software
architectur
e
Journal
article
Bois, B D
Demeyer, S
Verelst, J
Refactoring -
Improving
coupling and
cohesion of
existing code
[60]
2004 Proceedings -
Working
Conference on
Reverse
Engineering,
WCRE
24 Software
refactoring
at source
code level
Conferenc
e paper
Bratthall, L
Wohlin, C
Understanding
some software
quality aspects
from
architecture
and design
models [64]
2000 Proceedings
IWPC 2000. 8th
International
Workshop on
Program
Comprehension
1 Software
quality
Conferenc
e paper
Breivold,
Hongyu Pei
Crnkovic,
Ivica
Larsson,
Magnus
A systematic
review of
software
architecture
evolution
research [65]
2012 Information and
Software
Technology
26 Software
architectur
e evolution
Journal
article
Bryton, S.a
Brito E
Abreu, F.a
Monteiro,
M.b
Reducing
subjectivity in
code smells
detection:
Experimenting
with the Long
Method [66]
2010 Proceedings -
7th International
Conference on
the Quality of
Information and
Communications
Technology,
QUATIC 2010
0 Informatio
n
technology
Conferenc
e paper
Bryton, S
Abreu, F B
Strengthening
refactoring:
Towards
software
evolution with
quantitative
and
experimental
grounds [67]
2009 4th International
Conference on
Software
Engineering
Advances,
ICSEA 2009,
Includes SEDES
2009: Simposio
para Estudantes
de
Doutoramento
em Engenharia
de Software
2 Software
refactoring
Conferenc
e paper
Bryton, S; Abreu, F B E. "Modularity-oriented refactoring" [68]. 2008, Proceedings of the European Conference on Software Maintenance and Reengineering, CSMR. Citations: 2. Topic: Software refactoring. Conference paper.
Bushehrian, O. "Automatic actor-based program partitioning" [113]. 2010, Journal of Zhejiang University: Science C. Citations: 0. Topic: Reverse engineering. Journal article.
Bushehrian, O. "A new metric for automatic program partitioning" [116]. 2009, Proceedings - IEEE 9th International Conference on Computer and Information Technology, CIT 2009. Citations: 0. Topic: Software architecture. Conference paper.
Bushehrian, O. "Applying heuristic search for distributed software performance enhancement" [121]. 2009, Proceedings of the 2009 2nd International Conference on Computer Science and Its Applications, CSA 2009. Citations: 0. Topic: Software performance. Conference paper.
Chardigny, S; Seriai, A. "Software architecture recovery process based on object-oriented source code and documentation" [124]. 2010, Lecture Notes in Computer Science. Citations: 1. Topic: Software architecture recovery process. Conference paper.
Chardigny, S; Seriai, A; Tamzalit, D; Oussalah, M. "Quality-driven extraction of a component-based architecture from an object-oriented system" [127]. 2008, 12th European Conference on Software Maintenance and Reengineering: Developing Evolvable Systems. Citations: ?. Topic: Software architecture. Conference paper.
Choi, Yunja; Jang, Hoon. "Reverse Engineering Abstract Components for Model-Based Development and Verification of Embedded Software" [128]. 2010, Proceedings 2010 IEEE 12th International Symposium on High-Assurance Systems Engineering (HASE). Citations: 1. Topic: Reverse engineering. Conference paper.
Chung, S; Davalos, S; An, J B C; Iwahara, K. "Legacy to Web migration: service-oriented software reengineering methodology" [136]. 2008, Int. J. Serv. Sci. (Switzerland). Citations: 2. Topic: Software re-engineering methodology. Journal article.
Cleland-Huang, J; Settimi, R; Zou, Xuchang; Solc, P. "Automated classification of non-functional requirements" [138]. 2007, Requir. Eng. (UK). Citations: 39?. Topic: Software requirement. Journal article.
De Lucia, A; Qusef, A. "Requirements Engineering in Agile Software Development" [139]. 2003, J. Emerg. Technol. Web Intell. (Finland). Citations: ?. Topic: Software requirement. Journal article.
Dobrica, L. "Exploring approaches of integration software architecture modeling with quality analysis models" [32]. 2011, Proceedings - 9th Working IEEE/IFIP Conference on Software Architecture, WICSA 2011. Citations: 0. Topic: Software architecture integration approach. Conference paper.
Dobricǎ, L; Ioniţǎ, A D; Pietraru, R; Olteanu, A. "Automatic transformation of software architecture models" [145]. 2011, UPB Scientific Bulletin, Series C: Electrical Engineering. Citations: 0. Topic: Software architecture transformation method. Journal article.
Etzkorn, L; Delugach, H. "Towards a semantic metrics suite for object-oriented design" [146]. 2000, Proceedings 34th International Conference on Technology of Object-Oriented Languages and Systems - TOOLS 34. Citations: 47?. Topic: Software design. Conference paper.
Fuentes-Fernández, Rubén; Pavón, Juan; Garijo, Francisco. "A model-driven process for the modernization of component-based systems" [147]. 2012, Science of Computer Programming. Citations: 0. Topic: Software design. Journal article.
Gilles, O; Hugues, J. "A MDE-based optimisation process for real-time systems: Optimizing systems at the architecture-level using the REAL DSL and library of transformation and heuristics" [148]. 2011, Int. J. Comput. Syst. Sci. Eng. (UK). Citations: 0. Topic: Software architecture transformation process. Journal article.
Gu, J; Ding, E; Luo, B. "Feature-oriented re-engineering using product line approach" [149]. 2010, 2nd International Conference on Information Science and Engineering, ICISE 2010 - Proceedings. Citations: 0. Topic: Software re-engineering approach. Conference paper.
Guo, Jiang. "Software reuse through re-engineering the legacy systems" [49]. 2003, Information and Software Technology. Citations: 10. Topic: Software re-engineering framework. Journal article.
Hsueh, N.-L.; Kuo, J.-Y.; Lin, C.-C. "Object-oriented design: A goal-driven and pattern-based approach" [151]. 2009, Software and Systems Modeling. Citations: 8. Topic: Software design. Journal article.
Hsueh, N.-L.; Wen, L.-C.; Ting, D.-H.; Chu, W.; Chang, C.-H.; Koong, C.-S. "An approach for evaluating the effectiveness of design patterns in software evolution" [152]. 2011, Proceedings - International Computer Software and Applications Conference. Citations: 2. Topic: Software architecture evaluation approach. Conference paper.
Ivkovic, I; Kontogiannis, K. "A framework for software architecture refactoring using model transformations and semantic annotations" [153]. 2006, 10th European Conference on Software Maintenance and Reengineering. Citations: 0. Topic: Software architecture refactoring framework. Conference paper.
Khodamoradi, K; Habibi, J; Kamandi, A. "Architectural styles as a guide for software architecture reconstruction" [154]. 2008, Communications in Computer and Information Science. Citations: 0. Topic: Software architecture reconstruction approach. Conference paper.
Kim, S.; Kim, D.-K.; Lu, L.; Kim, S.; Park, S. "A feature-based approach for modeling role-based access control systems" [155]. 2011, Journal of Systems and Software. Citations: 1. Topic: Software design. Journal article.
Knodel, J; John, I; Ganesan, D; Pinzger, M; Usero, F; Arciniegas, J L; Riva, C. "Asset recovery and their incorporation into product lines" [156]. 2005, WCRE: 12th Working Conference on Reverse Engineering 2005, Proceedings. Citations: 0. Topic: Software product line. Conference paper.
Ko, J W; Song, Y J. "Graph based model transformation verification using mapping patterns and graph comparison algorithm" [157]. 2012, International Journal of Advancements in Computing Technology. Citations: 4. Topic: Software transformation verification. Journal article.
Laguna, Miguel A; Crespo, Yania. "A systematic mapping study on software product line evolution: From legacy system reengineering to product line refactoring" [102]. 2012, Science of Computer Programming. Citations: 0. Topic: Software product line evolution. Journal article.
Lee, H.; Choi, H.; Kang, K.C.; Kim, D.; Lee, Z. "Experience report on using a domain model-based extractive approach to software product line asset development" [158]. 2009, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Citations: 2. Topic: Software product line development. Journal article.
Li, Y. "Reengineering a scientific software and lessons learned" [159]. 2011, Proceedings - International Conference on Software Engineering. Citations: 1. Topic: Software re-engineering. Conference paper.
Lindvall, Mikael. "Impact Analysis in Software Evolution" [160]. 2003, Elsevier. Citations: 1. Topic: Software evolution. Journal article.
Mathrani, Anuradha; Mathrani, Sanjay. "Test strategies in distributed software development environments" [161]. 2013, Computers in Industry. Citations: 0. Topic: Software test strategies. Journal article.
Matinlassi, M. "Quality-driven software architecture model transformation" [162]. 2005, Proceedings - 5th Working IEEE/IFIP Conference on Software Architecture, WICSA 2005. Citations: 1. Topic: Quality-driven software architecture transformation process. Conference paper.
Miller, J A; Ferrari, R; Madhavji, N H. "An exploratory study of architectural effects on requirements decisions" [163]. 2010, Journal of Systems and Software. Citations: 3. Topic: Software architecture and requirement. Journal article.
Navas, Juan F; Babau, Jean-Philippe; Pulou, Jacques. "Reconciling run-time evolution and resource-constrained embedded systems through a component-based development framework" [164]. 2012, Science of Computer Programming. Citations: 0. Topic: Software evolution. Journal article.
Neukirchen, Helmut; Zeiss, Benjamin; Grabowski, Jens. "An approach to quality engineering of TTCN-3 test specifications" [165]. 2008, International Journal on Software Tools for Technology Transfer. Citations: 4. Topic: Software quality. Journal article.
Parsa, S; Bushehrian, O. "Performance-driven object-oriented program re-modularisation" [166]. 2008, IET Software. Citations: 4. Topic: Software transformation. Journal article.
Ping, Zhang; Yang, Su. "Understanding the aspects from various perspectives in aspects-oriented software reverse engineering" [167]. 2010, 2010 International Conference on Computer Application and System Modeling (ICCASM 2010). Citations: 0. Topic: Software reverse engineering. Conference paper.
Qayum, F; Heckel, R. "Local search-based refactoring as graph transformation" [168]. 2009, Proceedings - 1st International Symposium on Search Based Software Engineering, SSBSE 2009. Citations: 4. Topic: Software refactoring. Conference paper.
Räihä, O.; Koskimies, K.; Mäkinen, E. "Generating software architecture spectrum with multi-objective genetic algorithms" [169]. 2011, Proceedings of the 2011 3rd World Congress on Nature and Biologically Inspired Computing, NaBIC 2011. Citations: 0. Topic: Software architecture. Conference paper.
Reddy, K.N.; Rao, A.A.; Chand, M.G.; Kumar J., K. "A quantitative method to detect design defects and to ascertain the elimination of design defects after refactoring" [170]. 2008, Proceedings of the 2008 International Conference on Software Engineering Research and Practice, SERP 2008. Citations: 0. Topic: Software design. Conference paper.
Sagonas, Konstantinos; Avgerinos, Thanassis. "Automatic refactoring of Erlang programs" [171]. 2009, PPDP'09 - Proceedings of the 11th International ACM SIGPLAN Symposium on Principles and Practice of Declarative Programming. Citations: 4. Topic: Software refactoring. Conference paper.
Shatnawi, R; Li, W. "An empirical assessment of refactoring impact on software quality using a hierarchical quality model" [172]. 2011, International Journal of Software Engineering and its Applications. Citations: 0. Topic: Software refactoring evaluation. Journal article.
Silva, N V; Oliveira, A S R; Carvalho, N B. "Design and Optimization of Flexible and Coding Efficient All-Digital RF Transmitters" [173]. 2013, IEEE Trans. Microw. Theory Tech. (USA). Citations: 0. Topic: Software design. Journal article.
Stoermer, C.; Rowe, A.; O'Brien, L.; Verhoef, C. "Model-centric software architecture reconstruction" [174]. 2006, Software - Practice and Experience. Citations: 4. Topic: Software architecture reconstruction approach. Journal article.
Störmer, C. "Software Quality Attribute Analysis by Architecture Reconstruction (SQUA3RE)" [80]. 2007, Proceedings of the European Conference on Software Maintenance and Reengineering, CSMR. Citations: 0. Topic: Software quality. Conference paper.
Stroggylos, K; Spinellis, D. "Refactoring - Does it improve software quality?" [176]. 2007, Proceedings - ICSE 2007 Workshops: 5th International Workshop on Software Quality, WoSQ 2007. Citations: 0. Topic: Software refactoring. Conference paper.
Sward, R E; Chamillard, A T; Cook, D A. "Using software metrics and program slicing for refactoring" [177]. 2004, CrossTalk. Citations: 0. Topic: Software refactoring. Journal article.
Tahvildari, L. "Evolving legacy systems through a multi-objective decision process" [178]. 2005, Proceedings Twelfth International Workshop on Software Technology and Engineering Practice. Citations: 0. Topic: Software evolution. Conference paper.
Tahvildari, L; Kontogiannis, K. "Requirements driven software evolution" [179]. 2004, Program Comprehension, Workshop Proceedings. Citations: 1. Topic: Software evolution. Conference paper.
Tahvildari, L; Kontogiannis, K. "On the role of design patterns in quality-driven re-engineering" [99]. 2002, Proceedings of the Sixth European Conference on Software Maintenance and Reengineering. Citations: 1. Topic: Software design pattern re-engineering. Conference paper.
Tahvildari, L; Kontogiannis, K. "Developing a multi-objective decision approach to select source-code improving transformation" [180]. 2004, IEEE International Conference on Software Maintenance, ICSM. Citations: 0. Topic: Software transformation approach at code level. Conference paper.
Tahvildari, L; Kontogiannis, K. "A software transformation framework for quality-driven object-oriented re-engineering" [181]. 2002, International Conference on Software Maintenance, Proceedings. Citations: 8. Topic: Software framework re-engineering. Conference paper.
Tahvildari, L; Kontogiannis, K. "A methodology for developing transformations using the maintainability soft-goal graph" [182]. 2002, Proceedings Ninth Working Conference on Reverse Engineering, WCRE 2002. Citations: 4. Topic: Software transformation methodology. Conference paper.
Tahvildari, L; Kontogiannis, K. "Improving design quality using meta-pattern transformations: a metric-based approach" [183]. 2004, Journal of Software Maintenance and Evolution: Research and Practice. Citations: 22. Topic: Software transformation approach. Journal article.
Tahvildari, L; Kontogiannis, K. "Quality-driven object-oriented code restructuring" [184]. "Second Workshop on Software Quality", W13S Workshop - 26th International Conference on Software Engineering. Citations: 2. Topic: Software quality driven restructuring. Conference paper.
Tahvildari, L; Kontogiannis, K; Mylopoulos, J. "Quality-driven software re-engineering" [43]. 2003, J. Syst. Softw. (USA). Citations: 26. Topic: Software quality driven re-engineering. Journal article.
Tahvildari, L; Kontogiannis, K; Mylopoulos, J. "Requirements-driven software re-engineering framework" [42]. 2001, Reverse Engineering - Working Conference Proceedings. Citations: 9. Topic: Software re-engineering framework. Conference paper.
Tahvildari, Ladan. "Quality-driven object-oriented re-engineering framework" [44]. 2004, IEEE International Conference on Software Maintenance, ICSM. Citations: 22. Topic: Software re-engineering framework. Conference paper.
Tang, A; Kuo, F.-C.; Lau, M F. "Towards independent software architecture review" [36]. 2008, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Citations: 0. Topic: Software architecture. Journal article.
Trifu, A; Seng, O; Genssler, T. "Automated design flaw correction in object-oriented systems" [109]. 2004, CSMR 2004: Eighth European Conference on Software Maintenance and Reengineering, Proceedings. Citations: 12. Topic: Software design. Conference paper.
Unphon, Hataichanok; Dittrich, Yvonne. "Software architecture awareness in long-term software product evolution" [47]. 2010, Journal of Systems and Software. Citations: 11. Topic: Software architecture. Journal article.
Van Rompaey, B; Du Bois, B; Demeyer, S; Pleunis, J; Putman, R; Meijfroidt, K; Duenas, J C; Garcia, B. "SERIOUS: software evolution, refactoring, improvement of operational and usable systems" [105]. 2009, 2009 13th European Conference on Software Maintenance and Reengineering. Citations: 1. Topic: Software evolution. Conference paper.
Vanhanen, Jari; Lassenius, Casper. "Perceived effects of pair programming in an industrial context" [117]. 2007, SEAA 2007: 33rd EUROMICRO Conference on Software Engineering and Advanced Applications, Proceedings. Citations: 8. Topic: Pair programming. Conference paper.
Wu, Y; Yang, Y; Peng, X; Qiu, C; Zhao, W. "Recovering object-oriented framework for software product line reengineering" [186]. 2011, Lecture Notes in Computer Science. Citations: 0. Topic: Software re-engineering framework. Conference paper.
Yang, Hongji; Zheng, Shang; Chu, W.C.-C.; Tsai, Ching-Tsorng. "Linking Functions and Quality Attributes for Software Evolution" [98]. 2012, Proceedings of the 2012 19th Asia-Pacific Software Engineering Conference (APSEC). Citations: 0. Topic: Software evolution. Conference paper.
Yang, Su. "Understanding Crosscutting Concerns From Various Perspectives in Software Reverse Engineering" [185]. 2010, Proceedings of the 2010 Sixth International Conference on Networked Computing and Advanced Information Management (NCM 2010). Citations: 0. Topic: Software reverse engineering. Conference paper.
Yang, Su; Fan, Li; Sheng-ming, Hu; Ping, Chen. "Aspect-oriented software reverse engineering" [150]. 2006, J. Shanghai Univ. (China). Citations: 1. Topic: Software reverse engineering. Journal article.
Yang, Su; Wei-Dong, Zhong. "Re-Modularizing Traverse Feature from Various Perspectives in Software Reverse Engineering" [62]. 2010, 2010 Proceedings of International Conference on Computational Intelligence and Software Engineering (CiSE 2010). Citations: 0. Topic: Software reverse engineering. Conference paper.
Zhang, Y; Liu, X; Liu, R. "Theoretical study on hybrid re-engineering" [125]. 2007, 2007 8th International Conference on Electronic Measurement and Instruments, ICEMI. Citations: 0. Topic: Software re-engineering study. Conference paper.
Zou, Y. "Incremental quality driven software migration to object oriented systems" [71]. 2004, 20th IEEE International Conference on Software Maintenance, Proceedings. Citations: 0. Topic: Software quality driven migration. Conference paper.
Zou, Y. "Quality driven software migration of procedural code to object-oriented design" [122]. 2005, ICSM 2005: Proceedings of the 21st IEEE International Conference on Software Maintenance. Citations: 2. Topic: Software quality driven migration. Conference paper.
Zou, Y; Kontogiannis, K. "Quality driven transformation compositions for object oriented migration" [34]. 2002, APSEC 2002: Ninth Asia Pacific Software Engineering Conference. Citations: 8. Topic: Software quality driven transformation. Conference paper.
Zou, Y; Kontogiannis, K. "Migration to object oriented platforms: A state transformation approach" [175]. 2002, Conference on Software Maintenance. Citations: 5. Topic: Software transformation approach. Conference paper.
Zou, Y; Kontogiannis, K. "Incremental Transformation of Procedural Systems to Object Oriented Platforms" [126]. 2003, Proceedings - IEEE Computer Society's International Computer Software and Applications Conference. Citations: 0. Topic: Software transformation. Conference paper.
[Diagram: function blocks of the legacy client and server — Client_Initial, Receive_Request, String_Checking, Send_Requests, Read_CofFlie, Memory_Table, Listen_Dispatch, Send_Message, Start_ServerApp, Request_Check, Cliam_Project, Update_Project, Unclaim_Project, Queue_Project, Unqueue_Project, Get_Project, Get_Queue, Claim_Lock, UnCliam_Lock, Get_Lock, UnQueue_Project, Read_MC, Reset_MC; legend: Function Name, Client Request, Server Response, Execution Result, Server, Client, Function Block]
Figure 36: High level architecture view of legacy system
Table 75: Scenario List
Scenario List
No. Properties Description
1 Problem Slow reply of the server application.
Statement The server's reply is slow (more than 2 seconds) at any time, even when the server is off peak load, when the client sends messages to the server.
When it happens Any time, even when the server is off peak load.
How it happens The client sends messages to the server.
Potential cause Messages dropped while being sent from client to server.
2 Problem Incoming messages from clients are lost.
Statement During server application peak load, loss of incoming messages occurs when users update projects.
When it happens During peak load.
How it happens The client sends update-project requests.
Potential cause Messages are aborted after being resent over 12 times.
3 Problem Long time interval waiting for resources.
Statement It takes a long time (90 seconds) to update current status messages while executing test cases.
When it happens During test case execution.
How it happens The current status message is updated.
Potential cause Long waiting time to start test case execution.
4 Problem Messages dropped during transport between client and server.
Statement Message fragments are missing due to an unknown package size limitation when many projects are running at the same time.
When it happens Many projects are running at the same time.
How it happens The server fails to receive complete messages.
Potential cause Unknown UDP fragment size limitation.
5 Problem Messages and server status information are missing while the server application is restarted.
Statement Some messages and server status information are lost when the server application is restarted.
When it happens Server application restart.
How it happens The queue table in system memory is cleared.
Potential cause The server status data is not saved during restart.
6 Problem The server crashes when receiving invalid requests.
Statement The server crashes when updating or writing new client-side scripts.
When it happens Updating or writing new client-side scripts.
How it happens The server receives invalid requests.
Potential cause No invalid-request verification.
7 Problem Long waiting time for test cases to go from waiting status to activation.
Statement It takes a long time (90 seconds) to activate a test case from the queue until the tests start.
When it happens A project moves from wait status to active status until the tests start.
How it happens The project is activated from the queue.
Potential cause Long waiting time to start test case execution.
Table 76: Table of Foreground Function units’ relations
Foreground function units for scenario 1
Scenario 1: The server's reply is slow (more than 2 seconds) at any time, even when the server is
off peak load, when the client sends messages to the server.
Functional Unit: Send Request Client (SRC)
Send Request Client (SRC)
Input 1. User request
Output 1. Request sent to server via selected port
Precondition 1. Network port opened
Resource
Control
Functional Unit: Receive Request Server (RRS)
Receive Request Server (RRS)
Input 1. Request sent from client
Output 1. Formatted request
Precondition 1. server environment variable initiated
Resource
Control
Functional Unit: Receive Request Client (RRC)
Receive Request Client (RRC)
Input 1. response from server
Output 1. Display received request
Precondition 1. Network port opened
Resource
Control
Functional Unit: Send Request Server (SRS)
Send Request Server (SRS)
Input 1. update request
Output 1. Request sent to client
Precondition
Resource
Control 1. Environment variables
Functional Unit: Request Handling (RH)
Request Handling (RH)
Input 1. formatted request
Output 1. handled request
Precondition
Resource
Control 1. server environment variables
Table 77: New Requirement from stakeholders
New Requirements
No. Name Description
1 Log file rotation Some log information is lost while log file rotation is in progress.
2 Priority Control Add a priority value to control resource distribution in the system; the
value ranges from 1 to 5. If the parameter is omitted, the default value
is 2; 3 represents Polly and 4 represents DevAutoTest. The values can
be changed if a new requirement arises.
3 Blocking Control Implement two new system modes:
Mode 1 "Halt": the server will reject any new resource claim and
return a busy reply. This will be used during network maintenance,
for example.
Mode 2 "Block": the server will reject any new claims for resources
with the answer "unavailable". This would be useful in case of a full
server shutdown or a server application restart.
Table 78 Background Function units of Scenario 1
Background function units for scenario 1
Scenario 1: The server's reply is slow (more than 2 seconds) at any time, even when the server is
off peak load, when the client sends messages to the server.
Functional Unit: Initial Server APP (ISP)
Initial Server APP (ISP)
Input
Output 1. server environment variables
Precondition
Resource
Control
Functional Unit: Initial Client APP (ICP)
Initial Client APP (ICP)
Input
Output 1. client environment variables
Precondition
Resource
Control
Functional Unit: Initial Client APP (ICP)
Initial Client APP (ICP)
Input
Output 1. request to client
Precondition
Resource
Control
Table 79: Issue card 1: Data loss
Unreliability of the UDP protocol
1. The client fails to send messages to the server
2. Unpredicted fragmentation size causes data loss
3. No acknowledgement data is sent back to the client
Influence
1. Data must be resent
2. The server fails to respond
3. Slow reply (response time)
Solution
Change the transmission protocol
Related strategy
None
Table 80: Root cause statements for the NetStatusServer system
Root cause statement Causal factors
Unreliability of the UDP transmission protocol
in the NetStatusServer system
5. Host name error.
6. Socket open error.
7. Failure of sending requests.
8. No UDP acknowledgement.
9. Missing UDP fragmentation size.
10. Requests lost while the client is sending to
the server.
11. Missing UDP fragmentation maximum
number.
Defects in workflow design 12. Transaction ID exception.
13. Unpredictable Update-Project string.
14. Requests are aborted after being resent
12 times.
15. The time interval between request resends
is long.
16. While executing UpdateProject,
GetProject, GetQueue, UnQueue or
UnClaimProject, it takes a long time to
check whether the exclusive resource is
available.
17. The server stops listening on the port while
processing a request.
Table 81: Issue card 2: Defects of workflow design
Defects of workflow design in the system
1. Poor performance while searching in the queue table
2. Useless methods exist in GetProject and GetQueue
3. NetStatusServer stops listening to clients while processing ongoing requests
4. The interval waiting time is too long while distributing testing resources
5. Lost requests are resent 12 times
Influence
1. A long time is spent getting a response from the server
Solution
1. Change the way of searching in the queue table
2. Redesign the workflow for GetProject and GetQueue
3. Redesign the method to keep listening to clients at all times
4. Redesign the project table for test resource distribution in NetStatusServer
5. Redesign the workflow for UpdateProject
Related strategy
Queue Priority
Table 82: Issue card 3: Queue Priority
Queue Priority
1. Add a priority value for each waiting request in the memory queue.
Influence
1. Usability
Solution
1. Define resource distribution table
Related strategy
1. Defects of workflow design in the system
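The priority mechanism sketched in Table 82 (and requirement 2 in Table 77: values 1 to 5, default 2) could be realized with a priority queue; the following Python sketch is purely illustrative — the class and method names are assumptions, not the thesis implementation.

```python
import heapq
import itertools

class PriorityRequestQueue:
    """Requests with lower priority values (1-5) are served first.

    The default priority is 2, as in the stakeholder requirement; a
    tie-breaking counter preserves FIFO order within a priority level.
    """
    DEFAULT_PRIORITY = 2

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def put(self, request, priority=DEFAULT_PRIORITY):
        if not 1 <= priority <= 5:
            raise ValueError("priority must be in 1..5")
        heapq.heappush(self._heap, (priority, next(self._counter), request))

    def get(self):
        # Pop the entry with the smallest (priority, arrival order) pair.
        priority, _, request = heapq.heappop(self._heap)
        return request
```

The heap keeps both claims ordered by priority and, within one priority, by arrival, so low-priority requests cannot starve equal-priority peers that arrived earlier.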
Table 83: Issue card 4: Server Start/Stop
Server Start/Stop
1. Data is lost while restarting the server
Influence
1. Reliability (data loss)
Solution
1. Back up in-memory data to local disk when stopping the server, and reload it when
starting the server
2. Add a parameter for the user to choose whether to save the data
Related strategy
None
Table 84: Issue card 5: Server Log Rotation
Server Log Rotation
1. Data is lost while rotating the log file of a running server
Influence
1. Reliability (data loss)
Solution
1. Use Apache Log4j for log rotation
Related strategy
None
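Delegating rotation to Apache Log4j, as Issue card 5 proposes, could look roughly like the following properties fragment for the Log4j 1.x line current at the time; the file name, size, and backup count are illustrative assumptions, not values from the thesis.

```properties
# Illustrative Log4j 1.x configuration: size-based rotation keeps
# 10 backup files so log data survives rotation of a running server.
log4j.rootLogger=INFO, server
log4j.appender.server=org.apache.log4j.RollingFileAppender
log4j.appender.server.File=netstatusserver.log
log4j.appender.server.MaxFileSize=10MB
log4j.appender.server.MaxBackupIndex=10
log4j.appender.server.layout=org.apache.log4j.PatternLayout
log4j.appender.server.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
```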
Table 85: Issue card 6: Invalid Requests
Invalid Requests
1. Invalid requests are sent to the server
Influence
1. Response failure
Solution
1. Add a component on the server side to check for and remove invalid requests
Related strategy
None
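The server-side validity check proposed in Issue card 6 can be sketched as a filter placed in front of the request handler; the rejected strings below come from Scenario B, while the function name and the whitelist pattern are assumptions for illustration only.

```python
import re

# Strings observed to crash the server (Scenario B, Appendix A).
FORBIDDEN_SUBSTRINGS = ("#/*/#", "(*)")

# Assumed shape of a well-formed request: a command word followed by
# optional arguments made of word characters, dots and dashes.
REQUEST_PATTERN = re.compile(r"^[A-Za-z_]+(\s+[\w.\-]+)*$")

def is_valid_request(request: str) -> bool:
    """Return True if the request is safe to pass to the handler."""
    if any(bad in request for bad in FORBIDDEN_SUBSTRINGS):
        return False
    return REQUEST_PATTERN.match(request) is not None
```

Filtering before dispatch means a malformed request is dropped with an error reply instead of reaching code paths that previously crashed the server.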
Table 86: Scenario A Example – Identify FRAM Relationships
Scenario A: The server application loses client data after processing data for more than one full
week (7 days * 24 hours).
1. Identify foreground functional unit:
Server data processor (SDP)
Input Dispatched data
Output Processed data
Precondition
Resource Active data resources
Control
2. Identify background functional unit:
Output supporting as foreground Input
Server Data Base (SDB)
Input Processed data
Output
Precondition
Resource
Control
Server side Sender (SSS)
Input Processed data
Output
Precondition
Resource
Control
Server data Dispatcher (SDD)
Input
Output Dispatched data
Precondition
Resource
Control
3. Identify FRAM relationships for background functional unit
Server data base (SDB)
Input Processed data
Output Active data resources
Precondition
Resource
Control
Server side sender (SSS)
Input Processed data
Output
Precondition
Resource
Control
Table 87: Scenario B Example – Identify FRAM Relationships
Scenario B: The server application crashes if the server receives a "#/*/#" or "(*)" string within a
client request.
4. Identify foreground functional unit:
Server side receiver (SSR)
Input Client side request
Output
Precondition
Resource
Control
5. Identify background functional unit:
Output supporting as foreground Input
Client side sender (CSS)
Input
Output Client side request
Precondition
Resource
Control
6. Identify FRAM relationships for background functional unit
Client side sender (CSS)
Input User request
Output Client side request
Precondition
Resource
Control
User Request Form (URF)
Input
Output User request
Precondition
Resource
Control
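The identification steps above chain functional units by matching one unit's Output to another unit's Input; a minimal sketch of that matching, using hypothetical data mirroring Scenario B's units, could be:

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalUnit:
    """A FRAM functional unit; only Input and Output aspects are modeled."""
    name: str
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)

def background_units(foreground, candidates):
    """Candidates whose Output feeds the foreground unit's Input."""
    return [u for u in candidates if u.outputs & foreground.inputs]

# Hypothetical data loosely mirroring Scenario B's units.
ssr = FunctionalUnit("SSR", inputs={"client request"})
css = FunctionalUnit("CSS", inputs={"user request"}, outputs={"client request"})
urf = FunctionalUnit("URF", outputs={"user request"})
```

Repeating the match on each newly found background unit walks the chain backwards (SSR to CSS to URF), which is exactly the manual tracing done in Tables 86 and 87.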
[Diagram: server decomposed into Interfaces (IProject, IMaintain, ILock, IStatistic, IServer), Common Utils (Message Receiver, Connection, Message Sender, Log Message, Check, Filter Services), ProjectServices (Claim Project, Update Project, Queue Project, Unqueue Project, Unclaim Project, Update Queue, Get Queue, Get Project), Maintain (getMaintainConfig), LockServices (Claim Lock, Unclaim Lock, Get Lock), StatisticsService (Statistics, Reset statistics), ServerApp (StartServer, StopServer), TableManage (Add, Search, Update, Delete), Configuration (Server Initial, Server Configure), Project Table and Lock Table; client decomposed into Client Initial, Message receiver, String Filter and Message Sender]
Figure 37: Architecture Candidate 1
[Diagram: blackboard-style design — clients send requests through a Receiver to Request Verification & Validation; a Controller performs Insert, Search, Update, Delete, Sizeof and Read operations on a Blackboard holding the Project Table (ProjectCounter), Behavior Table (Claim Project, Update Project), Lock Table (LockName), Request queue and Rules; a Knowledge Source guides data movement and changes, and results are returned to the client through the Sender as Result Info / Response]
Figure 38: Architecture Candidate 2
APPENDIX B
Table 88: List of Subjects
Sample Description Value
S1 Single local client sending 6100 requests <Local, 1, 6100>
S2 Two local clients sending 6100 requests <Local, 2, 6100>
S3 Three local clients sending 6100 requests <Local, 3, 6100>
S4 Four local clients sending 6100 requests <Local, 4, 6100>
S5 Five local clients sending 6100 requests <Local, 5, 6100>
S6 Six local clients sending 6100 requests <Local, 6, 6100>
S7 Seven local clients sending 6100 requests in total <Local, 7, 6100>
S8 Eight local clients sending 6100 requests in total <Local, 8, 6100>
S9 Nine local clients sending 6100 requests in total <Local, 9, 6100>
S10 Ten local clients sending 6100 requests in total <Local, 10, 6100>
S11 Single remote client sending 6100 requests in total <Remote, 1, 6100>
S12 Two remote clients sending 6100 requests in total <Remote, 2, 6100>
S13 Three remote clients sending 6100 requests in total <Remote, 3, 6100>
S14 Four remote clients sending 6100 requests in total <Remote, 4, 6100>
S15 Five remote clients sending 6100 requests in total <Remote, 5, 6100>
S16 Six remote clients sending 6100 requests in total <Remote, 6, 6100>
S17 Seven remote clients sending 6100 requests <Remote, 7, 6100>
S18 Eight remote clients sending 6100 requests <Remote, 8, 6100>
S19 Nine remote clients sending 6100 requests <Remote, 9, 6100>
S20 Ten remote clients sending 6100 requests <Remote, 10, 6100>
Table 89: Collected Execution Time (seconds) of Experiment
ID Description Legacy System (runs 1, 2, 3) New System (runs 1, 2, 3)
S1 <Local, 1, 6100> 88.41 88.41 89.71 10.41 10.43 10.45
S2 <Local, 2, 6100> 184.85 184.85 184.85 10.45 10.63 10.46
S3 <Local, 3, 6100> 254.17 254.17 254.17 10.66 10.55 10.59
S4 <Local, 4, 6100> 338.89 338.89 338.89 10.65 10.66 10.68
S5 <Local, 5, 6100> 406.67 406.67 435.71 10.78 10.66 10.78
S6 <Local, 6, 6100> 554.55 508.33 508.33 10.63 10.68 10.74
S7 <Local, 7, 6100> 762.50 677.78 677.78 10.76 10.78 10.55
S8 <Local, 8, 6100> 762.50 871.43 871.43 10.68 10.74 10.78
S9 <Local, 9, 6100> 871.43 871.43 762.50 10.78 10.80 10.70
S10 <Local, 10, 6100> 871.43 871.43 871.43 10.89 10.72 10.85
S11 <Remote, 1, 6100> 88.41 89.71 88.41 10.67 10.69 10.69
S12 <Remote, 2, 6100> 179.41 184.85 184.85 10.80 10.85 10.80
S13 <Remote, 3, 6100> 265.22 265.22 277.27 10.82 10.92 10.87
S14 <Remote, 4, 6100> 338.89 338.89 554.55 10.92 11.00 11.00
S15 <Remote, 5, 6100> 406.67 406.67 406.67 10.91 11.07 11.07
S16 <Remote, 6, 6100> 469.23 469.23 469.23 11.07 11.15 11.05
S17 <Remote, 7, 6100> 554.55 554.55 554.55 11.05 11.04 11.00
S18 <Remote, 8, 6100> 762.50 677.78 762.50 11.02 11.05 10.94
S19 <Remote, 9, 6100> 871.43 762.50 677.78 11.07 10.98 11.02
S20 <Remote, 10, 6100> 871.43 871.43 762.50 10.98 11.11 11.05
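The raw timings in Table 89 can be condensed into a mean execution time per system and a speedup factor per sample; a small sketch, with only two samples transcribed for brevity:

```python
from statistics import mean

# Three repeated runs per sample, transcribed from Table 89.
timings = {
    "S1": {"legacy": [88.41, 88.41, 89.71], "new": [10.41, 10.43, 10.45]},
    "S10": {"legacy": [871.43, 871.43, 871.43], "new": [10.89, 10.72, 10.85]},
}

def speedup(sample):
    """Mean legacy execution time divided by mean new-system time."""
    runs = timings[sample]
    return mean(runs["legacy"]) / mean(runs["new"])
```

The ratio grows with load because the legacy times rise roughly linearly with the number of clients while the new system stays near 11 seconds throughout.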
Table 90: Design of sent requests for each test case
Number of clients Number of requests sent per client Total
1 6100 6100
2 3050 3050 6100
3 2033 2033 2034 6100
4 1525 1525 1525 1525 6100
5 1220 1220 1220 1220 1220 6100
6 1017 1017 1017 1017 1016 1016 6100
7 870 870 872 872 872 872 872 6100
8 762 762 762 762 763 763 763 763 6100
9 677 677 677 678 678 678 678 678 679 6100
10 610 610 610 610 610 610 610 610 610 610 6100
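The per-client counts in Table 90 come from splitting 6100 requests as evenly as possible among N clients (the table's exact counts deviate from a strictly even split for a few values of N, e.g. N = 7); a sketch of the even division:

```python
def split_requests(total, clients):
    """Split `total` requests among `clients` so counts differ by at most 1."""
    base, remainder = divmod(total, clients)
    # The first (clients - remainder) clients get `base` requests,
    # the remaining clients get one extra each.
    return [base] * (clients - remainder) + [base + 1] * remainder
```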
REFERENCE
[1] E. Stroulia, and T. Systä, "Dynamic Analysis For Reverse Engineering and Program
Understanding", Applied Computing Review, ACM, vol 10, issue 1, 2002.
[2] A. Jain, S. Soner, A. Gadwal; , "Reverse engineering: Journey from code to design,"
Electronics Computer Technology (ICECT), 2011 3rd International Conference on ,
vol.5, no., pp.102-106, 8-10 April 2011
[3] S. Tilley, K. Wong, M. Storey, H. Muller, Programmable reverse engineering,
International Journal of Software Engineering and Knowledge Engineering 4 (4)
(1994) 501–520.
[4] R. Kazman, L. O'Brien, Verhoef, C.; " Architecture Reconstruction Guidelines Third
Edition, " Carnegie Mellon University.2003.
[5] IEEE, "Recommended Practice for Architectural Description for Software Intensive Systems," technical report, Architecture Working Group, 2000.
[6] P.B. Kruchten, "The 4+1 View Model of Architecture," IEEE Software, vol. 12, no. 6, pp. 42-50, Nov. 1995.
[7] B. Bellay, H. Gall, "A comparison of four reverse engineering tools," Reverse Engineering, 1997, Proceedings of the Fourth Working Conference on, pp. 2-11, 6-8 Oct 1997.
[8] Rigi User Manual. [Online]. Available: http://www.rigi.csc.uvic.ca/rigi/manual/user.html [Accessed: 11-May-2012].
[9] K. Wong, Rigi User's Manual V5.4.3, part of the Rigi distribution package, 161 pp., 1996.
[10] J. Fabry, A. Kellens, S. Denier, S. Ducasse, "AspectMaps: Extending Moose to visualize AOP software," Science of Computer Programming, 2010.
[11] R. Pérez-Castillo, I. García-Rodríguez de Guzmán, M. Piattini, C. Ebert, "Reengineering Technologies," Software, IEEE, vol. 28, no. 6, pp. 13-17, Nov.-Dec. 2011.
[12] K. Gallagher, A. Hatch, M. Munro, "Software Architecture Visualization: An Evaluation Framework and Its Application," Software Engineering, IEEE Transactions on, vol. 34, no. 2, pp. 260-270, March-April 2008.
[13] E. Johansson, M. Höst, A. Wesslén, L. Bratthall, "The Importance of Quality Requirements in Software Platform Development - A Survey," in Proceedings of HICSS-34, Maui, Hawaii, January 2001.
[14] L. Bass, P. Clements, R. Kazman, "Software Architecture in Practice," Addison-Wesley, Reading, MA, 1998.
[15] J. Bosch, "Design & Use of Software Architectures - Adopting and Evolving a Product Line Approach," Addison-Wesley, Harlow, UK, 2000.
[16] C. Hofmeister, R. Nord, D. Soni, "Applied Software Architecture," Addison-Wesley, Reading, MA, 2000.
[17] M. Svahnberg, C. Wohlin, L. Lundberg, M. Mattsson, "A Quality-Driven Decision Support Method for Identifying Software Architecture Candidates," International Journal of Software Engineering and Knowledge Management, vol. 13, no. 5, pp. 547-573, 2003.
[18] I. Jacobson, G. Booch, J. Rumbaugh, "The Unified Software Development Process," Addison-Wesley, Reading, MA, 1999.
[19] J. Karlsson, K. Ryan, "A Cost-Value Approach for Prioritizing Requirements," IEEE Software, vol. 14, no. 5, pp. 67-74, 1997.
[20] T.L. Saaty, "The Analytic Hierarchy Process," McGraw-Hill, Inc., New York, NY, 1980.
[21] E. Charniak, "Introduction to Artificial Intelligence," Addison-Wesley, p. 2, 1984.
[22] E. Yu, "Modelling Strategic Relationships for Process Reengineering," Ph.D. thesis, also Tech. Report DKBS-TR-94-6, Dept. of Computer Science, University of Toronto, 1995.
[23] R.B. France, D.-K. Kim, S. Ghosh, E. Song, "A UML-based pattern specification technique," Software Engineering, IEEE Transactions on, vol. 30, no. 3, pp. 193-206, March 2004.
[24] E. Hollnagel, "Barriers and Accident Prevention," Aldershot: Ashgate, 2004.
[25] Z. Stanislovas, N. Marc, "Comparative Analysis of Nuclear Event Investigation Methods, Tools and Techniques," Publications Office of the European Union, 2011.
[26] A. Doggett, "A Statistical Comparison of Three Root Cause Analysis Tools," Journal of Industrial Technology, vol. 20, no. 2, p. 9, 2004.
[27] COSMIC – Common Software Measurement International Consortium, The COSMIC
Functional Size Measurement Method - version 3.0 Measurement Manual (The
COSMIC Implementation Guide for ISO/IEC 19761: 2003), September 2007.
[28] C. Wohlin, P. Runeson, M. Höst, M.C. Ohlsson, B. Regnell, A. Wesslén, "Experimentation in Software Engineering: An Introduction," John Wiley & Sons Inc., 1999.
[29] L. Bratthall and C. Wohlin, "Is It Possible to Decorate Graphical Software Design and
Architecture Models with Qualitative Information? - An Experiment", IEEE
Transactions on Software Engineering, Vol. 28, No. 12, pp. 1181-1193, 2002.
[30] B. Kitchenham and S. Charters, "Guidelines for performing systematic literature
reviews in software engineering," Software Engineering Group, Keele University and
Department of Computer Science, University of Durham, United Kingdom, Technical
Report EBSE-2007-01, 2007.
[31] M. Unterkalmsteiner, T. Gorschek, M. Islam, C. Cheng, R. Permadi, R. Feldt, "Evaluation and Measurement of Software Process Improvement - A Systematic Literature Review," IEEE Transactions on Software Engineering, vol. 38, no. 2, 2012.
[32] L. Dobrica, "Exploring approaches of integration software architecture modeling with quality analysis models," in Proceedings - 9th Working IEEE/IFIP Conference on Software Architecture, WICSA 2011, 2011, pp. 113-122.
[33] M.G.J. van den Brand, P. Klint, C. Verhoef, "Reverse engineering and system renovation - an annotated bibliography," SIGSOFT Softw. Eng. Notes, vol. 22, no. 1, pp. 57-68, January 1997.
[34] Y. Zou, K. Kontogiannis, "Quality driven transformation compositions for object oriented migration," in APSEC 2002: Ninth Asia Pacific Software Engineering Conference, 2002, pp. 346-355.
[35] R.K. Yin, "Applications of Case Study Research," 3rd ed., SAGE, Thousand Oaks, CA, 2012.
[36] A. Tang, F.-C. Kuo, M.F. Lau, "Towards independent software architecture review," Lecture Notes in Computer Science, vol. 5292 LNCS, pp. 306-313, 2008.
[37] S. Allier, H.A. Sahraoui, S. Sadou, S. Vaucher, "Restructuring object-oriented applications into component-oriented applications by using consistency with execution traces," Lecture Notes in Computer Science, vol. 6092 LNCS, pp. 216-231, 2010.
[38] H. Arboleda, J.-C. Royer, "Component types qualification in Java legacy code driven by communication integrity rules," in Proceedings of the 4th India Software Engineering Conference 2011, ISEC'11, 2011, pp. 155-164.
[39] W. Trochim, "Outcome pattern matching and program theory," Evaluation and Program Planning, vol. 12, pp. 355-366, 1989.
[40] J. Wholey, Evaluation: Performance and promise. Washington, DC: The Urban Institute.
1979.
[41] R.B. Svensson, T. Gorschek, B. Regnell, R. Torkar, A. Shahrokni, R. Feldt, A. Aurum, "Prioritization of quality requirements: State of practice in eleven companies," Requirements Engineering Conference (RE), 2011 19th IEEE International, pp. 69-78, Aug. 29 - Sept. 2, 2011.
[42] L. Tahvildari, K. Kontogiannis, J. Mylopoulos, "Requirements-driven software re-engineering framework," Reverse Engineering, 2001, Proceedings, Eighth Working Conference on, pp. 71-80, 2001.
[43] L. Tahvildari, K. Kontogiannis, J. Mylopoulos, "Quality-Driven Software Re-engineering," Journal of Systems and Software (JSS), Special Issue on: Software Architecture - Engineering Quality Attributes, Elsevier, vol. 66, no. 3, pp. 225-239, June 2003.
[44] L. Tahvildari, "Quality-Driven Object-Oriented Re-engineering Framework," PhD Dissertation Synopsis, Proceedings of International Conference on Software Maintenance (ICSM), Chicago, IL, USA, pp. 479-483, September 2004.
[45] Y. Levy, T.J. Ellis, "A Systems Approach to Conduct an Effective Literature Review in Support of Information Systems Research," Informing Science Journal, vol. 9, pp. 181-212, 2006.
[46] T. Mens, L. Doctors, N. Habra, B. Vanderose, F. Kamseu, "QUALGEN: Modeling and Analysing the Quality of Evolving Software Systems," Software Maintenance and Reengineering (CSMR), 2011 15th European Conference on, pp. 351-354, 1-4 March 2011.
[47] H. Unphon, Y. Dittrich, "Software architecture awareness in long-term software product evolution," Journal of Systems and Software, vol. 83, no. 11, pp. 2211-2226, 2010.
[48] R.K. Yin, "Case Study Research - Design and Methods," Sage, 2009.
[49] J. Guo, "Software reuse through re-engineering the legacy systems," Information and Software Technology, vol. 45, no. 9, pp. 597-609, 15 June 2003, ISSN 0950-5849, 10.1016/S0950-5849(03)00047-8.
[50] J.R. Brown, H. Kaspar, M. Lipow, G.J. MacLeod, M.J. Merritt, "Characteristics of Software Quality," vol. 1, North-Holland Publishing Company, 1978.
[51] K. Henningsson, C. Wohlin, "Understanding the Relations Between Software Quality Attributes - A Survey Approach," Proceedings of the 12th International Conference for Software Quality, Proceedings on CD, 2002.
[52] L.K. Chung, B.A. Nixon, E. Yu, J. Mylopoulos, "Non-Functional Requirements in Software Engineering," Kluwer Publishing, Dordrecht, 2000.
[53] M. Klein, L. Bass, R. Kazman, "Attribute-based architecture styles," Technical Report CMU/SEI-99-TR-022 ADA371802, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 1999.
[54] J. Bergey, M. Barbacci, W. Wood, "Using quality attribute workshops to evaluate architectural design approaches in a major system acquisition: A case study," Technical Report CMU/SEI-2000-TN-010, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 2000.
[55] L. Lundberg, J. Bosch, D. Häggander, P.O. Bengtsson, "Quality Attributes in Software Architecture Design," Proc. IASTED Third Int'l Conf. Software Eng. and Applications, pp. 353-362, Oct. 1999.
[56] R. Kazman, M. Klein, P. Clements, "ATAM: Method for Architecture Evaluation," Technical Report CMU/SEI-2000-TR-004 ADA382629, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, 2000.
[57] I. Baxter, "Transformational maintenance by reuse of design histories," Ph.D. thesis, Department of Computer Science, University of California, Irvine, 1990.
[58] A. Arcuri, "On Search Based Software Evolution," Search Based Software Engineering, 2009 1st International Symposium on, pp. 39-42, 13-15 May 2009.
[59] S. Bode, M. Riebisch, "Impact evaluation for quality-oriented architectural decisions regarding evolvability," Lecture Notes in Computer Science, vol. 6285 LNCS, pp. 182-197, 2010.
[60] B. Du Bois, S. Demeyer, J. Verelst, "Refactoring - Improving coupling and cohesion of existing code," in Proceedings - Working Conference on Reverse Engineering, WCRE, 2004, pp. 144-151.
[61] R.L. Akers, I.D. Baxter, M. Mehlich, B.J. Ellis, K.R. Luecke, "Re-engineering C++ component models via automatic program transformation," Reverse Engineering, 12th Working Conference on, 7-11 Nov. 2005.
[62] S. Yang, Z. Wei-Dong, "Re-Modularizing Traverse Feature from Various Perspectives in Software Reverse Engineering," in 2010 Proceedings of International Conference on Computational Intelligence and Software Engineering (CiSE 2010), 4 pp.
[63] M. Svahnberg, C. Wohlin, "An Investigation of a Method for Identifying a Software Architecture Candidate with Respect to Quality Attributes," Empirical Software Engineering, vol. 10, no. 2, pp. 149-181, 2005.
[64] L. Bratthall, C. Wohlin, "Understanding some software quality aspects from architecture and design models," in Proceedings IWPC 2000, 8th International Workshop on Program Comprehension, pp. 27-34.
[65] H.P. Breivold, I. Crnkovic, M. Larsson, "A systematic review of software architecture evolution research," Information and Software Technology, vol. 54, no. 1, pp. 16-40, Jan. 2012.
[66] S. Bryton, F. Brito e Abreu, M. Monteiro, "Reducing subjectivity in code smells detection: Experimenting with the Long Method," in Proceedings - 7th International Conference on the Quality of Information and Communications Technology, QUATIC 2010, 2010, pp. 337-342.
[67] S. Bryton, F.B. Abreu, "Strengthening refactoring: Towards software evolution with quantitative and experimental grounds," in 4th International Conference on Software Engineering Advances, ICSEA 2009, 2009, pp. 570-575.
[68] S. Bryton, F.B.E. Abreu, "Modularity-oriented refactoring," in Proceedings of the European Conference on Software Maintenance and Reengineering, CSMR, 2008, pp. 294-297.
[69] J. Bergey, D. Smith, N. Weiderman, S. Woods, "Options analysis for reengineering (OAR): Issues and conceptual approach," Technical Report CMU/SEI-99-TN-014, Carnegie Mellon Software Engineering Institute, 1999.
[70] M.A. Doggett, "A Statistical Comparison of Three Root Cause Analysis Tools," Journal of Industrial Technology, vol. 20, no. 2, p. 9, 2004.
[71] Y. Zou, "Incremental quality driven software migration to object oriented systems," in 20th IEEE International Conference on Software Maintenance, Proceedings, 2004, pp. 136-146.
[72] P. Bourque, R. Dupuis, "Guide to the Software Engineering Body of Knowledge, 2004 Version," Software Engineering Body of Knowledge 2004, SWEBOK, 2004.
[73] D.C. Tucker, D.M. Simmonds, "A Case Study in Software Reengineering," Information Technology: New Generations (ITNG), 2010 Seventh International Conference on, pp. 1107-1112, 12-14 April 2010.
[74] P. Antonini, G. Canfora, A. Cimitile, "Re-engineering legacy systems to meet quality requirements: an experience report," Software Maintenance, 1994, Proceedings, International Conference on, pp. 146-153, 19-23 Sep 1994.
[75] S. Singh, K.S. Kahlon, P.S. Sandhu, "Re-engineering to analyze and measure object oriented paradigms," Information Management and Engineering (ICIME), 2010 The 2nd IEEE International Conference on, pp. 472-478, 16-18 April 2010.
[76] E.J. Byrne, "A conceptual foundation for software re-engineering," Software Maintenance, 1992, Proceedings, Conference on, pp. 226-235, 9-12 Nov 1992.
[77] P. Bengtsson, J. Bosch, "Scenario-based software architecture reengineering," Software Reuse, 1998, Proceedings, Fifth International Conference on, pp. 308-317, 2-5 Jun 1998.
[78] C. Stoermer, L. O'Brien, C. Verhoef, "Moving towards quality attribute driven software architecture reconstruction," Reverse Engineering, 2003, WCRE 2003, Proceedings, 10th Working Conference on, pp. 46-56, 13-16 Nov. 2003.
[79] S. Ducasse, D. Pollet, "Software Architecture Reconstruction: A Process-Oriented Taxonomy," Software Engineering, IEEE Transactions on, vol. 35, no. 4, pp. 573-591, July-Aug. 2009.
[80] C. Stoermer, "Software Quality Attribute Analysis by Architecture Reconstruction (SQUA3RE)," Software Maintenance and Reengineering, 2007, CSMR '07, 11th European Conference on, pp. 361-364, 21-23 March 2007.
[81] M. Svahnberg, K. Henningsson, "Consolidating different views of quality attribute relationships," Software Quality, 2009, WOSQ '09, ICSE Workshop on, pp. 46-50, 16 May 2009.
[82] T. Mens, T. Tourwe, "A survey of software refactoring," Software Engineering, IEEE Transactions on, vol. 30, no. 2, pp. 126-139, Feb 2004.
[83] A. Kumar, A. Kumar, "Design of Quality Model during Reengineering of Legacy System," Global Journal of Computer Science and Technology, vol. 11, no. 8, Jul. 2011.
[84] M. Riebisch, S. Bode, R. Brcina, "Problem-solution mapping for forward and reengineering on architectural level," in Proceedings of the 12th International Workshop on Principles of Software Evolution and the 7th annual ERCIM Workshop on Software Evolution, New York, NY, USA, 2011, pp. 106-115.
[85] L. Tahvildari, K. Kontogiannis, "A workbench for quality based software re-engineering to object-oriented platforms," in Proceedings of the ACM International Conference in Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA) - Doctoral Symposium, pp. 157-158, Minneapolis, MN, USA, October 2000.
[86] G. Grau, "A Reengineering Framework for Informing Decisions over Requirements Models," in Proceedings of Workshops and Doctoral Consortium of the 19th International Conference on Advanced Information Systems Engineering (CAiSE'07), vol. 2, pp. 853-860.
[87] J.J. Rooney, L.N. Vanden Heuvel, "Root Cause Analysis for Beginners," Quality Progress, 2004(7), pp. 45-53.
[88] D. Ganesan, M. Lindvall, R. Cleaveland, "Architecture-based Static Analysis of Medical Device Software: Initial Results," Workshop on High Confidence Medical Device Software and Systems, 2011.
[89] L. Macchi, E. Hollnagel, J. Leonhard, "Resilience Engineering approach to safety assessment: an application of FRAM for the MSAW system," in EUROCONTROL Safety R&D Seminar, Munich, Germany, 2009.
[90] A.M. Doggett, "Root Cause Analysis: A Framework for Tool Selection," Quality Management Journal, vol. 12, no. 4, pp. 34-45, 2005.
[91] G. Grau, X. Franch, "ReeF: Defining a Customizable Reengineering Framework," in Advanced Information Systems Engineering, vol. 4495, 2007, pp. 485-500.
[92] G. Grau, X. Franch, N.A.M. Maiden, "PRiM: An i*-based process reengineering method for information systems specification," Information and Software Technology, vol. 50, no. 1-2, pp. 76-100, January 2008.
[93] D.E. Baburin, M.A. Bulyonkov, P.G. Emelianov, N.N. Filatkina, "Visualization Facilities in Program Reengineering," Programming and Computer Software, vol. 27, no. 2, pp. 69-77, 2001.
[94] B. Xu, Y. Zhou, "Extracting objects from Ada83 programs: A case study," Journal of Computer Science and Technology, vol. 16, no. 6, pp. 574-581, 2001.
[95] D.E. Wilkening, J.P. Loyall, M.J. Pitarys, K. Littlejohn, "A reuse approach to software reengineering," Journal of Systems and Software, vol. 30, no. 1-2, pp. 117-125, Jul. 1995.
[96] S.R. Mackey, L.M. Meredith, "Software migration and reengineering: A pilot project in reengineering," Journal of Systems and Software, vol. 30, no. 1-2, pp. 137-150, Jul. 1995.
[97] C. Choquet, A. Corbière, "Reengineering Framework for Systems in Education," Educational Technology & Society, vol. 9, no. 4, pp. 228-241, 2006.
[98] H. Yang, S. Zheng, W.C.-C. Chu, C.-T. Tsai, "Linking Functions and Quality Attributes for Software Evolution," in Proceedings of the 2012 19th Asia-Pacific Software Engineering Conference (APSEC), vol. 1, pp. 250-9.
[99] L. Tahvildari, K. Kontogiannis, "On the role of design patterns in quality-driven re-engineering," Software Maintenance and Reengineering, 2002, Proceedings, Sixth European Conference on, pp. 230-240, 2002.
[100] K. Gowthaman, K. Mustafa, R.A. Khan, "Reengineering legacy source code to model driven architecture," Computer and Information Science, 2005, Fourth Annual ACIS International Conference on, pp. 262-267, 2005.
[101] E. Burd, M. Munro, "A method for the identification of reusable units through the reengineering of legacy code," Journal of Systems and Software, vol. 44, no. 2, pp. 121-134, Dec. 1998.
[102] M.A. Laguna, Y. Crespo, "A systematic mapping study on software product line evolution: From legacy system reengineering to product line refactoring," Science of Computer Programming, 2012.
[103] C. Stringfellow, C.D. Amory, D. Potnuri, A. Andrews, M. Georg, "Comparison of software architecture reverse engineering methods," Information and Software Technology, vol. 48, pp. 484-497, 2006.
[104] I.A. Herrera, R. Woltjer, "Comparing a multi-linear (STEP) and systemic (FRAM) method for accident analysis," Reliability Engineering & System Safety, vol. 95, no. 12, pp. 1269-1275, Dec. 2010.
[105] B. Van Rompaey, B. Du Bois, S. Demeyer, J. Pleunis, R. Putman, K. Meijfroidt, J.C. Duenas, B. Garcia, "SERIOUS: software evolution, refactoring, improvement of operational and usable systems," in 2009 13th European Conference on Software Maintenance and Reengineering, pp. 277-280.
[106] M. Svahnberg, C. Wohlin, "Consensus Building when Comparing Software Architectures," in Proceedings of the 4th International Conference on Product Focused Software Process Improvement (PROFES 2002), Lecture Notes in Computer Science (LNCS 2559), Springer Verlag, Berlin, Germany, 2002.
[107] M. Svahnberg, C. Wohlin, L. Lundberg, M. Mattsson, "A method for understanding quality attributes in software architecture structures," in SEKE '02: Proceedings of the 14th International Conference on Software Engineering and Knowledge Engineering, pp. 819-826, New York, NY, USA.
[108] P. Davidsson, S. Johansson, M. Svahnberg, "Characterization and Evaluation of Multi-
agent System Architectural Styles", A. Garcia et al. (Eds.): SELMAS 2005, LNCS
3914, pp. 179-188, 2006.
[109] A. Trifu, O. Seng, T. Genssler, "Automated design flaw correction in object-oriented systems," in CSMR 2004: Eighth European Conference on Software Maintenance and Reengineering, Proceedings, 2004, pp. 174-183.
[110] S. Angelov, J. Trienekens, P. Grefen, "Towards a Method for the Evaluation of Reference Architectures: Experiences from a Case," in Software Architecture, vol. 5292, R. Morrison, D. Balasubramaniam, K. Falkner, Eds., Springer Berlin/Heidelberg, 2008, pp. 225-240.
[111] F. Cuadrado, B. García, J.C. Dueñas, H.A. Parada, "A case study on software evolution towards service-oriented architecture," in Proceedings - International Conference on Advanced Information Networking and Applications, AINA, 2008, pp. 1399-1404.
[112] R.E. Stake, The Art of Case Study Research, Thousand Oaks, California: Sage, 1995
[113] O. Bushehrian, "Automatic actor-based program partitioning, " Journal of Zhejiang
University: Science C, vol. 11, no. 1, pp. 45–55, 2010
[114] C. Ghezzi, M. Jazayeri, D. Mandrioli, Fundamentals of Software Engineering. Prentice
Hall, 2003
[115] R.S. Pressman, "Software Engineering: A Practitioner's Approach," Sixth, International ed., McGraw-Hill Education, p. 388, 2005.
[116] O. Bushehrian, "A new metric for automatic program partitioning, " in Proceedings -
IEEE 9th International Conference on Computer and Information Technology, CIT
2009, 2009, vol. 2, pp. 260–265.
[117] J. Vanhanen, C. Lassenius, "Perceived effects of pair programming in an industrial context," in SEAA 2007: 33rd EUROMICRO Conference on Software Engineering and Advanced Applications, Proceedings, 2007, pp. 211-218.
[118] T. McCabe, "A complexity measure," IEEE Transactions on Software Engineering, 1976.
[119] R. Adamov, L.H. Richter, "A proposal for measuring the structural complexity of programs," Journal of Systems and Software, 1990.
[120] F.B. Abreu, R. Carapuca, "Candidate metrics for object-oriented software within a taxonomy framework," The Journal of Systems and Software, 1994.
[121] O. Bushehrian, "Applying heuristic search for distributed software performance enhancement," in Proceedings of the 2009 2nd International Conference on Computer Science and Its Applications, CSA 2009.
[122] Y. Zou, "Quality driven software migration of procedural code to object-oriented design," in ICSM 2005: Proceedings of the 21st IEEE International Conference on Software Maintenance, 2005, pp. 709-713.
[123] E.J. Chikofsky, J.H. Cross II, "Reverse engineering and design recovery: A taxonomy," IEEE Software, January 1990.
[124] S. Chardigny, A. Seriai, "Software architecture recovery process based on object-oriented source code and documentation," Lecture Notes in Computer Science, vol. 6285 LNCS, pp. 409-416, 2010.
[125] Y. Zhang, X. Liu, R. Liu, "Theoretical study on hybrid re-engineering," in 2007 8th International Conference on Electronic Measurement and Instruments, ICEMI, 2007, pp. 1107-1110.
[126] Y. Zou, K. Kontogiannis, "Incremental Transformation of Procedural Systems to Object Oriented Platforms," in Proceedings - IEEE Computer Society's International Computer Software and Applications Conference, 2003, pp. 290-295.
[127] S. Chardigny, A. Seriai, D. Tamzalit, M. Oussalah, "Quality-driven extraction of a component-based architecture from an object-oriented system," in 12th European Conference on Software Maintenance and Reengineering, Developing Evolvable Systems, pp. 269-273.
[128] Y. Choi, H. Jang, "Reverse Engineering Abstract Components for Model-Based Development and Verification of Embedded Software," in Proceedings 2010 IEEE 12th International Symposium on High-Assurance Systems Engineering (HASE), pp. 122-131.
[129] P. Clements, F. Bachmann, L. Bass, D. Garlan, J. Ivers, R. Little, R. Nord, J. Stafford, "Documenting Software Architectures: Views and Beyond," Addison Wesley, 2002.
[130] L.M.G. Feijs, R.L. Krikhaar, "Relation Algebra," International Journal of Computer Mathematics, vol. 70, pp. 57-74, 1999.
[131] L.M.G. Feijs, R.C. van Ommering, "Theory of Relations and its Applications to Software Structuring," Philips Research Internal Report, 1994.
[132] A. Kumar, B.S. Gill, "Maintenance vs. Reengineering Software Systems," Global Journal of Computer Science and Technology, vol. 11, no. 23, Oct. 2012.
[133] L. Tahvildari, A. Singh, "Software Bugs," in Wiley Encyclopedia of Electrical and Electronics Engineering, John Wiley & Sons, Inc., 2001.
[134] P.J. Finnigan, R.C. Holt, I. Kalas, S. Kerr, K. Kontogiannis, H.A. Muller, J. Mylopoulos, S.G. Perelgut, M. Stanley, K. Wong, "The software bookshelf," IBM Systems Journal, vol. 36, no. 4, pp. 564-593, 1997.
[135] Y. Zou, K. Kontogiannis, "A framework for migrating procedural code to object-oriented platforms," Software Engineering Conference, 2001, APSEC 2001, Eighth Asia-Pacific, pp. 390-399, 4-7 Dec. 2001.
[136] S. Chung, S. Davalos, J.B.C. An, K. Iwahara, "Legacy to Web migration: service-oriented software reengineering methodology," Int. J. Serv. Sci. (Switzerland), vol. 1, no. 3-4, pp. 333-365.
[137] R. Tiarks, "Quality-driven refactoring," Technical report, University of Bremen, 2005.
[138] J. Cleland-Huang, R. Settimi, X. Zou, P. Solc, "Automated classification of non-functional requirements," Requir. Eng. (UK), vol. 12, no. 2, pp. 103-120.
[139] A. De Lucia, A. Qusef, "Requirements Engineering in Agile Software Development," J. Emerg. Technol. Web Intell. (Finland), vol. 2, no. 3, pp. 212-220.
[140] A. Cavalcanti, A. Sampaio, "Refactoring Towards a Layered Architecture," Electron. Notes Theor. Comput. Sci., 2005.
[141] W. Xu, Q. Hua, N. Fei, "Object-oriented software refactoring," Computer Engineering, vol. 31, no. 5, pp. 82-84, 2005.
[142] T. Cohene, S. Easterbrook, "Contextual Risk Analysis for Interview Design," 13th IEEE International Requirements Engineering Conference (RE'05), Paris, France, pp. 95-104, 2005.
[143] D. Zowghi, C. Coulin, "Requirements Elicitation: A Survey of Techniques, Approaches, and Tools," Engineering and Managing Software Requirements, 2005.
[144] J. Wood, D. Silver, "Joint Application Development," Wiley, 1995.
[145] L. Dobrică, A.D. Ioniță, R. Pietraru, A. Olteanu, "Automatic transformation of software architecture models," UPB Scientific Bulletin, Series C: Electrical Engineering, vol. 73, no. 3, pp. 3-16, 2011.
[146] L. Etzkorn, H. Delugach, "Towards a semantic metrics suite for object-oriented design," in Proceedings, 34th International Conference on Technology of Object-Oriented Languages and Systems - TOOLS 34, pp. 71-80.
[147] R. Fuentes-Fernández, J. Pavón, F. Garijo, "A model-driven process for the modernization of component-based systems," Science of Computer Programming, vol. 77, no. 3, pp. 247-269, 2012.
[148] O. Gilles, J. Hugues, "A MDE-based optimisation process for real-time systems: Optimizing systems at the architecture-level using the real DSL and library of transformation and heuristics," Int. J. Comput. Syst. Sci. Eng. (UK), vol. 26, no. 6, pp. 447-461.
[149] J. Gu, E. Ding, B. Luo, "Feature-oriented re-engineering using product line approach," in 2nd International Conference on Information Science and Engineering, ICISE2010 - Proceedings, 2010, pp. 255-260.
[150] S. Yang, L. Fan, H. Sheng-ming, C. Ping, "Aspect-oriented software reverse engineering," J. Shanghai Univ. (China), vol. 10, no. 5, pp. 402-408.
[151] N.-L. Hsueh, J.-Y. Kuo, C.-C. Lin, "Object-oriented design: A goal-driven and pattern-based approach," Software and Systems Modeling, vol. 8, no. 1, pp. 67-84, 2009.
[152] N.-L. Hsueh, L.-C. Wen, D.-H. Ting, W. Chu, C.-H. Chang, C.-S. Koong, "An approach for evaluating the effectiveness of design patterns in software evolution," in Proceedings - International Computer Software and Applications Conference, 2011, pp. 315-320.
[153] I. Ivkovic, K. Kontogiannis, "A framework for software architecture refactoring using model transformations and semantic annotations," in 10th European Conference on Software Maintenance and Reengineering.
[154] K. Khodamoradi, J. Habibi, A. Kamandi, "Architectural styles as a guide for software architecture reconstruction," Communications in Computer and Information Science, vol. 6 CCIS, pp. 985-989, 2008.
[155] S. Kim, D.-K. Kim, L. Lu, S. Kim, S. Park, "A feature-based approach for modeling role-based access control systems," Journal of Systems and Software, vol. 84, no. 12, pp. 2035-2052, 2011.
[156] J. Knodel, I. John, D. Ganesan, M. Pinzger, F. Usero, J.L. Arciniegas, C. Riva, "Asset recovery and their incorporation into product lines," in WCRE: 12th Working Conference on Reverse Engineering 2005, Proceedings, 2005, pp. 120-129.
[157] J.W. Ko, Y.J. Song, "Graph based model transformation verification using mapping patterns and graph comparison algorithm," International Journal of Advancements in Computing Technology, vol. 4, no. 8, pp. 262-269, 2012.
[158] H. Lee, H. Choi, K.C. Kang, D. Kim, Z. Lee, "Experience report on using a domain model-based extractive approach to software product line asset development," Lecture Notes in Computer Science, vol. 5791 LNCS, pp. 137-149, 2009.
[159] Y. Li, "Reengineering a scientific software and lessons learned," in Proceedings - International Conference on Software Engineering, 2011, pp. 41-45.
[160] M. Lindvall, "Impact Analysis in Software Evolution," vol. 59, Elsevier, 2003, pp. 127-210.
[161] A. Mathrani, S. Mathrani, "Test strategies in distributed software development environments," Computers in Industry, vol. 64, no. 1, pp. 1-9, 2013.
[162] M. Matinlassi, "Quality-driven software architecture model transformation," in Proceedings - 5th Working IEEE/IFIP Conference on Software Architecture, WICSA 2005, 2005, pp. 199-200.
[163] J.A. Miller, R. Ferrari, N.H. Madhavji, "An exploratory study of architectural effects on requirements decisions," Journal of Systems and Software, vol. 83, no. 12, pp. 2441-2455, 2010.
[164] J.F. Navas, J.-P. Babau, J. Pulou, "Reconciling run-time evolution and resource-constrained embedded systems through a component-based development framework," Science of Computer Programming, 2012.
[165] H. Neukirchen, B. Zeiss, J. Grabowski, "An approach to quality engineering of TTCN-3 test specifications," International Journal on Software Tools for Technology Transfer, vol. 10, no. 4, pp. 309-326, 2008.
[166] S. Parsa, O. Bushehrian, "Performance-driven object-oriented program re-modularisation," IET Software, vol. 2, no. 4, pp. 362-378, 2008.
[167] Z. Ping, S. Yang, "Understanding the aspects from various perspectives in aspects-oriented software reverse engineering," in 2010 International Conference on Computer Application and System Modeling (ICCASM 2010), vol. 11, 2010.
[168] F. Qayum, R. Heckel, "Local search-based refactoring as graph transformation," in Proceedings - 1st International Symposium on Search Based Software Engineering, SSBSE 2009, 2009, pp. 43-46.
[169] O. Räihä, K. Koskimies, E. Mäkinen, "Generating software architecture spectrum with multi-objective genetic algorithms," in Proceedings of the 2011 3rd World Congress on Nature and Biologically Inspired Computing, NaBIC 2011, 2011, pp. 29-36.
[170] K.N. Reddy, A.A. Rao, M.G. Chand, K. Kumar J., "A quantitative method to detect design defects and to ascertain the elimination of design defects after refactoring," in Proceedings of the 2008 International Conference on Software Engineering Research and Practice, SERP 2008, 2008, pp. 79-85.
[171] K. Sagonas, T. Avgerinos, "Automatic refactoring of Erlang programs," in PPDP'09 - Proceedings of the 11th International ACM SIGPLAN Symposium on Principles and Practice of Declarative Programming, 2009, pp. 13-23.
[172] R. Shatnawi, W. Li, "An empirical assessment of refactoring impact on software quality using a hierarchical quality model," International Journal of Software Engineering and its Applications, vol. 5, no. 4, pp. 127-150, 2011.
[173] N. V. Silva, A. S. R. Oliveira, and N. B. Carvalho, "Design and optimization of flexible and coding efficient all-digital RF transmitters," IEEE Transactions on Microwave Theory and Techniques, vol. 61, no. 1, pp. 625–632.
[174] C. Stoermer, A. Rowe, L. O'Brien, and C. Verhoef, "Model-centric software architecture reconstruction," Software - Practice and Experience, vol. 36, no. 4, pp. 333–363, 2006.
[175] Y. Zou and K. Kontogiannis, "Migration to object oriented platforms: A state transformation approach," in Conference on Software Maintenance, 2002, pp. 530–539.
[176] K. Stroggylos and D. Spinellis, "Refactoring - Does it improve software quality?," in Proceedings - ICSE 2007 Workshops: 5th International Workshop on Software Quality, WoSQ 2007, 2007.
[177] R. E. Sward, A. T. Chamillard, and D. A. Cook, "Using software metrics and program slicing for refactoring," CrossTalk, no. 7, pp. 20–24, 2004.
[178] L. Tahvildari, "Evolving legacy systems through a multi-objective decision process," in Proceedings Twelfth International Workshop on Software Technology and Engineering Practice.
[179] L. Tahvildari and K. Kontogiannis, "Requirements driven software evolution," in Program Comprehension, Workshop Proceedings, 2004, vol. 12, pp. 258–259.
[180] L. Tahvildari and K. Kontogiannis, "Developing a multi-objective decision approach to select source-code improving transformations," in IEEE International Conference on Software Maintenance, ICSM, 2004, pp. 427–431.
[181] L. Tahvildari and K. Kontogiannis, "A software transformation framework for quality-driven object-oriented re-engineering," in International Conference on Software Maintenance, Proceedings, 2002, pp. 596–605.
[182] L. Tahvildari and K. Kontogiannis, "A methodology for developing transformations using the maintainability soft-goal graph," in Proceedings Ninth Working Conference on Reverse Engineering, WCRE 2002, pp. 77–86.
[183] L. Tahvildari and K. Kontogiannis, "Improving design quality using meta-pattern transformations: a metric-based approach," Journal of Software Maintenance and Evolution: Research and Practice, vol. 16, no. 4–5, pp. 331–361, 2004.
[184] L. Tahvildari and K. Kontogiannis, "Quality-driven object-oriented code restructuring," in Second Workshop on Software Quality, W13S Workshop - 26th International Conference on Software Engineering, pp. 47–52.
[185] S. Yang, "Understanding crosscutting concerns from various perspectives in software reverse engineering," in Proceedings of the 2010 Sixth International Conference on Networked Computing and Advanced Information Management (NCM 2010), vol. 1, pp. 145–150.
[186] Y. Wu, Y. Yang, X. Peng, C. Qiu, and W. Zhao, "Recovering object-oriented framework for software product line reengineering," Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 6727 LNCS. School of Computer Science, Fudan University, Shanghai 201203, China, pp. 119–134, 2011.