Xavier Palomo Teruel
NETgine, a Lear C-Code Embedded Software
Verification Framework
BACHELOR’S DEGREE FINAL PROJECT
Supervisors: Josep Yepes, Xavier Munté and Roberto Giral
Degree in Industrial Electronics and Automation Engineering
Tarragona
2015
Acknowledgements ≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈
To my parents Elvira and Angel, who gave me the opportunity to pursue a degree and
supported my decisions every day.
To Sara, whose companionship has been vital during these years to achieve this goal.
To my brother Marc, and my sister Miriam, who have been my reference since I was a kid.
To my Lear supporters, Xavier, Josep and Juan Manuel, who have guided me through this project.
≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈≈
Universitat Rovira i Virgili
Table of Contents
1 Introduction
2 State of the Art
2.1 The Electronic Control Unit (ECU)
2.2 Object Oriented Programming
2.3 Model – View – Controller
2.4 Visual Basic .NET
2.5 Software Component
3 Catastrophic Accidents due to Software Bugs
4 Brief Introduction to Software Testing
4.1 Fault, Failure and Error
4.2 V-Model
4.2.1 Verification Phases
4.2.2 Validation Phases
4.3 Testing Methodologies
4.3.1 C-Unit
4.3.2 Chronograms-based testing
5 Project Objectives
6 Requirements
7 Project’s Structure and Development
7.1 High Level Abstraction
7.2 Structure Diagram
7.3 Layers description and functionality
7.3.1 Import Layer
7.3.2 Mock
7.3.3 Generation
7.3.4 Errors Handler
7.3.5 Running Engine
7.3.6 Check Layer
7.3.7 Views and Controllers
7.3.8 Model
7.3.9 Constants
7.3.10 APIs
8 Results
8.1 Outer Appearance
8.2 Unit Test Folders’ Structure
8.3 C-Unit Reports
8.4 MVT Reports
8.4.1 Passed Test
8.4.2 Failed Test
9 Tool’s Test Battery
10 Tool’s Impact
10.1 Time Point of View
10.2 Economical Point of View
10.3 Intangible Point of View
11 Conclusions
11.1 Results on Real Projects
11.2 Further Improvements
12 Bibliography and References
13 Annexes
List of Figures
Figure 2-1: Electronic Control Unit (ECU)
Figure 2-2: Model – View – Controller Structure
Figure 4-1: Influence on the Project Chain
Figure 4-2: Influence on the V-Model Methodology
Figure 4-3: Chronograms-based testing
Figure 7-1: Tool’s High Level Diagram
Figure 7-2: Tool’s Structure and Interaction Diagram
Figure 7-3: Import Layer
Figure 7-4: Generation Layer
Figure 7-5: Views and Controllers I
Figure 7-6: Views and Controllers II
Figure 7-7: Views and Controllers III
Figure 7-8: Test Structure
Figure 8-1: Main Menu View
Figure 8-2: About View
Figure 8-3: Test Configuration View
Figure 8-4: Test Export View
Figure 8-5: Import Test Case View
Figure 8-6: Import Libraries View
Figure 8-7: DFAs Import View
Figure 8-8: Import SWC View
Figure 8-9: Continue View
Figure 8-10: Unit Test Folders
Figure 8-11: 02_SW_COMPONENTS Folders and Files
Figure 8-12: 00_tools Items
Figure 8-13: report Folder Content
Figure 8-14: ut_SGNmock_files Sample Content
Figure 8-15: C-Unit Report
Figure 8-16: Test Coverage
Figure 8-17: Lines of Code Executed
Figure 8-18: Main Report I
Figure 8-19: Main Report II
Figure 8-20: All Signals Report
Figure 8-21: Passing Output Signal
Figure 8-22: Main Report Failed Test I
Figure 8-23: Main Report Failed Test II
Figure 8-24: All Signals Failed Test Report
Figure 8-25: Failed Signal
List of Tables
Table 6-1: List of Requirements
Table 6-2: List of Optional Requirements
Table 6-3: Requirements Summary
Acronyms
ECU: Electronic Control Unit
BCM: Brake Control Module
MVC: Model-View-Controller
XML: eXtensible Markup Language
VB: Visual Basic
XSL: Extensible Stylesheet Language
XSLT: XSL Transformations
HTML: HyperText Markup Language
MIL: Model in the loop
SIL: Software in the loop
PIL: Processor in the loop
HIL: Hardware in the loop
SWC: Software Component
TC: Test Case
OEM: Original Equipment Manufacturer
OOP: Object Oriented Programming
API: Application Programming Interface
MVT: Modelling Verification Toolkit
DFA: Data Flow Architecture
COM: Component Object Model
DLL: Dynamic Link Library
1 Introduction
In recent years, the number of electronic devices and solutions in cars has increased
enormously, and so has their complexity. This is the main reason why it is becoming more and
more difficult to test every single component and guarantee its correct functionality.
Because these components interact with each other, every single case must be tested,
considering all the possible combinations and their evolution over time.
Doing all of this by hand is becoming almost impossible: apart from the complexity and the
time it takes, manual testing inevitably leads to errors and oversights.
For that reason, companies are developing new software tools to test their components
automatically, saving engineering time and delivering sturdier products; this matters because
customer confidence is paramount.
NETgine is one such tool: it eases the work of developers and testers who must verify the
software that controls a system or subsystem of a vehicle.
2 State of the Art
2.1 The Electronic Control Unit (ECU)
In automotive electronics, an electronic control unit (ECU) [1] is a generic term for
any embedded system that controls one or more of the electrical system or subsystems in
a motor vehicle.
Types of ECU include Electronic/Engine Control Module (ECM), Powertrain Control Module
(PCM), Transmission Control Module (TCM), Brake Control Module (BCM or EBCM),
Central Control Module (CCM), Central Timing Module (CTM), General Electronic Module
(GEM), Body Control Module (BCM), Suspension Control Module (SCM), control unit, or
control module. Taken together, these systems are sometimes referred to as the car's computer.
(Technically there is no single computer but multiple ones.) Sometimes one assembly
incorporates several of the individual control modules (PCM is often both engine and
transmission).
Some modern motor vehicles have up to 80 ECUs. Embedded software in ECUs continues to
increase in line count, complexity, and sophistication. Managing the increasing complexity
and number of ECUs in a vehicle has become a key challenge for OEMs.
The development of an ECU involves both the hardware and the software required to perform
the functions expected from that particular module. Automotive ECUs are developed
following the V-model. Recently, the trend has been to dedicate a significant amount of time
and effort to developing safe modules by following standards such as ISO 26262. It is rare for
a module to be developed fully from scratch: the design is generally iterative, and
improvements are made to both the hardware and the software. The development of most
ECUs is carried out by tier-1 suppliers based on specifications provided by the OEM.
Figure 2-1: Electronic Control Unit (ECU)
2.2 Object Oriented Programming
Object-oriented programming is a paradigm that differs from the older procedural
programming style: everything in OOP is grouped into self-contained objects, which
encourages reusability.
First of all, the programmer defines and programs a class, which can be thought of as
“something” able to perform certain functions. A class may contain variables, methods and
properties.
Once a class has been created, we can obtain objects of it through instantiation, and we can
derive new classes that keep the features of the parent class while also implementing new or
modified functionality.
As all of this represents a very different point of view compared with procedural
programming, let us apply object-oriented programming to an everyday example.
We have all probably dealt with a very simple calculator, able only to add, subtract,
multiply and divide, and also provided with an “Equals” button. We can see a calculator
like the one described as the parent class.
The functions of this calculator are the basics that every calculator should have.
Even so, an engineer would probably want more powerful features, such as square roots,
exponentials, the number π, and so on, while still keeping the basic parent operations. In
OOP, we can derive from the parent calculator, which enables us to reuse the existing
functions to add, subtract, multiply and divide, and then simply add the new operations
we desire.
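This parent/child relationship can be sketched in a few lines of object-oriented code. The class and method names below are purely illustrative, not part of NETgine:

```cpp
#include <cmath>

// Parent class: the four basic operations every calculator offers.
class Calculator {
public:
    double Sum(double a, double b) const { return a + b; }
    double Subtract(double a, double b) const { return a - b; }
    double Multiply(double a, double b) const { return a * b; }
    double Divide(double a, double b) const { return a / b; }
};

// Child class: the engineer's calculator inherits the basic operations
// unchanged and only adds the extra ones it needs.
class ScientificCalculator : public Calculator {
public:
    double SquareRoot(double a) const { return std::sqrt(a); }
    double Exponential(double a) const { return std::exp(a); }
    double Pi() const { return 3.14159265358979323846; }
};
```

An instance of `ScientificCalculator` can call `Sum` or `Divide` without those methods being rewritten, which is exactly the reuse the paragraph above describes.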
As in OOP everything can be seen as an object, we are able to design high level classes such
as Windows forms, signals, etc.
This philosophy is the one which has been followed during the development of this project.
2.3 Model – View – Controller
Model–View–Controller (MVC) [2] [3] is a software architectural pattern for
implementing user interfaces. It divides a given software application into three interconnected
parts, so as to separate internal representations of information from the ways that information
is presented to or accepted from the user.
The central component of MVC, the model, captures the behaviour of the application in terms
of its problem domain, independent of the user interface. The model directly manages the data,
logic and rules of the application. A view can be any output representation of information,
such as a chart or a diagram; multiple views of the same information are possible, such as a
bar chart for management and a tabular view for accountants. The third part, the controller,
accepts input and converts it to commands for the model or view.
In addition to dividing the application into three kinds of components, the model–view–
controller design defines the interactions between them.
A controller can send commands to the model to update the model's state (e.g., editing a
document). It can also send commands to its associated view to change the view's
presentation of the model (e.g., by scrolling through a document).
A model notifies its associated views and controllers when there has been a change in its
state. This notification allows the views to produce updated output, and the controllers to
change the available set of commands. In some cases an MVC implementation may
instead be 'passive' and other components must poll the model for updates rather than
being notified.
A view requests information from the model that it uses to generate an output
representation to the user.
Let’s explain this using a simple and common example: think of a Windows form that asks
you to fill in your name in a text box. That form would also probably be provided with both an
“Ok” and a “Cancel” button.
Figure 2-2: Model – View – Controller Structure
The view of this little program would be the display the user interacts with, containing little
logic of its own. The controller is called whenever a button is pressed, by catching an event
from the view. For example, if the user presses “Cancel”, the view calls its controller and the
controller closes the form; if the user presses “Ok”, the view catches this event and calls the
controller function that handles that button. Any data the user has entered in the text box is
stored in the model.
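That separation can be sketched in a few classes. This is a minimal illustration of the pattern, not Windows Forms code; all names are hypothetical:

```cpp
#include <string>

// Model: stores the data and knows nothing about the display.
class NameModel {
    std::string name_;
public:
    void SetName(const std::string& n) { name_ = n; }
    const std::string& GetName() const { return name_; }
};

// Controller: receives events forwarded by the view and updates the model.
class NameController {
    NameModel& model_;
public:
    explicit NameController(NameModel& m) : model_(m) {}
    // Called by the view when the user presses "Ok".
    void OnOkPressed(const std::string& textBoxContent) {
        model_.SetName(textBoxContent);
    }
};

// View: displays the widgets and merely forwards button events.
class NameView {
    NameController& controller_;
public:
    explicit NameView(NameController& c) : controller_(c) {}
    // Simulates the user typing a name and pressing "Ok".
    void UserPressesOk(const std::string& typedText) {
        controller_.OnOkPressed(typedText);
    }
};
```

Note that the view never touches the model directly: the text only reaches the model through the controller, which is what keeps the three parts replaceable.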
2.4 Visual Basic .NET
Visual Basic [4] is a third-generation event-driven programming language and integrated
development environment (IDE) from Microsoft for its COM programming model first
released in 1991. Microsoft intended Visual Basic to be relatively easy to learn and use. Visual
Basic was derived from BASIC and enables the rapid application development
(RAD) of graphical user interface (GUI) applications, access to databases using Data Access
Objects, Remote Data Objects, or ActiveX Data Objects, and creation of ActiveX controls and
objects.
Like the BASIC programming language, Visual Basic was designed to accommodate a
steep learning curve. Programmers can create both simple and complex GUI applications.
Programming in VB is a combination of visually arranging components or controls on a form,
specifying attributes and actions for those components, and writing additional lines of code for
more functionality. Since VB defines default attributes and actions for the components, a
programmer can develop a simple program without writing much code. Programs built with
earlier versions suffered performance problems, but faster computers and native code
compilation have made this less of an issue.
Forms are created using drag-and-drop techniques. A tool is used to place controls (e.g., text
boxes, buttons, etc.) on the form. Controls have attributes and event handlers associated with
them. Default values are provided when the control is created, but may be changed by the
programmer. Many attribute values can be modified during run time based on user actions or
changes in the environment, providing a dynamic application. For example, code can be
inserted into the form resize event handler to reposition a control so that it remains in the
center of the form, expands to fill up the form, etc. By inserting code into the event handler for
a key press in a text box, the program can automatically translate the case of the text being
entered, or even prevent certain characters from being inserted.
Visual Basic can create executable files, ActiveX controls, or DLL files, but is primarily used
to develop Windows applications and to interface database systems. Dialog boxes with less
functionality can be used to provide pop-up capabilities. Controls provide the basic
functionality of the application, while programmers can insert additional logic within the
appropriate event handlers.
For example, a drop-down combination box automatically displays a list. When the user
selects an element, an event handler is called that executes code that the programmer created
to perform the action for that list item.
Alternatively, a Visual Basic component can have no user interface, and instead provide
ActiveX objects to other programs via COM. This allows for server-side processing or an add-
in module.
Visual Basic .NET (VB.NET) is a multi-paradigm, high level programming language,
implemented on the .NET Framework. Microsoft launched VB.NET in 2002 as the successor
to its original Visual Basic language.
As this application was intended to be visual and interactive, I chose Visual Studio .NET to
develop it.
2.5 Software Component
Many systems, such as the electronics of a car, are commonly broken down into smaller parts
or subsystems so that the whole system can be tested more easily.
The smallest part of that whole system is usually referred to as a software component (also
known as “componentware”).
Following the example of the electronics of a car, we could imagine the system to be all the
electrical and electronic connections established in the car, and some software
components of that system could be the lighting system, the control of the wipers, the
windows, the air conditioning, and so on.
For a software developer or a tester, it would be almost impossible to test the system as a
whole, given all the possible interrelations between these subsystems; a huge number of
input combinations would have to be analysed to test the car’s outputs.
By dividing the system into software components, this job becomes much easier.
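In embedded code such a component typically reduces to a set of input signals, a set of output signals, and a periodic step function. The wiper component below is a hypothetical sketch (names and logic invented for illustration), showing why this shape is easy to test in isolation:

```cpp
#include <cstdint>

// Input signals, set by the framework before each step.
struct WiperInputs {
    bool    rain_detected;
    uint8_t lever_position;   // 0 = off, 1 = intermittent, 2 = fast
};

// Output signals, read back by the framework after each step.
struct WiperOutputs {
    bool    motor_on;
    uint8_t motor_speed;      // 0..2
};

// Periodic step function: a pure mapping from inputs to outputs,
// which is what makes the component testable on its own.
void Wiper_Step(const WiperInputs& in, WiperOutputs& out) {
    if (in.lever_position > 0) {
        out.motor_on = true;
        out.motor_speed = in.lever_position;
    } else if (in.rain_detected) {
        out.motor_on = true;
        out.motor_speed = 1;  // automatic intermittent wiping
    } else {
        out.motor_on = false;
        out.motor_speed = 0;
    }
}
```

A test only has to drive the input signals and check the output signals, step by step, with no need to simulate the rest of the vehicle.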
3 Catastrophic Accidents due to Software Bugs
Historically, there have been a great number of catastrophes caused by software bugs. Some
of them were truly tragic, even costing human lives. These bugs have struck many fields, such
as aeronautics, medical equipment and automotive.
An overview of some of the most famous ones is summarized here in chronological order.
While technology wasn’t to blame per se, there are plenty of recorded examples where faulty
hardware and software have cost the organizations concerned dearly, both financially and in
terms of reputation, and have resulted in some near misses for the public.
For some of them, the approximate total losses have been calculated.
1. Mariner Bugs Out (1962)
Cost: $18.5 million.
Disaster: [8] The Mariner 1 rocket with a space probe headed for Venus diverted from its
intended flight path shortly after launch. Mission Control destroyed the rocket 293 seconds
after liftoff.
Cause: A programmer incorrectly transcribed a handwritten formula into computer code,
missing a single superscript bar. Without the smoothing function indicated by the bar, the
software treated normal variations of velocity as if they were serious, causing faulty
corrections that sent the rocket off course.
2. CIA gives the Soviets Gas (1982)
Cost: Millions of dollars, significant damage to Soviet economy.
Disaster: [8] Control software went haywire and produced intense pressure in the Trans-
Siberian gas pipeline, resulting in the largest man-made non-nuclear explosion in Earth’s
history.
Cause: CIA operatives allegedly planted a bug in a Canadian computer system purchased by
the Soviets to control their gas pipelines. The purchase was part of a strategic Soviet plan to
steal or covertly obtain sensitive U.S. technology. When the CIA discovered the purchase,
they sabotaged the software so that it would pass Soviet inspection but fail in operation.
3. Faulty Soviet early warning System nearly causes WWIII (1983)
Cost: Nearly World War III.
[5] [8] The threat of computers purposefully starting World War III is still the stuff of science
fiction, but accidental software glitches have brought us too close in the past. Although there
have been numerous alleged events of this ilk, the secrecy around military systems makes it
hard to sort the urban myths from the real incidents.
However, one example that is well recorded happened back in 1983, and was the direct result
of a software bug in the Soviet early warning system. The system reported that the United
States had launched five ballistic missiles. However, the duty officer claimed he had a “funny
feeling in my gut”, and reasoned that if the U.S. were really attacking it would launch more
than five missiles.
The trigger for the near apocalyptic disaster was traced to a fault in software that was
supposed to filter out false missile detections caused by satellites picking up sunlight
reflections off cloud-tops.
4. The Therac 25 Tragedy (1985 - 1987)
Cost: Six people’s lives.
[6] [7] [8] The Therac-25 was a software-controlled radiation treatment device for cancer
patients. Between 1985 and 1987, a combination of software and system failures resulted in
the death of six people.
The Therac-25 was the result of an evolutionary development from a predecessor machine.
Leveson and Turner, who investigated the tragedy, suggest that the safety requirements were
well understood, but that the system and software architectures were flawed: all hardware
safety interlocks had been removed, leaving the software checks as the only safeguard. The
software architecture was also flawed because it did not guarantee the integrity of treatment
commands entered by the operator. There were several errors, among them the programmers’
failure to detect a race condition (i.e., a lack of coordination between concurrent tasks).
5. Wall Street Crash (1987)
Cost: $500 billion in one day.
Disaster: [8] On “Black Monday” (October 19, 1987), the Dow Jones Industrial Average
plummeted 508 points, losing 22.6% of its total value. The S&P 500 dropped 20.4%. This was
the greatest loss Wall Street ever suffered in a single day.
Cause: A long bull market was halted by a rash of SEC investigations of insider trading and
by other market forces. As investors fled stocks in a mass exodus, computer trading programs
generated a flood of sell orders, overwhelming the market, crashing systems and leaving
investors effectively blind.
6. Patriot fails Soldiers (1991)
Cost: 28 soldiers dead, 100 injured.
Disaster: [8] During the first Gulf War, an American Patriot Missile system in Saudi Arabia
failed to intercept an incoming Iraqi Scud missile. The missile destroyed an American Army
barracks.
Cause: A software rounding error in the system’s internal clock accumulated with uptime,
causing the Patriot system to miscalculate the position of the incoming Scud and ignore it.
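The mechanism is easy to reproduce: one tenth of a second has no exact binary representation, so a chopped constant drifts as it accumulates. The sketch below assumes the 23-fractional-bit chopping described in published analyses of the incident; it is an illustration, not the actual Patriot code:

```cpp
#include <cmath>

// Accumulated clock error (in seconds) after `hours` of uptime, assuming
// 0.1 s is stored chopped to 23 fractional binary bits, as reported in
// published analyses of the Patriot failure.
double patriot_clock_error(double hours) {
    const double chopped = std::floor(0.1 * (1 << 23)) / (1 << 23);
    const double ticks = hours * 3600.0 * 10.0;  // one tick every 0.1 s
    return ticks * (0.1 - chopped);
}
```

After about 100 hours of continuous operation the drift is roughly a third of a second, during which a Scud travelling at around 1.7 km/s moves some 500 m, far outside the tracking window.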
7. Pentium fails long Division (1993)
Cost: $475 million, corporate credibility.
Disaster: [8] Intel’s highly-promoted Pentium chip occasionally made mistakes when dividing
floating-point numbers within a specific range. For example, dividing 4195835.0/3145727.0
yielded 1.33374 instead of 1.33382, an error of 0.006%. Although the bug affected few users,
it became a public relations nightmare. With an estimated 5 million defective chips in
circulation, Intel offered to replace Pentium chips only for consumers who could prove they
needed high accuracy. Eventually Intel replaced the chips for anyone who complained.
Cause: The divider in the Pentium floating point unit had a flawed division table, missing
about five of a thousand entries and resulting in these rounding errors.
8. Airbus A-340 Shenanigans (1995)
[6] The BBC news at 08.30 GMT, 15 March 1995 reported a slight problem, which occurred
on the morning of 15 Mar 1995 with the ultra high-tech, packed full of software and therefore
utterly wonderful Airbus A340.
Apparently on the final part of its approach to Gatwick, both the pilots’ screens went blank, to
be replaced by a polite little message saying "Please wait...". Somewhat unnerved, the pilots
requested that the plane turn left, but it turned right instead. They then tried to get it to adopt a
3 degree approach to the runway, but it chose a 9 degree plummet instead. At this point, from
the report, they appeared to gain manual control and landed safely.
9. Is Flying Safe anymore? (1995)
[6] The next time you board a plane, try not to think about this: Flight Simulator running on
your notebook may be more reliable than the software that keeps planes from colliding in
midair. That's because the FAA's air-traffic-control system still uses software from the 1970s.
It runs on a vacuum-tube IBM 9020e mainframe that dates back a decade earlier. This system
contributed to almost a dozen failures at air-traffic-control centers in the past year, including
unnerving back-to-back breakdowns on July 23 and 24, 1995 in Chicago.
For more than a decade, the FAA has been working to replace this antiquated system. Sadly,
the alternative, the Advanced Automation System with its million-plus lines of code written
since the early 1980s, is riddled with bugs. And six years later, computer scientists from two
leading universities have had to comb through it to see if any code is salvageable. Faced with
software that's too unreliable to trust in life-and-death situations, the FAA must rely instead on
its old and collapsing -- but well-understood -- air-traffic-control system.
10. The Explosion of the Ariane 5 (1996)
[5] [8] In 1996, Europe's newest and unmanned satellite-launching rocket, the Ariane 5, was
intentionally blown up just seconds after taking off on its maiden flight from Kourou, French
Guiana. The European Space Agency estimated that total development of Ariane 5 cost more
than $8bn (£4bn). On board Ariane 5 was a $500 million (£240 million) set of four scientific
satellites created to study how the Earth’s magnetic field interacts with the solar wind.
According to a piece in the New York Times Magazine, the self-destruction was triggered by
software trying to stuff "a 64-bit number into a 16-bit space."
"This shutdown occurred 36.7 seconds after launch, when the guidance system's own
computer tried to convert one piece of data--the sideways velocity of the rocket--from a 64-bit
format to a 16-bit format. The number was too big, and an overflow error resulted. When the
guidance system shut down, it passed control to an identical, redundant unit, which was there
to provide backup in case of just such a failure. But the second unit had failed in the identical
manner a few milliseconds before. And why not? It was running the same software," the
article stated.
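The failure mode is easy to reproduce in any language with fixed-width integers. The Ariane flight software was written in Ada; the C++ sketch below only illustrates the hazard of the unguarded narrowing conversion and the range check that would have caught it:

```cpp
#include <cstdint>
#include <limits>
#include <stdexcept>

// A checked 64-bit-float to 16-bit-integer conversion: the guard that the
// Ariane 5 alignment code effectively lacked for this variable. In the
// flight software the out-of-range conversion raised an unhandled Operand
// Error, shutting the guidance computer down.
int16_t to_int16_checked(double value) {
    if (value < std::numeric_limits<int16_t>::min() ||
        value > std::numeric_limits<int16_t>::max()) {
        throw std::overflow_error("value does not fit in 16 bits");
    }
    return static_cast<int16_t>(value);
}
```

A value within ±32767 converts cleanly; the larger sideways-velocity value produced by Ariane 5’s faster trajectory could not fit, so the conversion failed in both the primary and the identical backup unit.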
11. Mars Climate Orbiter metric problem (1998)
[5] [8] Two spacecraft, the Mars Climate Orbiter and the Mars Polar Lander, were part of a
space program that, in 1998, was supposed to study the Martian weather, climate, and the
water and carbon dioxide content of the atmosphere. But a problem occurred when a
navigation error caused the orbiter to fly too low in the atmosphere, where it was destroyed.
What caused the error? A subcontractor on the NASA program had used imperial units
(customary in the U.S.) rather than the metric units NASA had specified.
12. The two-digit year-2000 problem (1999/2000)
[5] Many IT vendors and contractors did very well out of the billions spent to avoid what
many feared would be the disaster related to the Millennium Bug. Rumours of astronomical
contract rates and retainers abounded. And the sound of clocks striking midnight in time zones
around the world was followed by... not panic, not crashing computer systems, in fact nothing
more than New Year celebrations.
So why include it here? That the predictions of doom came to naught is irrelevant: we are not
talking about the disaster that was averted, but about the original disastrous decision to use,
and to keep using for longer than was either necessary or prudent, two digits for the date field
in computer programs. A report by the House of Commons Library pegged the cost of fixing
the bug at £400 billion.
13. Dot-Bomb Collapse (2000)
Cost: $5 trillion in market value, thousands of companies failed.
Disaster: [8] A speculative bubble from 1995–2001 fuelled a rapid increase in venture capital
investments and stock market values in the Internet and technology sectors. The “dot-com
bubble” began to collapse in early 2000, erasing trillions in stock market value, wiping out
thousands of companies and jobs, and launching a global recession.
Cause: Companies and investors dismissed standard business models, and instead focused on
increasing market share at the expense of profits.
14. Cancer Treatment to Die For (2000)
Cost: Eight people dead, 20 critically injured.
Disaster: [8] Radiation therapy software by Multidata Systems International miscalculated the
proper dosage, exposing patients to harmful and in some cases fatal levels of radiation. The
physicians, who were legally required to double-check the software’s calculations, were
indicted for murder.
Cause: The software calculated radiation dosage based on the order in which data was
entered, sometimes delivering a double dose of radiation.
15. When the laptops exploded (2006)
[5] It all began simply, but certainly not quietly, when a laptop manufactured by Dell burst
into flames at a trade show in Japan. There had been rumours of laptops catching fire, but the
difference here was that the Dell laptop managed to do it in the full glare of publicity and
video captured it in full colour.
"We have captured the notebook and have begun investigating the event," Dell spokeswoman
Anne Camden reported at the time, and investigate Dell did. At the end of these investigations
the problem was traced to an issue with the battery/power supply on the individual laptop that
had overheated and caught fire.
It was an expensive issue for Dell to sort out. As a result of its investigation Dell decided that
it would be prudent to recall and replace 4.1m laptop batteries.
Company chief executive Michael Dell eventually laid the blame for the faulty batteries with
the manufacturer of the battery cells, Sony. But that wasn’t the end of it. Apple reported
issues for iPods and MacBooks, and many PC suppliers reported the same. Matsushita alone
had to recall around 54 million devices. Sony estimated at the time that the overall cost of
supporting the recall programs of Apple and Dell would amount to between ¥20 billion
(£90m) and ¥30 billion.
4 Brief Introduction to Software Testing
[9] [10] Software testing is an investigation conducted to provide stakeholders with
information about the quality of the product or service under test. Software testing can also
provide an objective, independent view of the software to allow the business to appreciate and
understand the risks of software implementation. Test techniques include, but are not limited
to, the process of executing a program or application with the intent of finding software
bugs (errors or other defects).
It involves the execution of a software component or system to evaluate one or more
properties of interest. In general, these properties indicate the extent to which the component
or system under test:
- Meets the requirements that guided its design and development.
- Responds correctly to all kinds of inputs.
- Performs its functions within an acceptable time.
- Is sufficiently usable.
- Can be installed and run in its intended environments.
- Achieves the general result its stakeholders desire.
As the number of possible tests for even simple software components is practically infinite, all
software testing uses some strategy to select tests that are feasible for the available time and
resources. As a result, software testing typically (but not exclusively) attempts to execute a
program or application with the intent of finding software bugs (errors or other defects).
Software testing can provide objective, independent information about the quality of software
and risk of its failure to users and/or sponsors.
Software testing can be conducted as soon as executable software (even if partially complete)
exists. The overall approach to software development often determines when and how testing
is conducted. For example, in a phased process, most testing occurs after system requirements
have been defined and then implemented in testable programs. In contrast, under an agile
approach, requirements, programming, and testing are often done concurrently.
4.1 Fault, Failure and Error
Fault: An incorrect step, process, or data definition in a computer program which causes the
program to perform in an unintended or unanticipated manner. It is an inherent weakness of
the design or implementation which might result in a failure. A fault might be present yet
latent in a system, as in the Patriot Missile failure and the Therac-25 accidents described
before. Such faults lead to a failure when the exact scenario is met.
Fault avoidance: using techniques and procedures which aim to avoid the introduction of
faults during any phase of the safety lifecycle of the safety-related system.
Fault tolerance: the ability of a functional unit to continue to perform a required function in
the presence of faults or errors.
Failure: The inability of a system or component to perform its required functions within
specified performance requirements.
Error: A discrepancy between a computed, observed, or measured value or condition and the
true, specified, or theoretically correct value or condition.
Another fact which must be highlighted is that bugs (faults, in software terms) raise a
project’s cost the later they are found; that is why the investment in testing is so large. A bug
found at the end of the project may cause the whole project to be redone, and one found after
production has started can cause losses of millions. Ideally, therefore, as many bugs as
possible should be detected and corrected as early as possible, at the requirements stage, or at
least during testing, before production has started.
This application, NETgine, helps in the testing process. More concretely, it helps test the
software before its integration with the hardware.
Figure 4-1: Influence on the Project Chain
4.2 V-Model
The V-model [11] [12] represents a software development process (also applicable to
hardware development) which may be considered an extension of the waterfall model. [13]
Instead of moving down in a linear way, the process steps are bent upwards after
the coding phase, to form the typical V shape. The V-Model demonstrates the relationships
between each phase of the development life cycle and its associated phase of testing. The
horizontal and vertical axes represent time or project completeness (left-to-right) and level of
abstraction (coarsest-grain abstraction uppermost), respectively.
Something important to bear in mind is that NETgine does not test the ECU itself: it does not
test the ECU’s hardware directly, but the software that will be loaded onto it. In practice, that
means reinforcing the effort made in Unit Testing.
Figure 4-2: Influence on the V-Model Methodology
4.2.1 Verification Phases
4.2.1.1 Requirements Analysis
In the requirements analysis phase, the first step in the verification process,
the requirements of the system are collected by analyzing the needs of the user. This phase is
concerned with establishing what the ideal system has to perform. However it does not
determine how the software will be designed or built. Usually, the users are interviewed and a
document called the user requirements document is generated.
The user requirements document will typically describe the system’s functional, interface,
performance, data, security, etc. requirements as expected by the user. It is used by business
analysts to communicate their understanding of the system to the users. The users carefully
review this document as this document would serve as the guideline for the system designers
in the system design phase. The user acceptance tests are designed in this phase.
There are different methods for gathering requirements of both soft and hard methodologies,
including: interviews, questionnaires, document analysis, observation, throw-away prototypes,
use cases, and static and dynamic views with users.
4.2.1.2 System Design
Systems design is the phase where system engineers analyze and understand the business of
the proposed system by studying the user requirements document. They figure out possibilities
and techniques by which the user requirements can be implemented. If any of the requirements
are not feasible, the user is informed of the issue. A resolution is found and the user
requirement document is edited accordingly.
The software specification document which serves as a blueprint for the development phase is
generated. This document contains the general system organization, menu structures, data
structures, etc. It may also hold example business scenarios, sample windows, and reports for
better understanding. Other technical documentation, such as entity diagrams and the data
dictionary, will also be produced in this phase. The documents for system testing are prepared.
4.2.1.3 Architecture Design
The phase of the design of computer architecture and software architecture can also be
referred to as high-level design. The baseline in selecting the architecture is that it should
realize all the requirements. The high-level design typically consists of the list of modules, a
brief description of each module’s functionality, their interface relationships, dependencies,
database tables, architecture diagrams, technology details, etc. The integration testing design
is carried out in this phase.
4.2.1.4 Module Design
The module design phase can also be referred to as low-level design. The designed system is
broken up into smaller units or modules and each of them is explained so that the programmer
can start coding directly. The low-level design document, or program specifications, will
contain a detailed functional logic of the module, in pseudo-code:
- Database tables, with all elements, including their type and size.
- All interface details with complete API [14] references.
- All dependency issues.
- Error message listings.
- Complete inputs and outputs for a module.
The unit test design is developed in this stage.
4.2.2 Validation Phases
4.2.2.1 Unit Testing
In the V-Model, Unit Test Plans (UTPs) are developed during module design phase. These
UTPs are executed to eliminate bugs at code level or unit level. A unit is the smallest entity
which can independently exist, e.g. a program module. Unit testing verifies that the smallest
entity can function correctly when isolated from the rest of the code or units.
4.2.2.2 Integration Testing
Integration Test Plans are developed during the Architectural Design Phase. These tests verify
that units created and tested independently can coexist and communicate among themselves.
Test results are shared with the customer's team.
4.2.2.3 System Testing
System Test Plans are developed during the System Design Phase. Unlike Unit and Integration
Test Plans, System Test Plans are composed by the client's business team. System Testing
ensures that expectations from the developed application are met. The whole application is
tested for its
functionality, interdependency and communication. System Testing verifies that functional
and non-functional requirements have been met. Load and performance testing, stress testing,
regression testing, etc., are subsets of system testing.
4.2.2.4 User Acceptance Testing
User Acceptance Test (UAT) plans are developed during the Requirements Analysis phase.
Test Plans are composed by business users. UAT is performed in a user environment that
resembles the production environment, using realistic data. UAT verifies that the delivered
system meets the user's requirements and that the system is ready for use in real time.
4.3 Testing Methodologies
Although there are many ways to test a system, two are of special interest at Lear: C-Unit,
which is a code-level test methodology, and testing by chronograms.
4.3.1 C-Unit
[15] [16] C-Unit is a lightweight system for writing, administering, and running unit tests in C.
It provides C programmers a basic testing functionality with a flexible variety of user
interfaces.
C-Unit is built as a static library which is linked with the user's testing code. It uses a simple
framework for building test structures, and provides a rich set of assertions for testing
common data types. In addition, several different interfaces are provided for running tests and
reporting results.
A good unit test:
- Is able to be fully automated.
- Has full control over all the pieces running (using mocks or stubs to achieve this
isolation when needed).
- Can be run in any order if part of many other tests.
- Runs in memory (no DB or file access, for example).
- Consistently returns the same result (you always run the same test, so no random
numbers, for example; save those for integration or range tests).
- Runs fast.
- Tests a single logical concept in the system.
- Is readable.
- Is maintainable.
- Is trustworthy (when you see its result, you do not need to debug the code just to be sure).
4.3.2 Chronograms-based testing
Another methodology used to test a system is by chronograms. In this approach we define the
evolution of each input signal over time and obtain the evolution of the outputs.
This testing methodology has two main advantages compared to C-Unit:
- It is more visual and understandable for anyone, especially for those not used to
code.
- We are testing the whole evolution, not only the specific transitions in which
there is some value change (as happens with C-Unit).
For example, in the image shown in Figure 4-3, the turning on of a car’s light is represented.
The output, i.e. the light turning on, is the blue signal, a Boolean value, given that the light
can only be either on or off. The red signal refers to the button used to turn on the car’s light.
Notice that when the light’s button is set to one, the light stays off: the button is not the only
requirement. What we also need in order to turn on this light is the car’s ignition, which is
what the second signal, the purple one, represents. We can see that, finally, the light of the car
is set to one a short time after the car’s ignition, and it is off again when the driver switches
off the light button.
Figure 4-3: Chronograms-based testing
5 Project Objectives
As mentioned before, many software tools are being developed to reach the robustness and
reliability necessary to sell products.
In the case that concerns us, the NETgine tool aims to test the ECU’s software.
NETgine has two main goals:
- First of all, it should let the developer run unit tests automatically, so that he
will not need to type the whole test.
- On the other hand, after the test is run, the developer will be able to get a
graphical report indicating whether the test was successful or not and, if not,
where it failed, plus drawing the evolution of all the signals in time and other
useful information.
All this implies joining and merging the two methodologies used at Lear to test the systems,
inasmuch as using this tool will ease passing from one methodology to the other.
Apart from that, and no less important, this tool will help to standardize the structure of the
tests, given that some common steps are to be followed when using NETgine.
As this tool is meant to be used by the company’s developers, it must be quite user friendly,
so that developers used to working with C-Unit can easily be introduced to it and find in it a
better way to develop the tests.
6 Requirements
Each requirement is broken down into four parts. The first one, SWRS, stands for Software
Requirements Specification. The second one describes the kind of requirement, according to
the following legend:
F: Functionality
R: Reliability - Robustness
T: Testability
M: Maintainability
In the third position, the area it refers to can be found: IN stands for Input, O for Output, RE
for Reporting and P for the whole project.
In the last position, we find the ordinal number of the requirement within each area, together
with its version.
Requirement ID Brief Description
SWRS-F-IN-1v1 Import an Excel test case
SWRS-F-IN-2v1 Select the desired software component
SWRS-F-IN-3v1 Let the user select the tick time the test is going to evaluate
SWRS-R-IN-4v1 Errors when importing: incorrect software component
SWRS-R-IN-5v1 Do not let the user choose the same software component more than once
SWRS-R-IN-6v1 Let the user delete a previously selected software component
SWRS-F-O-1v1 Create a C-Unit tree of folders
SWRS-F-O-2v1 Generate the C code unit test
SWRS-F-O-3v1 Save the generated C code and the needed files in the corresponding directories so that it compiles
SWRS-F-RE-1v1 Create a suitable XML to generate MVT kind of reports
SWRS-R-RE-2v1 Correctness of the reports
SWRS-M-P-1v1 Comments and proper indentation
Table 6-1: List of Requirements
Optional Requirements
Requirement ID Brief Description
SWRS-F-IN-OP1 Availability of choosing more than one software component
SWRS-F-O-OP1 Availability to run several tests
SWRS-F-O-OP2 Availability to add more kinds of reports in the future
Table 6-2: List of Optional Requirements
Sections of a Requirement
- Name and ID
- Reference
- Purpose
- Verification Method
- Requirement Text
List of Requirements
Reference: SWRS-F-IN-1v1
Purpose
The program must be able to import an Excel file which defines, in chronograms, the case to
test.
Verification Method
We should be able to verify the correct importation by debugging the application and
checking that the test has been loaded into the computer’s memory, and also, at the end, by
checking the graphical transitions and comparing them to the ones the test defines.
Requirement Text
A form should appear asking which test the user wants to import, and let him browse to its
location.
Only if all the introduced parameters are correct does the program proceed to import the test.
We will be able to verify the correct implementation by debugging the software and checking
whether the test information is stored in memory or not.
Reference: SWRS-F-IN-2v1
Purpose
A software component (at least one) must also be selected.
Verification Method
By debugging the application, we will be able to prove whether the software component has
been properly opened and stored in memory.
Requirement Text
Apart from the Excel test, we must select the software component according to what we want
to test. To achieve this, the same procedure as the one used to import the test can be followed:
either in another form or in the same one where the user is required to select a test, he will
browse to select it.
Reference: SWRS-F-IN-3v1
Purpose
Let the user type the time at which every transition will be evaluated.
Verification Method
By debugging the application, we will be able to prove whether the introduced tick time is the
one used to test the system. This can also be verified in the reports.
Requirement Text
In some form or view there should be a text box in which the user can introduce the tick time
he desires to use.
Reference: SWRS-R-IN-4v1
Purpose
The user should only be able to import software components with correct and valid file
extensions.
Verification Method
When testing the tool, we must try to import incorrect files and verify that the tool rejects
them.
Requirement Text
As the software component will be defined in a .c file, the user should only be able to import
files with this extension, or at least one of its headers.
Reference: SWRS-R-IN-5v1
Purpose
Do not let the user select more than once the same SWC.
Verification Method
When testing the tool, we must try to select a SWC that has already been selected.
Requirement Text
In no case will it be necessary to import a SWC that has already been imported, so the user
must not be able to do so, except when he has first erased it and selects it again.
Reference: SWRS-R-IN-6v1
Purpose
Similarly to the previous requirement, the user should be able to erase a selected SWC at any
time.
Verification Method
This should also be verified in the form where the user is required to select a SWC.
Requirement Text
Some kind of erase button or an option to erase a selected SWC must be added to the program.
Reference: SWRS-F-O-1v1
Purpose
Create the C-Unit folders’ tree.
Verification Method
Manually and visually checking the given test location.
Requirement Text
When running the application, the user will be asked where to generate all the corresponding
folders of a C-Unit test. A display showing how this tree should look will be presented later in
order to validate its implementation.
Reference: SWRS-F-O-2v1
Purpose
Generate a C-Unit test from a base template which will be filled in from the imported test case.
Verification Method
Manually checking the generated test and proving its correctness.
Requirement Text
This requirement is the main functionality of the application: to generate a template and merge
it with the given test case and the software component in order to ease and quicken the tester’s
job.
Reference: SWRS-F-O-3v1
Purpose
To save the C-Unit test and all its complements in the selected folder.
Verification Method
By checking the corresponding folders and verifying its correct functionality.
Requirement Text
The generated test will be saved in the corresponding folder of the folders’ tree generated
before.
Reference: SWRS-F-RE-1v1
Purpose
Generate XML files to create the reports.
Verification Method
Checking that the reports are located in the corresponding folders and the data they contain is
correct.
Requirement Text
XML files must be created in order to use an already existing XSLT engine that converts this
XML to HTML. The XML data will have to follow the proper structure.
Reference: SWRS-R-RE-2v1
Purpose
Verify the information given by the test reports.
Verification Method
Both kinds of reports can be compared in order to cross-check their information. Results from
a previous test will be useful to prove their exactness.
Requirement Text
The correctness of the reports’ information is a main feature which has to be achieved so that
the test is valid.
Reference: SWRS-M-P-1v1
Purpose
Follow the programming standards.
Verification Method
Visually, in the source code.
Requirement Text
The typed code must follow some coding standards such as indentation, code comments,
variables naming, etc.
Optional Requirements
Reference: SWRS-F-IN-OP1
Purpose
The user should be able to select more than one software component to test several systems.
Verification Method
Some display will show and prove that more than one software component has been selected.
Requirement Text
Preferably, the user should be able to see which ones he has already chosen. For that reason, a
display will show the name of each software component file.
Reference: SWRS-F-O-OP1
Purpose
Prove that a battery of tests can be carried out.
Verification Method
Introducing many and different test cases with different specifications.
Requirement Text
The tool should be able to run several test cases.
Reference: SWRS-F-O-OP2
Purpose
Let more kinds of reports be added in the future.
Verification Method
In the source code and with the creation of the classes structure.
Requirement Text
The application must be designed with further improvements in mind; the ability to add new
types of reports is an example.
Basic Summary
Tool Requirement Category
Import an Excel test case Mandatory
Select the desired software component Mandatory
Do not let the user choose software components with non-valid extensions Mandatory
Generate the C code test Mandatory
Let the user delete a previously selected software component Mandatory
Create a C-Unit test folders’ tree Mandatory
Let the user select the desired test tick time Mandatory
Do not let the user choose the same software component more than once Mandatory
Comment the code and follow the standards Mandatory
Save the C-Unit test and its complements into the chosen directory Mandatory
Correctness of the chronograms reports Mandatory
Create a suitable XML if needed for the MVT kind of reports Mandatory
Availability to test several test cases at a time Advanced option
Availability to add more kinds of reports in the future Advanced option
Select more than one software component Advanced option
Table 6-3: Requirements Summary
Design Decisions
1. When a given signal from the test case does not change its value along the whole test,
an assertion will be triggered at the beginning of the test to ensure its initial value.
2. An Edit Mode has been implemented so that the user can open and see the generated
test before executing it and getting the results. The main advantage of this is that the user
can check whether the code compiles or not and, if not, manually fix the errors before
getting the results. Nevertheless, this feature should not be necessary, because the
generated code should always compile.
3. NETgine checks that all the signals the test case uses also appear in the software
component. This must be done because, if the test case contains signals which the
software component does not handle, they will not be evaluated and consequently the
test will fail.
4. Similar to the 3rd design decision but the other way round, NETgine checks that all
the signals the software component handles are taken into account in the test case. If that
is not the case, it is quite probable that the test will fail because the unit test solution will
not compile, inasmuch as some signals declared in the mocked file will be missing and
compiling errors will be triggered.
Nevertheless, for both the 3rd and 4th design decisions, NETgine lets the user continue in
either case, considering that the user could still want to test just a smaller piece of the whole
system, knowing that the test will fail.
5. The Software Component must have an Init() and a Task() function preceded by the
SWC’s name in capital letters, as follows:
AHL_Init();
AHL_Task();
where “AHL” is the SWC’s name; or:
SON_Init();
SON_Task();
where “SON” is the SWC’s name.
For that reason, the user must introduce these function names in the configuration view, so
that the program knows the exact name of the functions in order to write the test dynamically.
Thanks to the edit mode, though, the user will be able to see whether he has introduced these
function names incorrectly and change them before executing the test.
7 Project’s Structure and Development
7.1 High Level Abstraction
The diagram in Figure 7-1 summarizes what NETgine needs as inputs, what it uses and what
it generates.
As will be further described, to generate a unit test the program needs to import the test case
and the software component, also known as the system under test.
It generates the unit test from a base template, and it also uses an XSLT engine to convert the
XML code generated by NETgine to HTML.
The outputs it gives are, first of all, the C-Unit framework, and also a graphical output in the
form of chronogram reports.
MVT stands for Modelling Verification Toolkit, which is the framework of the Lear
department in which this program has been developed.
All these files are described below in more detail.
Figure 7-1: Tool’s High Level Diagram
7.2 Structure Diagram
The diagram in Figure 7-2 represents the interaction between the program layers. As this
software has been developed following the Model – View – Controller architecture, the
controller layer is the one which interacts with the other layers and classes and has the power
to control the flow of the program.
The class Engine App is the one that manages the initialization of the application and prompts
the initial menu view. From this point on, all the events that are produced are handled by its
controller, which acts in consequence.
Figure 7-2: Tool’s Structure and Interaction Diagram
7.3 Layers description and functionality
In this section, the main layers of the application which have been presented are further
described.
7.3.1 Import Layer
Figure 7-3: Import Layer
The main goal of the import layer is to select the test case which is to be used and process the
data it contains. As said, the test case contains the information about what we expect the test
to do, so we need this data to compare the real outputs with the ideal or expected ones.
7.3.1.1 clsFactoryImport
First of all, after the test case has been selected, NETgine calls this class in order to check the
type of file the test case is described in and redirect the flow of the program accordingly.
Currently, the test cases are only embodied in Excel sheets, although the application is
prepared for other file types to be added in the future.
7.3.1.2 clsBaseImport
The BaseImport class is the parent class of any other class which might be added in order to
process the data of the test case. It contains general functions and subroutines which are
common regardless of the file type, such as adding a test case, adding a test signal, adding
new transitions, and so on.
7.3.1.3 clsImportExcel
As Excel sheets are currently the only method used to embody the test cases, this class is the
only inheriting one which is fully developed.
It goes over every Excel sheet, treating each sheet as a test case, and by some logical
functions it gets all the needed data and stores it in a model so that it can be read and used
from this point on.
The Excel sheets contain the expected values of all the signals over time: a column refers to
time, a row to the signals (both the inputs and the outputs), and the crossing cells hold the
value each signal takes at that very time transition.
7.3.2 Mock
In order to properly run the test, some libraries have to be added with the aim of letting the
test set and get the value of each signal.
7.3.2.1 clsMock
This class creates the mocked file which describes the information about the signals used.
Moreover, it creates another file whose purpose is to capture and write the data of the
executed test so that it can be compared to the expected data. This output is also essential to
draw the reports and the evolution of the signals.
7.3.2.2 clsStructGenerator
This class supports the mocked file by writing a structure which defines all the signals used in
the test case and their types, separating the inputs from the outputs.
7.3.3 Generation
The main goal of this layer is to generate files or outputs needed either for the test or for the
reports.
Figure 7-4: Generation Layer
7.3.3.1 clsGenerateFiles
This class physically generates many of the needed files. It also acts as a caller to other
functions or classes which fill these files.
For example, it calls the function which generates the mocked files and the one which
dynamically writes the unit test; it creates the file in which the outputs from the test will be
written; and it also calls the function that dynamically writes the XML code used to create the
MVT reports.
7.3.3.2 clsUnitTest
This class is arguably the most relevant one, inasmuch as it is the one which writes the C-Unit
test code that checks the data from the test case. That is the reason why this class is described
in detail.
First of all, it opens a .c file, previously created with writing permissions.
After that, it writes into it a common structure which can be seen as the skeleton of any unit
test. This skeleton has been designed following some Lear guidelines, although no fixed
standard existed for developing these tests.
In the first instance, after adding some comments describing the aim of the file, the libraries and headers needed to run the test properly are included: for example, the mock file which contains the information about the signals, or a C-Unit header which lets us test the output values given some input values.
The next step is to fill this skeleton dynamically with the data of the imported test case. What the test basically does is increment a variable which represents the tick time, and at every time transition it sets the input values as the test case dictates.
Then, it calls a function declared in the C-Unit library header to check the value of the output signals, passing as a parameter the expected value, which will be compared to the real one.
After those outputs are compared, the test file calls a function of one of the mock files which stores the output signal values for every tick in a text file. This data is needed because NETgine will read that file in order to determine whether the test has been successful or not, and if not, to draw the evolution of the signals.
All this means that the class must write C code from Visual Basic .NET.
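A rough sketch of the skeleton this class writes is shown below. All function and signal names here are invented stand-ins; the real names come from the imported SWC, the mock files, and Lear's C-Unit macros:

```c
#include <stdio.h>

/* Invented stand-ins for the mocked signals and the SWC entry points. */
static int in_switchPos;
static int out_lampState;
static void swc_initialize(void) { out_lampState = 0; }
static void swc_step(void)       { out_lampState = (in_switchPos > 0); }

static int failed_assertions;

/* Stand-in for a C-Unit style check: compare the expected value,
   taken from the test case, with the real value of the output. */
static void check_output(int expected, int real, int tick)
{
    if (expected != real) {
        printf("tick %d: expected %d, got %d\n", tick, expected, real);
        failed_assertions++;
    }
}

/* Skeleton of a generated unit test: increment the tick counter and,
   at every time transition, set the inputs as the test case dictates,
   step the SWC, and check the outputs against the expected values. */
int run_generated_test(void)
{
    failed_assertions = 0;
    swc_initialize();
    for (int tick = 0; tick < 4; tick++) {
        in_switchPos = (tick >= 2);            /* input transition at tick 2 */
        swc_step();
        check_output(tick >= 2, out_lampState, tick);
        /* the real test would also append out_lampState to a text file
           here, so that NETgine can later draw the signal evolution */
    }
    return failed_assertions;
}
```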
7.3.3.3 clsXMLFiles
The objective of this class is also paramount: it is the one which makes the graphical reports possible.
In order to obtain these reports, some XML code must be dynamically generated with the information that the reports will display.
The methodology used is, in part, similar to the one used to write the C code. A general structure is common to every test regardless of whether it fails or not, but there are also some sections (called tags in XML terms) which depend on each case.
With some logic, these fields are read from the model after the expected values have been compared with the real ones. For that reason, the generated test must be run before the XML files are filled.
It generates and fills several files:
- The first one is what we could call the Main Report.
- Another one is produced for every test case, and it contains the information to draw all the signals.
- There is also another report produced for every single signal, both inputs and outputs.
We will see the appearance and the details of those reports in the next sections.
But why must we generate these XML files?
The reports themselves are written in HTML, a language that eases the job of laying out a file and displaying images, text, and so on.
There also exists a language called XSLT whose purpose is to transform XML code into HTML code. This is very useful because XML is far easier to write than HTML.
For that reason, a little XSLT engine program processes these generated XML files and produces the HTML visual reports.
7.3.4 Errors Handler
Most of the errors are handled in this layer. If something unexpected or wrong happens during the execution, this layer treats those errors, for example by displaying an error or warning message, or even by exiting the application.
7.3.5 Running Engine
This class acts as the engine which runs programs automatically, relieving the user from having to run all the executable files manually.
First of all, when the whole unit test is written, a function of this class is called in order to execute the test.
Then, a subroutine creates and fills a batch file, which is a small program in itself. Its aim in this particular case is to execute an XSLT transformation that generates the reports.
When all this is done, NETgine executes this batch file and obtains the reports.
7.3.6 Check Layer
This is the layer in which the most important checks are made, providing NETgine with a certain robustness.
To give a concrete example, it checks whether the signals that the SWC handles are the same as the ones in the test case, and also the other way round.
This is also where the output values are compared to the expected ones. When they do not match, the mismatches are stored in the model: whether the test passed or not, whether every single output signal from the test case passed or not, and, if not, which transitions caused the mismatch.
It also stores the period of time in which the output mismatched and the value it took during this time, because this is quite useful information for the test report.
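The comparison the check layer performs for one output signal can be sketched as follows (this is an illustration, not NETgine's actual code, and all names are invented):

```c
/* Walk the expected and real samples of one output signal tick by tick
   and record the ticks where they mismatch, so the report can later
   highlight the transitions that caused the failure. */
int find_mismatches(const int *expected, const int *real,
                    int n_ticks, int *mismatch_ticks)
{
    int n_mismatch = 0;
    for (int t = 0; t < n_ticks; t++)
        if (expected[t] != real[t])
            mismatch_ticks[n_mismatch++] = t;  /* kept for the report */
    return n_mismatch;  /* zero means this output signal passed */
}
```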
7.3.7 Views and Controllers
Following the Model – View – Controller software model, for every (or almost every) view of the program there is also a controller. The view catches the events that occur, for example when the user clicks on a button. When that happens, unless the functionality is really simple, a function of the controller is called which handles the event and carries all the logical weight.
The main strength of this software architecture is that if, some day or in future versions, some view is modified or completely replaced by another one, all the logic and functions remain contained in its corresponding controller.
For example, when the user clicks on an OK button in some form, this act produces an event: the event of having pressed the button. The view class catches this event and calls a function or subroutine of its controller that handles the logic of this OK button.
To establish the relation between the view and the controller, an object of the controller is instantiated in the view class, and an object of the view is also instantiated in the corresponding controller of that view. Then, by code, we associate that controller to that view, and the other way round. The following example shows that:
Public Class clsControllerImport
    Protected mView As Object ' Declaration of the view object

    ' The next subroutine is called every time we declare an object of the class clsViewImport
    Public Sub New(ByVal View As clsViewImport)
        mView = View
        mView.SetController(Me) ' (1)
    End Sub

(1) We can access the clsViewImport functions through its object and a dot "."
Figure 7-5: Views and Controllers I
Initializing the view object makes the view be displayed.
    Public Sub Init()
        mView.Show()
    End Sub
    ...
End Class
Public Class clsViewImport
    Private mController As Object

    Public Sub SetController(ByVal controller As Object)
        mController = controller
    End Sub

The following function is called when an object of the view is instantiated:

    Public Sub New()
        Dim controller As New clsControllerImport(Me)
        ' This call is required by the designer.
        InitializeComponent()
    End Sub
    ...
End Class
When an object of clsViewImport is instantiated, it instantiates an object of the class clsControllerImport, passing a parameter indicating which view has declared it.
Then, the subroutine New of clsControllerImport is executed and clsViewImport is associated to the declared object mView in clsControllerImport. At this point, the controller knows which view is its own, so it only remains to determine which controller belongs to the view.
For that reason, the controller accesses the subroutine SetController of the view, passing itself as a parameter, and the view associates this controller to the object it declared before.
Now, the view – controller connection has been established.
The controller is the one that manages the flow of the views and of the program during its execution. Since the controllers are the ones that contain the logic, the movement between forms is made through their corresponding controllers.
Figure 7-6: Views and Controllers II
Figure 7-7: Views and Controllers III
7.3.7.1 Initial Menu
As the name suggests, this view – controller handles the initial menu.
7.3.7.2 Export
When the user faces the export view, he has to browse for a destination directory in which he wants to store the test. The controller for that view checks whether the selected path exists and, if so, stores it in the model.
7.3.7.3 Import
In the import view, the user has to select which test case he wants to use so that it can be imported and its data processed.
When this view is displayed, the user can also introduce some information about the test case, such as requirements, comments, etc.
7.3.7.4 Mocklib
In order to run the C-Unit test, the program needs some libraries, as we have seen. There exists a template base folder which contains libraries that should be common to many tests. In this display, the user is asked whether he wants to use these template libraries or to select his own. The latter may be necessary because he has included new files or has modified the existing ones; for example, the test could require some extra signal or some new signal type which is not yet taken into account.
7.3.7.5 Add Software Component
This view lets the user select the software component that constitutes the system under test. He can visually see which ones he has selected in a grid and is able to delete them at any time.
7.3.7.6 DFA
There is also a view in which the user has to select a folder that contains DFAs. These DFAs contain the declaration of some variables and constants that the software component uses, so NETgine needs these declarations to avoid compilation errors.
7.3.7.7 View About
This view just shows general information regarding NETgine. Due to its simplicity, it does not even have a controller.
7.3.7.8 View Configuration
Some configuration parameters are introduced in this display and stored in the model so that the test runs properly, as desired.
7.3.7.9 View Continue
This view is prompted when there is a mismatch between the test case and the software component. That can be either because the test case contains signals which the SWC does not handle, or the other way round.
This form informs the user of this fact and warns that the test could fail. Nevertheless, it offers the choice to continue. Continuing can be useful when the user just wants to test some small part of the whole test, knowing that in general it will fail, but having the chance to check and test that smaller part.
7.3.7.10 View Warnings
This view is displayed when some warning has to be shown to the user. It consists of a form with a warning message which depends on the point in the program flow at which it is called.
7.3.8 Model
7.3.8.1 clsModelDefinition
This class stores all the data which must be kept at some point of the program in order to be used later.
For example, it contains the whole structure of the test suite, with all the test cases contained in it, all the signals contained in every test case, and the values and times of the transitions for every signal of the test case. The same is done for the real outputs after the test is run.
Other useful information, such as the path where the test will be saved, is also stored in this model class.
7.3.8.2 clsItemData
Although this class is not strictly necessary, because the data it stores could also be kept by clsModelDefinition, it was made to separate that data from the rest of the model. The data it stores is everything related to the SWC which the user selects.
7.3.9 Constants
In this class, the constants used in the whole project are declared. It is very important to use constants instead of raw numbers, usually called "magic numbers". The importance lies in that only the program developer knows why he used those numbers, and it might be really hard and annoying for other programmers to understand and guess why the first developer used them. If these numbers are encapsulated in a variable or constant name, everything gets easier.
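The following C fragment illustrates the point; the value and all names are invented for illustration:

```c
/* A magic number: only the original developer knows what 134 means here.
       if (voltage_raw > 134) { ... }
   Encapsulated in a named constant, the intent explains itself: */
#define OVERVOLTAGE_THRESHOLD_RAW 134

int is_overvoltage(int voltage_raw)
{
    return voltage_raw > OVERVOLTAGE_THRESHOLD_RAW;
}
```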
7.3.10 APIs
7.3.10.1 Excel Library
This project, treated as an API, has the goal of getting all the data of the Excel sheets through
the use of many functions.
7.3.10.2 Test Library
This API, created during the development of this project, gets the data of test cases defined either in Excel sheets or in XML files, and it creates a test structure.
First of all, it can store a project. This project can contain several test suites, each of which would be, for example, a bunch of test cases (Excel sheets). A test case contains several signals, and each of these signals contains several transitions. A transition is composed of a discrete value and its time. It has been designed as an API because it can be useful for future projects.
Figure 7-8: Test Structure
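For illustration, the hierarchy the Test Library builds can be rendered in C as nested structures (the real API is a VB.NET class library, so all these type names are invented):

```c
/* A transition is composed of a discrete value and its time. */
typedef struct { double time; int value; } transition_t;

/* Each signal holds several transitions. */
typedef struct {
    const char   *name;
    transition_t *transitions;
    int           n_transitions;
} signal_t;

/* A test case contains several signals; a suite groups test cases;
   a project can contain several test suites. */
typedef struct { signal_t     *signals; int n_signals; } test_case_t;
typedef struct { test_case_t  *cases;   int n_cases;   } test_suite_t;
typedef struct { test_suite_t *suites;  int n_suites;  } test_project_t;
```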
Results Universitat Rovira i Virgili
52
8 Results
8.1 Outer Appearance
In the following grid, the forms the user interacts with are displayed. This application has been designed to be as user friendly as possible.
Initial Menu
In this display, the user first has to configure some parameters of the test, such as the tick time and the names of the functions that the SWC uses to initialize and to step. Then, he can proceed to run the test and, when this is done, get the reports. At this point he can also consult the version of the tool by clicking on the About button.
Figure 8-1: Initial Menu View
About NETgine
In this form, general information regarding the application is shown, such as its version and its developer.
Test Configuration
In the configuration view, the program stores the tick time that the user inputs, as well as the functions which the software component uses to initialize itself and to step through the time transitions.
Figure 8-2: About View
Figure 8-3: Test Configuration View
Export
In the export view, the application stores the destination folder, selected by the user, where the test will be saved.
Import Test Case
As the name suggests, at this point the user must choose the test case file on his computer.
Optionally, it is also possible to introduce some data about the test case.
Figure 8-4: Test Export View
Figure 8-5: Import Test Case View
Select Mocklib
By selecting “New Test”, the test will use the files from the template. If the user has modified some of these libraries or needs new ones, he can choose “Existing Test” and select his own mocklib folder.
Select DFAs
When the user faces this view, he must click on the squared button to select the folder in
which the DFAs he uses are stored, so that NETgine can recognize the used variables.
Figure 8-6: Import Libraries View
Figure 8-7: DFAs Import View
Add Software Component
Here the user has to click on the “Add SWC” button and select the SWC he wants to use. After that, it will be listed in the grid, and he will be able to remove it at any time. The program also checks that the user does not select the same SWC more than once.
Continue
This warning display appears when the signals of the test case do not match the SWC ones. The user can choose to stop the program or proceed anyway.
Figure 8-8: Import SWC View
Figure 8-9: Continue View
8.2 Unit Test Folders’ Structure
In this section the folders and the files generated by NETgine are presented.
These two folders are the first ones the user sees in the test destination folder after having run NETgine.
The first one contains the source code of the imported software component and all the header
files which it uses.
The above ones are the folders and files inside 02_SW_COMPONENTS.
Figure 8-10: Unit Test Folders
Figure 8-11: 02_SW_COMPONENTS Folders and Files
The first folder, named 00_tools, contains the programs that are needed to execute the unit test and generate the C-Unit reports. What exactly they do is beyond the scope of this project.
The folder named coverage is where the report regarding the coverage of the test is stored. This report is presented and described later.
Figure 8-12: 00_tools Items
Figure 8-13: report Folder Content
The above figure shows the content of the report folder. The XML files are the ones that allow generating the graphical reports, while the HTML files are the reports themselves. Inside res, there are more reports regarding each signal involved in the test. The folder named XSLT_commandLine_Transformation contains the engine program which transforms the XML files into HTML ones.
There are also batch files, little programs that launch applications or perform some action when they are executed. These are the files that NETgine executes.
The testfile folder contains the libraries and headers that the .c file of the test uses.
Inside ut_SGNmock_files we can find, for example, files such as the ones above. They are the DFA libraries, which define constants and variables used in every software component; that is the reason why they need to be imported to run the unit test properly.
The get_outputs files are the code files which retrieve the real values of the outputs after the test has been run.
The mockApiSignals file has already been described.
Finally, vs12 contains the Visual Studio solution of the unit test itself. The other files and folders are of no special interest for this project.
Figure 8-14: ut_SGNmock_files Sample Content
8.3 C-Unit Reports
The following report gives us information about which transitions failed due to a value different from the expected one, as well as the number of assertions and tests that passed.
This is the methodology C-Unit uses. If the test case defines that an output signal should have a value of 1 at second 4.58, C-Unit launches an assertion to verify whether this output has this value at this time, but this only returns either true or false.
As will be seen later, with the chronogram reports we are able to know which value this output signal actually takes, as well as its value before and after this specific moment.
Figure 8-15: C-Unit Report
This second C-Unit report, displayed below, is extremely useful because it shows the coverage percentage of the test, that is, how thoroughly we have exercised the system. A test with 100% coverage means that every statement of the code under test has been executed.
It is important to distinguish between coverage and correctness or validity. A test can pass without having 100% coverage, because we might just want to test a specific part of the whole system; and the other way round, a test can have 100% coverage but reveal several errors or mismatches.
Figure 8-16: Test Coverage
In case a test does not have 100% coverage, the C-Unit report also informs us about which code lines were not executed, or were executed without all their outcomes being exercised. For example, the test might have executed an if statement whose condition was always evaluated as true, so the false case was never executed. That would cause a test not to be fully covered.
Figure 8-17: Lines of Code Executed
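The situation described above can be reproduced with a tiny, invented example: if every test input keeps the condition true, the else branch is never executed, and the coverage report marks those lines as not covered.

```c
/* Flags recording which branch the tests actually executed. */
static int true_branch_hit;
static int false_branch_hit;

int clamp_to_zero(int x)
{
    if (x >= 0) {               /* a test that only ever passes x >= 0 ... */
        true_branch_hit = 1;
        return x;
    } else {                    /* ... never reaches this branch,          */
        false_branch_hit = 1;   /* so these lines stay uncovered           */
        return 0;
    }
}
```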
The reports shown above have not been designed in this project; they are provided by C-Unit as a reporting tool. Nevertheless, they are presented here so that the reader knows of their existence, and to clarify that the usual C-Unit user keeps all the benefits C-Unit already had.
These reports can be obtained thanks to a feature implemented in Visual Studio called Live Report.
The reports explained next are the ones that NETgine produces apart from the C-Unit ones, using HTML. They aim to give information complementary to the existing reports.
The configuration and content of the reports were designed in Lear's software department, displaying the data they needed to complete the existing tools.
All they need in order to be generated is XML documents with the proper format, containing the information required to fill the reports.
Then the existing XSLT program takes these XML files and processes them to generate the HTML report files.
So, what NETgine does is generate these XML files, which in turn make the HTML reports possible.
Nevertheless, as part of this project, the MVT reports have been modified slightly. To do that, I edited the XSLT in order to get the displays and features I desired, and consequently generated XML files which could provide this information.
8.4 MVT Reports
8.4.1 Passed Test
A passed test means that everything went as expected. That occurs when every single output signal took the expected values at every time transition.
8.4.1.1 Main Report
The images shown below present the main report's appearance when the test passed.
In this report, what we see is general information regarding the test, such as when the test was run and who ran it, and of course it indicates that the test was OK.
Figure 8-18: Main Report I
We also see whether every output passed the test or not, and how much time the test evaluates.
In the Test Cases Summary we can see how long the test took, as well as whether every output we are testing passed the test or not.
In this sample test, only one output is being tested: the signal named U3_ahlCtrlIlumSpecFunction, which refers to a light of a car, and it passed the test.
Figure 8-19: Main Report II
8.4.1.2 All Signals Report
The next image shows the report displaying the evolution of all the signals in time.
It is very useful to see visually when the signals transitioned and their influence on the outputs, making it possible to detect anything unexpected.
Figure 8-20: All Signals Report
8.4.1.3 Single Signal Report
The image below shows the single-signal report. This particular one refers to an output signal that passed; the inputs, in contrast, are drawn with a black line.
The user can scroll and zoom it, and he can also move from one signal to another thanks to the hyperlinked green arrows. At any time he is able to return to the initial default zoom value too.
Figure 8-21: Passing Output Signal
8.4.2 Failed Test
In this case, some part of the test failed. A single transition of one signal causes the whole test to fail. That means that a test will only be considered as passing if it behaved 100% as expected.
8.4.2.1 Main Report
This is the appearance of the main report when the test fails:
Figure 8-22: Main Report Failed Test I
As mentioned before, the main report also shows exactly which signals failed and which ones passed.
This test, which refers to the sonar of a motorbike, handled 6 different outputs. 3 of them passed the test case, while the others did not. The reason is that the 3 failing outputs needed more inputs to take the expected values.
Figure 8-23: Main Report Failed Test II
8.4.2.2 All Signals Report
The report that displays the evolution of all the signals in time does not differ much from the one for a passed test. It also draws the evolution of all the signals and, in the case of the failing outputs, the real value after the test has been run is drawn.
Figure 8-24: All Signals Failed Test Report
8.4.2.3 Single Signal Report
In this case, it can be noticed that some unexpected transitions took place. The blue line shows the real output after the test has been run, while the red one describes the expected evolution.
The mismatching points are also marked, because a very small mismatch could otherwise go unnoticed, and in this way it is highlighted.
Figure 8-25: Failed Signal
Tool’s Test Battery Universitat Rovira i Virgili
72
9 Tool’s Test Battery
In order to make a first attempt at testing this tool, ensuring some robustness as well as determining the scope of the program, a battery of tests has been designed and run against it. This battery is based on the requirements and specifications. It defines the testing procedure and contains the test cases.
General description of the test cases
According to the selected strategies, I have designed 15 test cases.
Initial state for all test cases: As required, I will test the robustness of the program. I will assume that some bug will be found in every test case with wrong inputs; that is, I assume that the program will only work properly for valid inputs. Even so, I will design test cases to document what happens with wrong inputs.
Stopping criteria and tolerances (pass/fail criteria): I will stop testing when at least all the requirements described before have been tested once with their test cases. I expect to find some bugs in the program; otherwise, I will assume that my testing method has not been accurate enough and I will design more test cases in order to increase its thoroughness.
Naming
Constants: should be in capital letters.
Functions: first letter capitalized and always including a verb.
Variables: in lowercase letters.
Global variables: start with “g”.
In expressions, use the constants.
Indentation
The indentation should show the logical structure.
Comments
At the beginning of the main program: purpose, input/output files, requirements/specifications,
authors, version history, global variables.
Beginning of each function: purpose, input/output variables, and a notice when a global variable is used.
Comment density: at least two words in each line of comments.
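A minimal, invented C fragment following the naming rules above looks like this (none of these identifiers come from the actual test battery):

```c
/* Constant: capital letters. The value is illustrative only. */
#define MAX_RETRIES 3

/* Global variable: prefixed with "g". */
int gRetryCount = 0;

/* Function name: first letter capitalized and containing a verb. */
int CheckRetryAllowed(int attempts)
{
    gRetryCount = attempts;            /* variable: lowercase           */
    return attempts < MAX_RETRIES;     /* use the constant in expressions */
}
```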
After designing the test cases and running them on NETgine, the results are displayed next.
Test # Test Description
SWRS-F-IN-1v1 Import an Excel test case
This test proved that NETgine is able to import an Excel test case and get its data.
1 N/A stands for Not Applicable
Step Test Conditions Evaluation Criteria (Expected
Result)
Obtained
Result
Test
Result
1a Configuration Mode Selection Configuration View Prompted As expected OK
1b Tick Time = 10ms N/A1
N/A N/A
1c Web Browser = Google Chrome N/A N/A N/A
1d Initial Function = ahl_initialize N/A N/A N/A
1e Step Function = ahl_step N/A N/A N/A
1f Press Done button
Tick Time = 10ms, Web Browser =
Google Chrome, Initial Function =
ahl_initialize, Step Function = ahl_step
As expected OK
1g Run Test N/A N/A N/A
1h
Destination Folder =
C:\Users\XPalomoteruel\Desktop\
tmp
Destination Folder =
C:\Users\XPalomoteruel\Desktop\tmp As expected OK
1i
Test Case =
MOCK_AHL_ut_scn_Test1_CF
G_SFL_INDIC_NO_SPECIAL.xl
s
Test Case =
MOCK_AHL_ut_scn_Test1_CFG_SF
L_INDIC_NO_SPECIAL.xls
As expected OK
1j Optional Fields = None Optional Fields = None As expected OK
1k Data from the test case The test case data should be the one
imported from the test case As expected OK
This test proved that NETgine is able to import and get the data from a SWC.
Test # Test Description
SWRS-F-IN-2v1 Select the desired software component
Step Test Conditions Evaluation Criteria (Expected
Result)
Obtained
Result
Test
Result
1a Configuration Mode Selection Configuration View Prompted As expected OK
1b Tick Time = 10ms N/A N/A N/A
1c Web Browser = Google Chrome N/A N/A N/A
1d Initial Function = ahl_initialize N/A N/A N/A
1e Step Function = ahl_step N/A N/A N/A
1f Press Done button
Tick Time = 10ms, Web Browser =
Google Chrome, Initial Function =
ahl_initialize, Step Function = ahl_step
As expected OK
1g Run Test N/A N/A N/A
1h
Destination Folder =
C:\Users\XPalomoteruel\Desktop\
tmp
Destination Folder =
C:\Users\XPalomoteruel\Desktop\tmp As expected OK
1i
Test Case =
MOCK_AHL_ut_scn_Test1_CF
G_SFL_INDIC_NO_SPECIAL.xl
s
Test Case =
MOCK_AHL_ut_scn_Test1_CFG_SF
L_INDIC_NO_SPECIAL.xls
As expected OK
1j Optional Fields = None Optional Fields = None As expected OK
1k Mocklib Libraries = ahl mocklib Libraries = The ones inside the selected
mocklib folder As expected OK
1l
Selected DFA:
dfa_CAN_Values_Interface,
dfa_E00_Values_Interface,
dfa_E01_Values_Interface,
dfa_E02_Values_Interface,
dfa_E03_Values_Interface,
dfa_E04_Values_Interface,
dfa_E05_Values_Interface,
dfa_E06_Values_Interface,
dfa_IAP_Values_Interface,
dfa_IO_Values_Interface,
dfa_LCF_Values_Interface,
dfa_LIN_Values_Interface
DFA headers = The ones inside the
selected DFA folder As expected OK
1m Selected SWC: ahl.c
The software component source file
has to be ahl.c and its information
must be stored in memory
As expected OK
Test # Test Description
SWRS-F-IN-3v1 Let the user select the tick time the test is going to evaluate
With this test case, it is verified that the user is free to introduce the tick time he desires to test a system.
The tick time must be a number; if the user introduces letters or symbols, NETgine notices it and asks the user to introduce a valid tick time.
Step Test Conditions Evaluation Criteria (Expected
Result)
Obtained
Result
Test
Result
1a Configuration Mode Selection Configuration View Prompted As expected OK
1b Tick Time = 10ms N/A N/A N/A
1c Web Browser = Google Chrome N/A N/A N/A
1d Initial Function = ahl_initialize N/A N/A N/A
1e Step Function = ahl_step N/A N/A N/A
1f Press Done button
Looking at the memory storage, it can
be seen that the tick time = 5ms
As expected OK
Test # Test Description
SWRS-R-IN-4v1 Errors when importing: incorrect software component
Step Test Conditions Evaluation Criteria (Expected
Result)
Obtained
Result
Test
Result
1a Configuration Mode Selection Configuration View Prompted As expected OK
1b Tick Time = 10ms N/A N/A N/A
1c Web Browser = Google Chrome N/A N/A N/A
1d Initial Function = ahl_initialize N/A N/A N/A
1e Step Function = ahl_step N/A N/A N/A
1f Press Done button
Tick Time = 10ms, Web Browser =
Google Chrome, Initial Function =
ahl_initialize, Step Function = ahl_step
As expected OK
1g Run Test N/A N/A N/A
1h
Destination Folder =
C:\Users\XPalomoteruel\Desktop\
tmp
Destination Folder =
C:\Users\XPalomoteruel\Desktop\tmp As expected OK
1i
Test Case =
MOCK_AHL_ut_scn_Test1_CF
G_SFL_INDIC_NO_SPECIAL.xl
s
Test Case =
MOCK_AHL_ut_scn_Test1_CFG_SF
L_INDIC_NO_SPECIAL.xls
As expected OK
1j Optional Fields = None Optional Fields = None As expected OK
1k Mocklib Libraries = ahl mocklib Libraries = The ones inside the selected
mocklib folder As expected OK
1l
Selected DFA:
dfa_CAN_Values_Interface,
dfa_E00_Values_Interface,
dfa_E01_Values_Interface,
dfa_E02_Values_Interface,
dfa_E03_Values_Interface,
dfa_E04_Values_Interface,
dfa_E05_Values_Interface,
dfa_E06_Values_Interface,
dfa_IAP_Values_Interface,
dfa_IO_Values_Interface,
dfa_LCF_Values_Interface,
dfa_LIN_Values_Interface
DFA headers = The ones inside the
selected DFA folder As expected OK
1m
The program filters the file
extension, so it only accepts either
.c or .h files. This test selects a
software component with no lines
of code.
The program checks that the selected
software component handles the same
signals as the test case does; as this is
not the case, an error message is
prompted.
As expected OK
As expected OK
Test # | Test Description
SWRS-R-IN-5v1 | Do not let the user choose the same software component more than once

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | Configuration Mode Selection | Configuration View Prompted | As expected | OK
1b | Tick Time = 10ms | N/A | N/A | N/A
1c | Web Browser = Google Chrome | N/A | N/A | N/A
1d | Initial Function = ahl_initialize | N/A | N/A | N/A
1e | Step Function = ahl_step | N/A | N/A | N/A
1f | Press Done button | Tick Time = 10ms, Web Browser = Google Chrome, Initial Function = ahl_initialize, Step Function = ahl_step | As expected | OK
1g | Run Test | N/A | N/A | N/A
1h | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | As expected | OK
1i | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | As expected | OK
1j | Optional Fields = None | Optional Fields = None | As expected | OK
1k | Mocklib Libraries = ahl mocklib | Libraries = The ones inside the selected mocklib folder | As expected | OK
1l | Selected DFA: dfa_CAN_Values_Interface, dfa_E00_Values_Interface, dfa_E01_Values_Interface, dfa_E02_Values_Interface, dfa_E03_Values_Interface, dfa_E04_Values_Interface, dfa_E05_Values_Interface, dfa_E06_Values_Interface, dfa_IAP_Values_Interface, dfa_IO_Values_Interface, dfa_LCF_Values_Interface, dfa_LIN_Values_Interface | DFA headers = The ones inside the selected DFA folder | As expected | OK
1m | Selected SWC: ahl.c | N/A | N/A | N/A
1n | Press Add button and select the same software component | A message warns that this software component has already been selected and does not store it | As expected | OK
Test # | Test Description
SWRS-R-IN-6v1 | Let the user delete a previously selected software component

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | Configuration Mode Selection | Configuration View Prompted | As expected | OK
1b | Tick Time = 10ms | N/A | N/A | N/A
1c | Web Browser = Google Chrome | N/A | N/A | N/A
1d | Initial Function = ahl_initialize | N/A | N/A | N/A
1e | Step Function = ahl_step | N/A | N/A | N/A
1f | Press Done button | Tick Time = 10ms, Web Browser = Google Chrome, Initial Function = ahl_initialize, Step Function = ahl_step | As expected | OK
1g | Run Test | N/A | N/A | N/A
1h | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | As expected | OK
1i | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | As expected | OK
1j | Optional Fields = None | Optional Fields = None | As expected | OK
1k | Mocklib Libraries = ahl mocklib | Libraries = The ones inside the selected mocklib folder | As expected | OK
1l | Selected DFA: dfa_CAN_Values_Interface, dfa_E00_Values_Interface, dfa_E01_Values_Interface, dfa_E02_Values_Interface, dfa_E03_Values_Interface, dfa_E04_Values_Interface, dfa_E05_Values_Interface, dfa_E06_Values_Interface, dfa_IAP_Values_Interface, dfa_IO_Values_Interface, dfa_LCF_Values_Interface, dfa_LIN_Values_Interface | DFA headers = The ones inside the selected DFA folder | As expected | OK
1m | Selected SWC: ahl.c | N/A | N/A | N/A
1n | Click on the selected software component and press the Delete button | The ahl.c software component is erased from the data grid and the memory that stored it is freed | As expected | OK
Test # | Test Description
SWRS-F-O-1v1 | Create a C-Unit tree of folders

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | Configuration Mode Selection | Configuration View Prompted | As expected | OK
1b | Tick Time = 10ms | N/A | N/A | N/A
1c | Web Browser = Google Chrome | N/A | N/A | N/A
1d | Initial Function = ahl_initialize | N/A | N/A | N/A
1e | Step Function = ahl_step | N/A | N/A | N/A
1f | Press Done button | Tick Time = 10ms, Web Browser = Google Chrome, Initial Function = ahl_initialize, Step Function = ahl_step | As expected | OK
1g | Run Test | N/A | N/A | N/A
1h | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | As expected | OK
1i | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | As expected | OK
1j | Optional Fields = None | Optional Fields = None | As expected | OK
1k | Mocklib Libraries = ahl mocklib | Libraries = The ones inside the selected mocklib folder | As expected | OK
1l | Selected DFA: dfa_CAN_Values_Interface, dfa_E00_Values_Interface, dfa_E01_Values_Interface, dfa_E02_Values_Interface, dfa_E03_Values_Interface, dfa_E04_Values_Interface, dfa_E05_Values_Interface, dfa_E06_Values_Interface, dfa_IAP_Values_Interface, dfa_IO_Values_Interface, dfa_LCF_Values_Interface, dfa_LIN_Values_Interface | DFA headers = The ones inside the selected DFA folder | As expected | OK
1m | Selected SWC: ahl.c | N/A | N/A | N/A
1n | Click on the Done button | Selected SWC: ahl.c | As expected | OK
1o | Check the test destination path | C-Unit template structure | As expected | OK
Test # | Test Description
SWRS-F-O-2v1 | Generate the C code unit test

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | Configuration Mode Selection | Configuration View Prompted | As expected | OK
1b | Tick Time = 10ms | N/A | N/A | N/A
1c | Web Browser = Google Chrome | N/A | N/A | N/A
1d | Initial Function = ahl_initialize | N/A | N/A | N/A
1e | Step Function = ahl_step | N/A | N/A | N/A
1f | Press Done button | Tick Time = 10ms, Web Browser = Google Chrome, Initial Function = ahl_initialize, Step Function = ahl_step | As expected | OK
1g | Run Test | N/A | N/A | N/A
1h | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | As expected | OK
1i | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | As expected | OK
1j | Optional Fields = None | Optional Fields = None | As expected | OK
1k | Mocklib Libraries = ahl mocklib | Libraries = The ones inside the selected mocklib folder | As expected | OK
1l | Selected DFA: dfa_CAN_Values_Interface, dfa_E00_Values_Interface, dfa_E01_Values_Interface, dfa_E02_Values_Interface, dfa_E03_Values_Interface, dfa_E04_Values_Interface, dfa_IAP_Values_Interface, dfa_IO_Values_Interface, dfa_LCF_Values_Interface, dfa_LIN_Values_Interface | DFA headers = The ones inside the selected DFA folder | As expected | OK
1m | Selected SWC: ahl.c | N/A | N/A | N/A
1n | Click on the Done button | Selected SWC: ahl.c | As expected | OK
1o | Check the file inside the selected destination test folder + \02_SW_COMPONENTS\01_TLA\ut\vs12\ut.sln | Generated C code Unit Test | As expected | OK
Test # | Test Description
SWRS-F-O-3v1 | Save the generated C code and the needed files in the corresponding directories so that it compiles

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | Configuration Mode Selection | Configuration View Prompted | As expected | OK
1b | Tick Time = 10ms | N/A | N/A | N/A
1c | Web Browser = Google Chrome | N/A | N/A | N/A
1d | Initial Function = ahl_initialize | N/A | N/A | N/A
1e | Step Function = ahl_step | N/A | N/A | N/A
1f | Press Done button | Tick Time = 10ms, Web Browser = Google Chrome, Initial Function = ahl_initialize, Step Function = ahl_step | As expected | OK
1g | Run Test | N/A | N/A | N/A
1h | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | As expected | OK
1i | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | As expected | OK
1j | Optional Fields = None | Optional Fields = None | As expected | OK
1k | Mocklib Libraries = ahl mocklib | Libraries = The ones inside the selected mocklib folder | As expected | OK
1l | Selected DFA: dfa_CAN_Values_Interface, dfa_E00_Values_Interface, dfa_E01_Values_Interface, dfa_E02_Values_Interface, dfa_E03_Values_Interface, dfa_E04_Values_Interface, dfa_IAP_Values_Interface, dfa_IO_Values_Interface, dfa_LCF_Values_Interface, dfa_LIN_Values_Interface | DFA headers = The ones inside the selected DFA folder | As expected | OK
1m | Selected SWC: ahl.c | N/A | N/A | N/A
1n | Click on the Done button | Selected SWC: ahl.c | As expected | OK
1o | Check the unit test solution inside the ut folder, the used tools inside the tools folder, the template libraries inside the testfiles folder, the MVT reports inside the reports folder, the coverage report inside the coverage folder and the mocked libraries inside utSGNmock_files | All the folders contain the expected files, either from the template or from the generated ones | As expected | OK

Note: there cannot be any error; otherwise the unit test code will not compile.
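Collating the paths exercised by SWRS-F-O-1v1 through SWRS-F-O-3v1, the generated C-Unit tree under the destination folder looks roughly as follows (folder names are the ones named in the tests; the exact nesting of some folders is inferred, not confirmed by the source):

```text
<destination>\02_SW_COMPONENTS\01_TLA\ut\
    vs12\ut.sln          (generated unit test solution)
    reports\             (MVT report XML files)
    coverage\            (coverage report)
    testfiles\           (template libraries)
    tools\               (used tools)
    utSGNmock_files\     (mocked libraries)
```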
Test # | Test Description
SWRS-F-RE-1v1 | Create a suitable XML to generate MVT kind of reports

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | Configuration Mode Selection | Configuration View Prompted | As expected | OK
1b | Tick Time = 10ms | N/A | N/A | N/A
1c | Web Browser = Google Chrome | N/A | N/A | N/A
1d | Initial Function = ahl_initialize | N/A | N/A | N/A
1e | Step Function = ahl_step | N/A | N/A | N/A
1f | Press Done button | Tick Time = 10ms, Web Browser = Google Chrome, Initial Function = ahl_initialize, Step Function = ahl_step | As expected | OK
1g | Run Test | N/A | N/A | N/A
1h | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | Destination Folder = C:\Users\XPalomoteruel\Desktop\tmp | As expected | OK
1i | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | Test Case = MOCK_AHL_ut_scn_Test1_CFG_SFL_INDIC_NO_SPECIAL.xls | As expected | OK
1j | Optional Fields = None | Optional Fields = None | As expected | OK
1k | Mocklib Libraries = ahl mocklib | Libraries = The ones inside the selected mocklib folder | As expected | OK
1l | Selected DFA: dfa_CAN_Values_Interface, dfa_E00_Values_Interface, dfa_E01_Values_Interface, dfa_E02_Values_Interface, dfa_E03_Values_Interface, dfa_E04_Values_Interface, dfa_IAP_Values_Interface, dfa_IO_Values_Interface, dfa_LCF_Values_Interface, dfa_LIN_Values_Interface | DFA headers = The ones inside the selected DFA folder | As expected | OK
1m | Selected SWC: ahl.c | N/A | N/A | N/A
1n | Click on the Done button | Selected SWC: ahl.c | As expected | OK
1o | Click on the Get Report button | The reports are automatically prompted | As expected | OK
1p | The generated XML files are located inside the test destination folder + \02_SW_COMPONENTS\01_TLA\ut\reports | The XML files for both the main report and the report which displays all the signals are located inside the reports folder, and the ones regarding the reports for every signal are located inside every test case folder | As expected | OK

Test # | Test Description
SWRS-R-RE-2v1 | Correctness of the reports

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | This has to be checked following the steps of the SWRS-F-RE-1v1 requirement and analyzing the results. To prove it, the already tested software component ahl has been run on NETgine, and the results were exactly the same. | For this component, ahl, it was expected to get the same results | As expected | OK

As for this test, we cannot ensure correctness in every single case; frequent use of the tool will determine the real scope of this feature. Even so, the ahl component has also been tested with deliberately wrong test cases, to verify that the graphical reports displayed those errors, and they did.
Note: in the tables below, NT stands for Not Tested.
Test # | Test Description
SWRS-M-P-1v1 | Comments and proper indentation

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | This cannot be tested automatically; instead, check the appearance of the source code | The code contains comments and is properly indented | As expected | OK
Test # | Test Description
SWRS-F-IN-OP1 | Availability of choosing more than one software component

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | The tool is not yet prepared to accept more than one software component: only one test case is selected, as well as one initializing function and one step function, so the program closes as it is unable to generate the test | N/A | Not implemented | NT
Test # | Test Description
SWRS-F-O-OP | Availability to run several tests

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | The tool is not yet prepared to run more than one test. To do that in this first version, the user has to close NETgine and run it again. | N/A | Not implemented | NT
Although it has not been reflected in the previous tests, because they report similar results, the following wrong inputs have also been tried:
- Empty test case
- Test case with a different timing and signal-values structure
- Empty SWC
- Non-matching test case and SWC
- Incomplete SWC (missing .c or .h files)
In all these cases, NETgine notices that the test case signals do not match the ones the software component handles and consequently prompts the continue form, advising the user that the test might fail. If the user decides to continue anyway, NETgine generates the whole unit test structure and the test code, but the test fails when attempting to run it to get the results. When that happens, NETgine is programmed to close.
- Either not selecting the correct libraries, or using the template libraries when the test should not
- Selecting incorrect DFAs
Similarly to the results mentioned before, NETgine generates the unit test structure and the test code, although the user is not advised at any point whether the test case and the software component match. As in the previous cases, NETgine exits when running the test to get the results: the test fails, because missing DFA files mean the variables and constants are not declared, which leads to compilation errors. Something similar happens when libraries are missing.
Test # | Test Description
SWRS-F-O-OP2 | Availability to add more kinds of reports in the future

Step | Test Conditions | Evaluation Criteria (Expected Result) | Obtained Result | Test Result
1a | When the first version of NETgine was released, no other kinds of reports needed to be generated. Even so, the tool stores the test case values and the test results so that they can easily be compared and displayed in other kinds of reports if needed. | N/A | Not implemented | NT
10 Tool’s impact
10.1 Time Point of View
Giving an exact figure for the time saved by using NETgine, compared with the effort of an engineer, is of course impossible. The time a developer invests in typing a test depends entirely on his or her knowledge and background, on the lines of code and their difficulty, and on many other factors.
For that reason, it is hard even to approximate the time benefit. Nevertheless, we can talk in terms of a ratio between one or two minutes and several hours, which represents a huge time saving.
Furthermore, as NETgine provides a graphical output in which the evolution of all the signals is displayed, it helps to find possible mismatches and errors; so apart from saving the time of typing whole tests, it can even help to detect errors, which is no less important.
10.2 Economic Point of View
NETgine does not provide a direct economic saving, although it does so indirectly through the time it saves. The great number of hours it frees developers from means that they can invest those hours in other tasks, consequently helping to cut down expenses.
Nevertheless, in some cases it can provide a direct economic saving. Bearing in mind that this is an application for a company, it can free some users from having to use MATLAB, thereby saving some licenses. Such cases may be few, but they exist: some workers only use a MATLAB plug-in named Signal Builder to work with chronogram tests.
10.3 Intangible Point of View
Once both the time and the economic benefits have been described, let us analyze other, non-measurable profits.
On one side, the application can be extremely useful when a worker first meets this testing methodology, or simply does not feel very comfortable with it due to a lack of knowledge. This means that NETgine can even be useful as a learning tool.
In addition, it merges both of the testing methodologies described before, which enables people to use, or at least understand, both of them; it acts as a bridge from one to the other.
What is more, if those tests happen to be shown to a client or to people not familiar with coding, the reports it generates are fully understandable by anyone.
11 Conclusions
11.1 Results on Real Projects
In order to extract some conclusions while offering a completed tool to the company, the tool needed to be tested on real projects to evaluate its real potential and correctness.
After its development, the tool was tested on two real projects. One of them was already fully developed and tested at that time, so the objective was not to expedite its development but to prove that the results given by NETgine matched the existing ones.
This test corresponded to a lighting component of an important German OEM. Although it did not work at the first trial, the trial was useful to detect a missing requirement: not all the components used the template libraries, so a form asking the user to select the needed libraries was necessary. After adding it, the tool gave the same results that had been obtained before.
The other component tested using NETgine concerned the park sonar of a car of an important Japanese OEM. The difference from the previously tested component was that this component was just starting to be tested, so I did not have results to compare with. Similarly, I realised that I needed to add a new feature which I had not considered at the beginning: LEARSAR-based tests use DFAs, the libraries where the constants and variables are declared. After adding this new feature too, NETgine started to be used in this project, giving apparently good results.
11.2 Further Improvements
The first thing that must be remarked is that, at this moment, no one can ensure that NETgine will test every component or system with 100% reliability and correctness. Like any other software tool, it needs some time and practice from its users to validate its robustness when facing different kinds of tests.
That is the main reason for writing this section, which is likely to be extended internally at Lear Corporation: with the usage of a program, bugs are found, and so are other possible features.
For a possible future version of NETgine, some features are described here which I have identified as useful but which were out of the scope of this project and of the first version of NETgine.
- Using more than one software component at a time
At this time, NETgine only works as expected when a single SWC is selected. Even so, it is fully designed with this extension in mind. This was not a main objective, inasmuch as the ratio of tests which must use more than one SWC is currently small.
- Accepting AUTOSAR-based tests
NETgine cannot currently accept AUTOSAR-based tests [17]; it is set up to run LEARSAR-based tests. LEARSAR is an adapted design architecture which, for example, does not use the RTE layer; it uses DFAs instead, and that is why NETgine needs to import them. So, a possible future improvement could be adding the features needed to accept AUTOSAR-based tests too.
Having developed the tool using the Model-View-Controller software architecture enables and eases the effort of implementing these and other new features, as well as of modifying existing ones.
12 Bibliography and References
[REF 1] National Instruments [on-line]. 7th of November, 2009. <http://www.ni.com/white-paper/3312/en/> [Consulted: 8th of November, 2014].
[REF 2] Code Project [on-line]. 8th of April, 2008. <http://www.codeproject.com/Articles/25057/Simple-Example-of-MVC-Model-View-Controller-Design> [Consulted: 10th of December, 2014].
[REF 3] Artima [on-line]. 20th of March, 2009. <http://www.artima.com/articles/dci_vision.html> [Consulted: 10th of December, 2014].
[REF 4] Wikipedia [on-line]. <http://en.wikipedia.org/wiki/Visual_Basic_.NET> [Consulted: 17th of October, 2014].
[REF 5] ZDNet [on-line]. 27th of November, 2007. <http://www.zdnet.com/news/the-top-10-it-disasters-of-all-time/177729> [Consulted: 10th of December, 2014].
[REF 6] BITS Pilani Virtual University [on-line]. <http://vu.bits-pilani.ac.in/ooad/Lesson2/topic3.htm> [Consulted: 10th of December, 2014].
[REF 7] New Mexico Tech, Computer Science [on-line]. July, 1993. <https://www.cs.nmt.edu/~cse382/reading/therac-25.pdf> [Consulted: 10th of December, 2014].
[REF 8] DevTopics [on-line]. 12th of February, 2008. <http://www.devtopics.com/20-famous-software-disasters/> [Consulted: 10th of December, 2014].
[REF 9] Kaner [on-line]. 17th of November, 2006. <http://www.kaner.com/pdfs/ETatQAI.pdf> [Consulted: 10th of December, 2014].
[REF 10] Carnegie Mellon University [on-line]. Spring 1999. Author: Jiantao Pan. <http://users.ece.cmu.edu/~koopman/des_s99/sw_testing/> [Consulted: 10th of December, 2014].
[REF 11] Software Testing Class [on-line]. 29th of May, 2012. <http://www.softwaretestingclass.com/v-model/> [Consulted: 10th of December, 2014].
[REF 12] Reliable Software Technologies [on-line]. 28th of March, 2000. Author: Brian Marick. <http://www.exampler.com/testing-com/writings/new-models.pdf> [Consulted: 10th of December, 2014].
[REF 13] ISTQB Exam Certification [on-line]. <http://istqbexamcertification.com/what-is-waterfall-model-advantages-disadvantages-and-when-to-use-it/> [Consulted: 10th of December, 2014].
[REF 14] 3scale [on-line]. <http://www.3scale.net/wp-content/uploads/2012/06/What-is-an-API-1.0.pdf> [Consulted: 10th of December, 2014].
[REF 15] C-Unit SourceForge [on-line]. <http://cunit.sourceforge.net/> [Consulted: 3rd of November, 2014].
[REF 16] The Art of Unit Testing [on-line]. October, 2011. Author: Roy Osherove. <http://artofunittesting.com/definition-of-a-unit-test/> [Consulted: 3rd of November, 2014].
[REF 17] Wikipedia [on-line]. <http://en.wikipedia.org/wiki/AUTOSAR> [Consulted: 3rd of November, 2014].
13 Annexes
User Manual
In addition to the project, a user manual has been produced so that everyone knows how the application must be used. It contains an explanation of every single display and of how to interact with it, and it covers some examples of usage.