Distributed Component-Based Systems
A. Paul Heely Jr.
Kun Lu
Ting Zhou
April 25, 1999
1 Introduction
As software systems grow larger and more complex, developers continue to look for new ways to build them. One method currently in use is building systems out of software components. A component can be any piece of self-contained code (agent, Java bean, widget) that provides a published interface and a certain set of behaviors. The hope is that developers can then use different components to build the needed systems by simply providing the glue that holds all the components together.
In this paper we look at three different areas of current component-based design. First we look at existing component-based frameworks and how they are being used. We then look at other Commercial Off the Shelf (COTS) systems and how these can be applied to the component-based methodology. Finally we look at the testing and validation issues involved in trying to use agents in a component-based way.
2 Background
2.1 Component Based Frameworks
Component frameworks are an increasingly popular strategy for providing guidance on application-level semantics or structure: how to design and arrange specific components to solve specific problems. A framework is built up from general components, which represent the commonality, or shared knowledge, in a domain. We build our various applications by customizing the framework, which in essence reuses it. The framework must be robust, generic, and stable, and the OO methodology provides good support for building frameworks with those properties. We will analyze those characteristics of the OO methodology and find out how they support frameworks. We also explain why the aglet system is a very good example of a framework. Based on those findings, we will understand what we can do in the future.
2.2 COTS-Based Systems
A new trend in software commerce is emerging: generic software components,
also called commercial-off-the-shelf (COTS) components, that contain fixed
functionality. COTS components can be incorporated into other systems still under
development so that the developing system and the generic components form a single
functional entity. The role of COTS components is to help new software systems reach consumers more quickly and cheaply. With COTS components, the functionality you desire can be
desire can be
Accessed immediately.
Obtained at a significantly lower price.
Developed by someone expert in that functionality.
Although on the surface "the COTS solution" appears straightforward and
compelling, projects that apply COTS find its use less than straightforward. Rather, they
encounter significant new trade-offs and issues. Applying COTS products is not merely a
technical matter for system integrators. It has a profound impact on business, acquisition,
and management practices, and organizational structures.
Today's common maintenance processes originated when most systems were comprised of subroutines and procedures written in source code. Typical maintenance on such systems would include impact analysis, which determines if and how different system parts interact, and regression testing, which uses test inputs from earlier versions to ensure system integrity after maintenance. For such systems, impact analysis operated procedure by procedure and could thus be used to limit re-testing to the relevant procedures. However, these traditional maintenance procedures, which rely on source code visibility, are insufficient to contend with the maintenance demands of component-based development.
The growing use of COTS components is fueled by object-oriented design and will drastically change how we build and maintain systems. When these components are incorporated into a system, maintenance becomes much harder because source code is either partially or completely invisible. For example, when you build an application for Windows NT, you have effectively teamed with hundreds of Microsoft's developers. However, when the application needs maintenance, you become a one-person team.
Clearly, component-based development forces us to rethink our maintenance technologies. This paper focuses on the issues and complications that arise when maintaining complex systems built with COTS components. If we are COTS vendors, for example, we must think not just of maintaining a block of source code in a specific application, but of maintaining code that is reused in numerous customer applications. Because each application may have slightly different requirements, component modifications may not work for all applications. If we are component integrators, we must think in terms of technologies that will let us maintain the entire system. If the components are "black boxes", visibility is limited to documentation that describes the component's operation and functionality. Although maintaining unfamiliar code is a common maintenance dilemma, maintaining systems filled with black boxes adds a new level of difficulty.
2.3 Testing and Validation of Mobile Agent Systems
A growing trend in large distributed software systems is the use of multi-agent
systems. An agent is an entity that acts autonomously, under its own control, to solve a
particular problem. An agent communicates with other agents and the rest of the world
by sending and receiving messages. Agents can be humans interacting with a software
system or an actual piece of software.
Our ability to design and build multi-agent systems has out-paced our ability to
test and validate them. Multi-agent systems pose additional difficulties in testing and
validation when compared to more traditional stand-alone pieces of software. The
autonomous nature of the interacting agents does not fit into existing system validation
techniques. The agents in these systems are often designed to be mobile, able to move
from machine to machine in order to accomplish a task. The commonly used platform-independent language Java introduces its own unique problems when trying to test
agents. The Java language is designed to run on multiple operating systems with no
change to the source code. But, the virtual machine that executes the compiled Java code
makes use of the underlying operating system where it happens to be running.
Differences in how operating systems handle networking issues or multi-tasking can have
an effect on the behavior of an agent as it runs on different platforms.
Testing methods can be broken into two groups, static and dynamic. Static testing
methods analyze the source code itself. Static methods can be used to trace the flow of
control through a piece of code, identifying which sequence of statements will be
executed for a certain range of inputs. Static techniques are also useful for identifying
sections of code that can never be reached. A section of code that is unreachable
indicates an error has occurred in either the design of the software or in the
implementation. Dynamic testing methods operate on running pieces of code. Sets of
test data are used as input to the running program. The expected output is then compared
with the actual output. Discrepancies between the actual and expected outputs indicate a
fault in the software. Validation of a software system is used to ensure that the system as
a whole performs in the way it was designed to. The software components are tested to
validate that they interact in a controlled and reasonable manner. Just as important,
validation identifies undesired behavior and unforeseen interactions between
components.
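The dynamic-testing idea above can be sketched in a few lines of Java. This is an illustrative example only: the unit under test (a hypothetical range check of the kind an agent might apply before accepting data) and all names are assumptions, not part of any agent system described here.

```java
// Dynamic testing sketch: run the unit under test with known inputs and
// compare the actual output against the expected output.
public class DynamicTestSketch {
    // Hypothetical unit under test: a range check on incoming data.
    static boolean inRange(int value, int low, int high) {
        return value >= low && value <= high;
    }

    public static void main(String[] args) {
        int[] inputs       = { -1, 0, 50, 100, 101 };
        boolean[] expected = { false, true, true, true, false };

        for (int i = 0; i < inputs.length; i++) {
            boolean actual = inRange(inputs[i], 0, 100);
            // A discrepancy between actual and expected output indicates a fault.
            if (actual != expected[i]) {
                System.out.println("FAULT at input " + inputs[i]);
            }
        }
        System.out.println("dynamic test run complete");
    }
}
```

Static methods, by contrast, would examine this source without running it, for instance flagging a branch that no input can reach.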
The components that need to be tested and validated in a multi-agent system can
be broken down into three logical pieces: classes, agents, and the entire system. Classes
are made up of the individual programming instructions. Classes need to be tested to
ensure that all statements are reachable and that data accepted for processing is within the
specified range. They must also be checked for the correct implementation of a given
algorithm. Classes are combined to form agents. Agents need to be tested to ensure that
the classes work together properly to gain the desired agent behavior. A multi-agent
system is composed of the individual agents: software and human. This entire system
must be validated to ensure that it performs in a predictable and correct manner and does
not have any undesired behavior [Lowr98, Sing97, Nard98].
3 Exploration of Component Based Systems
3.1 Developing Sophisticated Applications with Component Based Frameworks
3.1.1 Reuse through framework
To reduce the effort needed to design and maintain complex software, we want software reuse. Many pieces of software, e.g. software in the same domain, have commonalities. A natural idea for reuse is to distinguish the common components among those pieces of software and build them into a platform. We build the final application software by reusing this platform, or in other words, customizing it. The platform built up of general components is the framework.
A framework divides the effort of developing software into two relatively independent cycles: development of the framework and development of the application software. Corresponding to those two cycles, there are two groups of software engineers: application software engineers and framework software engineers. Framework software engineers build the framework, which represents the commonalities in some domain. They should have enough knowledge of the specific domain to do this work. Then application software engineers customize the framework into many final software products. The application software engineers may be independent software vendors who build for a user company, or programmers hired by the user company. If any requirement needs to change, the user only has to contact the application software engineers, and the change only needs to be adopted in the customization work.
Framework engineers may encapsulate part or all of the framework by providing application software engineers with compiled code and interface files. There are at least two reasons:
1. The framework software engineers do not allow the application software engineers to modify the framework directly. If the framework is as robust and flexible as it should be, the application software engineers only have to customize the framework and need not modify it. In case the application software engineers find problems with the framework, they have to inform the framework software engineers, who know the framework well, to solve them.
2. The framework software engineers and application software engineers may work for different companies. The company that builds the framework may sell it to another company that wants to build application software by customizing it. The company that builds the framework wants to keep the code secret. Of course, the interface must be public.
3.1.2 The development of framework
A framework may start in two ways:
1. The software engineers develop some software products. If those products have much commonality, e.g. they are in the same domain, the software engineers realize the feasibility and necessity of framing the software: they extract those commonalities and build them into a framework. The final software product is then obtained by customizing the framework.
2. The software engineers anticipate that they will build many similar software products in a specific domain and build the framework from the beginning. This costs some effort at the start, but makes the future work much easier.
A framework usually cannot be built perfectly once and for all. The reality is that sometimes the application software engineers find the framework awkward to use. This may be because the framework software engineers put something insufficiently general or flexible into the framework, e.g. an unnecessary dependence between components, or too specific an interface. The framework software engineers then improve their framework. Sometimes the application software engineers find that they still have to repeat some design in the application software. This means that something general should be added to the framework.
3.1.3 Framework and design patterns
Probably the most important step forward in OO design is the design patterns movement. You can think of a pattern as an especially clever and insightful way of solving a particular class of problems: it looks as if a lot of people have worked out all the angles of a problem and have come up with the most general, flexible solution for it. We can build those patterns into the framework, and the framework becomes an ideal carrier for them. When application software engineers use the framework, they have been provided with the solution and will tend to use the solution at hand. This not only reduces the design work, but also helps make the software robust.
3.1.4 Framework behind the aglet system
Aglets and the aglet API provide a framework for mobile agents. The aglet is a basic component and a general class for distributed computation. It encapsulates most of the complexity of a mobile agent, e.g. serialization and deserialization on dispatch, the message-passing mechanism, and the multithreading of the mobile agent. Application software engineers who use the aglet system do not have to worry about those low-level details; they are only required to program the parts that need to be customized. This is much easier than building a mobile agent from scratch.
The aglet is also a design pattern. In fact, Java provides the basic functions to support distributed computation; without aglets, we could still build distributed applications from scratch. In that case, you have to deal with many complex details, and different designers may arrive at different solutions. It is difficult to make such distributed software robust, let alone general or flexible. And since those solutions differ, it is difficult to make them cooperate on the network. The aglet provides us with a consistent and rigorous way to implement a mobile agent. In practice, we may try various approaches to distributed computation until we find an efficient approach or pattern that can serve as a basic unit of distributed computation. We summarize its characteristics, give it a new concept (the aglet), and implement it as a useful component in the framework. After that, we build distributed software based on the aglet framework, not from scratch anymore. Most of the time, we only have to maintain the customization work.
The aglet system is a kind of distributed component platform (DCP). DCPs isolate much of the conceptual and technical complexity involved in constructing component-based applications. However, they do not by themselves guarantee complete and satisfactory software applications. They help users construct, reuse, and connect components, but they supply no guidance on application-level semantics or structure: how to design and arrange specific components to solve specific problems. They also do not guarantee robust, scalable, and agile systems. For example, the aglet is still a quite basic component. If we use it carelessly, it can easily get out of control, for instance by violating security rules or getting lost in the network. For distributed component applications in any specific domain, we have to build a higher level of framework to regulate the way aglets are used. This may include reusable itinerary strategies and communication patterns between agent components. Our aglet book specifies some of these patterns. We can use a similar strategy to build a higher-level framework that implements those patterns. This in fact helps the application software engineers use aglets in a more controllable way. Of course, the framework must still be flexible to use. This work is highly application-oriented.
3.1.5 OO implementation strategy
We know that the framework must be flexible and general, so that it can be reused in many situations and can easily accommodate changes. Change is inevitable in software evolution, and we anticipate it. There are two rules that make future change-adoption easy:
1. Separate the things that change from the things that stay the same.
2. Put the same things together to avoid duplication
In this way, when a change happens, it is easy to find where to change, and the change will not propagate too far or be repeated over and over. OO languages provide some important features for building frameworks by these rules.
For any class, we have the class's functionality and its interface. The interface exists in the header file and the functionality exists in the method bodies of the source code. Of course, the interface in the source code must be consistent with its header file, so we can think of the interface as existing only in the header file, with the source file simply repeating it as the compiler requires.
3.1.5.1 Composition
This is the kind of reuse we most often encounter. We simply create an object of an existing class B inside a new class A. We say A has a B, or A contains B. B could be a data member of A or a local object in a method of A. Since OO provides good encapsulation for classes, we can easily build up a composition hierarchy. This relation is also represented in the header file inclusion hierarchy: if A contains B, A's header file must include B's header file, and this include relation is transitive. In composition, we are simply reusing the functionality of the code, not its form. If anything changes in the container class outside of the contained class, the contained class need not be changed. On the other hand, if the contained class is changed without changing its interface (e.g. its header file), the container class does not have to be changed (there is no need to recompile), even though their run-time behavior may have changed as we expected. This isolates and reduces change. It also helps to build scalable software.
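Composition can be sketched in Java as follows. The class names (Car, Engine) are hypothetical, chosen only to illustrate the "A has a B" relation described above.

```java
// A minimal sketch of reuse by composition.
class Engine {                        // the contained class B
    String start() { return "engine started"; }
}

class Car {                           // the container class A: "Car has an Engine"
    private Engine engine = new Engine();   // B as a data member of A

    String drive() {
        // Car reuses Engine's functionality, not its form; if Engine's
        // implementation changes without changing its interface, Car is untouched.
        return engine.start() + ", car moving";
    }
}

public class CompositionDemo {
    public static void main(String[] args) {
        System.out.println(new Car().drive());
    }
}
```

Because Engine is encapsulated behind its interface, its method bodies can be rewritten without recompiling Car, which is exactly the change isolation the text describes.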
3.1.5.2 Inheritance
Inheritance is one of the cornerstones of OO programming. It is the basis of genericity and polymorphism. It creates a new subclass as a type of an existing class. The properties of the superclass are inherited by the subclass. If you change the superclass, the change is inherited by the subclasses automatically, which is logically what we want; without inheritance, you would have to change each subclass one by one. If the change does not affect the interface of the superclass, i.e. the header file of the superclass does not change, we do not even need to recompile the subclasses. This in fact brings the commonality together. On the other hand, the subclass has the option to override the inherited methods or add new data members. Logically, the subclass IS A kind of the superclass and can be treated as the superclass. From the reuse point of view, the superclass is reused in all its subclasses. The subclass's header file should include that of the superclass, which is similar to composition.
In a framework, the framework engineer figures out the essential elements in the specific domain and builds up those essential classes. For example, in mobile agent software, the aglet and the message are essential elements. Those general classes are implemented in the framework and hide many low-level details. The application software engineers generate their own specific aglets or messages by subclassing the general classes and customizing them. In this way, they do not have to know the common complexity. However, they do have to know in what sequence the functions are called and how to manipulate them with the options provided by the framework designer.
Although it is possible to build an inheritance hierarchy with several levels, it is not a good idea to keep all those similar-looking subclasses. Too many of them make it difficult for the designer to distinguish between them. We should only keep a few critical levels of classes that are easy to distinguish.
3.1.5.3 Polymorphism
Inheritance provides us with the magic of polymorphism, that is, dynamic binding. This helps to isolate changes greatly. For example, in Figure 1, suppose we have a general class Person with a GetType() method. In another general class CourseDB, there is a person object of the Person class and we send the GetType() message to person. When we compile the CourseDB class, we have to include Person.h or import Person.java.
If we want to derive new specific classes Student and Faculty from Person, each with its own GetType() method, there is no impact on the code of CourseDB and we do not have to recompile CourseDB, which is very useful. At runtime, the person.GetType() call in CourseDB will invoke the GetType() corresponding to the real type of the person. In this way, even when we add new subclasses of Person, we do not have to recompile CourseDB, which may have been built before those new classes existed. This means the framework software engineer can build the framework with general classes, compile it, and give the .obj or .class files as well as the interface header files to the application
software engineer. There is no need to give the source code. The application software engineer can customize by subclassing and overriding; when the final software runs, it is the overriding method that executes.
Figure 1. Before new classes are added: CourseDB calls person.GetType(), and Person's GetType() returns "I am a person".
Figure 2. After new classes are added: Student (GetType() returns "I am a student") and Faculty (GetType() returns "I am a faculty") are derived from Person; CourseDB's call to person.GetType() is unchanged.
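The Person/CourseDB scenario of Figures 1 and 2 can be written out as a short Java sketch. The method and class names follow the figures; everything else (the describe helper, the demo class) is an assumption added for illustration.

```java
// The Person/CourseDB example: dynamic binding picks the method of the real type.
class Person {
    String getType() { return "I am a person"; }   // GetType() in the figures
}

// Subclasses added later; CourseDB needs no change or recompilation.
class Student extends Person {
    @Override String getType() { return "I am a student"; }
}

class Faculty extends Person {
    @Override String getType() { return "I am a faculty"; }
}

class CourseDB {
    // Compiled against Person only; it never names Student or Faculty.
    String describe(Person person) { return person.getType(); }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        CourseDB db = new CourseDB();
        System.out.println(db.describe(new Person()));   // I am a person
        System.out.println(db.describe(new Student()));  // I am a student
        System.out.println(db.describe(new Faculty()));  // I am a faculty
    }
}
```

CourseDB here plays the role of framework code shipped as a compiled .class file: new subclasses slot in at runtime without touching it.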
In aglets, we have seen how frequently polymorphism is used. When we build our own aglet, we subclass the default aglet in the framework. When we customize our specific class, we have to use the prescribed interface, such as onCreation(Object init). Why? Because we must use the same interface to override the method in the general aglet class, which is in the framework. Through polymorphism, the system calls our customized onCreation(Object init) at runtime. The aglet system builder does not, and need not, know what we want to customize; they only need to prescribe the interface. It is the machine that decides at runtime which method to call.
3.1.5.4 Generic
Note that to make polymorphism work, you must use the same method interface. If the framework software engineers anticipate overriding, it is important that the interface be generic enough that application software engineers can use it in various situations. To make a method's interface generic, the key is that the arguments should be generic. In Java, since every class is derived from the Object class, the most generic argument type is Object: it can hold any object. An example is onCreation(Object init); the init object can be of any class, and in the function body you can cast it to the specific type as needed. Of course, Object is also the most generic return type. For C++, since there is no such universal base class as Object, it is less convenient than Java in this respect.
Another generic issue concerns templates. C++ provides templates, with which you can treat a data type as a variable. In Java this is not necessary, because every Java class has the default root class Object, which is the most general one. Suppose we write a sorting method. To make it applicable to various data types, we can simply declare the elements to be sorted as the Object class or some other sufficiently general class. We may also utilize Java's interfaces to force certain methods to be implemented. Through polymorphism, this sorting method can be used on various data types.
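The sorting idea above can be sketched using an interface to force the needed comparison method, as the text suggests. The Ordered interface, the Score class, and the selection sort are all hypothetical illustrations, not part of any framework discussed here.

```java
// Genericity without templates: sort any elements that implement a
// (hypothetical) Ordered interface, relying on polymorphism for comparison.
interface Ordered {
    boolean lessThan(Ordered other);   // forced on every sortable class
}

class Score implements Ordered {
    final int value;
    Score(int value) { this.value = value; }
    public boolean lessThan(Ordered other) { return value < ((Score) other).value; }
}

public class GenericSort {
    // Declared on Ordered[], so the same method applies to various data types.
    static void sort(Ordered[] a) {
        // Simple selection sort; each comparison dispatches to the element's
        // own lessThan() implementation.
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[j].lessThan(a[i])) { Ordered t = a[i]; a[i] = a[j]; a[j] = t; }
    }

    public static void main(String[] args) {
        Score[] scores = { new Score(3), new Score(1), new Score(2) };
        sort(scores);
        for (Score s : scores) System.out.print(s.value + " ");
    }
}
```

Any new class that implements Ordered can be sorted by the same compiled sort method, which is the reuse the section is after.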
3.2 COTS-Based Systems
Today’s systems are mainly hybrid architectures in which part of the complete
system is custom-made and part is COTS. This partition is determined by the application
itself. Communication occurs between the custom-made and COTS parts; the
information exchanged ultimately decides the quality of the composite.
COTS products can be applied to a spectrum of systems. At one end of the spectrum are fully packaged software solutions, such as Microsoft Office or the Common Desktop Environment, that require no integration with other components. Further along
the spectrum are COTS products that support the information management domain, such
as Oracle or Sybase. These systems typically consist of both COTS products and
custom-made components, with COTS products making up the majority of the system.
Depending on how well the COTS products and custom components fit together, a small
to moderate amount of customization is usually required to enable them to work
cooperatively.
At the other end of the spectrum are systems composed of a complex mix of commercial and noncommercial products that provide large-scale functionality not otherwise available. Such systems typically require large amounts of "glue" code to integrate the set of components. These systems are typically in the embedded, real-time, or safety-critical domains.
With the use of COTS products as components of a system, a fundamental change occurs: an organization now composes the system from building blocks that may or may not work cooperatively directly out of the box. This fundamental shift from development to composition causes numerous technical, organizational, management, and business changes. It also has a pervasive impact on all lifecycle activities, regardless of which lifecycle model an organization uses: waterfall, spiral, or iterative.
Requirements describe the desired system behavior and capability under a set of specified conditions. For a COTS-based system, the specified requirements must be sufficiently flexible to accommodate a variety of available commercial products and their associated fluctuations over time. When considering the testing of COTS-based systems, you must determine what levels of testing are possible or needed. Should you test only the features used in the system? How do you test for failures in used features that may behave abnormally due to unknown dependencies between the used and unused features of a COTS product?
Maintenance also changes in fundamental ways. It is no longer solely concerned with fixing existing behavior or incorporating new mission needs. Vendors update their COTS products on their own schedules and at differing intervals. Also, a vendor may elect to eliminate, change, add, or combine features in a release. Updates to one COTS product, such as new file formats or naming convention changes, can have unforeseen consequences for other COTS products in the system. To further complicate maintenance, all COTS products require continual attention to license expirations and changes. All these events routinely occur, and may start well before an organization fields the system or a major upgrade. Pragmatically, the distinction between development and maintenance all but disappears.
3.2.1 Challenges
Maintaining systems that incorporate COTS components can be a nightmare for several reasons:
Frozen functionality.
Incompatible upgrades, such as added features or bug fixes that, while independently reliable, are incompatible with the host system.
Trojan horses.
Defective or unreliable COTS software.
Complex or defective middleware (such as wrappers), which brokers information between COTS and custom software.
3.2.2 Frozen functionality
It occurs when a component’s vendor either goes out of business or stops
supporting the component. Applications with frozen components can become
Page 18 of 33
unmaintainable. If components require periodic updates, for instance, the system
developer has a serious problem such as with a parser component that requires
modifications each time the language changes.
There is no easy solution to frozen functionality. However, you have some options. You can:
1. try to implement that functionality yourself,
2. acquire the functionality elsewhere, or
3. acquire the source code from the current vendor and maintain it yourself.
Unless you have the required domain expertise, the first option is probably implausible. The second option will work only if competing components have the functionality you need. If there are no alternatives and writing the functionality from scratch is unrealistic, the third alternative is the only option. To ensure you can exercise this option, you should negotiate for source code rights in the licensing agreement. However, if your maintenance personnel do not have the domain expertise to maintain the unfamiliar code, the results can be disastrous. Hiring the people who wrote the code as consultants is a workaround, but the costs can be prohibitive.
3.2.3 Incompatible upgrades
Software vendors upgrade and maintain components according to the needs of their biggest existing and potential customers. If the changes or fixes you need do not align with what fellow customers demand, a component you rely on may one day become incompatible with the rest of your system.
The incompatibility problem is similar to the frozen-functionality issue: Do you abandon the vendor's component and build your own, or look to competitors for options? Assuming that it is infeasible to build your own replacement, acquire one elsewhere, or modify your software, you do have another option: building a wrapper around a component to keep it from exhibiting incompatible behaviors. However, wrapping alone may not be effective if component upgrades cause incompatibilities in how the system hooks into the upgraded components. In this case, you may also have to rewrite the "glue" that connects these components to the rest of the system.
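The wrapper approach can be sketched as follows. All names here are hypothetical: the scenario assumes a vendor component whose version 2 changed its output format, and a wrapper that restores the behavior the host system expects.

```java
// Wrapping an incompatible upgrade: the wrapper hides the vendor's change.
class VendorComponentV2 {
    // Hypothetical upgraded vendor call: v2 now returns names in upper case,
    // breaking callers that relied on the v1 lower-case convention.
    String lookup(String key) { return key.toUpperCase(); }
}

class CompatibilityWrapper {
    private final VendorComponentV2 component = new VendorComponentV2();

    // The "glue" the host system calls; it restores the v1 behavior so the
    // rest of the system does not have to change.
    String lookup(String key) {
        return component.lookup(key).toLowerCase();
    }
}

public class WrapperDemo {
    public static void main(String[] args) {
        System.out.println(new CompatibilityWrapper().lookup("Alice"));
    }
}
```

If a later upgrade changes the component's calling conventions rather than just its outputs, the wrapper itself (and this glue) must be rewritten, which is the limitation noted above.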
If wrappers or modifications to the "glue" do not solve the upgrade problem, downgrading may be necessary. It can be difficult, as it is not always easy to reverse the steps taken during an upgrade. Nevertheless, like downgrading an OS or application, downgrading components is destined to become a common maintenance activity. Automated "component downgrading" tools will be needed.
However, if we could determine an upgrade's compatibility a priori, the need for downgrading would disappear. Although this is difficult, there are several dynamic approaches for certifying the compatibility of COTS components in a given system.
The fear of incompatible upgrades is well founded. Except for the relatively rare upgrades in which bugs are truly fixed without compromising code in the process, upgrades require you to swap out tried and tested functionality for unproven functionality.
3.2.4 Trojan horses
In software, a Trojan horse is a component that has been covertly and
intentionally programmed for malicious behavior. For example, a Trojan horse
component that is supposed to delete only files in a fixed directory might switch to a
privileged directory and delete all files.
Attempting to locate a Trojan horse statically, either by reading the code or putting it through a parser, is difficult because it sacrifices the code's execution context. When you swap in components, avoiding those with malicious behavior is virtually impossible. Detecting them is also problematic: it is difficult enough to identify malicious behavior when you have source code access; without it, it is nearly impossible. Although relying on known and reputable vendors can provide some protection, it is not a guarantee.
Detecting a component's malicious behavior requires that you monitor all requests from the component to the operating system and check each request's context. You can use a wrapper to do this, but not with total confidence; the process typically involves numerous calls and context checks, and the wrapper is likely to produce false positives as well as permit requests that should be denied.
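A monitoring wrapper of this kind can be sketched in a few lines. This is a deliberately simplified assumption-laden illustration: the component, the fixed directory, and the single context check stand in for the many calls and checks a real wrapper would need.

```java
// A monitoring wrapper: each request from the component is checked against
// its allowed context before being forwarded.
class FileDeletingComponent {
    // Hypothetical COTS code that asks to delete a path it chooses itself;
    // a Trojan horse might request a privileged path.
    String requestedPath() { return "/etc/passwd"; }
}

class MonitoringWrapper {
    private static final String ALLOWED_DIR = "/tmp/app-data/";

    // Context check: the component may only touch its fixed directory.
    // Returns true if the request may proceed, false if it is denied.
    boolean allowDelete(String path) {
        return path.startsWith(ALLOWED_DIR) && !path.contains("..");
    }
}

public class TrojanGuardDemo {
    public static void main(String[] args) {
        MonitoringWrapper guard = new MonitoringWrapper();
        System.out.println(guard.allowDelete("/tmp/app-data/cache.txt"));
        System.out.println(guard.allowDelete(new FileDeletingComponent().requestedPath()));
    }
}
```

Even this toy check shows the trade-off noted above: too strict a rule denies legitimate requests, too loose a rule lets malicious ones through.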
3.2.5 Defective or unreliable COTS software
Today, no uniform standards exist by which software components are tested and certified for reliability. There are process measurement schemes such as the Capability Maturity Model, ISO, and so on, but good processes do not guarantee good software.
Software reliability models have been proposed for years, but the assumptions they often make about such things as environment, defect rates, defect severities, and fault sizes are generic and may not reflect the idiosyncrasies of different environments. Thus, even if a vendor supplies a dependability score, for example, that score may be based on factors irrelevant to your environment.
If maintenance is to someday become a process of swapping components in and
out, we must have a universal standard for assessing component dependability. Such an
approach must provide enough information to account for environmental variability. We
have standard ways to assess transistor quality, and we price transistors accordingly. Why not
the same for software components? Knowing how a component affects system behavior
before we use it would certainly make component-based maintenance less of a gamble.
3.2.6 Complex or Defective Middleware
Middleware is a general term for software that joins together, mediates between, or
enhances two separate software packages. Middleware is designed to monitor and
modify how components interact. Whenever you are unsure about how a component will
interact with other system parts, you should write middleware to enforce certain
behaviors.
A wrapper is a type of middleware that limits a component's functionality. Thus
it plays a mediating role between what a component can output and what other
components that use its output can receive. Wrapping technology has roots in the safety-critical
community, which has long used such techniques to partition off safety-critical
functions. Wrappers typically work by restricting a component's input or output
information, both of which alter component functionality.
The problem with wrappers is in knowing which behaviors to protect against. If
you don't know how a component will behave, it is difficult to protect yourself against its
behavior. Querying the vendor could shake loose some information, but the best
approach is to extensively test the component in the system environment. By combining
vendor information requests and in-house testing, you can create more thorough
wrappers.
Wrappers are a reasonable way to address incompatibility, Trojan horses, and
dependability problems. But wrappers are not foolproof. They can be expensive,
incomplete, and unreliable, particularly if you are unsure about what behaviors you are
protecting yourself against.
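As a sketch of the output-restricting role described above, the following hypothetical Java wrapper clamps a component's output to a safe range before downstream components receive it. The Sensor interface and the chosen range are illustrative assumptions, not a technique from the cited literature:

```java
// Hypothetical untrusted component whose output other components consume.
interface Sensor {
    double read();
}

// Output-restricting wrapper: mediates between what the component can output
// and what downstream components receive, clamping out-of-range readings.
class RangeWrapper implements Sensor {
    private final Sensor inner;
    private final double min, max;

    RangeWrapper(Sensor inner, double min, double max) {
        this.inner = inner;
        this.min = min;
        this.max = max;
    }

    public double read() {
        double raw = inner.read();
        if (raw < min) return min;   // never below the safe floor
        if (raw > max) return max;   // never above the safe ceiling
        return raw;
    }
}

public class RangeWrapperDemo {
    public static void main(String[] args) {
        Sensor flaky = () -> 150.0;                      // untrusted COTS component
        Sensor safe = new RangeWrapper(flaky, 0, 100);   // partitioned-off behavior
        System.out.println(safe.read());                 // prints 100.0
    }
}
```

Note that the wrapper is only as good as the range it enforces, which is exactly the limitation the text raises: you must already know which behaviors to protect against.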
3.3 Testing and Validation of Mobile Agent Systems
The validation of interacting agents has much in common with integration
testing of a more traditional OO system. In multi-agent, as well as traditional, systems,
the fact that an individual agent or component works correctly in its own isolated test
environment provides no guarantee that a group of these entities interacting together is
going to operate correctly [Jorg94]. This problem of validating the behavior of
interacting agents is currently unsolved. Both types of systems are event or
message driven, which should lead to viable validation methods for multi-agent systems
by extending existing OO integration testing methods.
One method of testing traditional OO systems that extends to the testing of multi-agent
systems is the notion of a Method/Message Path (MM-Path). An MM-Path starts
with a method and ends when it reaches a method that does not issue any messages of
its own. MM-Paths are composed of linked method-message pairs inside the object, or
agent, network; this causes interleaving with and branching off of other MM-Paths. Figure 1
shows an example of what an MM-Path to message relationship looks like.
Figure 1: Example of a MM-Path.
This example shows that an MM-Path can be isolated to just one object, as in the case of A,
or can encompass the passing of messages between several different objects, as in the case
of B.
I feel that the MM-Paths described here will be especially useful in validating
multi-agent or component-based systems. Trying to model every possible scenario has in
the past been an insurmountable problem, and the fact remains that there are possibly
infinitely many different paths through an object-based system. What I feel is more useful in
this method is the ability to identify undesired behavior that an input event might create.
Figure 2 shows how an MM-Path analysis of a system would reveal undesired behavior.
Figure 2: MM-Path Identification of Undesired Behavior.
As in the first example, an input event, B, fires off the string of messages that
creates MM-Path 2. But now it can be seen that this sequence of messages also causes
the MM-Path to branch off into path 3. The original MM-Path (2) continues on as before,
producing the desired and expected output event, B. The undesired and unforeseen
events of MM-Path 3 cause method meth1 in object 2 to send a message to meth1, which
then sends a message to meth3 in object 1. When tested individually, objects 1 and 2 both
performed as expected. It is not until they interact that it becomes apparent that a single
input event (B) caused two different output events, which in this case is undesired.
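The branching analysis described above can be sketched in Java as a traversal over linked method-message pairs. The object and method names below echo the figure but are illustrative assumptions, and the sketch assumes the message graph is acyclic:

```java
import java.util.*;

// A minimal MM-Path sketch: methods are nodes, messages are directed edges.
// An MM-Path starts at a method and follows messages until it reaches a
// method that issues no messages of its own (an output event).
public class MMPathDemo {
    static Map<String, List<String>> messages = new HashMap<>();

    static void send(String from, String to) {
        messages.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // Collect every terminal method (output event) reachable from 'start'.
    // Assumes the message graph is acyclic, as in the figures.
    static Set<String> outputEvents(String start) {
        Set<String> outputs = new LinkedHashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(start);
        while (!work.isEmpty()) {
            String m = work.pop();
            List<String> next = messages.getOrDefault(m, List.of());
            if (next.isEmpty()) outputs.add(m);   // issues no messages: path ends here
            else next.forEach(work::push);        // path branches into each message sent
        }
        return outputs;
    }

    public static void main(String[] args) {
        // Hypothetical system resembling Figure 2: the input's MM-Path branches.
        send("obj2.meth1", "obj2.meth2");   // desired MM-Path 2
        send("obj2.meth2", "obj2.out");
        send("obj2.meth1", "obj1.meth3");   // unforeseen branch: MM-Path 3
        Set<String> outs = outputEvents("obj2.meth1");
        if (outs.size() > 1)
            System.out.println("undesired: one input caused " + outs.size() + " output events");
    }
}
```

Each object works in isolation here, but the traversal reveals that a single input event reaches two terminal methods, which is precisely the interaction-level defect the figure illustrates.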
The MM-Path analysis described above provides a way to analyze the behavior of
a system at design time. In addition to design-time verification, runtime verification is also
useful. The purpose of runtime verification is to "check the implemented system's
behavior during execution" [Lowr98]. The greatest benefit of runtime verification is that
it can catch implementation errors that a design-level analysis alone would not.
Runtime verification is necessary when trying to build component based systems.
The wide range of sources for the components means that the component needs to be
responsible for its own behavior. It is no longer possible for a team of developers to
understand the entire behavior that the individual components possess, especially when
some components may have come from external sources.
A specification language designed for non-programmers is used to specify the
desired runtime behavior; the examples used here are based on TSpec [Lowr98]. I feel
that the runtime verification of a component's state is the most useful aspect of this
method. If the state of a component is being directed by messages received from external
entities, then it becomes the responsibility of the component itself to ensure that it is
making valid transitions internally and that those transitions are occurring within allowed
time constraints.
An example that is easy to illustrate is the behavior of a stoplight. Figure 3 shows
a transition diagram for a typical stop light, also included are minimum and maximum
amounts of time that can occur between the respective state changes.
Figure 3: State Diagram for a Stop Light.
The information from the transition diagram encoded in the TSpec language is:
statemachine StopLight (color) {
    states {red, yellow, green}
    transitions {
        red -> green;
        green -> yellow;
        yellow -> red;
    }
    limits {
        duration (red, 15, 60);
        duration (yellow, 2, 6);
        duration (green, 10, 70);
    }
}
This shows the three possible states (red, yellow, green) that a stop light can be in and
the allowed minimum and maximum times that can be spent in each state, specified as the
duration. Once a specification has been written, it is compiled with the rest of the system
code and becomes an integral part of the system.
The current implementation of this method by NASA uses C++ and the observer
design pattern [Lowr98]. The use of the observer pattern should also allow this method
to be used in systems written in Java. The advantage of using a pattern is that the
verification code can be changed without affecting the operational code.
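A minimal Java sketch of this observer-based runtime verification follows, using the TSpec-style rules from the stop light example. The class and method names are illustrative assumptions, not NASA's actual implementation, and a simulated clock (in seconds) keeps the example self-contained:

```java
import java.util.*;

// The verifier is attached as an observer, so verification code can change
// without touching the operational StopLight code.
interface TransitionObserver {
    void onTransition(String from, String to, long elapsedSeconds);
}

class StopLight {
    private String state = "red";
    private long stateEntered = 0;                 // simulated clock, in seconds
    private final List<TransitionObserver> observers = new ArrayList<>();

    void addObserver(TransitionObserver o) { observers.add(o); }

    void changeTo(String next, long now) {
        long elapsed = now - stateEntered;
        for (TransitionObserver o : observers)
            o.onTransition(state, next, elapsed);  // notify before committing
        state = next;
        stateEntered = now;
    }
}

// Encodes the TSpec-style transitions and duration limits from the text.
class StopLightVerifier implements TransitionObserver {
    private static final Map<String, String> allowed =
        Map.of("red", "green", "green", "yellow", "yellow", "red");
    private static final Map<String, long[]> limits =
        Map.of("red", new long[]{15, 60}, "yellow", new long[]{2, 6},
               "green", new long[]{10, 70});
    final List<String> violations = new ArrayList<>();

    public void onTransition(String from, String to, long elapsedSeconds) {
        if (!to.equals(allowed.get(from)))
            violations.add("illegal transition " + from + " -> " + to);
        long[] d = limits.get(from);
        if (elapsedSeconds < d[0] || elapsedSeconds > d[1])
            violations.add("duration of " + from + " was " + elapsedSeconds + "s");
    }
}

public class StopLightDemo {
    public static void main(String[] args) {
        StopLight light = new StopLight();
        StopLightVerifier verifier = new StopLightVerifier();
        light.addObserver(verifier);
        light.changeTo("green", 30);   // red for 30s: within limits
        light.changeTo("yellow", 50);  // green for 20s: within limits
        light.changeTo("green", 51);   // yellow -> green is illegal, and 1s < 2s
        System.out.println(verifier.violations);
    }
}
```

Because the verifier only observes transitions, swapping in a stricter or looser specification means replacing one observer; the operational StopLight class never changes, which is the separation the pattern buys.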
4 Conclusions and Future Work
From the above discussion, we know how to use the OO methodology to build
generic and agile frameworks. We also understand that the aglet system is in fact an
implementation of design patterns for distributed components. It provides us with a basic
element and a more rigorous way to implement distributed software. However, the aglet
system is still a low-level framework, and its usage is largely unconstrained. We need to
find higher-level patterns to control its usage. The aglet book has pointed out some of
these patterns. We need to find more, and implement them in frameworks with the
support of the OO methodology.
COTS-based software development is founded on the divide-and-conquer
principle: larger system needs are broken down and satisfied by individual subsystems.
Success depends on software units that were developed elsewhere, which requires that we
rethink and upgrade traditional software maintenance solutions. The following is some
advice:
Avoid building mini-systems from components.
Keep detailed requirements documentation on each component, and avoid the
temptation to keep endlessly adding "bells and whistles".
Use an information class repository structure and promotion.
If competing applications share a component but cannot tolerate changes the
other might need, keep two similar components in the repository. It is better
to have two unique components and two working applications than a single
component and only one application that works.
In the future, software maintenance may be as simple as swapping components in
and out. However, even reusing components in similar environments may not be safe,
particularly when detailed integration testing is not part of routine maintenance.
Using COTS-based software engineering, we might someday rapidly produce
information systems. However, such COTS-based systems are much harder to certify
and validate than their predecessors. If this issue is not adequately addressed, these
systems might not be maintainable, since legacy systems require frequent regression
testing. Still, cars, airplanes, and buildings are maintained via replacement parts and
have overcome many of the challenges we now face. Why not software?
The problem of testing and validating agent-based systems is, as of yet, an unsolved one.
This paper has presented two methods that will advance our ability to test and validate
these systems. One of the major difficulties at present, especially when trying to use
MM-Path analysis, is how to model an agent-based system. The OMG has created the
Agent Working Group to address this problem. The goals of the working group are to
extend the current UML specification to include constructs that will support agent-based
systems [OMG99]. In the area of runtime verification, the NASA Ames Research
Center is already successfully using the technique. Since the majority of today's agent-based
systems are implemented in Java, research needs to be conducted to identify any
performance or design implications that runtime verification will have on Java-based
systems.
References
[Bassett97] Paul G. Bassett, "Framing Software reuse: Lessons from the world", Prentice-Hall, New Jersey, 1997.
[BrownA98] Alan W. Brown and Kurt C. Wallnau, “The Current State of CBSE”, IEEE Software, September/October, 1998.
[BrownL98] Lisa Brownsword, David Carney, and Tricia Obrendorf, “The Opportunities and Complexities of Applying COTS Components”, CROSSTALK, April, 1998.
[Eckel98] Bruce Eckel, "Thinking in Java", Prentice-Hall, New Jersey, 1998.
[Fox98] Greg Fox and Steven Marcom, “A Software Development Process for COTS-Based Information System Infrastructure”, CROSSTALK, April, 1998.
[Ghez91] Ghezzi, C., Jazayeri, M., and Mandrioli, D. Fundamentals Of Software Engineering. Prentice-Hall, Inc., New Jersey, 1991.
[Jorg94] Jorgenson, P. C., Erickson, C. Object-Oriented software testing. Communications of the ACM (Sep. 1994), 29-38
[Kira94] Kirani, S. H., Aualkernan, I. A., and Tsai, W. Evaluation of expert system testing methods. Communications of the ACM 37, 11 (Nov. 1994), 71-81.
[Koza98] Wojtek Kozaczynski and Grady Booch, “Component-Based Software Engineering”, IEEE Software, September/October, 1998.
[Krieger98] D. Krieger and R. M. Adler, "The Emergence of Distributed Component Platforms", Computer, Vol. 31, No. 3, 1998.
[Lange98] D. B. Lange and M. Oshima, “Programming and Deploying Java Mobile Agents with Aglets”, Addison Wesley Longman, Reading MA, 1998
[Lowr98] Lowry, M., and Dvorak, D. Analytic verification of flight software. IEEE Intelligent Systems (Sept./Oct. 1998), 45-49.
[Meyerd99] Bertrand Meyer, "On To Components", Computer, Vol. 32, No. 1, 1999.
[Nard98] Nardi, B. A., Miller, J. R., and Wright, D. J. Collaborative, programmable intelligent agents. Communications of the ACM 41, 3 (Mar. 1998), 96-104.
[Olea98] O’Leary, D. E. Using AI in knowledge management: Knowledge bases and ontologies. IEEE Intelligent Systems 13, 3 (May/June 1998), 34-39.
[OMG99] OMG ECDTF, April 26, 1999, www.omg.org/docs/ec/99-01-04.html
[Rama82] C. V. Ramamoorthy and F. B. Bastani, “Software reliability---Status and Perspectives”, IEEE Trans. Software Eng., July, 1982.
[Rope94] Roper, M. Software Testing. International software quality assurance series. McGraw-Hill, New York, 1994.
[Sing97] Singh, S., Norvig, P., and Cohn, D. Agents and reinforcement learning. Dr. Dobb’s Journal, 263 (Mar. 1997), 28-32.
[Sing98] Singh, M. P. Agent communication languages: Rethinking the principles. Computer (Dec. 1998), 40-47.
[Srinivasan99] Savitha Srinivasan, "Design Patterns in Object-Oriented Frameworks", Computer, Vol. 32, No. 2, 1999.
[Voas98] Jeffrey Voas, “Certifying Off-the-shelf Software Components”, Computer, June, 1998.
[Voas98] Jeffrey Voas, “COTS Software: the Economical Choice?”, IEEE Software, March/April, 1998.
[Voas98] Jeffrey Voas, “Maintaining Component-Based Systems”, IEEE Software, July/August, 1998.
APPENDIX
Component breakdown of project by author:
Kun Lu: Component Based Frameworks
Ting Zhou: COTS Based Systems
A. Paul Heely Jr.: Testing and Validation of Agent Based Systems