University of L’Aquila
Faculty of Engineering
Master’s Degree in Electronics Engineering

Scheduling for wireless control
in single hop WirelessHART networks

Supervisors:
M.D. Di Benedetto
Alf Isaksson
Karl Henrik Johansson

Author:
Valeria Ercoli
Identification Number: 171448

A.A. 2009/2010
To my Dad
Acknowledgement
I would like to thank some of the people who made this work possible, feasible and
pleasurable.
I owe special gratitude to Alf Isaksson, my supervisor at ABB, for his kindness, patience,
motivation, enthusiasm, and immense knowledge.
I would like to express my thanks to Karl Henrik Johansson, my supervisor at KTH,
for his detailed review and constructive comments, and for having involved me in this
interesting project.
I wish to express my warm and sincere thanks to my home university supervisor Maria
Domenica Di Benedetto. I will always be greatly indebted to her for providing me with
the stimulating opportunity to make my master thesis in Sweden.
I am especially grateful to Dr. Alessandro D’Innocenzo for his constant support and
encouragement during these years.
Finally, I would like to thank my family and friends for giving me happiness and joy
during my difficult moments.
Abstract
Keywords: automatic process control, WirelessHART, controlled variables, controller, fieldbus, network reliability, performance analysis, PID control, process automation, scheduling algorithms.
Contents

1 Introduction
  1.1 Outline of the thesis
    1.1.1 Chapter 2: WirelessHART
    1.1.2 Chapter 3: Scheduling Theory
    1.1.3 Chapter 4: Automation System 800xA
    1.1.4 Chapter 5: Process Control
    1.1.5 Chapter 6: Scheduling of Wireless Control
    1.1.6 Chapter 7: Conclusions and future work

2 WirelessHART
  2.1 Introduction
  2.2 Wireless Standard for Industrial Automation
  2.3 WirelessHART Standard
    2.3.1 TDMA Data Link Layer
    2.3.2 WirelessHART Gateway
    2.3.3 WirelessHART Network Manager
    2.3.4 WirelessHART Network Schedule
    2.3.5 Schedule Strategy
    2.3.6 Communication Tables
    2.3.7 Graph Routing

3 Scheduling Theory
  3.1 Introduction
  3.2 Scheduling Algorithms
    3.2.1 On-line scheduling
    3.2.2 Off-line scheduling

4 Automation System 800xA
  4.1 Introduction
  4.2 AC800M Controller
  4.3 Evolution of Control Technology
  4.4 Clock synchronization
  4.5 Fieldbus Standards
  4.6 S800 I/O
    4.6.1 S800 I/O Station Data Scanning

5 Process Control
  5.1 Introduction
  5.2 PID Control
    5.2.1 Proportional Action
    5.2.2 Integral Action
    5.2.3 Derivative Action
  5.3 Cascade Control
  5.4 Mid-Range Control
  5.5 Split-Range Control
  5.6 Ratio Control

6 Scheduling of Wireless Control
  6.1 Introduction
  6.2 Scheduling problem statement
    6.2.1 Superframes setting
    6.2.2 Communication and control superframe schedule
  6.3 Formalization of the scheduling problem
  6.4 Scheduling policy
    6.4.1 Scenario I
    6.4.2 Scenario II
    6.4.3 Shared Slots
  6.5 Boliden Example

7 Conclusions and future work

A Boliden Control Variables

Bibliography
List of Figures

2.1 WirelessHART network components.
2.2 Slot timing.
2.3 Communication tables.
2.4 Network topologies.
4.1 800xA system network architecture.
4.2 S800 I/O station overview.
4.3 S800 I/O Dynamic Data Exchange.
6.1 Industrial network topology [13].
6.2 PID controller.
6.3 Example of a generic control loop.
6.4 Set of control loops Example I.1.
6.5 Superframes Example I.1.
6.6 Set of control loops Example I.2.
6.7 Superframes Example I.2.
6.8 Set of control loops Example I.3.
6.9 Superframes Example I.3.
6.10 Set of control loops Example II.1.
6.11 Precedence graph Example II.1.
6.12 Superframes Example II.1.
6.13 Set of control loops Example II.2.
6.14 Precedence graph Example II.2.
6.15 Superframes Example II.2.
6.16 Set of control loops Example Shared Slots.
6.17 Superframes Example Shared Slots.
6.18 Diagram of froth flotation cell.
6.19 Zinc flotation circuit [24].
List of Tables

A.1 Controlled variables for the Garpenberg plant.
A.2 Garpenberg plant: variables to be scheduled in shared slots.
Chapter 1
Introduction
This thesis is part of the SOCRADES project, a European research and advanced development project whose primary objective is to create new methodologies, technologies and tools for the modeling, design, implementation and operation of networked control systems embedded in smart physical objects. These systems are becoming increasingly important in new-generation industrial automation thanks to the many advantages introduced by networks. Using a network to connect the devices makes it possible to eliminate unnecessary wiring, reducing the complexity and the overall cost of designing and implementing the control systems. In recent years the rapid spread of wireless technologies has opened new scenarios for communication in the automation field. The benefits of using wireless communication in networked control systems are many: first of all the gain in productivity and flexibility, and of course the simplicity and convenience of sensor placement. Wireless industrial communications based on WLAN and IEEE 802.15 standards are the focus of this kind of research and development. In particular, wireless sensor/actuator networks (WSN) deserve close investigation, as they will foster the mobility and flexibility required in industrial communication. The natural features of wireless technologies enable greater opportunities for reconfiguration/upgrading, maintenance and fault tolerance. An industrial application, such as the one considered in this work, will frequently require hard bounds on the maximum allowed delay. In particular, as the sensors and actuators are part of closed-loop control systems, strict timing requirements apply, ensuring a short response time and an efficient use of the available radio bandwidth. Thus, algorithms and software capable of dealing with hard and soft time constraints are very important in control implementation and design, and areas such as real-time systems from computer science are becoming increasingly important also in control theory. This, combined with the trend of having more functionality realized in software, makes resource scheduling and its effect on control performance a relevant issue.
This work focuses on the problem of finding a good scheduling algorithm to manage the exchange of information between sensors/actuators and the gateway, and between the gateway and the controller, in a WirelessHART networked control system. WirelessHART is a wireless protocol that provides a low cost, relatively low speed (e.g., compared to IEEE 802.11g) wireless connection. The WirelessHART standard does not specify a particular scheduling algorithm to be used in a WirelessHART network; however, it does impose some requirements that must be taken into account. Scheduling theory has been deeply explored in the academic literature and has progressed in recent years, driven by the strict requirements of real-time systems such as predictability and timing constraints. However, carrying this scheduling theory over into practice is not straightforward, because theoretical approaches often do not fit a real environment, in which the notion of a task is frequently quite different from the one defined in theoretical algorithms. In these cases the implementation of a new scheduling algorithm is required.
In this thesis a heuristic, off-line, priority-based algorithm is proposed and described in depth with some meaningful examples. The suggested scheduling policy has been applied to two different ideal scenarios. The last part of the thesis deals with a level control problem in a mineral flotation plant and with the possibility of using a WirelessHART network in that plant. The proposed algorithm is also applied to this industrial environment and proves to be a good solution to meet the feasibility-delay tradeoff.
Some relevant considerations and conclusions follow.
1.1 Outline of the thesis
1.1.1 Chapter 2: WirelessHART
This chapter describes the WirelessHART protocol. The first two sections give a general introduction to the protocol, with the main information and technical characteristics of WirelessHART. The third section gives a more exhaustive description of the communication protocol: the structure of the MAC (Medium Access Control) protocol, the specification of the devices and the network resources, and in particular scheduling and routing, are explained in detail.
1.1.2 Chapter 3: Scheduling Theory
This chapter gives an overview of the basic scheduling algorithms from the academic literature that are suitable for sensor networks. Both on-line and off-line algorithms are analyzed.
1.1.3 Chapter 4: Automation System 800xA
This chapter describes the Industrial IT System 800xA process automation system. In particular, the AC800M controller is presented, since it is the most current controller series used within ABB and provides modern communication features.
1.1.4 Chapter 5: Process Control
This chapter will deal with components required to build complex automation systems
using the bottom up approach. The key component is the PID controller and it will
be described in Section 5.2. Other important control principles such as cascade control,
mid-range control, split-range control and ratio control will be discussed, respectively, in
Sections 5.3, 5.4, 5.5, 5.6.
1.1.5 Chapter 6: Scheduling of Wireless Control
This chapter presents the proposed scheduling algorithm. Its performance is evaluated in terms of feasibility and delay by simulation on several ideal scenarios. The proposed approach is also applied to a real industrial environment, the Garpenberg mineral flotation plant.
1.1.6 Chapter 7: Conclusions and future work
The work concludes with a statement of future work.
Chapter 2
WirelessHART
2.1 Introduction
This chapter describes the WirelessHART standard. Section 2.2 motivates the use of wireless communication in industrial automation; Section 2.3 describes the standard itself, including the TDMA data link layer, the gateway, the network manager, and the network schedule.
2.2 Wireless Standard for Industrial Automation
Wireless technologies give several advantages to industrial automation in terms of gains in productivity and flexibility. Industrial sites are often harsh environments with stringent requirements on the type and quality of cabling. Moreover, they can easily require many thousands of cables, and it can be difficult to engineer additional wires into an already congested site. Thus wireless communication can save costs and time. At the same time, it improves reliability with respect to wired solutions by means of several diversity mechanisms, such as space diversity, frequency diversity and time diversity. Furthermore, the ad-hoc nature of wireless networks allows for easy setup and re-configuration when the network grows in size. Finally, where sensors and actuators are mounted on moving parts, hard-wiring requires complex mechanical solutions that are costly, may limit the freedom of movement of the part, and present a potential cause of failure.
As sensors and actuators are part of closed-loop control systems, an industrial application will require hard bounds on the maximum delay allowed during communication, so strict timing requirements apply. Another requirement is the coexistence of the network with other equipment and competing wireless systems. The WirelessHART standard has been released to fulfill all these demands.
2.3 WirelessHART Standard
WirelessHART is a wireless mesh network communication protocol for process automation
applications, including process measurement, control, and asset management applications.
It is based on the HART protocol, but it adds wireless capabilities to it enabling users
to gain the benefits of wireless technology while maintaining compatibility with existing
HART devices, tools and commands.
Each WirelessHART network includes three main elements:
• Field Devices that are connected to and characterize or control the Process or
Plant Equipment. All network devices, including field devices, must be capable of
routing packets on behalf of other devices.
• A Gateway which connects the WirelessHART network to a plant automation net-
work, allowing data to flow between the two networks. It enables communication
between Host Applications and field devices that are members of the WirelessHART
network. Every WirelessHART network includes one Gateway that, in turn, has one
or more network access points. They can be used to improve the effective through-
put and reliability of the network, as more packets per second through the network
are possible and the network is resistant to the failure of a single access point. It
is important to notice that a network access point is not directly connected to the
process, it is part of the Gateway.
• A Network Manager that is responsible for configuration of the network, schedul-
ing communication between network devices, management of the routing tables and
monitoring the health of the WirelessHART network. While redundant Network
Managers are supported by the standard, there must be only one active Network
Manager per WirelessHART network.
In the diagram in Figure 2.1 the WirelessHART network is connected to the plant automa-
tion network through a gateway. The plant automation network could be a TCP-based
network, a remote I/O system, or a bus such as PROFIBUS. The gateway is connected
to the WirelessHART network through network access points that increase the through-
put and improve the overall reliability of the network. All network devices such as field
devices and access points transmit and receive WirelessHART packets and perform the
basic functions necessary to support network formation and maintenance.
5
2.3. WirelessHART Standard
Figure 2.1: WirelessHART network components.
Devices can be deployed in a star topology, that is all devices are one hop to the gate-
way, to support a high performance application, a multi-hop mesh topology for a less
demanding application, or any topology in between. These possibilities give flexibility to
WirelessHART technology enabling various applications (both high and low performance)
to operate in the same network.
WirelessHART specifies the use of IEEE STD 802.15.4-2006 compatible transceivers op-
erating in the 2.4 GHz ISM (Industrial, Scientific, and Medical) radio band. The radios
employ DSSS (Direct Sequence Spread Spectrum) technology and channel hopping to
guarantee security and reliability. Communications among network devices are arbitrated
using TDMA (Time Division Multiple Access), which allows link activity to be scheduled.
2.3.1 TDMA Data Link Layer
WirelessHART uses TDMA and channel hopping to control access to the network and to coordinate communications between network devices. The basic unit is the time slot, a unit of fixed duration commonly shared by all network devices in a network. The duration of a time slot is sufficient to send or receive one packet per channel and an accompanying acknowledgement, including guard-band times for network-wide synchronization. The WirelessHART standard specifies that the duration of a time slot is 10 ms.
The TDMA Data Link Layer establishes links specifying the time slot and frequency where
communication between devices occurs. These links are organized into superframes that
periodically repeat to support cyclic and acyclic communication traffic.
All devices must support multiple superframes: different superframes may have lengths
that differ from each other and additional superframes can be enabled or disabled accord-
ing to bandwidth demand. Slot size and the superframe length (in terms of number of
time slots) are fixed and form a network cycle with a fixed repetition rate. However, a
superframe is fixed while it is active but its length can be modified when inactive.
Links may be dedicated or shared. Only two devices are assigned to a given dedicated slot,
one being the source and the other being the destination. A communication transaction
within the slot supports the transmission of a DLPDU (Data-Link Protocol Data Unit)
from the source followed right away by the transmission of an acknowledgment by the
addressed device.
Otherwise, links may be shared between multiple sources, using contention-based access; collisions may occur within a shared slot when more than one source tries to convey a packet within the same slot and channel. If a collision occurs, the destination device will not be able to receive any source’s transmission successfully and will not acknowledge any of them. To reduce the probability of repeated collisions, source devices shall use a random back-off delay when their transmission in a shared slot is not acknowledged. Shared slots are allocated to provide base bandwidth and elastic bandwidth utilization while minimizing power consumption.
For TDMA communications to be successful and efficient, all transactions have to occur in slots following specific timing requirements, so synchronization of clocks between devices in the network is critical. In particular, the network devices must have the same notion of when each time slot begins and ends, with minimal variation.
Figure 2.2: Slot timing.
In this way, transmission of the source message can start at a specified time after the beginning of the slot, allowing the source and the destination to set their frequency channel and enabling the receiver to begin listening on the specified channel. The destination must start listening before the ideal transmission start time and continue listening after it, to allow for clock tolerances. Once the transmission is complete, the communication direction is reversed and the destination device indicates whether it received the source’s DLPDU successfully, or with a specific class of detected errors, by transmitting an acknowledgement (see Figure 2.2).
To enhance reliability, channel hopping is combined with TDMA. This mechanism of frequency diversity reduces interference from other sources and multi-path fading effects. At the same time, channel hopping provides channel diversity, that is, each slot may be used on multiple channels at the same time by different nodes.
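A common way to realize this combination of TDMA and channel hopping, used in TSCH-style networks, is to derive the physical channel from the absolute slot number and a per-link channel offset. The exact lookup in WirelessHART involves a configurable hop sequence, so treat the following as a simplified sketch:

```python
NUM_CHANNELS = 15  # IEEE 802.15.4 channels 11-25 used by WirelessHART

def active_channel(asn: int, channel_offset: int) -> int:
    """Hop to a different physical channel each time the slot repeats:
    the same (slot, offset) pair cycles through all channels over time."""
    return 11 + (asn + channel_offset) % NUM_CHANNELS

# The same link (offset 3) lands on different channels in successive cycles.
print(active_channel(0, 3))    # 14
print(active_channel(100, 3))  # 24
```

Because the offset differs per link, two links in the same time slot never collide on the same channel, which is what gives each slot its channel diversity.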
NOTE: TIME KEEPING (DATA LINK LAYER)
2.3.2 WirelessHART Gateway
The WirelessHART Gateway is functionally divided into a Virtual Gateway providing a
sink or source point for the network traffic and one or more Access Points that provide the
physical connection into the WirelessHART network. If the gateway is made up of more
than one access point, the Network Manager will schedule communication traffic through
all of them.
The Gateway must provide the time synchronization messages to other network devices,
so the clock information ripples downward from the top of the network hierarchy to the
bottom, that is from the gateway to field devices. The virtual gateway communicates to
any field devices through network access points, so it must have a path to every device in
the network. On the other hand it can communicate directly with the Network Manager,
but this is an external connection. The network manager and the gateway must establish
a secure communication channel with each other, and maintain this connection to carry
control and data traffic.
The gateway can connect with the host application via various protocols (e.g. Modbus,
PROFIBUS) based on different physical layers. The network access points communicate
with the virtual gateway via a dedicated link or communication port. Moreover, each
access point can support communication with any device to which the network manager
has provided a path. As not utilizing every slot represents wasted opportunities, an access
point should have activity (e.g. transmit or receive) scheduled for every slot. Thus, if the access points have nothing else to do, they should advertise and perform shared listens.
Note that all communications with the WirelessHART network pass through the gateway, which must route packets to the specified destination (network device, host application or network manager).
2.3.3 WirelessHART Network Manager
The Network Manager is central to the overall operation of the WirelessHART network.
It is responsible for the management, scheduling, monitoring, and optimization of com-
munication resources of the WirelessHART network. It manages both the WirelessHART
network and the network devices. To perform its complete set of functions it needs con-
figuration and setup information about the network devices that it reads from the devices
themselves, information about how the network is going to be used, and feedback from
the network about its overall health.
There is one network manager per WirelessHART network, and it may be co-located
with the Gateway in the same box or located in a completely separate physical box. It
is an application rather than a network device, so its location is not restricted by the
WirelessHART specification. However, the network manager must have a secure commu-
nication channel to the gateway.
The network manager forms the WirelessHART network and establishes routes, initializing and maintaining network communication parameter values. It provides mechanisms for devices to join and leave the network. It is also responsible for managing dedicated and shared network resources, and for allocating communication resources. The allocation of communication resources is referred to as scheduling.
The network manager establishes paths between the gateway and the network devices,
but after that it is not involved in communications between host applications and network
devices. The gateway is responsible for comparing the destination address of packets
with its own address and the network manager’s address. Whenever the gateway receives
packets destined for the network manager, it may remove the packet from the wireless
network and forward them to the network manager using its secure connection. Packets
with other destinations, as well as packets received from the network manager, are routed
into the network according to the routing described in the packet.
To generate the schedule, the network manager combines information it has about the topology of the network, heuristics about communication requirements, and requests for communication resources from network devices and applications. In particular, in order
to schedule communication resources between network devices, the network manager must
know the update rate of each device.
As part of its system functions, the network manager collects network performance and
diagnostics about the behavior of the overall network. This information is available to be
reported to host-based applications but it is also used to adapt the network to changing
conditions. The adaptation includes updating route and schedule information, in order
to improve operation of the network while conserving power within devices. Reconfigura-
tion of the network may be performed while the network is operating as diagnostics are
accessible during run-time.
2.3.4 WirelessHART Network Schedule
A key characteristic of a WirelessHART network is its ability to start up automatically and self-organize. However, before a WirelessHART network can form, a network manager and a gateway must exist and must have created a private connection with each other. To initialize the network, the network manager must create the network management superframe and the network graph, which is an optimized route map.
NOTE: NETWORK MANAGEMENT SUPERFRAME
The management superframe has priority over data superframes and, following the WirelessHART specifications, should be 6400 slots long. When the network manager creates the initial superframe, it assigns links in it for the gateway’s access points and configures the gateway. It also assigns a dedicated superframe to the gateway (the gateway superframe) to schedule the network management activity that the access points have to perform (such as listening on the channels to search for new devices needing to join the network). By activating this first superframe, the network manager establishes ASN 0 (the Absolute Slot Number indicates the actual time slot used for transmission of a specific packet). The time when the network manager starts the WirelessHART network is said to be the epoch for that network.
The network manager is also responsible for generating and managing the network schedule. To do so, it needs information about the network, the communication requirements, and the capabilities of the network devices. Using this information, the network manager adjusts the schedule to meet the requirements, and then tunes it using feedback from the operation of the system.
The network manager allocates communication resources in terms of superframes and links.
A link is the full communication specification between adjacent devices in the network, that
is the communication parameters necessary to move a packet one hop. Each link includes
one time slot, a channel offset (for the frequency hopping), its type (transmit, receive
or shared), neighbor information, and transmit/receive attributes. Links are assigned to
superframes as part of the scheduling process.
A superframe is a set of slots repeating at a constant rate; these slots are called relative slots, meaning that they are numbered relative to the start of the superframe rather than to the epoch of the network. A frameID is assigned to each superframe. The superframe size, i.e. the number of slots in the superframe, is the period of that superframe, that is, how often each slot repeats. In particular, the data superframe length is determined by the data scan rate.
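The relationship between scan rate and superframe length follows directly from the fixed 10 ms slot. As a concrete sketch (the helper name and its error handling are ours, not defined by the standard):

```python
SLOT_MS = 10  # fixed WirelessHART time-slot duration

def data_superframe_size(update_period_s: float) -> int:
    """Number of slots in a data superframe whose repetition rate
    matches the device update period (illustrative helper)."""
    slots = round(update_period_s * 1000 / SLOT_MS)
    if abs(slots * SLOT_MS - update_period_s * 1000) > 1e-9:
        raise ValueError("update period must be a whole number of slots")
    return slots

# A 1 s update rate needs a 100-slot superframe; 4 s needs 400 slots.
print(data_superframe_size(1.0))  # 100
print(data_superframe_size(4.0))  # 400
```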
Time slots are assigned to devices through links. For a dedicated link there will be a send
slot in one device and a receive slot in another device. If the link is shared then there will
be a receive slot in one device and one or more transmit slots in several devices, in other
words, shared links can have more than one talker and only one listener. When a device
has a shared link, it uses a collision-avoidance scheme with a backoff/retry mechanism to
handle collisions that may occur.
Using shared links may be suitable when throughput requirements of devices are low, or
when the traffic rate is irregular. In these situations, assigning shared links may decrease
latency because the network device does not need to wait for dedicated links, but this is
true only when chances of collisions are low.
The network manager creates a set of links for each device; they determine when the device’s transceiver needs to wake up and, when it wakes up, whether it should transmit or receive. However, a link does not determine what is communicated; it only provides the “opportunity” to communicate. A link assignment specifies how the network device shall use a time slot. When a time slot is assigned to a device, the device can perform different actions within that time slot, depending on the type of the associated link: it can attempt to transmit a packet, wait to receive a packet, or remain idle.
All devices support multiple superframes of different sizes. All superframes logically start
in the same place in time: cycle 0, slot 0 of every superframe occurs at the beginning
of the epoch. Thus, time slots in different superframes are always aligned, even though
beginnings and ends of superframes may not be. Multiple superframes can be used to
define different communication schedules for various sets of devices, or to allow the entire network to run at different communication rates. In fact, by configuring a network device to participate in multiple overlapping superframes of different sizes, it is possible to establish different communication schedules and connectivity matrices that all work at the same time. However, a network device with links in multiple superframes may encounter a link arbitration situation. This may happen when two or more superframes with assigned links coincide in the same absolute time slot. In these cases, the device must operate on the link that has the numerically lowest frameID. For this reason, the gateway superframe should be allocated a large ID value. It is also required to be 40 slots long, which means that the gateway superframe spans a minimum of 400 ms. Additional superframes may also be allocated for event notifications or for HART commands issued through host applications. It is important to notice that superframes can be added, removed, activated, and deactivated by the network manager while the network is running.
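The arbitration rule can be illustrated with a short sketch (the variable names are ours; the standard only mandates choosing the numerically lowest frameID):

```python
def arbitrate(candidate_links):
    """When links from multiple superframes coincide in the same absolute
    time slot, the device operates on the link whose superframe has the
    numerically lowest frameID.

    `candidate_links` is a list of (frame_id, link) pairs.
    """
    return min(candidate_links, key=lambda pair: pair[0])

# Example: a data superframe (ID 1) wins over the gateway superframe,
# which is deliberately allocated a large ID.
links = [(255, "gateway link"), (1, "data link")]
print(arbitrate(links))  # (1, 'data link')
```

This is why assigning a large ID to the gateway superframe ensures that process-data links always take precedence over gateway traffic in a contended slot.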
2.3.5 Schedule Strategy
The WirelessHART standard does not specify a particular scheduling algorithm to be used in a WirelessHART network. However, there are some guidelines to be taken into account.
First of all, for every network device accessed through the gateway, the user has to configure how often each measurement value is to be communicated to the gateway. In order to support multiple superframes for the transfer of process measurements at different rates, the sizes of the superframes should follow a harmonic chain, in the sense that all periods should divide into each other; in particular, scan rates should be configured as integer multiples of the fastest update time supported by the network devices. For this reason, the supported update rates are defined as 2^n seconds, where n is a positive or negative integer, giving for example scan rate selections of 250 ms, 500 ms, 1 s, 2 s, 4 s, 8 s, 16 s, 32 s (or more).
The scheduling of communications associated with process measurements can be simplified by defining a superframe for each scan period and developing the schedule by allocating slots for the transmission of measurement data, proceeding from the fastest to the slowest scan rate. To avoid any conflict between the slots reserved for process measurements and those used for network management, the length of the network management superframes may be configured to be an integer multiple of the fastest scan rate, using slots that are not required for process measurement transmission. In this manner, slots may be allocated for the transmission of measurements without any conflict with the slots dedicated to management communications.
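Assuming the standard 10 ms time slots, the mapping from scan rates to superframe sizes and the harmonic-chain check can be sketched as follows (illustrative code, not part of the standard):

```python
SLOT_MS = 10  # WirelessHART time slots are 10 ms long

def superframe_sizes(scan_rates_ms):
    """Map each scan rate to a superframe length in slots and verify the
    harmonic-chain property: every period must divide every longer one."""
    sizes = sorted(rate // SLOT_MS for rate in scan_rates_ms)
    for shorter, longer in zip(sizes, sizes[1:]):
        if longer % shorter != 0:
            raise ValueError(f"{shorter} slots does not divide {longer}")
    return sizes

# Scan rates of 250 ms, 500 ms, 1 s, 2 s -> 25, 50, 100, 200 slots.
print(superframe_sizes([250, 500, 1000, 2000]))  # [25, 50, 100, 200]
```

With the 2^n-second rates listed above, the check always succeeds; a non-harmonic rate such as 300 ms would be rejected.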
2.3.6 Communication Tables
Each network device (including field devices) contains tables controlling communication
activities and packet buffers, which are used to receive, process and forward packets. The
communication tables and the relationships between them are shown in Figure 2.3.
Figure 2.3: Communication tables.
The Superframe Table contains the identifier of the superframe, the number of slots in the
superframe, a flag indicating if the superframe is currently activated and a list of links.
The Link Table, in turn, contains a reference to a neighbor which is allowed to communicate with the device, indicating the type of the link, the slot number in the superframe, the frequency-hopping channel offset and a flag indicating whether the link may be used for receive or for transmit.
The Neighbor Table is of primary importance in the management of device communications. It is a list of all neighbors of the device, i.e. all devices that the device can directly exchange messages with. The neighbor table includes all the properties and statistics pertaining to each neighbor, such as basic neighbor identity information, performance and historical statistics, and shared-slot parameters.
The Graph Table maintains the identifier of the graph, optionally the destination address, and a reference to one or more neighbors. When a graph is used for routing, the list of neighbors held by the graph table identifies those devices that are legal destinations for the packet's next hop toward its final destination. For more details about graphs and routing see the next subsection.
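The tables and the fields described above can be summarized as data structures, for instance as follows (the field names are ours; the standard defines the actual attribute layout):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Link:
    neighbor: int            # nickname of the peer device
    slot: int                # relative slot number in the superframe
    channel_offset: int      # frequency-hopping channel offset
    is_transmit: bool        # True for transmit, False for receive
    link_type: str = "normal"  # e.g. "normal" or "shared"

@dataclass
class Superframe:
    frame_id: int
    num_slots: int
    active: bool             # flag: superframe currently activated
    links: List[Link] = field(default_factory=list)

@dataclass
class Neighbor:
    address: int
    rssi: int                # signal-strength statistic
    packets_sent: int = 0    # historical statistics
    packets_lost: int = 0

@dataclass
class Graph:
    graph_id: int
    neighbors: List[int]                # legal next hops for this graph
    destination: Optional[int] = None   # optional destination address
```

The relationships shown in Figure 2.3 correspond to the references here: a superframe owns links, a link points at a neighbor, and a graph lists neighbors as candidate next hops.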
2.3.7 Graph Routing
WirelessHART networks can be configured in various topologies in order to support several
applications such as:
• star network, in which there is just one router device (the gateway) that communicates with several field devices; it is suitable for small applications;
• mesh network, in which all network devices (including field devices) are router devices; a mesh network is a robust network with redundant data paths, able to adapt to changing environments and more widespread applications;
• star-mesh network, which is a combination of the star network and the mesh one.
Figure 2.4: Network topologies.
In a star network all network devices are connected to the gateway through a single hop, while mesh networks are multi-hop networks, that is, they use two or more wireless hops to convey information from a source to a destination, thus requiring a routing algorithm to enable communication between network devices.
The WirelessHART documentation does not specify a routing algorithm; it only describes two methods of routing packets in a WirelessHART network: source routing and graph routing. All devices must support both of them. Source routing specifies a single directed route, in terms of devices and links, between a source node and a destination node. The source route is statically specified in the packet itself, which contains the list of device addresses composing the path toward the destination; intermediate devices therefore require no knowledge of the source route in advance. However, if one of the intermediate links fails the packet is lost, so source routing should only be used for testing routes, troubleshooting network paths, or for ad-hoc communication.
In graph routing, on the contrary, the graph route is a directed list of paths (subsets of directed links and devices) that connect two devices within the network that need to communicate, allowing redundant communication between network endpoints. All intermediate devices must be pre-configured with graph information that specifies the neighbors to which the packet may be forwarded.
The network manager is responsible for setting up and managing all routes and for configuring the graph information in each network device. In order to create efficient and optimized routes, the network manager needs information about the network, the communication requirements and the capabilities of the network devices. Hence, when devices are initially added to the network, the network manager stores all neighbor entries, including the signal-strength information reported by each network device. It then uses this information to build a complete network graph, which is an optimized route map in the sense that possible but suboptimal links have been removed. In particular, the network graph is optimized in terms of reliability, hop count, reporting rates, power usage and overall traffic load. As the overall network adapts to changing conditions, the network manager updates the topology, adding or deleting information in each network device. The network manager maintains the network graph as well as the portions of the graph that have been installed into each device. Once the routing information and communication requirements for each device are known, the scheduling of network resources can be performed for both scheduled upstream and downstream communications.
Graphs are unidirectional: there are upstream paths, used from field devices to the gateway, for example for transferring process measurements and alarms, and downstream routes that provide paths from the gateway to field devices, used to send control information such as setpoint changes for actuators.
According to the WirelessHART routing requirements, in a properly configured network all devices have at least two devices in the graph through which they may send packets; each graph should use a maximum of 4 neighbors as potential next-hop destinations; the minimum number of hops to be considered when constructing the graph is 2 and the maximum is 4; and, if there is a one-hop path to the gateway, it should be used.
Every graph in the network is associated with a unique GraphID, a list of neighbors and the destination's address; the latter is an optional field, since intermediate devices may simply forward the packet along the route path according to the GraphID. To send a packet on a graph, the source device includes the GraphID in the packet's network header.
In addition to the communication tables mentioned above (see Subsection 2.3.6), in order to be able to route packets along a graph, a device also needs to be configured with a connection table containing entries that include the GraphID and a neighbor address. The device routing a packet must perform a lookup in the connection table by GraphID and send the packet to any of the neighbors associated with that packet's GraphID. Once a neighbor acknowledges receipt of the packet, the routing device may release it and remove it from its transmit buffer. If an acknowledgement is not received, the device attempts to retransmit the packet at its next available opportunity. This means that, when a field device does not communicate directly with the gateway, additional communication slots must be reserved in the schedule for packet routing.
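A sketch of the lookup-and-forward step, under the assumption that the connection table is a plain list of (GraphID, neighbor) pairs:

```python
def next_hops(connection_table, graph_id):
    """Return the neighbors to which a packet carrying `graph_id` in its
    network header may be forwarded; the routing device may pick any of
    them and releases the packet once one neighbor acknowledges it."""
    hops = [nbr for gid, nbr in connection_table if gid == graph_id]
    if not hops:
        raise LookupError(f"no route for graph {graph_id}")
    return hops

# Connection table entries are (GraphID, neighbor address) pairs.
table = [(1, 0xA1), (1, 0xB2), (2, 0xC3)]
print(next_hops(table, 1))  # [161, 178]
```

Having more than one candidate next hop per GraphID is exactly what gives graph routing its path redundancy compared to source routing.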
Chapter 3
Scheduling Theory
3.1 Introduction
A real-time system is a system in which the correctness of the system behavior depends not only on the logical results of the computations, but also on the physical instants at which these results are produced; in other words, it is a system with explicit deterministic (or probabilistic) timing requirements. Real-time systems can be viewed as an important subclass of embedded systems, which are most often subject to limited computation resources as a result of economic considerations. This, combined with the trend of having more functionality realized in software, makes resource scheduling and its effect on control performance a relevant issue. A key issue in real-time systems is predictability, i.e. the ability to anticipate the behavior of the system before run-time, together with the guarantee that the system will behave as anticipated at run-time. At the same time, run-time flexibility is a desired feature, as not all run-time events can be completely taken into account in advance. In addition, the choice of scheduling strategy in real-time systems is strongly related to the nature of the timing constraints that have to be fulfilled. As different scheduling schemes provide different levels of, for example, predictability or flexibility, there is usually a trade-off between the ability to handle complex constraints and the level of flexibility provided by the selected scheduling strategy.
3.2 Scheduling Algorithms
Real-time scheduling theory offers a way of predicting the timing behavior of complex
multi-tasking computer software, assuming that a real-time system consists of the following
components:
• a set of computational and communication tasks {τ1, τ2, ..., τn} to be performed
fulfilling a number of timing requirements, where a task τi is a sequence of jobs or
operations {Ji,1, Ji,2, ..., Ji,ki} for i = 1, ..., n;
• a run-time scheduler (or dispatcher) that controls which task is executing at any
given moment;
• a set of resources shared by the set of tasks. All communication and synchronization
between tasks are assumed to occur via shared resources.
Each job has a release time and a computation time. The release time is the instant at which the job becomes available for processing, which requires a computation time c. For a task, the deadline is the time interval within which all the task's jobs must finish executing; it is specified relative to the arrival time of the task invocation, thus also defining the corresponding absolute deadline. The purpose of the deadline is to constrain the
acceptable finishing time for a task. The task characteristics can be specified by a set β made up of four elements {β1, β2, β3, β4}. β1 describes precedence relations between jobs, which are represented by means of an acyclic directed graph G = (V, E), where V is the set of jobs and, ∀i, j = 1, ..., n, ∀x = 1, ..., ki, ∀y = 1, ..., kj, (ix, jy) ∈ E iff Ji,x must be completed before Jj,y starts (notice that the involved jobs can belong to the same task or to different tasks). If there are dependencies between jobs, β1 = prec. If β2 = ri, then release dates may be specified for each task; if ri = 0 for all tasks, then β2 does not appear in β. β3 specifies restrictions on the computation time of each job of a task: if β3 = (ci,ki = 1), then each job has a unit processing requirement. If β4 = di,ki, then a deadline is specified for the job Ji,ki belonging to the task τi, meaning that Ji,ki must finish within the time interval di. Given an arbitrary set of tasks, the corresponding scheduling problem
is to find a schedule of these tasks satisfying certain restrictions and optimizing one or
more performance measures. A schedule is said to be feasible if the temporal constraints
of tasks are met at run-time (e.g. if all tasks are executed within a certain time interval).
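The task model above, together with the feasibility condition on deadlines, can be illustrated with a minimal sketch (the class and function names are ours):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    release: float      # r_i: time at which the task's jobs become available
    computation: float  # c_i: required computation time
    deadline: float     # d_i: relative deadline

def feasible(schedule, tasks):
    """Check one temporal constraint: every task starts after its release
    and finishes within its relative deadline.
    `schedule` maps task name -> (start, finish)."""
    for t in tasks:
        start, finish = schedule[t.name]
        if start < t.release or finish - t.release > t.deadline:
            return False
    return True

tasks = [Task("t1", 0.0, 2.0, 5.0), Task("t2", 1.0, 1.0, 3.0)]
print(feasible({"t1": (0.0, 2.0), "t2": (2.0, 3.0)}, tasks))  # True
print(feasible({"t1": (0.0, 6.0), "t2": (2.0, 3.0)}, tasks))  # False (t1 misses d1)
```

A full feasibility test would also check the precedence graph β1 and resource contention; this sketch covers only release times and deadlines.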
A task is periodic if it is time-driven, with a regular release (invocations are all identical
and arrive at fixed time instants). The time interval between two successive invocations of
the task τi is a constant Ti, and it is called the period of the task. If the relative deadline
for a periodic task is left unstated it is usually assumed to be equal to the task’s period.
A task is aperiodic if it is not periodic: it consists of a sequence of invocations which arrive randomly, usually in response to some external triggering event, thus it is event-driven. Sporadic tasks are a special case of aperiodic ones, but they have a fixed minimum inter-arrival time. Task invocations may be further categorized by their availability to be preempted while executing. Following this classification, tasks can be non-preemptive, if task invocations execute to completion without interruption once started, or preemptive, if task invocations can be temporarily preempted during their execution by the arrival of a higher-priority invocation. In scheduling theory the notion of priority is commonly used to order access to shared resources such as processors or communication channels. Scheduling algorithms can also be divided into two broad classes: off-line (static) algorithms and on-line (dynamic) algorithms.
3.2.1 On-line scheduling
On-line scheduling algorithms are suitable for event-triggered systems, as they provide the capability to handle dynamic on-line events. They require a complex scheduler that decides at run-time which task to execute, based on the priorities of the task invocations. Since this is a priority-based approach, on-line scheduling policies can be further categorized with respect to how priorities are assigned, distinguishing fixed-priority algorithms from dynamic-priority algorithms. In fixed-priority algorithms the dispatcher statically associates a priority with each task in advance.
Two basic priority-assignment rules are Rate Monotonic (RM) scheduling and Deadline Monotonic (DM) scheduling. According to RM, tasks are assigned fixed priorities ordered by their rates, so the task with the smallest period receives the highest priority. In DM, instead, tasks with shorter deadlines are allocated higher priorities. In dynamic-priority scheduling the priority of each task is determined at run-time; typically this requires a more complex run-time scheduler than fixed-priority scheduling. One of the most widely used algorithms belonging to this class is the Earliest Deadline First (EDF) algorithm, according to which task priorities are inversely proportional to the absolute deadlines.
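The three priority rules can be illustrated as follows (a simplified sketch; real schedulers also handle ties, preemption and resource sharing):

```python
def rm_priority(period):
    """Rate Monotonic: smaller period -> higher priority (lower key wins)."""
    return period

def dm_priority(relative_deadline):
    """Deadline Monotonic: shorter relative deadline -> higher priority."""
    return relative_deadline

def edf_pick(jobs, now):
    """Earliest Deadline First: among the jobs already released at time
    `now`, run the one with the earliest absolute deadline
    (release + relative deadline)."""
    ready = [j for j in jobs if j["release"] <= now]
    return min(ready, key=lambda j: j["release"] + j["deadline"])

jobs = [{"name": "a", "release": 0, "deadline": 10},
        {"name": "b", "release": 2, "deadline": 5}]
print(edf_pick(jobs, now=3)["name"])  # b (absolute deadline 7 < 10)
```

Note how RM and DM compute their keys once, off-line, while the EDF key depends on the actual release times and must be re-evaluated at run-time.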
Considering a physical plant interacting with a controller that measures some plant signals
and generates appropriate control signals in order to influence the behavior of the plant,
another approach is to combine scheduling theory and control theory to achieve higher
resource utilization and better control performance. In this case, the on-line scheduler
uses feedback to dynamically adjust the control task attributes in order to optimize the
global control performance, trying to keep the resource utilization at a high level and
distributing the computing resources among the control tasks. In particular, in feedback-
feedforward scheduling of control tasks, the dispatcher uses feedback from execution time
measurements and feedforward from workload changes to adjust the sampling period of
the controller, so that the performance of the closed-loop control system is maximized.
3.2.2 Off-line scheduling
Off-line scheduling algorithms require the programmer to define the entire schedule prior to execution. According to this table-driven approach, the time line is divided into slots of fixed length (the minor cycle) and tasks are statically allocated to each slot based on their rates and execution requirements. The schedule is then constructed up to the least common multiple of all periods (called the hyperperiod, or major cycle) and stored in a table. At run-time, tasks are dispatched according to the table and synchronized by a timer at the beginning of each minor cycle. Given a set of periodic processes, the problem of scheduling them so as to meet their deadline and period constraints is known to be NP-hard for one processor, which means that in the worst case an exponential amount of work appears necessary to determine whether a feasible solution exists; in other words, in the worst case an exhaustive search is needed to determine whether a schedulable solution exists.
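The major cycle and a naive static table can be computed as follows (a simplified sketch that ignores execution times and slot overflow):

```python
from math import lcm

def hyperperiod(periods):
    """Major cycle: least common multiple of all task periods."""
    return lcm(*periods)

def cyclic_table(tasks, minor_cycle):
    """Naive table-driven allocation: for each minor cycle, list the
    tasks whose release falls at the start of that cycle.
    `tasks` maps task name -> period."""
    major = hyperperiod(list(tasks.values()))
    table = []
    for t in range(0, major, minor_cycle):
        table.append([name for name, p in tasks.items() if t % p == 0])
    return table

# Periods 10 and 20 ms, minor cycle 10 ms -> major cycle 20 ms.
print(hyperperiod([10, 20]))                  # 20
print(cyclic_table({"A": 10, "B": 20}, 10))   # [['A', 'B'], ['A']]
```

A real cyclic executive must additionally check that the tasks allocated to each minor cycle actually fit within it, which is precisely where the NP-hardness mentioned above arises.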
In practice, most cyclic-executive schedules are derived manually, but the process can also be automated, even though deriving an optimal schedule is theoretically an intractable (NP-hard) problem. On one hand, off-line algorithms make it possible to consider and manage complex dependencies between tasks and resource contention among jobs when constructing the static table. Moreover, this policy produces programs that are entirely deterministic, so it is possible to know which task is executing at any given time. As tasks always execute in their preallocated slots, the experienced jitter is very small. Furthermore, the entire schedule is captured in a static table, so different operating modes can be represented by different tables. On the other hand, this scheduling policy is fragile during overload situations, since a task exceeding its predicted execution time could generate a domino effect on the subsequent tasks, causing their execution to exceed the minor cycle boundary. In addition, off-line scheduling is not flexible enough to handle dynamic situations. In fact, the creation of a new task, or a change in a task rate, might modify the values of the minor and major cycles, thus requiring a complete redesign of the scheduling table. The off-line scheduling approach is therefore more suitable for static system configurations. Another potential source of inefficiency is the need to fit all activities into common multiples of the major and minor cycles (so if an activity does not fit exactly into the schedule it may be necessary to rewrite the code to make it fit).
Chapter 4
Automation System 800xA
4.1 Introduction
The Industrial IT System 800xA is a process automation system that extends the scope of traditional control systems by incorporating all automation functions in a single environment and embracing the principles of real-time computing and networking. The 800xA system functionality is divided into a Base System, which is the system base software, and a set of options or functions that can be added to the system based on the needs of the process to be controlled. Controllers are integrated with the system through integration functions. Generally, the 800xA system is used together with the AC800M controller.
4.2 AC800M Controller
Due to its modularity the AC800M controller can be used for a wide range of applications.
It supports industry standard fieldbuses and communication protocols such as Ethernet,
PROFIBUS, FOUNDATION Fieldbus and HART, via embedded or external communi-
cation interfaces. The AC800M controller supports the S800 I/O, a distributed modular
I/O system for communication via PROFIBUS or directly connected to the controller.
In Figure 4.1 the 800xA system network architecture is shown. The automation system
network is a real time LAN (Local Area Network) optimized for high performance, and
it is used for communication between workplaces, servers and controllers. Workplaces
provide various forms of user interaction, whereas servers run software that provides system
Figure 4.1: 800xA system network architecture.
functions. In a cabled network fieldbuses are used to interconnect field devices such as
sensors, actuators, and I/O modules, and to connect these devices to the system, either
via a controller or directly to a server.
As an alternative a WirelessHART network can be used to interconnect field devices and
to connect them to a gateway, which is in turn connected to the controller by means of
fieldbuses. The automation system network can be connected to a plant network, such as
an office or corporate network, via some form of secure network interconnection such as
an isolation device. The nature of the secure interconnection depends on the nature of the
plant network and the level of security that is required for the considered application.
The automation system network can also be split into a client/server network and a control network for larger systems, or for systems where network separation is required, e.g. for system integrity reasons. The control network is based on TCP/IP over Ethernet; it is a private IP network domain especially designed for industrial applications, in which addresses are static and must be selected according to a scheme defined by the RNRP (Redundant Network Routing Protocol), a routing protocol developed by ABB that supports redundant network configurations, handling alternative paths between nodes and automatically adapting to topology changes. A redundant network consists of two
fully separate Ethernet networks. Each node has two IP addresses, one on the primary
network, and one on the backup network. Detection of a network failure and a switch over
to the redundant network takes less than one second, with no loss or duplication of data.
Following the RNRP protocol, each node cyclically sends a routing vector as a multi-cast
message on both networks. The routing vector indicates which other nodes this node can
see on the network. Each node uses received routing vectors to build a table, listing which
nodes can be reached on which of the two networks. One of the networks is designated
as the primary network and the other as the backup network. As long as the primary network works, all traffic is sent on that network; only routing vectors are sent on the backup network to verify that it works.
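A simplified sketch of the routing-vector bookkeeping (illustrative only; the actual RNRP message formats and timings are defined by ABB):

```python
def reachability(received_vectors):
    """Build the per-node reachability table from routing vectors heard
    on the two networks.  `received_vectors[net]` is the set of node IDs
    heard as multicasts on network `net` ('primary' or 'backup')."""
    table = {}
    for net, nodes in received_vectors.items():
        for node in nodes:
            table.setdefault(node, set()).add(net)
    return table

def path_for(table, node):
    """Prefer the primary network while it works; fall back to backup."""
    nets = table.get(node, set())
    if "primary" in nets:
        return "primary"
    if "backup" in nets:
        return "backup"
    return None

vectors = {"primary": {1, 2}, "backup": {1, 2, 3}}
table = reachability(vectors)
print(path_for(table, 2))  # primary
print(path_for(table, 3))  # backup (node 3 not heard on the primary)
```

The sub-second failover follows from this structure: as soon as routing vectors for a node stop arriving on the primary network, traffic for that node switches to the backup entry already present in the table.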
The control network can be a very small network with a few nodes or a large network
containing a high number of network areas. Normally, the control network covers one
manufacturing plant. A large control network can be divided into subnetworks (network
areas), for example to keep most of the time-critical communication within smaller areas,
thereby improving performance. Network areas are interconnected by means of RNRP
routers. The AC800M controller also has router capability, but since it only has two network ports it can only act as a router between two non-redundant network areas. It is not possible to route between two redundant network areas by using two AC800M controllers, one on each path. The number of nodes in one control network segment is limited, due to the limited routing resources in controller nodes and to the load generated by RNRP in the controllers.
It is important to notice that communication performance is affected by bandwidth, message length and cyclic load. In particular, the actual communication throughput for a controller depends mainly on the cycle time of the applications and on the CPU load in the controller, rather than on the Ethernet speed.
4.3 Evolution of Control Technology
The evolution of control technology can be summarized in three phases: classical control (in which process control systems were analog and made up of simple devices, with signal formats essentially determined by the need for an architecture with a minimum number of costly CPUs), digital control, and networked control.
The milestones of this development were the introduction of negative feedback amplifiers, field-adjustable PID controllers, and especially digital computers. These technologies have had a huge impact on control theory and its application, and today they are closely linked
with the PC hardware revolution.
As regards networked control, control systems with spatially distributed components have existed for several decades. In the past, however, the components of such systems were connected via hardwired connections and the systems were designed to bring all the information from the sensors to a central control station, the control policies then being implemented via the actuators, such as valves or motors. Moreover, before digital communication was introduced, sensors and actuators were hardwired to their controllers and it was impossible to transmit more than a single process or manipulated variable. In addition, analog signals only traveled in one direction, from the transmitting sensor to the controller or from the controller to the receiving actuator.
As the networking used in automation is predominantly digital, the trend is toward all-digital communication, which makes it possible to extract much more information from each device than was possible using analog signals. The advent of digital communication makes it possible for controllers to be placed away from sensors and actuators, as all the information for hundreds of loops and monitoring points can be transmitted to the controller over a single network. Digital communication carries not only I/O such as process and manipulated variables, but also operational information such as setpoints, alarms, and tuning, in both directions to and from the controller. Communication thus enables distributed processing, and some functions can be added not only in controllers but also in field instruments. In this way, thanks to communication, field devices not only perform a basic measurement or actuation but also offer features and functions for control and asset management. Moreover, when a digital signal is used for control there is no analog conversion from the device to the signal wires, and then from the signal wires to the host, which increases the accuracy of the process variable.
A second big benefit of digital communications is the capacity to connect several devices
to the same single pair of wires to form a multidrop network that shares a common
communication media. Compared to running a separate wire for each device (like in
analog communication), this reduces the wiring requirements, especially in plants with large distances and many devices. Hence, the amount of required cable is greatly reduced, allowing savings in hardware and installation.
In industrial environments, the set of digital communication protocols used for distributed control is represented by fieldbus systems (see Section 4.5 for further information). In the simplest form of communication, a device such as a host workstation or a controller is the master that sends requests to read or write values to other devices, such as field instruments, which are called slaves. The slave that was addressed then responds to the request. Examples of this master/slave communication are the HART and PROFIBUS architectures. In a network with no specific master or slaves, such as FOUNDATION Fieldbus, the client/server method is used: a device acting as a client issues requests, and the device acting as a server responds. Another mode of communication is when a device acting as a source transmits a message to a device acting as a sink without any request from the sink. The choice of the communication protocol depends on the particular application, as buses are optimized for different characteristics. The common characteristic of fieldbuses is that they allow many devices to be connected on the same wire and provide the necessary addressing mechanisms to support communication with them.
Several fieldbus manufacturers have also recognized the advantages of Ethernet, another standard bus system for industrial applications. These advantages are related to the physical layer, particularly in terms of bandwidth, which can be higher than 100 Mbit/s, compared with up to 12 Mbit/s for fieldbuses. However, both can be considered high-speed protocols when compared with the typical requirements of most industrial applications, where the required data throughput of the network is relatively low but its reliability needs to be very high. The disadvantages of fieldbuses are the higher installation costs and the purchase costs of fieldbus devices: actuators and sensors are often relatively inexpensive when compared with the cost of the cable used to connect them. Beyond the high installation and maintenance costs, the high failure rate of connectors and the difficulty of troubleshooting them have to be considered.
These are the main reasons why, in industrial environments, apart from lower installation and maintenance costs, wireless systems can offer ease of equipment upgrading and practical deployment of mobile robotic systems (see Section 2.2). It must be noted that the common trend in the semiconductor industry has been toward ever more complex integrated circuits with higher transistor counts (thanks to the continuing success of Moore's law) and higher clock frequencies. This trend, in combination with recent technological advances in MEMS (Micro-Electro-Mechanical Systems), enables an increasing integration of devices at lower cost. However, integrating devices through wireless rather than wired communication channels has highlighted important potential application advantages but, at the same time, also several challenging problems for current research. In fact, one of the main characteristics of wireless networked control is that the complexity of
the application lies not in the individual nodes, but in the collaborative effort of a large
number of distributed elements. This means that, even if high-clock-frequency technology and wireless data rates of many Mbit/s are available, individual nodes use a restricted amount of computational power and wireless communication protocols with data rates of the order of tens of Kbit/s. This observation is quite crucial, as it in some sense goes against the dominant trends in both the world of computation and wireless communication, where the aim is to reach high computational power and high data rates. The first reason for this characteristic of networked control lies in the requirements of most applications, that is, regular and low-frequency transmission of small data packets. Moreover, the distributed
nature of the embedded wireless network allows the computational power to be split over a collection of wireless devices, rather than relying on a central communication coordinator, or at least allows some functions to be performed on the field devices themselves. However, the
main reason to use a restricted amount of the possible computation power and low data
rates is due to the nature of the wireless communication between devices. In particular,
performance of a wireless network has to be optimized with respect to constraints on com-
munication bandwidth, contention of communication resources, delay, jitter, noise, fading,
and energy usage. Energy conservation is a key requirement in the design of wireless
networks. This is mainly due to the limited power availability. In fact, as the speed of
embedded processors increases and more peripherals are integrated into a single chip, the
applications that run on these devices become more computationally intensive. However,
the technological advancement of the batteries that power the embedded systems lags significantly behind, and as a result, power consumption is one of the most important issues for wireless embedded systems. In order to reduce the energy spent in communication, the wireless network should use bounded data rates, as the power spent in communication is the principal source of energy consumption. A good power management technique also reduces long-term fading effects and interference. As a consequence of this power management policy, devices will transmit with the minimum transmit power level, both to save energy and to reduce fading and interference on the shared wireless medium.
4.4 Clock synchronization
Typically a controller will act as time master for the control network and the rest of the system will be synchronized from this source. The time source can be a controller or an external time source. It is advisable to use an external time source if it is important that time stamps in the system can be compared with time stamps from other systems. The AC800M supports clock synchronization by four different protocols: CNCP, SNTP, MB300 Clock Sync, and MMS Time Service. The AC800M can send clock synchronization with all protocols simultaneously, but it will itself only use the one configured protocol to receive clock synchronization from another source. One use of this is to let it receive time with one protocol and distribute it to other nodes with another protocol.
CNCP (Control Network Clock Protocol) is an ABB proprietary master-slave clock synchronization protocol for the control network. When using AC800M controllers, CNCP is the recommended protocol for time synchronization of all nodes on the control network that support CNCP.
SNTP (Simple Network Time Protocol) is a standard client/server oriented time synchronization protocol. If the control system needs to be synchronized with a global time master, the recommended method is to use an SNTP server with a GPS receiver. The same holds if the system is made up of more than one control network.
For backward compatibility with older products, the AC 800M OPC Server supports the MMS Time Service in small systems where no AC 800M is used. MB 300 Clock Sync is a protocol for clock synchronization of Advant/Master products on a MasterBus 300 network.
4.5 Fieldbus Standards
Fieldbus systems are used as a means of communication for serial data exchange between decentralized devices at the field level and the controller at the process supervision level. All
relevant signals such as input and output data, parameters, diagnostic information, con-
figuration settings and, for a wide range of applications, the power required for operation
can be carried over two wires. Obviously, if a field device has a high power requirement,
then this device can be powered externally. Historically, communication between field
devices and a control system has been over analog 4-20 mA current loop interfaces. This
widely used technology works well and allows accurate transmission of process variable
measurements as well as effective closed-loop control.
Fieldbus technology, however, is digital. This brings remarkable simplification to system architectures and, in particular, when digital field instruments are used, significantly more
data becomes available. Technically, Fieldbus is a fully digital and duplex data transmis-
sion system, which connects smart field devices and automation systems to an industrial
plant’s network. A Fieldbus differs from point-to-point connections, which allow only two participating devices to exchange data. In fact, the key attribute of Fieldbus communication is higher-speed communication with the possibility of addressing multiple transmitters on the same field wiring. Since the Fieldbus concept is a computer network
specifically designed to support realtime, field sensor/actuator level messaging, most of
the Fieldbus systems available today adopt the ISO/OSI 7 layer model, but in a reduced
stack architecture.
There are hundreds of different Fieldbus protocols, but not all of them are recognized as
standards. Currently, PROFIBUS and FOUNDATION Fieldbus are the accepted Fieldbus
standards for the automation industry. Both of them can provide pure digital communications between field devices and the control systems. In terms of Fieldbus organization they are similar, with many user groups and worldwide support from the major manufacturers, whereas their technologies differ in several important areas. ABB supports
both PROFIBUS and FOUNDATION Fieldbus, which are standards of the IEC (International Electrotechnical Commission). PROFIBUS and FOUNDATION Fieldbus are
Fieldbus standards for applications in the manufacturing industry, process automation
and building automation.
The PROFIBUS family mainly consists of PROFIBUS-DP and PROFIBUS-PA. The first
one is optimized for high speed and simple connection of devices, and it is especially
designed for communication between programmable controllers and a distributed I/O level.
PROFIBUS-PA is especially designed for process automation, and it allows sensors and actuators to be connected to the same bus, even in intrinsically safe areas. The transfer rate varies from 9.6 Kbit/s to 12 Mbit/s.
The Fieldbus Foundation is dedicated to the establishment of a single, open, interoperable Fieldbus. The standard developed by this foundation, FOUNDATION Fieldbus,
might emerge as one of the world’s dominant Fieldbus protocols for process control in the
very near future as this technology seamlessly integrates with Ethernet technology offering
high speed and reliable automation solutions for the process industry at affordable costs.
FOUNDATION Fieldbus defines two communication profiles, H1 and HSE. The H1 profile
has a data transmission rate of 31.25 Kbit/s and it is mostly used for direct communica-
tion between field devices in one link (H1 link). The HSE profile with a transmission rate
of 100 Mbit/s serves first and foremost as a powerful backbone for the link between H1
segments.
ABB also supports the Modbus protocol, a messaging structure used to establish master-slave/client-server communication between intelligent devices. It is an open standard network protocol widely used in the industrial manufacturing environment. Modbus
is executed serially and asynchronously according to the master/slave principle, and in
one direction at a time (half-duplex). It is used mainly for reading and writing variables
between control network devices, using point to point or multidrop communication.
ABB also supports HART devices, although HART is, strictly speaking, not a fieldbus. The HART communication protocol is an open protocol from the HART Communication Foundation (HCF), and it was developed in the late 1980s to facilitate communication with smart field devices. HART (Highway Addressable Remote Transducer) allows simultaneous analog and digital communication. The analog signal carries the process information,
while the digital one allows bi-directional communication and makes it possible for addi-
tional information beyond just the normal process variable to be communicated to/from
a smart field instrument. It can be considered a hybrid bus designed for configuration, maintenance and other device functions while maintaining compatibility with the huge
installed base of analog only hosts. HART is a master/slave protocol which means that
a field (slave) device only speaks when spoken to by a master. The HART protocol can
be used in various modes for communicating information to/from smart field instruments
and central control or monitoring systems. In 2007 the HART Communication Foundation (HCF) released the HART 7 Specification. Included in HART 7 is WirelessHART, an open wireless communication standard designed specifically to address the
needs of the process industry for simple, reliable and secure wireless communication in
real world industrial plant applications (see Chapter 2 for a more detailed description).
WirelessHART supports both new wireless field devices and the retrofit of existing HART devices with WirelessHART adapters.
4.6 S800 I/O
The S800 I/O is a distributed and modularized I/O system. It is a predefined device in the AC800M, both as direct I/O using the built-in AC800M ModuleBus connections and as remote I/O via PROFIBUS.
It communicates via PROFIBUS or Fieldbus and contains digital and analog input/output
modules. The highly modularized I/O station is made up of
• input and output modules, covering analog and digital signals of various types, and
interfaces for RTDs (Resistance Temperature Detectors) and TCs (Thermocouples)
of various types;
• power supply;
• Fieldbus Communication Interface (FCI);
• Fieldbus;
• ModuleBus.
Figure 4.2: S800 I/O station overview.
The S800 I/O uses PROFIBUS as an external communication interface (the S800 I/O
station works as a slave on PROFIBUS) and ModuleBus as an internal communication
link between clusters and their I/O modules. The S800 I/O provides also possibilities to
communicate with HART field devices via PROFIBUS. There are 4 I/O modules that
have HART interface. In total, the S800 I/O station can have up to 24 I/O modules.
Modules can be grouped into clusters, with an FCI for each cluster. The FCI connects
S800 I/O modules to a controller by way of different types of fieldbuses. The FCI can
be used also to connect the S800 I/O station to a PROFIBUS network. The Fieldbus
Communication Interface is a configurable communication interface and it is responsible
for
• module configuration and supervision,
• performing signal conditioning on input and output values,
• dynamic transfer of information.
The FCI performs the signal conditioning for the more basic I/O modules. This means that it has to perform some computation before moving the value to the module or after reading the value from the module. The type of signal conditioning to perform depends
on the module type and its configuration (parameter settings). Intelligent I/O modules
do signal conditioning themselves. In this case the FCI only has to move the value to or
from the module. Thus, in this case there is less load on the FCI, and the spare capacity can be used for other modules or services. Figure 4.3 gives an overview of how dynamic process data is transferred back and forth between the user application and the actual process.
Figure 4.3: S800 I/O Dynamic Data Exchange.
The transportation of dynamic data between PROFIBUS and the ModuleBus is the main task for the FCI. The Fieldbus Communication Interface has a dedicated
memory area where it sends the output values and reads the input values. The CPU in
the FCI performs the rest of the data transportation. It reads output values from the
memory and writes to the I/O modules via the ModuleBus and vice versa. The data
transfer between PROFIBUS and the ModuleBus is not synchronized. Read and write
operations are performed from and to a dual port memory in the FCI. The FCI scans the
I/O modules cyclically, with a cycle time depending on the type and number of modules (from 4 to 108 ms).
4.6.1 S800 I/O Station Data Scanning
The ModuleBus data is scanned (read or written) cyclically, depending on the I/O module
configuration. To calculate the I/O scan cycle time in the FCI the following expression can be used:
\sum_i n_i \times t_i    (4.1)
where n_i is the number of modules of type i, and t_i is the execution time for modules of type i. If the value is a multiple of 2, add 2 to the value. Otherwise, increase the total value to the nearest higher multiple of 2 to get the I/O scan cycle time. Digital input and
digital output modules are scanned every I/O scan cycle. Instead, on each scan 1/4 of the analog modules and 1/10 of the slow analog modules are scanned. Thus, it takes four scans to read all analog modules, ten scans to read all slow analog modules, and one scan to read all digital modules.
Example S800 I/O Station Data Scanning
Consider an S800 I/O station with the Fieldbus Communication Interface CI801 (t = 1.18 ms), two analog input modules AI810 (t = 3.00 ms), one analog output module AO810/AO810V2 (t = 1.20 ms), two digital input modules DI810 (t = 0.43 ms), two digital output modules DO810 (t = 0.43 ms), and one analog input module AI830/AI830A (t = 0.40 ms). The I/O scan cycle time can be obtained as follows.
2 AI810 ⇒ 2 × 3.00 ms = 6.00 ms
1 AO810/AO810V2 ⇒ 1 × 1.20 ms = 1.20 ms
2 DI810 ⇒ 2 × 0.43 ms = 0.86 ms
2 DO810 ⇒ 2 × 0.43 ms = 0.86 ms
1 AI830/AI830A ⇒ 1 × 0.40 ms = 0.40 ms
1 CI801 ⇒ 1 × 1.18 ms = 1.18 ms
Total: 10.50 ms
10.50 is not a multiple of 2, so increasing the value to the nearest higher multiple of 2 gives 12 ms. This gives an I/O scan cycle time of 12 ms between the FCI and its I/O modules. This means that the digital modules will be scanned every 12 ms, the AI810 and the AO810/AO810V2 every 48 ms (4 × 12 ms), and the AI830/AI830A every 120 ms (10 × 12 ms).
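The calculation above can be captured in a short routine; the module list below reproduces the worked example, and `scan_cycle_time` is an illustrative helper, not part of the S800 documentation.

```python
# Sketch of Equation 4.1 plus the round-up-to-a-multiple-of-2 rule.
import math

def scan_cycle_time(modules):
    """modules: list of (count, exec_time_ms) pairs, one per module type."""
    total = sum(n * t for n, t in modules)   # sum_i n_i * t_i
    if total % 2 == 0:
        return total + 2                     # already a multiple of 2: add 2
    return 2 * math.ceil(total / 2)          # else round up to next multiple of 2

station = [
    (2, 3.00),   # AI810
    (1, 1.20),   # AO810/AO810V2
    (2, 0.43),   # DI810
    (2, 0.43),   # DO810
    (1, 0.40),   # AI830/AI830A
    (1, 1.18),   # CI801 (FCI)
]
cycle = scan_cycle_time(station)      # total 10.50 ms, rounded up to 12
print(cycle, 4 * cycle, 10 * cycle)   # 12 48 120
```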
Chapter 5
Process Control
5.1 Introduction
Process control systems are normally complex with many control variables and many
measured signals. There are two general approaches for designing a complex system,
bottom-up and top-down. In the bottom-up approach the system is designed by combining simple components and small subsystems, whereas the top-down approach starts with a general overall design that is successively refined. This chapter will deal with the components
required to build complex automation systems using the bottom up approach. The key
component is the PID controller and it will be described in Section 5.2. Other important
control principles such as cascade control, mid-range control, split-range control and ratio
control will be discussed, respectively, in Sections 5.3, 5.4, 5.5, 5.6.
5.2 PID Control
The PID controller is one of the most common control algorithms, widely used in industrial
control systems. The general empirical observation is that most industrial processes can be controlled reasonably well with PID control, provided that the demands on the control performance are not too high; otherwise, more sophisticated control is advisable. The PID controller is a feedback controller able to solve a wide range of control problems. It
is used also in hierarchical systems, such as model predictive control, at the lowest level;
the multivariable controller gives the set points to the controllers at the lower level. In
this section a basic PID algorithm will be described, although several modifications of
this basic control structure must be implemented in order to improve performance and
operability, and to obtain a practical, useful controller. Consider a simple feedback loop, where u is the control signal that influences the process by means of an actuator. The process output is denoted by y and it is measured by a sensor. Both the actuator and the sensor are considered part of the process. The set
point (or reference value) is the desired value of the process output and it is denoted by
ysp. The control error e is the difference between the set point and the process output,
i.e. e = ysp − y. The control law of the PID controller can be expressed as:
u(t) = K \left( e(t) + \frac{1}{T_i} \int_0^t e(\tau)\, d\tau + T_d \frac{de(t)}{dt} \right)    (5.1)
where K is the proportional gain, Ti is the integral time, and Td is the derivative time.
The control action is a sum of three terms: the P-term (Proportional to the error), the
I-term (proportional to the Integral of the error), and the D-term (proportional to the
Derivative of the error). These three terms will be described in the following subsections.
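As a preview of how the control law (5.1) is typically implemented, the sketch below discretizes it with a rectangle-rule integral and a backward-difference derivative over a sampling period h. The plant model and all tuning values are illustrative, not taken from the thesis, and practical controllers add the refinements (anti-windup, derivative filtering) mentioned above.

```python
# A discrete-time sketch of the PID law (5.1); all values are illustrative.
class PID:
    """Textbook PID of Equation 5.1, sampled with period h."""
    def __init__(self, K, Ti, Td, h):
        self.K, self.Ti, self.Td, self.h = K, Ti, Td, h
        self.integral = 0.0
        self.e_prev = 0.0

    def update(self, ysp, y):
        e = ysp - y                              # control error e = ysp - y
        self.integral += e * self.h              # rectangle rule for the I-term
        derivative = (e - self.e_prev) / self.h  # backward difference for the D-term
        self.e_prev = e
        return self.K * (e + self.integral / self.Ti + self.Td * derivative)

# Closing the loop around a first-order process y' = (Kp*u - y)/T (Euler steps):
pid = PID(K=2.0, Ti=5.0, Td=0.5, h=0.1)
y = 0.0
for _ in range(500):
    u = pid.update(ysp=1.0, y=y)
    y += 0.1 * (1.5 * u - y) / 2.0   # process with Kp = 1.5, T = 2.0
print(round(y, 2))                   # 1.0: the output settles at the set point
```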
5.2.1 Proportional Action
In proportional control the control signal is proportional to the control error (for small errors):
u(t) = K e(t) + u_b    (5.2)
where ub is a bias, which sometimes can be adjusted manually and reset to zero. In order to better understand some properties of the proportional controller, consider the static model of the simple feedback loop shown in figure ADD FIGURE + REFERENCE to FIGURE 3.1 pag 65:
x = K_p (u + d)
y = x + n
u = K (y_{sp} - y) + u_b    (5.3)
where x is the process variable, d is the load disturbance, n is the measurement noise,
Kp is the static process gain, and K is the proportional gain. Eliminating intermediate
variables the relation between the process variable x, set point ysp, load disturbance d,
and measurement noise n can be described as:
x = \frac{K K_p}{1 + K K_p} (y_{sp} - n) + \frac{K_p}{1 + K K_p} (u_b + d)    (5.4)
where KKp is called the loop gain (a dimensionless number). From Equation 5.4 several properties of the closed-loop system can be derived. Suppose that n = 0 and ub = 0; then Equation 5.4 reduces to:
x = \frac{K K_p}{1 + K K_p} y_{sp} + \frac{K_p}{1 + K K_p} d    (5.5)
In this case, a high value of the loop gain KKp ensures that the process output x is close
to the set point ysp. If the controller gain K is high, then the system is also insensitive to
the load disturbance d. However, if n ≠ 0 the measurement noise influences the process
output x in the same way as the set point ysp, thus the loop gain should not be made
too large. Moreover, if ub ≠ 0 the controller bias influences the system in the same way
as the load disturbance d. This means that the design of the loop gain is a trade-off
between different requirements and control objectives. Further, for a dynamic system the maximum value of the loop gain is limited by the process dynamics, because the closed-loop system will normally become unstable for high loop gains. It also follows from Equation 5.4 that there will be a steady-state error with proportional control, but from Equation 5.3 it can be seen that the control error is zero only when u = ub in stationarity. Hence, with a proper choice of the controller bias ub, the error can be made zero at a given operating condition.
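The trade-off can be illustrated by evaluating Equation 5.4 for increasing loop gains; the process gain, disturbance and set-point values below are arbitrary.

```python
# Numerical illustration of Equation 5.4: a higher loop gain K*Kp pushes x
# towards ysp, at the price of a larger sensitivity to measurement noise n.
def static_output(K, Kp, ysp, d=0.0, n=0.0, ub=0.0):
    loop = K * Kp   # the loop gain K*Kp
    return loop / (1 + loop) * (ysp - n) + Kp / (1 + loop) * (ub + d)

for K in (1, 10, 100):
    x = static_output(K, Kp=2.0, ysp=1.0, d=0.2)
    print(K, round(x, 3))   # 1 0.8, 10 0.971, 100 0.997: error shrinks with K
```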
5.2.2 Integral Action
Proportional control has the drawback that the process variable often deviates from the
set point in steady state. This can be avoided by making the control action proportional
to the integral of the error. With integral action, a small positive error will always lead
to an increasing control signal, and a small negative error will give a decreasing control
signal. In order to show that the steady-state error will always be zero with integral action,
assume that the system is in steady state with a constant control signal u0 and a constant
error e0. It follows from Equation 5.1 that
u_0 = K \left( e_0 + \frac{e_0}{T_i} t \right)    (5.6)
Since u0 is a constant it follows that e0 must be zero. Integral action can also be visu-
alized as a device that automatically resets the bias term ub of a proportional controller.
Considering the PI controller:
u(t) = K \left( e(t) + \frac{1}{T_i} \int_0^t e(\tau)\, d\tau \right)    (5.7)
the case Ti = ∞ corresponds to pure proportional control, whereas for finite values of Ti the steady-state error is removed. Furthermore, for large values of the integration time the response creeps slowly towards the set point, with an approximately exponential approach. For smaller values of Ti the approach is faster but also more oscillatory.
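The effect of the integral term can be checked with a small simulation of the PI controller (5.7) around a first-order process; all parameter values below are illustrative.

```python
# PI loop of Equation 5.7 around a first-order process y' = (Kp*u - y)/T.
def simulate_pi(K, Ti, steps=400, h=0.1, Kp=1.0, T=1.0):
    y, integral = 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y                   # unit set point
        integral += e * h
        u = K * (e + integral / Ti)   # PI control law
        y += h * (Kp * u - y) / T     # Euler step of the process
    return y

print(round(simulate_pi(K=1.0, Ti=float("inf")), 3))  # 0.5: pure P leaves an offset
print(round(simulate_pi(K=1.0, Ti=2.0), 3))           # 1.0: integral action removes it
```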
5.2.3 Derivative Action
The purpose of the derivative action is to improve the closed-loop stability. In a controller
with proportional and derivative action the control signal is made proportional to an es-
timate of the control error at Td time units in the future, where the estimate is obtained
by linear extrapolation.
Thus, in a PID controller the control signal is a sum of three terms:
• the I-term, representing the past by the integral of the error; it ensures that the
steady state error becomes zero;
• the P-term, representing the present;
• the D-term, representing the future by the time derivative of the control error; it
allows prediction of the future error.
It can be shown that PI control is adequate for all processes whose dynamics are essentially of first order; thus many industrial controllers only have PI action, and derivative action is frequently not used. For processes where the dominant dynamics are of second order, PID control is sufficient, and there are no benefits to be gained from a more complex controller. However, when the system is of order higher than two, the control can be improved by using a more complex controller than the PID controller.
5.3 Cascade Control
Cascade control can be used when there are several measurement signals but only one
control variable, to enhance control performance through the use of extra measurements.
Cascade control can be built up by nesting two control loops, as shown in Figure ADD
FIGURE + REFERENCE 12.4 PAG 373 , where an intermediate measured signal responding
faster to the control signal is used. The inner loop is called secondary loop, whereas the
outer loop is called the primary loop as it deals with the primary measured signal. It
is also possible to have cascade control with more nested loops. The performance of a system can be improved by increasing the number of measured signals, up to a certain limit. The
key idea of cascade control is to arrange a tight feedback loop around a disturbance. As a
rule, the secondary loop is used to eliminate fast disturbances, whereas slow disturbances are eliminated by the primary loop. Thus it is important to choose the secondary measured variable properly. The basic rules for selecting it are the following:
• there should be a well-defined relation between the primary and secondary measured
variables;
• essential disturbances should act in the inner loop;
• the inner loop should be faster than the outer loop;
• it should be possible to have a high gain in the inner loop.
A common situation is that the inner loop is a feedback around an actuator. The reference
in the inner loop can represent a physical quantity, like flow, pressure, velocity, etc., while
the control variable of the inner loop could be valve pressure, control current, etc. This
is also a good way to linearize nonlinear characteristics. After choosing the secondary
measured signal, the appropriate control modes for the primary and secondary controllers
have to be chosen, and their parameters have to be tuned. The choice is based on the
dynamics of the process and the nature of the disturbances, but it is difficult to give
general rules because the conditions can vary significantly.
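A minimal simulation can make the idea concrete. In the sketch below (two first-order processes and all gains chosen arbitrarily for illustration), a high-gain inner P loop attenuates a disturbance d entering near the actuator, and the outer PI loop removes what remains.

```python
# Cascade sketch: fast inner P loop nested inside a slow outer PI loop.
h = 0.001                                # simulation step (s)
y1, y2, integral = 0.0, 0.0, 0.0
for k in range(20000):                   # 20 s of simulated time
    d = 0.5 if k >= 10000 else 0.0       # disturbance enters the inner loop at t = 10 s
    e1 = 1.0 - y1                        # primary (outer) control error
    integral += e1 * h
    r2 = 2.0 * (e1 + integral)           # outer PI supplies the inner set point
    u = 20.0 * (r2 - y2)                 # inner secondary loop: high-gain P control
    y2 += h * (u + d - y2) / 0.1         # fast inner process, time constant 0.1 s
    y1 += h * (y2 - y1) / 2.0            # slow outer process, time constant 2 s
print(round(y1, 2))                      # 1.0: set point held despite the disturbance
```

The inner loop alone already shrinks the effect of d by roughly its loop gain; the outer integral action then removes the residual offset.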
5.4 Mid-Range Control
The dual situation of cascade control arises when there are several control signals but only one measured signal. The two control signals can be used one at a time or simultaneously.
In the first case the control strategy is called split-range control and it will be described in
the next section. When the two control signals are used at the same time, a common situ-
ation is mid-range control or mid-ranging. The problem treated by mid-range control can
be illustrated as in Figure ADD FIGURE 12.9 PAG 379 + REFERENCE , where two valves
are used to control a flow. The valve v1 is small but has a high resolution, whereas the
valve v2 is large but with a low resolution. When small disturbances act on the system,
one controller that manipulates valve v1 is able to manage the control problem. When
larger disturbances occur, valve v1 will saturate, thus valve v2 must also be manipulated.
A block diagram of the mid-range control strategy is given in Figure ADD FIGURE 12.11
PAG 379 + REFERENCE , where the process P1 (valve v1) and controller C1 form a fast
feedback loop. Controller C1 takes the set point ysp and flow signal y as inputs and ma-
nipulates the valve v1. The mid-ranging controller C2 takes the control signal from C1
as input and tries to control it to a set point usp by manipulating the valve v2. If both
controllers have integral action, the flow will be at the set point ysp and the small valve
v1 will be at the set point usp in steady state.
5.5 Split-Range Control
In split-range control the two control signals can be used one at a time, or, in other words
two (or more) actuators are active in different parts of the control range. The principle
of split-range control is illustrated in Figure ADD FIGURE 12.13 PAG 381 + REFERENCE ,
where a system for heating and cooling is represented. In such a system one device is used
for heating and another one for cooling. In Figure ADD REFERENCE to 12.9 PAG 379 +
the static relation between the measured variables and the control variables is shown.
When the temperature (the measured variable) is zero the heater has its maximum value, as the environment needs to be heated. It then decreases linearly until mid-range, where the heater stops acting. When the process variable (the temperature) is above mid-range,
cooling is applied, and it then increases. The problem with mid-range is that there is a critical region when switching from heating to cooling, also because the switching may cause oscillations. To avoid having both heating and cooling active at the same time, there is often a small gap where neither heating nor cooling is supplied. Split-range control is also useful when the control variable has a large range of variation; in this case the flow is separated into parallel paths, each controlled with a valve.
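The split-range mapping with a dead zone can be sketched as a simple static function; the 0-100 % ranges and the 4 % gap below are illustrative, not taken from the thesis.

```python
# Split-range mapping: one controller output drives two actuators, with a
# small dead zone around mid-range so heating and cooling never overlap.
def split_range(u, gap=4.0):
    """Map controller output u (0..100 %) to (heating, cooling) commands."""
    mid = 50.0
    if u < mid - gap / 2:                                     # lower half: heating
        return 100.0 * (mid - gap / 2 - u) / (mid - gap / 2), 0.0
    if u > mid + gap / 2:                                     # upper half: cooling
        return 0.0, 100.0 * (u - mid - gap / 2) / (mid - gap / 2)
    return 0.0, 0.0                                           # dead zone: neither active

print(split_range(0))     # (100.0, 0.0)  full heating
print(split_range(50))    # (0.0, 0.0)    dead zone
print(split_range(100))   # (0.0, 100.0)  full cooling
```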
5.6 Ratio Control
Ratio control is a strategy that allows control of two process variables so that their ratio
is constant. The block diagram of a typical ratio control strategy is shown in Figure ADD
FIGURE 12.17 PAG 385 + REFERENCE , in which the main loop consists of process P1 and
controller C1. The main flow is the output y1 of the process P1, and r1 is the set point
for y1. The second loop consists of process P2 and controller C2. It is used to control the
flow y2 so that the ratio y2/y1 is equal to a desired constant a. This is obtained by applying a Ratio Station (RS) to the main flow y1, where the set point r2 is determined by
r_2(t) = a\, y_1(t)    (5.8)
The desired ratio a can also be time-varying.
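A sketch of the structure (the process model, controller tuning and flow values are illustrative): the ratio station of Equation 5.8 multiplies the measured main flow y1 by a, and the secondary PI loop drives y2 to that set point.

```python
# Ratio control sketch: r2 = a*y1 feeds the secondary flow loop.
h, a = 0.1, 0.25
y1 = 4.0                       # main flow, assumed already at its set point
y2, integral = 0.0, 0.0
for _ in range(300):
    r2 = a * y1                # ratio station: r2(t) = a*y1(t), Equation 5.8
    e2 = r2 - y2
    integral += e2 * h
    u2 = e2 + integral         # secondary PI controller C2 (K = 1, Ti = 1)
    y2 += h * (u2 - y2) / 0.5  # secondary flow process P2, time constant 0.5 s
print(round(y2 / y1, 3))       # 0.25: the flows settle at the desired ratio
```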
ADD CONSIDERATION ABOUT NESTED STRUCTURE
Chapter 6
Scheduling of Wireless Control
6.1 Introduction
Networked control systems are spatially distributed systems for which the communication
between sensors, actuators, and controllers is supported by a shared communication net-
work. In distributed networked control, the network must be taken into account explicitly, as it affects the dynamic behavior of the control system. Thus, the design of the
control system has to take into account also the presence of the network as it represents
the interconnection between the plant, the controller, and the sensor and actuator compo-
nents. With shared communication resources (i.e. the wireless medium) the transmission
of sensor measurements or actuator signals may not be immediate and communication
delays and packet losses may occur. Moreover, using wireless communication introduces
new issues such as fading and time-varying throughput in communication channels.
As a result of these changes, new concerns have to be addressed when designing con-
trol systems. This is the reason why several areas such as communication protocols for
scheduling and routing have become important in control when considering, for example,
stability, performance, and reliability. Thus, algorithms and software that are capable of
dealing with hard and soft time constraints are very important in control implementa-
tion and design, and areas such as real-time systems from computer science are becoming
increasingly important also in control theory.
In this work we consider a plant automation network made up of several sensors and
actuators connected to an AC800M controller through a gateway. The aim of the project
is to find a good scheduling algorithm in order to manage the exchange of information
between sensors/actuators and the gateway and between the gateway and the controller.
The use of a multipurpose shared network to connect spatially distributed elements re-
sults in flexible architectures and generally reduces installation and maintenance costs. In
industrial environments costs can be further reduced and productivity can be improved by
means of wireless sensor and actuator networks, which give numerous benefits with respect
to wired networks. However, wide deployment of wireless industrial automation requires
substantial progress in wireless transmission, networking and control because an industrial
application will frequently require hard bounds on the maximum delay allowed. In particular, as the sensors and actuators are part of closed-loop control systems, strict timing requirements apply to ensure a short response time and an efficient use of the available
radio bandwidth. The most prevalent network topology in industrial environments is the star topology, where wireless nodes communicate with a gateway device that bridges the communication to a wired network. Industrial systems are functionally and, in most cases,
also physically built in levels. The system can be separated into the following network areas¹ on different levels (as shown in Figure 6.1):
• plant network: it can be dedicated for process automation purposes;
• client/server network: it is used for communication between servers, and between
workplaces (clients) and servers.
• control network: it is a Local Area Network (LAN) optimized for high performance
and reliable communication, and with predictable response times in real time. Con-
trollers are connected to the control network.
It is important to notice that for large systems it is recommended to separate the control
network from the client/server network. Actually, the two networks should be separated
also for smaller systems, because this solution allows fault isolation between the two net-
works and traffic filtering, as the client/server network’s traffic will not disturb the real
time traffic on the control network. If there are more than fifty nodes in the network, it is recommended to split the control network into two or more network areas, keeping the communication between controllers within one network area [4].
NOTE: IN THIS CHAPTER... + CASE STUDY DESCRIPTION
¹ A network area is a logically flat network that does not contain IP routers.
Figure 6.1: Industrial network topology [13].
6.2 Scheduling problem statement
Consider a physical plant interacting with a controller that measures some plant signals
and generates appropriate control signals in order to influence the behavior of the plant.
The industrial plant is made up of several sensors and actuators being connected to a gate-
way, which, in turn, communicates with a controller. Sensors and actuators (field devices)
and the gateway are part of a WirelessHART network (see Chapter 2) and the controller
is supposed to be an AC800M (see Chapter 4). According to the WirelessHART standard,
the gateway communicates with the network manager, which provides network management
functions, via a secure communication channel. In addition, the gateway is connected to
the AC800M through a high speed fieldbus. First, a star network topology is considered,
where each field device is directly connected to the gateway, so each sensor or actuator is
one-hop away from the gateway. Later, a multi-hop WirelessHART network will be taken
into account.
This industrial plant can be represented by means of a set of control loops, where each
control loop has a certain number of input and output signals. Input signals are sensor
measurements that the controller uses to generate actuator signals (output signals). Each
control loop can be seen as a black-box: only its scan rate is supposed to be known,
that is the sampling period of the system or, in other words, the update rate that the
sensors connected to that system use to send the respective measurements. The considered
environment is static and does not change over time, so an off-line scheduling algorithm is a
good choice in order to capture the entire schedule in a static table. Moreover, an off-line
algorithm provides predictability and allows for a simple dispatcher (see Subsection
3.2.2).
Each control loop is supposed to be a task that has to be executed (see Chapter 5 to have
more details about control loops). A task is a set of jobs. There are two classes of jobs,
according to which kind of resources they share:
• communication jobs (shared communication resources);
• control jobs (shared computation resources).
A communication job can be either a sensor signal or an actuator signal. For each task
there is a control job, meaning that the gateway receives the process measurements from
sensors, and sends them to the controller that makes the computation. After that the
controller transmits the results of the computation to the gateway that sends the output
signals to the actuators. Consider a simple PID controller, as shown in Figure 6.2.
This control loop/task is made up of two communication jobs and one control job.
Figure 6.2: PID controller.
The sensor signal S1 and the actuator signal A1 are the communication jobs, while C1 is
the control job. It is assumed that the rate with which sensors send their measurements
is the same as the frequency with which actuators receive control signals. For each task
this scan rate is supposed to be known.
Consider now as a general case the control loop represented in Figure 6.3. It is the k -th
task/control loop of the plant and it has a set of nk input signals (sensor measurements)
{Si}, i = 1, . . . , nk, and a set of mk output signals (actuator signals) {Aj}, j = 1, . . . ,mk.
There is only one control task Ck that makes the computation of the control signals for the
actuators. The task has an update rate of Tk ms. The plant is supposed to be constituted
by a set of control loops identified by {Ck}, k = 1, . . . , p.
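The task model described above can be sketched in Python (an illustrative data structure, not part of the thesis implementation; the class and field names are hypothetical):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """The k-th control loop: n_k sensor signals, one control job C_k,
    and m_k actuator signals, all repeating with period T_k (in ms)."""
    name: str                                           # e.g. "C1"
    period: int                                         # update rate T_k in ms
    sensors: List[str] = field(default_factory=list)    # {S_i}, i = 1..n_k
    actuators: List[str] = field(default_factory=list)  # {A_j}, j = 1..m_k

    @property
    def comm_jobs(self) -> int:
        # communication jobs: sensor signals plus actuator signals
        return len(self.sensors) + len(self.actuators)

# The simple PID loop of Figure 6.2: two communication jobs, one control job
pid = Task("C1", period=50, sensors=["S1"], actuators=["A1"])
print(pid.comm_jobs)  # 2
```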
Figure 6.3: Example of a generic control loop.
In a WirelessHART network shared resources are scheduled in terms of time slots and su-
perframes, and according to the WirelessHART specification, time slots of 10 ms are used
(see Subsection 2.3.1). Hence, in a star topology the communication between a sensor and
the gateway requires only one time slot. The same applies to the communication between
the gateway and an actuator. Concerning the time required to complete the control job,
we have assumed that within one time slot the gateway sends the measurements to the
AC800M via a high speed fieldbus, the controller makes the computation and sends the
results back to the gateway by means of the same fieldbus (see Chapter 4 and the 800xA
documentation for more details).
6.2.1 Superframes setting
Consider a fixed and known set of control loops, with known periods and known execution
times (of communication and computation jobs). For the sake of presentation, a superframe
is assigned to each period, and all these superframes are then combined into a single
superframe of the same size, called the gigaframe. Thus each superframe corresponds to
a period, and the periods should form a harmonic chain, i.e. each period should divide
every longer one (see Subsection 2.3.5). It is important to notice that all superframes,
including the gigaframe, have the same size: it is defined as the least common multiple
of all periods of the control loops set. It has been described above
that there are two types of jobs to be scheduled, communication jobs and computation
jobs. Hence, two kinds of superframe are defined:
• communication superframes (to be used for communication jobs);
• control superframe (to be used for control jobs).
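Since all superframes share the gigaframe size, the superframe setting above can be checked with a short sketch (illustrative Python; the function names are hypothetical):

```python
from math import gcd
from functools import reduce

def gigaframe_length(periods):
    """Common size of all superframes: the least common multiple
    of the scan periods of the control-loop set (in ms)."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def is_harmonic_chain(periods):
    """True if the distinct periods divide into each other pairwise,
    as required by the superframe construction (Subsection 2.3.5)."""
    ps = sorted(set(periods))
    return all(b % a == 0 for a, b in zip(ps, ps[1:]))

periods = [50, 100, 100]          # the periods used later in Example I.1
assert is_harmonic_chain(periods)
print(gigaframe_length(periods))  # 100 -> a gigaframe of 10 slots of 10 ms
```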
6.2.2 Communication and control superframe schedule
As said above, the communication from sensor to gateway or from gateway to actuator
occurs within a time slot of 10 ms. So, if the general case system of Figure 6.3 is considered
as an example, first a slot has to be reserved for each of the jobs belonging to the
set {Si}, i = 1, . . . , nk, and then for each in the set {Aj}, j = 1, . . . ,mk, meaning that, if a slot
is occupied by Si, in that slot the communication between that sensor and the gateway
occurs, or, alternatively, if it is occupied by Aj then in that slot the gateway communicates
with that actuator. For a given task, one slot in the communication superframe has to be
reserved for each element in the set of sensor signals, and for each element in the set of
process actuators. A formal definition of the scheduling problem will be described in the
next section.
6.3 Formalization of the scheduling problem
Scheduling theory has been deeply explored in the academic literature and has progressed
in recent years due to the strict requirements of real-time systems, such as predictability
and timing constraints (a review of the most important algorithms can be
found in [6]). However, extending this scheduling theory to practice is not so easy. For
example, a theoretical approach to scheduling periodic tasks is the RM (Rate Monotonic)
algorithm (see Subsection 3.2.1 and [7]). According to RM, all tasks are periodic, do not
synchronize with one another, do not suspend themselves during execution, can be instantly
preempted by higher priority tasks, and have deadlines at the end of their periods. The
term "rate monotonic" originated as a name for the optimal task priority assignment in
which higher priorities are accorded to tasks that execute at higher rates (that is, priority
is a monotonic function of rate).
These ideal assumptions do not fit well with a real environment in which the notion of
task is often completely different from the one defined in theoretical algorithms. In these
cases the implementation of a new scheduling algorithm is required. It can be developed
over the base of an already existing algorithm or it can be implemented from scratch.
In this work a suitable model of tasks and jobs is given (see Section 6.2), according to the
requirements of a typical industrial application. This
definition is required in order to schedule the shared communication and computation
resources.
After this first step the development of a new scheduling algorithm is needed. In particular,
it is important to select the jobs to be scheduled with the purpose of having a feasible
solution. In addition, jobs have to be allocated in a proper way in order to achieve good
performance and quality of service in terms of delay and robustness of the network.
To preliminarily study the performance of a scheduling algorithm, a feasibility analysis is
usually carried out (see [6] for further information). Feasibility analysis refers to the process
of predicting temporal behavior via tests, which determine whether the temporal con-
straints of tasks will be met at run time. Such analysis is important when implementing a
scheduling algorithm because predictability is one of the key requirements of real-time
systems. Thus, it is fundamental to specify, understand, analyze and predict the timing
behavior of a scheduling algorithm used in a real-time context. For many important task
models and scheduling algorithms, feasibility analysis is provably computationally very
expensive, and in some cases even intractable. Moreover, in some cases the current
generation of schedulability analysis tools offers inadequate support for new algorithms. Thus,
when a new algorithm is proposed, a way to verify the feasibility of this new approach
should be indicated.
As described above, in the considered WirelessHART network shared resources are sched-
uled in terms of time slots and superframes. In this work there are two types of jobs to be
scheduled, communication jobs and computation jobs, according to which kind of resource
they share (communication or computation resources). The timing of these basic operations
is critical to the performance of the controller, and it must be possible to schedule all the
control loops of the plant. Thus, two important constraints have to be fulfilled:
• the proposed scheduling algorithm must produce a schedulable solution, i.e. both
communication and control jobs must be allocated in the respective superframes;
• the timing of the input and output actions, and of the control computation of each
control loop must be strictly satisfied as it is critical to the performance of the control
loop.
The formulation of the problem of finding a scheduling algorithm fulfilling these constraints
can be stated as
s : T → Σ (6.1)
where s is the particular scheduling algorithm in the set Σ of all the possible schedules,
given the temporal space T of time slots and superframes.
Assume that S ⊂ Σ is the set of all possible scheduling algorithms that give a schedulable
solution, i.e. S ⊂ Σ is the set of all possible feasible solutions to the given scheduling
problem. Thus, to respect the constraints on the temporal behavior of the considered
real-time system, the scheduling problem can be formulated as a constrained optimization
problem. The objective is to find a scheduling algorithm s ∈ S such that
min_{s ∈ S, F(s) ∈ F} D(s) (6.2)
where F (s) ∈ F is the set of restrictions for the particular scheduling algorithm s, belonging
to the set of all possible restrictions F, and D(s) is the constraint on the temporal behavior
of the real-time system.
In particular, the aim of the proposed scheduling algorithm is to minimize the time delay
from sensing to actuating for each control loop (and then over the set of all control loops).
The search for min D(s) is subject to restrictions F(s) in the set F of all possible
restrictions. F(s) is made up of constraints on communication and control jobs as described
below:
• each communication job requires a time slot to be scheduled,
• in a given time slot, a device can either transmit or receive, but not both,
• each control job requires a time slot to be scheduled,
• for each control loop, the control job can be executed only after all jobs belonging
to the sensor signals set have been scheduled.
Finding the optimum solution to this minimization problem has a non-trivial complexity
as it requires an exhaustive search in the space of feasible solutions. In optimization
problems, in order to avoid an exhaustive search, enumerative methods could be used to
reduce the number of solutions that need to be explored [12], or approximation methods
can be exploited to find a near optimal solution with moderate computing time.
The optimization problem stated above corresponds to the problem of finding a perfect
solution for combining communication and control jobs in such a way that a feasible
solution is always obtained and the delay from sensing to actuating is minimized.
Finding a perfect solution is a hard problem. However, in this work a heuristic algorithm is
proposed, as theoretically deriving an optimal off-line scheduling satisfying the minimiza-
tion problem is an intractable problem under all realistic assumptions and restrictions.
To evaluate the performance of the proposed algorithm, in terms of delay and feasibility,
a randomized analysis will be performed instead of a purely deterministic approach (see
Section ??, where the random approach is described in detail).
The detailed scheduling policy will be described in the next subsection.
6.4 Scheduling policy
Having defined a superframe for each scan period (see Subsection 6.2.1), we suggest to
schedule communication and control jobs by allocating slots from the fastest to the
slowest scan rate, as in the RM scheduling policy (see Subsection 3.2.1). The proposed
algorithm allocates slots starting with the sensor signals of a given task. Considering
the k-th task, the scheduler assigns a slot to each signal belonging to the set {Si},
i = 1, . . . , nk. After that, it leaves at least one free slot in the communication superframes
and assigns the corresponding slot of the control superframe to the control job Ck. It is
reasonable to assume that within this time slot the gateway sends the sensor signals {Si},
i = 1, . . . , nk, to the controller, via a PROFIBUS, a Modbus or a FOUNDATION Fieldbus
(see Section 4.5). After the control task has been executed the controller sends the output
signals to the gateway. At this point, the gateway has to transmit these signals to the
actuator and, in order to do so, the algorithm allocates a time slot for each element of the
set {Aj}, j = 1, . . . ,mk, meaning that, if a slot is reserved for Aj , then in that slot the
gateway sends the output signal Aj to the respective actuator.
In a given superframe one or more tasks have to be scheduled, so a partial ordering between
them is required. They can be ordered according to the number of jobs (both commu-
nication and control jobs) and/or to the cardinality of the sensors set. In particular, in
a given superframe (i.e. for a given period) we propose to allocate tasks from the one
with the lowest number of jobs to that one with the highest number of jobs. This choice
is assumed to reduce the time delays from sensing to actuating over the set of control
tasks (see Subsection ?? for more details about performance evaluation of the proposed
algorithm). For tasks with equal number of jobs, in order to increase the chance of having
a feasible solution of the scheduling problem, tasks with a higher number of sensors will
be given a higher priority compared to tasks with a lower number of sensors. This means
that, when the number of jobs is equal, tasks should be allocated from the highest cardinality
of the sensor set nk to the lowest one. In the following subsections three different scenarios
are described, each requiring some additional considerations.
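This ordering rule (fewest jobs first; on ties, more sensors first) can be expressed compactly as a sort key (illustrative Python; the dictionary keys are hypothetical):

```python
def schedule_order(tasks):
    """Order tasks within a superframe: ascending number of jobs;
    for an equal number of jobs, descending sensor-set cardinality."""
    return sorted(tasks, key=lambda t: (t["jobs"], -t["sensors"]))

# The two 100 ms tasks of Example I.1: equal job count, C2 has more sensors
tasks = [
    {"name": "C3", "jobs": 3, "sensors": 1},   # mid-range controller
    {"name": "C2", "jobs": 3, "sensors": 2},   # cascade controller
]
print([t["name"] for t in schedule_order(tasks)])  # ['C2', 'C3']
```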
6.4.1 Scenario I
In the first scenario the following assumptions are considered:
• all tasks are independent, that is, have no precedence relationship;
• the WirelessHART network has only a gateway with one access point.
A completely independent task set makes it possible to consider only the partial order
of the communication jobs (for a given task, sensors must be scheduled before actuators),
without any precedence relation among control jobs. Tasks can be scheduled according to
the partial order defined by the number of jobs and the number of sensors. The algorithm
creates the gigaframe of
length equal to the least common multiple lcm(Tk), k = 1, . . . , p of the periods of all tasks
of the plant (see Subsection 2.3.5). Then for each period it creates a superframe of length
equal to the gigaframe size. Slots are allocated from the lowest to the highest period.
Consider the lowest period Tl. The algorithm starts allocating slots according
to the rule described above, i.e. with respect to the number of jobs and, when the number
of jobs is equal, with respect to the number of sensors. After that, this basic frame of
length equal to the period Tl is repeated a number of times equal to lcm(Tk)/Tl, thus
forming the whole superframe for that period. At this point, the algorithm allocates jobs
for the next period Tl+1 following the same rule.
However, at this step there is an additional constraint because jobs cannot be allocated in
the slots occupied by jobs of the previous superframe, i.e. the superframe corresponding to
the period Tl. As done for the lowest period, the obtained basic frame is repeated
lcm(Tk)/Tl+1 times. Consider now the i-th period Ti, with 1 < i ≤ p. The scheduler allocates
jobs of tasks having period equal to Ti in the superframe corresponding to Ti, creating
first a basic frame and then repeating it a number of times equal to lcm(Tk)/Ti. However,
jobs cannot be allocated in slots already occupied by jobs of all the previous superframes.
In other words, when allocating jobs in the superframe corresponding to the period Ti, the
algorithm cannot use slots that are already occupied in one of the previous superframes,
i.e. superframes corresponding to the periods from the lowest one Tl to Ti−1, that is Tk,
k = 1, . . . , i− 1.
Both the communication jobs and the control jobs have to be scheduled. Consider
the k-th task. If the first actuator A1 of the task cannot be put in the next
adjacent slot of the corresponding control job Ck because that slot is already assigned to
another communication job, then the control job Ck will have a window in which it can
be scheduled. It is proposed that the algorithm decides which slot of the window has to
be assigned to Ck according to considerations about the delay it could generate over other
control jobs (having higher priority) in the case it will create some jitter. In particular,
if possible, it will be scheduled in a slot such that it will have a free successor slot in the
control superframe, otherwise it will be assigned the first available slot. This procedure is
applied for all periods, until the highest one.
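The allocation loop described above can be sketched as follows (a simplified illustration, not the thesis implementation: the free slot reserved before each control job, the control superframe and the shuffle step are omitted, and all names are hypothetical):

```python
from math import gcd
from functools import reduce

SLOT_MS = 10  # WirelessHART time-slot length

def build_superframes(period_jobs):
    """period_jobs maps each period (ms) to the ordered list of
    communication jobs scheduled at that rate.  Faster superframes are
    filled first; their slots are blocked for the slower ones."""
    periods = sorted(period_jobs)
    giga = reduce(lambda a, b: a * b // gcd(a, b), periods) // SLOT_MS
    occupied = [None] * giga              # the combined gigaframe
    frames = {}
    for T in periods:
        basic = T // SLOT_MS              # slots in one basic frame
        frame, slot = [None] * giga, 0
        for job in period_jobs[T]:
            # skip slots already taken by faster superframes
            while slot < basic and occupied[slot] is not None:
                slot += 1
            if slot >= basic:
                raise RuntimeError("not schedulable without shuffling")
            for rep in range(giga // basic):   # repeat the basic frame
                frame[slot + rep * basic] = occupied[slot + rep * basic] = job
            slot += 1
        frames[T] = frame
    return frames, occupied

frames, giga = build_superframes({50: ["S1", "A1"],
                                  100: ["S2", "S3", "A2", "S4", "A3", "A4"]})
print(giga)
```

Note how the 50 ms jobs occupy two slots per basic frame and repeat twice across the gigaframe, while the 100 ms jobs fill the remaining slots.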
Following this approach it could happen that a certain superframe is not schedulable,
i.e. not all communication jobs can be allocated in the communication superframe. If
this is the case, the algorithm makes a "shuffle" of the last communication superframe
(the highest period superframe). With the shuffle function, the algorithm swaps the
last sensor of the superframe with the first previous actuator, then it schedules all the
sensors belonging to the same sensor set of the shuffled sensor, the shuffled actuator and
the actuator set belonging to the same task of the shuffled sensor. If the last superframe
contains only one task this action cannot be done because the superframe contains all the
sensors and then all the actuators. In this case, the algorithm makes the shuffle between
the last superframe with the previous one, shuffling the last sensor of the last superframe
with the first previous actuator in the previous superframe. Then it re-schedules the
last superframe and updates the control superframe taking into account the changes in
the last superframe and in the previous one. It is important to notice that, even if at first
a schedulable solution is not produced, a feasible solution is always obtained after this
shuffling operation.
Once all superframes are created, they can be combined together in the gigaframe, since
no slot is occupied by more than one job. Thus, at the end of the algorithm the
communication gigaframe and the control superframe are available. They can be stored
in the scheduler as tables to allow it to make scheduling decisions during run-time.
Example I.1
Consider the set of control loops in Figure 6.4.
Figure 6.4: Set of control loops Example I.1.
The first control loop C1 is a simple PID controller and it has the following characteristics:
• update rate T1 = 50ms;
• sensor signals set {S1};
• actuator set {A1}.
For the second control loop C2 (a cascade controller):
• period T2 = 100ms;
• sensor signals set {S2, S3};
• actuator set {A2}.
The last control loop C3 (a mid-range controller) has:
• period T3 = 100ms;
• sensor signals set {S4};
• actuator set {A3, A4}.
Thus, two superframes are defined, one for each scan period, and the algorithm creates the
gigaframe of length equal to the least common multiple lcm(Tk), k = 1, . . . , 3, i.e. equal to
100ms. It allocates slots starting with the fastest to the slowest scan rate, thus allocating
first jobs of task C1. The scheduler assigns a slot in the superframe corresponding to
50ms to {S1}, then it leaves a free slot in the communication superframe and assigns the
respective slot of the control superframe to the control job C1. In this slot the gateway
sends the sensor measurement to the controller, the controller makes the computations
and sends the output signal to the gateway. Hence, the gateway has to transmit this
signal to the actuator and, in order to do so, the scheduler allocates a time slot in the
communication superframe to {A1}. This basic frame is repeated lcm(Tk)/T1 = 100/50 = 2
times within the superframe corresponding to 50 ms.
After that, the proposed algorithm has to allocate tasks with the highest period. There are
two tasks having period equal to 100 ms, and both of them have 3 jobs. When the number
of jobs is equal, tasks with a higher number of sensors are given a higher priority compared
to tasks with a lower number of sensors. Thus, the algorithm schedules tasks according to
the order
{C2, C3}. This means that the scheduler has to assign a slot to {S2} in the superframe
corresponding to 100 ms. The first slot is already assigned to the communication between
{S1} and the gateway, so the second slot is allocated to {S2}. After that {S3} is allocated
to the fourth slot, as the third one is allocated to {A1}. Then the algorithm must leave
at least one free slot in the communication superframe and assign the respective slot of the
control superframe to the control job C2. However, leaving only one free slot, the actuator
signal A2 would have to be allocated in a time slot previously allocated to S1 (the sixth
slot in the superframe). This means that A2 is allocated in the first free time slot after the sixth
(in this case the first free time slot is the seventh), and the control job C2 has a window
within which it could be allocated in the control superframe (the fifth and the sixth time
slot).
At this point the algorithm can allocate jobs of the last task. The scheduler can allocate
the first job S4 in the first available time slot, that is the fifth slot. Then it has to allocate
the actuator signals A3 and A4 in the last two time slots. In this way C3 has a window
in the control superframe, that is it can be allocated in the sixth, seventh or eighth slot.
Thus, the gigaframe is obtained combining the superframe corresponding to 50ms to that
one corresponding to 100ms. After the scheduling of all tasks in the communication super-
frame, the position of control jobs having a window can be fixed in the control superframe.
The control job C2 is allocated in the first slot of its window in order to leave a free slot
before control job C1, which has higher priority than C2. This is done to prevent a
possible delay in the execution of C2 from creating jitter in the task with higher priority.
Control job C3 is allocated in the eighth slot (the last slot of its window) for the same
reason. The communication superframe, the gigaframe, and the control superframe are
all shown in Figure 6.5.
Example I.2
Consider the control loops set in Figure 6.6, where there are a PID controller having
a period equal to 50ms, and two cascade controllers with period equal to 100ms. The
algorithm follows the procedure described in the Example I.1. However, in this case, the
last superframe (corresponding to 100ms) is not schedulable, i.e. not all communication
jobs can be allocated in the communication superframe. In particular A3 cannot be
allocated in that superframe. To solve this problem, the algorithm shuffles the last sensor
of the superframe (S5) with the first previous actuator (A2). Then it schedules the shuffled
actuator (A2) and the actuator corresponding to the shuffled sensor (A3). After this
Figure 6.5: Superframes Example I.1.
Figure 6.6: Set of control loops Example I.2.
shuffling operation a schedulable solution is obtained, and control tasks with a window
can be allocated in the control superframe, following the delay considerations described
in Example I.1. The communication superframe, the gigaframe (GF), and
the control superframe are shown in Figure 6.7.
Example I.3
In this example the set of control loops shown in Figure 6.8 is taken into account. The
scheduler allocates tasks according to the procedure described in the previous examples.
When it has to allocate the only task in the superframe corresponding to 240 ms, a
schedulable solution is not obtained, as the communication job A2 cannot be allocated
within that superframe. The shuffling function described in the Example I.2 cannot be
applied, because the superframe contains all the sensors and then all actuators. In this
case, the scheduler makes the shuffle between the last superframe (corresponding to 240ms)
Figure 6.7: Superframes Example I.2.
Figure 6.8: Set of control loops Example I.3.
with the previous one (the superframe corresponding to 120ms). In particular, it shuffles
the last sensor of the last superframe (S4) with the first previous actuator in the previous
superframe (A5). Then the algorithm re-schedules the last superframe, obtaining a
feasible solution, and updates the control superframe taking into account the changes
in the last superframe and in the previous one. The communication superframes, the
gigaframe and the final control superframe are shown in Figure 6.9.
6.4.2 Scenario II
In this scenario the following assumptions are taken into account:
• tasks of the same period can be inter-dependent, that is, a task may require the
computation of another task of the same period;
• the WirelessHART network has only a gateway with one access point.
Figure 6.9: Superframes Example I.3.
Inter-dependencies between tasks determine a partial order also among control jobs. The
dependency constraints between tasks are represented by means of a directed precedence
graph, in which the edge (Cx, Cy) indicates that Cx is an immediate predecessor of Cy or,
in other words, that the control job Cy needs the computation result of Cx before being
executed. In this case, the algorithm makes the schedule following the procedure of the
first scenario. At the end of the schedule it analyzes the control superframe to verify that
control jobs are executed in the order required by the particular application, by comparing
the control superframe with the precedence graph. If this is not the case it tries to modify
the control superframe by shifting control jobs involved in the precedence relations that
have a window. If this is not possible it re-schedules also the communication superframe, by
changing the partial order between tasks such that the dependency relations are satisfied.
Also in this case, if a non-schedulable solution is obtained, the algorithm makes a shuffle
of the last superframe, or of the last superframe and the previous one, in the same way as
for the first scenario.
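The verification step, which compares the control superframe with the precedence graph, can be sketched as follows (illustrative Python; the function name and encoding are hypothetical):

```python
def violated_edges(control_frame, edges):
    """control_frame: control-job names in slot order (None = free slot);
    edges: pairs (Cx, Cy) meaning Cy needs the computation result of Cx.
    Returns the edges whose order is not respected by the schedule."""
    pos = {job: i for i, job in enumerate(control_frame) if job is not None}
    return [(x, y) for (x, y) in edges if pos[x] > pos[y]]

# C2 needs the result of C1, as in Example II.1
print(violated_edges([None, "C1", None, "C2"], [("C1", "C2")]))  # []
print(violated_edges(["C2", None, "C1"], [("C1", "C2")]))  # [('C1', 'C2')]
```

When the returned list is non-empty, the algorithm would try to shift the windowed control jobs, or re-schedule the communication superframes, as described above.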
Example II.1
Consider the set of control loops shown in Figure 6.10, where the control job C2 needs the
computation result of C1 before being executed. The precedence directed graph represent-
ing this dependency constraint between tasks of the same period is shown in Figure 6.11.
Figure 6.10: Set of control loops Example II.1.
The algorithm schedules tasks following the procedure of the first scenario (see Subsection
Figure 6.11: Precedence graph Example II.1.
6.4.1) that has been described in the previous examples (both communication and control
jobs). At the end of the schedule it analyzes the control superframe to verify that control
jobs are executed in the required order, by comparing the control superframe with the
precedence graph. Control jobs C1 and C2 have a window within which they can be allo-
cated, thus they can be easily allocated according to the order required by the application,
as shown in Figure 6.12, where the communication superframes and the gigaframe are also
presented. It can be noted that, if the opposite order were required, i.e. C1 needed the
Figure 6.12: Superframes Example II.1.
computation result of C2 before being executed, it is still possible to allocate control jobs
in the right order without re-scheduling also the communication superframes.
Example II.2
Consider now the control loops set illustrated in Figure 6.13, where C2 needs the compu-
tation result of C3 before being executed. The precedence graph is illustrated in Figure
Figure 6.13: Set of control loops Example II.2.
6.14. The algorithm makes the schedule according to the procedure described in previous
Figure 6.14: Precedence graph Example II.2.
examples. Analyzing the control superframe, the algorithm finds that control jobs C2 and
C3 are not allocated in the order required by the particular application. Thus, it must re-schedule
the communication superframes, by changing the partial order between tasks such that the
dependency relation is satisfied, i.e. it has to schedule the third task and then the second
one. As in this case a schedulable solution is not obtained, the scheduler makes a shuffle of
the last communication superframe as described in the Example I.2 (see Subsection 6.4.1).
All the superframes are shown in Figure 6.15.
6.4.3 Shared Slots
As described in Subsections 2.3.1 and 2.3.3, in a WirelessHART network links may be
dedicated or shared. In particular, using shared links may be suitable when throughput
requirements of devices are low, or when the traffic rate is irregular. In the scheduling
Figure 6.15: Superframes Example II.2.
algorithm proposed in this work the user can determine the percentage of shared slots in
the gigaframe. The scheduler works as follows. The user chooses the percentage of shared
slots, according to the requirements of the particular application. Then the algorithm
creates a superframe for each period of the control loops set (as described in Subsection
6.4.1), and in each superframe it reserves a number of shared time slots corresponding
to the percentage defined by the user. Suppose that the set of the periods of all tasks
contains p elements, denoted by Tk, k = 1, . . . , p. The size of the gigaframe (in time slots)
is then equal to
TMAX = lcm(Tk)/10 (6.3)
where lcm(Tk) is the least common multiple of the periods of all tasks. If Sh% is the
percentage of shared slots, the corresponding number of shared time slots ShN can be
calculated with the following expression
ShN = ⌈x⌉, x = TMAX · Sh%/100 (6.4)
The notation ⌈x⌉ indicates that the quantity x has to be rounded up to the next integer
multiple of 2^(p−1), where p is the cardinality of the set of the periods of all tasks. This
approximation
is required to have a uniform distribution of shared slots within the gigaframe, because
the sizes of superframes should follow a harmonic chain as specified by the WirelessHART
standard (see Subsection 2.3.5). Consider now the lowest period Tl. The algorithm
allocates slots in the basic frame of length Tl, and repeats that basic frame lcm(Tk)/Tl
times, thus forming the whole superframe for the lowest
period. In this basic frame the algorithm reserves a number of shared slots equal to
ShBasicN = ShN / 2^(p−1)    (6.5)
Thus shared slots are equally distributed within that superframe, within the superframes
corresponding to the other periods, within the control superframe, and within the
gigaframe. For each control loop, the scheduler allocates the control job in the shared
part of the control superframe as soon as all the elements in the sensor signal set are
scheduled. The following example shows how the scheduler works.
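The repetition of the basic frame into a superframe can be sketched as follows. Placing the shared slots at the end of each basic frame is an illustrative choice of this sketch, not something the thesis specifies; the point is only that each repetition of the basic frame carries the same number of shared slots, so they end up uniformly distributed over the superframe.

```python
def build_superframe(basic_len, n_repeats, shared_per_basic):
    """Form a superframe by repeating a basic frame n_repeats times.

    basic_len:        basic frame length in slots (the lowest period Tl)
    n_repeats:        lcm(Tk)/Tl repetitions of the basic frame
    shared_per_basic: ShBasicN shared slots reserved in every basic frame
                      (placed at the end here purely for illustration)."""
    basic = (["dedicated"] * (basic_len - shared_per_basic)
             + ["shared"] * shared_per_basic)
    return basic * n_repeats
```

With basic_len = 8, n_repeats = 4 and shared_per_basic = 2, the resulting superframe has 32 slots, 8 of them shared, two in every basic frame.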
Example Shared Slots
Consider the set of control loops shown in Figure 6.16.
Figure 6.16: Set of control loops Example Shared Slots.
Suppose that the user requires a percentage of shared slots equal to 20%. Thus, for this
particular example p = 3, TMAX = 32, and Sh% = 20%. The algorithm calculates the
number of shared slots as described above:
x = TMAX · Sh%/100 = 32 · 20/100 = 6.40
ShN = xq = 8
ShBasicN = ShN / 2^(p−1) = 8/4 = 2    (6.6)
After that it can schedule both communication and control jobs. The communication
superframes, the gigaframe, and the control superframe are shown in Figure 6.17.
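As a check, the bookkeeping of Equations (6.3)-(6.5) can be sketched in a few lines. The task periods used below (80, 160 and 320 ms, i.e. multiples of the 10 ms WirelessHART slot) are an assumption chosen so that TMAX = 32 and p = 3 as in the example above; the division by 10 follows this work's reading of Equation (6.3) as a conversion from milliseconds to slots.

```python
import math
from functools import reduce

SLOT_MS = 10  # WirelessHART time slot length: 10 ms

def shared_slot_counts(periods_ms, share_pct):
    """Shared-slot bookkeeping of Equations (6.3)-(6.5).

    periods_ms: task periods in milliseconds (assumed multiples of SLOT_MS)
    share_pct:  user-chosen percentage Sh% of shared slots."""
    p = len(set(periods_ms))                # cardinality of the set of periods
    lcm_ms = reduce(math.lcm, periods_ms)   # least common multiple of the periods
    t_max = lcm_ms // SLOT_MS               # gigaframe size in slots (Eq. 6.3)
    x = t_max * share_pct / 100             # raw shared-slot count (Eq. 6.4)
    step = 2 ** (p - 1)                     # round up to a multiple of 2^(p-1)
    sh_n = math.ceil(x / step) * step       # ShN
    sh_basic = sh_n // step                 # shared slots per basic frame (Eq. 6.5)
    return t_max, sh_n, sh_basic
```

Under these assumptions, shared_slot_counts([80, 160, 320], 20) reproduces the worked values TMAX = 32, ShN = 8 and ShBasicN = 2.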
Figure 6.17: Superframes Example Shared Slots.
6.5 Boliden Example
Mineral processing is the processing of ores to recover minerals or metals. Processes have
to be optimized to yield the highest possible recovery with acceptable purity (grade).
Numerous steps are involved in achieving the goal of extracting minerals and metals from
ores in their purest form; they are listed in the following.
• Size reduction, to increase the surface area for high reactivity and to facilitate the
transport of ore particles. This process is in most cases wet, and the product is
usually called pulp.
• Concentration of the pulp. The most common method is froth flotation, and it is
used to separate value minerals from the non-valuable gangue.
• De-watering, where solids must be separated from water to produce minerals or
metals, when the mineral processing operations are conducted in the presence of
water.
A more detailed description can be found in [24] and [19].
The general approach in the mineral industry is to use several consecutive flotation cells
to form a flotation bank. The first cells that the pulp enters in a flotation circuit are called
roughers, and the last flotation bank is the scavenger, which produces a concentrate with
a lower grade than the first cells. In the following a simplified description of the flotation
process will be given. The separation of minerals in froth flotation depends primarily on
the differences in the hydrophobicity of the particles (the term hydrophobicity refers to
the physical property of a molecule to repel a mass of water), as they must selectively attach
to air bubbles to be floated. Some minerals can be directly floated, but in most cases
reagents have to be added to make the flotation process possible. There are three main
types of reagents:
• collectors, which make mineral particles hydrophobic;
• frothers, which keep the mineral-loaded bubbles stable;
• regulators, which make collectors more selective toward a specific mineral.
One of the most common manipulated variables that are available for control of the flota-
tion process is the addition rate of reagents. Chemical pH-regulators can also be added to
control the concentrate grade and the recovery of valuable minerals.
Consider the diagram of a froth flotation cell in Figure 6.18. The flotation cell is a tank with
Figure 6.18: Diagram of froth flotation cell.
a pulp feed, an outlet and froth launders to recover the concentrate. The cell also has an
impeller to agitate the pulp and an air feed which is added close to the impeller. The air
flow rate is the main control variable affecting the amount of concentrate produced in the
cell. It offers the finest control of flotation cells. In particular, it has the quickest response
in the cell, when compared to the other control variables. The pulp level of the cell can
also be controlled, allowing indirect control of the froth phase depth.
The Garpenberg process plant is designed to produce four concentrates: zinc, copper, lead
and precious metals. In this work the zinc flotation process is considered, as the zinc is
the most important metal extracted from the Garpenberg site’s ore.
The flow chart of the zinc flotation circuit is shown in Figure 6.19. The pulp enters the
circuit from the feed source, then it is mixed with a recycle stream in the mixer and fed
into the rougher cells. The tailings of the rougher cells are fed into the scavenger cells,
Figure 6.19: Zinc flotation circuit [24].
whose tailings, in turn, are the waste of the circuit. The zinc flotation plant has three
cleaning stages, whose purpose is to increase the grade of the concentrate. The tailings
from the first cleaning stages (FA101 and FA102) are mixed with the scavenger concentrate
and this mixture is the input of a hydrocyclone, which is used for centrifugal separation
of solids and gases from liquids. The coarser particles are reground in a mill prior to
being recycled into the rougher, while the finer particles are recycled immediately. Water
is added in the concentrate launders of each cell to facilitate the concentrate flow, but the
most of the water is added in the cleaners to decrease the solids fraction and increase
the selectivity. All the cells in the circuit have separate air feeds with flow measurements
and control. There are also pulp level control possibilities in each cell (but when cells are
paired into banks, there is only one level control setpoint for each bank).
The three main control variables of the plant are the air flow rate, the froth phase depth
(by means of the pulp level) and the reagent addition rate. However, as the flotation is a
complex process, there are also other control variables that have to be taken into account.
All the controlled variables are listed in Table A.1, where Ts is the sampling interval.
Each controlled variable represents a control loop, i.e. in the plant there is a number of
control loops equal to the number of controlled variables. Each control loop has a PID
controller, with one sensor signal and one actuator signal. However, there are dependencies
between tasks having the same period. Thus this example can be scheduled following the
procedure of Scenario II described in Subsection 6.4.2. In particular, the recycle load
output is used as feed-forward signal to the first pulp level control loop of the rougher
(FA302 LC1). In addition, PU136 FC1 and PU131A FC1 (recycle load) send the setpoint
to the air flow control loops of the cleaner (i.e. FA101 FC1, FA102 FC1, FA103 FC1 and
FA104 FC1).
Following the algorithm described so far, the scheduler creates one gigaframe of length
equal to 8s (i.e. the least common multiple of the periods of all tasks of the plant) and
four superframes, one for each period (1s, 2s, 4s, and 8s), of length equal to the gigaframe
size. Then slots are allocated from the lowest to the highest period.
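The period extraction for this example can be sketched as follows; the period list is transcribed from Table A.1 (sampling intervals Ts, in seconds), and the sketch reproduces the 8 s gigaframe and the four superframe periods stated above.

```python
import math
from functools import reduce

# Sampling intervals Ts (s) of the 29 Garpenberg control loops, from Table A.1
periods_s = ([2] * 9                 # air flow
             + [2, 1, 8, 8, 8, 8]    # pulp level
             + [1, 1, 1]             # pH
             + [2, 1, 8, 4, 1]       # pump level
             + [1, 1]                # water control
             + [2, 1]                # reagents
             + [2, 2])               # recycle load

superframe_periods = sorted(set(periods_s))         # one superframe per distinct period
gigaframe_s = reduce(math.lcm, superframe_periods)  # lcm of all task periods
```

This gives superframe periods of 1 s, 2 s, 4 s and 8 s and a gigaframe of 8 s, matching the schedule described in the text.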
Chapter 7
Conclusions and future works
The proposed scheduling algorithm proves to be a good solution to meet the
feasibility-delay tradeoff, especially as the number of simulations increases, that is,
when a very high number of control loops is scheduled. However, an off-line strategy is
not always the most suitable solution. This scheduling policy is not flexible enough to
handle dynamic situations. In fact, the creation of a new task, or a change in a task
rate, might modify the values of the minor and major cycle, thus requiring a complete
re-design of the scheduling table. Run-time flexibility is a desired feature, as not all
run-time events can be completely taken into account in advance. In addition, off-line
scheduling is fragile during
overload situations, since a task exceeding its predicted execution time could generate a
domino effect on the subsequent tasks, causing their execution to exceed the minor cycle
boundary. Another potential source of inefficiency is the need to fit all activities into
common multiples of the major and minor cycle (so if an activity does not fit exactly into
the schedule it may be necessary to rewrite the code to make it fit).
Theoretically, deriving an optimal off-line schedule satisfying the delay minimization
problem is a hard problem under all realistic assumptions and restrictions, which means
that in the worst case an exponential amount of work appears necessary to determine
whether a feasible solution exists. In other words, in the worst case an exhaustive search
in the space of feasible solutions is necessary in order to determine if a schedulable solution
exists or not.
The performance of the described algorithm has been evaluated by simulations, but it
has not been proved that this scheduling always leads to an optimal solution. Thus an
analytical formalization of the algorithm is required to confirm the experimental results
obtained.
Appendix A
Boliden Control Variables
Table A.1: Controlled variables for the Garpenberg plant.
LOOP CATEGORY Number of loops Loop name Ts (s)
AIR FLOW 9 FA301 FC1 2
FA302 FC1 2
FA303 FC1 2
FA304 FC1 2
FA305 FC1 2
FA101 FC1 2
FA102 FC1 2
FA103 FC1 2
FA104 FC1 2
PULP LEVEL 6 FA302 LC1 2
FA303 LC1 1
FA305 LC1 8
FA102 LC1 8
FA103 LC1 8
FA104 LC1 8
pH 3 FA301 AC1 1
FA304 AC1 1
FA102 AC1 1
PUMP LEVEL 5 PU230A LC1 2
PU231 LC1 1
PU131A LC1 8
PU136 LC1 4
PU006A LC1 1
WATER CONTROL 2 FA100 FC1 1
FA300 FC1 1
REAGENTS 2 BL031 FC1 2
FA300 FC2 1
RECYCLE LOAD 2 PU136 FC1 2
PU131A FC1 2
Table A.2: Garpenberg plant: variables to be scheduled in shared slots.
LOOP CATEGORY Number of loops Loop name
MOTOR ON/OFF 13 FA301
FA302
FA303
FA304
FA305
FA101
FA102
FA103
FA104
BL031
KV6
PU210
PU211
HAND VALVE INDICATOR 8 FA303 GS1
FA303 GS3
FA101 GS1
FA102 GS1
FA102 GS2
FA305 GS1
PU230A GS1
PU230B GS1
CYCLES ON/OFF 7 CY06 Px
Bibliography
[1] HART Communication Foundation - 2.4GHz DSSS O-QPSK Physical Layer Specifi-
cation - HCF SPEC-065, Revision 1.0, 2007.
[2] HART Communication Foundation - TDMA Data Link Layer - HCF SPEC-075,
Revision 1.0, 2007.
[3] HART Communication Foundation - Network Management Specification -
HCF SPEC-085, Revision 1.0, 2007.
[4] HART Communication Foundation - Wireless Devices Specification - HCF SPEC-290,
Revision 1.0, 2007.
[5] http://www.hartcomm2.org
[6] L. Sha, T. Abdelzaher, K.E. Arzen, A. Cervin, T. Baker, A. Burns, G. Buttazzo,
M. Caccamo, J. Lehoczky, A. K. Mok - Real-Time Scheduling Theory: A Historical
Perspective - Real-Time Systems, 28:101-155, 2004.
[7] G. C. Buttazzo - Rate Monotonic vs. EDF: Judgment Day - Real-Time Systems,
29(1):5-26, 2005.
[8] R. Dobrin - Combining Off-line Schedule Construction and Fixed Priority Scheduling
in Real-Time Computer Systems - Ph.D. Thesis, 2005.
[9] A. Cervin and J. Eker - Feedback Scheduling of Control Tasks - 39th IEEE Conference
on Decision and Control, Sydney, Australia, December 2000.
[10] A. Cervin, J. Eker, B. Bernhardsson, and K.E. Arzen - Feedback-Feedforward Schedul-
ing of Control Tasks - Real-Time Systems, 23:25-53, 2002.
[11] S. C. Ergen and P. Varaiya - TDMA Scheduling Algorithms for Sensor Networks
- Technical Report, Department of Electrical Engineering and Computer Sciences
University of California, Berkeley, July, 2005.
[12] Y. Abdeddaim, E. Asarin, and O. Maler - Scheduling with Timed Automata - Theo-
retical Computer Science, TACAS 2003.
[13] IndustrialIT 800xA - System (System Version 5.0) - Automation System Network -
Design and Configuration - ABB Automation Technology Products, September 2006.
[14] IndustrialIT 800xA - System (System Version 5.0) - Communication - Protocols and
Design - ABB Automation Technology Products, June 2006.
[15] IndustrialIT 800xA - System (System Version 5.0 Service Pack 1) - System Guide -
Functional Description - ABB Automation Technology Products, June 2007.
[16] IndustrialIT 800xA - Control and I/O - S800 I/O (System Version 5.0) - Product
Guide - ABB Automation Technology Products, September 2006.
[17] IndustrialIT 800xA - Control and I/O - S800 I/O (System Version 5.0) - Fieldbus Com-
munication Interface PROFIBUS-DP/DPV1 - ABB Automation Technology Prod-
ucts, September 2006.
[18] C. Snickars - Design of a WirelessHART Simulator for Studying Delay Compensation
in Networked Control Systems - Master Thesis, Royal Institute of Technology, 2008.
[19] M. De Biasi - Simulation of Process Control with WirelessHART Networks Subject to
Packet Losses - Master Thesis, Royal Institute of Technology, 2008.
[20] J. Baillieul, and P. J. Antsaklis - Control and Communication Challenges in Net-
worked Real-Time Systems - Proceedings of the IEEE, 95:9-28, January 2007.
[21] N. P. Mahalik - Fieldbus Technology: Industrial Network Standards for Real-Time
Distributed Control - Springer, 2003.
[22] K. J. Astrom, and T. Hagglund - Advanced PID Control - ISA, 2006.
[23] M. Ohlin, D. Henriksson, and A. Cervin - TrueTime 1.5 - Reference Manual -
Department of Automatic Control, Lund University, 2007.
[24] H. Lindvall - Flotation Modelling at the Garpenberg Concentrator Using
Modelica/Dymola - Master Thesis, Uppsala University, 2007.