American Institute of Aeronautics and Astronautics Flight Simulation Technologies Conference

SIMULATION OF A REDUNDANT DIGITAL ASYNCHRONOUS FLIGHT CONTROL SYSTEM IN A MULTI-PROCESS ENVIRONMENT

Benton L. Parris and Les R. Fader Boeing Commercial Airplane

P.O. Box 3707 MS 19-MJ Seattle WA 98124

Abstract

A digital flight control system with redundant Line Replaceable Units (LRUs) and data busses which all run asynchronously with respect to each other could not be accurately modeled with traditional methods. Simulation of such a system requires variable time steps and imposes a much higher computational load than single-thread simulations. This paper describes a variable time step method and special task schedulers which manage the execution of asynchronous tasks. Also described is the methodology of distributing the tasks in a parallel processor environment. The ability to alter the parameters which affect the asynchronous nature of the system allows one to study aspects of digital flight control systems that could not be studied in traditional simulation environments or in hardware bench studies.

Introduction

Traditional airplane simulations consist of a flight control system, control surface and actuator dynamics, other subsystem models, aerodynamics models, airplane dynamics, and a model of the flight environment. Most of the simulated systems and the flight environment are continuous (or analog) and are modeled using ordinary differential equations. The flight control system is typically digital with multiple sample rates. These types of aircraft simulations are used extensively to study and refine aircraft and flight control system designs.

Digital flight control systems usually contain redundant signal paths through different LRUs that carry pilot commands to control actuators. These redundant signal paths are typically not modeled because their contribution to the overall airplane performance in a healthy system is minimal. Including them often prevents a simulation from running in "real time" due to increased computational loads.

A major design consideration for digital flight control systems is the mechanism used to control data flow and the scheduling of events. Systems can be designed with a centralized controller that schedules all events in the system. Distributed systems may have many self-contained controllers that only schedule events in a limited part of the system. In distributed systems with multiple controllers that are strictly autonomous, identical events can happen at different times in different signal paths. The ability to control these differences in event times is critical so that the simulation user can modify and observe the asynchronous behavior of the system.

This paper describes software and methods that allow careful control of event timing in the simulation of an airplane with a redundant asynchronous flight control system.

Scope and Methods of Approach

Three major topics are covered in this paper: (1) a set of variable time step schedulers used to control asynchronous events, (2) user-specified data that defines the asynchronous behavior of the system, and (3) the distribution of models across multiple simulation computer processors to improve simulation performance.

Copyright © 1994 by THE BOEING COMPANY. Published by the American Institute of Aeronautics and Astronautics, Inc. with permission.

Flight Control System Modeled

Figure 1 illustrates the flight control system being modeled. This system utilizes redundant paths of digital signals. The devices which process these signals are called Line Replaceable Units (LRUs). The signals originating from the pilot control inputs and various aircraft sensors are transmitted on multiple asynchronous data busses and read from the busses into multiple flight control computers. The outputs of the flight control computers are transmitted on the asynchronous data busses and read from the busses into actuator controllers which close the analog loop around the control surface actuators. Each LRU has its own clock that determines when events happen within the LRU. Each data bus transmitter (one for each LRU) has its own clock that determines when it is permissible to transmit. The system is asynchronous because no central controller exists to synchronize LRUs or transmitters.

Figure 1 - Flight Control System Modeled

Modeling Events

The two basic types of events modeled are those within an LRU (referred to as tasks) and those that happen between LRUs (referred to as transmissions). All tasks within a single LRU are controlled by that LRU's clock and are executed in a defined sequence. No central controller exists to synchronize LRU or bus transmitter clocks. Therefore, tasks within one LRU are asynchronous with respect to tasks in another LRU and transmissions on one bus are asynchronous with respect to transmissions on another bus.

In an asynchronous system, events do not occur in a predetermined sequence. A mechanism must be built into the simulation to allow for all possible sequences of events. This mechanism must provide repeatability or changeability of the sequence of events based upon predefined parameters. A user must have control over this set of parameters to be able to study effects of variations in asynchronous event timing.

The two basic sets of parameters controlled by the user in this simulation are clock rates and start times associated with each LRU and transmitter. Controlling the values for these parameters gives the user complete control over the sequence of events.

A common time must exist to compare results of asynchronous events. All events in typical simulations are directly tied to a common time variable. In this simulation, an independent reference time is defined that is external to all LRU and transmitter clocks and is referred to as base time. All LRU and transmitter clock rates are defined relative to base time.

Figure 2 illustrates LRU task and transmitter event timing, the parameters controlled by the user and their interrelationship.

Figure 2 - LRU Task And Transmitter Time Lines

Each LRU clock and transmitter clock has a start time (STLi and STTi) and a clock rate (CRLi and CRTi) set by the user. A start time indicates when an LRU or transmitter will begin functioning relative to base time. The clock rate is a ratio of clock time to base time. For example, an LRU clock rate of 1.0 results in time on that LRU being identical to base time, and a clock rate of 1.01 would force that LRU to run 1% faster than base time. All LRUs and transmitters may be set to start at different base times and run with different clock rates.
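For illustration, the relationship between start times, clock rates, and task durations can be sketched in a few lines of modern code. The function below computes the base-time instants at which an LRU's tasks begin, using the di × CRL scaling described later in the paper; the function and variable names are ours, not the simulation's.

```python
def task_start_times(start_time, clock_rate, durations, n_frames=1):
    """Base-time instants at which each task in an LRU begins.

    start_time -- STL: base time at which this LRU starts running
    clock_rate -- CRL: ratio of LRU clock time to base time
    durations  -- d_i: task durations as measured by the LRU clock
    """
    times = []
    t = start_time
    for _ in range(n_frames):
        for d in durations:
            times.append(t)
            t += d * clock_rate  # a task's base-time span is d_i x CRL
    return times
```

With a clock rate of 1.0 the task boundaries coincide with base time; any other rate skews every boundary by the same ratio, which is the controlled asynchrony the user parameters provide.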

The user also specifies the number, order and duration of tasks within each LRU. The task duration (di) is the time allowed to execute that task as measured by the LRU clock. The sum of the durations for all the tasks in an LRU represents the frame time for that LRU (FTi). The continuous aspects of the simulation, like the airplane and control actuator dynamics, are considered LRUs with a clock rate of one.

Similarly, for each transmitter the user must specify what messages are to be transmitted, the order in which messages are to be transmitted, the wordstrings that make up each message, the duration of, or allowable time to transmit, each wordstring, the time between transmissions (TIi), and any other transmitter protocol parameters.

Control of Time and Events

Control of events in simulations is based on the control of some time base and when events occur in that time base. Time for simulations of analog systems is controlled by a fixed time step. Procedures for integrating ordinary differential equations with a fixed time step are well known. The selection of a fixed time step is a trade between simulation execution speed and the fidelity of the dynamics being modeled. For simulations of digital systems, it is typical to select a fixed time step as a function of sampling frequencies. In both types of simulations it is possible to select a fixed time step that will model the necessary dynamics and discrete events while maintaining an acceptable simulation execution speed.

In this simulation, the interval between asynchronous discrete events varies as time advances and may be as small as one nanosecond. A fixed time step would need to be small enough to see all events. A one nanosecond fixed time step would result in a simulation execution speed too slow to be practical. Therefore, a variable time step with a resolution of one nanosecond was chosen. For example, the time step may at times be one nanosecond and at other times be 10 milliseconds depending on the interval between asynchronous events. It should be noted that although base time changes with a variable time step, tasks within an LRU use a fixed time step for integration. This fixed time step is the frame time of that LRU relative to the LRU clock.
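A minimal sketch of the variable step, assuming base time is kept as an integer nanosecond count (the representation is our assumption; the paper only states the one-nanosecond resolution):

```python
def advance_base_time(base_time_ns, pending_event_times_ns):
    """Jump base time directly to the earliest pending event.

    Keeping times as integer nanosecond counts lets one step be
    1 ns and the next be 10 ms without floating-point round-off.
    """
    next_time = min(pending_event_times_ns)
    if next_time < base_time_ns:
        raise ValueError("pending events must not be in the past")
    return next_time
```

The step size is therefore not chosen in advance; it falls out of the spacing of the pending asynchronous events.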

This simulation incorporates two types of schedulers to control events. A task scheduler controls the advance of base time and when LRU tasks run. A bus scheduler controls when the bus model runs relative to the task scheduler. The bus model controls when each transmission occurs according to bus protocol and timing parameters.

Task Schedulers This simulation is partitioned into separate processes. Each process contains a copy of the task scheduler that will execute tasks for one or more LRUs.

The coordinated effort of all task schedulers is to execute each task in each LRU in an order that represents the start times, clock rates, and task durations that were specified by the user.

Coordination between all task schedulers is accomplished by posting values to areas in common (or shared) memory. The two key areas are the one reserved for the posted table and the one reserved for the task table.

The posted table contains the name of the task that is to run next, the time that task is to begin (posted time) and the scheduler associated with that task.

The task table is used to determine which task of all tasks in the system to post in the posted table. An example of the task table is shown below.

Task Table

LRU   Next Task   Time to Run   Scheduler
BAP   aero        0.040         0

The table has one entry for each LRU in the system. An entry contains the task to run next in that LRU, the time (in terms of base time) that task is to begin, and the scheduler which will execute that task.

The task scheduler in each process continually monitors the scheduler parameter in the posted table. When its unique value is posted it begins the execution phase of the posted task. The scheduler coordinates with the bus scheduler, updates base time and determines the task and time to run that should be posted next during the execution phase.
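The scan of the task table can be sketched as follows. The field layout and the extra LRU rows are illustrative (only the BAP/aero row comes from the paper's example table).

```python
# One row per LRU: (next task, base time it is to begin, scheduler id).
task_table = {
    "BAP":   ("aero",         0.040, 0),   # the row shown in the paper's example
    "LRU-1": ("control_laws", 0.010, 1),   # hypothetical additional rows
    "LRU-2": ("control_laws", 0.012, 2),
}

def post_next(task_table):
    """Pick the earliest task across all LRUs and build the posted-table
    entry: task name, posted time, and the scheduler that will run it."""
    lru = min(task_table, key=lambda name: task_table[name][1])
    task, time_to_run, scheduler = task_table[lru]
    return {"task": task, "time": time_to_run, "scheduler": scheduler}
```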

Execution Phase

The task schedulers coordinate communication between tasks and the bus model during the execution phase illustrated on Figure 3.

Figure 3 - Execution Phase

A task scheduler begins its execution phase by waiting until the busses have come up to the current posted time. Once it is determined that the busses are up to date, base time is updated. Then that part of the current task which performs communication with the busses is executed. The bus model is therefore not being executed while a task is reading bus data. This ensures proper coordination of communications between the bus and tasks.

After the bus communication portion of the task is completed, the task scheduler determines the next task that will be run on the current LRU. The task scheduler then updates this LRU's row in the task table with this task and its time to run. The time to run is determined by multiplying the duration of the task by the clock rate for the LRU (di × CRLi).

Then the table is scanned to determine the next task, across all LRUs, that is to be executed. Data recording for this process is executed and the next task, time to run, and corresponding task scheduler are posted.

Posting the next task allows the associated task scheduler to begin the execution phase and allows the bus model to run.

The remaining computations in the current task are executed after posting.

This structure of the execution phase assures that the bus model and a task do not try to access identical shared memory at the same time. The structure also takes advantage of the parallel processing environment discussed later.
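The ordering constraints of the execution phase can be made concrete with a small sketch. The helper names are invented for illustration, and the Recorder stub merely logs the order in which the steps run.

```python
class Recorder:
    """Stub that logs every method call, so the required ordering of
    the execution phase can be checked."""
    def __init__(self, log):
        self.log = log
    def __getattr__(self, name):
        return lambda *args: self.log.append(name)

def execution_phase(scheduler, task):
    """One pass of a task scheduler's execution phase (Figure 3)."""
    scheduler.wait_for_busses()          # busses must reach posted time first
    scheduler.update_base_time()
    task.run_bus_communication()         # TASK(TRUE): exchange data with busses
    scheduler.update_task_table()        # next task for this LRU and its time
    scheduler.post_next_task()           # unblocks the next scheduler and the bus
    task.run_remaining_computation()     # TASK(FALSE): overlaps other work
```

Posting before the remaining computation is the key ordering: it lets another scheduler, and the bus model, run in parallel with the tail of the current task.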

Bus Scheduler The bus model and bus scheduler run in the same process. The bus scheduler coordinates the execution of the bus model in relation to the task schedulers. Because transmissions and bus protocol sequencing happen at a much higher rate than typical LRU tasks, we let the bus model run freely between LRU tasks resulting in improved simulation speed. The bus scheduler coordinates with task schedulers using the dynamically changing value of posted time. The bus is allowed to execute all transmissions and all protocol sequencing from the current base time up to the posted time.

Bus Model The bus model controls bus transmissions according to the transmission timing and parameters which define the sequencing protocol. Figure 4 illustrates the two major functions of the bus model. The first function, protocol sequencing, determines the next transmitter on each bus and when to begin transmitting. The second function, message transmission, transmits the next message on each bus. Transmissions will be interrupted if the time to transmit a message exceeds posted time.

Figure 4 - Bus Model
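The per-bus update of Figure 4 might be sketched as below, at wordstring granularity and in integer time ticks. The dict layout is an assumption made for illustration, not the simulation's actual data structure.

```python
def update_bus(bus, posted_time):
    """Advance one bus toward posted time.

    bus holds 'time' (current bus time), 'next_start' (earliest time the
    next transmission may begin, per protocol sequencing), and 'queue'
    (wordstring transmission durations for the pending message).
    """
    # Protocol sequencing: wait out the required quiet time on the bus.
    if bus["time"] < bus["next_start"]:
        bus["time"] = min(bus["next_start"], posted_time)
    # Message transmission, one wordstring at a time.
    while bus["queue"] and bus["time"] >= bus["next_start"]:
        d = bus["queue"][0]
        if bus["time"] + d > posted_time:
            return            # interrupted: would run past posted time
        bus["time"] += d      # wordstring transmitted
        bus["queue"].pop(0)
```

A transmission that would cross posted time is cut off and resumed on the next bus update, matching the interruption rule stated above.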

The user defined bus protocol parameters determine the next active transmitter. The purpose of these parameters is to avoid two or more transmissions on the bus at the same time and to set the transmission frequency of each transmitter. For example, an ARINC 629 bus has the synchronization gap, terminal gap, and transmit interval as bus protocol parameters. The synchronization gap and terminal gap force a specified "quiet" time on the bus between each transmission. The transmit interval determines the frequency of transmissions, that is, the time between transmissions of a single transmitter. The next transmitter to transmit is the first to satisfy all three of the protocol parameters.

Figure 5 illustrates the updating of the busses. One bus at a time is advanced through the required protocol sequencing up to where a message is to be transmitted. If the transmission start time is less than posted time then transmission of the message begins.

Figure 5 - Updating Busses

Each transmission is a message containing a group of wordstrings. A wordstring is an array of variables. The time required to transmit each wordstring across the bus is specified by the user. Only one message is transmitted on a bus at a time. When a message is finished a prescribed "quiet" time must elapse before another message may be transmitted.

While a message is being transmitted, time for that bus is increased by the wordstring transmission time as the wordstring is transmitted. If the next wordstring to transmit would exceed posted time the next bus is selected. If the message completes before posted time the current bus is advanced up to the next transmission time.

It is important to note that each transmitter is autonomous to all other transmitters on that bus and all transmissions on a single bus are considered asynchronous. In a multiple bus system, all transmissions are asynchronous to all other transmissions.

Once all busses have been advanced to posted time, the bus model returns to the bus scheduler.

Simulation Mode Control

This simulation uses a standard definition of operational modes. These modes are defined as follows:

IC - This mode is for setting all models to their initial conditions. It is also the mode in which trimming is performed. Base time is zero in this mode.

COMP - This is the mode in which the dynamic models with time dependent integrators are running and base time advances.

HOLD - This mode is for holding the simulation in time. It stops execution of all models and associated updates of the data.

TRIM - This is not a separate mode but a state of the IC mode in which software is run to null airplane accelerations.
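The three-parameter transmitter arbitration described in the Bus Model section might be sketched as below. This is a simplified illustration, not a faithful ARINC 629 implementation; real 629 arbitration involves more timing machinery, and the field names here are our own.

```python
def next_transmitter(transmitters, bus_idle_since, now):
    """Pick the next transmitter: the first one satisfying all three
    protocol parameters -- synchronization gap, its own terminal gap,
    and its transmit interval (time since its last transmission)."""
    quiet = now - bus_idle_since
    ready = [t for t in transmitters
             if quiet >= t["sync_gap"]
             and quiet >= t["terminal_gap"]
             and now - t["last_tx"] >= t["interval"]]
    # With unique terminal gaps, the due transmitter with the shortest
    # terminal gap becomes ready first, so it wins the bus.
    return min(ready, key=lambda t: t["terminal_gap"], default=None)
```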

The model code and task schedulers execute in different ways depending on which mode is in effect.

Although the heart of each model is executed in COMP mode, what is performed in IC mode, and how, is critical to generating a valid starting point for COMP mode.

Mode Transitions

Since the models can be executed in different processes on separate CPUs, a mechanism must exist to ensure that each model does not begin its IC or COMP calculations until all models are ready to begin that mode. One model must not be in the middle of IC calculations while another is doing COMP calculations. The models must "synch-up" before beginning their calculations in either mode.

Figure 6 illustrates how the task and bus schedulers control execution of all models during the transition from IC to COMP and the transition from COMP to IC. Individual bits in the icdone and icmode words and individual elements of the ictoco and cotoic arrays are assigned to a unique scheduler. The icdone and icmode words and ictoco and cotoic arrays all reside in shared memory so that they are accessible by all processes.

Transition from IC to COMP While in IC, each scheduler clears a bit in the icdone word and sets an element of an integer array called ictoco to one.

When the scheduler observes that COMP is true (COMP mode is in effect for this process), icmode not equal to zero (not all processes have reached this point), and icdone equal to zero (all processes have completed the IC "synch-up"), it sets its icmode and icdone bits and sets its element of the cotoic array to zero.

Figure 6 - Mode Transition Synchronization

The scheduler then waits until all processes have set their element of the ictoco array to zero. This indicates that all processes have reached this point and so the icmode word is cleared and the icsync flag is set. This allows the processes to begin their COMP execution.
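One reading of this flag protocol is sketched below. The bit layout, and the point at which each scheduler clears its ictoco element, are our assumptions, since the original figure is only partially legible.

```python
N = 3  # number of scheduler processes (illustrative)

class Shared:
    """Stand-in for the shared-memory words and arrays."""
    def __init__(self):
        self.icdone = 0          # one bit per scheduler
        self.icmode = 0          # one bit per scheduler
        self.ictoco = [1] * N    # set to one while in IC
        self.cotoic = [1] * N
        self.icsync = 0

def enter_comp(shared, i):
    """Scheduler i performs its half of the IC-to-COMP handshake."""
    shared.icmode |= 1 << i
    shared.icdone |= 1 << i
    shared.cotoic[i] = 0
    shared.ictoco[i] = 0   # assumption: signals this process reached the sync point

def all_synced(shared):
    """Once every ictoco element is zero, clear icmode and raise icsync
    so all processes may begin their COMP execution."""
    if all(v == 0 for v in shared.ictoco):
        shared.icmode = 0
        shared.icsync = 1
        return True
    return False
```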

Transition from COMP to IC

When the scheduler observes that IC is true (IC mode is in effect for this process) and icsync is equal to zero (just out of COMP and processes not all here yet) it then waits until all processes have set their element of the cotoic array to zero. This indicates that all processes have reached this point and so the icsync flag is reset. This allows the processes to begin their IC execution.

Task Scheduler and Simulation Modes

Figure 7 illustrates the operation of a task scheduler in IC, HOLD, and COMP modes.

During IC mode each copy of the task scheduler in each process runs independently from the others and does not use the posted table in shared memory to determine when and what to execute. The task scheduler first zeros base time. To prepare for COMP mode, it initializes the task table and posts the next task, time to run and corresponding task scheduler. The task scheduler then executes each of its tasks, first for bus communication then for the remainder of task computations. The data recording function is then invoked.

Figure 7 - Task Scheduler and Simulation Modes

The task scheduler does nothing in HOLD mode.

The COMP mode is the main mode of operation and the behavior of a task scheduler in this mode is described in the Control of Events section above.

Bus Model and Simulation Modes Figure 8 illustrates the operation of the bus model in IC, HOLD, and COMP modes.

Figure 8 - Bus Model and Simulation Modes

During IC, the bus model initializes the event timing and bus protocol parameters. It then transmits all wordstrings on all busses without coordinating with the task scheduler. It also sets the bus time to zero.

The bus model does nothing in HOLD mode.

In COMP, if the bus time is less than posted time the bus model is executed. The behavior of the bus model is described in the Control of Events section above. The bus time is set equal to posted time after the bus model is executed.

Parallel Processing Environment

The computers used to run this simulation utilize parallel processors. To take advantage of this parallel environment, processes are partitioned among the processors according to task and event durations, parallelism in the real system, memory requirements and computational loads. Figure 9 illustrates one way this can be accomplished.

Figure 9 - Process Distribution vs Memory Usage

Distribution Based on Task and Event Durations LRUs which have smaller frame times than other LRUs need to run more frequently. Bus transmitters typically transmit more frequently than the LRU tasks run. The high frequency tasks and transmitters are assigned to processes and processors separate from the lower frequency tasks to minimize idle time for the higher frequency tasks.

Distribution Based on Parallelism in the Real System The system being modeled has a combination of LRUs which are separate computers running in parallel. An attempt is made to preserve this characteristic of the system. Each LRU is assigned to a separate process. The processes are grouped on processors such that they execute simultaneously as they would in the real system. The simultaneous running characteristic cannot be completely preserved due to the limited number of simulation computer processors available. However, parallelism is still guaranteed by the design of the task and bus schedulers.

Memory Requirements Distribution Each simulation computer processor has a fixed amount of associated local memory. When the processes assigned to a particular processor are so large that they exceed the associated local memory, global memory is assigned to accommodate the excess. When global memory is used up, the virtual memory mechanism will begin to swap to disk. Reverting to global memory and ultimately to disk incurs progressively more severe performance penalties. The processes are therefore distributed such that exceeding local memory is minimized.

Computational Loads Distribution The aerodynamics, engines, gear and other aircraft systems which are not the main focus of this simulation are grouped together in relatively large tasks and thus require longer execution times than the typical LRU or bus model. LRUs require more execution time than the bus model. An attempt is made to group those processes with lower execution times on processors separate from the processes with higher execution times. Again the goal is to minimize idle time for the processes with the lower computational load.
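The grouping idea can be illustrated with a simple greedy placement (heaviest process first, onto the least-loaded processor). The process names and costs are hypothetical, and this is only a sketch of the goal, not the actual placement procedure used for the simulation.

```python
def distribute(processes, n_processors):
    """Greedy longest-processing-time placement: assign each process,
    in descending order of execution time, to the currently
    least-loaded processor so that heavy models do not starve light ones.

    processes -- list of (name, execution_time) pairs
    """
    loads = [0.0] * n_processors
    placement = [[] for _ in range(n_processors)]
    for name, cost in sorted(processes, key=lambda p: -p[1]):
        i = loads.index(min(loads))   # least-loaded processor
        loads[i] += cost
        placement[i].append(name)
    return placement, loads
```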

Conclusions

This work is a significant advance in the technology of simulations for studying digital asynchronous flight control systems. It is being used to investigate and validate the design of a fly-by-wire flight control system for a modern transport airplane. The ability to alter the parameters which affect the asynchronous nature of the system allows the study of a wide range of possible variations in system characteristics.

This simulation capability can be used to study aspects of digital flight control systems that could not be done in traditional simulation environments or in bench studies of the hardware.

Acknowledgements

We wish to acknowledge members of the group which developed the software for this simulation. In addition to us, the members include Kipp Howard, Bruce Zunser, and Chris West.

We also wish to acknowledge the contribution of the flight controls engineers who supplied the requirements and model definitions for this simulation. Special thanks must go to Dana Olson for the basis of the bus model, to Ron Riter for his many valuable suggestions regarding the task schedulers, and to Larry Hazard for his contributions to the design of the software.
