Advanced interface for UAV (Unmanned Aerial Vehicle) Ground Control Station

Francesca De Crescenzio*, Giovanni Miranda†, Franco Persiani‡, Tiziano Bombardi§, University of Bologna, Italy

[Abstract] The research presented in this paper focuses on an advanced interface for a UAV Ground Control Station whose aim is to guarantee a high level of situation awareness and a low level of workload in the control and supervision of unmanned vehicles. The interface is based on a touch screen, used to command the UAV by means of high level commands, and a 3D Virtual Display, which provides a stereoscopic and augmented visualization of the complex scenario in which the vehicle operates. Good levels of situation awareness are also guaranteed by audio feedback that informs the operator about any change in the operational situation, as well as when the mission objectives have been accomplished. Test results have revealed that this interface provides the operator with a good sense of presence and enhanced awareness of the mission scenario and the aircraft under his control.

I. Introduction

TECHNOLOGICAL developments in the design of Unmanned Aerial Vehicles (UAVs) are opening the way for their exploitation in a wide range of applications. The growing autonomy of such vehicles and the complexity of the environments in which they are expected to operate bring a new vision of the future role of humans, leading to a new generation of human-machine interfaces1-5. Researchers are considering the limitations of current interfaces, which are still too ineffective, inefficient and inadequate to allow the operator to deal with the expected levels of vehicle autonomy. The new interfaces should overcome such limitations4-6. For that reason, the design process must consider the human factors associated with UAVs. Specifically, human factors such as attention, perception and cognition in managing the vehicle - in its normal operating state or in abnormal situations - must be considered. High levels of situation awareness would allow the operator to become aware of the relevant elements in the operational space and their relationships, behave proactively in order to optimize UAV system performance, and take actions to forestall possible future problems1,2,3. New interfaces should also require a low level of operator workload. The aim is to allow operators to manage the mission easily, even when controlling more complex situations.

Recent papers present interfaces designed to satisfy such requirements4-8 and to allow the operator to deal with different degrees of vehicle autonomy. In each paper the architecture of the interface, the tests and the results are described. The interface detailed in ref. 4 comprises three display formats and supports a mouse and keyboard as input devices. A Situation Awareness format provides the operator with a dynamic, large-scale presentation of the mission area. A UAV status format provides information about the health and status of the vehicle. Finally, a multifunction format is used to manage most of the mission events. In order to test such an interface, preplanned missions are simulated and managed by tester operators. They have to monitor the progress of the vehicle flight and adjust the path if unplanned events occur. The different levels of system automation and the time required to accomplish a task are the experimental variables. The results reveal that the testers maintain a good level of situation awareness and that they prefer a level of automation that allows them to select one solution among several produced by the system when path adjustments are needed.

Refs. 6, 7 and 8 describe an interface and present test results. Such an interface combines a virtual representation, 2D visualization of the operational space and flight parameters, and also supports different input devices, such as a joystick, motion tracker and voice recognition. In this case the tester operator has to supervise the acquisition of imagery and the designation of target objectives while watching out for pop-up warning signals (i.e., the appearance of other vehicles and changes in a mission mode indicator). The timing and placement of the pop-ups are planned by an experiment designer before the experiments start. Again two levels of automation, called management by consent and management by exception, are considered.

* Assistant Professor, II Faculty of Engineering, Forlì, [email protected].
† PhD student, II Faculty of Engineering, Forlì, [email protected].
‡ Senior Professor, II Faculty of Engineering, Forlì, [email protected].
§ Computer Programmer, II Faculty of Engineering, Forlì, [email protected].

AIAA Modeling and Simulation Technologies Conference and Exhibit, 20-23 August 2007, Hilton Head, South Carolina

AIAA 2007-6566

Copyright © 2007 by F. De Crescenzio, G. Miranda, F. Persiani, T. Bombardi. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.

The results show that the virtual display improves operator situation awareness, and that the testers typically respond to the unexpected events themselves rather than letting the action occur automatically.

An immersive interface is presented in ref. 5. It provides the operator with a comprehensive view of the overall mission, including all the relevant geographic and political features of the area in which the unmanned vehicles are operating, and their detailed status. In this case, the tests are carried out to evaluate the effectiveness of a remote control system called a VR-aided teleoperation system. The results reveal that such a control system greatly improves operator situation awareness and performance when compared with a camera-aided teleoperation system.

In this paper we describe the design, development and testing of a novel interface for UAV ground control stations. The aim is to allow the operator to manage UAV missions by means of high level commands with a high level of operational situation awareness. The interface comprises a touch screen and a 3D virtual display. The touch screen is the main hardware that the operator uses to send commands to the vehicle. The virtual display provides a stereoscopic representation of the scenario and a 2D visualization of flight parameters. An important feature of the interface is its audio feedback. The interface tests have been executed by simulating UAV missions managed by human testers. During such missions, they had to ensure a safe flight for the vehicle, which could be affected by the random appearance of obstacles of different shapes and dimensions. The level of automation has been varied, considering three different modes to readjust the path: manual, semi-automatic and fully automatic. In order to test the interface, we have connected it to a vehicle model and have also developed a path planning algorithm that calculates safe and efficient paths. Such paths do not collide with obstacles and are consistent with the vehicle performance constraints.

In the next section, we detail the architecture of the interface. In the third section we describe how the tests were executed and present their results. In the last section we provide conclusions.

II. Human-Machine Interface

In defining the requirements that an innovative interface has to satisfy, we first analysed the current interfaces for operating UAVs and the way they present information to the operator. Frequently a large quantity of useless information about the mission state is displayed on several monitors. In this environment, the operator is often not supported in selecting the information he needs as the situation evolves during the mission. Moreover, when appropriate feedback signals are not displayed by the system, the operator does not know if his requests have been received, if the actions are being performed properly, or if problems are occurring1,2,3,9,10. Based on the level of automation implemented on a specific system, such interfaces include several control systems and input devices, such as HOTAS (Hands On Throttle And Stick) commands, pedals, mouse and keyboard. Currently the operator guides some vehicles manually by using a joystick, stick and throttle. In other cases the control is partially automated, so that the operator selects the desired parameters, for example the velocity to be held, the altitude to be reached, or a change of heading. In other cases the control is fully automated, so that the vehicle accomplishes a preplanned mission without any interaction with the operator1,2,3,9,10. It has been demonstrated that a high level of automation can have desirable and undesirable effects on both operator workload and situation awareness1-4,10-14. One of the desirable effects of automation is that it can greatly improve the performance of the system by carrying out tasks that are performed poorly by a human operator, or by reducing operator workload during high task-loading conditions. Undesirable effects can occur with both low and high levels of automation. Low levels of automation impose the highest and most continuous level of workload on the operator, whereas high levels of automation lead to a decrease in operator situation awareness because of implementation strategies that remove him from involvement in system operation, leading to out-of-the-loop performance decrements. Such effects, which depend on the level of automation, can undermine the effectiveness of human-machine performance in advanced systems1-4,11. An innovative interface, able to deal successfully with such critical issues, should satisfy the following requirements.

• Low level of operator workload: the operator should have to spend few resources, in terms of time and cognitive effort, to command the vehicle, manage the mission and analyse the information coming from the onboard systems.

• High level of operator situation awareness: the operator should be provided with a comprehensive view of the overall mission scenario, in order to understand the mission state and the detailed vehicle state during the mission, enabling him to score and order all the information needed to develop the optimal command sequence13.

Such an interface should facilitate the evolution of the operator's role from pilot or controller to supervisor, who manages the mission and commands the vehicle by means of high level commands. The high level commands should derive from natural human interaction, that is, they should arise from simple and instinctive human gestures and vocal utterances. They are directives (classified as Goals, Constraints and "If-Then Rules") that the system understands and translates into detailed actions13, such as path planning actions, payload management actions and, at a lower level, operation of the flight control system. On the airborne segment, the onboard computers are able to gather information from the environment and process the collected data in order to take the right action. Thus, the vehicle is not a pure executor of the operator's direct commands; it can be an autonomous system with which the operator interacts to reach a high level goal13.

Figure 1. UAV Supervisory Control

Figure 1 summarizes these concepts. The simulation system architecture includes three workstations: SIM1, SIM2 and SIM3 (Fig. 2).

Figure 2. Interface for UAV Ground Control Station.

On SIM1 the following are implemented:
• the command panel software;
• the path planning algorithm for the translation of operator commands;
• the mission scenario database.

On SIM2 the following are implemented:
• the software for the stereoscopic visualization;
• the mission scenario database.

On SIM3 the following is implemented:
• the vehicle dynamic model.

In the following sections the command panel, the path planning algorithm and the virtual display are described.

A. Command panel

The command panel is based on a touch screen, which allows fast, direct position control. The operator is able to point at a location on the screen and associate it with a specific action. Thus, the commands are fast, simple and do not need great accuracy. The screen is provided with a north-up oriented navigational map of the geographical space that gives the operator a 2D view of the mission area. The screen is also provided with two columns of keys: manipulation keys, which are used to manipulate the map, and command keys, which are used to send commands to the vehicle (Fig. 3). Such commands are composed of three instructions. Each command key is associated with specific information, so that their combination results in the operator's command list. The basic command is "Mission Task to Target at minimum Priority". Each instruction is described below:

1. Mission Task: the vehicle can be commanded to fly to a destination point or to survey a target area.
2. Target: it can be a destination point or the centre of a target area.
3. Priority: it indicates the way the task has to be accomplished. The vehicle can follow the shortest, the fastest, or the most fuel-efficient path.

The command keys are divided into three different blocks. In the first block we have placed the keys that identify the specific mission tasks. Using the keys in the second block, the operator can select intermediate points through which the vehicle is obliged to fly (Primary Points) and mission targets. In the last block the operator can select the mission priority (i.e., minimum mission time, fuel consumption, or path length). There is also a window that allows the operator to see the coordinates of the last point commanded. The whole command is obtained by selecting one key per block, one at a time, as a piece of a verbal command (i.e., Mission Task to Target at minimum Priority). The placement of the command keys is consistent with the sequence of the command actions: the keys are placed in the same column and in the same order the operator would expect them. The time to send commands is short thanks to the position of the keys. Moreover, the keys are spaced far enough apart to avoid blunders or unintentional actions.
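As a concrete illustration of how the three instruction blocks combine into a single directive, the sketch below models the command composition described above. The class and field names (MissionTask, Priority, Command) are hypothetical placeholders for illustration, not the actual command panel software.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the three command-key blocks described above.
class MissionTask(Enum):
    FLY_TO = "fly to"          # reach a destination point
    SURVEY = "survey"          # scan a target area

class Priority(Enum):
    MIN_TIME = "minimum time"
    MIN_FUEL = "minimum fuel"
    MIN_LENGTH = "minimum path length"

@dataclass
class Target:
    latitude: float
    longitude: float

@dataclass
class Command:
    """One high level command: 'Mission Task to Target at minimum Priority'."""
    task: MissionTask
    target: Target
    priority: Priority

    def as_sentence(self) -> str:
        # The verbal form of the command, also usable as audio feedback.
        return (f"{self.task.value} {self.target.latitude:.4f}, "
                f"{self.target.longitude:.4f} at {self.priority.value}")

# Example: command the UAV to fly to a point while minimizing mission time.
cmd = Command(MissionTask.FLY_TO, Target(44.22, 12.05), Priority.MIN_TIME)
print(cmd.as_sentence())
```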

Figure 3. Command Panel

On the right side of the map, we have placed the manipulation keys. Such keys allow the operator to translate and scale the map in order to point at mission objectives or primary points more accurately. The operator can also rotate the map so that the direction of motion of the UAV on the display is compatible with the operator's mental model of how the UAV actually moves in the physical world.

Feedback that notifies the operator about the commands he sends and their effects is one of the main features of the command panel. The first feedback is visual: all information about the path calculated by the path planning algorithm is written to a text file and its 2D representation is displayed on the map. The second feedback is a voice communication, represented by the sentence that the operator would say if he had sent a vocal command.

Figure 4. Trajectory feedback

So, if the operator commands the UAV to fly to a destination minimizing the mission time, he will receive the audio feedback "fly to … at minimum time" and see the calculated path between the vehicle position and the target on the screen (Fig. 4). Such feedback provides the operator with all the information he needs to accept or reject the suggestion of the system. If the operator does not confirm the path, he can insert primary intermediate points or modify the mission objective. Otherwise, if he approves it, the path is sent to the vehicle and is visualized on the 3D virtual display. The operator thus has two visualizations of the vehicle flight: a stereoscopic one through the 3D virtual display and a 2D one through the touch screen (Fig. 5).

Figure 5. Path visualization

B. Path Planning Algorithm

The high level command has been defined as a guidance command by which the operator directs the UAV to accomplish a mission or part of it. The use of this command is based on the automatic generation of a path that is sent to the vehicle as a list of waypoints; each waypoint is associated with a computed altitude and a computed velocity vector. The path is calculated in real time by an "intelligent" planning algorithm based on obstacle avoidance and UAV performance observance strategies. The UAV performance parameters, such as descent and climb rates, flight path angles, airspeed, turn radius and altitude, are provided by means of look-up tables as upper and lower bounds in given conditions. As these parameters depend on UAV weight, the UAV's fuel consumption is continuously estimated and the UAV state is updated. In order to take into account the features of the environment, we have constructed a dynamic, complex three-dimensional digital representation of the real world. The digital world incorporates triangular mesh-based models of the terrain and obstacles, such as hazard semi-spheres (within which ground-based weapons can reach the UAV) and "infinitely" high polygons ("No Fly Zones" where the UAV is not allowed to fly). Both the terrain and the obstacles are converted to triangle meshes. The mesh that represents a hazard semi-sphere is parameterized by the radius and the centre latitude, longitude and altitude of the semi-sphere. The mesh that represents a No Fly Zone is obtained by extruding its base polygon and is parameterized by the number of vertices and their coordinates (latitude and longitude). The height of the extruded polygon is greater than the vehicle flight ceiling.
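To make the obstacle representation more concrete, the sketch below builds simplified triangle-mesh descriptions of the two obstacle types named above: a hazard semi-sphere from its centre and radius, and a No Fly Zone prism extruded from its base polygon up to an arbitrary ceiling. The function names and the flat local x/y approximation are illustrative assumptions, not the paper's actual data structures.

```python
import numpy as np

# Illustrative local flat-frame sketch; not the paper's mesh generator.
def hazard_hemisphere_mesh(center_xyz, radius, n_az=16, n_el=8):
    """Triangle mesh of a hazard semi-sphere (upper half only).

    center_xyz: (x, y, altitude) of the hemisphere centre in a local frame.
    Returns (vertices, triangles) where triangles index into vertices.
    """
    cx, cy, cz = center_xyz
    az = np.linspace(0.0, 2.0 * np.pi, n_az, endpoint=False)
    el = np.linspace(0.0, 0.5 * np.pi, n_el)   # 0 = horizon, pi/2 = zenith
    verts, tris = [], []
    for e in el:
        for a in az:
            verts.append((cx + radius * np.cos(e) * np.cos(a),
                          cy + radius * np.cos(e) * np.sin(a),
                          cz + radius * np.sin(e)))
    for i in range(n_el - 1):
        for j in range(n_az):
            j2 = (j + 1) % n_az
            p, q = i * n_az + j, i * n_az + j2
            r, s = (i + 1) * n_az + j, (i + 1) * n_az + j2
            tris += [(p, q, r), (q, s, r)]
    return np.array(verts), np.array(tris)

def no_fly_zone_mesh(base_polygon_xy, ceiling):
    """Side-wall triangle mesh of a No Fly Zone: the base polygon extruded
    to an altitude above the vehicle flight ceiling."""
    pts = np.asarray(base_polygon_xy, dtype=float)
    n = len(pts)
    bottom = np.column_stack([pts, np.zeros(n)])
    top = np.column_stack([pts, np.full(n, ceiling)])
    verts = np.vstack([bottom, top])
    tris = []
    for i in range(n):
        j = (i + 1) % n
        tris += [(i, j, n + i), (j, n + j, n + i)]   # two triangles per wall
    return verts, np.array(tris)
```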

Figure 6 shows the input and output data of the algorithm. The input data are: 1) the mission task, target and priority from the operator; 2) the UAV state and the features of the environment from the simulation; 3) look-up tables of UAV performance from a database. The output data is the path the vehicle has to follow to execute the mission.

The algorithm calculates a series of waypoints - with corresponding altitude and velocity data - through which the UAV has to pass in order to accomplish the mission, called Primary Mission Points, and, between each couple of primary points, calculates the path, called a macroleg, generating an appropriate sequence of intermediate waypoints with corresponding altitude and velocity data. The global path is the connected sequence of the macrolegs.

The calculation of the path between two Primary Points is executed in two different phases. In the first phase a subalgorithm, called the safe algorithm, calculates safe paths the UAV can follow toward the destination point. In the second phase a subalgorithm, called the cost algorithm, manipulates the safe paths and generates the path that fulfils the UAV performance and mission priority.

As said before, in the present software version there are two possible mission tasks: 1) fly to a destination point, 2) survey a target area. If the mission task is the first one, the algorithm calculates a safe and efficient path, in terms of distance, time or fuel consumed, between the position the UAV is in when it receives the command and the destination point. If the mission task is the survey of a target area, the algorithm calculates a further series of waypoints - with corresponding altitude and velocity data - through which the UAV has to pass in order to allow its sensor(s) to scan the intended area while trying to satisfy the selected priority.
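A minimal sketch of how the two-phase planner described above could be orchestrated is given below: primary mission points are chained into macrolegs, and each macroleg is produced by the safe algorithm followed by the cost algorithm. The function and type names (plan_mission, safe_algorithm, cost_algorithm, Waypoint) are hypothetical placeholders, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Waypoint:
    lat: float
    lon: float
    alt: float            # computed altitude at the waypoint
    speed: float          # computed velocity magnitude

# Hypothetical signatures for the two sub-algorithms described in the text.
SafeAlgorithm = Callable[[Waypoint, Waypoint], List[List[Waypoint]]]   # candidate safe paths
CostAlgorithm = Callable[[List[List[Waypoint]], str], List[Waypoint]]  # best feasible path

def plan_mission(primary_points: List[Waypoint],
                 safe_algorithm: SafeAlgorithm,
                 cost_algorithm: CostAlgorithm,
                 priority: str) -> List[Waypoint]:
    """Build the global path as the connected sequence of macrolegs.

    For each couple of consecutive Primary Mission Points, the safe algorithm
    proposes obstacle-free candidate paths and the cost algorithm selects the
    one that best satisfies the mission priority ('time', 'fuel' or 'length').
    """
    global_path: List[Waypoint] = []
    for start, end in zip(primary_points, primary_points[1:]):
        candidates = safe_algorithm(start, end)           # phase 1: safe paths
        macroleg = cost_algorithm(candidates, priority)   # phase 2: feasible and efficient
        # avoid duplicating the shared endpoint between consecutive macrolegs
        global_path.extend(macroleg if not global_path else macroleg[1:])
    return global_path
```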

Figure 6. The path planning algorithm

1. The safe algorithm

The safe algorithm restricts the flight space to a discrete number of planes and then calculates, on each of them, the safe paths that allow the UAV to fly to the target point while avoiding the obstacles. In this way, the 3D problem of calculating paths is reduced to a 2D problem. The actions performed by the safe algorithm are (see the sketch after this list):
• generation of a sheaf of n planes by discrete angle rotations of a plane about the line joining the vehicle position and the target point;
• computation of the intersections of obstacle and No Fly Zone volumes with each plane;
• determination of safe paths that avoid these intersections;
• selection of the shortest path in each plane.
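The sketch below illustrates the plane-sheaf idea under simplifying assumptions: obstacles are treated as spheres, each plane of the sheaf is spanned by the start-to-target direction and a rotated in-plane vector, and the 2D avoidance step is left as a stub. None of this code is the paper's implementation; it only shows how the 3D problem collapses into n independent 2D problems.

```python
import numpy as np

# Simplified illustration of the sheaf-of-planes reduction; not the actual algorithm.
def plane_sheaf(start, target, n_planes):
    """Return n plane bases, each spanned by the start->target axis u and a
    second in-plane direction obtained by rotating a reference vector about u."""
    start, target = np.asarray(start, float), np.asarray(target, float)
    u = target - start
    u /= np.linalg.norm(u)
    # any vector not parallel to u serves as a rotation seed
    seed = np.array([0.0, 0.0, 1.0]) if abs(u[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    v0 = np.cross(u, seed); v0 /= np.linalg.norm(v0)
    w0 = np.cross(u, v0)
    planes = []
    for k in range(n_planes):
        theta = np.pi * k / n_planes        # half a turn covers all distinct planes
        v = np.cos(theta) * v0 + np.sin(theta) * w0
        planes.append((u, v))
    return start, planes

def safe_paths(start, target, obstacles, n_planes=8):
    """For each plane, project spherical obstacles onto it as discs; the 2D
    avoidance solver is only a stub that returns the direct segment."""
    origin, planes = plane_sheaf(start, target, n_planes)
    target = np.asarray(target, float)
    results = []
    for u, v in planes:
        discs = []
        for centre, radius in obstacles:            # obstacle = (3D centre, radius)
            d = np.asarray(centre, float) - origin
            in_plane = np.array([d @ u, d @ v])     # 2D projection of the centre
            off_plane = np.linalg.norm(d - in_plane[0] * u - in_plane[1] * v)
            if off_plane < radius:                  # the plane actually cuts the sphere
                discs.append((in_plane, np.sqrt(radius**2 - off_plane**2)))
        # a 2D path around 'discs' would be computed here; stubbed as the direct segment
        results.append([origin, target])
    return results
```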

2. The cost algorithm

The cost algorithm receives the safe paths from the safe algorithm as lists of waypoints, manipulates them and selects the one to send to the vehicle. The cost algorithm comprises two routines (a sketch is given below):
• Feasibility routine: it verifies that the safe paths respect the UAV performance limits.
• Efficiency routine: it elaborates each safe and feasible path in order to take into account the vehicle performance and the mission priority selected by the operator. Through a repositioning of the waypoints of a generic safe path, a path that the vehicle can actually follow is obtained. The routine then calculates the cost in terms of mission time, fuel consumption or path length, based on the mission priority, and selects the most efficient path.

In this way the resulting trajectory is not an optimal path in a pure mathematical sense, but a heuristic, good approximation obtained with moderate computational effort.
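The following sketch shows one plausible shape for the two routines described above: a feasibility check against climb-gradient and leg-length bounds, followed by cost computation and selection according to the mission priority. The specific checks, the cost models and all names are assumptions made for illustration; the actual routines are not published as code in the paper.

```python
import math
from typing import List, Tuple

Waypoint = Tuple[float, float, float]   # (x, y, altitude) in a local metric frame

def is_feasible(path: List[Waypoint], max_climb_grad: float, min_leg_len: float) -> bool:
    """Feasibility routine (illustrative): reject paths whose legs are shorter than
    a minimum turn-compatible length or steeper than the allowed climb gradient."""
    for (x0, y0, z0), (x1, y1, z1) in zip(path, path[1:]):
        horiz = math.hypot(x1 - x0, y1 - y0)
        if horiz < min_leg_len:
            return False
        if abs(z1 - z0) / horiz > max_climb_grad:
            return False
    return True

def path_cost(path: List[Waypoint], priority: str, cruise_speed: float,
              fuel_rate: float) -> float:
    """Efficiency routine cost model (illustrative): path length, time or fuel."""
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    if priority == "length":
        return length
    time = length / cruise_speed
    return time if priority == "time" else time * fuel_rate   # "fuel"

def select_path(safe_paths: List[List[Waypoint]], priority: str) -> List[Waypoint]:
    """Keep only feasible paths, then return the cheapest one for the given priority."""
    feasible = [p for p in safe_paths
                if is_feasible(p, max_climb_grad=0.2, min_leg_len=200.0)]
    if not feasible:
        raise ValueError("no feasible safe path for the current UAV performance bounds")
    return min(feasible, key=lambda p: path_cost(p, priority,
                                                 cruise_speed=40.0, fuel_rate=0.05))
```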

C. 3D Virtual Display

The 3D stereoscopic virtual display is the main hardware component that allows the operator to supervise the mission, providing him with a stereoscopic synthetic visualization of the operational space. Such a display merges data streams coming from multiple sources of information into a single, comprehensive view of the scenario (Fig. 7). Environment features, such as No Fly Zones, hazard semi-spheres and paths, are overlaid on the scenery reconstruction to provide an augmented vision of the mission execution5.

Figure 7. 3D Virtual Display

For the visualization of the operational scenario, a Flight Data Monitoring (FDM) program has been used, based on virtual reconstruction and on a high level of integration of available digital data in a wide range of different formats. The software core is able both to read and filter digitally recorded data and to read real-time flight data coming from the vehicle. In both cases the resulting flight is displayed in a virtual environment, which the operator can view from both inside and outside the vehicle, with a high degree of photorealism. This is aided by a 3D model of the vehicle and by automatic geo-referencing of the flight, based on the use of DEMs (Digital Elevation Maps). The level of fidelity of the simulation can be enhanced by inserting aerial or satellite images as textures, and by building realistic 3D models of buildings or infrastructures such as bridges, airports, etc. Moreover, while the UAV is flying, the operator can reproduce both visibility and meteorological conditions. He is able to customize visibility parameters, such as the distribution of clouds in space, their density, the area location of the clouds and the altitude interval. The program also provides a reconstruction of the factors that describe the flight dynamics in each phase of the flight. The most significant data - vehicle attitude, flight path, altitude above ground, velocity vectors and wind parameters - are depicted graphically, integrated in the main window. The other flight parameters can be displayed in separate windows that the operator can resize and place on the screen (Fig. 8). Since the program also reads and filters recorded data, the operator is able to review the mission off-line as an animated time history. In this way, the program creates a virtual moviola of the scenery and the operator can use a timeline to control the flight sequence. He can control the playing speed, stop, pause and rewind the flight sequence, or go to a specific event when he needs to understand what happened at a specific moment during the mission. This is especially important in incident and accident reconstruction, in particular concerning the identification of human factors involved in an accident or any other abnormal event that occurred during the flight. More details about the Flight Data Monitoring (FDM) program can be found in ref. 15.

Figure 8. Visualization of Flight Parameter

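As a rough illustration of the off-line review capability described above, the sketch below models a minimal replay timeline over recorded flight samples, with play-speed control and seeking to a specific event. It is a generic illustration with invented names; it is not the FDM program's actual interface.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FlightSample:
    t: float            # seconds from mission start
    lat: float
    lon: float
    alt: float
    event: str = ""     # e.g. "pop-up obstacle", "target reached"

class ReplayTimeline:
    """Minimal 'virtual moviola' sketch: step through recorded samples at a chosen speed."""
    def __init__(self, samples: List[FlightSample]):
        self.samples = sorted(samples, key=lambda s: s.t)
        self.cursor = 0.0
        self.speed = 1.0          # 1.0 = real time, 2.0 = double speed, etc.
        self.playing = False

    def play(self):   self.playing = True
    def pause(self):  self.playing = False
    def rewind(self): self.cursor = 0.0

    def seek_event(self, name: str) -> Optional[FlightSample]:
        """Jump the cursor to the first sample tagged with the given event."""
        for s in self.samples:
            if s.event == name:
                self.cursor = s.t
                return s
        return None

    def advance(self, wall_dt: float) -> FlightSample:
        """Advance the replay clock and return the latest sample at or before the cursor."""
        if self.playing:
            self.cursor += wall_dt * self.speed
        current = self.samples[0]
        for s in self.samples:
            if s.t <= self.cursor:
                current = s
        return current
```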

III. Evaluation of the interface

We have tested the interface with the aim of evaluating its usefulness for the management of vehicle missions and of obtaining guidelines for its further upgrades. As testers we chose twelve student pilots who already had good flight experience. Before the beginning of the tests, a researcher presented the functionality of the interface to the testers and explained how they would use it in performing their tasks. The testers' task was the supervision of the vehicle's safe flight toward a destination point in a dynamic, semi-known scenario. The scenario was updated by the random appearance of pop-up threats. They were inserted, depending on the mission progress, by a second researcher from a workstation called the obstacle workstation. Figure 9 shows how the obstacle workstation communicates with both the command panel and the 3D virtual display by means of TCP/IP channels.
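The paper does not describe the message format used on these TCP/IP channels, so the sketch below only illustrates the general idea of the obstacle workstation pushing a pop-up threat to the two displays over sockets. The JSON payload, port numbers and host names are invented for the example.

```python
import json
import socket

# Hypothetical endpoints for the command panel (SIM1) and the 3D display (SIM2).
DISPLAY_ENDPOINTS = [("sim1.local", 5001), ("sim2.local", 5002)]

def send_popup_threat(lat, lon, radius_m):
    """Broadcast a pop-up hazard semi-sphere to both displays over TCP."""
    message = json.dumps({
        "type": "hazard_semisphere",
        "lat": lat,
        "lon": lon,
        "radius_m": radius_m,
    }).encode("utf-8") + b"\n"
    for host, port in DISPLAY_ENDPOINTS:
        with socket.create_connection((host, port), timeout=2.0) as conn:
            conn.sendall(message)

# Example: the second researcher injects a threat near the UAV's route.
send_popup_threat(lat=44.20, lon=12.07, radius_m=1500)
```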

The main assumption we made is that the vehicle is autonomous enough to react to its own malfunctions. In order to improve the operator's awareness of changes in the scenario and of the progress of the mission, synthetic audio communications that inform the operator about the accomplishment of mission tasks or about changes in the environment were included.

The main experimental variable was the level of automation of the system. We considered three different levels. Each tester performed three different tests, each with a different level of automation. The following points describe the actions the tester and the system take when a pop-up appears, for each level of automation.

Figure 9. Obstacle workstation

• Manual: the operator commands the computation of a new path.
• Semi-automatic: the path planning algorithm computes a new path and provides the operator with a limited time to veto it before the vehicle assumes the new path as its nominal path. If the operator does not approve the path suggested by the system, he can intervene by means of high level commands, changing the mission objective or inserting intermediate points through which the vehicle is obliged to fly.
• Automatic: the path planning algorithm calculates a new path and informs the operator that the vehicle has assumed the new path.
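To make the difference between the three modes concrete, the sketch below shows one possible way to dispatch a pop-up event according to the selected level of automation, with a veto window for the semi-automatic mode. The timing value, the function names and the planner/operator/vehicle objects are illustrative assumptions, not the system's real logic.

```python
import time
from enum import Enum

class AutomationLevel(Enum):
    MANUAL = 1
    SEMI_AUTOMATIC = 2
    AUTOMATIC = 3

VETO_WINDOW_S = 10.0   # assumed length of the operator's veto window

def handle_popup(level, planner, operator, vehicle):
    """Dispatch a pop-up threat according to the selected automation level.

    planner.replan() returns a new path; operator.* and vehicle.* are
    placeholder interfaces for the command panel and the simulated UAV.
    """
    if level is AutomationLevel.MANUAL:
        # The operator himself commands the computation of a new path.
        operator.notify("Pop-up threat detected: command a new path.")
        return

    new_path = planner.replan()

    if level is AutomationLevel.AUTOMATIC:
        vehicle.set_nominal_path(new_path)
        operator.notify("New path assumed automatically.")
        return

    # Semi-automatic: the new path becomes nominal unless vetoed in time.
    operator.notify("New path proposed: veto within the time window if unsafe.")
    deadline = time.monotonic() + VETO_WINDOW_S
    while time.monotonic() < deadline:
        if operator.has_vetoed():
            operator.notify("Path vetoed: use high level commands to modify the mission.")
            return
        time.sleep(0.1)
    vehicle.set_nominal_path(new_path)
```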

A. Results of the tests

Three different ways to collect data for the evaluation of the interface were chosen:
• Observations: the researcher observed each tester during the tests and noted his behaviour.
• Questionnaires: questions with multiple-choice or open answers were included in order to gather both statistics and testers' ratings for situation awareness, the effects of the level of automation on system performance, and the interface usability.
• Debriefings: the researcher analysed with each tester the notes and the answers to the questionnaires in detail. Such discussions were useful since they allowed the testers to recall more about the tests and to explain their answers clearly.

1. Situation Awareness

Test results have revealed that the testers maintained a high level of situation awareness thanks to the two displays of the interface and the features of the software running on their workstations. By integrating the information coming from the touch screen and the 3D display, they said they were able to understand the effects of both commands and path changes, to maintain awareness of the dynamic scenario and of the vehicle position relative to obstacles, and to make specific, decisive and informed decisions about necessary actions. Testers also appreciated the possibility of switching between the "pilot view" and the "external view" and, in the latter case, of changing the scale factor and the external point of view. Moreover, the audio feedback was assessed by the testers as an important interface feature, since they could be both notified about the execution of the mission and warned about changes in the scenario, whatever they were looking at or doing. The importance of audio feedback comes from the fact that the auditory system is omnidirectional; unlike visual signals, auditory signals can be sensed no matter how the operator is oriented.

2. Level of automation

Among the three levels of automation the testers preferred the semi-automatic one. The reason may be explained by the two most significant tester answers:

1. The semi-automatic level of automation creates a situation similar to the "pilot - copilot" one, in which each can correct the mistakes of the other and vice versa.

2. The operator, who is supervising the mission, only has to verify that the path calculated by the system does not collide with obstacles. Moreover, when the path is not safe, the time and workload required of the operator to intervene are minimal.

Hence, the semi-automatic level of automation both leads to a decrease in operator workload and allows the operator to have an active role in the execution of the mission, since he can modify the mission by means of high level commands if the system cannot react to changes in the operational scenario with appropriate actions.

IV. Conclusions

In this paper we propose a human-machine interface integrated with a path planning algorithm. These were tested using a real-time simulation system able to manage missions of unmanned vehicles. Such a system has allowed new ways of interaction between the operator and the vehicle to be tested. The interaction consists of a continuous exchange of information between the vehicle and the operator, which cooperate in order to achieve mission success. The operator, who is considered a mission supervisor, sends high level commands that are understood and translated by the system into the paths to be flown. The vehicle is also able to adjust the path if any change in the operational scenario happens during the mission, whereas the operator supervises the mission through the 3D virtual display and can always intervene to change the mission. The interface has been tested by twelve student pilots with reasonably good flight experience. Test results show that the interface satisfies the design requirements. The operators maintained a good level of situation awareness, mainly thanks to the 3D visualization and the audio feedback. Moreover, they spent few resources in terms of time and cognitive effort to command the vehicle, mainly when the semi-automatic level of automation was active. Future work will focus on improving the interface based on the test results and on using it to simulate UAV swarms and wide-area monitoring and surveillance missions.

References

1 K.W. Williams, "Human Factors Implications of Unmanned Aircraft Accidents: Flight Control Problems", Final Report, Civil Aerospace Medical Institute, Federal Aviation Administration, Oklahoma City, OK 73125, April 2006.

2 J.S. McCarley, C.D. Wickens, "Human Factors Implications of UAVs in the National Airspace", Technical Report AHFD-05-05/FAA-05-01, April 2005, prepared for the Federal Aviation Administration, Atlantic City International Airport, NJ.

3 J.S. McCarley, C.D. Wickens, "Human Factors Concerns in UAV Flight", Institute of Aviation, Aviation Human Factors Division, University of Illinois at Urbana-Champaign.

4 G. Barbato, G. Feitshans, R. Williams, T. Hughes, "Operator Vehicle Interface Laboratory: Unmanned Combat Air Vehicle Controls & Displays for Suppression of Enemy Air Defences", Proceedings of the 12th International Symposium on Aviation Psychology, 2002.

5 B.E. Walter, J.S. Knutzon, A.V. Sannier, J.H. Oliver, "VR Aided Control of UAVs", 3rd AIAA Unmanned Unlimited Technical Conference, Workshop and Exhibit, Paper No. AIAA 2004-6320, Chicago, IL, September 20-23, 2004.

6 K.S. Tso, G.K. Tharp, W. Zhang, A.T. Tai, "A multi-agent operator interface for unmanned aerial vehicles", Proceedings of the 18th Digital Avionics Systems Conference, St. Louis, MO, pp. 6.A.4.1-6.A.4.8, Oct. 1999.

7 K.S. Tso, G.K. Tharp, A.T. Tai, M.H. Draper, G.L. Calhoun, H.A. Ruff, "A human factors testbed for command and control of unmanned air vehicles", Proceedings of the 22nd Digital Avionics Systems Conference, Indianapolis, IN, Oct. 2003.

8 H.A. Ruff, G.L. Calhoun, M.H. Draper, J.V. Fontejon, B.J. Guilfoos, "Exploring Automation Issues in Supervisory Control of Multiple UAVs", Proceedings of the Human Performance, Situation Awareness and Automation Technology Conference, March 2004, pp. 218-222.

9 K.W. Williams, "A Summary of Unmanned Aircraft Accident/Incident Data: Human Factors Implications", Civil Aerospace Medical Institute, Federal Aviation Administration, Oklahoma City, OK 73125, December 2004.

10 N.J. Cooke, H.L. Pringle, H.K. Pedersen, O. Connor, "Human Factors of Remotely Operated Vehicles", Advances in Human Performance and Cognitive Engineering Research, Vol. 7, series edited by Eduardo Salas.

11 C.D. Wickens, J.D. Lee, Y. Liu, S.G. Becker, "An Introduction to Human Factors Engineering", 2nd edition, Pearson Prentice Hall, Upper Saddle River, NJ 07458.

12 I.A. McManus, R.A. Walker, "Multidisciplinary Approach to Intelligent Unmanned-Airborne-Vehicles Mission Planning", Journal of Aircraft, Vol. 43, No. 2, March-April 2006, pp. 318-335.

13 T.B. Sheridan, "Humans and Automation", John Wiley & Sons, Inc., 2002.

14 M.R. Endsley, "Automation and Situation Awareness", in Automation and Human Performance: Theory and Applications, Lawrence Erlbaum, Mahwah, NJ, pp. 163-181.

15 A. Boccalatte, F. De Crescenzio, F. Flamigni, F. Persiani, "A Highly Integrated Graphic Environment for Flight Data Analysis", XV ADM - XVII INGEGRAF International Conference, Sevilla, June 1-3, 2005.
