
MARCH/APRIL 2002 1094-7167/02/$17.00 © 2002 IEEE 63

AI in Space

Virtual Collaborations with the Real: NASA's New Era in Space Exploration

Daniel E. Cooke, Texas Tech University
Butler P. Hine III, NASA Ames Research Center

Editor's Perspective

In this installment, we describe emerging success stories of applying AI techniques to the challenges of space exploration. It is time to take a look at how NASA is structuring its programmatic investments in the area of AI and related technologies.

The focus is the Intelligent Systems Program, managed by NASA Ames Research Center. As the authors describe, the inspiration for this program is the new set of capabilities that will be required for taking the next step in space exploration: moving beyond gathering science results from the relatively well-defined environment of planetary orbits, to doing the same in the highly uncertain environment of planetary surfaces.

Joysticking from Earth is no longer an option. The space platforms defining space exploration's next phase must be more capable, and AI will play a central role in creating these autonomous agents of our desire to explore and understand.

—Richard Doyle

Editor: Richard Doyle, Jet Propulsion Lab, [email protected]

All science is computer science, say recent claims.1 Historically, scientific breakthroughs typically occur in the presence of a major breakthrough in a human-made technology. With Newton, it was the clock. With Maxwell and Einstein, it was the steam engine. Today, it is the computer.

People base their view of nature on a new device that is both revolutionary and pervasive. Because we humans built the device, we can comprehend it. Consequently, the new technology often further serves humans as a well-understood analog of nature. In the present context, some scientists claim that the basic particle of the universe is information; that information is not an abstraction of reality—it is, in fact, reality.1 While some of these views are indeed extreme, they form the basis for current discussions within the scientific community.

If new technologies can inspire a new understanding of our universe, we might ask how far and in what direction can computer science research itself be pushed? NASA is poised to play a significant role in answering this question. Traditionally, NASA has pointed the way for many new technologies. NASA has identified the directions that have led to the breakthroughs in flight and space flight that we now often take for granted, playing a key leadership role that has galvanized large research, development, and industrial communities.

NASA will need to play a similar role in tomorrow's computer science research, developing critical enabling technologies to support future missions.2 To play that role, NASA Ames Research Center has recently changed its research focus to computer science (see Figure 1).

Motivations

At NASA, we are undergoing a fundamental shift in the way we design exploration missions. Driving the shift is a change in the character of the science goals for these missions.

Science exploration missions can be characterized in terms of the distance from the instrument making the observation to the observation's target. Science observations accomplished at relatively great distances from the target are called remote science; observations done in close proximity are contact science. The former observations typically occur either during fly-bys or from orbit, with the latter typically performed in situ, with the instrument in physical contact with the target (see Table 1).

Until recently, most science exploration missions beyond lunar orbit were remote science, owing mainly to the global mapping nature of the science goals. As the science goals begin to require higher-resolution measurements and close proximity to the target, the missions increasingly involve more contact science.

Even in the presence of large communication time delays caused by the finite speed of light, remote science can often be accomplished by preprogrammed action. That's because the environment in which the spacecraft operates rarely requires decision-making more rapid than the round trip communications time. In this environment, the only requirement for rapid onboard decisions is during unusual or critical maneuvers (such as Saturn ring-crossing) or off-nominal conditions such as internal system failures. Because the spacecraft environment is predictable and well-modeled, a control strategy involving at most conditional branching can serve to manage critical decision-making. As a last-resort means of handling unexpected conditions, the spacecraft can go into a "safe mode" from which it can systematically recover under guidance from the ground.

Unlike remote science missions, we cannot accomplish contact science missions with preprogrammed actions. When a spacecraft or instrument is in situ, it is physically interacting with its environment, a situation that requires a very short decision timescale with respect to the round trip communications time. Also, the environment becomes difficult to predict or simulate prior to the mission, unlike a vacuum environment in which the spacecraft is subject only to relatively predictable forces and effects, such as a well-modeled gravitational field.

Under the dynamic conditions of in-situ environments, a traditional control strategy is difficult (and expensive) to use. The number of conditional decision points becomes exponentially high even for relatively simple missions. We will need a new way of designing exploration missions, incorporating higher levels of onboard autonomous decision-making to accomplish future science exploration goals.

Currently, robotic contact science missions are remotely controlled—teleoperated. Literally hundreds of Earth-bound engineering and science specialists provide the intellectual safety nets required for space exploration. This approach has succeeded due to trade-offs in the complexity and distance involved. During long-distance missions, when the finite speed of light becomes a factor in communications, the subject missions have been compelled to remain comparatively simple. In these missions, the state of the art in embedded systems has sufficed for the autonomy required.

Based upon their relative orbits, round trip communications at the speed of light between Mars and Earth vary from six to 40 minutes. For a complex human or robotic mission to Mars, capabilities in teleoperation must increase significantly. Even if we placed astronauts at a space station at the Mars-Sun libration point, there is a 7.2-second round trip delay. (A libration point is any of five positions in the plane of a celestial system consisting of one massive body orbiting another at which the gravitational influences of the two bodies are approximately equal.) Currently, fine-grained predictive control can deal with time delays of five seconds or so.3,4 So, given a five-second delay, a human-machine predictive control system can effectively provide fine-grained control of a remote device. Operating beyond a five-second delay will require robust reflexive controls on the remote device itself. The device will need to manage its own movements reflexively.
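The six-to-40-minute figure above follows directly from the geometry; a minimal sketch (the Earth-Mars distances used here are typical illustrative values of roughly 0.38 AU at closest approach and 2.5 AU near conjunction, not taken from the article):

```python
# Round-trip light-time delay for two illustrative Earth-Mars geometries.
C_KM_S = 299_792.458      # speed of light, km/s
AU_KM = 149_597_870.7     # one astronomical unit, km

def round_trip_minutes(distance_au: float) -> float:
    """One command-response cycle: signal out plus signal back, in minutes."""
    one_way_s = distance_au * AU_KM / C_KM_S
    return 2 * one_way_s / 60.0

for label, d_au in [("closest approach", 0.38), ("near conjunction", 2.5)]:
    print(f"{label:>16}: {round_trip_minutes(d_au):5.1f} min round trip")
```

The two printed values bracket the article's six-to-40-minute range, which is why fine-grained joystick control from Earth is ruled out for Mars surface operations.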

Therefore, mission complexity and communication delays are the roots of NASA's motivations for advancing computer science research. NASA will need an understanding of causal relationships in data acquired in real time, greater autonomy in its deployed systems, and revolutionary advances in the way humans and machines work as a system. NASA's Intelligent Systems Program is a national initiative, organized to respond to these needs. This article attempts to provide a vision within which these elements converge, along with a better definition of the elements and the goals of each research category.


Figure 1. In the shadows of the world's largest wind tunnel, the number one priority at NASA Ames is computer science research.

Table 1. Autonomy required for different mission classes. Blue indicates missions we can do adequately with current technology; black, missions we can do, but not very efficiently; gray, missions we cannot yet do with current technology.

Mission class          Example         Distance         Decision timescale  Level of autonomy
Fly-by                 Voyager         Remote           Slow                Pre-event programmed
Survey                 Galileo         Remote           Slow                Pre-event programmed
Local sampling         Viking          Static contact   Slow                Remotely operated
Local exploration      Pathfinder/MER  Dynamic contact  Medium              Remotely operated w/ reflexes
Intensive exploration  MSL             Dynamic contact  Fast                Short-term goal-directed
Global exploration     Europa Ocean    Dynamic contact  Fast                Long-term goal-directed


The vision

In the next 50 years, space missions might deploy astronauts or mission controllers with "intelligent" machines to points near or on the surfaces of distant planets. The people involved will need to operate in a seamless relationship with the intelligent machines deployed alongside them.

In deploying humans to the surfaces of distant planets, intelligent machines will assist the humans with exploration and mission operations. Machines will need to extend and magnify human physical and mental abilities. Among other duties, the machines accompanying the astronauts might need to serve the purpose currently served by earthbound mission operators—the people who remotely control missions.

In having humans occupy a space station near the distant planet targeted for exploration, humans would deploy and control intelligent machines on the planet's surface. The machines will need to give the humans a true sensory experience of actually being on the planet. These machines will extend and magnify human abilities by seemingly placing humans in remote environments. The experiences of the space station's human operators could be packaged and sent to Earth, giving earthbound scientists the same experience. These remotely deployed systems might provide sensory inputs directly to the nervous system of humans and intercept signals from humans as feedback controls.

Currently, the computing requirements to carry the data and intelligence on future missions combine with radiation effects and extremely low wattage environments to make safe, cost-effective long-distance missions unattainable. Revolutionary advances must occur in almost every area of fundamental computer science.

Levels of reasoning

We can determine the extent to which a system performs cognitive functions based on a framework of reasoning levels. In The Math Gene, Keith Devlin describes three types of reasoning found in living organisms. In stimulus-response, the most primitive form, an organism can process an external stimulus and determine an appropriate response. Some S-R activities are so primitive that they are viewed as reflex rather than reasoning. When you touch a hot stove, you instinctively pull away, with little or no consideration of the situation.

Other forms of S-R reasoning are not so reflexive. Consider facing an impending head-on collision in traffic (a stimulus). Given some ample period of time prior to impact, you will most likely spend a few seconds considering the options for taking evasive action (the response) to determine the best possible response.

Stimulus-stimulus, a more sophisticated form of reasoning, occurs when an organism receives a stimulus and, in turn, produces a stimulus for another organism or some tool or machine. In the head-on collision situation, once you've determined the best option for evasive action, you will produce the stimuli to cause your vehicle to avoid the oncoming vehicle. Therefore, the response in S-S reasoning is a stimulus to control the vehicle to avoid the collision. From a historical perspective, note the S-S reasoning required by humans when using tools in the agrarian and industrial ages. Inventing the tools and determining, for example, the role of the seasons in plant growth requires a more sophisticated form of reasoning.

One simple view of offline reasoning, the most sophisticated form of reasoning, is to envision humans as having a primitive brain that performs the S-R and S-S reasoning. This primitive brain deals with external stimuli and cannot originate thoughts that are not triggered by outside occurrences. Now envision a more sophisticated brain that spends its time monitoring the behavior of the primitive brain—reflecting on and analyzing situations.

In the head-on collision example, O-L reasoning might result in trying to determine how to avoid future head-on collisions. Using O-L reasoning, you might invent mechanisms on the road or in the vehicles that would reduce the possibility of head-on collisions. Perhaps these inventions come to mind as a delayed S-R function. Nonetheless, we can characterize the separate analysis and reflection leading to invention as O-L reasoning. Observation and sophisticated analysis leading to discovery and invention is O-L reasoning and represents our creative ability.
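The three levels can be caricatured in code; a purely illustrative sketch in which every name, reflex rule, and threshold is invented here rather than drawn from the article or from Devlin:

```python
# Toy illustration of the three reasoning levels (all rules invented).
class Agent:
    def stimulus_response(self, stimulus: str) -> str:
        # S-R: a reflex table lookup with no deliberation at all.
        reflexes = {"hot_surface": "withdraw", "obstacle": "brake"}
        return reflexes.get(stimulus, "no_reflex")

    def stimulus_stimulus(self, stimulus: str) -> str:
        # S-S: the response is itself a stimulus for a tool or machine,
        # here a command issued to a vehicle.
        action = self.stimulus_response(stimulus)
        return f"command_vehicle:{action}"

    def offline_reasoning(self, episode_log: list) -> str:
        # O-L: reflect on past episodes, untriggered by any current
        # stimulus, and propose a new mechanism.
        if episode_log.count("obstacle") > 3:
            return "invent: add collision-avoidance mechanism"
        return "no change"
```

The point of the sketch is the dependency direction: S-S builds on S-R, while O-L operates on a record of behavior rather than on live stimuli, which is why the article treats it as the exclusive province of humans.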

The major successes in machine-based reasoning have occurred in S-R and S-S reasoning functions. Even systems capable of deciding effective workarounds in the face of system and subsystem failures are basically performing S-S reasoning. Furthermore, these successes typically arise in narrow and well-defined problem domains. O-L reasoning remains the exclusive province of humans.

To advance the state of the art in human-machine systems, we need advances in automated reasoning, human-centered computing, and intelligent data understanding. When we perform reasoning at any level, it is based on filtering data—observing a very small segment of the electromagnetic spectrum—and determining causal links in the data. Even when we recoil from the hot stove, we have quickly determined that our pain is caused by our proximity to the stove.

Intelligent data understanding is key to our ability to construct future intelligent human-machine systems. Advances in automated reasoning that push the current boundaries of system autonomy are required so that machines can perform reflexive (S-R) activities robustly. The degree to which we can advance automated reasoning to fulfill the other levels of reasoning (S-S and O-L) is key to the future success of space exploration.

Finally, the extent to which we can view the human and machine as a seamless system—where humans are free to do what they do best, such as O-L reasoning, and machines do what they do best—will also help determine how effectively we explore distant places in space.

Automated reasoning

In the past, the success of semiautonomous system behavior, more often than not, has corresponded to how well system designers could predict the situations the system might encounter. If the situation occurs, the software provides a predetermined course of action. At a superficial level, we might view this approach as similar to raising a child. A parent might instruct the child in how to respond appropriately in some given situation. For instance, the child might learn that he or she should not strike a friend, even if the friend strikes first. The parent might attempt to predict a large number of varying situations and "program" the child with appropriate responses.

Raising a child this way is similar to the way semiautonomous systems have been programmed to operate on past missions. But with this approach, the resulting systems have significant difficulty contending with unforeseen events. Typically, the systems fail in these situations: they are unable to contend with this degree of uncertainty. This is a significant difficulty because exploration is fraught with uncertainties. How can anyone accurately predict all situations that might arise when engaged in exploration, particularly in-situ exploration? Furthermore, how can future missions vastly decrease their reliance on the earthbound safety nets represented by mission controllers?

Recent advances in model-based reasoning show great promise for dealing with the forms of uncertainty facing space exploration. The approach involves building systems that have a model of their environment. It resembles approaches to child-rearing that raise children to operate using simpler guiding principles, such as, "treat others as you would like to be treated." These guiding principles provide a more robust approach that can effectively contend with uncertainties. The system designer need not predict every circumstance that might arise. Instead, a modeling approach improves the system's ability to respond and adapt to uncertain situations. Through recent program development, NASA is addressing this important area of computer science research.
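The contrast between exhaustively scripted conditionals and reasoning over a declarative model can be sketched as follows. This is an illustrative toy, not NASA's Remote Agent architecture; the fault, state, and action names are all invented for the example:

```python
from collections import deque

def scripted_controller(fault: str) -> str:
    # Scripted approach: every fault must be anticipated at design time.
    if fault == "thruster_A_stuck":
        return "switch_to_thruster_B"
    return "safe_mode"  # unforeseen fault: give up and wait for the ground

def model_based_controller(state, goal, actions):
    """Model-based approach: breadth-first search over a declarative model
    of the craft for any action sequence reaching a goal-satisfying state."""
    seen, queue = {state}, deque([(state, [])])
    while queue:
        s, plan = queue.popleft()
        if goal(s):
            return plan
        for name, effect in actions.items():
            ns = effect(s)
            if ns not in seen:
                seen.add(ns)
                queue.append((ns, plan + [name]))
    return None  # no recovery reachable within the model

# Invented model: a heater has lost its main power bus.
actions = {
    "close_backup_relay": lambda s: s | {"backup_bus_live"},
    "route_heater_to_backup": lambda s: (
        s | {"heater_powered"} if "backup_bus_live" in s else s
    ),
}
start = frozenset({"main_bus_failed"})
plan = model_based_controller(start, lambda s: "heater_powered" in s, actions)
print(plan)
```

The scripted controller falls back to safe mode for anything unanticipated, while the model-based controller composes a two-step recovery that the designer never wrote down as an explicit branch; this composition is the robustness the article attributes to "guiding principles" over enumerated rules.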

Intelligent data understanding

Currently, NASA receives two terabytes of data per day from Earth-observing satellites alone. NASA can acquire and store vast amounts of data, but the sheer amount is stressing our ability to analyze it. We can view these vast data sets as empirical data. Scientists typically endeavor to reduce empirical observations to concise theories that explain the observations. NASA's goals include revolutionary approaches that provide theory-based access to these data sets.

NASA's data are not always contained in a database. In fact, most data NASA acquires are contained in flat files that carry format information in their headers. Traditional approaches to data mining and knowledge discovery are, therefore, not always relevant to NASA's needs. A major result in this program element would be the reduction of these data sets to much smaller representations of their content of a more algorithmic nature. (We could view these algorithms as concise statements of the data—providing more manageable representations that should lead to better understanding—and they might even be capable of reproducing the data sets.) Thus these algorithmic units might result in significant data compression.

The Santa Fe Institute is investigating the relationship between theories and the amount of data the theories explain. They are analyzing these relationships through an application of Kolmogorov's complexity measure, called algorithmic information content. Given a particular message string, the programs that will print the string and then halt are identified. The length of the shortest such program is called the string's AIC.

We can envision a ratio where the shortest program's size—the number of characters—serves as the numerator and the message's size—the number of characters in the program's output—serves as the denominator. For example, a program that computes millions of the digits of pi will result in a fraction close to zero. A message that is not the product of a formula or algorithm will simply be a print statement in which the entire message is a literal. In such cases, the ratio approaches one. One approach to data understanding might attempt to discover ways of analyzing data to identify the shortest program that can produce the data. These algorithmic units could then serve as the "theories" explaining the data and could result in data compression.
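The ratio above is uncomputable in general, since no algorithm can find the shortest program for an arbitrary string, but a general-purpose compressor gives a computable upper bound on the numerator. A rough sketch of the idea, using zlib as a stand-in for the shortest program:

```python
import os
import zlib

def aic_ratio_bound(message: bytes) -> float:
    """Upper bound on the AIC ratio: compressed size over message size.
    The true shortest program is uncomputable; a compressor's output
    length bounds its length from above."""
    return len(zlib.compress(message, 9)) / len(message)

regular = b"3.14159" * 20_000   # formula-like data: ratio near zero
noise = os.urandom(140_000)     # patternless data: ratio near (or above) one
print(round(aic_ratio_bound(regular), 4))
print(round(aic_ratio_bound(noise), 4))
```

The repeated string compresses to a tiny fraction of its size, while the random bytes do not compress at all, mirroring the article's pi-program and literal-print-statement extremes.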

Fundamental results here should have wide application, providing new analytical tools to assist scientists in understanding space and Earth science data, and engineers in understanding vehicle and instrument maintenance data. Clearly, application to other types of data, such as Internet databases, is a potential side effect of research in this area. In terms of the vision we've discussed, intelligent data understanding is a crucial requirement for future space exploration. The ability to establish causal links in data is crucial—even at the S-R reasoning level.

On future missions, vehicle and personnel health and safety requirements will require the distillation and automatic analysis of large amounts of sensor data. Tomorrow's missions cannot rely on Earth-based controllers to perform data reduction and analysis. Furthermore, the astronauts will need to analyze and understand large amounts of scientific data as it is acquired during the mission. Quick analysis will let them perform just-in-time exploration, experimentation, and other scientific activities, based upon newly acquired scientific understanding.

Clearly, there is both a bandwidth and a time-delay problem. Given unlimited bandwidth in data transmission, we must still contend with the round trip time delays to Earth. Revolutionary advances to perform quick analysis and distillation to identify causal relationships in the data at its source are crucial to achieving the degree of autonomy needed on future missions that are both distant and complex.

Figure 2. In 1999, the NASA Deep Space 1 mission flew the Remote Agent Experiment, demonstrating the first use of autonomy to control a spacecraft.

Human-centered computing

Advances in automated reasoning will certainly affect NASA's ability to deploy robotic platforms into deeper space. These advances will also improve NASA's ability to deploy humans into long-distance exploration missions. As an example mission, consider the human exploration of Mars (see Figure 3). Because of the communication delays inherent in such a mission, astronauts and their mechanized physical and mental extensions will need to exercise greater autonomy.

The goals of human-centered computing research include system design approaches that take into account the level of intelligence and capability of the systems deployed, together with the cognitive and perceptual abilities of the astronauts. The result is optimal systems of humans and machines where the machines do what they do best, freeing humans to do the more creative activities that they do best.6

To further explore this revolutionary approach to systems design, consider past epochs of human experience.7 In agrarian society, humans equaled physical labor. Because humans spent most of their time performing labor, they had very little time left to perform advanced problem solving, theory formulation, and the other more creative activities required for invention and discovery.

In industrial society, machines began performing physical labor and humans served as their brains. Machines extended and magnified the physical abilities of people. Humans were freer to perform advanced cognitive activities during this epoch, and science made great strides. In the information age, the human brain is extended and enhanced by a machine—the computer. Even trivial applications significantly extend human capabilities. Knowledge and the application of knowledge are embodied in software. For example, many people now prepare their taxes aided by software tools having much of a tax expert's skill and knowledge.

As system intelligence increases, computers can perform the more mundane and lower-level reasoning, freeing humans to perform more advanced and creative cognitive functions. In future exploration missions, humans cannot be mired in the details of mission operations or even vehicle health and maintenance. Humans must be free to pursue a mission's discovery objectives. Humans excel at putting seemingly disjointed concepts together—the types of cognitive activities that are at the heart of invention and discovery. Computers do not excel at these kinds of activities, but they do excel at, and outperform humans on, more routine and sometimes tedious mental activities.

Results in this area will affect not only human exploration of distant planets, but also the abilities of humans on Earth performing such activities as mission operations and air traffic control. All NASA-relevant computer science research contributes to and converges under the human-centered computing research focus.

Nontraditional computing

Size, weight, and energy consumption problems, as well as space hazards, interfere with the ability to perform space-based computations. The possibilities of quantum and molecular computing provide answers to some of NASA's concerns about computing in space.

Offsetting the radiation and solar effects on computing is the massive parallelism these approaches might offer. The size, weight, and power consumption concerns are also eased by these newer approaches to computing architectures. Perhaps the most important benefit is the new computational models and computer languages that these approaches might imply.

Revolutionary computing approaches differ radically from the traditional von Neumann and even the more conventional non-von Neumann approaches to architecture. As such, the computational models implied might provide radically new insights into problem solving—even possibly helping scientists find tractable solutions to problems for which only intractable algorithms are currently known. These algorithms might allow for feasible implementation within the constraints of current technologies.

More straightforward solutions to problems might result. (Current solutions to these problems are approximate—due to the intractability of the problems—making them much more complex to develop.) The revolutionary computing program element focuses not on building quantum or molecular computers, but on the computational models and languages implied by these approaches, as well as on the development of specific NASA-relevant algorithms that would allow for the immediate exploitation of these device technologies if and when they become available. (See the "An example" sidebar.)


Figure 3. The Mars Smart Lander mission will include an autonomous rover capable of traversing long distances with relatively infrequent command cycles from Earth.


Organizing the community

The computer science community must face market forces that could ultimately impede its ability to perform state-of-the-art research. To combat the potential for stagnation, the research community continues to need a major force to provide direction and leadership in key areas. In particular, due to the aforementioned market forces, and in spite of excellent efforts to advance computer science research through programs funded by NASA and other agencies, the theoretical computer science community has, for the most part, lacked a significant and organized experimental community.

Without an experimental community, it is difficult to chart progress and provide convincing evidence of a theoretical result's significance. NASA seeks to advance the notion that hard application areas could be an excellent substitute for experiment. NASA has an excellent range of hard applications, and these applications converge with those needed in other agencies.

Just as experiments provide for the testing of theories in the physical sciences, hard applications can provide the experimental testbeds for the theories arising out of computer science. Therefore, in addition to providing funding for some of the most promising computer science research, NASA can help advance computer science research through its service as a pervasive and organized experimental community to test computer science results.

Software engineering research and practice

Basic research is most likely to result in prototypes and proofs of concept. Prototypes can serve to perform preliminary tests against hard applications. However, the most promising results must be matured further so that the theoretical results embodied in software can be tested against more substantial applications and problems. The winning approaches must be matured to flight readiness or a similar level of production-quality software.

Researchers are unlikely to produce near-production-quality software.

An Example

NASA Ames Research Center has led in the development of a neural network-based Intelligent Flight Control (IFC) system. News reports concerning the IFC software have shown a test pilot flying a plane with a simulated loss of a wing surface.

To simulate the loss of an entire wing, the test flight airplane can position an airfoil in front of the main wing surface. Carefully positioned, the airfoil creates turbulence that renders the main wing ineffective—as if it were completely removed from the fuselage. The plane goes out of control. When the IFC software is enabled, the pilot can regain control of the aircraft, even though it is "severely damaged." Clearly, the test pilot's skills are magnified and extended.

The IFC system is a good example of model-based reasoning—automated reasoning—insofar as the system is based on a model of flight. The system can integrate into a fly-by-wire aircraft and learn its flight characteristics through observation of pilot inputs and aircraft response. In doing so, the system exemplifies intelligent data understanding through its ability to establish and learn the causal links between inputs and aircraft response.
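The learning idea can be illustrated with a toy sketch. The names below are hypothetical, and the real IFC system uses a neural network rather than this linear stand-in; the sketch simply shows how a model of input-to-response behavior can be learned online from streamed observations of pilot inputs and aircraft response:

```python
import random

class OnlineResponseModel:
    """Toy stand-in for the IFC learning idea: estimate a linear map
    from control inputs to aircraft response, one observation at a time."""

    def __init__(self, n_inputs, n_outputs, lr=0.05):
        self.w = [[0.0] * n_inputs for _ in range(n_outputs)]
        self.lr = lr  # learning rate for the gradient step

    def predict(self, u):
        """Predicted response for input vector u."""
        return [sum(wi * ui for wi, ui in zip(row, u)) for row in self.w]

    def observe(self, u, y):
        """One gradient step on the squared prediction error."""
        y_hat = self.predict(u)
        for i, row in enumerate(self.w):
            err = y[i] - y_hat[i]
            for j in range(len(row)):
                row[j] += self.lr * err * u[j]

# Synthetic "aircraft": pitch/roll response to elevator/aileron inputs.
true_w = [[0.8, 0.1], [0.05, 1.2]]
model = OnlineResponseModel(2, 2)
random.seed(0)
for _ in range(2000):
    u = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = [sum(wi * ui for wi, ui in zip(row, u)) for row in true_w]
    model.observe(u, y)
```

After a few thousand observations the learned weights track the synthetic aircraft's true response map, which is the essence of learning causal links between inputs and response from flight data.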

The system is also an example of human-centered computing because it magnifies and extends a human's ability to fly a seriously damaged aircraft. Extending the skills of a test pilot is one thing, but extending the skills and the pertinent mental acuity of a novice or nonpilot is a different matter altogether.

Several people—pilots and nonpilots alike—have flown a "full-up" simulation of an F-15 at NASA Ames Research Center (see Figure A). A full-up simulation is the type of simulator in which pilots are trained to fly new aircraft. The simulators are large, fitting only in a multistory bay, and provide realistic visual and motion effects. Furthermore, they can simulate varying effects on the plane's surfaces and indicate realistic aircraft responses to those effects. Of course, among the effects on the plane's surfaces are the pilot's inputs to the aircraft's control surfaces.

The subjects of the Ames simulation study were quickly taught and checked out on landing the aircraft under calm conditions from a good approach to San Francisco Airport (SFO). Most subjects could land the plane very well. Once a subject was checked out under normal flight conditions, the simulation was reset and the subject was placed back on approach into SFO. Next, the subjects experienced a simulated failure of all control surfaces. The only operable control elements were the spoilers and the engines.

The plane was clearly out of control, and efforts to regain control had no effect whatsoever. Finally, the IFC software was engaged. The controls were not as crisp as before, but the subjects were typically able to regain control of the aircraft and perform hard landings at SFO; there would have been no injuries in these landings. The subjects' abilities were clearly extended and magnified by the IFC software: a good example of human-centered computing.

Figure A. NASA Dryden Flight Research Center: F-15 modifications to test-fly the IFC software.

Therefore, NASA is also determining better ways to transition the fundamental results the research produces into products. In terms of software engineering practice, we believe that a different model of software engineering would help a great deal. A proof of concept arising from the theoretical community, tested against a hard application, will not necessarily be transitioned into practice.

To take an idea from proof of concept and actually use it on board an aircraft or a spacecraft is a major undertaking. To address this issue, some elements of software engineering need redefinition or refinement. Theorists are not likely to take their idea all the way to product; software engineers will do that. The notion of joint application development should expand to include more than problem domain experts. This process should also include the theorists who developed the idea that is being taken to product.

This is not a new idea. Years ago, Richard Feynman, while working at Los Alamos, was dispatched to Oak Ridge, where engineers were building the plants to produce the materials for the atomic bomb. The engineers needed to be briefed on the theoretical aspects and context within which they were working. After the briefing, the engineers could correct serious problems in their initial designs and ultimately construct the plants that served a major role in winning World War II.

Furthermore, efforts are needed to identify formal classes of software based on their associated verification and validation requirements. These classes should then serve as the basis for specialized process models and tools. The classes and their associated models will also be a research focus of the center.

Examples of classes:

• Ground-based information systems have more traditional verification and validation requirements, which well-known, existing software process models can satisfy.

• Parallel systems require modeling beyond traditional verification and validation to discover anomalies due to concurrency, such as deadlock or race conditions.

• Onboard flight systems potentially require all of the above, plus extensive flight simulation and flight test.
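As a taste of the concurrency analysis the parallel-systems class demands, the following sketch (hypothetical names, assuming a wait-for graph has already been extracted from a concurrent design) detects deadlock as a cycle of tasks blocked on one another:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph mapping each task to the
    tasks it is blocked on. A cycle means a set of tasks are all
    waiting on each other, i.e., a deadlock. Uses depth-first search
    with three-color marking."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wait_for}

    def visit(t):
        color[t] = GRAY                      # on the current DFS path
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:  # back edge: cycle found
                return True
            if color.get(u, WHITE) == WHITE and u in wait_for and visit(u):
                return True
        color[t] = BLACK                     # fully explored, cycle-free
        return False

    return any(color[t] == WHITE and visit(t) for t in wait_for)

# Two tasks each holding a lock the other needs: classic deadlock.
assert has_deadlock({"A": ["B"], "B": ["A"]})
# A blocking chain with no cycle is safe.
assert not has_deadlock({"A": ["B"], "B": ["C"], "C": []})
```

Race conditions need different machinery (for example, happens-before analysis); this sketch covers only the deadlock side of the concurrency anomalies mentioned above.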

Without process models and tools that are more sensitive to classes of software, the repeatable development of reliable software will continue to be difficult to achieve.

NASA is entering a new age of exploration. We are transitioning from science dominated by fly-by and orbital (remote science) measurements to in-situ or contact science measurements. We are transitioning from analyzing data primarily from single instruments to merging and extracting information from loosely coordinated fleets of spacecraft with multiple instruments. We are transitioning from centralized hierarchical control of spacecraft to mixed-initiative teams of humans and automation.

These transitions are driven as much by economics and policy as they are by science objectives. To accomplish these transitions and achieve the agency's mission goals, we need to deploy a new generation of technologies drawn from the computational sciences. With few exceptions, deploying these technologies will result in a revolutionary, rather than an evolutionary, change in the way NASA designs and executes future missions.

References

1. T. Siegfried, The Bit and the Pendulum: From Quantum Computing to M Theory—The New Physics of Information, John Wiley & Sons, New York, 2000.

2. D.E. Cooke and S. Hamilton, "New Directions for NASA Ames Research Center," Computer, vol. 33, no. 1, Jan. 2000, pp. 63–71.

3. Y. Yokokohji et al., "Bilateral Teleoperation: Towards Fine Manipulation with Large Time Delay," Proc. 7th Int'l Symp. Experimental Robotics, Lecture Notes in Computer Science 271, Springer-Verlag, New York, 2000, pp. 11–20.

4. W. Cellary, "Knowledge and Software Engineering in the Global Information Society," keynote address, Int'l Conf. Software Eng. and Knowledge Eng. 2000, Knowledge Systems Inst., Skokie, Ill., 2000.

5. M. Gell-Mann, The Quark and the Jaguar, W.H. Freeman and Co., New York, 1995.

6. K. Ford, "Cognitive Prostheses," keynote address, 9th Int'l Conf. Tools with AI, IEEE Press, Piscataway, N.J., 1997.

7. L.F. Penin et al., "Force Reflection for Time-Delayed Teleoperation of Space Robots," Proc. IEEE ICRA 2000, IEEE Press, Piscataway, N.J., 2000, pp. 3120–3125.

Daniel E. Cooke is professor and chair of the Computer Science Department at Texas Tech University. He recently completed an 18-month assignment as the manager of NASA's Intelligent Systems Program, a cross-enterprise program led by the NASA Ames Research Center. He received a PhD in computer science from the University of Texas at Arlington. He currently serves as the Formal Methods area editor of the International Journal of Software Engineering and Knowledge Engineering and as the chair of the IEEE Computer Society Technical Committee on Computer Languages. He is a Senior Member of the IEEE. Contact him at the Computer Science Dept., Texas Tech Univ., Lubbock, TX 79409; [email protected].

Butler P. Hine III is the manager of the Intelligent Systems Program, a NASA effort to develop intelligent spacecraft and vehicles, and technology to enable highly capable teams of humans and automation to solve some of NASA's most pressing problems. His research interests include instrumentation for space astronomy, image processing, machine vision, real-time high-performance computing architectures, telerobotics, and 3D visualization. He received a BS in physics and mathematics from the University of Alabama and an MS and PhD in astronomy from the University of Texas at Austin. Contact him at [email protected].

