by Norman Metzger
Metzger is on the staff of the National Academy of Sciences/National Research Council.
Computational science and the more traditional disciplines are pursuing each other toward more powerful and more versatile tools of everybody's trade.
This article is about computation. But first, some astronomy.
Because they are thousands to millions of light-years long, radio jets streaming out of galaxies and quasars are the largest known coherent structures in the universe. When sheer admiration for these cosmic pyrotechnics subsides, questions arise: What creates these jets? What holds them together over such vast distances? What can be learned of their internal structure?
A shift of scene: Three o'clock in the morning in the terminal room at the Max Planck Institute for Physics and Astrophysics, outside Munich, where several scientists are watching the jets emerging on a monitor. Their jets are formed by equations flown on a Cray-1 supercomputer. The simulated jets change shape as new input statements vary densities or temperatures. Internal structures pop up that were not evident in the initial observations of jets by the Very Large Array radio telescope in New Mexico. The computer-flown jets, contrary to human intuition but congruent with radio observation, do not descend into chaos as they plunge into the galactic voids. Rather, they seem to maintain an orderly structure.
That image of radio jets with mind-eluding dimensions being flown inside a computer in West Germany is significant in itself. It is also a window upon a new dimension in science, one so powerful and increasingly ubiquitous that there is growing acceptance of the once questionable proposition that theoretical and experimental science has been joined by a third kind: computational science.
The jet work, in which a grand and vastly complex phenomenon was recreated within a computer, joins other complexities now being parsed computationally. These complexities include the patterns of the plasma within a fusion reactor, the paths of electrons coursing through an imagined semiconductor crystal, a gestating tornado, a yet-to-be-built airplane encountering strong head winds at 40,000 feet, impulses traversing a nerve tract, and the consequences for an unbuilt chemical plant if there were a ten-degree Celsius rise in temperature in one reactor vessel.
These are simulations. To create one, a particular phenomenon—the radio jet, for example—is expressed in mathematical equations that relate one quantity to another and yield an analogue, or model. The model becomes a simulation as it responds to manipulation of its components, such as density or pressure.

The advance of analysis is, at this moment, stagnant along the entire front of non-linear problems. That this phenomenon is not of a transient nature but that we are up against an important conceptual difficulty is clear. . . . Many branches of pure and applied mathematics are in great need of computing instruments to break the present stalemate created by the failure of the purely analytical approach to non-linear problems. . . . We may conclude that really efficient high-speed computing devices may . . . provide us with those heuristic hints which are needed in all parts of mathematics for genuine progress.
—John von Neumann and Herman H. Goldstine (1946)
Simulations work by a blend of intuition, apt equations, and reasonable simplifications. The intuition in the case of the radio jets was the assertion, by Martin Rees and R. D. Blandford of Great Britain, that the jets were in reality streams of gas blowing out of the galaxies and moving supersonically relative to their surroundings. The equations were those of hydrodynamics, of flowing fluids, known for more than 100 years. The simplifications were that nothing mattered other than the conditions quintessential to a supersonic gas, such as temperature and pressure, and that other conditions specific to radio jets—magnetic fields, radiation, and plasma effects—could be ignored.
This is glibly stated. The work actually involved three people working collaboratively over three years. Two were from the Max Planck Institute—Michael L. Norman, who contributed the gas-dynamics computer code, or program, to simulate the jets, and Karl-Heinz A. Winkler, who developed the computer images to display the jets. The third, from the University of Illinois, was Larry L. Smarr, who suggested studying the jets numerically. The three researchers carried out a systematic numerical study, worked out the physical mechanisms, and applied the results to astrophysical observations.
Essential to the enterprise was some assurance that the simulations comported with reasonable physics. Direct observations were marginally useful, because the telltale radio waves may have been generated by mechanisms only incidental to the jets' origins, structure, and history. However, historical assurance was available. Some 100 years before Winkler and his colleagues looked at radio jets, Ernst Mach and Peter Salcher had also looked at supersonic jets emerging not from a galaxy but from a laboratory nozzle. A Schlieren photograph of the laboratory jet showed the same knotty patterns, formed by bow shocks meeting, as the Munich group saw in the simulated jets. Other parallels emerged, confirming that the physics of hydrodynamics was applicable in dimensions ranging from centimeters to light-years, from the scales of the Mach-Salcher jets to those of radio jets.
Having used a computer to support a theory that explained an observation, Smarr and other colleagues—John F. Hawley of the University of Illinois and James R. Wilson of the University of California—then used a computer to probe the unobserved, the jets' origins. The theory asserted that the jets originate from accretion disks, doughnuts of matter, that surround black holes. The computer affirmed, in some 30 hours of computation, that such disks were possible within the boundaries laid down by theorists and that matter could indeed return from a black hole. The finding was made more credible by subsequent observational hints.
The jet story is both a true account and a parable for modern computational science. The computer put numerical musculature on the seen and the unseen, the jets and their origins. It exploited known equations transformed into algorithms, or instructions to the machine on how to handle a given problem. It displayed, notes Norman Zabusky of the University of Pittsburgh, what the computer can do "when the algorithms are working and the physics is puzzling."
A final point of the parable is that the work was done on a U.S. machine operating in West Germany. Access by academic scientists to supercomputers has been limited until recently. The first of the current generation of supercomputers—the Cray-1—came on the market in 1976, but such machines were not acquired by universities until 1981. Before then academic researchers were limited largely to supercomputers located at weapons laboratories and subject to the attendant requirements for secrecy clearance. The Cray-1 at the Max Planck Institute was one of the few in the world dedicated entirely to research in physics and astrophysics.
Those constraints have loosened, as supercomputers have become more generally accessible and as networks for remote access to them have begun to form. Although prediction can be a stimulating if foolish enterprise, one well-bounded prediction is that the availability and use of supercomputers, current and future, will increase greatly as students and their teachers formulate the sorts of questions adumbrated by the work on the radio jets.
A temporal class
What actually will be done? How? What are supercomputers? Why are they being used to reify the otherwise abstract phrase computational science?
Supercomputer is a temporal term, referring to the fastest computer available at any moment in time that is capable of solving scientific problems. In this sense there have always been supercomputers, and today's champions will always be tomorrow's drayhorses.
The characteristics that set apart the current generation of machines—the Cray-1 and the Cray X-MP, Control Data's Cyber 205, the Hitachi S-810/20, and the Fujitsu VP-200—are the capacity of their fast-access memories for storing data while a computation is under way and the rapidity with which they compute. They work at a speed of about 100 million instructions per second (MIPS), somewhat faster than the few hundred thousand instructions per second achieved by a personal computer. (MIPS is one of two customary measures of computational speeds. The other is MFLOPS, or millions of floating-point operations per second. MFLOPS counts only the instructions relating to floating-point operations typical of large computations. Specifically, floating point refers to the scientific notation in which numbers are expressed in powers of ten. As a generality there are three MIPS for every MFLOPS, with the exact ratio depending on the application.)
The high speeds derive from the ability of the machines to do arithmetic operations on arrays of numbers, or vectors, and from their embodiment of pipelining, a rudimentary form of concurrent, or parallel, processing. Vector machines can treat sets of numbers as single numbers, so that if a set of numbers requires the same arithmetic step (addition, for example), the machine can perform the step in one cycle rather than in iterative cycles for each addition. Pipelining means the decomposition of a problem into component parts, with the parts computed concurrently; a reasonable analogy is the assembly-line manufacture of a car.
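The distinction is easy to sketch in a modern scripting language. The fragment below is an illustration only, not a picture of how a Cray works internally; it assumes the NumPy library, whose whole-array operations play the role of vector hardware.

```python
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Scalar style: one addition per trip through the loop,
# the way a conventional serial machine proceeds.
c = np.empty_like(a)
for i in range(len(a)):
    c[i] = a[i] + b[i]

# Vector style: the set of numbers is treated as a single
# operand, and the additions stream through in one operation.
c = a + b
```

On vector hardware the second form lets a million additions flow through the arithmetic pipeline back to back instead of being issued one at a time.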
The computational prowess of these machines, however, does not fully explain their enormous impact on science. What really matters, argues Kenneth Wilson of Cornell University, is that the arrival of supercomputers was confluent with a watershed in science. "What has changed," he says, "is the complexity of problems people are willing to tackle. Experiments became precise and sophisticated enough that you could measure properties of materials that were not homogeneous."
Enormous progress in what Wilson calls measurement technology has provided computable data on complex materials and events, such as the crystal structures of metals and the internal structures of stars, the excited states of atoms and ions during chemical reactions, and the behavior of quarks. The main function of computers, Wilson says, "is to help you build intuition about what happens in these complicated circumstances. You can doodle with pencil and paper when you're thinking about the earth going around the sun, but you cannot doodle when you're trying to follow 10,000 rocks going around Saturn."
Doodling
One doodles, in Wilson's sense, by simulating an event within a computer, although the word simulation is a bit misleading, implying a passivity that does not exist. Simulations are not simply laboratory experiments mirrored in binary digits. They can redefine experimental time, slowing down otherwise unobservable phenomena, such as nerve impulses zipping through a maze of tracts; they can accelerate time, as in reshaping the earth's surface or an airfoil in minutes. Simulations image the unobservable—the propagation of cracks in a material as load stresses rise or electrons circling a neutron star, for example—and they are untroubled by complexity.
Nonlinear equations are a calculational nightmare, for example, but, as Larry Smarr says, a "supercomputer doesn't care whether the equations are linear or nonlinear." In approximating real events, though, simulations are always incomplete. "Nature is more complicated than the model," says Smarr. "But until you understand simple solutions, you're fooling yourself that you can understand something more complicated."
How is a simulation done? Why are today's incredibly powerful computers still insufficient for many simulations? Simulations invariably involve nonlinear events, where there is no straight-line relation between two values in an equation. Modeling such events usually involves solving partial differential equations either analytically, which means algebraically, or numerically, which means inserting numbers for the values and then computing. The difficulty is that models of complex, nonlinear events quickly outrun analytical solutions. Although this problem is not new, the answer—numerical solutions, using today's high-speed computers—is new.
The elusive continuum
Nature is invariably continuous. Temperature, pressure, time, and space vary smoothly over an unbroken field. To ask a computer to duplicate that reality is to ask it to calculate anywhere from one to several thousands of equations at an infinite number of points and to do it in three dimensions—in four dimensions if time is added, and in five if the change over time of a condition, such as temperature, is wanted.
That is now impossible; no digital computer can effectively deal with an infinite number of points. It is more reasonable to constrain the number of points at which the computer is asked to compute. The number of points and the spacing between them are a calculus of the problem being solved, the detail sought, the available algorithms specifying the actual computational steps to be taken, and the reasonableness of the physics.
Each point in a mesh, or network, is a computational locus, a site populated by data and by equations to link them. The data and equations at each point may connect the density, velocity, and temperature of a flowing gas, the loads and resultant stresses upon a metal bar, or the energy states of electrons for an atom. The points, the computational loci, summed as a mesh, create a universe. This universe may be a star within a galaxy, a galaxy within a big-bang cosmology, a pixel within satellite images, a quark within a proton, a neuron within a brain, cars on a freeway, sand grains within sand dunes, or an electron within a semiconductor lattice.
The finer the mesh of points, the more faithful the simulation and the more demanding the computation. Computational loads expand eightfold as the spacing between points is halved (halving the spacing along each of three dimensions multiplies the number of points by two cubed), and the computational time for a simulation is usually the square—sometimes the fourth power—of the number of points.
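What happens at each computational locus can be suggested by a minimal sketch: heat diffusing along a one-dimensional bar, with made-up parameters chosen only for illustration. Every mesh point is updated from its neighbors, the discrete analogue of a partial differential equation.

```python
import numpy as np

# Hypothetical one-dimensional mesh: each entry of u is a
# computational locus holding one datum, the local temperature.
n_points = 101
dx = 1.0 / (n_points - 1)   # spacing between mesh points
dt = 0.4 * dx**2            # this explicit scheme is stable for dt <= 0.5*dx**2
u = np.zeros(n_points)
u[n_points // 2] = 100.0    # a hot spot in the middle of the bar

for step in range(2000):
    # Discrete analogue of the heat equation du/dt = d2u/dx2:
    # every interior point is linked to its two neighbors.
    u[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# Halving dx doubles the points on this line; on a three-dimensional
# mesh it multiplies them by eight, as the arithmetic above indicates.
```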
Equally daunting are the enormous memories required. For example, for a reasonable simulation of a putative chemical plant handling some 40 to 50 chemicals—the computer's task being to observe their changes—the concentrations of these chemicals and their effects on immediate neighbors are needed at about 100 points. Almost one million words of memory are needed to store data and results. This needed capacity is larger than the fast-access memory of most conventional computers and about a quarter the capacity of current supercomputers.
If the experimenter, not unreasonably, wants to know how these chemicals diffuse as they react, his requirement will expand the calculation to two or three dimensions. Even if the calculation is cut to 20 chemicals and only 50 points, 2.5 million words of memory are needed just to store solutions, and 350 million more are needed to store the data.
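The arithmetic behind the solution-storage figure can be checked directly. The reading below is an assumption—that "50 points" means 50 along each of three spatial axes—but it is the reading that reproduces the number in the text.

```python
chemicals = 20
points_per_axis = 50
grid_points = points_per_axis ** 3        # 125,000 loci on a 3-D mesh
solution_words = chemicals * grid_points  # one word per concentration per locus
print(solution_words)                     # 2,500,000 words, matching the text
```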
Computing techniques
There is a rich collection of numerical methods available for computing, with a useful typology being their division into two classes: deterministic and Monte Carlo. In deterministic calculations, if the input data and algorithms are always the same, then the results will always be the same. In Monte Carlo calculations, the results may not be the same, but may rather differ for each run, because the input data are supplemented by probabilities that an event to be calculated will occur—that an electron going through a semiconductor crystal will take a particular track, for example, or that ions in a plasma will hit a container wall.
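The contrast can be put in a few lines of code. Neither example comes from the work described here; the Monte Carlo routine is the textbook dart-throwing estimate of pi, standing in for electrons and ions whose fates are decided by chance.

```python
import random

def deterministic(values):
    # Same input, same algorithm: the answer never varies between runs.
    return sum(v * v for v in values)

def monte_carlo_pi(samples):
    # Chance enters through the random draws, so each run scatters
    # around the true value and sharpens as samples accumulate.
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / samples

print(deterministic([1, 2, 3]))  # always 14
print(monte_carlo_pi(100_000))   # near 3.14, but different each run
```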
Both methods have their subsets. Deterministic techniques, of necessity, often resort to adaptive grids, in which the computation bears down on what are likely to be the most important parts of a solution. The Monte Carlo techniques include the renormalization group, used for expanding a lattice of points without sacrificing accuracy. As Kenneth Wilson says, "The renormalization group is one of the fundamental approaches to tackling this problem of what to do when you cannot make your grid small enough to use the fundamental equation. How do you increase the grid spacing beyond the level of a straight numerical approach, yet preserve all of the reliability that working from a fundamental equation can give you?"
Challenges and limits
Although the renormalization group is a brilliant tactic for attacking otherwise insuperable problems, its application to the problem that most interests Wilson, that of quantum chromodynamics, is still in the future. It awaits the arrival of much faster computers with workable parallel architectures, the accessory algorithms, memories, programming languages, compilers to map programs onto the machine architectures, and graphics to show computational results instantly and visually. Quarks, the elementary blocks of protons, neutrons, and other subatomic particles, have the peculiar property of being inseparable, of being permanently bound to one another. The issue for Wilson and other theoreticians is to derive that quark confinement from the underlying theory, quantum chromodynamics. Says Wilson, "The computing power is totally inadequate."
Similar barriers afflict other fields: hydrodynamics, fusion, and even the somewhat hermaphroditic use of computers to simulate the properties of future computers, for example.
The insufficiencies are most evident in hydrodynamics, in the simulation of flowing fluids. Hydrodynamics not only is the first field to invoke computational science as crucial but also is the most catholic. It includes aircraft and wing design, both subsonic and supersonic; climate and weather prediction, both globally and locally; analysis of water waves and the design of ship hulls; the operation of piping networks, such as those in nuclear reactors or other power plants; geological phenomena, such as the flow of glaciers or of oceanic crust; biological flows, such as that of blood through the heart; and reactive flows, such as in combustion.
The Navier-Stokes equations, the same general equations that govern all these phenomena, have existed for over 100 years, but they remain unsolvable for all but the simplest of fluid flows. The reasons are in part that the phenomena they express are nonlinear and tend to descend into turbulence. The length scales can vary wildly for the same problem, from ocean currents flowing for thousands of kilometers to waves that can be measured in centimeters. The reasonableness of the calculation also depends on the boundary conditions, or walls, imposed on the fluid. These walls can be sharp, as in automobile engine cylinders; nonexistent, as in stratospheric airflows; or complex and protean, as in the walls of a beating heart.
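For reference, one common statement of those equations—the incompressible form, a simplification that not all of the article's examples share—reads:

```latex
% u: velocity field; p: pressure; rho: density; nu: kinematic viscosity;
% f: body forces. The convective term (u . grad)u is the nonlinearity
% that breeds the turbulence discussed below.
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0
```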
Therefore, says David Caughey of Cornell University, "given the Navier-Stokes equations, how can one extract anything useful from them? The fundamental problem is that of describing turbulence. Basically, in flow past an airplane wing, unsteady fluctuations develop. There are little eddies in the flow, ranging in size from a tenth of a millimeter on up. These eddies are transferring energy back and forth." Computing that flow and the resultant turbulence by accounting for these energy exchanges at different spatial scales, Caughey says, is beyond any practical computation, even allowing for computing speeds a thousand times as great as existing speeds. However, he says, "it is possible to solve these sorts of problems in very simplified ranges, so there is hope that with the next generation of supercomputers we can model turbulence in quite complicated geometries." For now, in Wilson's phrase, "we have to depend on hoking up the equations."
Nevertheless, computational powers in hydrodynamics have grown enormously. Airflow over a wing, for example, can now be modeled in half an hour for less than $1,000 of computer time. In 1960 that same computation would have cost $10 million and would not have been completed until 1990. Modeling storms remains largely a two-dimensional act, constraining the simulation of tornadic clouds and their spawn. Similar constraints apply to the powerful and unpredictable downdrafts from clouds, the wind shears that have struck down airplanes with disastrous results.
Plasmas and chips
Much the same tale of progress and limits applies to plasma physics: coarse results, limited experiments. Plasmas, ionized fluids, are the most common matter of the universe. They are the stuff of stars and the working fuel of fusion reactors. Computer simulations of fusion plasmas probe a torrent of fundamental questions on what happens to particles within an actual reactor: How do neutrons streaming out of a plasma affect surrounding magnets? What mix of plasma energies and densities, confinement times and geometries works best? How can damage to the walls of a reactor by energetic particles be gauged without first building a reactor and having it cannibalize itself?
A fusion plasma will have about 10¹⁵ particles per cubic centimeter, with each particle carrying a charge affecting both near and far particles. Not even the most heroic computer could sort out the effects of individual plasma particles. Rather, the electrical and magnetic fields owing to the collective motions of these particles are calculated first, and then, recursively, the interlocking effects of the fields on particles are calculated. The state of the art remains limited, however. It is typified by a recent achievement in calculating the equations of state relating density, temperature, and other parameters of an idealized, single-component plasma: one chemical species immersed in a uniform sea of electrons, the composite being electrically neutral. That computation took a Cray-1 seven hours. At present there is no hope of calculating more complex plasmas; therefore, computation in plasma physics, as in the other domains of computational science, becomes that of artful compromises designed to make a problem tractable.
Because the average paths of particles are typically sought, Monte Carlo calculations can be used. For example, a particle hitting a wall may reflect at a certain angle about 10 percent of the time. "You then send in a particle," says David Ruzic of the University of Illinois, "and a random-number generator rolls the dice at each point and tells whether the particle will go one way or another. You keep track of where those particles end up. With bigger computers, you can send in more particles and then you can build up your statistics."
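A minimal sketch of the dice-rolling Ruzic describes: the one-dimensional geometry, the step size, and the 10 percent reflection probability below are stand-ins chosen for the example, not parameters from his work.

```python
import random

def trace_particle(rng, reflect_prob=0.1, speed=0.01, max_steps=10_000):
    """Follow one particle bouncing in a slab between walls at 0 and 1.

    At each wall encounter the dice are rolled: the particle reflects
    with probability reflect_prob and is absorbed otherwise.
    """
    x, v = 0.5, rng.choice([-speed, speed])
    for _ in range(max_steps):
        x += v
        if x <= 0.0 or x >= 1.0:                      # the particle hits a wall
            if rng.random() < reflect_prob:
                v = -v                                # reflected back into the slab
                x = min(max(x, 0.0), 1.0)
            else:
                return "left" if x <= 0.0 else "right"  # absorbed here
    return "in flight"

rng = random.Random()
fates = [trace_particle(rng) for _ in range(10_000)]
# Building up statistics: more particles, smoother averages.
print(fates.count("left") / len(fates), "absorbed at the left wall")
```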
Similar Monte Carlo techniques apply to computers simulating future computers or their underlying components, such as semiconductor chips. "That is," says Karl Hess, also of the University of Illinois, "we simulate a real semiconductor crystal, and then we let an electron move through that crystal and we let things happen to it. We let it be scattered by impurities, we let it interact with lattice vibrations, and so on. We then average what has happened to it, and we extract from that information the macroscopic parameters we need for the device. We need to simulate millions of interactions, each of which has hundreds of equations attached to it."
There are now many ideas for increasing the switching speeds of chips. One involves raising electron mobilities, an approach embodied in the High Electron Mobility Transistor (HEMT) emphasized by Japanese engineers. The difficulty is in determining what these devices will do, compared to existing devices, as they are scaled down in size and as edge effects—as opposed to one-dimensional planar effects—become more prominent.
Interactive graphics
Hess and his colleagues did model one HEMT device using a VAX superminicomputer. The VAX is still the computational workhorse of academic science, but the Hess group took 30 hours to get just one picture. There was no allowance for altering the setup to see what a change in the initial circuitry might do. "What we actually want to do," Hess says, "is to reduce this complicated model to a comprehensive and simple one, which can then be used in computer-aided design. However, it's very difficult for us to see from these few pictures what model, what approach, what simplifications we should use. What we need is a movie, to see directly how the electrons are flowing in and out." Making movies demands much faster computers than are now available to Hess.
Hess's difficulty plagues almost all of computational science: insufficient computational power to understand a computer simulation while it is under way. As Don Greenberg of Cornell University has pointed out, about 90 percent of the time and cost in a simulation is now invested in defining the problem, because the computation is too slow and the result often too opaque to allow reshaping the questions while a simulation is running. There is now no software, Greenberg says, that allows an experimenter to "interact with a simulation while it's occurring."
What is decidedly not acceptable, says his Cornell colleague Kenneth Wilson, "is to show a frame, and then sometime later show another frame. You cannot interact very effectively if each interaction requires seven days; if there's something you want to change, that's another seven days." Having effective computer graphics that can display visually and in color the results of three-, four-, or five-dimensional calculations, Greenberg says, will enable the experimenter to rethink the problem in real time and, more subtly, will alter the way this sort of science is done. Thus, for example, the chemist interested in the molecular pas de deux of an enzyme and its substrate could watch their dance as they react, rearrange, migrate, or absorb. He could change conditions in midstream—acidity, the strength of certain bonds, temperature, the presence of other molecules—and observe the effects.
Graphics are valued first of all because they convert streams of data into interpretable pictures. In computational science, however, their most crucial value—since computers obey algorithms and not the laws of physics—is that they confirm computational intuition; they link calculations with the real world. The fact that the jets modeled by Smarr and others looked like reasonable jets, for example, confirmed the soundness of the algorithms. Similarly, the fact that fractal geometry (fractals are patterns that seem the same at whatever level of detail they are looked at) could create startlingly true images of mountains, snowflakes, and planets, as well as dragons and other imaginary beings, confirmed the physical intuition.
The new architectures
Although today's most powerful supercomputers, such as the Cray X-MP, can in principle generate sharp images in varying colors and intensities at 30 frames per second, they cannot also do the calculations needed to produce each one of those frames. "We haven't even gotten to first base yet in terms of computing power," Wilson says.
How will faster speeds be attained? What will the machines look like? What are the pertinent software issues? In addition, considering how scarce the present generation of supercomputers initially was for academic science, what will be the availability of future, much faster computers?
Some background: Maximum computation rates are now about 1,000 MIPS, and the hope is to attain about 20,000 MIPS in five years and 100,000 MIPS in about ten years. Reaching these speeds will depend on effective computer parallelism; that is, having the algorithms, the architectures, and the computers to execute hundreds, thousands, or hundreds of thousands of instructions simultaneously. Schemes, and even actual machines, for parallel computation are not new. The 1943 British machine Colossus reportedly used parallel computations, as did early U.S. machines, such as Solomon in 1962 and Illiac IV in 1973. These machines used a form of parallelism, the execution of the same instruction on different data.
Recently, genuinely parallel machines, able to execute different instructions simultaneously, have emerged. Notable among them is the Heterogeneous Element Processor (HEP), built by the Denelcor Corporation. James C. Browne of the University of Texas describes this machine as "probably the first commercially available general-purpose computer that can perform several operations concurrently." Neither HEP nor the Distributed Array Processor (DAP), a British counterpart from International Computers Limited with more than 4,000 processors, will as such attain the thousandfold increase in computational speeds. Nevertheless, such an increase is required by real-time graphics, the manifold problems in hydrodynamics, and the imposing calculations for quantum chromodynamics. The goal is massive parallelism, thousands to even millions of high-speed processors working and communicating simultaneously.
There is no paucity of ideas on the architectures of such machines. Some 70 designs can be found in U.S. universities. These often are so-called dance-hall designs, with processors on one side, memories on the other, and communication links between them.
For parallel architectures to be realized in working machines, not only will about $20 million be required for each machine but there will also have to be accompanying software. The need will be for algorithms fitted to parallel rather than serial computation, compilers that discover parallelism in serially written programs and then express it in the language of the machine, and operating systems that provide fluid communications among thousands of cooperating processors. There are other, more subtle difficulties, which are reducible to the issue of writing programs for nonexistent machines. Experimentation is impossible, and it will be enormously difficult to trace errors in programs orchestrating thousands of events simultaneously.
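What an algorithm fitted to parallel rather than serial computation asks of a programmer can be suggested in miniature. The sketch below assumes nothing about any particular machine; it simply splits one sum into independent pieces that several processors can work on concurrently.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each processor gets an independent piece of the problem.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]      # eight-way decomposition
    with Pool(processes=8) as pool:
        partials = pool.map(partial_sum, chunks)  # computed concurrently
    print(sum(partials))                          # recombined serially
```

The hard cases, of course, are the ones described above: problems whose pieces must communicate constantly while the computation runs.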
A recurring bromide is that it will be extremely difficult to write programs for parallel computation. There is little evidence to confirm that. In fact, there is some limited experience to suggest otherwise—students writing parallel programs for the DAP machine, for example. Creating a new computing environment, says Kenneth Wilson, means building an entire culture, and "cultures develop myths," such as that parallel processing cannot be done. "The biggest problem in knowing how much you can do on parallel systems," he adds, "is that we're building only the first generation of massively parallel systems."
Progress in science tends to be saltatory, and that may also hold true for massively parallel machines. Once the early models have been built and operated and the software has been refined, the machines may proliferate very quickly. The reason is both logical and generic: Parallel machines are built from large numbers of identical processors, and these can be mass produced, in contrast to the cottage-industry mode applied to current supercomputers.
Despite the powerful promise of future computers, it is important to remember that the current powerhouses are not quite dead yet. The Cray X-MP, which basically quadruples the processing power of a Cray-1, is probably the first of a generation of vector machines with parallel processors. Such continual redoublings of the Cray-1 processors could generate machines with 16 to 32 processors, having speeds of 10,000 to 20,000 MIPS, or 20 to 40 times as great as the speed of the Cray-1. These are maximum speeds, and these machines will rarely attain them. However, the fact that these machines will offer faster speeds within basically the same software environment (programs, compilers, languages) will have a catalytic effect on computational science.
Other solutions
The scientific computers of the future also will continue to include special-purpose machines, the specialists of advanced computing that are designed to do extremely well such single activities as speech and signal processing. The electronics of such machines can be tailored to particular algorithms, such as the fast Fourier transform, whose algorithms are already embedded in computational work in image analysis and in such fields as high-energy physics.
In addition, in about four years, the number crunching offered by current supercomputers may come at a fraction of the more than $10 million they cost now, as scientific processors with floating-point abilities and capable of 20 to 40 MIPS reach the market. Further into the future are reconfigurable architectures; these will be systems in which the architecture is redrawn ad libitum to fit the arriving algorithms.
Other elements will limn future computing. For example, the storage devices, being mechanical, have been lapped by the increasing computation rates, although damage has been limited by much larger memories. Of present interest and promise is an optical disk with much faster access time (the time needed to start the inflow of randomly selected data) and capacious storage that is being developed by the U.S. Air Force and the National Aeronautics and Space Administration.
An obligatory software concern is Fortran, the old shoe of scientific computing; although it is badly worn, the replacement may not fit. The arguments about Fortran, as ubiquitous at computer science meetings as Styrofoam cups, tend toward the Grecian: Each side builds a pile of points, and the one with the highest pile wins the day. Obviously, Fortran has its drawbacks, notably that it is not a language for concurrency. However, some scientists, such as David Kuck of the University of Illinois, argue for powerful compilers that can discover concurrency in Fortran code and restructure it accordingly. The current trend is to create metaprograms, such as EFL, Ratfor, and Gibbs, that are less cryptic to write and read than Fortran but that can automatically be translated into Fortran. "You don't replace Fortran," says Wilson. "You build a system that people can use when they're building the new code, but that will integrate well with existing, debugged Fortran." Perhaps the most reasonable summation is the comment that Fortran in ten years will be completely different from what it is now, but it will still be called Fortran.
The fate of Fortran is certainly important, but it really pales in contrast to what else lies ahead. In a fraction of a decade, a new branch of science has opened, and there is an emergent generation of people with new perspectives. To them, complexity is less a concern than it is a challenge to strive for good algorithms, some imaginative programming, and a few hours in front of a monitor to watch nature decoded in full color. •