
by Norman Metzger

Computational science and the more traditional disciplines are pursuing each other toward more powerful and more versatile tools of everybody's trade.


This article is about computation. But first, some astronomy.

Because they are thousands to millions of light-years long, radio jets streaming out of galaxies and quasars are the largest known coherent structures in the universe. When sheer admiration for these cosmic pyrotechnics subsides, questions arise: What creates these jets? What holds them together over such vast distances? What can be learned of their internal structure?

A shift of scene: Three o'clock in the morning in the terminal room at the Max Planck Institute for Physics and Astrophysics, outside Munich, where several scientists are watching the jets emerging on a monitor. Their jets are formed by equations flown on a Cray-1 supercomputer. The simulated jets change shape as new input statements vary densities or temperatures. Internal structures pop up that were not evident in the initial observations of jets by the Very Large Array radio telescope in New Mexico. The computer-flown jets, contrary to human intuition but congruent with radio observation, do not descend into chaos as they plunge into the galactic voids. Rather, they seem to maintain an orderly structure.

That image of radio jets with mind-eluding dimensions being flown inside a computer in West Germany is significant in itself. It is also a window upon a new dimension in science, one so powerful and increasingly ubiquitous that there is growing acceptance of the once questionable proposition that theoretical and experimental science has been joined by a third kind: computational science.

The jet work, in which a grand and vastly complex phenomenon was re-created within a computer, joins other complexities now being parsed computationally. These complexities include the patterns of the plasma within a fusion reactor, the paths of electrons coursing through an imagined semiconductor crystal, a gestating tornado, a yet-to-be-built airplane encountering strong head winds at 40,000 feet, impulses traversing a nerve tract, and the consequences for an unbuilt chemical plant if there were a ten-degree Celsius rise in temperature in one reactor vessel.

These are simulations. To create one, a particular phenomenon—the radio jet, for example—is expressed in mathematical equations that relate one quantity to another and yield an analogue, or model. The model becomes a simulation as it responds to manipulation of its components, such as density or pressure.

The advance of analysis is, at this moment, stagnant along the entire front of non-linear problems. That this phenomenon is not of a transient nature but that we are up against an important conceptual difficulty is clear. . . . Many branches of pure and applied mathematics are in great need of computing instruments to break the present stalemate created by the failure of the purely analytical approach to non-linear problems. . . . We may conclude that really efficient high-speed computing devices may . . . provide us with those heuristic hints which are needed in all parts of mathematics for genuine progress.

—John von Neumann and Herman H. Goldstine (1946)

Simulations work by a blend of intuition, apt equations, and reasonable simplifications. The intuition in the case of the radio jets was the assertion, by Martin Rees and R. D. Blandford of Great Britain, that the jets were in reality streams of gas blowing out of the galaxies and moving supersonically relative to their surroundings. The equations were those of hydrodynamics, of flowing fluids, known for more than 100 years. The simplifications were that nothing mattered other than the conditions quintessential to a supersonic gas, such as temperature and pressure, and that other conditions specific to radio jets—magnetic fields, radiation, and plasma effects—could be ignored.

This is glibly stated. The work actually involved three people working collaboratively over three years. Two were from the Max Planck Institute—Michael L. Norman, who contributed the gas-dynamics computer code, or program, to simulate the jets, and Karl-Heinz A. Winkler, who developed the computer images to display the jets. The third, from the University of Illinois, was Larry L. Smarr, who suggested studying the jets numerically. The three researchers carried out a systematic numerical study, worked out the physical mechanisms, and applied the results to astrophysical observations.

Essential to the enterprise was some assurance that the simulations comported with reasonable physics. Direct observations were marginally useful, because the telltale radio waves may have been generated by mechanisms only incidental to the jets' origins, structure, and history. However, historical assurance was available. Some 100 years before Winkler and his colleagues looked at radio jets, Ernst Mach and Peter Salcher had also looked at supersonic jets emerging not from a galaxy but from a laboratory nozzle. A Schlieren photograph of the laboratory jet showed the same knotty patterns, formed by bow shocks meeting, as the Munich group saw in the simulated jets. Other parallels emerged, confirming that the physics of hydrodynamics was applicable in dimensions ranging from centimeters to light-years, from the scales of the Mach-Salcher jets to those of radio jets.

Having used a computer to support a theory that explained an observation, Smarr and other colleagues—John F. Hawley of the University of Illinois and James R. Wilson of the University of California—then used a computer to probe the unobserved, the jets' origins. The theory asserted that the jets originate from accretion disks, doughnuts of matter, that surround black holes. The computer affirmed, in some 30 hours of computation, that such disks were possible within the boundaries laid down by theorists and that matter could indeed return from a black hole. The finding was made more credible by subsequent observational hints.

The jet story is both a true account and a parable for modern computational science. The computer put numerical musculature on the seen and the unseen, the jets and their origins. It exploited known equations transformed into algorithms, or instructions to the machine on how to handle a given problem. It displayed, notes Norman Zabusky of the University of Pittsburgh, what the computer can do "when the algorithms are working and the physics is puzzling."

A final point of the parable is that the work was done on a U.S. machine operating in West Germany. Access by academic scientists to supercomputers has been limited until recently. The first of the current generation of supercomputers—the Cray-1—came on the market in 1976, but such machines were not acquired by universities until 1981. Before then academic researchers were limited largely to supercomputers located at weapons laboratories and subject to the attendant requirements for secrecy clearance. The Cray-1 at the Max Planck Institute was one of the few in the world dedicated entirely to research in physics and astrophysics.

Those constraints have loosened, as supercomputers have become more generally accessible and as networks for remote access to them have begun to form. Although prediction can be a stimulating if foolish enterprise, one well-bounded prediction is that the availability and use of supercomputers, current and future, will increase greatly as students and their teachers formulate the sorts of questions adumbrated by the work on the radio jets.

A temporal class

What actually will be done? How? What are supercomputers? Why are they being used to reify the otherwise abstract phrase computational science?

Supercomputer is a temporal term, referring to the fastest computer available at any moment in time that is capable of solving scientific problems. In this sense there have always been supercomputers, and today's champions will always be tomorrow's drayhorses.

The characteristics that set apart the current generation of machines—the Cray-1 and the Cray XMP, Control Data's Cyber 205, the Hitachi S-810/20, and the Fujitsu VP-200—are the capacity of their fast-access memories for storing data while a computation is under way and the rapidity with which they compute. They work at a speed of about 100 million instructions per second (MIPS), somewhat faster than the few hundred thousand instructions per second achieved by a personal computer. (MIPS is one of two customary measures of computational speeds. The other is MFLOPS, or millions of floating-point operations per second. MFLOPS counts only the instructions relating to floating-point operations typical of large computations. Specifically, floating point refers to the scientific notation in which numbers are expressed in powers of ten. As a generality there are three MIPS for every MFLOPS, with the exact ratio depending on the application.)

(Metzger is on the staff of the National Academy of Sciences/National Research Council.)

The high speeds derive from the ability of the machines to do arithmetic operations on arrays of numbers, or vectors, and from their embodiment of pipelining, a rudimentary form of concurrent, or parallel, processing. Vector machines can treat sets of numbers as single numbers, so that if a set of numbers requires the same arithmetic step (addition, for example), the machine can perform the step in one cycle rather than in iterative cycles for each addition. Pipelining means the decomposition of a problem into component parts, with the parts computed concurrently; a reasonable analogy is the assembly-line manufacture of a car.
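A rough feel for the vector idea can be had on any machine with an array library. The sketch below, in Python with NumPy (nothing a Cray ever ran, but the same contrast), times element-at-a-time arithmetic against whole-array arithmetic; the array form is the software analogue of handing a vector unit a set of numbers as a single operand.

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar style: one addition per loop pass, the way a conventional
# serial machine steps through the data.
start = time.perf_counter()
c_loop = [a[i] + b[i] for i in range(n)]
loop_seconds = time.perf_counter() - start

# Vector style: the whole array is treated as a single operand,
# analogous to streaming the numbers through a vector pipeline.
start = time.perf_counter()
c_vec = a + b
vector_seconds = time.perf_counter() - start

assert np.allclose(c_loop, c_vec)   # same answer, very different cost
print(f"element-by-element: {loop_seconds:.3f} s")
print(f"whole-array:        {vector_seconds:.3f} s")
```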

The computational prowess of these machines, however, does not fully explain their enormous impact on science. What really matters, argues Kenneth Wilson of Cornell University, is that the arrival of supercomputers was confluent with a watershed in science. "What has changed," he says, "is the complexity of problems people are willing to tackle. Experiments became precise and sophisticated enough that you could measure properties of materials that were not homogeneous."

Enormous progress in what Wilson calls measurement technology has provided computable data on complex materials and events, such as the crystal structures of metals and the internal structures of stars, the excited states of atoms and ions during chemical reactions, and the behavior of quarks. The main function of computers, Wilson says, "is to help you build intuition about what happens in these complicated circumstances. You can doodle with pencil and paper when you're thinking about the earth going around the sun, but you cannot doodle when you're trying to follow 10,000 rocks going around Saturn."

Doodling

One doodles, in Wilson's sense, by simulating an event within a computer, although the word simulation is a bit misleading, implying a passivity that does not exist. Simulations are not simply laboratory experiments mirrored in binary digits. They can redefine experimental time, slowing down otherwise unobservable phenomena, such as nerve impulses zipping through a maze of tracts; they can accelerate time, as in reshaping the earth's surface or an airfoil in minutes. Simulations image the unobservable—the propagation of cracks in a material as load stresses rise or electrons circling a neutron star, for example—and they are untroubled by complexity.

Nonlinear equations are a calculational nightmare, for example, but, as Larry Smarr says, a "supercomputer doesn't care whether the equations are linear or nonlinear." In approximating real events, though, simulations are always incomplete. "Nature is more complicated than the model," says Smarr. "But until you understand simple solutions, you're fooling yourself that you can understand something more complicated."

How is a simulation done? Why are today's incredibly powerful computers still insufficient for many simulations? Simulations invariably involve nonlinear events, where there is no straight-line relation between two values in an equation. Modeling such events usually involves solving partial differential equations either analytically, which means algebraically, or numerically, which means inserting numbers for the values and then computing. The difficulty is that models of complex, nonlinear events quickly outrun analytical solutions. Although this problem is not new, the answer—numerical solutions, using today's high-speed computers—is new.
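What "inserting numbers and computing" looks like in miniature is sketched below for Burgers' equation, a classic one-dimensional nonlinear flow equation often used as a stand-in for harder hydrodynamic problems. The grid size, time step, and viscosity are illustrative choices, and the scheme is the simplest workable one, not the method behind any of the simulations described here.

```python
import numpy as np

# Burgers' equation, u_t + u u_x = nu u_xx, marched forward on a mesh of
# points with finite differences: upwind for the nonlinear convection term,
# centered for the viscous term, periodic boundaries.
nx, nu = 200, 0.05
dx = 2 * np.pi / nx
dt = 0.001
x = np.linspace(0.0, 2 * np.pi, nx, endpoint=False)
u = 1.5 + np.sin(x)                 # smooth initial velocity profile

for step in range(2000):            # advance two time units
    u_left = np.roll(u, 1)          # value at the point to the left (periodic)
    u_right = np.roll(u, -1)        # value at the point to the right
    convection = u * (u - u_left) / dx
    diffusion = nu * (u_right - 2 * u + u_left) / dx**2
    u = u + dt * (diffusion - convection)

print("velocity range after 2 time units:",
      round(float(u.min()), 3), "to", round(float(u.max()), 3))
```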

The elusive continuum

Nature is invariably continuous. Temperature, pressure, time, and space vary smoothly over an unbroken field. To ask a computer to duplicate that reality is to ask it to calculate anywhere from one to several thousands of equations at an infinite number of points and to do it in three dimensions—in four dimensions if time is added, and in five if the change over time of a condition, such as temperature, is wanted.

That is now impossible; no digital computer can effectively deal with an infinite number of points. It is more reasonable to constrain the number of points at which the computer is asked to compute. The number of points and the spacing between them are a calculus of the problem being solved, the detail sought, the available algorithms specifying the actual computational steps to be taken, and the reasonableness of the physics.

Each point in a mesh, or network, is a computational locus, a site populated by data and by equations to link them. The data and equations at each point may connect the density, velocity, and temperature of a flowing gas, the loads and resultant stresses upon a metal bar, or the energy states of electrons for an atom. The points, the computational loci, summed as a mesh, create a universe. This universe may be a star within a galaxy, a galaxy within a big-bang cosmology, a pixel within satellite images, a quark within a proton, a neuron within a brain, cars on a freeway, sand grains within sand dunes, or an electron within a semiconductor lattice.

The finer the mesh of points, the more probable the simulation and the more demanding the computation. Computational loads expand eightfold as the spacing between points is halved, and the computational time for a simulation is usually the square—sometimes the fourth power—of the number of points.
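The arithmetic behind those statements is easy to replay. The short sketch below applies the rules of thumb just quoted (eight times the points for each halving of the spacing in three dimensions, and running time growing as the square of the point count) to an arbitrary starting mesh.

```python
# Back-of-the-envelope cost of refining a three-dimensional mesh,
# using the rules of thumb above. The starting size is arbitrary.
points_per_side = 100
base_points = points_per_side**3

for halvings in range(4):
    side = points_per_side * 2**halvings
    points = side**3                             # grid points in three dimensions
    relative_work = (points / base_points) ** 2  # time ~ square of the point count
    print(f"spacing halved {halvings} time(s): {points:>12,} points, "
          f"work x{relative_work:,.0f}")
```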

Equally daunting are the enormous memories required. For example, for a reasonable simulation of a putative chemical plant handling some 40 to 50 chemicals—the computer's task being to observe their changes—the concentrations of these chemicals and their effects on immediate neighbors are needed at about 100 points. Almost one million words of memory are needed to store data and results. This needed capacity is larger than the fast-access memory of most conventional computers and about a quarter the capacity of current supercomputers.

If the experimenter, not unreasonably, wants to know how these chemicals diffuse as they react, his requirement will expand the calculation to two or three dimensions. Even if the calculation is cut to 20 chemicals and only 50 points, 2.5 million words of memory are needed just to store solutions, and 350 million more are needed to store the data.
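The article does not spell out the bookkeeping behind those totals, but the flavor of the estimate can be sketched with a crude word count of what such a simulation must keep in memory. Every parameter below is a hypothetical stand-in rather than a figure from the text.

```python
def storage_words(species, points, saved_time_levels, couplings_per_point):
    """Crude count of machine words a reacting-flow model must hold:
    the concentrations themselves, a few saved time levels of history,
    and the terms coupling each point to its neighbors."""
    state = species * points
    history = state * saved_time_levels
    couplings = couplings_per_point * points
    return state + history + couplings

# Hypothetical one-dimensional case; the total climbs quickly as more
# chemicals, mesh points, or saved solutions are added.
print(storage_words(species=50, points=100,
                    saved_time_levels=3, couplings_per_point=50 * 50))
```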

Computing techniques

There is a rich collection of numerical methods available for computing, with a useful typology being their division into two classes: deterministic and Monte Carlo. In deterministic calculations, if the input data and algorithms are always the same, then the results will always be the same. In Monte Carlo calculations, the results may not be the same, but may rather differ for each run, because the input data are supplemented by probabilities that an event to be calculated will occur—that an electron going through a semiconductor crystal will take a particular track, for example, or that ions in a plasma will hit a container wall.

Both methods have their subsets. Deterministic techniques, of necessity, often resort to adaptive grids, in which the computation bears down on what are likely to be the most important parts of a solution. The Monte Carlo techniques include the renormalization group, used for expanding a lattice of points without sacrificing accuracy. As Kenneth Wilson says, "The renormalization group is one of the fundamental approaches to tackling this problem of what to do when you cannot make your grid small enough to use the fundamental equation. How do you increase the grid spacing beyond the level of a straight numerical approach, yet preserve all of the reliability that working from a fundamental equation can give you?"

Challenges and limits

Although the renormalization group is a brilliant tactic for attacking otherwise insuperable problems, its application to the problem that most interests Wilson, that of quantum chromodynamics, is still in the future. It awaits the arrival of much faster computers with workable parallel architectures, the accessory algorithms, memories, programming languages, compilers to map programs onto the machine architectures, and graphics to show computational results instantly and visually. Quarks, the elementary blocks of protons, neutrons, and other subatomic particles, have the peculiar property of being inseparable, of being permanently bound to one another. The issue for Wilson and other theoreticians is to derive that quark confinement from the underlying theory, quantum chromodynamics. Says Wilson, "The computing power is totally inadequate."

Similar barriers afflict other fields: hydrodynamics, fusion, and even the somewhat hermaphroditic use of computers to simulate the properties of future computers, for example.

The insufficiencies are most evident in hydrodynamics, in the simulation of flowing fluids. Hydrodynamics not only is the first field to invoke computational science as crucial but also is the most catholic. It includes aircraft and wing design, both subsonic and supersonic; climate and weather prediction, both globally and locally; analysis of water waves and the design of ship hulls; the operation of piping networks, such as those in nuclear reactors or other power plants; geological phenomena, such as the flow of glaciers or of oceanic crust; biological flows, such as that of blood through the heart; and reactive flows, such as in combustion.

The Navier-Stokes equations, the same general equations that govern all these phenomena, have existed for over 100 years, but they remain unsolvable for all but the simplest of fluid flows. The reasons are in part that the phenomena they express are nonlinear and tend to descend into turbulence. The length scales can vary wildly for the same problem, from ocean currents flowing for thousands of kilometers to waves that can be measured in centimeters. The reasonableness of the calculation also depends on the boundary conditions, or walls, imposed on the fluid. These walls can be sharp, as in automobile engine cylinders, nonexistent, as in stratospheric airflows, or complex and protean, as in the walls of a beating heart.
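For reference, a standard incompressible form of the Navier-Stokes equations (the textbook statement, not any of the specialized formulations used in the applications above) is:

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla\cdot\mathbf{u} = 0,
```

where u is the fluid velocity, p the pressure, ρ the density, and ν the kinematic viscosity. The (u·∇)u term, the velocity carrying itself along, is the nonlinearity responsible for the turbulent behavior discussed next.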

Therefore, says David Caughey of Cornell University, "given the Navier-Stokes equations, how can one extract anything useful from them? The fundamental problem is that of describing turbulence. Basically, in flow past an airplane wing, unsteady fluctuations develop. There are little eddies in the flow, ranging in size from a tenth of a millimeter on up. These eddies are transferring energy back and forth." Computing that flow and the resultant turbulence by accounting for these energy exchanges at different spatial scales, Caughey says, is beyond any practical computation, even allowing for computing speeds a thousand times as great as existing speeds. However, he says, "it is possible to solve these sorts of problems in very simplified ranges, so there is hope that with the next generation of supercomputers we can model turbulence in quite complicated geometries." For now, in Wilson's phrase, "we have to depend on hoking up the equations."

Nevertheless, computational powers in hydrodynamics have grown enormously. Airflow over a wing, for example, can now be modeled in half an hour for less than $1,000 of computer time. In 1960 that same computation would have cost $10 million and would not have been completed until 1990. Modeling storms remains largely a two-dimensional act, constraining the simulation of tornadic clouds and their spawn. Similar constraints apply to the powerful and unpredictable downdrafts from clouds, the wind shears that have struck down airplanes with disastrous results.

Plasmas and chips

Much the same tale of progress and limits applies to plasma physics: coarse results, limited experiments. Plasmas, ionized fluids, are the most common matter of the universe. They are the stuff of stars and the working fuel of fusion reactors. Computer simulations of fusion plasmas probe a torrent of fundamental questions on what happens to particles within an actual reactor: How do neutrons streaming out of a plasma affect surrounding magnets? What mix of plasma energies and densities, confinement times and geometries works best? How can damage to the walls of a reactor by energetic particles be gauged without first building a reactor and having it cannibalize itself?

A fusion plasma will have about 10¹⁵ particles per cubic centimeter, with each particle carrying a charge affecting both near and far particles. Not even the most heroic computer could sort out the effects of individual plasma particles. Rather, the electrical and magnetic fields owing to the collective motions of these particles are calculated first, and then, recursively, the interlocking effects of the fields on particles are calculated. The state of the art remains limited, however. It is typified by a recent achievement in calculating the equations of state relating density, temperature, and other parameters of an idealized, single-component plasma: one chemical species immersed in a uniform sea of electrons, the composite being electrically neutral. That computation took a Cray-1 seven hours. At present there is no hope of calculating more complex plasmas; therefore, computation in plasma physics, as in the other domains of computational science, becomes that of artful compromises designed to make a problem tractable.
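A drastically simplified sketch of that fields-then-particles cycle is given below in Python: charge is gathered from the particles onto a one-dimensional grid, an electric field is computed from it, and the field is then used to push the particles, with the loop repeating. Units, sizes, and the neutralizing background are all arbitrary illustrative choices; production plasma codes are enormously more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_particles, dt = 64, 10_000, 0.05
length = 1.0
dx = length / n_cells

x = rng.uniform(0.0, length, n_particles)    # particle positions
v = rng.normal(0.0, 0.01, n_particles)       # particle velocities

for cycle in range(200):
    # 1. Gather the particles' collective charge onto the mesh,
    #    against a uniform neutralizing background.
    cell = (x / dx).astype(int) % n_cells
    rho = np.bincount(cell, minlength=n_cells) / n_particles - 1.0 / n_cells

    # 2. Compute the electric field from that charge density
    #    (one-dimensional Gauss's law, integrated along the grid).
    E = np.cumsum(rho) * dx
    E -= E.mean()

    # 3. Push the particles with the field sampled at their cells,
    #    then go around again: fields from particles, particles from fields.
    v -= E[cell] * dt
    x = (x + v * dt) % length

print("rms particle velocity after 200 cycles:", float(np.sqrt(np.mean(v**2))))
```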

Because the average paths of particles are typically sought, Monte Carlo calculations can be used. For example, a particle hitting a wall may reflect at a certain angle about 10 percent of the time. "You then send in a particle," says David Ruzic of the University of Illinois, "and a random-number generator rolls the dice at each point and tells whether the particle will go one way or another. You keep track of where those particles end up. With bigger computers, you can send in more particles and then you can build up your statistics."
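A toy version of the dice-rolling Ruzic describes might look like the following, with a made-up one-dimensional geometry and a 10 percent chance of reflection at either wall. The point is only the bookkeeping: roll at each wall encounter, record where each particle ends up, and watch the statistics tighten as more particles are sent in.

```python
import random

def trace_particle(rng, reflect_prob=0.10, max_steps=1_000):
    """Walk one particle between two walls at positions 0 and 10.
    Each wall hit rolls the dice: reflect back in, or be absorbed there."""
    position = 5
    for _ in range(max_steps):
        position += rng.choice((-1, 1))
        if position in (0, 10):                    # hit a wall
            if rng.random() < reflect_prob:        # reflected back into the volume
                position = 1 if position == 0 else 9
            else:                                  # absorbed: record the spot
                return "left wall" if position == 0 else "right wall"
    return "still in flight"

rng = random.Random(7)
for n_particles in (1_000, 100_000):
    fates = {}
    for _ in range(n_particles):
        fate = trace_particle(rng)
        fates[fate] = fates.get(fate, 0) + 1
    print(n_particles, "particles:", fates)
```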

Similar Monte Carlo techniques apply to computers simulating future computers or their underlying components, such as semiconductor chips. "That is," says Karl Hess, also of the University of Illinois, "we simulate a real semiconductor crystal, and then we let an electron move through that crystal and we let things happen to it. We let it be scattered by impurities, we let it interact with lattice vibrations, and so on. We then average what has happened to it, and we extract from that information the macroscopic parameters we need for the device. We need to simulate millions of interactions, each of which has hundreds of equations attached to it."

There are now many ideas for increasing the switching speeds of chips. One involves raising electron mobilities, the approach embodied in the High Electron Mobility Transistor (HEMT) emphasized by Japanese engineers. The difficulty is in determining what these devices will do, compared to existing devices, as they are scaled down in size and as edge effects—as opposed to one-dimensional planar effects—become more prominent.

Interactive graphics

Hess and his colleagues did model one HEMT device using a VAX superminicomputer. The VAX is still the computational workhorse of academic science, but the Hess group took 30 hours to get just one picture. There was no allowance for altering the setup to see what a change in the initial circuitry might do. "What we actually want to do," Hess says, "is to reduce this complicated model to a comprehensive and simple one, which can then be used in computer-aided design. However, it's very difficult for us to see from these few pictures what model, what approach, what simplifications we should use. What we need is a movie, to see directly how the electrons are flowing in and out." Making movies demands much faster computers than are now available to Hess.

Hess's difficulty plagues almost all of computational science: insufficient computational power to understand a computer simulation while it is under way. As Don Greenberg of Cornell University has pointed out, about 90 percent of the time and cost in a simulation is now invested in defining the problem, because the computation is too slow and the result often too opaque to allow reshaping the questions while a simulation is running. There is now no software, Greenberg says, that allows an experimenter to "interact with a simulation while it's occurring."

What is decidedly not acceptable, says his Cornell colleague Kenneth Wilson, "is to show a frame, and then sometime later show another frame. You cannot interact very effectively if each interaction requires seven days; if there's something you want to change, that's another seven days." Having effective computer graphics that can display visually and in color the results of three-, four-, or five-dimensional calculations, Greenberg says, will enable the experimenter to rethink the problem in real time and, more subtly, will alter the way this sort of science is done. Thus, for example, the chemist interested in the molecular pas de deux of an enzyme and its substrate could watch their dance as they react, rearrange, migrate, or absorb. He could change conditions in midstream—acidity, the strength of certain bonds, temperature, the presence of other molecules—and observe the effects.

Graphics are valued first of all because they convert streams of data into interpretable pictures. In computational science, however, their most crucial value—since computers obey algorithms and not the laws of physics—is that they confirm computational intuition; they link calculations with the real world. The fact that the jets modeled by Smarr and others looked like reasonable jets, for example, confirmed the soundness of the algorithms. Similarly, the fact that fractal geometry (fractals are patterns that seem the same at whatever level of detail they are looked at) could create startlingly true images of mountains, snowflakes, and planets, as well as dragons and other imaginary beings, confirmed the physical intuition.
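The fractal point can be made concrete in a few lines of code. The midpoint-displacement routine below, one common recipe for fractal landscapes and not necessarily the one behind the images the article has in mind, repeatedly splits a line segment and nudges each new midpoint by a random amount that shrinks at finer scales, producing a profile that looks mountain-like at every level of detail.

```python
import random

def midpoint_displacement(levels, roughness=0.5, seed=None):
    """Build a one-dimensional fractal 'mountain ridge' by repeatedly
    inserting displaced midpoints between existing points."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]              # the two ends of the ridge line
    spread = 1.0                      # size of the random nudges
    for _ in range(levels):
        refined = []
        for left, right in zip(heights, heights[1:]):
            midpoint = (left + right) / 2 + rng.uniform(-spread, spread)
            refined += [left, midpoint]
        refined.append(heights[-1])
        heights = refined
        spread *= roughness           # smaller nudges at finer scales
    return heights

ridge = midpoint_displacement(levels=8, seed=1)
print(len(ridge), "points; highest peak:", round(max(ridge), 3))
```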

The new architectures

Although today's most powerful supercomputers, such as the Cray XMP, can in principle generate sharp images in varying colors and intensities at 30 frames per second, they cannot also do the calculations needed to produce each one of those frames. "We haven't even gotten to first base yet in terms of computing power," Wilson says.

How will faster speeds be attained? What will the machines look like? What are the pertinent software issues? In addition, considering how scarce the present generation of supercomputers initially was for academic science, what will be the availability of the future, much faster, computers?

Some background: Maximum computation rates are now about 1,000 MIPS, and the hope is to attain about 20,000 MIPS in five years and 100,000 MIPS in about ten years. Reaching these speeds will depend on effective computer parallelism; that is, having the algorithms, the architectures, and the computers to execute hundreds, thousands, or hundreds of thousands of instructions simultaneously. Schemes, and even actual machines, for parallel computation are not new. The 1943 British machine Colossus reportedly used parallel computations, as did early U.S. machines, such as Solomon in 1962 and Illiac IV in 1973. These machines used a form of parallelism, the execution of the same instruction on different data.

Recently, genuinely parallel machines, able to execute different instructions simultaneously, have emerged. Notable among them is the Heterogeneous Element Processor (HEP), built by the Denelcor Corporation. James C. Browne of the University of Texas describes this machine as "probably the first commercially available general-purpose computer that can perform several operations concurrently." Neither HEP nor the Distributed Array Processor (DAP), a British counterpart from International Computers Limited, also with more than 4,000 processors, will as such attain the thousandfold increase in computational speeds. Nevertheless, such an increase is required by real-time graphics, the manifold problems in hydrodynamics, and the imposing calculations for quantum chromodynamics. The goal is massive parallelism, thousands to even millions of high-speed processors working and communicating simultaneously.

There is no paucity of ideas on the architectures of such machines. Some 70 designs can be found in U.S. universities. These often are so-called dance-hall designs, with processors on one side, memories on the other, and communication links between them.

For parallel architectures to be realized in working machines, not only will about $20 million be required for each machine but there will also have to be accompanying software. The need will be for algorithms fitted to parallel rather than serial computation, compilers that discover parallelism in serially written programs and then express it in the language of the machine, and operating systems that provide fluid communications among thousands of cooperating processors. There are other, more subtle difficulties, which are reducible to the issue of writing programs for nonexistent machines. Experimentation is impossible, and it will be enormously difficult to trace errors in programs orchestrating thousands of events simultaneously.

A recurring bromide is that it will be extremely difficult to write programs for parallel computation. There is little evidence to confirm that. In fact, there is some limited experience to suggest otherwise—students writing parallel programs for the DAP machine, for example. Creating a new computing environment, says Kenneth Wilson, means building an entire culture, and "cultures develop myths," such as that parallel processing cannot be done. "The biggest problem in knowing how much you can do on parallel systems," he adds, "is that we're building only the first generation of massively parallel systems."

Progress in science tends to be saltatory, and that may also hold true for massive parallel machines. Once the early models have been built and operated and the software has been refined, the machines may proliferate very quickly. The reason is both logical and generic: Parallel machines are built from large numbers of identical processors, and these can be mass produced, in contrast to the cottage-industry mode applied to current supercomputers.

Despite the powerful promise of future computers, it is important to remember that the current powerhouses are not quite dead yet. The Cray XMP, which basically quadruples the processing power of a Cray-1, is probably the first of a generation of vector machines with parallel processors. Such continual redoublings of the Cray-1 processors could generate machines with 16 to 32 processors, having speeds of 10,000 to 20,000 MIPS, or 20 to 40 times as great as the speed of the Cray-1. These are maximum speeds, and these machines will rarely attain them. However, the fact that these machines will offer faster speeds within basically the same software environment (programs, compilers, languages) will have a catalytic effect on computational science.

Other solutions

The scientific computers of the future also will continue to include special-purpose machines, the specialists of advanced computing that are designed to do extremely well such single activities as speech and signal processing. The electronics of such machines can be tailored to particular algorithms, such as the fast Fourier transform, whose algorithms are already embedded in computational work in image analysis and in such fields as high-energy physics.

In addition, in about four years, the number crunching offered by current supercomputers may come at a fraction of the more than $10 million they cost now, as scientific processors with floating-point abilities and capable of 20 to 40 MIPS reach the market. Further into the future are reconfigurable architectures; these will be systems in which the architecture is redrawn ad libitum to fit the arriving algorithms.

Other elements will limn future computing. For example, the storage devices, being mechanical, have been lapped by the increasing computation rates, although damage has been limited by much larger memories. Of present interest and promise is an optical disk with much faster access time (the time needed to start the inflow of randomly selected data) and capacious storage that is being developed by the U.S. Air Force and the National Aeronautics and Space Administration.

An obligatory software concern is Fortran, the old shoe of scientific computing; although it is badly worn, the replacement may not fit. The arguments about Fortran, as ubiquitous at computer science meetings as Styrofoam cups, tend toward the Grecian: Each side builds a pile of points, and the one with the highest pile wins the day. Obviously, Fortran has its drawbacks, notably that it is not a language for concurrency. However, some scientists, such as David Kuck of the University of Illinois, argue for powerful compilers that can discover concurrency in Fortran code and restructure it accordingly. The current trend is to create metaprograms, such as Eft, Ratfor, and Gibbs, that are less cryptic to write and read than Fortran but that can automatically be translated into Fortran. "You don't replace Fortran," says Wilson. "You build a system that people can use when they're building the new code, but that will integrate well with existing, debugged Fortran." Perhaps the most reasonable summation is the comment that Fortran in ten years will be completely different from what it is now, but it will still be called Fortran.

The fate of Fortran is certainly important, but it really pales in contrast to what else lies ahead. In a fraction of a decade, a new branch of science has opened, and there is an emergent generation of people with new perspectives. To them, complexity is less a concern than it is a challenge to strive for good algorithms, some imaginative programming, and a few hours in front of a monitor to watch nature decoded in full color. •


