
traffic is low, the WWMCCS has performed satisfactorily. When the message traffic is high, however, the performance of the system has suffered. In a 1977 exercise, when it was connected to the command-and-control systems of several regional commands, the WWMCCS had an average success rate for message transmission of only 38 percent.

As a way of overcoming the limitations of small-scale testing, developers often rely on simulators to create likely attack scenarios. Although simulation provides information beyond that achieved by small-scale testing alone, the technique, as applied to the testing of a ballistic-missile defense, is restricted in several ways. First, simulators may not be able to reproduce the "signatures," or parameters, of the physical phenomena associated with various events, such as simultaneous nuclear explosions, rapidly and accurately enough to test the responsiveness of the defense system. Analysts must therefore rely on assumptions about the signatures of each event they choose to simulate; that is, events are "preprocessed" before they are fed into the defense system. As a result of using preprocessed data, there can be no surprises during the simulation that might test how the system reacts to circumstances not anticipated by developers, and so the realism of the test is diminished.

Although advances in computer technology have greatly increased the computational speed of computers, and thereby facilitated the design of simulators that can model more realistic environments, increased speed will not help to simulate data on physical phenomena for which theoretical and empirical understanding is inadequate. For instance, given present knowledge, scientists would not be able to model accurately the simultaneous explosion of several closely spaced nuclear weapons under certain conditions. More important, simulators cannot satisfactorily duplicate all plausible attacks, since a determined and clever opponent ultimately chooses the parameters of an actual attack. Consequently any confidence in a BMD system that is based on simulated tests rests on the assumption that those responsible for the simulation can predict and reproduce in electronic form the range of tactics to which an enemy might resort.

The story of the Aegis air-defense system illustrates the limitations of simulation testing. The battle-management system for Aegis is designed to track hundreds of airborne objects in a 300-kilometer radius and then allocate enough weapons to destroy about 20 targets within the range of its defensive missiles [see "Smart Weapons in Naval Warfare," by Paul F. Walker; SCIENTIFIC AMERICAN, May, 1983]. Aegis has been installed on the U.S.S. Ticonderoga, a Navy cruiser. After the Ticonderoga was commissioned the weapon system underwent its first operational test. In this test it failed to shoot down six out of 16 targets because of faulty software; earlier small-scale and simulation tests had not uncovered certain system errors. In addition, because of test-range limitations, at no time were more than three targets presented to the system simultaneously. For a sizable attack approaching Aegis' design limits the results would most likely have been worse.

Errors are not unexpected in operational tests; indeed, malfunctions are inevitable during the initial shakedown exercise of a new weapon system. By the time of the next Aegis tests these errors had been corrected, and none recurred. Future exercises will probably uncover additional errors, which will in turn be fixed. Through this process the performance of the Aegis system will gradually improve.

Unlike the performance of Aegis, the performance of a comprehensive ballistic-missile defense against a large-scale attack will not improve with experience. Because large-scale empirical testing is impossible, the first such test for a comprehensive BMD system would be an actual large-scale attack on the U.S. A more complex battle-management system that must keep track of more targets and operate under tighter timing constraints is unlikely to have a better performance record than Aegis has. In addition a severe software failure in a full-blown battle situation would offer little or no opportunity for system designers to learn from the experience.

Nevertheless, developers of a ballistic-missile defense will attempt to improve the system's performance by testing for errors and eliminating them, by expanding software capabilities in response to newly perceived needs and by adding new hardware and concomitant software to the project. These efforts, along with bringing new workers into the project, come under what specialists call software maintenance. It can account for approximately 70 percent of the total life-cycle costs of a software-development project.

Two major maintenance issues are particularly salient in relation to a system for ballistic-missile defense. First and most important, significant errors discovered after the software has been put into operational use must be eliminated. These errors can range from an incorrect coding symbol to a fundamental design flaw. On June 19, 1985, the Strategic Defense Initiative Organization did a simple experiment. The crew of the space shuttle was to position the shuttle so that a mirror mounted on its side could reflect a laser beamed from the top of a mountain 10,023 feet above sea level. The experiment failed because the computer program controlling the shuttle's movements interpreted the information it received on the laser's location as indicating the elevation in nautical miles instead of feet. As a result the program positioned the shuttle to receive a beam from a nonexistent mountain 10,023 nautical miles above sea level. This small procedural error was of little significance to the test itself, however; a second attempt a few days later was successful. Nevertheless, the event shows that even simple errors can lead to mission failure.
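The shuttle failure is a textbook units mismatch between the producer and the consumer of a data item. The following sketch is purely illustrative; the function and constant names are invented and bear no relation to the actual shuttle software:

```python
# Hypothetical sketch of a units mismatch like the one in the shuttle
# mirror experiment: the ground station reports elevation in feet, but
# the receiving program assumes the value is in nautical miles.

FEET_PER_NAUTICAL_MILE = 6076.12

def target_elevation_feet(raw_value, units_assumed="nautical_miles"):
    """Interpret a raw elevation reading under an assumed unit."""
    if units_assumed == "nautical_miles":
        return raw_value * FEET_PER_NAUTICAL_MILE
    return raw_value  # value is already in feet

reported = 10_023                                        # sent as feet
assumed = target_elevation_feet(reported)                # read as nautical miles
correct = target_elevation_feet(reported, units_assumed="feet")

# The program aims at a "mountain" roughly 6,000 times too high.
print(assumed / correct)
```

The bug lives entirely in the mismatched assumption, not in any single line of arithmetic, which is why small-scale testing of either side in isolation would not reveal it.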

Although the bug in the space-shuttle experiment was easily rectified, removing errors from a ballistic-missile defense will require considerably more effort. In particular the battle-management software must receive and process information in such a way as to keep pace with circumstances external to the computer during an attack. That is, the software must run in "real time" [see illustration on next page]. One problem with debugging a real-time software package is ascertaining why it may operate with one specific configuration of equipment but not with another that differs only slightly. For example, a certain missile can currently be launched at supersonic speeds by the F-4G Wild Weasel aircraft but not by the F/A-18 Hornet: a software system that works in the F-4G is not entirely compatible with the avionics of the F/A-18.

A second problem of particular significance to debugging real-time software is that analysts often find it difficult to make errors recur, something that is essential if bugs are to be located. Real-time output is often determined by factors such as the arrival times of sensor inputs. These determinants of a program's behavior cannot always be reproduced with enough accuracy for the error to be found and eliminated. Indeed, finding an error requires an understanding of the precise circumstances leading to it. Because that is not always possible with large real-time systems, software errors may be identified in practice only in a probabilistic sense. Large computer programs, in fact, usually evolve through a series of incompletely understood changes. After a while the programmer can no longer predict outcomes with confidence; instead he can only hope that the desired outcome will be attained.
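The difficulty of reproducing a timing-dependent error can be shown with a toy example, entirely hypothetical and not drawn from any defense code: a program whose answer depends on the order in which two sensor reports happen to arrive.

```python
# Toy illustration of a timing-dependent bug: the result depends on the
# arrival order of sensor reports, so a failure observed in the field may
# not recur when the same inputs are replayed in a different order.

def fuse_tracks(reports):
    """Keep the first report seen for each track ID (a naive policy)."""
    tracks = {}
    for track_id, position in reports:
        if track_id not in tracks:      # later reports are silently discarded
            tracks[track_id] = position
    return tracks

# The same two reports for track 7, in two possible arrival orders:
run_a = fuse_tracks([(7, "stale"), (7, "fresh")])
run_b = fuse_tracks([(7, "fresh"), (7, "stale")])

print(run_a)  # {7: 'stale'} -- the buggy outcome
print(run_b)  # {7: 'fresh'} -- identical code, different arrival order
```

Both runs execute the same instructions on the same data; only the ordering differs. An analyst replaying logged inputs without exact timestamps might see only the benign ordering and conclude, wrongly, that the bug is gone.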

There is a final complication to debugging real-time software: even if an error can be located, attempts to eliminate it may not be successful. The probability of introducing an error (or more than one) while eliminating a known error ranges from 15 to 50 percent. Moreover, the majority of software-design errors that appear after software is put into service do so only following extensive operational use. Experience with large control programs (ones consisting of between 100,000 and two million lines of code) suggests that the chance of introducing a serious error in the course of correcting original errors is so large that only a small fraction of the original errors should be remedied.

In the context of a comprehensive ballistic-missile defense, one should therefore ask about the consequences of an error that would manifest itself infrequently and unpredictably. The details of the first operational launch attempt of the space shuttle provide an example. The shuttle, whose real-time operating software is about 500,000 lines of code, failed to take off because of a synchronization problem among its flight-control computers. The software error responsible for the failure, which was itself introduced when another error was fixed two years earlier, would have revealed itself, on the average, once in 67 times.

Beyond the elimination of errors, a second aspect of software maintenance stands as a hurdle to developing a defensive system: the management of product development. The Defensive Technologies Study Team (DTST), chartered by the Department of Defense to examine the feasibility of a comprehensive ballistic-missile defense, estimates that the entire system will require a minimum of 10 million lines of programming code. In comparison, the entire software system for Aegis is an order of magnitude smaller. If the DTST estimate is low by a factor of only two, even a very optimistic software-development project will entail more than 30,000 man-years of work, or at least 3,000 programmers and analysts working for about 10 years. For this reason the project can expect a staff turnover that will reduce the institutional memory of the project. During staff transitions it is conceivable that an essential detail, such as updating a particular subprogram, could be overlooked. A tragic example of management error occurred in 1979, when an Air New Zealand airliner crashed into an Antarctic mountain; its crew had not been told that the input data to its navigational computer, which described its flight plan, had been changed.

It is also possible that Soviet agents might try to sabotage the project. They could, for instance, deliberately introduce hard-to-find flaws into the system that would become evident during a real attack but not during testing. The possibility that a program might contain some hidden "time bomb" that would be triggered at a critical moment cannot be ignored. The obvious preventive step would be to impose security clearances on a "need to know" basis. Such restrictions, however, would inhibit communication among the staff working on different parts of the system and thereby increase the likelihood of serious, unintentional software errors arising.

"REAL TIME" SOFTWARE must be executed at a rate defined by the timing of events external to the computer hardware. A computer controlling the radar illumination of an incoming reentry vehicle so that an interceptor missile can home in on the reflected signal would have to maintain the radar beam on the reentry vehicle until the vehicle was destroyed. If the radar were turned away too soon, the interceptor might "lose" its target, particularly if the reentry vehicle were designed to evade approaching interceptors. Yet the computer must re-aim the radar beam so that a second reentry vehicle can also be intercepted. The computer program has to shift the beam to the second vehicle once the first is destroyed, and it must do so quickly enough to enable a second interceptor to reach its target.
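The radar-handoff logic described in the "REAL TIME" caption can be sketched as a simple control loop. This is a schematic sketch only; the identifiers and the event stream are invented and do not describe the actual Aegis or BMD software:

```python
# Schematic sketch of radar-illumination handoff: keep the beam on the
# current reentry vehicle until it is confirmed destroyed, then shift
# immediately to the next one. All names and events are hypothetical.

def illuminate(targets, events):
    """targets: queue of reentry-vehicle IDs; events: (rv_id, status) reports."""
    beam_on = targets.pop(0) if targets else None
    log = []
    for rv_id, status in events:
        log.append(f"beam on {beam_on}")
        if status == "destroyed" and rv_id == beam_on:
            # Shift the beam only after the current target is confirmed
            # killed; shifting earlier would let the interceptor "lose" it.
            beam_on = targets.pop(0) if targets else None
    log.append(f"beam on {beam_on}")
    return log

print(illuminate(["RV-1", "RV-2"],
                 [("RV-1", "tracking"), ("RV-1", "destroyed")]))
```

The hard part in practice is not the handoff rule itself but the deadline: the shift must complete within the flight time of the second interceptor, a constraint no amount of logical correctness in the loop can guarantee by itself.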

Tightening security will not prevent the Soviet Union from thwarting the system by several other means. There is no reason to believe that Soviet officials faced with a U.S. BMD system would not try to disguise the observable characteristics of their ballistic missiles, warheads and decoys during tests or in actual use. Without reliable data on Soviet missiles, U.S. developers cannot be sure the defensive system will work. Furthermore, the Russians may develop new tactics or weapons that would force U.S. analysts to reprogram the battle-management system to meet new threats. Accumulated changes over time would probably result in many unpredictable interactions and could ultimately necessitate a total redesign of the system.


It has been suggested that the development of software for exceedingly complex systems could be facilitated by reliance on "expert systems" and automatic programming. For example, a Defense Advanced Research Projects Agency (DARPA) report stated that expert systems may be applicable to a ballistic-missile defense. Expert systems are detailed descriptions, expressed as computer rules, of the thought processes human experts use to reason, plan or make decisions in their specialties. Typically, expert systems use informal reasoning procedures or rules of thumb. For example, an expert system may take the following as a given: If a Soviet ICBM is launched, it is a threat to the U.S. It may then use the converse of the statement as a legitimate rule of inference: If there is a threat to the U.S., it is a Soviet ICBM launch. The validity of this inference tool cannot be proved or disproved using the standard tools of formal logic; the statement is not true all the time but is sensible most of the time. The reliance on informal reasoning procedures in programming expert systems leaves open the possibility that deeply embedded conceptual contradictions in the system could exist that might cause it to fail.
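The invalid converse inference described above can be made concrete. The sketch below is deliberately simplified; real expert-system shells encode rules quite differently:

```python
# A tiny "expert system" that treats a rule and its converse as
# interchangeable -- the informal inference the text describes.

# Given rule: if a Soviet ICBM is launched, it is a threat to the U.S.
rule = ("soviet_icbm_launch", "threat_to_us")

def forward(fact, rule):
    """Valid use: the antecedent is observed, so conclude the consequent."""
    antecedent, consequent = rule
    return consequent if fact == antecedent else None

def converse(fact, rule):
    """Invalid use: the consequent is observed, so 'conclude' the antecedent."""
    antecedent, consequent = rule
    return antecedent if fact == consequent else None

print(forward("soviet_icbm_launch", rule))   # sound: a launch is a threat
# Some other threat is reported -- say, a bomber. The converse rule
# misclassifies it as an ICBM launch:
print(converse("threat_to_us", rule))        # unsound, but often sensible
```

The converse rule gives the right answer whenever ICBM launches happen to be the only threats in play, which is exactly why rules of thumb like this survive testing yet can harbor the deeply embedded contradictions the text warns about.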

To date expert-system research has focused on well-defined areas such as biochemistry and internal medicine. These are areas in which human expertise is sufficiently developed to allow expert systems some success. Even so, the technology is still limited. Those who recommend applying expert systems to battle management for ballistic-missile defense ignore the fact that human expertise is based on human experience. No one has expert knowledge of massive nuclear missile attacks based on experience.

Equally improbable is the idea that automatic programming, the use of computer programs to write other programs, will provide a solution. Air Force Major Simon Worden, special assistant to the director of the Strategic Defense Initiative Organization, has stated that "we're going to be developing new artificial intelligence (AI) systems to write the software. Of course, you have to debug any program. That would have to be AI too." Statements such as this one are misleading. The primary function of automatic programming is to alleviate the technical difficulties of implementing design specifications and modifying existing code. Roughly half of all software errors, however, are the result of human choices and decisions made during the planning and design phases. These human choices could not be delegated to automatic programming.

Two scenarios based on the above discussion show how the software for a comprehensive ballistic-missile defense might fail. Suppose a battle station that has successfully intercepted two missiles in operational testing using an electromagnetic railgun is faced with a large-scale Soviet missile attack. At first projectiles launched from the battle station destroy their targets. Unfortunately at the design stage developers neglected to take into account the fact that less massive objects have more recoil. As more projectiles are fired the recoil becomes stronger and skews the aiming algorithms for the railgun. As a result later projectiles are too slow and do not reach their targets in time to destroy them. Or suppose that, during the early phase of a nuclear war, U.S. and Soviet leaders agree to a cease-fire. The U.S. leaders realize that one submarine captain will not receive the cease-fire message in time but are confident that U.S. missile-defense satellites will be able to shoot down any missiles launched by mistake. Only after the submarine's missiles are launched does it become tragically clear that no developer had specified that the satellites might have to shoot down U.S. missiles. The missiles explode over the Soviet Union with catastrophic consequences.
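The failure mode in the first scenario, recoil that grows as the battle station loses mass, can be caricatured in a toy calculation. All the numbers below are invented for illustration; conservation of momentum is the only physics used:

```python
# Toy model of the railgun scenario: each shot carries away mass, so by
# conservation of momentum the recoil speed imparted to the station grows
# with every projectile fired. All numbers are hypothetical.

def recoil_per_shot(station_mass, projectile_mass=10.0, muzzle_speed=4000.0):
    """Recoil speed change (m/s) from one shot, by momentum conservation."""
    return projectile_mass * muzzle_speed / station_mass

mass = 100_000.0            # station mass in kg (invented figure)
recoils = []
for shot in range(5):
    recoils.append(recoil_per_shot(mass))
    mass -= 10.0            # the station is lighter after every shot

# Each successive shot perturbs the station a little more than the last,
# so aiming corrections tuned to the early shots drift out of calibration.
print(recoils[0], recoils[-1])
```

A two-shot operational test, like the one in the scenario, samples only the flat start of this curve; the drift that matters emerges only over the hundreds of shots a large attack would demand.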

Because they are anticipated, neither of these errors is likely to occur. The problem is not whether a system contains a particular error but rather how likely it is that the system will contain any one of millions of potential errors. The primary matter of concern is "unknown unknowns," the potential errors that remain unanticipated.

SM-2 MISSILE is part of the U.S. Navy Aegis air-defense system. The Aegis battle-management system was designed to track hundreds of airborne objects and to carry out up to 20 simultaneous intercepts. During its first operational test it failed to shoot down six of 16 targets because of faulty software. The system's malfunctions have since been repaired. A comprehensive ballistic-missile defense, which is a much more complicated and ambitious weapon system, must perform far better than the Aegis system did, and it must do so on its first try; a severe software failure during the system's first full-scale test would offer virtually no opportunity to rectify the problem. Because large-scale testing is impossible, the first empirical test of the BMD system would be an actual large-scale ballistic-missile attack.

All the problems cited in this article are related to software planned, designed, implemented and debugged by experienced software engineers; most, if not all, were corrected. The general technique for correcting these errors, namely discovering the bugs in operational use and then correcting them, is unlikely to be fruitful in the development of a BMD system. Yet a comprehensive ballistic-missile defense requires not only that software operate properly the first time, in an unpredictable environment and without large-scale empirical testing, but also that planners are positive it will do so. The Reagan Administration has stated that empirical testing is essential for maintaining the reliability of nuclear weapons and has opposed a comprehensive test ban on these very grounds. One wonders how it is possible to have confidence in a comprehensive ballistic-missile defense, which will be at least as complicated as nuclear weapons are now, without requiring it to meet comparable test standards.

Proponents of the Strategic Defense Initiative argue that even in the absence of large-scale empirical testing a ballistic-missile defense is still desirable, because the possibility that the system might work reasonably well would deter Soviet leaders from contemplating an attack. Others argue that the goals of the Strategic Defense Initiative are really more modest than the goal of a comprehensive defense against ballistic missiles, suggesting that a BMD system might be most appropriate for protecting military assets such as missile silos. Of course, for goals more circumscribed than those of comprehensive ballistic-missile defense, perfection is not a requirement and testing is much easier. Although a program that has more limited goals does have a much higher probability of success, these limited goals raise a disturbing issue. In particular they are inconsistent with President Reagan's stated vision of eliminating the threat of nuclear ballistic missiles. Consequently it is proper to judge the feasibility of the Strategic Defense Initiative on the basis of the president's challenge to the scientific community. By that standard no software-engineering technology can be anticipated that will support the goal of a comprehensive ballistic-missile defense.

BIBLIOGRAPHY

Readers interested in further explanation of the subjects covered by the articles in this issue may find the following lists of publications helpful.

COMPUTER RECREATIONS

MINIMUM-REDUNDANCY LINEAR ARRAYS. Alan T. Moffet in IEEE Transactions on Antennas and Propagation, Vol. AP-16, No. 2, pages 172-175; March, 1968.

APPLICATIONS OF NUMBERED UNDIRECTED GRAPHS. Gary S. Bloom and Solomon W. Golomb in Proceedings of the IEEE, Vol. 65, No. 4, pages 562-570; April, 1977.

GOLOMB'S GRACEFUL CURVE. Martin Gardner in Wheels, Life and Other Mathematical Amusements. W. H. Freeman and Company, 1983.

VARIATIONS IN THE ROTATION OF THE EARTH. W. E. Carter, D. S. Robertson, J. E. Pettey, B. D. Tapley, B. E. Schutz, R. J. Eanes and Miao Lufeng in Science, Vol. 224, No. 4652, pages 957-961; June 1, 1984.

THE DEVELOPMENT
OF SOFTWARE

FOR BALLISTIC-MISSILE
DEFENSE

SAFEGUARD DATA-PROCESSING SYSTEM. The Bell System Technical Journal, Vol. 54, Special Supplement; 1975.

SOFTWARE ENGINEERING ECONOMICS. Barry W. Boehm. Prentice-Hall, Inc., 1981.

CRICKET AUDITORY COMMUNICATION

BY SURGICALLY AND DEVELOPMENTALLY ONE-EARED FEMALES, AND THE CURIOUS ROLE OF THE ANTERIOR TYMPANUM. Franz Huber, H.-U. Kleindienst, Theo Weber and John Thorson in Journal of Comparative Physiology, Vol. 155, pages 725-738; 1984.

TEMPORAL SELECTIVITY OF IDENTIFIED AUDITORY NEURONS IN THE CRICKET BRAIN. Klaus Schildberger in Journal of Comparative Physiology, Vol. 155, pages 171-185; 1984.

THE IMMUNE SYSTEM IN AIDS

FREQUENT DETECTION AND ISOLATION OF CYTOPATHIC RETROVIRUSES (HTLV-III) FROM PATIENTS WITH AIDS AND AT RISK FOR AIDS. Robert C. Gallo, Syed Z. Salahuddin, Mikulas Popovic, Gene M. Shearer, Mark Kaplan, Barton F. Haynes, Thomas J. Palker, Robert Redfield, James Oleske, Bijan Safai, Gilbert White, Paul Foster and Phillip D. Markham in Science, Vol. 224, No. 4648, pages 500-502; May 4, 1984.

IMMUNOREGULATORY LYMPHOKINES OF T HYBRIDOMAS FROM AIDS PATIENTS: CONSTITUTIVE AND INDUCIBLE SUPPRESSOR FACTORS. Jeffrey Laurence and Lloyd Mayer in Science, Vol. 225, No. 4657, pages 66-69; July 6, 1984.

LYMPHADENOPATHY-ASSOCIATED VIRAL ANTIBODY IN AIDS: IMMUNE CORRELATIONS AND DEFINITION OF A CARRIER STATE. Jeffrey Laurence, Françoise Brun-Vezinet, Steven E.

B87, Supplement 1, pages A84-A96; November 15, 1982.

THE ENORMOUS THEOREM

ON THE STRUCTURE OF GROUPS OF FINITE ORDER. Richard Brauer in Proceedings of the International Congress of Mathematicians, Vol. 1, pages 209-217; 1954.

FINITE GROUPS. Daniel Gorenstein. Chelsea Publishing Co., 1980.

FINITE SIMPLE GROUPS: AN INTRODUCTION TO THEIR CLASSIFICATION. Daniel Gorenstein. Plenum Press, 1982.

THE FRIENDLY GIANT. Robert L. Griess, Jr., in Inventiones Mathematicae, Vol. 69, No. 1, pages 1-102; 1982.

CHINA'S FOOD

AGRICULTURE IN CHINA'S MODERN ECONOMIC DEVELOPMENT. Nicholas R. Lardy. Cambridge University Press, 1983.

THE BAD EARTH: ENVIRONMENTAL DEGRADATION IN CHINA. Vaclav Smil. M. E. Sharpe, Inc., 1984.

RURAL DEVELOPMENT IN CHINA. Dwight Perkins and Shahid Yusuf. A World Bank Publication, The Johns Hopkins University Press, 1984.

THE CONSTRUCTION PLANS FOR THE TEMPLE OF APOLLO AT DIDYMA

DIDYMA. Theodor Wiegand. Verlag Gebr. Mann, Berlin, 1941-1958.

VORARBEITEN ZU EINER TOPOGRAPHIE VON DIDYMA. Klaus Tuchelt. Deutsches Archäologisches Institut, Istanbuler Mitteilungen, Beiheft 9, Verlag Ernst Wasmuth, Tübingen, 1973.

GREEK ARCHITECTS AT WORK: PROBLEMS OF STRUCTURE AND DESIGN. J. J. Coulton. Cornell University Press, 1977.

THE AMATEUR SCIENTIST

THE NUMBER OF IMAGES OF AN OBJECT BETWEEN TWO PLANE MIRRORS. An-Ti Chai in American Journal of Physics, Vol. 39, No. 11, pages 1390-1391; November, 1971.

MULTIPLE IMAGES IN PLANE MIRRORS. Thomas B. Greenslade, Jr., in The Physics Teacher, Vol. 20, pages 29-33; January, 1982.

REFLECTIONS IN A POLISHED TUBE. Laurence A. Marschall and Emma Beth Marschall in The Physics Teacher, Vol. 21, page 105; February, 1983.

THROUGH THE KALEIDOSCOPE. Cozy Baker. Beechcliff Books, 100 Severn Avenue, Suite 605, Annapolis, Md. 21403; 1985.

THE AUTHORS

HERBERT LIN ("The Development of Software for Ballistic-Missile Defense") is a postdoctoral research fellow at the Center for International Studies of the Massachusetts Institute of Technology. He is a graduate of M.I.T., which awarded him an S.B. in 1973 and a Ph.D. in 1979. After teaching physics at M.I.T. and the University of Washington he moved to Cornell University, where he was an instructor in physics and a visiting fellow in the Peace Studies Program. He returned to M.I.T. in 1984. Lin's main research interest is the relation of technology to national security policy; in his leisure he indulges a fondness for international folk dancing.

VLADIMIR V. SHKUNOV and BORIS YA. ZEL'DOVICH ("Optical Phase Conjugation") work at the Institute for Problems in Mechanics (IPM) of the Soviet Academy of Sciences. Shkunov was educated at the Moscow Physical-Technical Institute and earned a Candidate of Science degree in 1979. He then joined the staff of the IPM, where he investigates problems of nonlinear and linear optics. Zel'dovich was graduated from Moscow State University in 1966 and was granted a Candidate of Science degree by the Institute of Theoretical and Experimental Physics in Moscow in 1969. He later worked at the P. N. Lebedev Physical Institute in Moscow, where he got a doctor of science degree in 1980. He took his present position in 1981. Zel'dovich and a group of colleagues were awarded the State Prize of the U.S.S.R. for their research in optical phase conjugation.

FRANZ HUBER and JOHN THORSON ("Cricket Auditory Communication") are respectively department director and consultant at the Max Planck Institute for Behavioral Physiology in Seewiesen, West Germany. Huber is a graduate of the University of Munich, which awarded him a doctorate in 1953 for a dissertation on the nervous system of crickets. From 1954 to 1960 he was assistant professor of animal physiology at the University of Tübingen. In 1963 he was appointed professor of zoology and animal physiology at the University of Cologne. Before moving to Seewiesen he served as director of research at the International Institute for Insect Physiology and Ecology in Nairobi. Huber attributes his lifelong interest in animal behavior to a childhood spent on a farm. Thorson holds a B.S. (1955) and an M.S. (1958) in physics from the Rensselaer Polytechnic Institute. He took a position at the General Electric Company, where one of his projects involved biophysics. The experience led him to pursue graduate studies in biology at the University of California at Los Angeles; he got a Ph.D. in zoology in 1965. From 1967 to 1969 he was on the faculty of the University of California at San Diego as assistant professor and research scientist. Since 1969 he has held visiting appointments and consulting contracts at the universities of Oxford and Frankfurt and at the Max Planck institutes. He has collaborated in research on insect and human vision, muscle contraction, sensory transduction and insect behavior. When he is not consulting on the Continent, Thorson writes, "my neuroscientist wife and I are based in our Oxfordshire cottage, where we write, solve nonlinear differential equations and mend old clocks."

JEFFREY LAURENCE ("The Immune System in AIDS") is assistant professor of medicine at the Cornell University Medical Center and assistant attending physician at New York Hospital. He earned a B.A. at Columbia University and an M.D. at the University of Chicago Pritzker School of Medicine. While he was in medical school he was elected a Rhodes scholar and a Henry Luce Foundation scholar. He accepted the latter honor and spent a year investigating immunologic defenses against tumor growth at the Institute for Cancer Research of Osaka University. After returning to the U.S. he completed his medical training at New York Hospital and went on to do research at Rockefeller University. He joined the Cornell faculty in 1982. Laurence has written Many Happy Returns, a "medical-detective murder mystery" play produced in 1982.

PETER H. SCHULTZ ("Polar Wandering on Mars") is associate professor of geological sciences at Brown University. He got his B.A. at Carleton College in 1966 and his Ph.D. from the University of Texas at Austin in 1972. From 1973 to 1975 he was a research associate at the Ames Research Center of the National Aeronautics and Space Administration. After a year as a research associate at the University of California at Santa Clara he joined the staff of the Lunar and Planetary Institute in Houston, where in 1981 he was made a senior staff scientist and director of the Planetary Image Center. In

1984 Schultz accepted his position at Brown; he also serves as science coordinator for the NASA-Ames Vertical Gun Range, a national facility where impact cratering is modeled, and as director of the Northeast Planetary Data Center.

DANIEL GORENSTEIN ("The Enormous Theorem") is Jacqueline B. Lewis Professor of Mathematics at Rutgers University. He was educated at Harvard University, where he received an A.B. in 1943 and a Ph.D. in 1950. He began his career at Clark University as assistant professor and became full professor there in 1959. In 1964 he moved to Northeastern University. After spending a year at the Institute for Advanced Study he joined the faculty at Rutgers in 1969. In 1972 he was named a Guggenheim fellow and a Fulbright scholar, and in 1978 he served as Sherman Fairchild Distinguished Professor at the California Institute of Technology. He has been studying finite, simple groups since 1960, and he played a key role in the development of the classification proof of all finite, simple groups. Gorenstein is now formulating a more concise, "second generation" proof.

VACLAV SMIL ("China's Food") is professor of geography at the University of Manitoba in Winnipeg. After getting a degree at Carolinum University in Prague he worked in a regional planning office as a consultant in energy and environmental affairs. Following the Soviet invasion of Czechoslovakia in 1968, he came to the U.S. He earned his doctorate from Pennsylvania State University in 1972 and then joined the faculty of the University of Manitoba. Since his student days in Prague, Smil has pursued his interest in the interrelations of food, energy and changes in the environment.

LOTHAR HASELBERGER ("The Construction Plans for the Temple of Apollo at Didyma") is an archaeologist who specializes in Hellenic architecture. He studied architecture, city planning and the history of architecture at the Technical University of Munich and, as a Fulbright scholar, at Harvard University. Under the sponsorship of the Technical University of Munich he spent two years studying an unusual class of ancient towers that are found on the Greek islands; he was recently given his doctorate for that work. Since 1980, with the support of the German Archaeological Institute and the German Research Society, Haselberger has been recording and cataloguing the construction plans he discovered at Didyma.
