The Report of the National Commission on Space
- The Evolution of the Universe
- Some Potential Space Science Headlines
- Physics, Chemistry, and Biology in Space
- Was Einstein Right?
- Solar and Space Physics
- Returning Samples from Solar System Bodies
- Exploration of the Outer Planets
- Future Great Space Observatories
- Life: Earth and the Universe
- Earth’s Gravity Well
- Remote Sensing and the Private Sector
- Self-Replicating Factories in Space
- Electromagnetic Accelerators
- Aerospace Plane Technology
- Advanced Rocket Vehicle Technology
- Protecting the Space Environment from Debris
- Space Station
- Tethers in Space
- Gravitation and Biology
The Universe is the true home of humankind. Our Sun is only one star in the billions that comprise the Milky Way Galaxy, which in turn is only one of the billions of galaxies astronomers have already identified. Our beautiful planet is one of nine in our Solar System. Understanding our Universe is not just an intellectual and philosophical quest, but a requirement for continuing to live in, and care for, our tiny part of it, as our species expands outward into the Solar System.
Beginnings: The Big Bang, the Universe, and Galaxies
In the 1920s scientists concluded that the Universe is expanding from its origin in an enormous explosion—the “Big Bang”—10 to 20 billion years ago. In the future, it could either expand forever or slow down and then collapse under its own weight. Recent studies in particle physics suggest that the Universe will expand forever, but at an ever decreasing rate. For this to be true, there must be about ten times more matter in the Universe than has ever been observed; this “hidden matter” may be in the form of invisible particles that are predicted to exist by modern theory. Thus, in addition to normal galaxies there may be invisible “shadow galaxies” scattered throughout space.
The Universe contains 100 billion or more galaxies, each containing billions of stars. Our Galaxy, the Milky Way, is the home of a trillion stars, many of which resemble our Sun. Each of these stars, when it is formed from an interstellar cloud, is endowed with hydrogen and helium—simple chemical elements that formed in the Big Bang—as well as with heavier elements that formed in previous stellar furnaces. Hydrogen is consumed by a thermonuclear fire in the star’s core, producing heavier chemical elements that accumulate there before becoming fuel for new, higher temperature burning. In massive stars, the process continues until the element iron dominates the core. No further energy-producing nuclear reactions are then possible, so the core collapses suddenly under its own weight, producing vast amounts of energy in a stellar explosion known as a supernova. The temperature in a supernova is so high that virtually all of the chemical elements produced are flung into space, where they are available to become incorporated in later generations of stars. About once per century a supernova explosion occurs in each galaxy, leaving behind a compact object that may be a neutron star—as dense as an atomic nucleus and only a few miles in diameter—or a stellar black hole, in which space-time is so curved by gravity that no light can escape.
The Solar System
Our Solar System consists of the Sun, nine planets, their moons and rings, the asteroids, and comets. Comets spend most of their time in the “Oort cloud,” located 20,000-100,000 astronomical units from the Sun (an astronomical unit is the distance from Earth to the Sun, 93 million miles). The Solar System formed 4.5 billion years ago near the edge of our Galaxy. Heir to millions of supernova explosions, it contains a full complement of heavy elements. Some of these, like silicon, iron, magnesium, and oxygen, form the bulk of the composition of Earth; the elements hydrogen, carbon, and nitrogen are also present, providing molecules essential for life.
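The scale of these distances can be checked with simple arithmetic, using the 93-million-mile astronomical unit given above (the light-year conversion factor is an added figure, not from the report):

```python
AU_MILES = 93_000_000        # astronomical unit, as given in the text
AU_PER_LIGHT_YEAR = 63_241   # conversion factor (added here for scale)

inner_miles = 20_000 * AU_MILES     # inner edge of the Oort cloud
outer_miles = 100_000 * AU_MILES    # outer edge
outer_ly = 100_000 / AU_PER_LIGHT_YEAR

print(f"inner edge: {inner_miles:.2e} miles")
print(f"outer edge: {outer_miles:.2e} miles (~{outer_ly:.1f} light-years)")
```

The outer edge of the cloud thus lies more than a light-year from the Sun, a substantial fraction of the distance to the nearest star.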
A Grand Synthesis
The Universe has evolved from the Big Bang to the point we see it today, with hundreds of billions of galaxies and perhaps countless planets. There is no evidence that the processes which govern the evolution from elementary particles to galaxies to stars to heavy elements to planets to life to intelligence differ significantly elsewhere in the Universe. By integrating the insights obtained from virtually every branch of science, from particle physics to anthropology, humanity may hope one day to approach a comprehensive understanding of our position in the cosmos.
- IDENTIFICATION OF THE MISSING MATTER THAT MAKES UP 90 PERCENT OF THE UNIVERSE’S MASS.
- GRAVITATIONAL WAVES DETECTED.
- SOURCE OF HUGE ENERGIES IN EXPLODING GALAXIES DISCOVERED.
- IMAGE OF THE IMMEDIATE SURROUNDINGS OF A BLACK HOLE AT THE CENTER OF OUR GALAXY.
- AMINO ACIDS DISCOVERED IN THE URANIAN OCEAN.
- SUPERNOVA DEBRIS RECOVERED FROM COMET ICE.
- METHANE VOLCANOES DISCOVERED ON PLUTO.
- FIRST PLANET DISCOVERED OUTSIDE OF THE SOLAR SYSTEM.
- SIGNAL DETECTED FROM EXTRATERRESTRIAL INTELLIGENCE.
- MONTHLY SOLAR FLARE PREDICTIONS ACCURATE TO WITHIN HOURS.
- LINKS BETWEEN SOLAR ACTIVITY AND OUR WEATHER UNDERSTOOD.
- EARTH’S ENTIRE RADIATION BELT IMAGED.
- MONTHLY HURRICANE PREDICTIONS ACCURATE WITHIN 12 HOURS AND 100 MILES.
- THREE CONSECUTIVE EARTHQUAKES PREDICTED WITH 24-HOUR AND 50-MILE ACCURACY.
- SPACE STATION PROJECT TEAM CELEBRATES FIRST YEAR WITHIN A SYNTHETIC BIOSPHERE.
- BONE THINNING IN ASTRONAUTS HALTED WITH TREATMENT APPLICABLE TO PEOPLE ON EARTH.
- PERFECT GALLIUM ARSENIDE CRYSTALS PRODUCED IN SPACE STATION.
- THIRTY-DAY WEATHER FORECAST NOW 95 PERCENT ACCURATE.
- NEW STATE OF MATTER PRODUCED IN A SPACE LABORATORY.
- ICES DISCOVERED AT THE LUNAR POLES.
- FOSSILS FOUND IN ANCIENT MARTIAN RIVERBED.
- VENUS VOLCANOES VERIFIED.
A Nobel Prize-winning theory predicts the change in heat capacity of liquid helium at uniform pressure as it makes a transition to the superfluid state. Although equipment has been developed to hold samples at a steady temperature within one part in 10 billion, the variation of pressure through the sample due to gravity is so large that experiments have yielded far less accurate results than desired. Reducing gravity by a factor of 100,000, as is possible in the Space Station, can provide a high-quality test of the theory.
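The size of the gravity problem can be sketched with rough numbers. The density, sample height, and lambda-line slope below are assumed order-of-magnitude figures added for illustration, not values from the report:

```python
RHO = 145.0          # liquid helium density near the transition, kg/m^3 (assumed)
G = 9.81             # Earth gravity, m/s^2
HEIGHT = 0.01        # 1 cm sample height (assumed)
DT_DP = 1.3e-7       # approximate slope of the lambda line, K/Pa (assumed)

dp = RHO * G * HEIGHT          # hydrostatic pressure spread across the sample, ~14 Pa
dt = DT_DP * dp                # resulting smearing of the transition, ~2 microkelvin
dt_space = dt / 100_000        # with gravity reduced 100,000-fold in orbit
print(f"smearing: {dt:.1e} K on Earth, {dt_space:.1e} K in orbit")
```

On these figures the gravity-induced smearing on Earth is thousands of times larger than the achievable temperature control, while in orbit it drops below it, which is why the microgravity experiment is worthwhile.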
Research is also proceeding on “fractal aggregates,” structures that have the remarkable property that their mean density approaches zero the larger they become. Such structures are neither solid nor liquid, but represent an entirely new state of matter. So far, experiments on such structures are limited by the fact that the aggregates tend to collapse under their own weight as soon as they reach 0.0004 inches in size. In a microgravity environment, it should be possible to develop structures 100,000 times larger, or three feet across. Such sizes are essential if measurements of the physical properties of fractal structures are to be made.
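The quoted sizes are consistent: scaling the 0.0004-inch collapse limit by the 100,000-fold gravity reduction gives a structure about three feet across:

```python
ground_limit_in = 0.0004     # collapse size under Earth gravity (from the text)
scale_factor = 100_000       # size gain expected in microgravity (from the text)

space_size_in = ground_limit_in * scale_factor    # 40 inches
space_size_ft = space_size_in / 12                # ~3.3 feet
print(f"{space_size_in:.0f} inches = {space_size_ft:.1f} feet")
```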
Research on many other processes, including fractal gels, dendritic crystallization (the process that produces snowflakes), and combustion of clouds of particles, will profit substantially from the microgravity environment of the Space Station. Novel applications are likely to develop from basic research in these areas, just as the transistor grew out of basic research on the behavior of electrons in solids.
On Earth, laboratory experiments have shown that many of the bio-chemical compounds that are essential for life as we know it can be synthesized under conditions that we believe simulate those on primitive Earth. In space, we can investigate the effects of gravitational force on the behavior of these life-related compounds. Astronomers have found evidence for the existence of simple and complex compounds of carbon in interstellar space. In a microgravity laboratory in space we can investigate the apparently universal processes that form these compounds, which were no doubt present in the matter from which the Solar System was formed, and were probably the precursors to life on Earth.
An especially promising avenue of research in space is the pursuit of new tests of Einstein’s theory of general relativity. It has long been recognized that because deviations from the Newtonian theory of gravitation within the Solar System are minute, extremely sensitive equipment is required to detect them. Many experiments require the ultraquiet conditions of space. Because Einstein’s theory is fundamental to our understanding of the cosmos—in particular, to the physics of black holes and the expanding Universe—it is important that it be experimentally verified with the highest possible accuracy.
Relativity predicts a small time delay of radio signals as they propagate in the Solar System; the accuracy in measuring this effect can be continuously improved by tracking future planetary probes. A Mercury orbiter would further improve the accuracy of measurement of changes in Newton’s gravitational constant, already shown to be less than one part in 100 billion per year. An experiment in Earth orbit called Gravity Probe B will measure the precession of a gyroscope in Earth orbit with extreme precision, permitting verification of another relativistic effect.
Einstein’s theory also predicts that a new type of radiation should be produced by masses in motion. There is great interest in detecting this so-called gravitational radiation, not only because it would test Einstein’s theory in a fundamental way, but because it could open a new window through which astronomers could study phenomena in the Universe, particularly black holes. Gravitational radiation detectors are in operation, or are being built, on the ground, but they are sensitive only to wave periods less than 0.1 second because of Earth’s seismic noise.
The radiation predicted from astronomical objects would have much longer periods if it is due to orbiting double stars and black holes with masses greater than 10,000 Suns, such as are believed to exist in the nuclei of galaxies. An attempt will be made to detect such radiation by ranging to the Galileo spacecraft en route to Jupiter. A more powerful approach for the future is to use a large baseline detector based upon optical laser ranging between three spacecraft in orbit about the Sun; detecting minute changes in their separations would indicate the passage of a gravitational wave.
Finally, instruments deployed for more general purposes can make measurements to test general relativity. For example, a 100-foot optical interferometer in Earth orbit designed for extremely accurate determination of stellar positions could measure the relativistic bending of light by the Sun with unprecedented precision. A spacecraft that plunges close to the Sun to study plasma in its vicinity could measure the gravitational red-shift of the Sun to high precision. In summary, a variety of space-based experiments on the shuttle and Space Station, in free flyers, and in orbit around the Sun and other planets have the capacity to test general relativity with a high degree of accuracy. Gravitational radiation from certain astronomical sources can be detected only in space. When that happens, astronomers will have an exciting new tool with which to study the Universe.
The objective of this field of study is to understand the physics of the Sun and the heliosphere, the vast region of space influenced by the Sun. Other regions of interest include the magnetospheres, ionospheres, and upper atmospheres of Earth, the planets, and other bodies of the Solar System. With this in mind, studies of the basic processes which generate solar energy of all kinds and transmit it to Earth should be emphasized, both because the physical mechanisms involved are of interest, and because there are potential benefits to life on Earth.
There are a number of sub-goals within this discipline: To understand the processes that link the interior of the Sun to its corona; the transport of energy, momentum, plasma, and magnetic fields through interplanetary space by means of the solar wind; the acceleration of energetic particles on the Sun and in the heliosphere; Earth’s upper atmosphere as a single, dynamic, radiating, and chemically active fluid; the effects of the solar cycle, solar activity, and solar-wind disturbances upon Earth; the interactions of the solar wind with Solar System bodies other than Earth; and magnetospheres in general. Without assuming a specific direct connection, the possible influence of solar-terrestrial interactions upon the weather and climate of Earth should be clarified.
A number of near-term activities are essential to the advancement of solar and space physics. Advanced solar observatories will study detailed energy production mechanisms in the solar atmosphere, while the European Space Agency’s Ulysses spacecraft will make measurements of activity at the poles of the Sun. Spacecraft with sufficient velocity to leave the inner Solar System will make possible measurements in the outer heliosphere, including its transition to the interstellar medium of the Galaxy. The International Solar-Terrestrial Physics program, which will be carried out jointly by the United States, Japan, and Europe, will trace the flow of matter and energy from the solar wind through Earth’s magnetosphere and into the upper atmosphere; investigate the entry, storage, and energization of plasma in Earth’s neighborhood; and assess how time variations in the deposition of energy in the upper atmosphere affect the terrestrial environment. Interactions of solar plasma with other planets and with satellites and comets will be investigated by a number of planetary probes already in space or on the drawing boards.
Up to now, information about Earth’s magnetosphere has been based upon measurements made continuously as various spacecraft move through the plasma and magnetic field in that region. An instantaneous global image of the entire magnetosphere can be made using ultraviolet emissions from ionized helium in the magnetosphere. It may also be possible to form an image of energetic particles by observing energetic neutral atoms as they propagate from various regions, having exchanged charge with other atoms there. Innovative experiments will be conducted from the shuttle to investigate the effects of waves, plasma beams, and neutral gases injected into Earth’s magnetosphere.
To date, our knowledge of the outer atmosphere of the Sun has been based upon remote sensing from the distance of Earth. In a new concept, a spacecraft would be sent on a trajectory coming to within 4 solar radii of the surface of the Sun, only 1/50th of Earth’s distance. The spacecraft would carry instruments to measure the density, velocity, and composition of the solar-wind plasma, together with its embedded magnetic field, in an attempt to discover where the solar wind is accelerated to the high velocities observed near Earth. Possible trajectories include a Jupiter swingby or a hypersonic flyby in the upper atmosphere of Venus. Such a mission would yield precise data on the gravitational field of the Sun with which to study its interior, and would test general relativity with higher precision. If a thruster were fired at the closest approach to the Sun, the energy change would be so great that the spacecraft would leave the Solar System with high velocity, reaching 100 times the distance of Earth in only nine years. This would provide measurements where the solar wind makes a transition to the local interstellar medium.
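The quoted perihelion can be checked directly; the solar radius used below is an assumed round figure, while the astronomical unit is the value given earlier in the report:

```python
SOLAR_RADIUS_MILES = 432_000   # approximate solar radius (assumed figure)
AU_MILES = 93_000_000          # Earth-Sun distance, as given earlier in the report

perihelion_miles = 4 * SOLAR_RADIUS_MILES    # closest approach: 4 solar radii
fraction = AU_MILES / perihelion_miles       # ~54, i.e. roughly the quoted 1/50th
print(f"{perihelion_miles:,} miles, about 1/{fraction:.0f} of Earth's distance")
```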
To acquire high-resolution information about the poles of the Sun over a long period, a solar polar orbiter should be flown. A network of four spacecraft at the distance of Earth, but positioned every 90 degrees around the Sun, would provide stereoscopic views of solar features which are otherwise difficult to locate in space, and would also monitor solar flare events over the whole Sun. Such a network would also give early warning to astronauts outside the protective shield of Earth’s magnetic field.
Finally, plasmas in space should be studied for their own sake. Plasma is an inherently complex state of matter, involving many different modes of interaction among charged particles and their embedded magnetic fields. Our understanding of the plasma state is based upon theoretical research, numerical simulations, laboratory experiments, and observations of space plasmas. The synergy among these approaches should be developed and exploited. If neutral atoms and dust particles are present, as in planetary ring systems and in comets, novel interactions occur; they can be studied by injecting neutral gases and dust particles into space plasmas.
With the exception of the samples returned from the Moon by the Apollo astronauts and the Soviet robotic Luna spacecraft, and meteoritic materials that are believed to have fallen naturally on Earth from the asteroids, the Moon, and Mars, we have no samples of materials from bodies elsewhere in the Solar System for analysis in Earth-based laboratories.
Decades of study of meteoritic materials and lunar samples have demonstrated that vast amounts of information can be learned about the origin, evolution, and nature of the bodies from which samples are derived, using laboratory techniques which have progressed to the point where precise conclusions can be drawn from an analysis of even a microscopic sample.
The laboratory apparatus involved is heavy, complex, and requires the close involvement of people. Thus, given the substantial round-trip travel time for radio signals between Earth and these objects, it appears impractical to operate this equipment effectively under radio control on the bodies of greatest interest. The best method is to acquire and return samples, as was done by Apollo and Luna. Robot vehicles will be the most cost-effective approach to sample acquisition and return in the foreseeable future.
Unlike meteoritic materials, the samples will be obtained from known sites, whose location in an area which has been studied by remote sensing makes it possible to generalize the results to the body as a whole. Because of the variations among different provinces, samples are required from several sites in order to develop an adequate understanding of a specific object.
Considerable thought has been given to which targets are the most promising. They must be reachable, and samples must be able to be returned with technology that can be developed in the near future. Their surfaces must be hospitable enough so that collecting devices can survive on them, and they must be well-enough understood that a complex sample-return mission can be planned and successfully executed. For these reasons, as well as others noted in the text, we recommend that a sample return from Mars be accomplished as soon as possible.
Though at present no individual comet meets the criteria discussed above, comets in general are promising targets for sample return. A start on the study of comets has been made by the 1985 encounter with Comet Giacobini-Zinner, and the 1986 encounters with Comet Halley. The proposed mission to a comet and an asteroid in the Solar System Exploration Committee’s core program will yield much more information. Comets are probably composed of ices of methane, ammonia, and water, together with silicate dust and organic residues. The evidence suggests that these materials accumulated very early in the history of the Solar System. Because comets are very small (a few miles in diameter), any heat generated by radioactivity readily escaped, so they never melted, unlike the larger bodies in the Solar System. It is quite possible, therefore, that the primitive materials which accumulated to form the Sun, planets, moons, and asteroids are preserved in essentially their original form within comets. It is even possible that comets contain some dust particles identical to those astronomers have inferred to be present in interstellar clouds. If so, a comet could provide a sample of the interstellar matter that pervades our Galaxy. The Space Science Board has given high priority to determining the composition and physical state of a cometary nucleus. No mission short of a sample return will provide the range and detail of analyses needed to definitively characterize the composition and structure of a comet nucleus.
Beyond the asteroid belt lie four giant ringed planets (Jupiter, Saturn, Uranus, and Neptune), the curiously small world Pluto, more than 40 moons (two of which—Titan and Ganymede—are larger than the planet Mercury), and two planetary magnetospheres larger than the Sun itself. The center of gravity of our planetary system is here, since these worlds (chiefly Jupiter and Saturn) account for more than 99 percent of the mass in the Solar System outside of the Sun itself. The outer planets, especially Jupiter, can provide unique insights into the formation of the Solar System and the Universe. Because of their large masses, powerful gravitational fields, and low temperatures, these giant planets have retained the hydrogen and helium they collected from the primordial solar nebula.
The giant worlds of the outer Solar System differ greatly from the smaller terrestrial planets, so it is not surprising that different strategies have been developed to study them. The long-term exploration goal for terrestrial planets and small bodies is the return of samples to laboratories on Earth, but the basic technique for studying the giant planets is the direct analysis of their atmospheres and oceans by means of probes.
Atmospheric measurements, which will be undertaken for the first time by Galileo at Jupiter, provide the only compositional information that can be obtained from a body whose solid surface (if any) lies inaccessible under tens of thousands of miles of dense atmosphere. Atmospheric probe measurements, like measurements on returned samples, will provide critical information about cosmology and planetary evolution, and will permit fundamental distinctions to be made among the outer planets themselves.
The outer Solar System provides us with a special challenge, one that can be described as an embarrassment of scientific riches. It presents an overwhelming number of potential targets beyond the planets: The larger moons (Titan and Triton), the smaller moons (including the diverse Galilean satellites), the rings, and the magnetospheres.
Exciting possible missions include: (1) Deep atmospheric probes (to 500 bars) to reach the lower levels of the atmospheres of Jupiter and Saturn and measure the composition of these planets; (2) hard and soft landers for various moons, which could emplace a variety of seismic, heat-flow, and other instruments; (3) close-up equipment in low orbits; (4) detailed studies of Titan, carried out by balloons or surface landers; (5) on-site, long-term observations of Saturn’s rings by a so-called “ring rover” spacecraft able to move within the ring system; and (6) a high-pressure oceanographic probe to image and study the newly-discovered Uranian Ocean.
The size of the current generation of “great observatories” reflects the limitations on weight, size, and power of facilities that can be launched into low Earth orbit by the space shuttle. In the future, the permanently occupied Space Station will furnish a vitally important new capability for astronomical research—that of assembling and supporting facilities in space that are too large to be accommodated in a single shuttle launch.
Such large facilities will increase sensitivity by increasing the area over which radiation is collected, and will increase angular resolution using the principle of interferometry, in which the sharpness of the image is proportional to the largest physical dimension of the observing system. Though one or the other goal will usually drive the design of any particular instrument, it is possible to make improvements in both areas simultaneously. When we can construct very large observatories in space, these improvements will be achieved over the whole electromagnetic spectrum. Although the Moon will also offer advantages for astronomical facilities once a lunar base becomes available, we focus our remaining discussion upon facilities in low Earth orbit.
A large deployable reflector of 65 to 100 feet aperture for observations in the far infrared spectrum, that is, diffraction limited down to 30 microns wavelength (where it would produce images a fraction of an arc second across), will permit angular resolutions approaching or exceeding that of the largest ground-based optical telescopes. This project would yield high-resolution infrared images of planets, stars, and galaxies rivaling those routinely available in other wavelength ranges. Assembly in Earth orbit is the key to this observatory.
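The “fraction of an arc second” figure follows from the Rayleigh diffraction criterion, using the 65-foot aperture and 30-micron wavelength given above (the criterion itself is a standard optics formula, not stated in the report):

```python
import math

WAVELENGTH_M = 30e-6           # far-infrared diffraction limit from the text
APERTURE_M = 65 * 0.3048       # 65-foot aperture converted to meters (~19.8 m)

theta_rad = 1.22 * WAVELENGTH_M / APERTURE_M    # Rayleigh criterion: 1.22 * lambda / D
theta_arcsec = math.degrees(theta_rad) * 3600   # convert radians to arcseconds
print(f"resolution: {theta_arcsec:.2f} arcseconds")
```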
A large space telescope array composed of several 25-foot-diameter telescopes would operate in the ultraviolet, visible, and infrared. The combination of larger diameter telescopes with a large number of telescopes would make this instrument 100 times more sensitive than the Hubble Space Telescope. Because the image would be three times sharper, the limiting faintness for long exposures would increase more than 100 times. Such an instrument would enable detailed studies of the most distant galaxies, as well as of planets, with exquisite angular and spectral resolution.
A set of radio telescopes 100 feet or more in diameter could be constructed in Earth orbit by astronauts to provide a very long baseline array for observing radio sources, with the radio signals transmitted to a ground station. Such radio telescopes in space could greatly extend the power of the ground-based Very Long Baseline Array now under construction. The angular resolution of the latter, 0.3 milliarcseconds (the size of a person on the Moon as seen from Earth), could be improved 300-fold by putting telescopes in orbits ranging out as far as 600,000 miles. The resulting resolution of 1/1000th of a milliarcsecond—or one microarcsecond—would enable us to image activity in the center of our Galaxy—believed to be due to a black hole—very nearly down to the black hole itself. It would also provide images of larger, more massive black holes suspected to be at the centers of several nearby galaxies.
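Because interferometer resolution scales with the longest baseline, the 300-fold gain can be checked roughly. The ground baseline below is an assumed figure for the longest Very Long Baseline Array spacing, not a number from the report:

```python
GROUND_BASELINE_MI = 5_000     # rough longest ground VLBA baseline (assumed figure)
ORBIT_EXTENT_MI = 600_000      # orbit radius cited in the text

space_baseline_mi = 2 * ORBIT_EXTENT_MI         # two telescopes on opposite sides
gain = space_baseline_mi / GROUND_BASELINE_MI   # resolution scales with baseline
space_res_mas = 0.3 / gain                      # from the 0.3-milliarcsecond figure
print(f"~{gain:.0f}-fold gain, resolution ~{space_res_mas * 1000:.1f} microarcseconds")
```

The result is of the same order as the report’s quoted 300-fold improvement and one-microarcsecond resolution.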
A long-baseline optical space interferometer composed of two or more large telescopes separated by 300 miles would also provide resolution of 1 microarcsecond, although not complete information about the image. This resolution would permit us to detect a planet no larger than Earth in orbit around a nearby star (by means of its gravitational pull on the star) and to measure the gravitational deflection of light by the Sun as a high-precision test of general relativity.
A high-sensitivity x-ray facility, having about 100 times the collecting area of the planned Advanced X-ray Astrophysics Facility, could be assembled in orbit. A Space Station-serviced x-ray observatory would make possible the detection of very faint objects, such as stellar explosions in distant galaxies, as well as high-spectral-resolution observations of brighter objects. This would make possible a study of x-ray signatures of the composition, temperature, and motion of emitted gases. For example, the theory that most heavy elements are produced in supernovae can be tested by studying the gaseous ejecta in supernova remnants. A hard x-ray imaging facility with a large (1,000 square feet) aperture is needed to study x-rays with energies in the range from 10 KeV to 2 MeV. Sources of such radiation are known, but are too faint for smaller-aperture instruments to analyze in detail. It is important to find out whether the known faint background radiation at these energies is coming from very distant objects, such as exploding galaxies, or from gas clouds heated by the stellar explosions that accompany galaxy formation.
The future development of gamma-ray astronomy will depend upon the results of the planned Gamma Ray Observatory, but it is anticipated that larger collecting areas and higher spectral and angular resolution will be needed to sort out the sources and carry out detailed spectroscopy. Cosmic-ray studies will require a superconducting magnet in space with 1,000 square feet of detectors to determine the trajectories of individual particles and hence their energy and charge.
The great observatories of the next century will push technology to its limits, including the capability to assemble large structures in orbit and on the Moon, the design of extremely rigid structures that can be tracked and moved with great precision, and the development of facilities on the Space Station for repairing and maintaining astronomical facilities in orbit. Because of the huge information rates anticipated from such observatories, great advances in computing will be required, especially massive data storage (up to 100 billion bits) accessible at high rates. The preliminary analysis of the data will be performed by supercomputers in orbit, transmitting only the results to the ground. The program will require a long-term commitment to education and support of young scientists, who will be the lifeblood of the program, as well as the implementation of high-priority precursor missions, including first-generation great observatories and moderate-scale projects.
The Evolution of Earth and Its Life Forms
Earth is the only one of our Solar System’s nine planets that we know harbors life. Why is Earth different from the other planets? Life as we know it requires tepid liquid water, and Earth alone among the bodies of the Solar System has had that throughout most of its history.
Biologists have long pursued the hypothesis that living species emerge very gradually, as subtle changes in the environment give decisive advantages to organisms undergoing genetic mutations. The recent discovery that the extinction of the dinosaurs (and many other species as well) some 65 million years ago appears to have coincided with the collision of Earth with a large object from outer space—such as a comet or asteroid—has led to new interest in “punctuated equilibrium.” According to this concept, a drastic change in environment, in this case the pall cast upon Earth by the giant cloud of dust that resulted from the collision, can destroy some branches of the tree of life in a short span of time, and thereby open up new opportunities for organisms that were only marginally competitive before. The story of the evolution of life on Earth—once the sole province of biology—thus depends in part upon astronomical studies of comets and asteroids which may collide with our planet, the physics of high-velocity impact, and the complex processes that govern the movement of dust in Earth’s atmosphere.
Atmospheric scientists are finding that within such short times as decades or centuries the character of life on Earth may depend upon materials originating in the interior of the planet (including dust and gases from volcanoes), chemical changes in the oceans and the atmosphere (including the increase in carbon dioxide due to agricultural and industrial activity), and specific radiations reaching us from the Sun (such as the ultraviolet rays which affect the chemical composition of Earth’s atmosphere). Through mechanisms still not understood, changes in Earth’s climate may in turn depend upon the evolution of life. It has become apparent that life on Earth exists in a complex and delicate balance not only with its own diverse elements, but with Earth itself, the Sun, and probably even comets and asteroids. Interactions among climatology, geophysics, geochemistry, ecology, astronomy, and solar physics are all important as we contemplate the future of our species; space techniques are playing an increasing role in these sciences.
Space techniques are also valuable for studying Earth’s geology. The concept of continental drift, according to which the continents change their relative positions as the dense rocks on which they rest slowly creep, is proving to be a key theory in unraveling the history of Earth as recorded in the layers of sediments laid down over millions of years.
The Possibility of Other Life in the Universe
Are we alone in the Universe? Virtually all stars are composed of the same chemical elements, and our current understanding of the process by which the Solar System formed suggests that all Sun-like stars are likely locales for planets. The search for life begins in our own Solar System, but based on the information we have gleaned from robotic excursions to Mercury, Venus, the Moon, Mars, Jupiter, Saturn, and Uranus, it now appears that Mars, and perhaps Titan, a moon of Saturn, are the most likely candidates for the existence of rudimentary life forms now or in the past.
The existence of water on Mars in small quantities of surface ice and in atmospheric water vapor, and perhaps in larger quantities frozen beneath the surface, leaves open the possibility that conditions on Mars may once have been favorable enough to support life in some areas. Samples returned from regions where floods have occurred may provide new clues to the question of life on Mars.
Titan has a thick atmosphere of nitrogen, along with methane and traces of hydrogen cyanide—one of the building blocks of biological molecules. Unfortunately, the oxygen atoms needed for other biological molecules are missing, apparently locked forever in the ice on Titan’s surface.
How do we search for planets beyond our Solar System? The 1983 Infrared Astronomy Satellite discovered that dozens of stars have clouds of particles surrounding them emitting infrared radiation; astrophysicists believe that such clouds represent an early stage in the formation of planets. Another technique is to track the position of a star over a number of years. Although planets are much less massive than stars, they nevertheless exert a significant gravitational force upon them, causing them to wobble slightly. Through a principle called interferometry, which combines the outputs of two telescopes some distance apart to yield very sharp images, it should be possible to detect planets—if they exist—by the perturbations they cause as they orbit nearby stars similar to our Sun. With sufficiently large arrays of telescopes in space we might obtain images of planets beyond the Solar System. By searching for evidence of water and atmospheric gases we might even detect the existence of life on those planets.
If life originated by the evolution of large molecules in the oceans of newly-formed planets, then other planets scattered throughout our Galaxy could be inhabited by living species, some of which may possess intelligence.
If intelligent life does exist beyond our Solar System, we might detect its messages. The Search for Extraterrestrial Intelligence, or SETI, is a rapidly advancing field. For several decades it has been technically possible to detect radio signals (if any) directed at Earth by alien civilizations on planets orbiting nearby stars. It is now possible to detect such signals from anywhere in our Galaxy, opening up the study of over 100 billion candidate stars. Such a detection, if it ever occurs, would have profound implications not only for physical and biological sciences, but also for anthropology, political science, philosophy, and religion. Are we alone? We still do not know.
To lift payloads in Earth’s gravitational field and place them in orbit, we must expend energy. We generate it first as the energy of motion—hence the great speeds our rockets must attain. As rockets coast upward after firing, their energy of motion converts, according to Newton’s laws, to the energy of height. In graphic terms, to lift a payload entirely free of Earth’s gravitational clutch, we must spend as much energy as if we were to haul that payload against the full force of gravity that we feel on Earth, to a height of 4,000 miles.
To reach the nearer goal of low Earth orbit, where rockets and their payloads achieve a balancing act, skimming above Earth’s atmosphere, we must spend about half as much energy—still equivalent to climbing a mountain 2,000 miles high.
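The 4,000-mile and 2,000-mile figures above follow from elementary gravitation. A minimal sketch, using standard textbook values for Earth’s mass and radius rather than figures from this report, checks them:

```python
# Back-of-envelope check of the "equivalent climb" analogy above.
# Constants are standard textbook values, not taken from the report.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of Earth, kg
R = 6.371e6          # mean radius of Earth, m
g = G * M / R**2     # surface gravity, about 9.8 m/s^2

# Energy per kilogram to escape Earth entirely is GM/R; dividing by the
# constant surface gravity g gives the height of the equivalent climb.
escape_miles = (G * M / R) / g / 1609.34
print(round(escape_miles))       # close to the 4,000-mile figure

# A low circular orbit needs roughly half the escape energy.
leo_miles = 0.5 * (G * M / R) / g / 1609.34
print(round(leo_miles))          # close to the 2,000-mile figure
```

The "climb height" for full escape works out to one Earth radius, which is where the roughly 4,000-mile figure comes from.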
Once in “free space,” the region far from planets and moons, we can travel many thousands of miles at small expenditure of energy.
A biosphere is an enclosed ecological system. It is a complex, evolving system within which flora and fauna support and maintain themselves and renew their species, consuming energy in the process. A biosphere is not necessarily stable; it may require intelligent tending to maintain species at the desired levels. Earth supports a biosphere; up to now we know of no other examples. To explore and settle the inner Solar System, we must develop biospheres of smaller size, and learn how to build and maintain them.
In order to grow food crops and the entire range of plants that enrich and beautify our lives, we need certain chemical elements, the energy of sunlight, gravity, and protection from radiation. All can be provided in biospheres built on planetary surfaces, although normal gravity is available only on Earth. These essentials are also available in biospheres to be built in the space environment, where Earth-normal gravity can be provided by rotation. Both in space and on planetary surfaces, certain imported chemicals will be required as initial stocks for biospheres.
Within the past two decades biospheres analogous to habitats in space or on planetary surfaces have been built both in the U.S.S.R. and in the United States. An example of biosphere technologies can be seen at the “Land” pavilion at the EPCOT Center (Experimental Prototype Community of Tomorrow) near Orlando, Florida.
Specialists in biospheres are now building, near Tucson, Arizona, a fully closed ecological system which will be a simulation of a living community in space. It is called Biosphere II. Its volume, three million cubic feet, is about the volume over a land area of four acres with a roof 20 feet above it. Biosphere II is much more than greenhouse agriculture. Within it will be small versions of a farm, an ocean, a savannah, a tropical jungle and other examples of Earth’s biosystems. Eight people will attempt to live in Biosphere II for two years.
The builders of Biosphere II have several goals: to enhance greatly our understanding of Earth’s biosphere; to develop pilot versions of biospheres, which could serve as refuges for endangered species; and to prepare for the building of biospheres, in space and on planetary surfaces, which would become the settlements of the space frontier.
In the early 1960s, the Government, through NASA, developed and launched the first weather satellites. When the operation of weather satellites matured, they were turned over to the Department of Commerce’s Environmental Science Services Administration, which became part of the newly-established National Oceanic and Atmospheric Administration (NOAA) in 1970. Today, NOAA continues to operate and manage the U.S. civilian weather satellite system, comprised of two polar-orbiting and two geostationary satellites.
The Landsat remote sensing system had similar origins. Developed initially by NASA, the first Landsat satellite was launched in 1972; the most recent spacecraft in the series, Landsat 5, was orbited in 1984. So successful was the Landsat concept that a nationwide and worldwide group of users quickly grew, encouraged by NASA and the Agency for International Development (AID). A global network of ground stations now receives and processes Landsat transmitted data, and many countries incorporate Earth remote sensing in their development projects. Over the years, Landsat has proven to be one of the most popular forms of American foreign aid.
Although remote sensing data are provided by the United States at relatively low cost, many user nations have installed expensive equipment to directly receive Landsat data. Their investments in Landsat provide a strong indication of the data’s value.
Successive administrations and Congresses wrestled with the question of how best to deal with a successful experimental system that had, in fact, become operational. Following exhaustive governmental review, President Carter decided in 1979 that Landsat would be transferred to NOAA with the eventual goal of private sector operation after 7 to 10 years. Following several years of transition between NASA and NOAA, the latter formally assumed responsibility for Landsat 4 in 1983. By that time, the Reagan Administration had decided to accelerate the privatization of Landsat, but despite the rapid growth in the demand for these services, no viable commercial entity appeared ready to take it over without some sort of Government subsidy. In 1984, Congress passed the Land Remote Sensing Commercialization Act to facilitate the process.
Seven qualified bidders responded to the Government’s proposal to establish a commercial land remote sensing satellite system, and two were chosen by the Department of Commerce for final competition. One later withdrew after the Reagan Administration indicated that it would provide a considerably lower subsidy than anticipated. The remaining entrant, EOSAT, negotiated a contract that included a Government subsidy and required it to build at least two more satellites in the series. In the fall of 1985, EOSAT, a joint venture between RCA Astro-Electronics and Hughes Santa Barbara Aerospace, assumed responsibility for Landsat.
The Government’s capital assistance to EOSAT is in limbo at this time because of the current budget situation, even though EOSAT was contractually targeted for such financial support. It is, therefore, too soon to say whether the Landsat privatization process will provide a successful model for the transfer of a Government-developed space enterprise to the private sector.
Factories that could replicate themselves would be attractive for application in space because the limited carrying capacity of our rocket vehicles and the high costs of space transport make it difficult otherwise to establish factories with large capacities. The concept of self-replicating factories was developed by the mathematician John von Neumann. Three components are needed for industrial establishment in space: a transporting machine, a plant to process raw material, and a “job shop” capable of making the heavy, simple parts of more transporting machines, process plants, and job shops. These three components would all be tele-operated from Earth, and would normally be maintained by robots. Intricate parts would be supplied from Earth, but would be only a small percentage of the total. Here is an example of how such components, once established, could grow from an initial “seed” exponentially, the same way that savings grow at compound interest, to become a large industrial establishment:
Suppose each of the three seed components had a mass of 10 tons, so that it could be transported to the Moon in one piece. The initial seed on the Moon would then be 30 tons. A processing plant and job shop would also be located in space—20 tons more. After the first replication, the total industrial capacity in space and on the Moon would be doubled, and after six more doublings it would be 128 times the capacity of the initial seed. Those seven doublings would give us the industrial capacity to transport, process, and fabricate finished products from over 100,000 tons of lunar material each year from then onward. That would be more than 2,000 times the weight of the initial seed—a high payback from our initial investment.
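The compound-interest arithmetic above can be sketched in a few lines, using only the tonnages given in the example:

```python
# Exponential growth of the self-replicating "seed" described above.
seed_tons = 30 + 20          # 30-ton lunar seed plus 20 tons in space
capacity_tons = seed_tons
for _ in range(7):           # the first replication plus six more doublings
    capacity_tons *= 2
print(capacity_tons)                 # 6400 tons of installed equipment
print(capacity_tons // seed_tons)    # 128 times the initial seed
```

Seven doublings multiply any starting stock by 2^7 = 128, which is why the payback grows so quickly once the seed is established.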
In an electromagnetic accelerator, electric or magnetic fields are used to accelerate material to high speeds. The power source can be solar or nuclear. There are two types of accelerators for use in space: the “ion engine” and the “mass-driver.” The ion engine uses electric fields to accelerate ions (charged atoms). Ion engines are compact, relatively light in weight, and well-suited to missions requiring low thrust sustained for a very long time.
Mass-drivers are complementary to ion engines, developing much higher thrusts but not suited to extreme velocities. A mass-driver accelerates by magnetic rather than electric fields. It is a magnetic linear accelerator, designed for long service life, and able to launch payloads of any material at high efficiency. Mass-drivers should not be confused with “railguns,” which are electromagnetic catapults now being designed for military applications.
A mass-driver consists of three parts: the accelerator, the payload carrier, and the payload. For long lifetime, the system is designed to operate without physical contact between the payload carrier and accelerator. The final portion of the machine operates as a decelerator, to slow down each payload carrier for its return and reuse.
A key difference between the mass-driver and the ion engine is that the mass-driver can accelerate any solid or liquid material without regard to its atomic properties. Used as a propulsion system, the mass-driver could use raw lunar soil, powdered material from surplus shuttle tankage in orbit, or any material found on asteroids as propellant. Its characteristics make it suitable for load-carrying missions within the inner Solar System.
Another potential application for a mass-driver is to launch payloads from a fixed site. The application studied in the most depth at this time is the launch of raw material from the Moon to a collection point in space, for example, one of the lunar Lagrange points. A mass-driver with the acceleration of present laboratory models, but mounted on the lunar surface, would be able to accelerate payloads to lunar escape speed in a distance of only 170 yards. Its efficiency would be about 70 percent, about the same as that of a medium-size electric motor. Loads accelerated by a mass-driver could range from a pound to several tons, depending on the application and available power supply.
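The acceleration implied by reaching lunar escape speed within 170 yards can be checked with basic kinematics. The escape speed used below is the standard value of about 2.38 kilometers per second, an assumption not stated in the text:

```python
# Constant acceleration needed to reach lunar escape speed in 170 yards,
# from the kinematic relation v^2 = 2*a*d.
v_escape = 2380.0              # lunar escape speed, m/s (standard value)
d = 170 * 0.9144               # 170 yards converted to meters
a = v_escape**2 / (2 * d)      # required acceleration, m/s^2
print(round(a / 9.81))         # nearly 1,900 times Earth gravity
```

An acceleration of that magnitude is acceptable for bulk raw material but rules out fragile cargo or passengers, which is consistent with the machine being described as a launcher of lunar soil.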
Technological advance across a broad spectrum is the key to fielding an aerospace plane. A highly innovative propulsion design can make possible horizontal takeoff and single-stage-to-orbit flight with high specific impulse (Isp). The aerospace plane would use a unique supersonic combustion ramjet (SCRAMJET) engine which would breathe air up to the outer reaches of the atmosphere. This approach virtually eliminates the need to carry liquid oxygen, thus reducing propellant and vehicle weight. A small amount of liquid oxygen would be carried to provide rocket thrust for orbital maneuvering and for cabin atmosphere.
A ramjet, as its name implies, uses the ram air pressure resulting from the forward motion of the vehicle to provide compression. The normal ramjet inlet slows down incoming air while compressing it, then burns the fuel and air subsonically and exhausts the combustion products through a nozzle to produce thrust. To fly faster than Mach 6, the internal geometry of the engine must be varied in order to allow the air to remain at supersonic speeds through the combustor. This supersonic combustion ramjet could potentially attain speed capability of Mach 12 or higher.
Such a propulsion system must cover three different flight regimes: takeoff, hypersonic, and rocket. For takeoff and acceleration to Mach 4, it would utilize air-turbo-ramjets or cryojets. From Mach 4 to Mach 6, the engine would operate as a conventional subsonic combustion ramjet. From Mach 6 to maximum airbreathing speeds, the engine would employ a supersonic combustion SCRAMJET. At speeds of about Mach 12 and above, the SCRAMJET engine might have additional propellant added above the hydrogen flow rates needed for utilization of all air captured by the inlet. This additional flow would help cool the engine and provide additional thrust. Final orbital insertion could be achieved with an auxiliary rocket engine.
Such a system of propulsion engines must be carefully integrated with the airframe. Proper integration of the airbreathing inlets into the airframe is a critical design problem, since the shape of the aircraft itself determines in large part the performance of the engine. During SCRAMJET operation, the wing and forward underbody of the vehicle would generate oblique shock waves which produce inlet air flow compression. The vehicle afterbody shape behind the engine would form a nozzle producing half the thrust near orbital speeds. Second-generation supercomputers can now provide the computational capability needed to efficiently calculate the flow fields at these extremely high Mach numbers. These advanced design tools provide the critical bridge between wind tunnels and piloted flight in regimes of speed and altitude that are unattainable in ground-based facilities. In addition, supercomputers permit the usual aircraft design and development time to be significantly shortened, thus permitting earlier introduction of the aerospace plane into service.
The potential performance of such an airframe-inlet-engine-nozzle combination is best described by a parameter known as the net “Isp,” which is the measure of the pounds of thrust, minus the drag from the engine, per pound of fuel flowing through each second. The unit of measure is seconds; the larger the value, the more efficient the propulsion. For the aerospace plane over the speed range of Mach 0 to Mach 25, the engines should achieve an average Isp in excess of 1,200 seconds burning liquid hydrogen. This compares with an Isp of about 470 seconds for the best current hydrogen-oxygen rocket engines, such as the space shuttle main engine. It is the high Isp of an air-breathing engine capable of operating over the range from takeoff to orbit that could make possible a single-stage, horizontal takeoff and landing aerospace plane. For “airliner” or “Orient Express” cruise at Mach 4 to Mach 12, the average Isp is even larger, making the SCRAMJET attractive for future city-to-city transportation.
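The net Isp bookkeeping can be illustrated with a short sketch. All three input numbers below are hypothetical, chosen only to show the units and to land in the regime cited above:

```python
# Net Isp = (thrust - engine-related drag) / fuel weight flow rate.
# The thrust, drag, and flow values are illustrative assumptions only.
thrust_lbf = 50_000.0          # gross thrust, pounds
engine_drag_lbf = 14_000.0     # drag chargeable to the propulsion system
fuel_flow_lb_per_s = 30.0      # hydrogen flow, pounds per second

net_isp_s = (thrust_lbf - engine_drag_lbf) / fuel_flow_lb_per_s
print(net_isp_s)               # 1200.0 seconds
```

Because the drag of the installation is charged against the engine, net Isp is a figure of merit for the whole airframe-engine combination, not the engine alone.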
Another key technology is high strength-to-weight ratio materials capable of operating at very high temperatures while retaining the properties of reusability and long life. These can make possible low maintenance, rapid turnaround, reduced logistics, and low operational costs. Promising approaches to high-temperature materials include rapid-solidification-rate metals, carbon-carbon composites, and advanced metal matrix composites. In extremely hot areas, such as the nose, active cooling with liquid hydrogen or liquid metals to rapidly remove heat will also be employed. The use of these materials and cooling technologies with innovative structural concepts results in important vehicle weight reductions, a key to single-stage-to-orbit flight.
The performance of rocket vehicles is primarily determined by the effective specific impulse of the propulsion system and the dry weight of the entire vehicle. Best performance can be attained by burning a hydrocarbon fuel at low altitudes, then switching to hydrogen for the rest of the flight to orbit. This assures high effective specific impulse, thus minimizing the volume and weight of the tankage required for propellants. Rocket engines have been studied which combine into one efficient design the ability to operate in a dual-fuel, combined-cycle mode. Lightweight versions of such engines are clearly possible, but will require technology demonstration and development.
The greatest leverage for high performance can be obtained by reducing the inert weight of the tanks, airframe, and other components, since they are lifted all the way into orbit and thus displace payload on a pound-for-pound basis. This holds for the entire vehicle in a single-stage design, and for the final stage (and to a lesser amount for the initial stage) in a two-stage vehicle. The use of new materials with very high strength-to-weight ratios at elevated temperatures could greatly reduce the weight of the tankage, primary structure, and thermal protection system. Thus, aluminum tankage and structure could be replaced with composite and metal matrix materials. Separate heat insulating thermal protection layers could be replaced with heat rejection via radiation by allowing the skin to get very hot, and perhaps by providing active cooling of some substructure. Wing and control surface weight can be minimized by using a control-configured design and small control surfaces.
Advances in these technologies, which should be feasible by the early 1990s, have the potential of reducing the vehicle dry weight dramatically, compared to designs for the same payload weight using shuttle technology. The performance of rocket vehicles using such technology would far exceed today’s values. Depending on the dry weight reductions actually achieved, the best vehicle could have either a single-stage fully-reusable design, or a fully-reusable two-stage design.
Attainment of low operating costs will depend most heavily on technology for handling and processing the launch vehicle and cargo in an automated, simple, and rapid manner. This includes self-checkout and launch from the vehicle’s cockpit, high reliability and fault-tolerance in the avionics, adaptive controls, lightweight all-electric actuators and mechanisms, standardized mechanisms for modularized servicing of the vehicle, and automated flight planning.
What goes up must come down—even in Earth orbit! The difference in space is that it can take millions of years for objects to be pulled back to Earth by friction with Earth’s atmosphere, depending on how close they are to Earth. An object 100 miles above Earth will return in a matter of days, while objects in geostationary orbit will take millions of years to reenter.
Since the dawn of the Space Age, thousands of objects with a collective mass of millions of pounds have been deposited in space. While some satellites and pieces of debris are reentering, others are being launched, so the space debris population remains constant at approximately 5,000 pieces large enough to be tracked from Earth (thousands more are too small to be detected). This uncontrolled space population presents a growing hazard of reentering objects and in-space collisions.
As objects reenter, they usually burn up through the heat of friction with Earth’s atmosphere, but large pieces may reach the ground. This can constitute a danger to people and property, although there is no proof that anyone has ever been struck by a piece of space debris. There are numerous cases of such debris reaching the ground, however, including the reentry of the U.S. Skylab over Australia in 1979, and the unexpected reentry of two Soviet nuclear reactor powered satellites in 1978 and 1983.
The hazard of in-space collisions is created both by multiple collisions between pieces of debris and by intentional or unintentional explosions or fragmentations of satellites. When space objects collide with each other or explode, thousands of smaller particles are created, increasing the probability of further collisions among themselves and with spacecraft. A spacecraft is now more likely to be damaged by space debris than by small micrometeorites. For large, long-life orbital facilities, such as space stations and spaceports, the collision probabilities will become serious by the year 2000, requiring bumper shields or other countermeasures, and more frequent maintenance.
All spacefaring nations should adopt preventive measures to minimize the introduction of new uncontrolled and long-lived debris into orbit. Such countermeasures include making all pieces discarded from spacecraft captive, deorbiting spent spacecraft or stages, adjusting the orbits of transfer stages so that rapid reentry is assured due to natural disturbances, and designating long-life disposal orbits for high altitude spacecraft. The increasing hazard of space debris must be halted and reversed.
In a purely physical sense, the Space Station will overshadow all preceding space facilities. Although often referred to as the “NASA” Space Station, it will actually be international in character; Europe, Canada, and Japan, in particular, plan to develop their own hardware components for the Station. As currently visualized, the initial Station will be a 350-foot by 300-foot structure containing four pressurized modules (two for living and two for working), assorted attached pallets for experiments and manufacturing, eight large solar panels for power, communications and propulsion systems, and a robotic manipulator system similar to the shuttle arm. When fully assembled, the initial Station will weigh about 300,000 pounds and carry a crew of six, with a replacement crew brought on board every 90 days.
To deliver and assemble the Station’s components, 12 shuttle flights will be required over an 18-month period. The pressurized modules used by the Station will be about 14 feet in diameter and 40 feet long to fit in the shuttle’s cargo bay. The Station will circle Earth every 90 minutes at 250-mile altitude and 28.5 degree orbital inclination. Thus the Station will travel only between 28.5 degrees north and south latitude. Unoccupied associate platforms that can be serviced by crews will be in orbits similar to this, as well as in polar orbits circling Earth over the North and South Poles. Polar-orbiting platforms will carry instruments for systems that require a view of the entire globe.
The Station will provide a versatile, multifunctional facility. In addition to providing housing, food, air, and water for its inhabitants, it will be a science laboratory performing scientific studies in astronomy, space plasma physics, Earth sciences (including the ocean and atmosphere), materials research and development, and life sciences. The Station will also be used to improve our space technology capability, including electrical power generation, robotics and automation, life support systems, Earth observation sensors, and communications.
The Station will provide a transportation hub for shuttle missions to and from Earth. When the crew is rotated every 90 days, the shuttle will deliver food and water from Earth, as well as materials and equipment for the science laboratories and manufacturing facilities. Finished products and experiment results will be returned to Earth. The Station will be the originating point and destination for flights to nearby platforms and other Earth orbits. The orbital maneuvering vehicle used for these trips will be docked at the Station.
The Station will be a service and repair depot for satellites and platforms orbiting in formation with it. Robotic manipulator arms, much like those on the shuttle, will position satellites in hangars or special docking fixtures. “Smart” repair and servicing robots will gradually replace astronauts in space suits for maintenance work, as satellites become more standardized and modular in design.
Space tethers have been known in principle for almost 100 years and were crudely tested in two Gemini flights in the 1960s. They were first seriously proposed for high atmosphere sampling from the shuttle by Italy’s Giuseppe Colombo in 1976, which led to a cooperative program between NASA and Italy scheduled to fly in 1988. In the past few years NASA has systematically explored tethering principles in innovative ways for many applications, so the value of space tethers is now becoming clear, and they will be incorporated in several space facilities.
Tethers in space capitalize on the fundamental dynamics of bodies moving through central gravity and magnetic fields. They can even provide a pseudo-force field in deep space where none exists. Energy and momentum can be transferred from a spacecraft being lowered on a tether below the orbital center of mass to another spacecraft being raised on a tether above it by applying the principle of conservation of angular momentum of mass in orbit. Upon release, the lower spacecraft will fly to a lower perigee, since it is in a lower energy orbit, while the upper will fly to a higher apogee. Thus, for example, a shuttle departing from a space station can tether downward and then release, reentering the atmosphere without firing its engines, while transferring some energy and momentum to a transfer vehicle leaving the station upward bound for geostationary orbit or the Moon. The result is significant propellant savings for both. Since the processes of transfer and storage of energy and momentum are reversible, outgoing vehicles can be boosted by the slowing of incoming vehicles. This can be applied in Earth orbit, in a lunar transfer station, or even in a two-piece elevator system of tethers on Phobos and Deimos that could greatly reduce propellant requirements for Mars transportation.
The generation of artificial gravity via tethers offers another class of opportunities. Spacecraft in orbit tethered together will experience an artificial gravity proportional to their distance from their center of mass. Current materials such as Kevlar can support tether lengths of hundreds of miles, allowing controlled gravity fields up to about 0.2g to be generated. By varying tether length, the forces can be set to any level between 0 and 0.2g. This can be used for settling and storing propellants at a space station, for life science research, and for simplifying living and working in space. By deliberately spinning a habitat on a tether only 1,000 feet long, levels of 1g or more can be generated at low revolutions per minute, with low Coriolis forces, to prevent nausea. Long tethers minimize the required mass of the structure, and can alter synthetic gravity by varying spin rate via reeling in or out of tethered counterbalancing masses.
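The claim that 1g can be reached at low revolutions per minute follows from the centripetal relation a = ω²r. A minimal check, treating the full 1,000-foot tether length as the rotation radius (an assumption made for illustration):

```python
import math

# Spin rate giving 1 g at the end of a 1,000-foot tether (a = omega^2 * r).
# Treating the full tether length as the rotation radius is an assumption.
r = 1000 * 0.3048                    # 1,000 feet in meters
g = 9.81                             # target acceleration, m/s^2
omega = math.sqrt(g / r)             # required angular rate, rad/s
rpm = omega * 60 / (2 * math.pi)
print(round(rpm, 1))                 # under 2 revolutions per minute
```

A short-radius centrifuge would need several times this spin rate for the same gravity level, which is why long tethers keep Coriolis effects mild.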
If a tether is made of conducting material in orbit about a planet with a magnetic field (like Earth), it will act as a new type of space electric power generator, obtaining energy directly from the orbital energy of the spacecraft, or from a chemical or ion propellant used to keep the orbit from decaying. If power is driven into the tether instead (from a solar array or other source) it will act as an electric motor, and the spacecraft will change altitude, the tether acting as propellantless propulsion with a specific impulse of above 300,000 seconds. This feature can also be exploited by a tethered spacecraft in Jupiter’s strong magnetic field. Propulsion can be provided for maneuvers to visit the Jovian satellites and very high power can be simultaneously generated for the spacecraft and its transmitters.
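The scale of tether power generation can be sketched from the motional EMF relation. All three values below are representative assumptions for low Earth orbit, not figures from this report:

```python
# Induced voltage along a conducting tether: EMF ~ v * B * L for motion
# perpendicular to the magnetic field. Values are illustrative assumptions.
v = 7700.0        # orbital speed in low Earth orbit, m/s
B = 3.0e-5        # rough geomagnetic field strength at that altitude, tesla
L = 20_000.0      # tether length, meters (20 km)
emf_volts = v * B * L
print(round(emf_volts))   # several thousand volts
```

Jupiter’s magnetic field is far stronger than Earth’s, which is why the same principle promises much higher power for a tethered spacecraft there.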
As humans move out to settle space, the consequences of long-term exposure to less than Earth’s gravity must be fully understood. In our deliberations, the Commission has found a serious lack of data regarding the effects on the health of humans living for long periods of time in low-gravity environments.
NASA’s experience suggests that the “space sickness” syndrome that afflicts as many as half the astronauts and cosmonauts is fortunately self-limiting. Of continuous concern to medical specialists, however, are the problems of cardiovascular deconditioning after months of exposure to microgravity, the demineralization of the skeleton, the loss of muscle mass and red blood cells, and impairment of the immune response.
Space shuttle crews now routinely enter space for periods of seven to nine days and return with no recognized long-term health problems, but these short-term flights do not permit sufficiently detailed investigations of the potentially serious problems. For example, U.S. medical authorities report that Soviet cosmonauts who returned to Earth in 1984 after 237 days in space emerged from the flight with symptoms that mimicked severe cerebellar disease, or cerebellar atrophy. The cerebellum is the part of the brain that coordinates and smooths out muscle movement, and helps create the proper muscle force for the movement intended. These pioneering cosmonauts apparently required 45 days of Earth gravity before muscle coordination allowed them to remaster simple children’s games, such as playing catch, or tossing a ring at a vertical peg.
As little as we know about human adaptation to microgravity, we have even less empirical knowledge of the long-term effects of the one-sixth gravity of the Moon, or the one-third gravity of Mars. We need a vigorous biomedical research program, geared to understanding the problems associated with long-term human spaceflight. Our recommended Variable-g Research Facility in Earth orbit will help the Nation accumulate the needed data to support protracted space voyages by humankind and life on worlds with different gravitational forces. We can also expect valuable new medical information useful for Earth-bound patients from this research.
Five U.N. treaties are currently in force regarding activities in space: the Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies (1967); the Agreement on the Rescue of Astronauts, the Return of Astronauts, and the Return of Objects Launched into Outer Space (1968); the Convention on International Liability for Damage Caused by Space Objects (1972); the Convention on Registration of Objects Launched into Outer Space (1976); and the Treaty on Principles Governing Activities on the Moon and Other Celestial Bodies (1979). The major space nations, including the United States and the Soviet Union, have ratified all but the last, which is more commonly referred to as the “Moon Treaty.” Only five countries have signed and ratified that agreement.
In addition to deliberations at the United Nations, the International Institute of Space Law, part of the International Astronautical Federation, provides a forum for discussing space law at its annual meetings.
A specific opportunity for global space cooperation will occur in 1992. Called the International Space Year (ISY), it will take advantage of a confluence of anniversaries in 1992: the 500th anniversary of the discovery of America, the 75th anniversary of the founding of the Union of Soviet Socialist Republics, and the 35th anniversaries of the International Geophysical Year and the launch of the first artificial satellite, Sputnik 1. During this period, it is also expected that the International Geosphere/Biosphere Program will be in progress, setting the stage for other related space activities.
In 1985, Congress approved the ISY concept in a bill that authorizes funding for NASA. The legislation calls on the President to endorse the ISY and consider the possibility of discussing it with leaders of other nations, including the Soviet Union. It directs NASA to work with the State Department and other Government agencies to initiate interagency and international discussions exploring opportunities for international missions and related research and educational activities.
As stated by Senator Spark Matsunaga on the tenth anniversary of the historic Apollo-Soyuz Test Project, July 17, 1985, “An International Space Year won’t change the world. But at the minimum, these activities help remind all peoples of their common humanity and their shared destiny aboard this beautiful spaceship we call Earth.”