
Toward Distant Suns

by T. A. Heppenheimer

Copyright 1979, 2007 by T. A. Heppenheimer, reproduced with permission

Chapter 4: Energy, at Leisure

Early in the morning of October 6, 1973, Egyptian troops stormed across the Suez Canal and cracked the Israeli defenses. Eleven days after the outbreak of war, representatives of the Arab oil-producing nations met in Kuwait. There they agreed to cut back production of oil and to embargo its shipment to the U.S., as well as to the nations of Europe which were friendly to Israel. The embargo was never total, and in March 1974 it was lifted entirely. But within those few months, the world had changed. The result of that epochal meeting in Kuwait was not only the energy crisis, which saw motorists getting in line at five a.m. at gas stations which would not open till eight. There was also the looming realization that cheap and accessible energy could never again be taken for granted.

The end of the cheap-energy era came quite suddenly. In 1947 the Persian Gulf’s oil reserves had come under the control of an Anglo-American consortium of oil firms, which set the price of crude at $2.17 a barrel. The price stayed at that level till 1959, when the oil cartel cut it to $1.79. This price cut led the Arab nations to organize a counter-cartel in 1960, which they called the Organization of Petroleum Exporting Countries (OPEC). For thirteen years, despite increasing control over the oil resources in their lands, the OPEC ministers were unable to budge the price of oil from that level of $1.79; but at the Kuwait meeting they made their move. The price went up to $5.12 a barrel.

In December 1973 the Shah of Iran, an advocate of much greater price hikes, held an auction of oil on the spot market, selling individual tanker-loads of oil not committed to long-term contract. When some sales went as high as $17 per barrel, the Shah knew he could safely push for what would be a large hike indeed. By early 1974 he had led OPEC into a price rise to $10.95 per barrel, and there it stayed.

With the lifting of the embargo, gas lines disappeared, supplies once again became readily available, and prices steadied. These higher prices cut the growth of oil consumption, and in the mid-1970s Arabian crude was in oversupply. In these same years, important new oil fields were brought into production in Alaska and the North Sea, lessening world dependence on the OPEC powers.

However, this apparent return to normalcy was in fact merely an Indian summer of the old order, with a new winter of oil shortages only a few years off. The second oil crisis was triggered early in 1979, when Iran’s Ayatollah Khomeini ordered a strike of Iranian oil production in order to build pressure for an overthrow of the Shah. When the Shah left the country and Khomeini was installed as head of a new government, he resumed oil exports, but at a level two million barrels a day less than before. This move, plus the interruption in exports during the strike, proved quite sufficient to take out all the slack in world oil production, putting massive, new, upward pressure on prices. Prior to Iran’s upheavals, the OPEC producers had frequently sold petroleum at a discount from their posted price of $12.70 per barrel. After the crisis they raised the price to $18, but by restricting production they found few difficulties in demanding surcharges, which lifted the actual price per barrel to $22 or $23. The result: a return to the gas lines and station closings of 1974, with motorists paying over a dollar per gallon.

The roots of this second and likely more enduring shortage trace to 1956. In that year Egypt seized the British-controlled Suez Canal, touching off an invasion by the combined forces of Britain, France, and Israel. The U.S. secretary of state, John Foster Dulles, refused to support this high-handed action, and forced a withdrawal by the invading forces. This meant a defeat for the old, aggressive methods of imperialism and gunboat diplomacy; but in the long run it meant that the industrialized nations would become dependent upon the oil of small, weak, unstable nations, without having direct control in those countries. At the very moment when the old policies of imperialism and colonialism stood to reap their biggest dividends, these very policies stood discredited and impotent.

Also in 1956 the geologist M. King Hubbert first pointed out the limits on the U.S. oil supply. He asserted the U.S. would discover and produce no more than 150-200 billion barrels, and this meant domestic production would peak out by 1970 and thereafter decline. More optimistic estimates predicted a total of 590 billion barrels, which would push the date of peak production to close to 2000. Hubbert in turn replied that those estimates assumed we would continue to discover new oil at the same rate as in the past, but that in fact new discoveries had dropped off drastically. Indeed, by tracing this falloff, Hubbert once again was led to predict a total of some 170 billion, with a peak of production in the late 1960s.
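The logic behind that prediction can be sketched in a few lines. A logistic “Hubbert curve” of cumulative production peaks in annual output during the year when half of the ultimate recoverable resource has been produced; pin the curve to the amount already produced, and the assumed ultimate then fixes the peak year. The sketch below is only an illustration of that reasoning, not Hubbert’s own calculation; the two ultimate-recovery figures come from the text, while the 1956 cumulative-production figure and the growth constant are assumed values.

```python
# A minimal sketch of the logistic ("Hubbert curve") reasoning, not Hubbert's
# own calculation.  Annual production peaks in the year when cumulative
# production reaches half the ultimate recoverable resource.  The ultimate-
# recovery figures are those quoted in the text; the cumulative production
# through 1956 and the growth constant k are assumed, illustrative values.

import math

def peak_year(q_ultimate, q_produced, ref_year, k):
    """Year in which a logistic cumulative-production curve reaches q_ultimate / 2.

    The curve is pinned so that cumulative production equals q_produced in
    ref_year and rises with logistic steepness k (per year).
    """
    return ref_year + math.log(q_ultimate / q_produced - 1.0) / k

Q_1956 = 53.0   # billion barrels produced through 1956 (assumed for illustration)
K      = 0.06   # logistic growth constant, per year (assumed for illustration)

print("Peak with a 170-billion-barrel ultimate:", round(peak_year(170.0, Q_1956, 1956, K)))
print("Peak with a 590-billion-barrel ultimate:", round(peak_year(590.0, Q_1956, 1956, K)))
```

With these assumed inputs the low estimate peaks in the late 1960s and the high estimate in the mid-1990s, which is the shape of the disagreement described above.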

Hubbert was right. Domestic production peaked out at 10 million barrels per day in 1970, and by 1973 was down to 9.3. In 1978, even with 1.2 million per day from Alaska, total production was down to 8.7 million and falling rapidly in the “lower 48” states. Daily consumption was up to some 17 million barrels, the difference being imported. In 1976 domestic oil reserves fell to 30.9 billion barrels; 2.8 billion were consumed. This drawdown was only partly offset by 1.1 billion barrels of additions, mostly bookkeeping shifts from “probable reserves” to “proved reserves.” Only 0.068 billion barrels were found in new fields. Nor were prospects bright for new discoveries in offshore fields. In the late seventies the major oil companies spent nearly $1 billion—the cost of their 1968 Alaskan leases—to gain the right to drill in a promising formation, the Destin Anticline, in the Gulf of Mexico. The result: dry holes. Much the same happened in another formation, Baltimore Canyon.

The 1978 discovery of huge new reserves in Mexico sparked hope that Mexico might become a major oil producer. In the long run, it probably will, but that oil is under the control of the domestic oil monopoly, Pemex. Among oilmen, Pemex has a solid reputation for featherbedding labor practices, for waste and inefficiency; it is known as the only oil company ever to lose money. This may in part explain the policy of Mexico’s president, Lopez Portillo: that oil will be brought into production slowly, and exports will be kept to modest levels.

When President Carter came to office in 1977, he identified coal as the key fuel, the expanded production of which would lessen our reliance on oil and natural gas. He called for America’s production to double to 1.2 billion tons per year by 1985. However, this goal may be out of reach, and the problems with coal illustrate sharply the disarray in the nation’s energy programs.

Since 1973 regulatory and legal delays have lengthened drastically. The time necessary to open a large coal mine has increased from five years to as much as ten or more. Many federal, state, and local agencies have set separate but overlapping requirements, which differ from place to place. As many as one hundred permits now are needed to open a mine, all of which must be obtained before production can begin.

A most valuable resource is the coal of the Rocky Mountain states. Much of Carter’s proposed expansion in coal production is to take place there, through the development of enormous strip mines. In Montana, seven such sites are planned, with a projected output of seventy-five million tons per year. Western coal is lower in energy content than is coal from eastern mines and must be hauled longer distances to its users. But it has an important advantage: It is low in sulfur.

Sulfur is one of the worst pollutants in coal, and in 1971 permitted emissions of sulfur oxides were set by law at a very low level. To comply with the law, utilities had two choices. They could install complex and trouble-plagued pollution-control devices, known as stack-gas scrubbers, to capture the sulfur oxides after burning and prevent their discharge. Such scrubbers have increased the cost of power plants by up to one-third. Or, the utilities could burn coal that is naturally low in sulfur. Much western coal is low enough to comply with the law, so that its use might be expected to grow rapidly.

However, in 1977 Congress passed amendments to the Clean Air Act, which now require that all new coal-burning facilities use stack-gas scrubbers. Thus, new coal-fired power plants cannot comply with the law simply by using low-sulfur coal; they must have costly scrubbers, too. This deprives western coal of its competitive advantage, which is the same as writing off prospects for its rapid development. Thus, if 1985 comes and U.S. coal production is lagging, it will not be because of an energy crisis. It will be due to a legal and regulatory crisis.

The situation is even less promising with nuclear power. The era of commercial nuclear electricity got under way during the Kennedy administration with the building of a number of plants of 200-300 megawatts capacity. These were similar in design to the well-tested reactors that powered naval submarines. By the late 1960s, with little hindrance from the Atomic Energy Commission, the nuclear industry was moving aggressively to promote and build a new generation of much larger plants. These plants, in the 800-1200 megawatt range, lacked the strong technical base of operating experience that had supported the earlier ones. To some nuclear critics, it was as if the airline industry had moved in one leap from propeller-driven aircraft to the wide-body jets, without an intervening decade of operating experience with jets like the 707 or DC-8.

Inevitably there were problems. The new plants were often out of service for repairs or maintenance. Increasing regulation of the nuclear industry meant that new plants could need up to twelve years from time of order to entering service; costs soared. By the mid-1970s utilities had lost much of their early interest in nuclear plants and had entered into a de facto moratorium by virtually ceasing their new orders. Still, the momentum of earlier orders meant that by early 1979 there were seventy-two plants in service, generating 13 percent of our electricity, with another ninety in various stages of construction. Chicago was getting half its power from the atom; New England, some 40 percent.

Then came Three Mile Island. On March 28, 1979, the nuclear plant near Harrisburg, Pennsylvania, suffered the worst reported accident to date in a commercial nuclear plant. This plant had been ordered in 1968, as one of the new generation, with a rated power of 865 megawatts. The accident started when a pump failure forced a “turbine trip” or generator shutdown. Auxiliary pumps were supposed to circulate cooling water to the reactor core, but could not do this because an operator had mistakenly left two key valves shut. In the control room another error took its toll: An important gauge malfunctioned, leading the operators to the mistaken conclusion that there was adequate cooling water in the core. Before things got back under control, the core had suffered massive overheating and partial melting, spilling very high levels of radioactivity into the containment structure, which housed the reactor proper. Some of this radioactivity leaked into the air, and many residents of the surrounding areas had to be evacuated. In the days that followed, it appeared to many observers that key people in charge, at both the Nuclear Regulatory Commission and the power-plant utility, simply had lacked adequate understanding of the systems they were trying to control.

This accident brought home to the nuclear skeptics the inherently risky nature of the enterprise and the need for better safety precautions. However, this accident did not mean the demise of the nuclear industry. Its long-term effect probably would be more nearly comparable to the Apollo 204 fire in January 1967. As the spacecraft of that name was undergoing ground tests, a sudden fire broke out, taking the lives of its three astronauts. The result was a wholesale reorganization of the Apollo project, and a strong new emphasis on safety, particularly against fire hazards.

It is an inherently risky matter to generate electricity with nuclear energy. It is also inherently risky to put two hundred people in an airplane and send them hurtling across the sky at thirty thousand feet and six hundred miles per hour. In the aviation industries, safety is part and parcel of all design and operating practice, and pilots and crewmen have long experience and training. Today’s challenge is for the nuclear industry to make a similar commitment to safety. To the degree it can do that, it will still have a future. Nevertheless, old dreams of a nuclear-electric America can hardly be sustained. Not only is there the safety question; there also is the availability of uranium. It is becoming more costly, and current projections suggest there will be only enough, at acceptable prices, to fuel two hundred large reactors for their thirty-year lives.

When President Carter was inaugurated, many of these events lay in the future, but the trend of our increasing dependence on Arab oil was plain. In April 1977 he addressed the nation on the subject of energy, calling on Americans to fight the “moral equivalent of war.” In this he was wrong; it was not the moral equivalent of war. It was, and is, the economic equivalent of war. But in facing these latest challenges, the U.S. is not without the means to cope.

It is the present policy of the Department of Energy to proceed with support of full-scale commercial synthetic-fuel plants, which by 1983 or 1984 will give us energy options we do not now possess. These options would take the form of proved and demonstrated technologies, which could then be expanded as needed.

Two of these options involve the conversion of coal into clean fuels by a process known as solvent refining. A third process will produce pipeline-quality natural gas from coal at a plant to be built in North Dakota. These initial plants will not in themselves solve the energy problem; the two solvent-refining plants, for example, will each produce the equivalent of only about twenty thousand barrels of oil per day, or a thousandth of our domestic requirements. Their importance lies in the new prospects they will offer. The U.S. has some 437 billion tons of coal that is currently mineable, enough to produce over a trillion barrels of liquid fuels. This is twice the total world reserves of petroleum, and six times the proved reserves of Arabia.
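Taken at face value, the figures above imply a conversion yield worth noting; the line below is derived only from those quoted numbers, not from any independent estimate.

```latex
% Implied yield, derived solely from the figures in the paragraph above
437 \times 10^{9}\ \text{tons of coal} \;\rightarrow\; \sim 10^{12}\ \text{barrels of liquids}
\;\;\Rightarrow\;\; \text{roughly } 2.3\ \text{barrels of liquid fuel per ton of coal}
```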

An even more significant synfuels industry may grow up around the oil shale of Colorado, Utah, and Wyoming. Those states contain vast deposits of shale, which is impregnated with a rubbery solid, kerogen. When heated by injecting air underground and lighting a fire, the kerogen turns to oil, which can be collected and pumped out. The total U.S. reserves of shale oil dwarf even the potential of oil from coal; these reserves amount to some two trillion barrels.

These synfuels industries will produce more than enough energy to tide us over to the era of permanent or renewable sources, but their products will not be cheap. Their cost will be some $30 per barrel at the refinery, and gasoline from these sources will cost $1.50 per gallon in today’s dollars. However, the auto industry by then will be building diesel-powered cars, which will get fifty miles to the gallon. When fifty-mile-per-gallon cars burn $1.50 gasoline, it will be the same as in the good old days, when cars got twelve miles per gallon but gas was $.36 at the pump.
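The arithmetic behind that comparison is worth writing out, using only the figures just quoted:

```latex
% Fuel cost per mile, using only the figures quoted above
\frac{\$1.50\ \text{per gallon}}{50\ \text{miles per gallon}} = 3.0\ \text{cents per mile},
\qquad
\frac{\$0.36\ \text{per gallon}}{12\ \text{miles per gallon}} = 3.0\ \text{cents per mile}.
```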

It is far too early to write off prospects for America. Time and again, Americans have gone ahead with business as usual, downgrading or ignoring challenges from overseas till they were galvanized into action by a sudden shock—a Pearl Harbor, a Sputnik. Then they have responded with vast and successful national efforts, confounding the critics who pronounced us too weak, too irresolute, too self-centered, or preoccupied with personal pleasures. It could well happen again and probably will.

And when, by whatever means, we finally begin to treat energy with the seriousness it deserves, we will go forward to develop permanent, renewable sources. These will last us not for mere decades, nor even for centuries, but for as far into the indefinite future as we will care to plan. When they are fully developed and operating, they will be the foundation for our civilization.

In recent years one such permanent source has dominated our energy research budgets: the fast-breeder reactor. In the breeder, uranium 238 is exposed to neutrons and is converted into an isotope of plutonium, Pu-239, which is fissionable and can produce power. The attractive feature is that the breeder not only produces power, it can produce more plutonium than it needs to keep running. It has been described as functioning like a soda machine in which you would put in a quarter and which would give you not only a soda, but thirty cents.
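The conversion is not a single step; it proceeds by neutron capture followed by two beta decays. The chain below is standard nuclear data, added here for completeness, since the text names only the end points:

```latex
% Breeding chain from U-238 to Pu-239 (standard nuclear data)
{}^{238}\mathrm{U} + n \;\longrightarrow\; {}^{239}\mathrm{U}
\;\xrightarrow{\beta^-,\ \sim 24\ \mathrm{min}}\; {}^{239}\mathrm{Np}
\;\xrightarrow{\beta^-,\ \sim 2.4\ \mathrm{days}}\; {}^{239}\mathrm{Pu}
```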

However, there is more to nuclear energy than just the production of power; and the breeder offers some of the worst side effects imaginable. Plutonium is separated out and purified by a chemical process known as Purex. The Purex process has a long and distinguished history; it was first invented to produce plutonium for nuclear bombs. So a fast-breeder reactor, with its facilities for producing plutonium, is nothing less than a factory for atomic weapons. Alvin Weinberg, former director of the nuclear laboratories at Oak Ridge, Tennessee, had this feature in mind when he wrote in Science, July 7, 1972:

We nuclear people have made a Faustian bargain with society. On the one hand we offer—in the catalytic nuclear burner [breeder]—an inexhaustible source of energy. . . . But the price we demand of society for this magical energy source is both a vigilance and longevity of our social institutions that we are quite unaccustomed to.

Of course, to say that disasters could happen is not to say that they will. The U.S. military has for over thirty years safeguarded some one hundred tons of bomb-grade plutonium, much of it in the form of bombs. But Weinberg’s “Faustian bargain” is quite real, and the most likely role for the breeder is as the energy source of last resort. As long as there are other prospects for achieving energy, the breeder will face the most severe opposition and will be developed only with great reluctance.

Fortunately, the fast-breeder is not the only available permanent energy source. It is not generally appreciated that there is a way to build a nuclear plant whereby it will tap a virtually inexhaustible fuel supply, require no enrichment of uranium, operate quietly and routinely with almost no shutdowns, while producing at least half a million kilowatts of power. Such reactors indeed are working every day and have done so since 1967 in Ontario, Canada. These are the reactors of the CANDU (Canadian Deuterium Uranium) type.

CANDU reactors use their neutrons very effectively and economically and run on natural, unenriched uranium. The low natural fraction of fissionable U-235, 0.71 percent, is no handicap because these reactors use heavy water (deuterium oxide) as a moderator. All reactors require a moderator: a substance which slows down neutrons while absorbing as few as possible. U.S. reactors use ordinary water as a moderator, but heavy water is much better and absorbs far fewer neutrons. Hence CANDU reactors can operate with incredibly low fractions of U-235.

Present CANDU reactors, unfortunately, do produce plutonium. However, such reactors can operate using an entirely novel nuclear fuel, thorium, which is three times as abundant as uranium but which cannot be fashioned into bombs. A CANDU reactor, fueled with thorium, would use a little natural uranium as a source of neutrons; the neutrons would convert some of the thorium (Th-232) into another isotope of uranium, U-233, which is fissionable and which would produce the reactor’s energy. Thereafter, the system would need no further uranium and only small amounts of abundant thorium; it would be nearly self-sufficient. To use the soda-machine analogy, it would not return a soda plus thirty cents, but it would deliver a soda plus at least twenty-four cents.
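Here too the conversion runs through short-lived intermediate steps; again the chain is standard nuclear data, added for completeness, since the text names only the end points:

```latex
% Breeding chain from Th-232 to U-233 (standard nuclear data)
{}^{232}\mathrm{Th} + n \;\longrightarrow\; {}^{233}\mathrm{Th}
\;\xrightarrow{\beta^-,\ \sim 22\ \mathrm{min}}\; {}^{233}\mathrm{Pa}
\;\xrightarrow{\beta^-,\ \sim 27\ \mathrm{days}}\; {}^{233}\mathrm{U}
```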

This is not the only hopeful prospect on the energy horizon. Within a very few years, the long-held dream of nuclear fusion should be well on its way to reality.

Serious fusion research started early in the 1950s, when such physicists as Lyman Spitzer used simple physical theories to propose that it should be possible to confine a plasma, a gas of charged particles, within magnetic fields, which would form a “magnetic bottle.” If the plasma were a mixture of deuterium and tritium, the heavy isotopes of hydrogen, and if it were heated to sixty million degrees centigrade and kept there, it would produce energy through reactions similar to those which power the Sun. A man-made star, a new source of power, would glow within the laboratory.

The first fusion experiments took place in such machines as Los Alamos’ Perhapsitron (the name reflected the dubious nature of the enterprise) and Princeton’s Stellarator (which expressed the hope of harnessing the energy of stars). The results were immediate. When the physicists turned on the power, the plasmas propelled themselves out of their magnetic bottles within a few microseconds.

Obviously, something was very wrong. By the early 1960s, it was clear that there were two major problems that stood in the way of success. The first was that plasmas in magnetic bottles were unstable. There was a depressingly long list of ways the plasmas could manifest instabilities; any one of these would prevent a magnetic bottle from confining the plasma.

The second problem was more subtle and involved the rate at which a plasma would leak from even a well-designed magnetic bottle. In the simple theory of the early fifties, the leakage was held to be governed by “classical diffusion,” which would diminish rapidly with the increasing strength of the magnetic field. Hence modest increases in the field strength would aid greatly in controlling the leakage. But the fusion machines of the sixties showed “Bohm diffusion,” first studied by the physicist David Bohm. Bohm diffusion diminished much more slowly with increasing magnetic strength, so that to achieve a properly low leakage rate would require an impracticably strong magnetic field. [Author’s footnote: To be precise, classical diffusion scales as 1/B², where B is the magnetic field strength. Bohm diffusion scales as 1/B.]
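A back-of-the-envelope comparison, using only the scaling laws given in the footnote, shows why this was so discouraging:

```latex
% What a tenfold increase in field strength B buys under each scaling law
\text{classical: } D \propto \frac{1}{B^{2}} \;\Rightarrow\; \text{leakage falls by a factor of } 10^{2} = 100,
\qquad
\text{Bohm: } D \propto \frac{1}{B} \;\Rightarrow\; \text{leakage falls by a factor of only } 10.
```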

These problems slowed fusion research to a crawl and spawned the legend that fusion would always be impractical barring a breakthrough. But the breakthroughs came. By the late sixties physicists armed with powerful new theories were conquering one instability after another. The true breakthrough came from the Soviet Union. In 1968 Lev Artsimovich, director of Moscow’s leading fusion research center, announced test results from a new type of fusion machine, the tokamak (a Russian acronym for “toroidal magnetic chamber”). The tokamak did not show Bohm diffusion. This meant that if only a tokamak were built large enough, it would work. It would produce power.

Soon many physicists were building tokamaks or modifying existing machines. In November 1971 Robert Hirsch, director of the U.S. fusion program, went before a Congressional hearing and stated that with sufficient funding, the U.S. could have a working fusion power plant by 1995. The result of these new developments was that fusion funding, at less than $30 million per year through the sixties, suddenly took off. In 1974 it was $51 million; by 1978 it was $290 million, and heading higher.

This funding served to build new, large tokamaks for further experiments, and results were not long in coming. In 1977 MIT’s Alcator machine set a record by attaining 50 percent of the plasma conditions required to produce net fusion power. [Author’s footnote: The conditions are given by the Lawson criterion, which states that the product of plasma density (particles per cm³) and confinement time (seconds) must be 6 × 10¹³ or greater. The Alcator record was 3 × 10¹³.] However, the Alcator results were at the comparatively low temperature of ten million degrees centigrade. In 1978 the Princeton Large Torus raised the plasma to sixty million degrees centigrade using a new method of heating, an important accomplishment showing that even at that high temperature the plasma was stable.

True fusion power, the condition of “breakeven” wherein a fusion experiment produces more energy than it needs to operate, will come with the next major experiment. This is Princeton’s Tokamak Fusion Test Reactor (TFTR), a $230 million machine scheduled for completion early in 1982. So within a very few years, we can anticipate a dramatic announcement: the opening of the Fusion Age.

Nor will the TFTR be the last word in fusion reactors. Intensive work is already being done to devise advanced fusion machines, which will make more effective use of their magnetic fields and thus lower their costs. Other experiments are studying the use of fuels more advanced than deuterium-tritium. Additionally, there have been advances in an entirely different approach to fusion, in which a pellet of fuel is instantaneously compressed or imploded by being struck with powerful energy beams. The Shiva experiment, at Lawrence Livermore Laboratory near San Francisco, has already shown how a large laser can do this. Its successor, called Nova, will go further and reach breakeven by this entirely new route, a year or two after the TFTR. Other experiments, aiming at producing fusion using electron beams or beams of high-energy atomic nuclei, also are making important contributions. But the success of the TFTR will be the key event.

And with this achievement, the fusion program will reach a milestone that compares with the event which opened the Atomic Age: the first successful nuclear reactor in 1942. The Princeton experiment will not mean that fusion power will thereafter be lighting America. As with atomic power, it will mean that perhaps in fifteen more years—by century’s end—the first fusion plants will be producing commercial power. In another fifteen years, say by the year 2015, fusion, like nuclear power today, will begin to make an important contribution to the nation’s energy needs.

Success in fusion, then, may form the foundation for civilization in the next millennium; but it will not transform our energy needs overnight. This drawback raises the question of whether we will speed things up by relying on that big fusion reactor in the sky, the Sun. Solar energy has in recent years become quite popular, and there is no doubt it has an important role to play. It can heat homes, provide hot water, and serve a variety of uses where conventional electricity is too costly, as in remote areas. Solar energy can grow crops to be fermented to produce alcohol, which can be blended with gasoline. As its proponents never tire of noting, solar power is decentralized, democratic, available to all. Advocates such as Amory Lovins (of Friends of the Earth) have hailed solar power as the way to free the nation from dependence on the oil companies and other energy giants. They suggest the prospect of a world where everyone will have his own personal energy system.

There is nothing fundamentally absurd about decentralized, personal energy systems; most people have a decentralized, personal transportation system in the garage. But the history of decentralized energy is not encouraging. Many farmers and ranchers used to have windmills to generate power. But the windmills were costly and unreliable, and these people gladly welcomed the chance to hook into the nearest electric power network. Even today, few people propose to build a life-style around the ideal of self-sufficiency in energy, since it is much more convenient to pay the monthly electric bill than to wrestle with bulky generating equipment in the backyard. The joys of decentralized power, of Lovins’ “soft energy paths,” somewhat resemble those of centralized transport, of many forms of mass transit. In both cases the advantages are more convincing to social planners or political reformers than they are to the average citizen.

There are a variety of projects for using the Sun to generate electricity on a large scale, but few of them carry much conviction. Furthest advanced is the “power tower,” which employs large fields of mirror reflectors to focus desert sunlight onto an elevated boiler. It may see limited use in some communities of the Southwest, but at up to fifteen times the cost of a coal-fired plant, it bids fair to be one of the most costly ways of producing electricity ever invented. Somewhat better prospects exist for large windmills, with blades as large as the wings of a 747; but these would largely be limited to windy areas in Wyoming and the Rockies. Most dubious of all is the proposed Ocean Thermal Energy Conversion system to produce electricity by taking advantage of the temperature difference between warm surface waters and cold deep waters. Commercial-size systems would need heat exchangers with the surface area of 150 football fields and would lose most of their performance with the growth of a layer of marine slime only one one-hundredth of an inch thick. If the plant were shut down for even a few days, the heat exchangers would be ruined by being overgrown with another form of marine life—barnacles.

So we can expect that future years will see increasing attention paid to a solar power system which works around the clock, can be built quickly, and offers the prospect of competitive costs for its energy. This, to be sure, is the power satellite. Should it go forward, this more than anything else would spark a space program of truly large dimensions. It is the power satellite which appears as the best initiative leading to space colonization.

The concept of the power satellite sprang full-blown from the mind of one man: Peter Glaser, vice-president of the consulting firm Arthur D. Little, Inc. In 1968 Glaser proposed that it would become possible to place arrays of solar cells in geosynchronous orbit, the arrays miles in dimensions and weighing a hundred thousand tons. The resulting electricity would be converted into a focused beam of microwaves and sent to Earth. There, it would be aimed at a receiving antenna or rectenna, which would convert the microwave energy back to electricity.

If the power satellite is to be practical, it will be necessary to develop vast new space projects. Immense space freighters will be needed to carry equipment to orbit at very low cost. Large crews of space workers will be shuttled to orbit and will need sustaining systems. The art of building space structures will need major developments. Above all, the cost of solar cells will have to drop from the present $10 per watt to $.50 or even less.

The power satellite thus is a tall order, but if it is not yet a formal project, it is already something more than a gleam in the eyes of its proponents. Early in 1976 the Office of Management and Budget requested that ERDA, the Energy Research and Development Administration, consider the powersat concept as part of its solar energy program. An ERDA Task Group reviewed the NASA work on powersats and recommended a three-year study program to answer key questions.

The result was a joint NASA-Department of Energy “Concept Development and Evaluation Program,” which got under way in 1977. Funded at $15.6 million (an amount later raised to $22.1 million), it was a three-year study effort with the announced goal “to build confidence in the viability of SPS [powersats] as a promising energy technology, or, at as early a date as possible, clearly identify barriers to SPS.” However, as early as 1977, key ERDA managers were noting that “no obvious and clearly insurmountable problems have been identified by the ERDA Task Group.” By early 1979 the word was out: The study would favor the powersat and would recommend that it be funded for further study as a promising new energy source.

More powersat funding is likely to come. Early in 1978 Congressman Ronnie G. Flippo introduced a bill to allocate $25 million to start technical development of powersats. (His motives were not exactly disinterested; NASA’s Marshall Space Flight Center, a leading center for powersat studies, lies in his district.) In June 1978 the “Flippo bill” passed the House of Representatives by a vote of 267 to 96. A similar bill introduced in the Senate died in committee when Congress adjourned. Nevertheless, it was clear that Congress had shown its interest. The administration responded by requesting $8 million for powersat studies in fiscal year 1980, up from $6.6 million in 1979.

Passage of the Flippo bill, or else announcement of a new administration initiative following completion of the 1977-80 study effort, would put the power satellite in roughly the same funding position as was fusion in the late 1950s. But the race is not always to the swift, nor to the earliest starter; the first commercial powersats could be on-line even before the first commercial fusion plants.

A competition between fusion plants and power satellites will be a most leisurely, drawn-out affair. Still, even before this competition is well begun, it is possible to make a small bet as to the winner.

It may be that fusion plants will have many similarities to the plants spawned by their parent technology, nuclear power. A fusion plant will probably be a huge, costly, complex affair, prone to expensive shutdowns. Its design will make it as much a plumber’s nightmare as any nuclear plant. Its unreliability would not stem from the difficulty in operating numerous safety systems designed to keep its reactions under control; quite the contrary. The fusion reaction will be so difficult to start and maintain, so easily quenched, that it will take considerable effort merely to keep it running normally.

Fusion plants will not produce plutonium (at least, not without major modifications), and they will not be capable of a core meltdown. But they will still produce copious radioactive waste in the form of heavy reactor parts irradiated to the point where they lose strength and must be replaced, a task that will be no mean feat. In addition, they will produce radioactive tritium, which mixes freely in water and will call for the strictest of controls. To a nuclear critic, it will not go unnoticed that the containment structure which physically houses a tokamak reactor will be indistinguishable from that which houses a nuclear one. [Author’s footnote: A tokamak must operate in vacuum; its containment building will keep the atmosphere out. This contrasts with that of a fission reactor, which serves as a safety measure to keep radioactivity in.]

Nuclear power has thus far been a technical disappointment, and its successor, fusion power, may share many of its weaknesses. By contrast, power satellites may flow from an area of technical strength: electronics. A powersat’s solar cells will be solid-state electronic devices. Much the same will be true of the klystrons or microwave generators, which will transform the solar cells’ electricity into the focused beam of microwaves directed to the rectenna. The control and shaping of the beam will be by other solid-state electronics known as ferrite-core phase shifters. At the rectenna still other devices, the Schottky-barrier diodes, will convert the microwaves back to electricity.

The safety of a powersat will be assured without complex plumbing or opportunity for human error, for the powersat’s transmitting antenna and the ground rectenna will have to cooperate. The transmitting antenna will have many small klystrons, all of which must oscillate in step like soldiers marching if the microwave beam is to be formed and focused. Soldiers cannot keep in step without a sergeant counting cadence; the “sergeant” will be the rectenna. It will send upward a pilot beam, fed with a small amount of its energy, which will serve as a reference signal. If the main power beam wanders, the pilot beam will go out, and with it the focus of that main beam. It will spread out, dissipating its power harmlessly into space.
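The retrodirective principle behind the pilot beam can be illustrated with a short computational sketch. Everything in it is hypothetical: the array geometry, the number of elements, the frequency, and the distances are values assumed only for illustration, not a description of any actual powersat design. Each transmitting element conjugates the phase of the pilot signal it receives, so that all contributions arrive in step at the pilot’s source; take away the pilot, and the elements have no common reference, so the beam loses its focus.

```python
# Toy one-dimensional illustration of retrodirective ("pilot beam") phase control.
# All numbers and the geometry are hypothetical; the point is only that
# phase-conjugating a received pilot signal focuses the transmitted beam back
# at the pilot source, and that losing the pilot defocuses it.

import numpy as np

rng = np.random.default_rng(0)

WAVELENGTH = 0.122          # metres (an assumed microwave frequency near 2.45 GHz)
K = 2 * np.pi / WAVELENGTH  # wavenumber

n_elements = 1000
xs = np.linspace(-500.0, 500.0, n_elements)   # element positions along the antenna (m)
target = np.array([0.0, 40_000_000.0])        # pilot source far below (m), illustrative

# Distance from each element to the pilot source.
dists = np.hypot(xs - target[0], target[1])

# Phase of the pilot signal as received at each element.
pilot_phase = K * dists

# (1) Locked on the pilot: each element transmits the conjugate phase,
#     so all contributions arrive at the target in step.
field_locked = np.sum(np.exp(1j * (K * dists - pilot_phase)))

# (2) Pilot lost: no common reference, so the element phases are effectively random.
field_lost = np.sum(np.exp(1j * (K * dists - rng.uniform(0, 2 * np.pi, n_elements))))

ratio = abs(field_locked) ** 2 / abs(field_lost) ** 2
print(f"Power at target, locked vs. unlocked: about {ratio:.0f} times higher")
```

In this toy model the locked beam delivers on the order of a thousand times the power at the target that the unlocked one does, roughly the number of elements in the array, which is the essence of the fail-safe feature described above.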

The safety of microwaves has recently become a topic for research, but over forty years of experience with radar and similar enterprises have not shown any effect of microwaves other than warming, as in diathermy. Unlike radioactivity and its radiations, microwaves have not been found to damage cells or genes at low levels of exposure. Microwaves are much more nearly similar to the oscillating electromagnetic fields with which we all live night and day because of our household and office use of alternating electric current.

The present U.S. standard for exposure to microwaves is ten milliwatts per square centimeter, which is exceeded in the central regions of the rectenna. The rectenna thus will not quite be a place for family picnics. For protection of the general public, however, its most important safety feature may well be simple: a chain-link fence.

It is thus the powersat that can renew our reach into space and spark our hopes for the building of space colonies. There will be in this a taste of things to come. The colonies in time may serve as outposts from which we will face the vaster space that is the milieu of the stars. The powersats themselves will remind us of that, for from the ground they will be seen to reflect sunlight from their orbits, 22,300 miles up. Glistening, twinkling, shining in the night, the powersats will appear as an arc of bright dots across the sky. They will be, indeed, new stars.

To build a powersat, however, will take more than electronics. There will be need for major advances in rockets. This, of course, is an old story; the dreams of astronautics have always flown as payloads requiring such rockets. Nor are such rockets new; their development and improvements have covered much of this century and today are still far from finished. Some people have suggested that the space shuttle will play a major role in building powersats, but in fact it will see only limited use. The actual rockets that will serve are the topic of the next chapter; and as usual, in keeping with the long-term character of astronautics, a bit of history will be in order.