Sun Power: The Global Solution for the Coming Energy Crisis

by Ralph Nansen

Copyright 1995 by Ralph Nansen, reproduced with permission

Chapter 6: Exploring the Options

The energy crisis initiated by the oil embargo of 1973-74 led to the investigation of many potential alternative energy sources. Some were explored by the energy-generating companies—both privately and publicly owned—and some were proposed by individuals. Others were under the direction of developmental agencies. The federal government consolidated its efforts in the Department of Energy, which was formed by merging several governmental agencies. During the ensuing years of the 1970s the activity level was very high, but with the breakdown of the oil cartel the urgency faded and much of the activity returned to business as usual. However, a great deal of basic knowledge was generated and many ideas were suggested. Let’s review what was considered at the time and what has transpired since.

Conservation — The Organizing of Scarcity

In the 1970s, conservation was the primary emphasis because it could be immediately effective. The theory is that the fastest and cheapest source of energy is that which is saved. This is certainly true; however, there are secondary effects of conservation that can be devastating if it is carried beyond improved efficiency for any length of time. There are three methods of achieving conservation of energy, and all three have been used extensively.

First and foremost is price. As the cost of energy rises, increased either by natural effects of the marketplace or deliberately, the consumer is forced to use less. This certainly happened to us in the 70s. Oil prices (not costs) were increased 1000% by the OPEC nations. As price was increased, we automatically turned to conservation; we had very little choice. Some people borrowed money to maintain their previous lifestyle, but that could not go on for long and reality soon caught up with them. They, along with the rest of us, soon found ways to reduce our energy use. We drove much less by turning to carpools, taking fewer Sunday afternoon drives, and using smaller cars with standard transmissions. We closed off the extra rooms and lived in one room instead of the whole house. We added insulation and weather stripping. We turned the thermostat down. These forms of conservation became involuntary when we could no longer afford not to follow them.

The second form of conservation is conservation forced by government decree. In this case, laws are passed or executive orders are issued. Prime examples of forced conservation in the US are the 55-mile-per-hour speed limit, the reduced thermostat settings mandatory in all commercial buildings, the mandatory increases in average gas mileage of all new automobiles, and building code changes that set high insulation standards and limit the area of windows allowed.

Rationing of energy is the next step in forced conservation. Standby rationing was placed on the books in the 70s, but fortunately was not imposed. A form of energy rationing was used in some parts of the country—in some states, gasoline could only be purchased on odd- or even-numbered days, depending on license number. Electric power utilities refused to install services for new houses in some locations because they had insufficient generating capacity to add new customers.

The third form of achieving conservation is to appeal to the patriotic and moral character of the people without committing to any positive action. There have been government media campaigns touting conservation “for the good of our country” and because it is the “right” thing to do. We should insulate our homes. We should not keep our thermostats set at a comfortable level; rather, we should experience some suffering or we are not doing our part. We should not drive alone in our cars; we should carpool or ride buses. We should strictly obey the 55-mile-per-hour speed limit.

Conservation was the only way we could attack the immediate problem in the 70s, but if we rely on this approach as the only permanent solution, we are surely building the major elements of economic disaster and a continuing reduction in our standard of living. Conservation cannot provide the foundation for increased productivity or an increase in our standard of living, nor can it provide the economic growth for which most of us strive. It will destroy the American dream if carried to extremes. Unfortunately, conservation is the primary course we are currently following.

The Resurrection of Coal

The United States has large reserves of coal, and in fact many parts of the country rely on it for electric power generation. Coal supplies more than 27% of our total energy use and 56% of our electricity. It has been a major world fuel for centuries, but it has been replaced by alternative sources whenever possible. There are several reasons for this. It is awkward to obtain. Whether it is mined underground or in strip mines, the problems are severe. In underground mining the safety and health of the miners is a major concern; the incidence of accidents is among the highest of any profession, and the health problems are among the worst known. The annual fatality rate would be unacceptable if we had to deep mine all of our future energy.

Strip mining has its own set of problems. First among them is the damage to our environment and the great areas of the earth that must be devastated. The land areas required are larger than for any other known energy solution.

After coal is removed from the ground it must be burned to provide thermal energy, either for direct heating or for conversion to electric power. This means the coal must be transported to the geographic locations requiring the power, or the electric power plant must be located at the mine and transmission lines built to carry the electricity where it is needed. For example, if the decision were made to transport coal by train to a large power plant with the generating capacity of Grand Coulee Dam, it would take more than one thousand standard coal cars per day to maintain the boiler fires of that one facility.
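
A rough check of that figure is possible. The sketch below is mine, not the author's; the plant output, efficiency, coal heating value, and car capacity are all assumed round numbers, chosen to be typical.

```python
# Back-of-the-envelope check of the "thousand coal cars per day" figure.
# Every input here is an assumed round number, not a figure from the text.

plant_output_mw = 6_500        # assumed Grand Coulee-class electrical output
thermal_efficiency = 0.33      # assumed steam-plant conversion efficiency
coal_energy_mj_per_kg = 20.0   # assumed sub-bituminous coal heating value
car_capacity_tonnes = 90.0     # assumed load of a standard hopper car

electrical_j_per_day = plant_output_mw * 1e6 * 86_400
thermal_j_per_day = electrical_j_per_day / thermal_efficiency
coal_tonnes_per_day = thermal_j_per_day / (coal_energy_mj_per_kg * 1e6) / 1_000
cars_per_day = coal_tonnes_per_day / car_capacity_tonnes

print(f"{coal_tonnes_per_day:,.0f} tonnes/day -> {cars_per_day:,.0f} cars/day")
# -> about 85,000 tonnes per day, on the order of a thousand cars
```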

Even though there are great reserves of coal in the ground, much of this may not be usable because of the difficulty in mining it. If Richard C. Duncan is correct in his analysis of the worldwide fossil fuel reserves, then only 10% of the known reserves can be economically extracted. In that case we will soon be running short of coal at the current rate of usage.

The other problem with coal, however, is what happens when massive amounts are burned in the earth’s atmosphere. History has many graphic examples: Pittsburgh, the city built on coal, with coal, and under coal soot; London, with its coal-smoke-augmented smog that ultimately resulted in many fatalities, corrected only when the burning of coal in private homes was prohibited; China, with few private automobiles, whose coal-induced smog in the cities rivals car-choked Los Angeles at its worst in the 70s; Eastern Europe, where an environmental nightmare was revealed when the borders were opened, with the burning of coal a major pollutant; Germany, where acid rain in the northeast part of the nation is destroying mountain lakes, injuring crops, and inhibiting forest growth even after most of the visible contaminants have been removed with modern scrubbers required by stringent environmental regulations.

What would it be like if we started burning coal seriously in the US? We are already facing the frightening aspect of the greenhouse effect on world climate due to excessive levels of carbon dioxide in the atmosphere. The world air temperature has already risen more than a degree and a half over the past century. The natural disaster of the 1991 eruption of Mt. Pinatubo in the Philippines had one favorable side effect for the world: its dust cloud temporarily reduced world air temperatures by about half of the increase attributed to carbon dioxide. But this is only a brief respite, and we certainly can’t count on a volcanic eruption every few years to counteract the damage we’re doing.

The primary effort in expanding the use of coal has been to reduce the impact of its emissions. Much has been done to remove many of the obnoxious exhaust particles, but there is no solution for the carbon dioxide. Each step taken to clean up the combustion products adds cost—and there is a long way to go, so the real cost will continue to escalate. Much of the rest of the world, outside of the US, has done little to minimize atmospheric pollution from coal.

Natural Gas — Oil’s Unwelcome By-Product

Natural gas has been oil’s partner in the abundant energy world of the twentieth century. Often it was an unwelcome by-product of drilling for oil and was frequently burned off at the wellheads just to get rid of it. Today it is becoming increasingly important in the US effort to minimize air pollution from burning fossil fuels. It is one of the cleanest burning fuels available in significant quantities in America. A great number of households across the land depend on natural gas for cooking and heating. Many vehicles are being converted to its use. Consumption of natural gas will undoubtedly expand as costs and environmental impacts from other fossil fuels increase. In the US today nearly all new electrical generating capacity uses natural gas turbines. They are cheap to build, and gas prices are currently low. This has lulled the utilities into a false sense of security. Unfortunately, natural gas is a finite resource that, if Duncan’s analysis is correct, has already passed its peak of production in the world and will soon join the league of scarcity that will drive its price ever upward.

Natural gas is a very good option for the US as an interim fuel until a new energy system can be developed. It is certainly more desirable from an environmental standpoint than coal.

Nuclear Power — Future Unknown

After the atomic bombs dropped on Hiroshima and Nagasaki brought an abrupt close to World War II, it was only logical that there would be attempts to use atomic energy in many other ways. Its promise seemed unlimited. The amount of energy locked inside the atom boggled our minds. Scientists talked about powering an automobile with a fuel capsule the size of a pea. Electricity would be so cheap it would not be necessary to meter it. Ships would cruise the oceans of the world without the need for refueling.

The euphoria of atomic energy did not last for very long as the difficulties of making the predictions come true were revealed. However, work progressed quite rapidly, and by 1951 the first electric power was generated by nuclear energy near Arco, Idaho. This was followed by the first civilian atomic power plant at Schenectady, New York, in 1955. Atomic power went to sea in 1954 with the launch of the US submarine Nautilus, the world’s first nuclear-powered vessel; the first nuclear-powered aircraft carrier, the Enterprise, followed in 1960.

These were all successful ventures, and other plants and ships were constructed. There were some serious problems, however. Small power plants were proving to be impractical for most applications, energy conversion efficiency was not very high, and the radiation hazards were a major concern. In the meantime the Soviet Union, and then England, had developed atomic bombs of their own, and other nations were preparing to join the atomic club. The hydrogen bomb added to the uncertainties of the world.

Even so, many nations were turning to nuclear power plants as a source of electricity. Their use expanded during the decades of the 1960s and 70s.

Even before the accident at Three Mile Island, Pennsylvania, in 1979, fear of the atom had become a fundamental problem in the United States. Serious questions were being asked. Is a nuclear power plant safe? Do we want one in our backyard? Will earthquakes destroy it and subject us to lethal or disfiguring radiation? Some questions were asked out of ignorance and some out of real concern. Emotions ran high on both sides of the issue. Many were motivated by blind fear and the fact that nuclear power burst on the world with two violent and devastating shocks. Two cities were vaporized. Masses of humanity were gone or cruelly maimed. The people of the world were sensitized to the power of the atom.

A very simple analogy can be used. If you have walked on a carpet on a dry winter day and then touched a light switch and experienced the instant jolting shock of static electricity, you will know what I mean. How many times does it take before you can hardly bring yourself to touch the switch again? Yet it is that same basic electricity in a different form and fully controlled that will light the lights in your home when you turn on the switch. Normally, you do not receive a shock. There is no connection between the two phenomena, except that they both involve electrons and are electrical in nature. The atom bomb and an atomic power plant are similarly dissimilar, with the common element being the energy in the atom in this case. Yet we have been sensitized by the devastation of the atom bomb to fear anything connected with atomic energy.

The resulting situation with nuclear power is very confusing. During the late 70s, after the oil embargo, the Carter Administration took two opposing approaches to atomic energy. It moved to accelerate the licensing of new conventional nuclear plants and to pursue research on breeder reactors, but at the same time it banned the construction of new fuel-reprocessing facilities that were needed to extract the remaining useful fuel from used fuel rods. Thus, both the problem of obtaining sufficient low-cost fuel and the problem of storing nuclear waste were compounded. The situation has not improved since then.

Nuclear power supplies about 22% of the electricity in the United States, which is 8% of our total energy use, but safety and nuclear waste have become serious considerations. As a result, new plant construction in the United States has stopped. The uncertainties of nuclear power were highlighted by the accident at the Three Mile Island plant. Then came the disaster in 1986 at the Chernobyl Power Station near Kiev, with clouds of fallout raining down on much of Europe as well as the Soviet Union itself. The uncertainties turned to certainties. The circumstances that caused the Chernobyl accident may not be applicable here, but it happened, and nothing can change that in the mind of the public.

People do not understand clearly what they cannot see, feel, touch, or hear, and nuclear radiation fits all of these categories. The public must rely on scientists and engineers for information. Many people doubt the honesty of the highly educated on matters that cannot be readily understood by the average citizen. Unfortunately, in some cases they have good cause to doubt. They have been duped by the intellectual elite from time to time, either through deliberate actions or simple intellectual stupidity. It will take a lot of convincing to make the public comfortable with nuclear power, and in the process of adding the necessary safeguards the costs will be escalated dramatically. An example is the 1,150 megawatt Seabrook reactor in New Hampshire, which was completed in 1986 at a cost of $4.5 billion—four times the original estimate. Much of the price escalation was driven by safety concerns.

One problem that receives little attention is the question of what to do with a nuclear power plant when it is worn out. Because of the temperatures, stresses, and nature of the environment in which the machinery must operate, the plant has a finite life of 30 to 40 years. Some of the early plants are now reaching that age. The Trojan plant in Oregon, on the Columbia River, developed serious maintenance problems and was permanently shut down after only 18 years of operation. The cost of decommissioning the plant has been estimated at $450 million. Should it and the other old plants be rebuilt at enormous cost so they can continue to operate, or must they be decommissioned and their giant cooling towers left to stand as tombstones over their graves?

Some plants have died in infancy. The Satsop nuclear plants in Washington State were abandoned after runaway cost increases, due to changing safety requirements and bad management, drove WPPSS (Washington Public Power Supply System) to stop work before they were completed—causing WPPSS to default on the bonds used to finance the venture. The plants’ cooling towers rise in ghostly solitude above the surrounding forests, mute testimony to the debacle. The only signs of life are the flashing strobe lights warning passing aircraft that here lies one of man’s failures. Even the power to illuminate the night has to come from another source.

Many nations, such as France, turned to nuclear power because they felt it was the best option at the time; even today, they look toward it for the future. They either do not have their own fuel—such as oil or coal—or they could foresee the time when those resources would be gone. Even with the increasing costs of nuclear fuel and power plant construction, nuclear power is still cheaper than generating electricity with oil.

One limiting factor for nuclear power on a large scale is the availability and cost of the fuel. One way to extend the uranium supply is to develop and build breeder reactors, which have the potential of extending the useful energy in the fuel by about a hundred times. The problem is that breeders generate plutonium, which can be used to make bombs as well as power plant fuel. Extensive research has been carried out in the US, but no operational breeder reactors have been built for power generation.

Many years ago, shortly after I began my career in the aerospace industry at Boeing, we started hearing rumors about a gigantic nuclear explosion in the Soviet Union. There was no mention of it in the news media, but among the engineers the feeling was that something big had happened and nobody was quite sure what. It was not until after the disintegration of the Soviet Union that the truth about the extent of damage caused by Soviet nuclear activities was revealed. In 1957 a nuclear-waste storage tank at Mayak exploded, sending vast quantities of radioactive material into the air. Those old rumors were finally confirmed.

The problems at the Mayak plutonium production plant had started much earlier, when in 1949 it began spilling radioactive waste directly into Chelyabinsk’s Techa River. Today the Chelyabinsk region in the southern Ural Mountains is considered to be one of the most irradiated places on the planet. Radioactive waste was also dumped into Lake Karachay. The lake is now so irradiated that just standing on its shore for an hour could be fatal. If there were to be an explosion in the waste storage tanks now at Mayak, it could discharge about 20 times the amount of radiation released during the Chernobyl disaster.

The extent of the Chernobyl disaster is also becoming clear as we see on television the families and children suffering the ravages of nuclear radiation exposure. The picture is very grim, and the cost of cleanup is beyond the ability of the bankrupt former USSR. Worse yet is the fact that there are over a dozen other plants of the same design as Chernobyl, leaving these countries with both a current disaster they cannot handle and the possibility of more disasters shadowing the future.

In other places spent nuclear fuel has been indiscriminately dumped into the waterways, posing a serious danger of contaminating drinking water. Several nuclear ships have dumped damaged reactors and used nuclear fuel assemblies into the sea. Decommissioned nuclear submarines lie in the port of Murmansk on the Kola Peninsula, leaking radiation into the sea and air. Nuclear waste in Russia is a cloud of doom hanging on the horizon.

Disposal of nuclear waste is not just a problem in the former Soviet Union. The United States and other countries using nuclear power are faced with the problem of disposing of the waste, some of which has a half-life of centuries or more. So far no good methods have been found that will ensure its safe disposal for indefinite periods. In the meantime billions of dollars have been spent trying to solve the problem.

The future is very clouded for nuclear power. It has been used for decades, but no clear path has emerged for its expanded use, and time has not been kind to its proponents.

Synthetic Fuels — High-Cost Insurance

Much effort has been applied to developing what are generally called synthetic fuels. This category of fuels was one of the major thrusts by the government in the late 1970s; Congress approved $88 billion to be spent on their development. These fuels take many forms, with the most common being processing coal to produce liquid fuels, extracting the oil contained in oil shale, and processing tar sands. All of these sources have been known for many years. As I mentioned earlier, it was the use of kerosene made from coal that stimulated the drilling of the first oil well in the United States.

Several pilot plants were built in the 1970s to refine the processes and determine costs. Unfortunately, the fuel costs were higher than for fuels made from natural oil, and facilities for processing large quantities of these fuels were not constructed. The costs were driven by the amount of energy required by the processes, and there were other major obstacles to overcome. Massive amounts of materials must be mined and handled. Large quantities of water are required for some of the processes, and as is true with coal slurry pipelines, the tar sands and oil shale sites are generally in areas that have limited water supplies. But most of all, it takes enormous quantities of energy—however generated—to create these fuels that are themselves intended as energy sources.

Germany depended heavily on synthetic fuel made from coal to fuel their war machines in the late stages of World War II. South Africa is currently the only nation with large production capability for making synthetic fuel from coal. It was forced to this method by politically motivated trade embargoes and outside pressures.

Much of the $88 billion appropriated for synthetic fuels in the US was never spent as the true costs of the resulting fuel became known. By then the oil embargo was over. Foreign oil prices had dropped, and the politicians thought it was a good idea to just forget the whole thing. It is doubtful that we could expect this category of fuels ever to be used extensively unless there is no other alternative.

Earth-Based Solar — The First Step Toward a New Future

Probably the most popular of the new initiatives of the 1970s, and one that was enthusiastically pursued by the government, private companies, and individuals, was earth-based solar power. It seemed to fit all the ideals that people could imagine. The source was free, it was nondepletable, it was environmentally clean, it could be utilized by everyone, and it could be distributed in a way that would eliminate the need for depending on a utility company for power. Many thought it was the wave of the future and the path to their personal utopia. It was the showpiece of the Carter Administration’s energy development plan, which had a goal of solar energy providing 20% of the nation’s energy by the year 2000. How could it fail, as sunlight could deliver the equivalent of 4,690,000 horsepower per square mile of the earth’s surface?
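
That horsepower figure can be checked with standard constants. Interestingly, it corresponds to the solar constant, the intensity of sunlight above the atmosphere, falling on one square mile; at ground level, atmospheric absorption reduces the number substantially. A minimal sketch:

```python
# Check of the ~4,690,000 hp per square mile figure.

solar_constant_w_per_m2 = 1_353   # sunlight intensity above the atmosphere
m2_per_square_mile = 1_609.34**2  # (meters per mile) squared
watts_per_hp = 745.7

hp_per_square_mile = solar_constant_w_per_m2 * m2_per_square_mile / watts_per_hp
print(f"{hp_per_square_mile:,.0f} hp")   # -> ~4,700,000 hp
```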

A large variety of earth-based solar options were investigated, and several are being used in limited application today, but the early promise has been difficult to achieve for reasons I will discuss as we go along.

Many of the approaches were directed toward heating buildings and water, or using solar cell panels for providing localized electric power. These are generally known as “distributed energy” sources, or “soft technology” energy. In addition to the distributed forms, small-scale electric power plants have been developed. The most significant are windmills (often called wind turbines), thermal systems that use mirrors to concentrate sunlight to heat a working fluid that drives turbo-generators, and various arrangements of solar cell farms that provide electricity directly.

Unfortunately, all ground-solar systems suffer from two basic problems. The first is the intermittent nature of “solar insolation,” which is the amount of sunlight that shines on a given area. To put it simply, the sun goes down at night, and clouds often hide it during the day. This usually happens when we need the energy the most—during the winter and when it is stormy. As a result, the second problem arises. In order to make a ground-based solar system work as a complete energy supply system, it must be significantly oversized, generate energy above peak requirements, and be equipped with an energy storage system. Generally, to keep these components from becoming too large, a conventional backup system (i.e., another method of generating energy) is also required.

Solar Heating — The Simplest Form of Solar Energy

The simplest form of solar energy uses sunshine to directly heat buildings or water. We can obtain some advantages from it by simply controlling the window shades on the sunny side of our homes. However, that is not the level of energy usage that can address the serious energy needs of the nation, so I will restrict my discussion to heating systems that could make significant contributions.

The cost of most things we buy varies with size or capacity, and a solar system is no different. Because the energy from the sun is available for less than a third of the time in the winter, even under ideal weather conditions, the system must be large enough not only to heat the building or water while the sun shines, but also to provide enough extra heat at the same time to place in storage for the hours after the sun has set. It is a little like buying an automobile that, in order to take us to our job thirty miles away, has to have a large enough engine to provide all the necessary power in the first ten miles so it can coast the last twenty. Obviously, the car will not coast for twenty miles, so we must add some method of storing the energy for the last twenty miles. In the case of an automobile, this could be accomplished with flywheels or batteries.
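
The sizing logic can be put in numbers. This sketch assumes a steady five-kilowatt winter heating load and eight hours of usable sun per day; both are illustrative assumptions, not figures from the text.

```python
# Sizing a solar heating system: the collector must gather a full
# day's heat during the sunlit hours alone, with the surplus stored.

heat_load_kw = 5.0        # assumed steady winter heating load
sun_hours_per_day = 8.0   # assumed usable winter sunshine (1/3 of the day)

daily_heat_kwh = heat_load_kw * 24
collector_kw = daily_heat_kwh / sun_hours_per_day
storage_kwh = heat_load_kw * (24 - sun_hours_per_day)

print(f"collector: {collector_kw:.0f} kW ({collector_kw / heat_load_kw:.0f}x the load)")
print(f"storage:   {storage_kwh:.0f} kWh to carry the sunless hours")
# -> the collector must be 3x the load, plus storage for 16 hours
```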

Earth-based solar systems are also going to need storage systems for the same reason. In the case of solar heating for a house, the storage system can be approached in two basically different ways. The most common concept is quite simple but suffers from low efficiency. In this method large rock beds are placed in a convenient area, usually under a house. These are heated by flowing solar-heated air through the bed during the charging cycle and extracting the heat with a forced air flow during the extraction cycle.

A much more efficient storage system employs various brines that have very high heat-storage capacity. The problem here is that the system is more complex and requires pumps, motors, controls, and piping, much of which must operate in a very corrosive environment. As a result, the maintenance cost is high. So it is hard to win. The choices are either a cheap storage system that requires large heaters or costly storage systems that can use smaller heaters.

To top off the cost story, if we do not want to have everything in our house frozen solid—including ourselves—after a five-day winter blizzard, we will have to add some type of conventional backup system. It might be gas, oil, coal, electricity, or maybe a wood stove or fireplace. In any event it adds to the cost. Even the cost of wood has increased dramatically because of increased demand.

In this example we have actually had to invest in three systems instead of one, and the primary solar system must be oversized by a factor of at least three. The plus side of this equation is that the sunlight is free.

The situation is not all bleak for ground-solar systems. Many areas of the country have weather conditions suitable for economical use of solar heating systems. However, the total amount of energy these systems can provide for a society such as ours, with its high technology and high standard of living, is limited because the energy is in the form of low-temperature heat. While it can be used for heating, it is not very useful for running machinery or household lights and appliances.

Solar Cells — Electricity from Light

Solar cells, or “solar batteries” as they were originally called by the scientists of the Bell Telephone Laboratories who developed them, are solid-state, photovoltaic devices that convert sunlight directly into electricity with no moving parts. The initial cells were made from pure silicon wafers with certain impurities and electrical contacts added to each side to give the cell its unique characteristics. When the cell is exposed to sunlight the light dislodges electrons from atoms in the cell material. As the negatively charged electrons flow to one side of the cell, the other side gains a positive charge from the deficiency, creating an electrical charge between the connectors. When the contacts are connected to an electrical circuit, current will flow as long as the cell is exposed to light and the electrical circuit is maintained.
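
The electrical output of such a cell is commonly estimated as incident sunlight times cell area times conversion efficiency. A minimal sketch, using an assumed bright-noon insolation and the 13% commercial efficiency cited later in this section:

```python
# Output of a solar panel: incident sunlight x area x efficiency.

insolation_w_per_m2 = 1_000   # assumed bright noon sun at ground level
panel_area_m2 = 1.0
cell_efficiency = 0.13        # typical commercial silicon cell

power_w = insolation_w_per_m2 * panel_area_m2 * cell_efficiency
print(f"{power_w:.0f} W per square meter of panel")   # -> 130 W
```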

The development activity undertaken during the 1970s was focused on two major areas: conversion efficiency and cost. These two categories were addressed in several different ways, which included testing alternative materials, processing refinements, multiple layers of material, thin films, single-crystal versus amorphous material, cell size, and manufacturing processes.

A great deal of progress was achieved during that time, and numerous types of cells were developed while refinement of single-crystal silicon cells continued. Additional development has occurred since that period as more and more applications have been found. Worldwide output of solar cells has increased fifty-fold since 1978.

Single-crystal silicon cells are still the most common. Typical efficiencies have been steadily increasing from about 7% several years ago to today, when 13% to 14% efficiency is readily available from many commercial outlets in various sizes of pre-assembled, sealed panels. Numerous small, low-power devices, such as pocket calculators and ventilator fans, are also run with small silicon solar cells. Development of single-crystal silicon cells continues, with maximum efficiencies of 21.6% being achieved in the laboratories.

The highest solar cell efficiency obtained to date was accomplished by Boeing researchers with two-layer gallium arsenide and gallium antimonide cells that were 32.5% efficient at converting sunlight in space to electricity. Sunlight in space has a higher energy level than on the earth, but it also contains a lower proportion of red light, since it still includes the ultraviolet light that the earth’s atmosphere filters out; this makes it more difficult to achieve as high a conversion efficiency in space as on the earth. The use of multilayer cells provides for capture of a broader range of the light spectrum. Even though the conversion efficiency is harder to achieve in space, the total energy generated there is much higher.

Progress in polycrystalline (multiple crystals) silicon cells has also been spectacular, with efficiencies of 16.4% in laboratory tests being reported by Japanese researchers at Sharp. Polycrystalline silicon is less expensive than single-crystal silicon. Another approach to silicon cells is being pursued by Texas Instruments and Southern California Edison. They are experimenting with silicon bead cells that can be made at very low cost, but their efficiency is only about 10%.

Another type of cell is the thin-film cell. These cells are only a few microns thick. A micron is one millionth of a meter, or approximately four one-hundred-thousandths of an inch. To put that in perspective, a human hair is about 75 microns thick. The advantages of thin solar cells include their light weight and the reduced amount of expensive materials required to make them. One type of thin-film solar cell is made from multiple layers of material, starting with an electrode, then P-type cadmium telluride, next N-type cadmium sulfide, topped with a transparent electrode and cover glass. The entire thickness is six microns—about twelve times thinner than a human hair. Ting Chu, a retired professor at the University of South Florida, with the cooperation of researchers at the National Renewable Energy Laboratory, achieved a breakthrough efficiency of 14.6% with this type of cell. That threshold was soon broken, and they have now reached 15.8% efficiency with more advances likely.

Another of the thin-film cells that many believe has the most potential of all is made from copper-indium-diselenide. They have been made in meter-square sizes at 10% efficiency. The National Renewable Energy Laboratory has achieved 16.4% efficiency in the laboratory with copper-indium-diselenide and expects to increase that with future developments.

A unique new concept called roll-to-roll solar cells has been developed by Stanford Ovshinsky and his company, Energy Conversion Devices. This concept starts with a sheet of stainless steel coated with silver and three different layers of transparent thin-film amorphous-silicon cells topped with a transparent electrode. This type of solar cell can be made in continuous strips and has nearly 14% efficiency, also with a thickness much less than a human hair.

A totally different approach is to use concentrators to focus a large area of sunlight onto a small area of solar cells. This concept uses low-cost concentrators (Fresnel lenses) and high-efficiency solar cells, which can be more expensive per cell because far fewer are required. The conversion efficiency is also increased by concentration of the sunlight: a single-crystal silicon cell that is 20% efficient without concentration rises to 26% under highly concentrated sunlight.

As development has progressed, efficiency has increased and cost has decreased. One of the major problems with solar cells in the 1970s was the lack of large-scale, low-cost production techniques, but as the years have passed there have been major improvements in mass production. Today some manufacturers have highly automated assembly lines and are rapidly lowering the cost of their delivered solar cell modules. The lower cost and higher efficiency have greatly enhanced the attractiveness of solar cells as an energy-generating source. Unfortunately, no matter how low the cost, terrestrial solar cells cannot escape the same problem as other earth-based solar systems—intermittent sunlight.

Solar cells have powered our space satellites for many years. In the space environment they are exposed to the full energy of the sun; however, when sunlight passes through the atmosphere, even directly overhead, it loses more than 25% of its energy to the atmosphere. This loss is greatly increased as the sun nears the horizon. As a result, a solar cell’s output varies from dawn to dusk at about the same ratio as we get sunburned on a dawn-to-dusk fishing trip. The overall result is that for the same cell efficiency, it takes about five times as many solar cells on the ground as on one of our communications satellites to generate the same amount of power—and that is if you lived in the Mojave Desert. If you lived in “Average Town, America,” it would take 15 times as many cells; if you lived in the Pacific Northwest, as I do, it would take 22 times.
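
The five-to-one Mojave figure can be roughly reproduced by comparing the nearly continuous full sunlight a satellite receives with the year-round average at a good ground site. The average-insolation value below is my assumption; cloudier sites with lower averages push the ratio toward the 15 and 22 cited above.

```python
# Rough reproduction of the ~5x Mojave ratio (assumed inputs).

space_avg_w_per_m2 = 1_353    # solar constant, received nearly 24 h/day in orbit
mojave_avg_w_per_m2 = 270     # assumed year-round average on a tracking array

ratio = space_avg_w_per_m2 / mojave_avg_w_per_m2
print(f"~{ratio:.0f}x more cell area needed on the ground")   # -> ~5x
```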

In addition, in order to be utilized effectively for 24-hour-a-day energy, we need to provide a storage system and a power processor to regulate the voltage and convert the electricity to the form we need. Solar cells generate direct current (DC) electricity, and our homes currently operate on alternating current (AC) electricity. If we lived in “Average Town, America,” had an average household that uses electricity for cooking, lighting, and household appliances but not for heating or hot water, and wanted to power our residence with solar cells, it would be necessary to cover an area equal to the area of our roof. That area would also have to rotate to track the sun from dawn to dusk, or else be even larger. In addition, we would have a bank of batteries in our garage and a power processor humming away in the corner. We would be suffering from being on the wrong end of the effects of scale. We would have created in a small area a complete power plant that could perform all the basic functions of a large community plant, but we would be paying all the costs ourselves instead of sharing the costs of common functions with other users, as is the case when a utility sells to many customers from a common facility.
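
A sketch of that rooftop sizing claim, with the household consumption, local insolation, and battery losses all assumed for illustration:

```python
# Sizing a residential solar cell system (all inputs assumed).

daily_use_kwh = 20.0           # assumed household use (no heating or hot water)
avg_sun_kwh_per_m2_day = 4.0   # assumed "Average Town" insolation on the array
cell_efficiency = 0.13         # typical commercial silicon cell
battery_efficiency = 0.80      # assumed storage round-trip loss

array_m2 = daily_use_kwh / (avg_sun_kwh_per_m2_day * cell_efficiency * battery_efficiency)
print(f"~{array_m2:.0f} m2 (~{array_m2 * 10.76:.0f} sq ft) of cells")
# -> ~48 m2, about 500 square feet, roughly the roof of a modest house
```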

One approach being used by the Sacramento Municipal Utility District to minimize some of the problems is to mount four-kilowatt solar cell arrays on the roofs of private homes, but instead of feeding the power directly to the houses, it is fed directly into the power grid. In this way it is used to contribute energy to the entire power grid during the day to help supply the daytime peak loads. The homes themselves obtain their power from the normal grids—and the residents pay a premium for the privilege of contributing power to the grid!

The future for distributed generation of electric power is not all bleak. Many remote locations find solar-cell-generated electricity cheaper than any other approach. Sunlight is free, so there will be many places where the cost of bringing centralized power to a remote site is greater than the added cost of distributed systems. On many remote islands of the South Pacific the only source of electrical energy is solar cell panels or an occasional Honda gasoline generator.

Centralized Solar Power Generation

One concept for centralized power generation from ground-based solar utilizes fields of mirrors (or heliostats, as they are called) to concentrate sunlight into a cavity absorber or receiver. When the sunlight is concentrated in the cavity, the temperature becomes very high and heats a fluid that is pumped through a heat exchanger inside the cavity and then routed to a turbine engine. The concept is essentially the same as a coal or gas-fired power plant, except that the heat is provided by sunlight instead of by fire. There are also variations in how the working fluid is heated. One other approach uses trough-shaped mirrors concentrating the sunlight on a pipe carrying the heated working fluid. Since they depend on heat from the sun, they suffer from many of the disadvantages of other ground-solar applications, but with three major differences.

First, they can be built in the best locations for sunlight. Second, they can provide electric power to a primary distribution grid that has a higher demand during the day than at night. Third, they can take advantage of the cost reductions associated with a large-scale system. By using thermal engines, they convert energy more efficiently than solar cells. However, in regard to absolute power costs, they cannot avoid the fact that the sun sets at night and the plant lies idle until the morning. There is an option of burning a fuel, such as natural gas, at night to run the generators, which is done at most locations to help balance the profit-and-loss ledger. The major existing example of a ground-based solar power plant is at Daggett, California, where the facility generates about 400 megawatts of power.

Solar cells are also being applied to central power stations that use large fields of solar cell panels to generate electrical power. Currently power costs of about 25 cents per kilowatt-hour are achievable, with a goal of 6 to 12 cents per kilowatt-hour by the year 2000. That would make solar cell electricity competitive with existing systems for the periods when the sun is shining, and it would make centralized earth-based power plants quite attractive in areas where peak demand occurs during daylight hours.

Windmills — The 24-Hour Solar System

Windmills, or wind turbines—the modern resurrection of wind as a source of energy—have an interesting aspect that we do not often consider. Wind is one of nature’s storage devices for solar energy. It is a product of the earth’s rotation and air currents generated by the heat of the sun. Sunlight heats the air and the earth’s surface each day, causing convective currents to be generated. Since the atmosphere is a global phenomenon with worldwide interactions, and the sun is always shining on half the globe, we experience wind in some form day or night. The only problem is that nature is fickle and lets the wind flit around from place to place. Even so, there are some locations that have fairly consistent winds.

Modern airfoil (propeller) technology has been used, and wind turbines of three-megawatt peak capacity have been built, but the dynamics of such large rotating propellers cause long-term fatigue problems beyond the capability of the materials being used. Smaller units do not experience this degree of difficulty and have been built and installed in large numbers in a few selected locations. They are making a measurable contribution to the power grids. As an example, there are 7,000 wind turbines installed on the slopes of California’s Altamont Pass. Initially they were able to compete because of governmental incentives, but with time and numbers of production units they have reached the point where they can nearly compete with other sources without the incentives. They can only nibble at the problem, however. They simply cannot be made big enough or placed in sufficient numbers to completely swallow it.

Biomass — Liquefying Nature’s Solar Fuels

Another indirect solar energy source is biomass, which can take many forms. It may be the controlled digestion of garbage to produce methane gas and other products; the distillation of grains or sugar beets into alcohol for gasohol; the growing of special crops for conversion to fuels; the processing of livestock manure; the collection of wood scraps; or the planting of fast-growing trees for chemical processing into fuels. These are all examples of biomass conversion.

Biomass conversion is actually the last liquid-fuel-producing step after nature’s solar energy process grows the fuel. This process starts with the growing of plants of various types, all of which grow because of nature’s photosynthesis process—using solar energy. Unfortunately, nature’s process represents only about 1% solar efficiency—not nearly high enough to sustain the massive demands of an industrialized society. This was demonstrated in the past when wood was the primary fuel used in England. Biomass conversion to further process wood or its equivalent into liquid fuels adds more inefficiencies. The net result is that these fuels have limited capacity and can be cost-competitive only in very select markets. An example is Brazil, where gasohol is used extensively; there is sufficient feedstock available to make it cost-effective, and the energy is generated within the country, which helps with its foreign debt problem.

One exception to the low solar efficiency of nature’s conversion cycle has been found in purple pond scum. The purple bacteria that float in stagnant water convert sunlight to energy with 95% efficiency, more than four times that of solar cells. Maybe we can figure out how to convert purple pond scum into something useful or at least find out how to duplicate the process. While it is only a glimmer in some scientist’s eye at this point, who knows what might happen in the future.

Biomass fuel can be a lucrative by-product of processes that are necessary for other reasons. Methane gas and other fuel gases are being collected from old landfills and are added to industrial and community gas supplies. There will undoubtedly be many instances where this type of development can be economically attractive and beneficial to the general welfare and the environment. The total contribution of this energy source is limited, however, to nature’s rate of renewal.

Ocean Thermal Gradient—The Hard Way to Run an Engine

The last earth-based solar system considered during the 1970s was ocean thermal gradient, which uses the temperature differences found in the oceans. Whenever a steady-state temperature differential exists between two sources, there is a possibility of operating a thermal engine. The greater the difference between the two temperatures, the easier this is to accomplish and the more mechanical energy can be extracted. The situation becomes really challenging when the two temperatures are close together, as is the case with ocean thermal gradient power generation.

The concept is to use the warm solar-heated water near the surface of the ocean as a heat source and the cool water several thousand feet down as a heat sink to reject heat. The problem is that the maximum differential is only about 45 degrees Fahrenheit. As a result, massive amounts of water must be moved to provide significant amounts of power. Moving massive amounts of water from the surface to a depth of 3000 feet, or vice versa, takes huge equipment. All of this equipment would be moored in deep water at selected ocean sites that had sufficient temperature differentials.

Maximum efficiencies believed to be achievable would be about 3%, with realistic efficiencies probably closer to 1%. The question of economic viability pivots on whether it is possible to build and maintain such massive facilities in the hostile environment of the sea for sufficiently low cost—it is not likely.
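
The 3% figure follows directly from the thermodynamic (Carnot) limit for such a small temperature difference. A minimal sketch, assuming typical tropical surface and deep-water temperatures:

```python
# Carnot limit for an ocean thermal gradient plant (assumed temperatures).

t_surface_k = 300.0   # ~80 F tropical surface water, in kelvins
delta_t_k = 25.0      # ~45 F differential to the deep water

carnot_limit = delta_t_k / t_surface_k
realistic = carnot_limit * 0.35   # real engines reach only a fraction of Carnot

print(f"Carnot limit: {carnot_limit:.1%}, realistic: ~{realistic:.1%}")
# -> ~8.3% in theory, ~3% for real machinery, before pumping losses
```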

Geothermal — Tapping the Earth’s Energy

A natural source of heat is the very heart of our earth. Only the surface is cool; as we penetrate deep underground, the temperature rises. At some natural locations, the high temperatures come very close to the surface. We are aware that the eruption of Old Faithful Geyser in Yellowstone Park is caused by the heating of water to steam, building up pressure that forces the geyser to erupt at regular intervals. In fact, Yellowstone Park is covered with features caused by geothermal activity close to the surface.

In some areas of the country similar conditions occur, and some have been developed as energy sources. There is a 600-megawatt plant at The Geysers, California, a site that was easy to develop because the natural activity is close to the surface. New Zealand generates 6% of its electric power from geothermal sources rising close to the surface. However, it is experiencing a gradual reduction in output as the area cools. This is also happening at The Geysers in California. The real question is whether we can tap the earth’s core heat on a large scale.

Concepts have been developed and some test work accomplished, but the case for large-scale development of geothermal does not look good. Engineering reality must be considered: to extract large amounts of energy, it is necessary to pump large amounts of water down to the high-temperature regions and heat it while it is deep in the earth. It takes a large area to heat a large volume of water, and while this heating takes place, the surrounding rock is being cooled. Since rock is not a very good heat-transfer medium, we would soon find that we had cooled off our heat source—as is happening at the New Zealand plant—and we would have to drill new holes.
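
Rough numbers illustrate how quickly the rock cools. Here is a sketch of how much hot rock a modest plant would have to chill each year; the plant size, efficiency, rock properties, and usable temperature drop are all assumed values.

```python
# How much hot rock a geothermal plant "uses up" (assumed round numbers).

plant_mw_electric = 100.0
thermal_efficiency = 0.10        # low, because the source temperature is modest
rock_density_kg_m3 = 2_700.0
rock_heat_j_per_kg_k = 840.0     # typical specific heat of rock
usable_delta_t_k = 50.0          # assumed temperature drop we can extract

thermal_j_per_year = plant_mw_electric * 1e6 / thermal_efficiency * 3.15e7
j_per_m3 = rock_density_kg_m3 * rock_heat_j_per_kg_k * usable_delta_t_k
rock_m3 = thermal_j_per_year / j_per_m3

print(f"~{rock_m3:.1e} m3 of rock cooled per year "
      f"(a cube ~{rock_m3 ** (1 / 3):.0f} m on a side)")
# -> ~2.8e8 m3 per year, a cube roughly 650 meters on a side
```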

Development work continues, however, and some small power plant sites are being developed. Large-scale development is not likely unless new methods are developed to economically tap the heat that lies beneath our feet.

Fusion Power — The Elusive Carrot

The promise of fusion power is the carrot that has been dangled in front of us since research for power generation was started in 1951. Fusion is the combining of atoms, the opposite of fission, which is the splitting of atoms. Fusion is the energy of our world, of our life. Our very existence depends on it. The sun and the stars are all operating fusion reactors. Without the sun we would not be.

We have accomplished fusion reactions on the earth in the form of hydrogen bombs. This is achieved by using a fission reaction to raise the temperature and pressure high enough to cause the combining of the hydrogen atoms in a fusion reaction to form helium. This is not a very “user friendly” form of energy if you have to set off an atomic bomb in order to make your morning toast.

Research has been going on throughout the world since 1951 to develop a method of achieving a controlled fusion reaction that can be harnessed into generating electrical power. The great advantage would be nuclear power using hydrogen as a fuel without the radiation dangers of fission reactors. The difficulties in achieving a controlled reaction are immense. Many breakthroughs have been announced, but the researchers have yet to achieve a reaction that could be used to extract useful energy.

For a while, March 23, 1989, was thought to be a date that would go down in history as one of the great milestones of our time. It was on that day that B. Stanley Pons, chairman of the University of Utah’s chemistry department, and Professor Martin Fleischmann of Southampton University in England announced that they had carried out nuclear fusion through an inexpensive and relatively simple electrochemical process. Pons said the technique could be used to supply nuclear energy for industrial and commercial use. The announcement that they had achieved “cold” fusion in a laboratory jar generated tremendous excitement (and skepticism) in the scientific community. If true, it would have been a dramatic breakthrough in the four-decade search for a method of controlling sustained nuclear fusion.

Alas, it was not to be. Later testing by other researchers failed to achieve the same results and revealed the errors made in the experiments. The hopes raised by Pons and Fleischmann fizzled along with their scientific careers.

Fusion power research goes on at high levels of expenditure, and the predictions continue that it is only 20 or 30 or maybe as long as 50 years in the future, but it seems that the goal is more elusive than ever. The carrot is becoming limp and stale indeed.

Space-Based Solar Power — The Dark Horse

In 1968, when the United States was deeply immersed in the final testing of the Saturn/Apollo space vehicles for sending men to the moon, one farsighted visionary conceived a way to use space as a place to gather energy for use on the earth. Dr. Peter E. Glaser of Arthur D. Little Company first proposed the concept of placing satellites in geosynchronous orbit to provide energy from the sun. He saw them covered with solar cells to generate electricity and equipped with an antenna to transmit the energy to the earth as radio-frequency waves, where it would be reconverted to electricity. The concept built on wireless energy transmission as first demonstrated by William C. Brown, who successfully powered a model helicopter with a wireless radio-frequency energy beam.

The idea sounded like the invention of a science fiction writer at the time, but it was based on sound engineering principles. A few space-engineering enthusiasts around the country started looking at the concept in more detail, and by the early 1970s, several small studies were being conducted by aerospace companies and NASA. After the 1973-74 oil embargo these studies were expanded and culminated in the DOE/NASA systems definition studies of 1977 to 1980 that I described earlier.

The outcome of these studies followed Dr. Glaser’s original concept but provided greatly expanded design definition, understanding of the technology, and in many areas, test data. Each step of the required technology is in use in some form, for other purposes, somewhere in the world today. There are no scientific breakthroughs required. Even so, the engineering task of designing and developing such a concept is immense. However, the long-term potential benefits are even more immense.

What is the Solution?

I have reviewed briefly what energy concepts have been studied or developed in the search for new energy systems to replace or at least contribute to the replacement of oil as our major energy source. None of them have as yet emerged as the energy source for the next energy era. The question is whether any of the known concepts can give birth to the fourth energy era.

The test is to measure each system against the criteria.

Conservation—A Dead-End Street

Conservation is clearly the quickest and easiest way to eliminate the use of oil. Where can we buy another gallon of gasoline or another kilowatt-hour of electricity for the cost of simply using neither? The price is zero dollars and there is no pollution. Unfortunately, cost and pollution are only part of the problem. Conservation cannot power industry, grow food for the emerging masses, light and heat our homes, cook our food, or power the transportation system that takes us to our jobs and vacations. To accept conservation as the solution is to simply give up on the future of humanity and our planet. It is a stop-gap measure to buy us time, but it cannot meet any of the criteria for the world’s future energy needs. We would revert to subsistence levels of existence after the deaths of billions of people from starvation and war as scarcity forced country after country to desperately reach out for the essential elements of life.

Many people believe strongly that conservation is the only solution. They feel guilty about what we have here in America and think the “right” way is not to use energy, so that they can share the burdens of the many poverty-stricken people in the rest of the world. Unfortunately, this does not help the poverty-stricken; rather, it makes their future prospects even grimmer and ensures that they can never improve their lives.

This does not mean that increasing energy efficiency is bad. However, when the goals go beyond increasing efficiency they become counter-productive and destructive. Most of the gross inefficiencies have now been eliminated. It is time to get on with solving the problem of how we are going to provide the clean low-cost energy needed to power our society and help the other people of the earth to climb out of their pit of poverty.

Coal, Oil, Natural Gas, and Synthetic Fuels — A Fading Future

All of the systems based on fossil fuels such as coal, oil, natural gas, and synthetic fuels share the problem of being finite resources and are subject to ever-increasing costs as the fuels grow more scarce. Even though there are certainly some reasonably large reserves left in the world, they are ultimately limited. It does not take much more demand than there is supply to cause large price increases. Increasing world population and the requirements of emerging underdeveloped countries will also increase overall demand.

The other key issue is atmospheric pollution. The United States has established controls to help minimize the amount of pollutants, but even with tight controls it is a very serious problem. Many countries have not made any attempt to control pollutants. In any event, scrubbers cannot remove carbon dioxide from the smokestacks, and the level will continue to build in our atmosphere until we stop the excessive burning of fossil fuels. Fossil fuels fail to meet the criteria of nondepletability, environmental cleanliness, and low cost over the long term. They must be replaced for many of our energy needs so that they can instead provide the fuels for the systems that cannot readily be powered with electricity.

Earth-Based Solar — The Struggling Contender

Earth-based solar systems can meet four of the five criteria—the source is nondepletable, environmentally clean, available to everyone (wherever the sun is shining, that is), and convertible to a usable form. There is some possible question about the effect of land use if earth-based solar systems had to supply large amounts of power, but their biggest problem is cost. Large-scale development could help, but the fundamental stumbling block is the intermittent supply of sunlight, which creates the need for very large-scale facilities and also the requirement for some type of energy storage device. To put the problem in perspective, I offer a very simple example. If we were to use solar cells that were 15% efficient at high noon at the most favorable location on earth, we would find that the actual overall conversion efficiency of sunlight to usable electric power, averaged over a year, would be less than 3%. If we were to locate the same solar cells in an average US location, they would be only about 1% efficient overall. That does not even take into account the inefficiencies of the storage system that would be required for when the sun wasn’t shining. When conversion efficiencies drop below 3%, the cost of the equipment becomes a dominant factor. Higher conversion efficiencies are certainly possible and even quite practical, but they increase the cost of the equipment, and the systems must still be maintained in the unprotected outdoors.
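For readers who want to check that arithmetic, here is a minimal sketch in Python. The capacity factors are assumed values chosen to reflect the annual averaging described above; they are illustrative, not measured site data.

```python
# Back-of-the-envelope check of the ground-based solar efficiency example.
# The capacity factors below are assumptions that fold together night,
# weather, and sun angle over a full year; they are not measured data.

PEAK_CELL_EFFICIENCY = 0.15  # 15% at high noon, as in the example above

annual_capacity_factor = {
    "best location on earth": 0.20,  # assumed: cloudless desert site
    "average US location": 0.07,     # assumed: typical mid-latitude site
}

for site, factor in annual_capacity_factor.items():
    overall = PEAK_CELL_EFFICIENCY * factor
    print(f"{site}: {overall:.1%} overall annual efficiency")

# best location on earth: 3.0% -- the "less than 3%" figure in the text
# average US location: 1.1% -- roughly the 1% figure in the text
```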

The use of ground-based solar systems is very likely to expand, particularly in remote areas, but barring cost breakthroughs that cannot be foreseen now, the price will be too high for massive use. If there is no other solution, earth-based solar systems can be expanded to supply world requirements, but the cost of the energy will limit world economic growth. It is a contender, but it is struggling against heavy odds.

Nuclear — Had Its Chance

Nuclear fission is an energy source that would come close to meeting the requirements if certain conditions could be met. The first is to eliminate emotion, fear, and prejudice from the decision-making process. Second, we would have to develop and use a breeder reactor. Third, we would need to develop an effective plan for waste disposal. None of these are likely to happen. And unless the breeder reactor is used, there is not enough uranium on the earth to meet the criteria of nondepletability.

When it comes to cost, the issue of nuclear power is clouded. When the atomic age began back in 1945, proponents of nuclear power talked of energy so cheap that it would not be necessary to meter it. We all know that the reality has been quite different. We should not dismiss this issue too quickly, however, for it exposes a fundamental problem: the difficulty of converting energy from a form which is not useful into a form which is. If we could use the total energy available from splitting the atom, the statements made decades ago would be true. We cannot use the energy directly, however, unless we want to make a bomb. Therefore, the cost that we now experience is heavily driven by the cost of the machinery and facilities to convert the heat of a controlled reaction first to mechanical energy and then to electricity. This drives the cost higher because of the gross inefficiencies in the thermal process—inefficiencies driven by material and technological limitations. We are able to convert only an extremely small fraction of the energy released by splitting the atom into useful electricity—about 1%. As is the case with earth-based solar systems, it is the conversion process that causes most of the cost.

The costs of breeder reactors are projected to be about 30% higher than conventional fission reactors, and they require fuel reprocessing. As a result, the future energy costs are questionable.

The environmental issue regarding breeder reactors is as emotional as it is factual. The emotions center on the safety questions of nuclear proliferation, radiation danger, and waste disposal. All of these are serious issues that have not been satisfactorily answered, and answering them will not be easy in light of the accidents that have occurred and the evidence of gross negligence in handling nuclear materials now coming to light in the former Soviet Union.

Strangely enough, though, one of the truly significant environmental issues is seldom raised: thermal pollution. Whenever thermal energy is converted to mechanical energy—as is the case in nuclear reactors or coal and oil plants—about two-thirds of the thermal energy is waste heat and must be disposed of. This can become a serious problem. Today’s solution is to dump it into the atmosphere and the waterways. Have you seen the great pillars of steam rising from the cooling tower of an atomic plant or coal plant? That is thermal pollution in action.
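A quick calculation shows the scale of the problem. The sketch below assumes a hypothetical 1,000-megawatt plant converting one-third of its heat to electricity, the fraction described above; the plant size is an example of mine, not a figure from the chapter.

```python
# Waste heat from a thermal power plant, using the chapter's
# two-thirds figure. The 1,000 MW plant size is a hypothetical example.

electric_output_mw = 1000    # electricity delivered to the grid
thermal_efficiency = 1 / 3   # about one-third of the heat becomes electricity

total_heat_mw = electric_output_mw / thermal_efficiency
waste_heat_mw = total_heat_mw - electric_output_mw

print(f"Heat released by the reactor or boiler: {total_heat_mw:.0f} MW")
print(f"Waste heat dumped into air and water:   {waste_heat_mw:.0f} MW")
# 2,000 MW of waste heat, twice the useful output, must go somewhere.
```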

So we find huge question marks when we try to measure nuclear fission against our criteria for a new energy source. It is not a solution that has the support of the people, and it is not likely to improve its position with time. It has had its chance and failed.

Nuclear Fusion — The Limp Carrot

The most likely place to find a potential solution to our energy crisis is with an advanced technology approach. Fusion is certainly an advanced technology. Unfortunately, it has not advanced to the point that a realistic fusion plant can be defined. In theory, there should not be any of the radiation problems associated with fission reactions, and if hydrogen can be used as a fuel, then it can be considered nondepletable for all practical purposes (since we can always separate hydrogen from water). Theory is often very difficult to put into practice, however, and in the case of fusion this is doubly true. The pressures and temperatures required to start a fusion reaction are far beyond normal human experience and the materials we possess, so whole new sciences must be developed and applied. Moreover, research has been directed primarily toward achieving a controlled reaction, not toward power plant design.

Because of the unknown scientific and engineering features in the fusion power concept, it has been impossible to adequately characterize an operational power plant. As a result, it is not possible to make meaningful estimates of facility and operational costs. Without knowing these costs, the cost of fusion power cannot even be estimated.

Fusion power could be the clear leader of the fourth energy era if there are some scientific breakthroughs and if the energy can be converted directly into electricity without going through a thermal engine conversion. Anything can happen, but it has not happened yet.

Ironically, the Department of Energy has been focusing on developing fusion reactors to generate power that still has to be converted to electricity to be usable. In the meantime, we already have a fully operational fusion reactor a safe distance away—93 million miles—that should continue to function for another four billion years. All we have to do is find a way to convert that fusion power to electricity—a job for which solar power satellites are eminently suited.

Solar Power Satellites — Hope for the Future

The last of the candidates—solar power satellites—is the one that can best meet all of the criteria to become the energy system of the next energy era. I will review how it satisfies each criterion.

Low Cost

The first criterion is low cost over the long term. Usually the first reaction to the question of a solar power satellite being low cost is that nothing associated with space could possibly be low cost. That is simultaneously a correct reaction and an erroneous one. It is correct when considering the cost of hardware designed to operate in space on an independent basis with high reliability. Based on dollars per pound of space hardware versus dollars per pound of, say, a spool of copper wire, there is no comparison. But that very same piece of space hardware—a communications satellite, for example—can carry an international telephone call at a tenth of the cost of a copper wire strung from one point on earth to another point far away. Now which one is lowest cost?

The same principle applies in the case of the solar power satellite. The hardware is not cheap, but it has high productivity. The high productivity is achieved because the solar power satellite is in the sunlight over 99% of the time, which is five times more sunlight than is available at the best location on earth. It can operate at maximum capacity at all times and does not need a storage system. Its overall efficiency of converting sunlight to electricity delivered on earth is projected to be from 7% to 10%, and the system will be operating in the benign environment of space. This compares to the 1% to 3% for earth-based solar cell systems and, along with the favorable environment and the elimination of storage systems, is the fundamental reason for going to space for solar energy.
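The factor of five can be checked with a rough calculation. In the sketch below, the solar constant and the surface value of full sunlight are standard figures; the 2,500 equivalent full-sun hours per year for an excellent ground site is an assumed number, not one from the chapter.

```python
# Rough check of the "five times more sunlight" claim.

HOURS_PER_YEAR = 8760
SOLAR_CONSTANT = 1366        # W/m^2 above the atmosphere (standard value)
GEO_SUNLIT_FRACTION = 0.99   # the satellite is shadowed less than 1% of the year

FULL_SUN_SURFACE = 1000      # W/m^2, clear-sky noon sun at the surface
FULL_SUN_HOURS = 2500        # assumed equivalent full-sun hours/year, excellent site

space_kwh = SOLAR_CONSTANT * HOURS_PER_YEAR * GEO_SUNLIT_FRACTION / 1000
ground_kwh = FULL_SUN_SURFACE * FULL_SUN_HOURS / 1000

print(f"In orbit: {space_kwh:,.0f} kWh per m^2 per year")    # ~11,800
print(f"On earth: {ground_kwh:,.0f} kWh per m^2 per year")   # ~2,500
print(f"Advantage: about {space_kwh / ground_kwh:.1f}x")     # ~4.7, close to five
```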

If we use the cost estimates established from the preliminary designs developed by the NASA study contractors in the late 1970s, then the cost of power would be less than the cost of electricity generated by coal, oil, or nuclear power. When the initial capital costs are paid off, the cost of power could then drop to a fraction of the costs from other sources. The power costs are at least in the right ballpark. The energy is free; the only cost is the cost of the conversion hardware and the cost to maintain it. The environment in space is very favorable for most equipment. There is no wind or rain or dirt or oxygen or corrosive fluids. Things last a very long time in space. The potential exists for long-term low cost—without the inflationary cost of fossil or synthetic fuels.

Nondepletable

Second is the question of depletability. It is clear that the energy source is nondepletable since it is available for as long as the sun shines and, therefore, for as long as man exists. Only about one part in two billion of the sun’s energy is actually intercepted by the earth. This extremely small fraction is still a massive amount of energy. The satellites would not even have to infringe on this increment, however, as they would intercept energy that normally streams past the earth into deep space. Geosynchronous orbit is about 165,000 miles in circumference—ample room to place as many satellites as we desire. The amount of energy that can be gathered and delivered to earth is primarily a function of how much we want, and only the usable energy is delivered to the earth.
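The circumference figure follows directly from the orbit’s altitude; here is a short check of the arithmetic.

```python
# Checking the geosynchronous-orbit circumference quoted above.
import math

EARTH_RADIUS_MILES = 3963    # mean equatorial radius
GEO_ALTITUDE_MILES = 22300   # the altitude cited in this chapter

orbit_radius = EARTH_RADIUS_MILES + GEO_ALTITUDE_MILES
circumference = 2 * math.pi * orbit_radius

print(f"{circumference:,.0f} miles")   # ~165,000 miles, as stated
```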

Environmentally Clean

The environmental issue is what has stopped the construction of more nuclear power plants. Can solar power satellites pass this criterion? First of all, it is difficult to fault the energy source as environmentally unacceptable, even though most dermatologists try. The rest of us think having the sun around is just fine. Putting the power plant and its associated equipment 22,300 miles from the nearest house does not seem like a bad idea, either, especially when the thermal loss of energy conversion is left in deep space and will not heat up our rivers and atmosphere as all the thermal plants do.

But what about the wireless energy beam? Is it a death ray that will cook us if something goes wrong and it wanders from the receiving antenna? No. Even though the radio frequency beam is the same kind of frequency as we use for cooking in our microwave ovens, the energy density (the amount of energy in a given area) is much less than the energy density in our microwave ovens (because our ovens are designed to contain the energy and concentrate it within the oven cavity). In fact, the wireless energy beam’s maximum energy density would be no more than ten times the allowed leakage from the door of a microwave oven. At that level, which would be a maximum of 50 milliwatts per square centimeter, a person would just feel some warmth if he or she were standing in the center of the beam on top of the rectenna (not a very likely event). That much energy is about half of the energy found in bright sunlight at high noon on a Florida beach, except that it is in the form of high-frequency radio waves, or microwaves. The only definitely known reaction of living tissue to microwaves is heating.
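The numbers in that comparison are easy to verify. In the sketch below, the 5-milliwatt oven-leakage figure is the long-standing US limit for new ovens; the other values are those given above.

```python
# The beam-intensity comparison made explicit. The oven-leakage limit of
# 5 mW/cm^2 is the long-standing US standard for new ovens; the beam peak
# and sunlight figures are the values used in the text.

OVEN_LEAKAGE_LIMIT = 5   # mW/cm^2 allowed near a new oven's door
BEAM_PEAK = 50           # mW/cm^2 at the exact center of the rectenna
NOON_SUNLIGHT = 100      # mW/cm^2, bright sun at high noon

print(f"Beam peak vs. oven limit: {BEAM_PEAK / OVEN_LEAKAGE_LIMIT:.0f}x")  # 10x
print(f"Beam peak vs. sunlight:   {BEAM_PEAK / NOON_SUNLIGHT:.0%}")        # 50%
```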

There is much debate about other possible effects, such as nervous system disorders or genetic effects due to long-term exposures at low levels. No good, hard evidence exists to prove or disprove the allegations. Many studies have been made and others are underway, however, to try to clarify the issue. In the meantime, let us consider the general evidence accumulated over the last century.

X-rays and the natural radiation of radium were discovered at about the same time as radio waves. In fact, Wilhelm Röntgen discovered x-rays in 1895, the same year that Marconi invented the radio telegraph. As early as 1888, both Heinrich Hertz and Oliver Lodge had independently identified radio waves as belonging to the same family as light waves. The big difference is that radio and light waves are non-ionizing, whereas nuclear radiation is ionizing. Unfortunately, people often confuse the two. During the ensuing years, it became very clear that the magic of x-rays and the natural radiation of radium went beyond what was originally thought. Serious side effects were soon discovered. Mysterious deaths occurred among workers who painted the luminous dials of watches. The development of the atomic bomb led to the discovery of many more effects of excessive exposure to ionizing radiation.

During that same period, radio, radar, and television grew at an even more rapid rate. Radar, television, radio, and space communication frequencies came to span the entire radio frequency range, and energy systems were added among the communication frequencies. During all these years of exposure of everyone on earth, the only nontransient effect identified has been heating. The point I am making is that if some serious phenomenon were caused by radio waves, there should be indications by now.

The overall picture for the microwave environmental issue looks good, but additional data will be needed to be certain. This is the hardest data to gather—information to prove that there are no effects.

The companion environmental issue is the question of the land required for the receiving antenna. Because the energy density in the beam is restricted to a very low level in order to assure safety, the antenna must be large in order to supply the billion watts of power from a solar power satellite. The antenna would be about 1.8 miles wide. Since it can be elevated above the ground, and since it would block less than 20% of the sunlight while stopping over 99% of the microwaves, the land can be used for agriculture as well as for the receiving antenna. In comparison, the total land required is less than with most other energy systems. The amount of land required for the receiving antenna is actually much less than that required for coal strip mines to produce an equivalent amount of power over 40 years.
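As a sanity check on those dimensions, the sketch below computes the average power density over the rectenna from the chapter’s own figures. It assumes a circular antenna for simplicity; the derived density is my calculation, not a number from the chapter.

```python
# Average power density over the receiving antenna, from the figures above.
# Assumes a circular rectenna for simplicity.
import math

POWER_WATTS = 1e9          # one billion watts delivered
DIAMETER_MILES = 1.8
METERS_PER_MILE = 1609.34

radius_m = (DIAMETER_MILES / 2) * METERS_PER_MILE
area_m2 = math.pi * radius_m ** 2

avg_w_per_m2 = POWER_WATTS / area_m2
print(f"{avg_w_per_m2:.0f} W/m^2 averaged over the rectenna")  # ~150 W/m^2
# That is about 15 mW/cm^2 on average, consistent with a beam that peaks
# at 50 mW/cm^2 in the center and tapers off toward the edges.
```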

Available to Everyone

The satellites may be positioned anywhere along the geosynchronous orbit and would be able to beam their energy to any selected receiver site except near the North and South Poles. Certainly they could make electricity available to all the larger populated areas of the earth, if those areas purchased a satellite or bought the electricity from a utility company that owned one.

Most countries could not afford the development costs of a satellite system on their own, but once the system is developed, the cost of individual satellites would be within the capability of many countries.

In a Useful Form

With solar power satellites, the form of the energy delivered is electricity, the cleanest and highest form available to us. It is the form we need to clean up the earth’s environment. It is the energy form of the future.

Here at last is a nondepletable, clean energy source with vast capacity, within our capability to develop, waiting to carry us into the twenty-first century.