The Electrical Grid
1. Basic Description of the Grid
1.1 Generation
Electrical power is generated in many locations, by various technologies, in the form
of alternating current. This power is commonly delivered to homes and businesses through an
electrical distribution system. At any given time, the various suppliers or utilities may
have more or less power than their customers demand. The individual supplier distribution systems are therefore interconnected
into an electrical power grid so that power can be bought and resold by the participating utilities
to meet the demands of their customers.
When referring to the power industry, grid is a term used for an electricity network which may support all or some of the following four distinct operations:
1. Electricity generation
2. Electric power transmission
3. Electricity distribution
4. Electricity control
Grid here is meant in the sense of a network and should not be taken to imply a particular physical layout
or breadth. Grid may be used to refer to an entire continent's electrical network, a regional
transmission network, or a subnetwork such as a local utility's
transmission grid or distribution grid.
Electricity in a remote location might be provided by a simple distribution grid linking a central
generator to homes. The traditional paradigm for moving electricity around in developed
countries is more complex. Generating plants are usually located near a source of water, and
away from heavily populated areas. They are usually quite large in order to take advantage of the
economies of scale. The electric power which is generated is stepped up to a higher voltage—at
which it connects to the transmission network. The transmission network will move (wheel) the
power long distances—often across state lines, and sometimes across international boundaries—
until it reaches its wholesale customer (usually the company that owns the local distribution
network). Upon arrival at the substation, the power will be stepped down in voltage—from a
transmission level voltage to a distribution level voltage. As it exits the substation, it enters the
distribution wiring. Finally, upon arrival at the service location, the power is stepped down again
from the distribution voltage to the required service voltage(s).
This traditional centralized model, along with its distinctions, is breaking down with the
introduction of new technologies. For example, in some new grids the characteristics of power
generation can be the opposite of those listed above: generation can occur at low power levels, in
dispersed locations, in highly populated areas, and inside rather than outside the distribution grids. Such
characteristics can be attractive for some locales, and can be implemented if the grid uses a
combination of new design options such as net metering, electric cars as a temporary energy
source, or distributed generation.
1.2 Transmission
Transmission is the transport of generator-produced electric energy to loads. An electric power transmission
system interconnects generators and loads and generally provides multiple paths among them.
Multiple paths increase system reliability because the failure of one line does not cause a system
failure. Most transmission lines operate with three-phase alternating current (ac). The standard
frequency in North America is 60 Hz; in Europe, 50 Hz. The three-phase system has three sets of
phase conductors. Long-distance energy transmission occasionally uses high-voltage direct-
current (dc) lines. See also Alternating current; Direct current; Direct-current transmission.
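To make the three-phase idea concrete, the following sketch evaluates a balanced set of phase voltages at the North American 60 Hz frequency (the amplitude is arbitrary, and `phase_voltages` is a hypothetical helper, not a standard power-engineering function). It shows the defining property of a balanced system: the three phase voltages, spaced 120 degrees apart, sum to zero at every instant.

```python
import math

def phase_voltages(t, v_peak=1.0, f=60.0):
    """Instantaneous voltages of a balanced three-phase system at time t (seconds).
    The three phases are separated by 120 degrees (2*pi/3 radians)."""
    w = 2 * math.pi * f  # angular frequency in rad/s
    return [v_peak * math.sin(w * t - k * 2 * math.pi / 3) for k in range(3)]

# In a balanced system the three phase voltages cancel at every instant,
# which is why some three-phase services can omit a distributed neutral wire.
for t in (0.0, 0.001, 0.004):
    va, vb, vc = phase_voltages(t)
    print(f"t={t:.3f}s  va={va:+.3f}  vb={vb:+.3f}  vc={vc:+.3f}  sum={va + vb + vc:+.1e}")
```

For a 50 Hz European system, only the `f` argument changes; the 120-degree phase geometry is the same.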
The electric power system can be divided into the distribution, subtransmission, and transmission
systems. With operating voltages less than 34.5 kV, the distribution system carries energy from
the local substation to individual households, using both overhead and underground lines. With
operating voltages of 69–138 kV, the subtransmission system distributes energy within an entire
district and typically uses overhead lines. With operating voltages exceeding 230 kV, the
transmission system interconnects generating stations and large substations located close to load
centers by using overhead lines. See also Transmission lines.
1.2.1 Overhead alternating-current transmission
Overhead transmission lines distribute the majority of the electric energy in the system. A typical
high-voltage line has three phase conductors to carry the current and transport the energy, and
two grounded shield conductors to protect the line from direct lightning strikes. The usually bare
conductors are insulated from the supporting towers by insulators attached to grounded towers or
poles. Lower-voltage lines use post insulators, while the high-voltage lines are built with
insulator chains or long-rod composite insulators. The normal distance between the supporting
towers is a few hundred feet.
Transmission lines use ACSR (aluminum cable, steel reinforced) and ACAR (aluminum cable,
alloy reinforced) conductors. In an ACSR conductor, a stranded steel core carries the mechanical
load, and layers of stranded aluminum surrounding the core carry the current. An ACAR
conductor is a stranded cable made of an aluminum alloy with low resistance and high
mechanical strength. ACSR conductors are usually used for high-voltage lines, and ACAR
conductors for subtransmission and distribution lines. Ultrahigh-voltage (UHV) and extrahigh-
voltage (EHV) lines use bundle conductors. Each phase of the line is built with two, three, or
four conductors connected in parallel and separated by about 1.5 ft (0.5 m). Bundle conductors
reduce corona discharge. See also Conductor (electricity).
Transmission lines are subject to environmental adversities, including wide variations of
temperature, high winds, and ice and snow deposits. Typically designed to withstand
environmental stresses occurring once every 50–100 years, lines are intended to operate safely in
adverse conditions.
Variable weather affects line operation. Extreme weather reduces corona inception voltage,
leading to an increase in audible noise, radio noise, and telephone interference. Load variation
requires regulation of line voltage. A short circuit generates large currents, overheating
conductors and producing permanent damage.
The power that a line can transport is limited by the line's electrical parameters. Voltage drop is
the most important factor for distribution lines; where the line is supplied from only one end, the
permitted voltage drop is about 5%.
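The ~5% limit can be checked with a back-of-the-envelope calculation. The sketch below uses made-up feeder values and neglects line reactance; `percent_voltage_drop` is an illustrative helper, not a standard utility formula.

```python
def percent_voltage_drop(load_kw, v_nominal, r_ohms, power_factor=1.0):
    """Approximate percent voltage drop on a short, mostly resistive feeder
    supplied from one end (line reactance neglected for simplicity)."""
    current = load_kw * 1000 / (v_nominal * power_factor)  # line current, A
    return 100 * current * r_ohms / v_nominal

# Illustrative (made-up) feeder: 50 kW load, 240 V service,
# 0.05 ohm total loop resistance.
drop = percent_voltage_drop(50, 240, 0.05)
print(f"{drop:.1f}% voltage drop")  # compare against the ~5% design limit
```

The same load at a higher distribution voltage draws proportionally less current, which is one reason distribution runs at kilovolt levels and steps down only near the service location.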
Conductor temperature must be lower than the temperature which causes permanent elongation.
A typical maximum steady-state value for ACSR is 212°F (100°C), but in an emergency
temperatures 10–20% higher are allowed for a short period of time (10 min to 1 h).
Corona discharge is generated when the electric field at the surface of the conductor becomes
larger than the breakdown strength of the air. The oscillatory nature of the discharge generates
high-frequency, short-duration current pulses, the source of corona-generated radio and
television interference. Surface irregularities such as water droplets cause local field
concentration, enhancing corona generation. Thus, during bad weather, corona discharge is more
intense and losses are much greater. Corona discharge also generates audible noise with two
components: a broad-band, high-frequency component, which produces crackling and hissing,
and a 120-Hz pure tone. See also Corona discharge; Electrical interference.
Transmission-line conductors are surrounded by an electric field which decreases as distance
from the line increases, and depends on line voltage and geometry. At ground level, this field
induces current and voltage in grounded bodies, causes corona in grounded objects, and can
induce fuel ignition. Utilities limit the electric field at the perimeter of right-of-ways to about
1000 V/m. An ac magnetic field around the transmission line also decreases with distance from
the line. See also Electric field.
Lightning strikes produce high voltages and traveling waves on transmission lines, causing
insulator flashovers and interruption of operation. Steel grounded shield conductors at the tops of
the towers significantly reduce, but do not eliminate, the probability of direct lightning strikes to
phase conductors. See also Lightning and surge protection.
The operation of circuit breakers causes switching surges that can result in interruption of
inductive current, energization of lines with trapped charges, and single-phase ground fault.
Modern circuit breakers, operating in two steps, reduce switching surges to 1.5–2 times the 60-
Hz voltage. See also Circuit breaker.
Line current induces a disturbing voltage in telephone lines running parallel to transmission
lines. Because the induced voltage depends on the mutual inductance between the two lines,
disturbance can be reduced by increasing the distance between the lines and shielding the
telephone lines. See also Electrical shielding; Inductive coordination.
1.2.2 Underground power transmission
Most cities use underground cables to distribute electrical energy. These cables virtually
eliminate negative environmental effects and reduce electrocution hazards. However, they entail
significantly higher construction costs.
Underground cables are divided into two categories: distribution cables (less than 69 kV) and
high-voltage power-transmission cables (69–500 kV).
Extruded solid dielectric cables dominate in the 15–33-kV urban distribution system. In a typical
arrangement, the stranded copper or aluminum conductor is shielded by a semiconductor layer,
which reduces the electric stress on the conductor's surface. Oil-impregnated paper-insulated
distribution cables are used for higher voltages and in older installations.
Cable temperatures vary with load changes, and cyclic thermal expansion and contraction may
produce voids in the cable. High voltage initiates corona in the voids, gradually destroying cable
insulation. Low-pressure oil-filled cable construction reduces void formation. A single-phase
concentric cable has a hollow conductor with a central oil channel. Three-phase cables have
three oil channels located in the filler.
1.2.3 Submarine cables
High-voltage cables are frequently used for crossing large bodies of water. Water provides
natural cooling, and pressure reduces the possibility of void formation. A typical submarine
cable has cross-linked polyethylene insulation, and corrosion-resistant aluminum alloy wire
armoring that provides tensile strength and permits installation in deep water.
1.3 Distribution
Electricity distribution is the final stage in the delivery (before retail) of electricity to end users.
A distribution system's network carries electricity from the transmission system and delivers it to
consumers. Typically, the network would include medium-voltage (less than 50 kV) power lines,
electrical substations and pole-mounted transformers, low-voltage (less than 1 kV) distribution
wiring and sometimes electricity meters.
[Diagram of an electrical grid; the distribution component is shown in green.]
1.3.1 Current Grid
The current distribution system begins as the primary circuit leaves the sub-station and ends as
the secondary service enters the customer's meter socket. A variety of methods, materials, and
equipment are used among the various utility companies, but the end result is similar. First, the
energy leaves the sub-station in a primary circuit, usually with all three phases.
Most areas provide three phase industrial service. There is no substitute for three-phase service
to run heavy industrial equipment. A ground is normally provided, connected to conductive cases
and other safety equipment, to keep current away from equipment and people. Distribution
voltages vary depending on customer need, equipment and availability. Delivered voltage is
usually constructed using stock transformers, and either the voltage difference between phase
and neutral or the voltage difference from phase to phase.
In many areas, "delta" three phase service is common. Delta service has no distributed neutral
wire and is therefore less expensive. The three coils in the generator stator are in series, in a loop,
with the connections made at the three joints between the coils. Ground is provided as a low
resistance earth ground, sometimes attached to a synthetic ground made by a transformer in a
substation. High frequency noise (like that made by arc furnaces) can sometimes cause transients
on a synthetic ground.
In North America and Latin America, three phase service is often a Y (wye) in which the neutral
is directly connected to the center point of the generator stator windings. Wye service resists transients better
than delta, since the distributed neutral provides a low-resistance metallic return to the generator.
Wye service is recognizable when a grid has four wires, one of which is lightly insulated.
Many areas in the world use single phase 220 V or 230 V residential and light industrial service.
In this system, a high voltage distribution network supplies a few substations per city, and the
230 V power from each substation is directly distributed. A hot wire and neutral are connected to
the building from one phase of three phase service.
In the U.S. and parts of Canada and Latin America, split phase service is the most common. Split
phase provides both 120 V and 240 V service with only three wires. Split phase has substations
that provide intermediate voltage. The house voltages are provided by neighborhood
transformers that lower the voltage of a phase of the distributed three-phase. The neutral is
directly connected to the three-phase neutral. Socket voltages are only 120 V, but 240 V is
available for heavy appliances because the two halves of a phase oppose each other.
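The "two halves of a phase oppose each other" point can be shown numerically. In the sketch below (illustrative values; `split_phase` is a hypothetical helper), the two legs are the ends of a center-tapped transformer winding, so they are 180 degrees out of phase with respect to the neutral, and the leg-to-leg voltage is double the leg-to-neutral voltage.

```python
import math

def split_phase(t, v_rms=120.0, f=60.0):
    """Instantaneous voltages of the two legs of a North American split-phase
    service, measured with respect to the neutral (center tap)."""
    v_peak = v_rms * math.sqrt(2)
    w = 2 * math.pi * f
    leg_a = v_peak * math.sin(w * t)
    leg_b = v_peak * math.sin(w * t + math.pi)  # opposite end of the winding
    return leg_a, leg_b

t = 1 / 240  # a quarter cycle at 60 Hz: leg A at its positive peak
a, b = split_phase(t)
print(f"leg A to neutral: {a:+.0f} V, leg B to neutral: {b:+.0f} V, "
      f"leg A to leg B: {a - b:+.0f} V")  # twice the leg amplitude
```

A 120 V socket connects one leg to neutral; a 240 V appliance circuit connects leg to leg, needing no neutral current when the load is balanced.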
Japan has a large number of small industrial manufacturers, and therefore supplies standard low
voltage three phase service in many suburbs. Also, Japan normally supplies residential service as
two phases of a three phase service, with a neutral.
Rural services normally try to minimize the number of poles and wires. Single-wire earth return
(SWER) is the least expensive, with one wire. It uses high voltages, which in turn permit use of
galvanized steel wire. The strong steel wire permits inexpensive wide pole spacings. Other areas
use high voltage split-phase or three phase service at higher cost.
The least expensive network has the fewest transformers, poles, and wires. Some experts say that
this is three-phase delta for industrial, SWER for rural service, and 230 V single phase for
residential and light industrial. The system of three-phase wye feeding split phase is flexible and
somewhat more resistant to geomagnetic faults, but more expensive.
Two frequencies are in wide use. Using 60 Hz permits slightly smaller transformers and is
usually associated with 120 V wall sockets. Outside North America 50 Hz is more common and
is associated with 230 V wall sockets. Large electrical networks tightly control the line
frequencies. The short term accuracy is normally better than 0.1 Hz. The long term accuracy is
controlled by making up "lost" cycles so that electric clocks maintain correct time.
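The "lost cycles" bookkeeping behind clock accuracy can be sketched as follows (illustrative helper, not a real grid-control algorithm): a line-synchronous electric clock counts cycles, so any sustained frequency deviation appears directly as clock error, which the utility later cancels by running slightly fast or slow.

```python
def clock_error_seconds(actual_hz, nominal_hz=60.0, hours=24.0):
    """Time gained (+) or lost (-) by a line-synchronous electric clock that
    counts AC cycles, after running for the given number of hours."""
    seconds = hours * 3600
    cycles_counted = actual_hz * seconds
    cycles_expected = nominal_hz * seconds
    return (cycles_counted - cycles_expected) / nominal_hz

# Running 0.1 Hz slow for one hour loses six seconds of clock time,
# which the grid later makes up by running slightly above 60 Hz.
print(f"{clock_error_seconds(59.9, hours=1.0):+.1f} s")  # → -6.0 s
```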
1.3.2 Future - Smart Grid
1.3.2.1 Smart Distribution
A smart grid is a form of electricity network utilizing digital technology. A smart grid delivers
electricity from suppliers to consumers using two-way digital communications to control
appliances at consumers' homes; this saves energy, reduces costs and increases reliability and
transparency. It overlays the ordinary electrical grid with an information and net metering
system, that includes smart meters. Smart grids are being promoted by many governments as a
way of addressing energy independence, global warming and emergency resilience issues.
A smart grid is made possible by applying sensing, measurement and control devices with two-
way communications to electricity production, transmission, distribution and consumption parts
of the power grid that communicate information about grid condition to system users, operators
and automated devices, making it possible to dynamically respond to changes in grid condition.
A smart grid includes an intelligent monitoring system that keeps track of all electricity flowing
in the system. It also has the capability of integrating renewable electricity such as solar and
wind. When power is least expensive the user can allow the smart grid to turn on selected home
appliances such as washing machines or factory processes that can run at arbitrary hours. At peak
times it could turn off selected appliances to reduce demand.
Other names for a smart grid (or for similar proposals) include smart electric or power grid,
intelligent grid (or intelligrid), futuregrid, and the more modern intergrid and intragrid.
In principle, the smart grid is a simple upgrade of 20th century power grids which generally
"broadcast" power from a few central power generators to a large number of users, to instead be
capable of routing power in more optimal ways to respond to a very wide range of conditions,
and to charge a premium to those that use energy during peak hours.
The events to which a smart grid, broadly stated, could respond occur anywhere in the power
generation, distribution, and demand chain. Events may occur generally in the environment, e.g.,
clouds blocking the sun and reducing the amount of solar power or a very hot day requiring
increased use of air conditioning. They could occur commercially in the power supply market,
e.g., customers change their use of energy as prices are set to reduce energy use during high peak
demand. Events might also occur locally on the distribution grid, e.g., an MV transformer fails,
requiring a temporary shutdown of one distribution line. Finally these events might occur in the
home, e.g., everyone leaves for work, putting various devices into hibernation, and data ceases to
flow to an IPTV. Each event motivates a change to power flow.
Latency of the data flow is a major concern: some early smart meter architectures allow
delays as long as 24 hours in receiving the data, preventing any possible reaction by
either supplying or demanding devices.
1.3.2.2 Smart Energy Demand
Smart energy demand describes the energy-user component of the smart grid. It goes beyond, and
means much more than, energy efficiency and demand response combined. Smart energy
demand is what delivers the majority of smart meter and smart grid benefits.
Smart energy demand is a broad concept. It includes any energy-user actions to:
• reduce peak demand,
• shift usage to off-peak hours,
• lower total energy consumption,
• actively manage electric vehicle charging,
• actively manage other usage to respond to solar, wind, and other renewable resources, and
• buy more efficient appliances and equipment over time based on a better understanding of how energy is used by each appliance or item of equipment.
All of these actions minimize adverse impacts on electricity grids and maximize consumer
savings.
Smart Energy Demand mechanisms and tactics include:
• smart meters,
• dynamic pricing,
• smart thermostats and smart appliances,
• automated control of equipment,
• real-time and next-day energy information feedback to electricity users,
• usage-by-appliance data, and
• scheduling and control of loads such as electric vehicle chargers, home area networks (HANs), and others.
The amount of data required to monitor the grid and switch appliances off
automatically is very small compared with the amount already reaching even remote homes to support
voice, security, Internet, and TV services. Many smart grid bandwidth upgrades are paid for by
over-provisioning the network to also support consumer services, and by subsidizing the communications with
energy-related services or subsidizing the energy-related services, such as higher rates during
peak hours, with communications. This is particularly true where governments run both sets of
services as a public monopoly, e.g. in India. Because power and communications companies are
generally separate commercial enterprises in North America and Europe, it has required
considerable government and large-vendor effort to encourage various enterprises to cooperate.
Some, like Cisco, see opportunity in providing devices to consumers very similar to those they
have long been providing to industry. Others, such as Silver Spring Networks or Google, are data
integrators rather than vendors of equipment. While the AC power control standards suggest
powerline networking would be the primary means of communication among smart grid and
home devices, the bits may not reach the home via BPL initially but by fixed wireless. This may
be only an interim solution, however, as separate power and data connections simply defeat full
control.
2. Problems of intermittent resources
Intermittent energy source is a term usually used to refer to some sources of renewable energy,
such as wind and solar, (but not to geothermal generated electricity or hydroelectricity), because
these sources of electric power generation may be uncontrollably variable or more intermittent
than conventional power sources in normal operational conditions. Intermittency is a problem
related to dispatchability, or the ability to match the generated supply of electricity to actual
demand.
At present, the penetration of intermittent renewables in most power grids is low, but wind for
example generates 11% of electric energy in Spain and Portugal, 9% in the Republic of Ireland,
and 7% in Germany. Wind provides nearly 20% of the electricity generated in Denmark;
however, this level forces Denmark to import and export large amounts of energy to and
from the EU grid to balance supply with demand.
The use of small amounts of intermittent power has little effect on grid operations. Using larger
amounts of intermittent power may require upgrades or even a redesign of the grid infrastructure.
2.1 Intermittency and Penetration Limits
Electricity generated from wind power can be highly variable at several different timescales:
from hour to hour, daily, and seasonally. Annual variation also exists, but is not as significant.
Related to variability is the short-term (hourly or daily) predictability of wind plant output. Like
other electricity sources, wind energy must be "scheduled". Wind power forecasting methods are
used, but predictability of wind plant output remains low for short-term operation.
Because instantaneous electrical generation and consumption must remain in balance to maintain
grid stability, this variability can present substantial challenges to incorporating large amounts of
wind power into a grid system. Intermittency and the non-dispatchable nature of wind energy
production can raise costs for regulation, incremental operating reserve, and (at high penetration
levels) could require an increase in the already existing energy demand management, load
shedding, or storage solutions or system interconnection with HVDC cables. At low levels of
wind penetration, fluctuations in load and allowance for failure of large generating units require
reserve capacity that can also regulate for variability of wind generation. Wind power can be
replaced by other power stations during low wind periods. Transmission networks must already
cope with outages of generation plant and daily changes in electrical demand. Systems with large
wind capacity components may need more spinning reserve (plants operating at less than full
load).
Pumped-storage hydroelectricity or other forms of grid energy storage can store energy
generated during high-wind periods and release it when needed. Stored energy increases the
economic value of wind energy since it can be shifted to displace higher cost generation during
peak demand periods. The potential revenue from this arbitrage can offset the cost and losses of
storage; the cost of storage may add 25% to the cost of any wind energy stored, but it is not
envisaged that this would apply to a large proportion of wind energy generated. The 2 GW
Dinorwig pumped storage plant in Wales evens out electrical demand peaks, and allows base-
load suppliers to run their plant more efficiently. Although pumped storage power systems are
only about 75% efficient, and have high installation costs, their low running costs and ability to
reduce the required electrical base-load can save both fuel and total electrical generation costs.
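The storage arbitrage described above can be sketched numerically. The prices below are made up and capital and running costs are ignored; only the ~75% round-trip efficiency from the text is used.

```python
def arbitrage_profit_per_mwh(off_peak_price, peak_price, round_trip_eff=0.75):
    """Net revenue from buying 1 MWh of off-peak energy, storing it in pumped
    hydro, and selling what survives the round-trip efficiency at peak.
    Prices in $/MWh; capital and running costs are ignored."""
    cost = off_peak_price                   # buy 1 MWh to pump water uphill
    revenue = round_trip_eff * peak_price   # only ~0.75 MWh comes back out
    return revenue - cost

# Made-up prices: $20/MWh off-peak, $60/MWh on-peak.
print(f"${arbitrage_profit_per_mwh(20, 60):.2f} per MWh stored")  # → $25.00 per MWh stored
```

The sketch also shows why storage only pays when the peak/off-peak spread exceeds the efficiency loss: with a $50 off-peak price and the same $60 peak price, the result is negative.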
In particular geographic regions, peak wind speeds may not coincide with peak demand for
electrical power. In the US states of California and Texas, for example, hot days in summer may
have low wind speed and high electrical demand due to air conditioning. Some utilities subsidize
the purchase of geothermal heat pumps by their customers, to reduce electricity demand during
the summer months by making air conditioning up to 70% more efficient; widespread adoption
of this technology would better match electricity demand to wind availability in areas with hot
summers and low summer winds. Another option is to interconnect widely dispersed geographic
areas with an HVDC "Super grid". In the USA it is estimated that to upgrade the transmission
system to take in planned or potential renewables would cost at least $60 billion.
In the UK, demand for electricity is higher in winter than in summer, and so are wind speeds.
Solar power tends to be complementary to wind. On daily to weekly timescales, high pressure
areas tend to bring clear skies and low surface winds, whereas low pressure areas tend to be
windier and cloudier. On seasonal timescales, solar energy typically peaks in summer, whereas
in many areas wind energy is lower in summer and higher in winter. Thus the intermittencies of
wind and solar power tend to cancel each other somewhat. A demonstration project at the
Massachusetts Maritime Academy shows the effect. The Institute for Solar Energy Supply
Technology of the University of Kassel pilot-tested a combined power plant linking solar, wind,
biogas and hydrostorage to provide load-following power around the clock, entirely from
renewable sources.
A report on Denmark's wind power noted that the wind power network provided less than 1%
of average demand on 54 days during 2002. Wind power advocates argue that these periods
of low wind can be dealt with by simply restarting existing power stations that have been held in
readiness or interlinking with HVDC. Electrical grids with slow-responding thermal power
plants and without ties to networks with hydroelectric generation may have to limit the use of
wind power.
Three reports on wind variability in the UK, issued in 2009, generally agree that the variability of
wind needs to be taken into account but does not make the grid unmanageable, and that the
additional costs, which are modest, can be quantified.
A 2006 International Energy Agency forum presented costs for managing intermittency as a
function of wind energy's share of total capacity for several countries.
Intermittency inherently affects solar energy, as the production of electricity from solar sources
depends on the amount of light energy in a given location. Solar output varies throughout the day
and through the seasons, and is affected by cloud cover. These factors are fairly predictable, and
some solar thermal systems make use of heat storage to produce power when the sun is not
shining.
* Intermittency: In the absence of an energy storage system, solar does not produce power at
night.
* Capacity factor: photovoltaic solar in Massachusetts, 12–15%; photovoltaic solar in Arizona,
19%; thermal solar parabolic trough, 56%; thermal solar power tower, 73%.
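Capacity factor is simply delivered energy divided by the energy the plant would produce running continuously at nameplate rating. The sketch below defines it and checks an illustrative (made-up) photovoltaic example against the low-teens figure quoted for Massachusetts.

```python
def capacity_factor(energy_mwh, nameplate_mw, hours=8760):
    """Ratio of energy actually produced over a period (default: one year of
    8760 hours) to the energy the plant would produce at full nameplate output."""
    return energy_mwh / (nameplate_mw * hours)

# Illustrative: a 10 MW photovoltaic array producing 13,000 MWh in a year
# has a capacity factor of about 15%, in line with the Massachusetts range.
cf = capacity_factor(13_000, 10)
print(f"capacity factor: {cf:.1%}")
```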
The extent to which the intermittency of solar-generated electricity is an issue will depend to
some extent on the degree to which the generation profile of solar corresponds to demand. For
example, solar thermal power plants such as Nevada Solar One are somewhat matched to
summer peak loads in areas with significant cooling demands, such as the south-western United
States. Thermal energy storage systems can improve the degree of match between supply and
consumption. The increase in capacity factor of thermal systems does not represent an increase
in efficiency, but rather a spreading out of the time over which the system generates power.
2.2 Managing Wind Energy
Wind-generated power is a variable resource, and the amount of electricity produced at any
given point in time by a given plant will depend on wind speeds, air density, and turbine
characteristics (among other factors). If wind speed is too low (less than about 2.5 m/s) then the
wind turbines will not be able to make electricity, and if it is too high (more than about 25 m/s)
the turbines will have to be shut down to avoid damage. While the output from a single turbine
can vary greatly and rapidly as local wind speeds vary, as more turbines are connected over
larger and larger areas the average power output becomes less variable.
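The cut-in and cut-out speeds quoted above can be folded into a simplified power curve. In the sketch below, only the 2.5 m/s and 25 m/s thresholds come from the text; the rated power, rated speed, and cubic interpolation are illustrative assumptions, not data for any specific turbine.

```python
def turbine_power_kw(wind_speed, rated_kw=2000.0, cut_in=2.5,
                     rated_speed=12.0, cut_out=25.0):
    """Simplified wind-turbine power curve (speeds in m/s). Below cut-in and
    above cut-out the turbine produces nothing; between cut-in and rated speed
    output grows roughly with the cube of wind speed; above rated speed it is
    held at rated power."""
    if wind_speed < cut_in or wind_speed > cut_out:
        return 0.0
    if wind_speed >= rated_speed:
        return rated_kw
    # Cubic interpolation between cut-in and rated speed.
    frac = (wind_speed**3 - cut_in**3) / (rated_speed**3 - cut_in**3)
    return rated_kw * frac

for v in (1.0, 6.0, 12.0, 20.0, 26.0):
    print(f"{v:4.1f} m/s -> {turbine_power_kw(v):7.1f} kW")
```

The cubic region is why output varies so sharply with wind speed: a doubling of wind speed below rated speed raises output roughly eightfold, while above cut-out the turbine drops abruptly to zero.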
* Intermittence: A single wind turbine is highly intermittent. Theoretical arguments often
claim that a large wind farm spread over a geographically diverse area will, as a whole, rarely stop
producing power altogether; however, this is contradicted by the observed variability in total
power output of wind turbines installed in Ireland and Denmark.
* Capacity Factor: Wind power typically has a capacity factor of 20-40%.
* Dispatchability: Wind power is "highly non-dispatchable".
* Capacity Credit: At low levels of penetration, the capacity credit of wind is about the same
as the capacity factor. As the concentration of wind power on the grid rises, the capacity credit
percentage drops.
* Variability: Site dependent. Sea breezes are much more constant than land breezes.
* Reliability: A wind farm is in most cases highly reliable (although highly intermittent). That
is, the output at any given time will only vary gradually, due to falling wind speeds or storms (the
latter necessitating shutdowns). A typical wind farm is unlikely to have to shut down in less than
half an hour at the extreme, whereas an equivalent-sized power station can fail totally,
instantaneously, and without warning. The total shutdown of wind turbines is somewhat
predictable via weather forecasting.
According to a study of wind in the United States, ten or more widely separated wind farms
connected through the grid could be relied upon for 33 to 47% of their average output (15–
20% of nominal capacity) as reliable, baseload power, as long as minimum criteria are met for
wind speed and turbine height. When calculating the generating capacity available to meet peak
demand, ERCOT, which manages the Texas grid, counts wind generation at 8.7% of nameplate capacity.
Because wind power is generated by large numbers of small generators, individual failures do
not have large impacts on power grids. This feature of wind has been referred to as resiliency.
Wind power is affected by air temperature because colder air is more dense and therefore more
effective at producing wind power. As a result, wind power is affected seasonally (more output
in winter than summer) and by daily temperature variations. During the 2006 California heat
storm, output from wind power in California decreased significantly, to an average of 4% of
capacity for 7 days. A similar result was seen during the 2003 European heat wave, when the
output of wind power in France, Germany, and Spain fell below 10% during peak demand times.
According to an article in EnergyPulse, "the development and expansion of well-functioning
day-ahead and real time markets will provide an effective means of dealing with the variability
of wind generation."
2.3 Solving Intermittency
Mark Z. Jacobson has studied how wind, water and solar technologies can be integrated to
provide the majority of the world's energy needs. He advocates a "smart mix" of renewable
energy sources to reliably meet electricity demand:
Because the wind blows during stormy conditions when the sun does not shine and the sun
often shines on calm days with little wind, combining wind and solar can go a long way toward
meeting demand, especially when geothermal provides a steady base and hydroelectric can be
called on to fill in the gaps.
Technological solutions to mitigate large-scale wind energy intermittency exist, such as
increased interconnection (the proposed European super grid), demand response, load management,
diesel generators, frequency response and reserve service schemes of the kind operated by
National Grid, and use of existing power stations on standby. Studies by academics and grid
operators indicate that the cost of compensating for intermittency is expected to rise at
penetration levels above the low levels in use today. Large, distributed power grids are better
able to deal with high levels of penetration than small, isolated grids. For a hypothetical
European-wide power grid, analysis has shown that wind energy penetration levels as high as
70% are viable, and that the cost of the extra transmission lines would be only around 10% of the
turbine cost, yielding power at around present day prices. Smaller grids may be less tolerant to
high levels of penetration.
Matching power demand to supply is not a problem specific to intermittent power sources.
Existing power grids already contain elements of uncertainty including sudden and large changes
in demand and unforeseen power plant failures. Though power grids are already designed to
have some capacity in excess of projected peak demand to deal with these problems, significant
upgrades may be required to accommodate large amounts of intermittent power. The
International Energy Agency (IEA) states: "In the case of wind power, operational reserve is the
additional generating reserve needed to ensure that differences between forecast and actual
volumes of generation and demand can be met. Again, it has to be noted that already significant
amounts of this reserve are operating on the grid due to the general safety and quality demands
of the grid. Wind imposes additional demands only inasmuch as it increases variability and
unpredictability. However, these factors are nothing completely new to system operators. By
adding another variable, wind power changes the degree of uncertainty, but not the kind..."
3. Role of Climate and Weather Forecasting
Climate records and the accompanying histories of environmental influences can be used to help
plan and site many types of renewable resources. But the three primary intermittent resources,
wind, solar, and hydro, are the most affected by weather and benefit the most from weather
forecasting, and they will be the focus of this section.
3.1 Site selection
Solar irradiance maps, based upon measurements and the impact of clouds on irradiance, have been
constructed for solar siting. For wind, wind climatology maps based upon a combination of
observational data and physical models have made siting much more efficient, accurate, and cost
effective. Solar and wind maps do not eliminate the need for on-site solar and wind resource
measurement, but they can help utilities and developers gain a better understanding of where the
best resource areas are and screen out less promising areas, significantly reducing the cost and
time of prospecting. They can also be used by developers and landowners to make a first-cut
feasibility analysis for installing distributed solar plants and wind turbines to supply power
for homes, farms, and ranches.
Hydro power is heavily influenced by precipitation, but because the power is related to
streamflow, which depends on both precipitation and the steepness of the stream, the
precipitation influencing streamflow at any given moment may have fallen many miles away and even
years earlier. There is therefore no equivalent hydro power potential map tied to a single
variable such as precipitation.
3.2 Intermittent Resource Forecasting
3.2.1 General Methodology
The forecast "look-ahead period" can, for practical and physical reasons, be divided into several
time periods: minutes ahead, hours ahead, days ahead, and months or longer ahead. For minutes-ahead
forecasts, persistence methods are typically the best. For hours ahead, combinations of
statistical and physical approaches work best. For days ahead, the physical approach typically
gives the best performance, and for time periods beyond two weeks, climatological data adjusted
by major ocean-atmosphere cycles seems to be the most reliable. It is important to realize that
combining the methods of all time scales is essential to obtaining the most accurate forecasts,
but how the methods are combined changes with the look-ahead period. The figure below illustrates
the factors that must be captured based upon the time scale of the forecast.
How the Forecasting Problem Changes by Time Scale (© 2008 AWS Truewind, LLC)
Minutes ahead:
• Rapid and erratic evolution; very short lifetimes
• Large eddies, turbulent mixing transitions
• Mostly not observed by the current sensor network
• Forecasting tools: autoregressive trends
• Very difficult to beat a persistence forecast
• Need: very high-resolution 3-D data from remote sensing
Hours ahead:
• Sea breezes, mountain-valley winds, thunderstorms
• Rapidly changing, short lifetimes
• Current sensors detect existence but not structure
• Tools: mix of autoregressive methods with offsite data and NWP
• Outperforms persistence by a modest amount
• Need: high-resolution 3-D data from remote sensing
Days ahead:
• "Lows and highs", frontal systems
• Slowly evolving, long lifetimes
• Well observed with the current sensor network
• Tools: NWP with statistical adjustments
• Much better than a persistence or climatology forecast
• Need: more data from data-sparse areas (e.g. oceans)
3.2.1 Climatology and Persistence
There exists today a wealth of methods for the prediction of intermittent electrical energy
resource generation. The simplest are based on climatology, or averages of past production
values. They may be considered reference forecasting methods, since they are easy to implement,
and they serve as benchmarks when evaluating more advanced approaches. The most popular of these
reference methods is certainly persistence. This naive predictor, commonly described as
‘what you see is what you get’, states that the future wind generation will be the same as the
last measured value. Despite its apparent simplicity, this naive method can be hard to beat for
look-ahead times up to 4–6 hours ahead.
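The two reference predictors described above can be sketched in a few lines (the production series is hypothetical; real benchmarking would use measured wind farm output):

```python
def persistence_forecast(history, horizon):
    """'What you see is what you get': repeat the last measured value."""
    return [history[-1]] * horizon

def climatology_forecast(history, horizon):
    """Forecast the long-run average of past production values."""
    mean = sum(history) / len(history)
    return [mean] * horizon

# Hypothetical hourly wind farm output (MW):
observed = [42.0, 45.5, 47.0, 44.0, 46.5]
print(persistence_forecast(observed, 3))   # [46.5, 46.5, 46.5]
print(climatology_forecast(observed, 3))   # [45.0, 45.0, 45.0]
```

Any advanced forecasting method should be evaluated against both of these baselines before its added complexity is justified.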
Advanced Methods
Advanced approaches for short-term power forecasting require predictions of meteorological
variables as input. They then differ in the way those predictions are converted into predictions
of wind power production, through the so-called power curve. Such advanced methods are
traditionally divided into two groups: the physical and statistical approaches.
The first group, referred to as the physical approach, attempts to explicitly forecast the wind
flow at a wind farm using computer models based upon the laws of physics. In a typical 24-hour
forecast for a wind farm, about 10 trillion calculations are made using this method; with modern
computers, making this many calculations takes 20–30 minutes. The physical models are generally
very good for forecasts that begin 3–6 hours in the future, because they require a "spin-up"
period. For very short-term forecasts it is therefore essential that the physical method be
combined with persistence and statistical methods.
In parallel, the second group, referred to as the statistical approach, concentrates on capturing
the relationship between meteorological predictions (and possibly historical measurements) and
power output through statistical models whose parameters must be estimated from data, without
making any assumption about the physical phenomena.
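Both groups ultimately convert a meteorological forecast into power through the power curve. A minimal sketch, using linear interpolation over an illustrative (not manufacturer-supplied) curve with cut-in and cut-out behavior:

```python
# Illustrative power curve for a hypothetical 2 MW turbine:
# (wind speed m/s, output kW) points; below cut-in and above cut-out output is 0.
CURVE = [(3, 0), (5, 150), (8, 800), (12, 2000), (25, 2000)]
CUT_IN, CUT_OUT = 3.0, 25.0

def power_from_speed(v):
    """Convert a forecast wind speed (m/s) to power (kW) via the power curve."""
    if v < CUT_IN or v > CUT_OUT:
        return 0.0
    for (v0, p0), (v1, p1) in zip(CURVE, CURVE[1:]):
        if v0 <= v <= v1:
            # Linear interpolation between the bracketing curve points
            return p0 + (p1 - p0) * (v - v0) / (v1 - v0)
    return 0.0

speed_forecast = [6.5, 11.0, 26.0]   # m/s, from the meteorological model
power_forecast = [power_from_speed(v) for v in speed_forecast]
# 26 m/s exceeds cut-out, so the turbine contributes 0 kW at that hour.
```

Real systems fit the curve statistically from site data rather than using the manufacturer's nominal curve, since terrain and wake effects shift it.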
The most advanced prediction systems combine the statistical and physical approaches through
methodologies known as ensembling and model output statistics (MOS). In the ensemble method,
many models, or variations of a given model, are run at the same time to produce an average
forecast that also provides a range of probabilities of some condition occurring. In some cases a
weighting is applied to give certain models or model configurations more weight in the final
forecast because of superior past performance. The MOS methodology uses an adjustment factor
obtained by finding the typical bias of a model for a given meteorological situation and applying
the adjustment in the hope that it improves the forecast. On average, both methods have been
shown to provide better forecasts than a purely physical or simple statistical approach. The
figure below diagrams a typical intermittent resource forecasting system.
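As a sketch of how these combinations work, a performance-weighted ensemble and a MOS-style bias adjustment reduce to a few lines (the weights and bias values below are invented for illustration):

```python
def ensemble_forecast(member_forecasts, weights):
    """Performance-weighted average of several model forecasts."""
    total = sum(weights)
    return sum(f * w for f, w in zip(member_forecasts, weights)) / total

def mos_adjust(forecast, situation, bias_table):
    """Subtract the model's typical bias for the current meteorological situation."""
    return forecast - bias_table.get(situation, 0.0)

# Three models predict wind farm output (MW); the first has the best track record,
# so it receives the largest weight.
members = [120.0, 100.0, 110.0]
blended = ensemble_forecast(members, weights=[0.5, 0.2, 0.3])   # ~113 MW
# Historical verification (hypothetical) shows a +5 MW bias in post-frontal flow:
final = mos_adjust(blended, "post_frontal", {"post_frontal": 5.0})  # ~108 MW
```

The spread of the ensemble members (here 100–120 MW) is what supplies the probability range mentioned above.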
General Forecasting Method
Physics-based models use initial observed point conditions and the laws of physics to create a
mathematical model. Statistical model refinement analyzes history and calculates the likelihood
(correlations or probabilities) of a weather event occurring based upon current conditions; it is
often used to adjust "biases" in physics models (e.g. a terrain-caused cool/warm bias).
3.2.1 Hydro
In the case of hydro power resource forecasting, three modeling systems must be combined to create
one hydro forecasting system. The first is the atmospheric modeling system, which consists of both
statistical and physical components as described in the general methodology section. The system
must also have a hydrological model component and a hydro plant model, representative of the plant
of interest, that converts the streamflow forecast into actual power. Both the atmospheric and
hydrological modeling components can be built with physical, statistical, or combined
physical-statistical approaches. All of the processes of the hydrological cycle must be accounted
for in the hydrological component of the forecast system, as shown in the first figure below. The
second figure shows how one particular physical hydro model handles the hydrological modeling.
[Figure: the hydrological cycle, which must be forecast by the hydrological component of the forecast system.]
[Figure: the Distributed Hydrology Soil Vegetation Model (DHSVM), a physical hydrological model developed at the University of Washington.]
3.2.2 Wind forecasting
A wind power forecast is an estimate of the expected production of one or more wind turbines
(referred to as a wind farm) in the near future. By production is often meant available power for
the wind farm considered (with units kW or MW depending on the wind farm's nominal capacity).
Forecasts can also be expressed in terms of energy, by integrating power production over each time
interval. Forecasting of wind power generation may be considered at different time scales,
depending on the intended application:
• From milliseconds up to a few minutes, forecasts can be used for turbine active control. Such
forecasts are usually referred to as very short-term forecasts.
• For the following 48–72 hours, forecasts are needed for power system management or energy
trading. They may serve for deciding on the use of conventional power plants (unit commitment)
and for optimizing the scheduling of these plants (economic dispatch). Regarding the trading
application, bids are usually required during the morning of day d for day d+1, from midnight to
midnight. These forecasts are called short-term forecasts.
• For longer time scales (up to 5–7 days ahead), forecasts may be considered for planning the
maintenance of wind farms, conventional power plants, or transmission lines. For the specific
case of offshore wind farms, maintenance costs may be prohibitive, and thus optimal planning of
maintenance operations is of particular importance.
For the last two applications, the temporal resolution of wind power predictions ranges between
10 minutes and a few hours (depending on the forecast length). Lately, most work on improving
wind power forecasting solutions has focused on using more and more data as input to the models
involved, or alternatively on providing reliable uncertainty estimates along with the
traditionally provided point predictions.
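Converting a power forecast into the energy terms mentioned above is simply integration of power over each time interval. A sketch with a hypothetical 10-minute-resolution forecast:

```python
def energy_mwh(power_forecast_mw, step_hours):
    """Integrate a constant-per-interval power forecast (MW) into energy (MWh)."""
    return sum(p * step_hours for p in power_forecast_mw)

# Hypothetical 10-minute wind power forecast for one hour (MW):
forecast = [30.0, 33.0, 36.0, 36.0, 34.0, 31.0]
print(energy_mwh(forecast, step_hours=10 / 60))   # ~33.3 MWh over the hour
```

Energy bids for day-ahead trading are built exactly this way, interval by interval over the 24-hour delivery window.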
In the electricity grid, balance must be maintained at every moment between electricity
consumption and generation; otherwise disturbances in power quality or supply may occur. Wind
generation is a direct function of wind speed and, in contrast to conventional generation
systems, is not easily dispatchable. Fluctuations of wind generation thus receive a great amount
of attention. Variability of wind generation can be regarded at various time scales. First, wind
power production is subject to seasonal variations: it may be higher in winter in Northern Europe
due to low-pressure meteorological systems, or higher in summer in the Mediterranean regions
owing to strong summer breezes. There are also diurnal cycles, which may or may not be
substantial, mainly due to thermal effects. Finally, fluctuations are observed at the very
short-term scale (at the minute or intra-minute scale). The variations are not of the same order
for these three timescales. Managing the variability of wind generation is the key aspect of the
optimal integration of that renewable energy into electricity grids.
The challenges faced when wind generation is injected into a power system depend on the share of
that renewable energy. The basic concept of wind penetration describes the share of wind
generation in the electricity mix of a given power system. For Denmark, a country with one of the
highest shares of wind power in the electricity mix, the average wind power penetration over the
year is 16–20% (meaning that 16–20% of the electricity consumption is met by wind energy), while
the instantaneous penetration (that is, the instantaneous wind power production compared to the
consumption to be met at a given time) may be above 100%.
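The two penetration measures used for the Danish figures above can be computed directly (the production and load series here are invented for illustration, not Danish data):

```python
def average_penetration(wind_mwh, consumption_mwh):
    """Share of total consumption met by wind energy over a period."""
    return sum(wind_mwh) / sum(consumption_mwh)

def instantaneous_penetration(wind_mw, load_mw):
    """Wind production vs. consumption at one instant; can exceed 1.0 (100%)."""
    return wind_mw / load_mw

# Hypothetical hourly samples (MWh per hour):
wind = [300, 500, 1200, 900]
load = [2000, 1800, 1100, 1500]
print(average_penetration(wind, load))          # 0.453125, i.e. ~45% over the period
print(instantaneous_penetration(1200, 1100))    # ~1.09, i.e. above 100% in that hour
```

Note how the third hour alone exceeds 100% instantaneous penetration even though the period average is far lower; this gap is exactly why the two measures are reported separately.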
3.2.3 Solar Forecasting
The solar power forecasting system is fundamentally the same as the wind and hydro forecasting
systems with respect to the atmospheric modeling component. The differences are the addition of a
solar plant model and the need to focus on making accurate cloud forecasts. The need to forecast
clouds makes meteorological satellite imagery a very important tool in the solar forecasting
system, especially for short-term forecasts. Most advanced solar forecasting systems therefore
have a component that uses satellite images to determine cloud coverage, development trends, and
motion trends to improve the short-term forecasts.
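As a deliberately simplified sketch, satellite-derived cloud cover can enter the solar plant model as an attenuation of clear-sky irradiance (the clear-sky value, attenuation factor, panel area, and efficiency below are illustrative assumptions, not values from the text):

```python
def pv_output_mw(clear_sky_wm2, cloud_fraction, panel_area_m2,
                 efficiency=0.18, cloud_attenuation=0.75):
    """Estimate PV output by attenuating clear-sky irradiance for cloud cover."""
    irradiance = clear_sky_wm2 * (1.0 - cloud_attenuation * cloud_fraction)
    return irradiance * panel_area_m2 * efficiency / 1e6   # W -> MW

# Satellite-derived cloud fractions for the next three hours (hypothetical):
for cf in (0.0, 0.4, 0.9):
    print(round(pv_output_mw(900.0, cf, 200_000.0), 2))   # output falls as clouds grow
```

Operational systems replace the single attenuation constant with cloud-type-dependent transmission models, but the structure, cloud forecast in, plant power out, is the same.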
3.2.4 Forecasting Power Ramps
Power ramps are rapid changes, up or down, in available power generation. Both directions have
significant impacts on the grid, with down ramps having the greatest impact. For hydro power,
ramps are not as steep as they are for wind or solar generation, but there are issues,
particularly when heavy rains and flooding cause a plant to shut down.
Wind has many problems with ramps, which are caused by a multitude of reasons. One cause is the
passage of a large-scale front; these are fairly predictable. Another is changes in small-scale
circulations such as land-sea breezes. A particular challenge in mountainous regions is shallow
stable layers, caused by nighttime radiational cooling near the surface, which stabilize the
lower atmosphere and stop the winds. A very problematic wind ramp event occurs when very high
winds cause wind power production to go from maximum production to zero, as the wind exceeds the
operating (cut-out) threshold of a given turbine.
Solar power has ramps associated with rapidly changing cloud conditions. By far the most
difficult to predict, and the most problematic for power forecasts, are low-level "boundary
layer" clouds, which typically form rapidly on sunny days when the surface temperature reaches
some threshold, causing the lower atmosphere to become unstable. These clouds are often called
"fair weather clouds", but they cause rapid fluctuations in the power output of a solar system
(especially PV) as they pass over a solar plant, so they are hardly "fair" from a solar
perspective.
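Ramp events of the kind described in this section are commonly flagged by scanning a power series for step changes that exceed a threshold. A minimal detector (the threshold and data are illustrative):

```python
def find_ramps(power_mw, threshold_mw):
    """Return (index, delta) for each step change whose magnitude exceeds threshold.
    Negative deltas are down-ramps, which stress the grid the most."""
    return [(i, power_mw[i] - power_mw[i - 1])
            for i in range(1, len(power_mw))
            if abs(power_mw[i] - power_mw[i - 1]) >= threshold_mw]

# Hypothetical 10-minute wind farm output: high winds hit cut-out near the end.
output = [180.0, 185.0, 190.0, 188.0, 60.0, 0.0]
print(find_ramps(output, threshold_mw=50.0))   # [(4, -128.0), (5, -60.0)]
```

The two flagged steps show exactly the cut-out scenario described above: near-maximum production collapsing to zero within two intervals.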
4. Issues Associated with Renewables
4.1. Costs
Costs are a big factor. Alternative and renewable sources generally cost more to produce than
conventional sources (at least at the moment) for a variety of reasons. A major factor is simply
being new and not yet in the mainstream: typically, the cost to produce energy drops as more of
that energy is produced. Another problem is efficiency. It is not uncommon for an alternative
energy technology in the development stage to actually use more energy than it can create; this
was the case for solar until about 10 years ago. Economic, financial, and political pressures
also have a significant impact on costs.
4.2 Problem of Rare Earths
As defined by IUPAC, rare earth elements or rare earth metals are a collection of seventeen
chemical elements in the periodic table, namely scandium, yttrium, and the fifteen lanthanides.
Scandium and yttrium are considered rare earth elements since they tend to occur in the same ore
deposits as the lanthanides and exhibit similar chemical properties.
Despite their name, rare earth elements (with the exception of the highly unstable promethium)
are relatively plentiful in the Earth's crust, with cerium being the 25th most abundant element at
68 parts per million (similar to copper). However, because of their geochemical properties, rare
earth elements are not often found in concentrated and economically exploitable forms, generally
called rare earth minerals. It was the very scarcity of these minerals (previously called "earths")
that led to the term "rare earth". The first such mineral discovered was gadolinite, a mixture of
cerium, ytterbium, iron, silica, and other elements. This mineral was extracted from a mine in the
village of Ytterby, Sweden; many of the elements bear names derived from this location.
Until 1948, most of the world's rare earths were sourced from placer sand deposits in India and
Brazil. Through the 1950s, South Africa took the status as the world's rare earth source, after
large veins of rare earth bearing monazite were discovered there. Through the 1960s until the
1980s, the Mountain Pass rare earth mine in California was the leading producer. Today, the
Indian and South African deposits still produce some rare earth concentrates, but they are
dwarfed by the scale of Chinese production. China now produces over 97% of the world's rare
earth supply, mostly in Inner Mongolia, even though it has only 37% of proven reserves. All of
the world's heavy rare earths (such as dysprosium) come from Chinese rare earth sources such as
the polymetallic Bayan Obo deposit. In 2010, the USGS released a study which found that the
United States had 13 million metric tons of rare earth elements.
New demand has recently strained supply, and there is growing concern that the world may soon
face a shortage of the rare earths. In several years, worldwide demand for rare earth elements is
expected to exceed supply by 40,000 tonnes annually unless major new sources are developed.
These concerns have intensified due to the actions of China, the predominant supplier.
Specifically, China has announced regulations on exports and a crackdown on smuggling. On
September 1, 2009, China announced plans to reduce its export quota to 35,000 tons per year in
2010-2015, ostensibly to conserve scarce resources and protect the environment. On October 19,
2010 China Daily, citing an unnamed Ministry of Commerce official, reported that China will
"further reduce quotas for rare earth exports by 30 percent at most next year to protect the
precious metals from over-exploitation".
As a result of the increased demand and tightening restrictions on exports of the metals from
China, searches for alternative sources in Australia, Brazil, Canada, South Africa, and the United
States are ongoing. Mines in these countries were closed when China undercut world prices in
the 1990s, and it will take a few years to restart production as there are many barriers to entry.
One example is the Mountain Pass mine in California, which is projected to reopen in 2011.
Other significant sites under development outside of China include the Nolans Project in Central
Australia, the remote Hoidas Lake project in northern Canada, and the Mount Weld project in
Australia. The Hoidas Lake project has the potential to supply about 10% of the $1 billion of
REE consumption that occurs in North America every year. Vietnam signed an agreement in
October 2010 to supply Japan with rare earths from its northwestern Lai Châu Province.
Also under consideration for mining are sites such as Thor Lake in the Northwest Territories,
various locations in Vietnam, and a site in southeast Nebraska in the US, where Quantum Rare
Earth Development, a Canadian company, is currently conducting test drilling and economic
feasibility studies toward opening a niobium mine. Additionally, a large deposit of rare earth
minerals was recently discovered in Kvanefjeld in southern Greenland. Pre-feasibility drilling at
this site has confirmed significant quantities of black lujavrite, which contains about 1% rare
earth oxides (REO).
Another recently developed source of rare earths is electronic waste and other wastes that have
significant rare earth components. New advances in recycling technology have made extraction
of rare earths from these materials more feasible, and recycling plants are currently operating in
Japan, where there is an estimated 300,000 tons of rare earths stored in unused electronics.
4.3 Environmental considerations
Mining, refining, and recycling of rare earths have serious environmental consequences if not
properly managed. A particular hazard is mildly radioactive slurry tailings resulting from the
common occurrence of thorium and uranium in rare earth element ores. Additionally, toxic acids
are required during the refining process. Improper handling of these substances can result in
extensive environmental damage. In May 2010, China announced a major, five-month
crackdown on illegal mining in order to protect the environment and its resources. This
campaign is expected to be concentrated in the South, where mines are commonly small, rural,
and illegal operations particularly prone to release toxic wastes into the general water supply.
However, even the major operation in Baotou, in Inner Mongolia, where much of the world's
rare earth supply is refined, has suffered major environmental damage.
4.4 Geo-political considerations
China has officially cited environmental issues as one of the key factors for its recent regulation
on the industry, but non-environmental motives have also been imputed to China's rare earth
policy. According to The Economist, "Slashing their exports of rare-earth metals...is all about
moving Chinese manufacturers up the supply chain, so they can sell valuable finished goods to
the world rather than lowly raw materials." One possible example is the division of General
Motors which deals with miniaturized magnet research, which shut down its US office and
moved all of its staff to China in 2006.
It was reported, but officially denied, that China instituted an export ban on shipments of rare
earth oxides (but not alloys) to Japan on 22 September 2010, in response to the detainment of a
Chinese fishing boat captain by the Japanese Coast Guard. On September 2, 2010, a few days
before the fishing boat incident, The Economist reported that "China...in July announced the
latest in a series of annual export reductions, this time by 40% to precisely 30,258 tonnes".
China’s trade policy could force the renewables sector to its knees – an industry that has
registered 230% growth in global investments since 2005 and generated $162 billion in
investments last year alone. After all, this successful sector suffers from a crucial weakness: Rare
earth elements are essential ingredients in the production of many key technologies such as
specialized magnets for hybrid vehicles. China controls 97 percent of the world market in rare
earths and has announced that it will reduce exports by 72 percent in the second half of 2010. The
West urgently needs to act in order to maintain its competitiveness.
The exploitation of rare earths is tremendously harmful to the environment. This is the case in
particular in China, where numerous small actors engage – largely illegally – in their production.
It is hence not entirely justified to suspect that China’s export restrictions are primarily geared
toward hurting the West. Rather, the official Chinese policy needs to be seen in the context of a
nationalization and better control of the production process within the country itself. It is
crucially important for Beijing to rein in these environmentally destructive production methods
that will prove enormously costly in the long run. Moreover, China wants to share in the global
renewables boom and upgrade its production capacities. The country is attempting to use its near
monopoly in order to become more attractive as a production site for green technologies. It is in
this context that Chinese efforts to buy up rare earth deposits outside of China need to be seen as
well. China itself possesses only 37 percent of worldwide deposits. Not least because of its lax
environmental standards, the country has been able to exploit its resources more cheaply and
hence dominate the world market to date.
The West must succeed in becoming less dependent upon Chinese rare earth supplies, in order to
gain greater autonomy in the development of renewables. The Japanese firm Toyota for instance
has already begun securing its own sources abroad by investing in a Vietnamese mine. Like
Japan, Europe also lacks deposits domestically. However, Europe has not undertaken any steps
to assist industry in facing the challenges to be expected from the coming resource scarcity. By
contrast, the United States is attempting to make domestic mines, which have been unprofitable
in the past, attractive once again to investors. However, it is up to research and development
efforts to come up with alternatives to rare earth elements for the high-tech sector. Recycling
efforts must also be stepped up in order to guarantee a better exploitation of the rare earth
elements available. After all, this is not merely a question of economic competiveness, but also
one of national security, since the military depends on rare earth elements for numerous high-
tech gadgets as well.
4.5 Life cycle of renewables
Before new technologies enter the market, their environmental superiority over competing options must
be assessed based on a life cycle approach. However, when applying the prevailing status-quo Life Cycle
Assessment (LCA) approach to future renewable energy systems, one does not distinguish between
impacts which are ‘imported’ into the system due to the ‘background system’ (e.g. due to supply of
materials or final energy for the production of the energy system), and what is the improvement potential
of these technologies compared to competitors (e.g. due to process and system innovations or diffusion
effects). This paper investigates a dynamic approach towards the LCA of renewable energy technologies
and proves that for all renewable energy chains, the inputs of finite energy resources and emissions of
greenhouse gases are extremely low compared with the conventional system. With regard to the other
environmental impacts the findings do not reveal any clear verdict for or against renewable energies.
Future development will enable a further reduction of environmental impacts of renewable energy
systems. Different factors are responsible for this development, such as progress with respect to technical
parameters of energy converters, in particular, improved efficiency; emissions characteristics; increased
lifetime, etc.; advances with regard to the production process of energy converters and fuels; and
advances with regard to ‘external’ services originating from conventional energy and transport systems,
for instance, improved electricity or process heat supply for system production and ecologically
optimized transport systems for fuel transportation.
From http://www.nrel.gov/lci/about.html
The U.S. Life Cycle Inventory (LCI) Database is a publicly available database that allows users
to objectively review and compare analysis results that are based on similar data collection and
analysis methods.
Finding consistent and transparent LCI data for life cycle assessments (LCAs) is difficult. NREL
works with LCA experts to solve this problem by providing a central source of critically
reviewed LCI data through its LCI Database Project. NREL's High-Performance Buildings
research group is working closely with government stakeholders and industry partners to
develop and maintain the database. The 2009 U.S. Life Cycle Inventory (LCI) Data Stakeholder
meeting was an important step in the ongoing improvement of the database. Prior to that event,
NREL conducted a poll of current and potential users of the LCI database to help guide meeting
discussions and produce user feedback.
The drive toward sustainability and improved environmental decision-making is helping us recognize the
value of validated data about the environmental outcomes of our actions. Life cycle assessment (LCA)
provides a holistic, science-based analysis method for decision-makers in all sectors concerning policies,
product purchases, process performance, and education systems. The use of LCA for product and service
analysis is increasing and the demand for life cycle inventory (LCI) data is growing.
Quality LCAs require quality LCI data, which are not readily available. Often, LCA practitioners have to
develop their own data or modify data from other countries for U.S. conditions. The assumptions and
approaches used by different practitioners can lead to variable results. Having consistent U.S.-based data
is essential if we are to understand and manage the environmental impacts of U.S.-based processes.
The U.S. LCI Database project began in 2001, when the U.S. Department of Energy (DOE) directed the
National Renewable Energy Laboratory (NREL) and the Athena Institute to explore the development of a
national public database. The U.S. LCI Database was created and has been publicly available at
www.nrel.gov/lci since 2003. The project strives to provide publicly available LCI data following a
consistent protocol, thus allowing users to objectively review and compare data based on similar data
collection and analysis methods.
5. Renewables and the Environment
To combat global warming and the other problems associated with fossil fuels, the United States must
switch to renewable energy sources like sunlight, wind, and biomass. Not all renewable energy
technologies are appropriate to all applications or locations, however. As with conventional
energy production,
there are environmental issues to be considered. This paper identifies some of the key environmental
impacts associated with renewable technologies and suggests appropriate responses to them. A study by
the Union of Concerned Scientists and three other national organizations, America's Energy Choices,
found that even when certain strict environmental standards are used for evaluating renewable energy
projects, these energy sources can provide more than half of the US energy supply by the year 2030.
5.1 Wind Energy
It is hard to imagine an energy source more benign to the environment than wind power; it produces no
air or water pollution, involves no toxic or hazardous substances (other than those commonly found in
large machines), and poses no threat to public safety. And yet a serious obstacle facing the wind industry
is public opposition reflecting concern over the visibility and noise of wind turbines, and their impacts on
wilderness areas.
One of the most misunderstood aspects of wind power is its use of land. Most studies assume that wind
turbines will be spaced a certain distance apart and that all of the land in between should be regarded as
occupied. This leads to some quite disturbing estimates of the land area required to produce substantial
quantities of wind power. According to one widely circulated report from the 1970s, generating 20
percent of US electricity from windy areas in 1975 would have required siting turbines on 18,000 square
miles, or an area about 7 percent the size of Texas.
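The 7 percent figure can be checked with a line of arithmetic. The Texas land area used below (about 268,600 square miles) is an assumption supplied for this sketch, not a number from the report:

```python
# Check the quoted land-use estimate against the area of Texas.
TEXAS_SQ_MILES = 268_600   # assumed total area of Texas
SITED_SQ_MILES = 18_000    # area cited for 20 percent of US electricity

share_of_texas = SITED_SQ_MILES / TEXAS_SQ_MILES
print(f"{share_of_texas:.1%}")  # 6.7% - consistent with "about 7 percent"
```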
In reality, however, the wind turbines themselves occupy only a small fraction of this land area, and the
rest can be used for other purposes or left in its natural state. For this reason, wind power development is
ideally suited to farming areas. In Europe, farmers plant right up to the base of turbine towers, while in
California cows can be seen peacefully grazing in their shadow. The leasing of land for wind turbines, far
from interfering with farm operations, can bring substantial benefits to landowners in the form of
increased income and land values. Perhaps the greatest potential for wind power development is
consequently in the Great Plains, where wind is plentiful and vast stretches of farmland could support
hundreds of thousands of wind turbines.
In other settings, however, wind power development can create serious land-use conflicts. In forested
areas it may mean clearing trees and cutting roads, a prospect that is sure to generate controversy, except
possibly in areas where heavy logging has already occurred. And near populated areas, wind projects
often run into stiff opposition from people who regard them as unsightly and noisy, or who fear their
presence may reduce property values.
In California, bird deaths from electrocution or collisions with spinning rotors have emerged as a problem
at the Altamont Pass wind "farm," where more than 30 threatened golden eagles and 75 other raptors such
as red-tailed hawks died or were injured during a three-year period. Studies under way to determine the
cause of these deaths and find preventive measures may have an important impact on the public image
and rate of growth of the wind industry. In appropriate areas, and with imagination, careful planning, and
early contacts between the wind industry, environmental groups, and affected communities, siting and
environmental problems should not be insurmountable.
5.2 Solar Energy
Since solar power systems generate no air pollution during operation, the primary environmental, health,
and safety issues involve how they are manufactured, installed, and ultimately disposed of. Energy is
required to manufacture and install solar components, and any fossil fuels used for this purpose will
generate emissions. Thus, an important question is how much fossil energy input is required for solar
systems compared to the fossil energy consumed by comparable conventional energy systems. Although
this varies depending upon the technology and climate, the energy balance is generally favorable to solar
systems in applications where they are cost effective, and it is improving with each successive generation
of technology. According to some studies, for example, solar water heaters increase the amount of hot
water generated per unit of fossil energy invested by at least a factor of two compared to natural gas water
heating and by at least a factor of eight compared to electric water heating.
Materials used in some solar systems can create health and safety hazards for workers and anyone else
coming into contact with them. In particular, the manufacturing of photovoltaic cells often requires
hazardous materials such as arsenic and cadmium. Even relatively inert silicon, a major material used in
solar cells, can be hazardous to workers if it is breathed in as dust. Workers involved in manufacturing
photovoltaic modules and components must consequently be protected from exposure to these materials.
There is an additional, probably very small, danger that hazardous fumes released from photovoltaic
modules attached to burning homes or buildings could injure fire fighters.
None of these potential hazards is much different in quality or magnitude from the innumerable hazards
people face routinely in an industrial society. Through effective regulation, the dangers can very likely be
kept at a very low level.
The large amount of land required for utility-scale solar power plants, approximately one square
kilometer for every 20 to 60 megawatts (MW) generated, poses an additional problem, especially where wildlife
protection is a concern. But this problem is not unique to solar power plants. Generating electricity from
coal actually requires as much or more land per unit of energy delivered if the land used in strip mining is
taken into account. Solar-thermal plants (like most conventional power plants) also require cooling water,
which may be costly or scarce in desert areas.
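The one-square-kilometer-per-20-to-60-MW figure translates directly into land requirements for a plant of any size; the 1,000 MW plant below is a hypothetical example chosen only for illustration:

```python
# Land needed for a hypothetical 1,000 MW solar plant at the quoted densities.
MW_PER_KM2_LOW, MW_PER_KM2_HIGH = 20, 60   # from the 20-60 MW per km^2 figure
PLANT_MW = 1_000                           # assumed plant size

land_max_km2 = PLANT_MW / MW_PER_KM2_LOW   # least favorable density
land_min_km2 = PLANT_MW / MW_PER_KM2_HIGH  # most favorable density
print(f"{land_min_km2:.0f}-{land_max_km2:.0f} km^2")  # 17-50 km^2
```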
Large central power plants are not the only option for generating energy from sunlight, however, and are
probably among the least promising. Because sunlight is dispersed, small-scale, dispersed applications are
a better match to the resource. They can take advantage of unused space on the roofs of homes and
buildings and in urban and industrial lots. And, in solar building designs, the structure itself acts as the
collector, so there is no need for any additional space at all.
5.3 Geothermal Energy
Geothermal energy is heat contained below the earth's surface. The only type of geothermal energy that
has been widely developed is hydrothermal energy, which consists of trapped hot water or steam.
However, new technologies are being developed to exploit hot dry rock (accessed by drilling deep into
rock), geopressured resources (pressurized brine mixed with methane), and magma.
The various geothermal resource types differ in many respects, but they raise a common set of
environmental issues. Air and water pollution are two leading concerns, along with the safe disposal of
hazardous waste, siting, and land subsidence. Since these resources would be exploited in a highly
centralized fashion, reducing their environmental impacts to an acceptable level should be relatively easy.
But it will always be difficult to site plants in scenic or otherwise environmentally sensitive areas.
The method used to convert geothermal steam or hot water to electricity directly affects the amount of
waste generated. Closed-loop systems are almost totally benign, since gases or fluids removed from the
well are not exposed to the atmosphere and are usually injected back into the ground after giving up their
heat. Although this technology is more expensive than conventional open-loop systems, in some cases it
may reduce scrubber and solid waste disposal costs enough to provide a significant economic advantage.
Open-loop systems, on the other hand, can generate large amounts of solid wastes as well as noxious
fumes. Metals, minerals, and gases leach out into the geothermal steam or hot water as it passes through
the rocks. The large amounts of chemicals released when geothermal fields are tapped for commercial
production can be hazardous or objectionable to people living and working nearby.
At The Geysers, the largest geothermal development, steam vented at the surface contains hydrogen
sulfide (H2S), which accounts for the area's "rotten egg" smell, as well as ammonia, methane, and carbon
dioxide. Carbon dioxide is also expected to make up about 10 percent of the gases trapped in
geopressured brines. For each kilowatt-hour of electricity generated, however, the amount of
carbon dioxide emitted is still only about 5 percent of the amount emitted by a coal- or oil-fired power
plant.
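To make the 5 percent comparison concrete, assume a coal plant emits roughly 1.0 kg of CO2 per kilowatt-hour; that baseline is a ballpark assumed for this sketch, not a figure from the report:

```python
# Per-kWh CO2 from geothermal steam versus an assumed coal baseline.
COAL_KG_CO2_PER_KWH = 1.0    # assumed coal-plant emission rate (illustrative)
GEOTHERMAL_FRACTION = 0.05   # the "about 5 percent" quoted above

geo_kg = COAL_KG_CO2_PER_KWH * GEOTHERMAL_FRACTION
print(f"{geo_kg:.2f} kg CO2/kWh")  # 0.05 kg, versus 1.0 kg for coal
```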
Scrubbers reduce air emissions but produce a watery sludge high in sulfur and vanadium, a heavy metal
that can be toxic in high concentrations. Additional sludge is generated when hydrothermal steam is
condensed, causing the dissolved solids to precipitate out. This sludge is generally high in silica
compounds, chlorides, arsenic, mercury, nickel, and other toxic heavy metals. One costly disposal method
involves drying the sludge as thoroughly as possible and shipping it to licensed hazardous waste sites.
Research under way at Brookhaven National Laboratory in New York points to the possibility of treating
these wastes with microbes designed to recover commercially valuable metals while rendering the waste
nontoxic.
Usually the best disposal method is to inject liquid wastes or redissolved solids back into a porous stratum
of a geothermal well. This technique is especially important at geopressured power plants because of the
sheer volume of wastes they produce each day. Wastes must be injected well below fresh water aquifers
to make certain that there is no communication between the usable water and waste-water strata. Leaks in
the well casing at shallow depths must also be prevented.
In addition to providing safe waste disposal, injection may also help prevent land subsidence. At
Wairakei, New Zealand, where wastes and condensates were not injected for many years, one area has
sunk 7.5 meters since 1958. Land subsidence has not been detected at other hydrothermal plants in long-
term operation. Since geopressured brines primarily are found along the Gulf of Mexico coast, where
natural land subsidence is already a problem, even slight settling could have major implications for flood
control and hurricane damage. So far, however, no settling has been detected at any of the three
experimental wells under study.
Most geothermal power plants will require a large amount of water for cooling or other purposes. In
places where water is in short supply, this need could raise conflicts with other users for water resources.
The development of hydrothermal energy faces a special problem. Many hydrothermal reservoirs are
located in or near wilderness areas of great natural beauty such as Yellowstone National Park and the
Cascade Mountains. Proposed developments in such areas have aroused intense opposition. If
hydrothermal-electric development is to expand much further in the United States, reasonable
compromises will have to be reached between environmental groups and industry.
5.4 Biomass
Biomass power, derived from the burning of plant matter, raises more serious environmental issues than
any other renewable resource except hydropower. Combustion of biomass and biomass-derived fuels
produces air pollution; beyond this, there are concerns about the impacts of using land to grow energy
crops. How serious these impacts are will depend on how carefully the resource is managed. The picture
is further complicated because there is no single biomass technology, but rather a wide variety of
production and conversion methods, each with different environmental impacts.
5.5 Air Pollution
Inevitably, the combustion of biomass produces air pollutants, including carbon monoxide, nitrogen
oxides, and particulates such as soot and ash. The amount of pollution emitted per unit of energy
generated varies widely by technology, with wood-burning stoves and fireplaces generally the worst
offenders. Modern, enclosed fireplaces and wood stoves pollute much less than traditional, open
fireplaces for the simple reason that they are more efficient. Specialized pollution control devices such as
electrostatic precipitators (to remove particulates) are available, but without specific regulation to enforce
their use it is doubtful they will catch on.
Emissions from conventional biomass-fueled power plants are generally similar to emissions from coal-
fired power plants, with the notable difference that biomass facilities produce very little sulfur dioxide or
toxic metals (cadmium, mercury, and others). The most serious problem is their particulate emissions,
which must be controlled with special devices. More advanced technologies, such as the whole-tree
burner (which has three successive combustion stages) and the gasifier/combustion turbine combination,
should generate much lower emissions, perhaps comparable to those of power plants fueled by natural
gas.
Facilities that burn raw municipal waste present a unique pollution-control problem. This waste often
contains toxic metals, chlorinated compounds, and plastics, which generate harmful emissions. Since this
problem is much less severe in facilities burning refuse-derived fuel (RDF), pelletized or shredded paper
and other waste with most inorganic material removed, most waste-to-energy plants built in the future are
likely to use this fuel. Co-firing RDF in coal-fired power plants may provide an inexpensive way to
reduce coal emissions without having to build new power plants.
Using biomass-derived methanol and ethanol as vehicle fuels, instead of conventional gasoline, could
substantially reduce some types of pollution from automobiles. Both methanol and ethanol evaporate
more slowly than gasoline, thus helping to reduce evaporative emissions of volatile organic compounds
(VOCs), which react with heat and sunlight to generate ground-level ozone (a component of smog).
According to Environmental Protection Agency estimates, in cars specifically designed to burn pure
methanol or ethanol, VOC emissions from the tailpipe could be reduced 85 to 95 percent, while carbon
monoxide emissions could be reduced 30 to 90 percent. However, emissions of nitrogen oxides, a source
of acid precipitation, would not change significantly compared to gasoline-powered vehicles.
Some studies have indicated that the use of fuel alcohol increases emissions of formaldehyde and other
aldehydes, compounds identified as potential carcinogens. Others counter that these results consider only
tailpipe emissions, whereas VOCs, another significant pathway of aldehyde formation, are much lower in
alcohol-burning vehicles. On balance, methanol vehicles would therefore decrease ozone levels. Overall,
however, alcohol-fueled cars will not solve air pollution problems in dense urban areas, where electric
cars or fuel cells represent better solutions.
5.6 Greenhouse Gases
A major benefit of substituting biomass for fossil fuels is that, if done in a sustainable fashion, it would
greatly reduce emissions of greenhouse gases. The amount of carbon dioxide released when biomass is
burned is very nearly the same as the amount absorbed by the plants grown to replace the harvested
biomass. Thus, in a sustainable fuel cycle, there would be no net emissions of carbon dioxide, although
some fossil-fuel inputs may be required for planting, harvesting, transporting, and processing biomass.
Even so, if efficient cultivation and conversion processes are used, the resulting net emissions should be
small (around 20 percent of the emissions created by relying on fossil fuels alone). And if the energy needed to produce
and process biomass came from renewable sources in the first place, the net contribution to global
warming would be zero.
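The carbon accounting described in this paragraph can be written as a simple balance. All quantities below are arbitrary illustrative units, not measured values:

```python
# Net CO2 balance for a sustainable biomass fuel cycle (illustrative units).
combustion = 100.0       # CO2 released by burning the biomass
regrowth_uptake = 100.0  # CO2 reabsorbed by the replacement plantings
fossil_inputs = 20.0     # CO2 from fossil fuels used to plant, harvest, transport

net = combustion - regrowth_uptake + fossil_inputs
print(net)  # 20.0 - about 20 percent of an all-fossil system, as noted above

# With renewable energy powering cultivation and processing instead:
print(combustion - regrowth_uptake)  # 0.0 - zero net contribution
```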
Similarly, if biomass wastes such as crop residues or municipal solid wastes are used for energy, there
should be few or no net greenhouse gas emissions. There would even be a slight greenhouse benefit in
some cases, since, when landfill wastes are not burned, the potent greenhouse gas methane may be
released by anaerobic decay.
5.7 Implications for Agriculture and Forestry
One surprising side effect of growing trees and other plants for energy is that it could benefit soil quality
and farm economies. Energy crops could provide a steady supplemental income for farmers in off-seasons
or allow them to work unused land without requiring much additional equipment. Moreover, energy crops
could be used to stabilize cropland or rangeland prone to erosion and flooding. Trees would be grown for
several years before being harvested, and their roots and leaf litter could help stabilize the soil. The
planting of coppicing, or self-regenerating, varieties would minimize the need for disruptive tilling and
planting. Perennial grasses harvested like hay could play a similar role; soil losses with a crop such as
switchgrass, for example, would be negligible compared to annual crops such as corn.
If improperly managed, however, energy farming could have harmful environmental impacts. Although
energy crops could be grown with less pesticide and fertilizer than conventional food crops, large-scale
energy farming could nevertheless lead to increases in chemical use simply because more land would be
under cultivation. It could also affect biodiversity through the destruction of species habitats, especially if
forests are more intensively managed. If agricultural or forestry wastes and residues were used for fuel,
then soils could be depleted of organic content and nutrients unless care was taken to leave enough wastes
behind. These concerns point up the need for regulation and monitoring of energy crop development and
waste use.
Energy farms may present a perfect opportunity to promote low-impact sustainable agriculture, or, as it is
sometimes called, organic farming. A relatively new federal effort for food crops emphasizes crop
rotation, integrated pest management, and sound soil husbandry to increase profits and improve long-term
productivity. These methods could be adapted to energy farming. Nitrogen-fixing crops could be used to
provide natural fertilizer, while crop diversity and use of pest parasites and predators could reduce
pesticide use. Though such practices may not produce as high a yield as more intensive methods, this
penalty could be offset by reduced energy and chemical costs.
Increasing the amount of forest wood harvested for energy could have both positive and negative effects.
On one hand, it could provide an incentive for the forest-products industry to manage its resources more
efficiently, and thus improve forest health. But it could also provide an excuse, under the "green" mantle,
to exploit forests in an unsustainable fashion. Unfortunately, commercial forests have not always been
soundly managed, and many people view with alarm the prospect of increased wood cutting. Their
concerns can be met by tighter government controls on forestry practices and by following the principles
of "excellent" forestry. If such principles are applied, it should be possible to extract energy from forests
indefinitely.
5.8 Impact of Hydropower
The development of hydropower has become increasingly problematic in the United States. The
construction of large dams has virtually ceased because most suitable undeveloped sites are under federal
environmental protection. To some extent, the slack has been taken up by a revival of small-scale
development. But small-scale hydro development has not met early expectations. As of 1988, small
hydropower plants made up only one-tenth of total hydropower capacity.
Declining fossil-fuel prices and reductions in renewable energy tax credits are only partly responsible for
the slowdown in hydropower development. Just as significant have been public opposition to new
development and environmental regulations.
Environmental regulations affect existing projects as well as new ones. For example, a series of large
facilities on the Columbia River in Washington will probably be forced to reduce their peak output by
1,000 MW to save an endangered species of salmon. Salmon numbers have declined rapidly because the
young are forced to make a long and arduous trip downstream through several power plants, risking death
from turbine blades at each stage. To ease this trip, hydropower plants may be required to divert water
around their turbines at those times of the year when the fish attempt the trip. And in New England and
the Northwest, there is a growing popular movement to dismantle small hydropower plants in an attempt
to restore native trout and salmon populations.
That environmental concerns would constrain hydropower development in the United States is perhaps
ironic, since these plants produce no air pollution or greenhouse gases. Yet, as the salmon example makes
clear, they affect the environment. The impact of very large dams is so great that there is almost no
chance that any more will be built in the United States, although large projects continue to be pursued in
Canada (the largest at James Bay in Quebec) and in many developing countries. The reservoirs created by
such projects frequently inundate large areas of forest, farmland, wildlife habitats, scenic areas, and even
towns. In addition, the dams can cause radical changes in river ecosystems both upstream and
downstream.
Small hydropower plants using reservoirs can cause similar types of damage, though obviously on a
smaller scale. Some of the impacts on fish can be mitigated by installing "ladders" or other devices to
allow fish to migrate over dams, and by maintaining minimum river-flow rates; screens can also be
installed to keep fish away from turbine blades. In one case, flashing underwater lights placed in the
Susquehanna River in Pennsylvania direct night-migrating American shad around turbines at a
hydroelectric station. As environmental regulations have become more stringent, developing cost-effective
mitigation measures such as these has become essential.
Despite these efforts, however, hydropower is almost certainly approaching the limit of its potential in the
United States. Although existing hydro facilities can be upgraded with more efficient turbines, other
plants can be refurbished, and some new small plants can be added, the total capacity and annual
generation from hydro will probably not increase by more than 10 to 20 percent and may decline over the
long term because of increased demand on water resources for agriculture and drinking water, declining
rainfall (perhaps caused by global warming), and efforts to protect or restore endangered fish and wildlife.