America’s infrastructure is being colonized by a new logic: little, flexible, fast, adaptive, local—the polar opposite of the way things have been up until now. This kind of thinking, and the activities that follow from it, confound the powers that be. If you want to understand just how much, think about the extent to which the U.S. military is flummoxed when beset by smaller, more mobile, more local, more flexible, more adaptive, and more creative militant forces.
As a later speaker in the day’s proceedings would point out, 60 percent of the people who run our electricity system are within five years of retirement.
tidy list of solutions to the grid’s known woes: use smart grid technologies, curb customer demand, end peak demand, develop grid-scale storage, add a nationwide extra-high-voltage DC/AC transmission network, reduce line congestion, encourage interregional cooperation, develop interoperability standards, increase government investment, train a new generation of grid operators, and integrate large numbers of electric vehicles. This is the “solutions” laundry list, and a pretty thorough one, especially if we add deployable energy efficiency to the mix.
They just bring to light a problem that has been characteristic of our grid for more than half a century: it was made to be managed according to a command and control structure. There was to be total monopolistic control on the supply side of the great electric loop—which included generation, transmission, and distribution networks—and ever-increasing yet always-predictable consumption on the customer side of things.
From these inauspicious beginnings we got a national grid with power plants far from view, long loping lines between us and them and, nearer at hand, distribution networks strung through neighborhoods that link individual houses by means of pole-top transformers to the system as a whole. That this is how electricity works in America and pretty much everywhere else in the industrial world is not the logical outcome of physics; it’s the product of cultural values, historical exigencies, governmental biases, and the big money dreams of financiers.
In other words, the notion of consumer or mass culture, in which all people are promised access to all things, was in part the result of the universalization of electricity and not the other way around.
The singular advantage of alternating current is that low voltages, made at the generator, can be “stepped up” to much higher voltages by means of a transformer—a simple device made of two sets of tightly coiled copper wires that almost, but don’t quite, touch. Higher voltages can go farther than lower voltages. It has, if you will, a higher quotient of desire, it “wants” harder, and thus is propelled farther. The transformer is a simple if ingenious means by which this “stepping up” and “stepping down” of voltage is accomplished with almost no loss in efficiency. It only works, however, if the current traveling through it “alternates,” which is to say, moves in a series of rapidly reversing waves.
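The physics behind "higher voltages go farther" can be put in plainer terms: a transformer conserves power, so raising the voltage lowers the current, and the heat lost in a wire falls with the square of that current. A minimal sketch, with every number an illustrative assumption rather than a figure from the text:

```python
# Why "stepping up" voltage lets power travel farther. An ideal transformer
# conserves power (P = V * I), so raising voltage lowers current, and
# resistive line loss (I^2 * R) falls with the square of that current.
# All figures below are made up for illustration.

def line_loss_watts(power_w, voltage_v, line_resistance_ohms):
    """Resistive loss for delivering `power_w` at `voltage_v` over a line."""
    current = power_w / voltage_v          # I = P / V
    return current ** 2 * line_resistance_ohms

power = 1_000_000       # 1 MW to deliver
resistance = 10         # ohms of line, an assumed figure

low = line_loss_watts(power, 2_400, resistance)     # generator-level voltage
high = line_loss_watts(power, 240_000, resistance)  # stepped up 100x

# Stepping voltage up 100x cuts current 100x and loss 10,000x.
print(f"loss at 2.4 kV: {low:,.0f} W")
print(f"loss at 240 kV: {high:,.0f} W")
```

At the low voltage the wire would burn off more energy than it delivers; at the stepped-up voltage the loss is trivial, which is the whole case for alternating current.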
By the early 1890s regular people, industrialists, white-collar workers, small business owners, corporate managers, manufacturers, traction companies, investors, and inventors had all grown fond of electricity and what it could do. Nevertheless, most also found the wild way of building an infrastructure deeply unsatisfactory, with four different kinds of current (DC, single-phase AC, dual-phase AC, and polyphase AC), two different intensities of lighting system (arc and incandescent) that relied on different wiring logics (series and parallel), seven different possible voltages, nine different rates of oscillation, and up to forty competing electric companies (depending on the size of the market), plus all the folks who’d opted out of public power and installed private plants. The result was predictable: not only were there too many wires, but the lack of standardization made it difficult for anyone to make any money,
The hydroelectric plant at Niagara Falls was thus the closing bell on the effervescent, chaotic, immensely creative and inventive activity of the previous seventeen years: 1879, the first arc light grid in San Francisco; 1882, the first low-voltage direct current grid in New York; 1887, the first alternating current grid; 1891, proven long-distance high-voltage transmission. And in 1896, the completion of the first large-scale generating station at Niagara Falls, together with the first long-distance transmission wires in constant use,
All of these developments combined to confound the economic truisms of power production: the price of electricity had always dropped; the cost of making electricity had also always dropped, while plant efficiency had always risen and electricity consumption too had always risen. That all of these factors changed at once was a series of blows to the utilities not at all unlike the final moments of a boxing match. Their surprise was that of the man about to hit the mat, caught so thoroughly off guard by a sudden volley of well-placed fists that he failed to protect himself against them, flailing a little bit more as each blow hit home.
The seemingly natural law of grow-and-build, supported all along by a gentleman’s agreement on the part of the utilities to be regulated and on the part of the government to regulate, in ways not too onerous to the utilities’ profit margins, had begun to break down. This happened slowly with the Carter administration, and then much faster with Reagan-era deregulation.
There are 3,306 electrical utility companies in the United States, yet two thirds of us (68.5 percent) pay our bill to one of the 189 big for-profit, investor-owned leviathans.
a minor sub-clause of a sub-act of an omnibus bill called the National Energy Act that only just squeezed through Congress after significant cajoling and administrative arm twisting. It passed in the House by a single vote (207–206) in the autumn of 1978. What this clause said was simply that the utilities would need to buy, and move to market, electricity produced by any facility with an output of less than 80 MW (about a tenth of what might have been produced by an average nuclear power plant at the time).
they would be obliged to pay a nonmiserly rate for it. This rate would be set according to what were called “avoided costs”—that is, the money it would have cost the utility to make that precise amount of electricity for themselves.
At the end of his lone term in office we had our nation’s first ever National Energy Plan (the National Energy Act was its legislative form), a Department of Energy instead of the more than fifty unrelated government offices previously charged with overseeing national energy policy, a Strategic Petroleum Reserve to help smooth wildly vacillating oil prices caused by our national dependence on foreign oil, and the National Renewable Energy Laboratory, whose mandate was to explore and make feasible renewable and small, decentralized power options.
What is less common knowledge is that the monopoly has an echo on the consumer side of the business world called a “monopsony”: a market with only a single buyer for a product.
This was what PURPA reversed: The utilities could still be monopolies, but they couldn’t be monopsonies anymore. The utilities now had to buy power from entities making small amounts of electricity in their territory; and they had to pay the same rate for this independently produced power as it would have cost them to make it themselves. This second clause notably used the market to force small power producers to be more
PURPA was, in other words, a smart bit of anti-monopsony regulation. So long regulated as monopolies, the utilities were not initially alarmed that they would now be regulated as monopsonies as well. This doesn’t mean they were exactly happy about the terms, it’s just that they were almost myopically concerned with losing their ability to offer so-called promotional rates.
accurately estimating cost, profits, and price only became exponentially more difficult when applied to the future, which PURPA was now asking them to predict.
where in some states upwards of 90 percent of the new generation contracted during the first decade post-PURPA came from cogeneration, in California it would be wind that stole the show.
By the early 1990s, as construction was completed on most of the remaining ISO4 contracts, California had about 1,700 MW of wind projects in place—all in 80 MW chunks. “By 1990, California had become home of 85% of the world’s capacity of electricity powered by the wind and 95% of the world’s solar power electricity.”
Unwittingly Ronald Reagan had created one of the weirdest marriages American business has ever seen, as Manhattan investment bankers scrambled to buy up wind turbines made by commune-living, Vietnam-era draft dodgers. The result was that California, by the mid-1980s, had a massive wind bubble, ripe for popping.
Small was not only beautiful but efficient, and, as it has turned out, cost-effective.
What PURPA made clear was that this monopoly structure, at least on the supply side of the system, was a real and proven detriment to the efficient and humane functioning of the business as a whole.
The Energy Policy Act mandated absolute competition in the wholesale power market. In many states this also came with an obligation for total, or significant, divestiture in generating stations. From this point forward the main way for the utilities to make money would be by transporting, delivering, and metering electricity—rather than by producing it.
the greatest threat to the security and reliability of our electrical infrastructure is foliage.
Or, to put it differently, one doesn’t try to eliminate the holes in the Swiss cheese—by compacting all cheese into cheddar, for example—but rather to keep the holes in the cheese from lining up, from becoming one big hole that runs through the entirety of the loaf.
In order to understand why the grid has become so much less stable since the early 2000s (and it has), it is important to return again to a more careful consideration of the aftereffects of the Energy Policy Act and accompanying Order 888. Much like the 1996 deregulation bill in California (which made Enron momentarily very rich and that state by equal measure poor), the Energy Policy Act did not separate generation from transmission and distribution just for shits and giggles. It did so for a reason, and that reason was energy trading. The act turned electricity into a commodity—a thing like any other—while the grid, electricity’s infrastructure, was reconceptualized in law as something rather like a box or shipping container. Conceptually if not actually, electricity is now made and sold in units to whoever pays the best price and then shipped to them by means of the grid.
Electrons, from the point of view of the market, have never looked so much like pork bellies or pig iron. The idea behind the act was twofold. First, it would liberalize and thus also reform (by means of the market) electricity production.
The introduction of market forces to grid management has, after some initial bumps in the road, also had a profound effect both on how much electricity we as a nation use (less) and on the way that electricity moves through the grid (farther).
In May 2000, a mere two months after the order that utilities “wheel” electric power rather than make it went into force, the number of Transmission Loading Relief Procedures on the East Coast’s grid was six times what it had been a year earlier.
Historically, utilities made money when people used electricity; the more we used the more money they made. Now they don’t. Today’s utilities make money by transporting power and by trading it as a commodity. While they are still charged with keeping America’s power supply reliable they have a real incentive to sell electricity to whomever will pay the most for it wherever they may be. Long-distance wheeling is to their benefit; it is to the plant owners’ benefit; it is to the energy traders’ benefit; in theory, at least, it is also to our benefit. In fact, the only thing that really suffers from this arrangement is the grid.
If we follow the flow of modern money, a new terrain of investment and of privation emerges into view. And where money doesn’t go, where people don’t want to spend, starts to give us a good idea of which bits of our grid are given to falling apart. This is the landscape of collapse—imminent and actual. Money doesn’t go to the upkeep of the fleet of old, lumbering power plants trudging toward retirement that nevertheless still form the backbone of America’s electrical generation
Producers don’t need vars, and utilities, no longer in the business of producing power, reactive or otherwise, do need them to keep things stable on the lines. They don’t, however, want to buy them because they can’t be resold or otherwise rendered a viable product; there is no futures market in vars, for example.
The link between these surges (and blackouts and brownouts) and the absence of an invisible non-product from the electric grid is a simple reality about which most of us are entirely clueless.
What constitutes a “better world” is of course always up for grabs. But that’s America too. If some people want to get rich while others want to put an end to global warming, well, let them duke it out in the marketplace. That’s effectively what the Energy Policy Act has made possible.
soon. The utilities, having lost, irrevocably, their iron grip on the electricity business, are being forced by circumstances to look elsewhere for stable revenue streams. Demand-side reform, it turns out, is one of the lone domains left to them.
We aren’t (yet) inventing our way around the wires, but we are changing our habits to make them less necessary.
the smart grid makes it possible to shift consumption around the clock. It doesn’t reduce consumption; in fact, quite the opposite: the utilities would be pleased if everyone used more electricity than they currently do. Rather, what the smart grid does is change the time at which consumption takes place.
Following the model provided by the telecommunications industry, the utilities would like to remake electricity into a new, more easily graspable commodity while remaking themselves into providers of services and gadgets. If they can manage this double task they will stay alive.
This, then, is exactly the problem. The utilities don’t know how to upgrade existing technology without putting themselves out of business. Nor do they know how to continue with the existing infrastructure without going out of business. As a compromise, most utilities in the country are opting to install the smart meters but not to provide consumers with the rest of what the Petersons got. At least not on their dime. A smart meter allows for smart appliances, such as an air conditioner or thermostat, that can be set in conjunction with a utility’s promotional-rate structure to encourage time-of-day use that smooths out peak demand. It also turns out, to almost nobody’s surprise, that smart meters allow the utility to remotely control electricity use.
The smart meter is the only part of the SmartGridHouseCarNanoGridComboPack that is actually necessary to the utilities, because, to borrow the words of the technology journalist Glenn Fleishman, “shedding 5 to 10 percent of their load at peak times on demand could reduce or eliminate turning to the expensive spot power market or powering up dirty old power plants. Shaving that usage can have enormously disproportionate cost and environmental savings.” Peak load is the utilities’ nightmare; it happens once or twice every year, and it’s always a scramble to make sure things don’t go disastrously awry every single time.
but since they provide 20 percent of American power with exceptional reliability, they are basically big machines for transforming uranium into refrigeration.
Other things, such as lights—11 percent of domestic power, but 26 percent of commercial electrical consumption—go on at predictable points in time. Utilities chart sunrise and sunset, opening hours and closing hours. They have a good idea of who is going to need the lights on, when, and where.
Still other things are culturally predictable. Between five and six P.M., Americans tend to come home from work. When this happens we use all kinds of electrical devices we weren’t using before:
All of these things add up to a fairly substantial jump in demand from just after the close of the workday until about ten P.M., at which point demand begins a slow downward slide that ends at four A.M.—the hour of minimum load.
To meet this steady bump in demand, power plants ramp up everywhere in America just before five P.M. Unfortunately, this is also when the wind tends to slow down and, at certain times of year, when the sun begins to set, making renewables without backup storage the least useful means of producing power at the most necessary moment of the day.
Because we still lack a good system for storing renewable power, America’s evenings are powered by coal, natural gas, and the ever-present baseline of nuclear.
Demand is usually within limits the utility can meet. In fact, 98 percent of the time this is the case. The problem is that 10 percent of the utilities’ resources are devoted to the other 2 percent of the time. The few days a year that are causing all the problems are the reason utilities like Xcel, CenterPoint, PG&E, and ConEd are paying so much money to wire up their service districts with smart meters. These are the days when demand is neither modest nor predictable.
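The 98-percent-versus-2-percent split is worth pausing over. Reduced to hours (the arithmetic below is mine, using only the text's percentages), it shows just how few hours a year drive the smart-meter spending spree:

```python
# Back-of-envelope arithmetic, using the text's percentages: how few hours
# a year account for the utilities' peak-load problem, and how lopsided
# the spending on those hours is.

HOURS_PER_YEAR = 8760

peak_share = 0.02      # demand exceeds ordinary limits 2% of the time...
resource_share = 0.10  # ...yet consumes 10% of utility resources

peak_hours = HOURS_PER_YEAR * peak_share
print(f"hours of trouble per year: {peak_hours:.0f}")
print(f"each peak hour costs {resource_share / peak_share:.0f}x "
      f"an average hour in resources")
```

Roughly a week's worth of hours, scattered across the year, commands five times its share of the system's resources, which is why those hours are the ones the utilities most want to engineer away.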
“Anywhere there’s air-conditioning, smart grids will likely prosper.” This is not just because these devices use a lot of electricity (they do), but because everybody uses them at the same time and because when it’s very hot outside the utilities are already having a difficult time for a variety of reasons: long-distance wheeling goes up, spot markets get expensive, and lines sag and grow less efficient. Add to this that the utilities are in fact running a much tighter ship than they ever did. “In the need to stay competitive,” says industry expert, critic, and innovator Massoud Amin, “many energy companies and the regional grid operators who work with them have been ‘flying’ the grid with less and less margin of error. This means keeping costs down, not investing sufficiently in new equipment, and not building new transmission
With such systemic “shock absorbers,” as Amin calls them, in place, no day was ever too hot, no peak load too pointy, for the grid’s infrastructure to absorb. Granted, there were fewer people in the United States twenty-five years ago, fewer air conditioners, and even fewer disastrously hot days. But even given these changes, had utility investment in infrastructure kept pace with population growth and GDP growth, peak-load days would simply not be the sort of panic-inducing and blackout-causing affairs that they are today.
In a way the utilities are right in paring back. Having 30 percent of one’s power plants just sitting there 359 or so days a year, and 30 percent of the carrying capacity of one’s lines equally left unused “just in case,” is wasteful in all kinds of ways. The issue is not the tightness of the ship, or even the desire to shave and shave at the margins of the system. The problem is peak load itself. The phenomenon is what needs to be done away with, and without a viable means to store electricity the only way to control it is to control the part of the system that creates it—us.
We make peak demand with our air conditioners (and less often with our heaters). This is why the utilities robocall, why they are installing all those smart meters, why they want control over home air-conditioning, and why they prefer we all vacuum at midnight. Each
Soft energy technologies, the adoption of which they considered to be the first necessary step toward ensuring energy security in the United States, have five defining characteristics. First, they rely on renewable energy resources, like wind and solar, but also biomass, geothermal, wave, and tidal power. Second, they are diverse and designed to function with maximum effectiveness within specific circumstances. Third, they are flexible and relatively simple to understand. Fourth, they should be matched to end-use needs in terms of scale, and fifth, they should also be matched to end use in terms of quality. All of this is nested within a larger cultural commitment to energy efficiency that is built into our structures and our life-ways from the ground up. For the Lovinses, the soft energy path is not about privation but thoughtful, thorough integration of energy use into social life.
Drawing on the study of biological systems rather than technological ones, the Lovinses argued that an organism’s longevity consistently relies upon “local back-up, local autonomy, and a preference for small over large scale and for diversity over homogeneity.” All of these things increase resilience. It’s a strong argument, but one very much at odds with the prevailing modes of imagining and designing energy
Unlike Edison’s private plants, these modern microgrids can connect to and disconnect from the big grid (which is now increasingly known as the “macrogrid”) as needed. And, unlike any system since the consolidation of power in the early twentieth century, these microgrids work perfectly well in “island” mode.
Our grid could just as well be an amalgamation of ten thousand microgrids as a single system. So long as microgrids can function interoperably with each other, and do so most of the time, they are indistinguishable from the end user’s point of view from the grid we already have. The difference, “resiliency,” rears its redeemer’s head only in moments of crisis when a microgrid’s capacity to operate autonomously from the big grid
Despite the emphasis on multiple forms of power generation and multiple users (or, at the very least, multiple meters), these littlish, publicish grids, especially those built into urban areas, are designed to remain connected to the big grid most of the time. It is only in moments of duress that they disconnect and run on their own.
The military’s most immediate goal, according to Phillip Jenkins, is to cut in half the amount of fuel it takes to support a marine in the field—from eight gallons a day to four. Anything that can be done to eliminate the necessity of diesel generators, and reduce the amount of oil necessary to feed them on the field of battle, strengthens—adds resiliency, flexibility, and mobility to—the war effort. Mobile, matte, lightweight, and diversified systems for keeping the lights on, the data safe, and the troops cool are critical to mission success. For while some of this fuel is poured into gas tanks, a lot of it is used to make electricity.
Small is, quite suddenly, not only beautiful, it’s also reliable.
the load carried by a “dismounted warfighter” now ranges from 65 to 95 pounds, almost half of which is either portable electronic devices or the batteries needed to run them. On a four-day mission, both these numbers rise. The pack goes up to 150 pounds, and the batteries alone constitute a third of this weight.
The weight of the technology meant to ease victory on the field of battle now structures and limits our strategic options, and rarely in ways that make our soldiers safer. In addition to being heavy, all of this technology is also wasteful and expensive.
more than 80 percent of the energy needed to power devices like computer displays, infrared sights, global positioning systems, night vision, and other sensor technologies each soldier carries comes from disposable batteries. A brigade “will consume as much as seven tons of batteries in a 72-hour mission at a cost of $700,000.”
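The soldier-load and battery figures quoted above reduce to striking per-unit numbers with a little arithmetic (the conversions below are mine, not the text's):

```python
# Working out the per-soldier and per-day implications of the text's
# figures; the arithmetic and conversions are mine.

# A four-day mission: pack weight and battery share quoted in the text.
pack_lbs = 150
battery_lbs = pack_lbs / 3          # "the batteries alone constitute a third"
print(f"batteries per soldier, four-day mission: {battery_lbs:.0f} lbs")

# A brigade's 72-hour battery bill.
tons_of_batteries = 7
cost_dollars = 700_000
print(f"battery cost per ton: ${cost_dollars / tons_of_batteries:,.0f}")
print(f"battery cost per day: ${cost_dollars / 3:,.0f}")
```

Fifty pounds of batteries on one back, and nearly a quarter-million dollars of disposable batteries per brigade per day, is the logistics problem the microgrid programs below are meant to solve.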
At home the consequences are less extreme but the situation is not much better. Here, the most common, expensive, and disruptive forms of power outage are not the big storm-blown blackouts, but those of five minutes or less. These are rampant. Two thirds of the annual cost of outages in the United States are caused by those lasting less than five minutes, because of “the high frequency of momentary outages relative to sustained outages.” Lots of little outages are disastrous for any industry that needs constant access to information networks and for which electricity maintains security, including electric door locks, key pads, metal detectors, surveillance cameras, and so on.
Though the issues facing military installations and personnel might at first glance seem different—supply chains too long, batteries too heavy, existing electricity networks too brittle—in many ways the solution has been the same. The military too is turning to microgrids, of various sizes and structures, depending upon the specific needs of any given installation. It is estimated that in the next five years, the U.S. military will have added twenty-one microgrids to the twenty they already operate in the United States, more than doubling their electricity production to 578 megawatts.
But “eskimoing” the tents in the desert (as this process is called) is just the tip of the conservation iceberg. Reliance upon both generators and portable batteries needs to be drastically reduced.
the Department of Defense has proposed a set of adaptations that bear much in common with the soft path. They are aiming for a mix of generation, storage, and efficiency measures that include, but are not wholly reliant upon, renewables or, for that matter, diesel; that are flexible, which
The solutions being implemented by the military are thus a surprising blend of technologies. For example, the Tactical Alternating Current System (or TACS) includes:

• 2.8–3.1 kW solar array
• power management center
• battery bank
• 4 kW AC inverter
• 4 kW backup generator

The main source of power generation is a “clean, silent … lightweight flexible and durable solar array. Excess power from the solar array is stored in a battery bank for nighttime and cloudy day use and backup power is supplied by a generator connected to the system’s control unit.”
For larger applications, the DoD is also field-testing a “mobile power station” called, in high-acronymic fashion, the THEPS or “Transportable Hybrid Electric Power Station.” Much like the TACS, this combines “rigid solar panels, a wind turbine, storage batteries, and an augmenting diesel generator to guarantee continuous power during prolonged periods when wind or solar alone do not meet power requirements.” But here is where it gets interesting. “THEPS provides, on average, 5kW of power output depending on the weather conditions. The inclusion of the diesel generator means the warfighter is not entirely freed from fuel logistics; however, even this challenge can be overcome if a system such as THEPS can obtain its diesel fuel via an in-situ resource such as biomass conversion.” In other words, kitchen garbage and latrine sludge, burbling away in a specially designed tank about the size of a boxcar, generating biogas (farts and moonshine, or methane and ethanol) that can be siphoned off and used to run the generator and, not incidentally, power the cookstoves. They have called this the TGER (Tactical Garbage to Energy Refinery), a nicely self-contained digestion machine that the army has spent three years and $850,000 developing.
In 2014 there were seventy-seven serious power outages reported in the United States due to severe weather, another seventeen due to fuel shortages (usually the result of supply-chain problems, like congestion on the rails), and sixty-six that were the result of physical attack, only two of which were cyber attacks—a new problem that follows from computerized infrastructure.
These activities, the tinkering as much as more formal constructions, are collectively known as grid edge, a term that encompasses everything from hooking up a generator to a house or adding some solar panels to the garage roof or using a 50-MW acronymically named S.P.I.D.E.R.S. microgrid for your base, to building a wind farm, a substation, and private lines into your corporate headquarters.
success, fusion has, at least since the 1950s, been the promised end point of our energy woes. As a “clean” energy source, fusion produces no long-lived radioactive waste, it uses a harmless isotope of hydrogen found in ordinary water as its fuel, and the reactor that produces it can’t melt down. With fusion we would have a limitless source of nonpolluting power. If Lockheed Martin wins the race toward a fusion-powered future, we will even get a smallish reactor that can be both mass-produced and strategically deployed.
The problem is that it takes about as much electricity to run a fusion reactor as that reactor produces.
Today the grail is less a new way to make power than it is to find a really good way to store it. Engineering a means to effectively store electricity is not a new problem. It’s been a priority for those involved in making, and making money off of, electricity since the Insull days. Figuring out this problem has risen up to grail-level urgency today because every dream for a cleaner energy future that involves a lot of renewables requires that we have some way to put aside the too much electricity they are capable of making.
What all of them have in common is a gut-felt understanding that a way to store electricity is necessary, if we are to meet the future with open arms and an optimist’s heart.
At the moment there is only one battery on our grid: a 22,000-square-foot, 1,300-ton nickel-cadmium battery that was built outside Fairbanks in 2003.
Ninety-five percent of the electricity “stored” in the United States is guarded in this way—22 gigawatts’ worth, or the equivalent of 2 percent of our national generating capacity. Pumped hydro works great in places with hills and dammed rivers. It’s less great in the prairie, swamp, or desert states. From Nevada all the way to Indiana, there are effectively none. And where the South flattens out, so, too, do the pumped hydro stations thin and disappear.
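The physics of pumped hydro is nothing more exotic than water weight times height. A minimal sketch of the energy ledger, with every figure an illustrative assumption rather than a number from the text:

```python
# Pumped hydro stores electricity as the gravitational potential energy of
# water pumped uphill, E = m * g * h, recovered by letting the water fall
# back through a turbine. Real plants return roughly 70-80% of the energy
# put in; every number here is an assumption for illustration.

G = 9.81  # gravitational acceleration, m/s^2

def stored_energy_mwh(water_kg, head_m, round_trip_efficiency=0.75):
    """Recoverable energy from water raised `head_m` meters uphill."""
    joules = water_kg * G * head_m * round_trip_efficiency
    return joules / 3.6e9   # joules -> megawatt-hours

# One million cubic meters of water (1e9 kg) pumped up a 300 m hill:
mwh = stored_energy_mwh(1e9, 300)
print(f"recoverable energy: {mwh:,.0f} MWh")
```

The need for both a large reservoir and several hundred meters of vertical drop is exactly why, as the passage notes, the flat states have effectively none.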
storage as a hill near an existing reservoir. Alabama and Mississippi have salt domes. Starting just north of Mobile and stretching all the way under southern Mississippi is a warren of natural salt caverns long used for dumping toxic chemical waste. These can be just as profitably used to store compressed air. This is precisely what the CAES plant, in McIntosh, Alabama, does. When electricity is cheap or there is too much of it, usually at night, the excess is used to condense air and force it into these caverns. Then, during the day, when demand is high, the air is released. As it expands, or decompresses, it spins a turbine to regenerate an electric current. Unlike pumped hydro, however, this air isn’t storable indefinitely.
Compressed air is a twenty-four-hour affair. Electricity is used to “charge” the plant during off-peak hours, and decompressing air is used instead of coal to make electricity during peak demand the next day. Unlike a battery—the workings of which a compressed air plant mimics—a mechanical rather than a chemical “charge-discharge” system doesn’t wear out with cycling: the plant itself is good pretty much indefinitely.
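Economically, that daily charge-discharge cycle is a price arbitrage: buy cheap night power, sell it back at the peak. A sketch of the ledger, where the prices and the round-trip efficiency are assumptions for illustration, not figures from the text:

```python
# The "charge at night, discharge at peak" cycle, reduced to its economics.
# Prices and efficiency below are illustrative assumptions.

def daily_arbitrage(mwh_discharged, off_peak_price, peak_price,
                    round_trip_efficiency=0.5):
    """Net revenue from one charge-discharge cycle.

    A storage plant gives back only a fraction of the electricity used to
    charge it, so it must buy more MWh at night than it sells by day.
    """
    mwh_charged = mwh_discharged / round_trip_efficiency
    cost = mwh_charged * off_peak_price
    revenue = mwh_discharged * peak_price
    return revenue - cost

# Sell 1,000 MWh at a $90/MWh peak, having charged at $20/MWh off-peak:
net = daily_arbitrage(1_000, off_peak_price=20, peak_price=90)
print(f"net for the day: ${net:,.0f}")
```

The cycle pays only as long as the day-night price spread exceeds the efficiency losses, which is why storage plants live or die by the shape of the daily demand curve.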
“flow” batteries—a grail many feel is worth questing for—seem to hold the promise of almost twenty years of rechargeability. Flow batteries are, if rumors are correct, only a year or two in the future.
An array of mirrors is situated in a sunny place, usually a desert, with all of their angles adjusted such that their individual beams of sunlight are directed at a looming central tower filled with something rather like table salt, which liquefies at right around 530 degrees Fahrenheit.
The sun’s daytime heat is stored in the liquid salt until needed, and then that heat is used to boil water, which drives a normal steam turbine to generate electricity.
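How much electricity a tank of hot salt can hold follows from the basic heat equation, Q = m·c·ΔT, discounted by the steam cycle's efficiency. A sketch, in which every number is an assumption for illustration rather than a figure from the text:

```python
# Thermal storage arithmetic for a molten salt tower: heat banked in the
# salt is Q = m * c * dT, and only a fraction of that heat becomes
# electricity in the steam turbine. All numbers are illustrative.

def storage_mwh_electric(salt_kg, delta_t_c, specific_heat=1500,
                         steam_cycle_efficiency=0.4):
    """Electric output from cooling `salt_kg` of molten salt by `delta_t_c`.

    specific_heat ~1500 J/(kg*C) is typical of nitrate "solar salt";
    a steam turbine converts roughly 40% of the heat to electricity.
    """
    heat_joules = salt_kg * specific_heat * delta_t_c
    return heat_joules * steam_cycle_efficiency / 3.6e9  # J -> MWh

# 10,000 tonnes of salt allowed to cool through a 280 C temperature swing:
mwh = storage_mwh_electric(1e7, 280)
print(f"storable electric output: {mwh:,.0f} MWh")
```

A plausible plant-sized tank thus banks a few hundred megawatt-hours, enough to keep the turbine spinning well after sunset.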
Grid-scale storage, an element we need in order to integrate significant variable generation into the grid and to deal with our market proclivities for selling and shipping electricity as if it were a regular commodity, is limited to this: some artificial lakes, one compressed air plant, three molten salt towers, eight solar trough plants, and a lot of dreams about batteries. Each of the existing systems is a custom-designed one-off. We may be able to store electricity for all of southern Alabama, but we can’t yet do it for the few solar facilities that make the meager 0.8 percent of that state’s power.
All of this new activity is happening on the bit of the grid designed for distribution—the low-voltage wires slung between homes and pole tops in residential neighborhoods. Ironically, this is also the weakest part of the system, the most likely to give way, and the least well kept up.
Meanwhile transmission systems—those long high-voltage lines that stretch between distant power plants where electricity was once solely made and the urban centers where it is still mostly used—are much less prone to outages.
Since 2001, investment in high-voltage transmission infrastructure has held steady at about 7 percent a year, while investment in low-voltage distribution networks, including smart meters, is below the necessary threshold for basic maintenance.
In 2013 Germany’s two largest utilities lost a collective $6 billion as many of that country’s corporate entities got off the grid altogether. In the United States we raise electricity prices on poor and transient populations (people who don’t own houses, mostly); in Germany, however, they raise rates on businesses and manufactories.
All of this would also be playing out differently if only there was a way for the utilities to put some of this locally made solar power and farm-made wind power aside for evenings and winters, calm and cloudy days.
if one can look beyond the battery, the work being done on this front is both prodigious and intensely imaginative.
government subsidies for kooky but viable ideas (called ARPA-E), and state and federal funds set aside for infrastructural upgrade.
And mimicking familiar urban “skins” seems to hold a great deal of promise for the adoption of new electricity-storage technologies, both large and small. Three familiar forms are getting most of the attention from consumers and the press; these are the air conditioner, the office tower, and the car.
All of this is accomplished using about the same amount of daytime electricity as a ceiling fan. It’s an icebox and, effectively, also an electricity box—nighttime electricity stored in the form of ice.
A conversation about storage today is 85 percent a conversation about present-day and future battery technologies and 15 percent a conversation about weird ideas that somebody made work once, someplace suspect, like Alberta.
Parts of it will even function like an office building, but for the most part it will just be thousands upon thousands of stacked lithium-ion batteries, capable of delivering up to 400 MW of electricity (though it can run at this rate for only four hours). And though it might look like an office building to us, from the grid’s point of view it might as well be a normal gas-fired power station. This project maintains one of the distinct advantages of batteries over the other kinds of mechanical, or even chemical, storage on the market: namely, its output can be scaled up or down by the millisecond, useful for balancing out the power generated by solar and wind being pumped into the grid by all those rooftop panel owners and wind farm conglomerates.
In order to work, each of a battery’s two “terminals” has to be made from a different kind of metal separated by an electrolyte. Any number of things can serve as an electrolyte, from soda pop or a potato to sulfuric acid or even ceramic, though various kinds of salts and acids generally work best. Regardless of which electrolyte and which metals one chooses, a battery works because positive ions move in one direction through the electrolyte, effectively peeling off infinitesimal bits of the metal from one pole and sticking them to the other. Electrons simultaneously move in the opposite direction, not through the electrolyte but around it, through the external circuit; that flow of electrons is the current we put to use.
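The “peeling off infinitesimal bits of the metal” is not a metaphor; it can be counted, using Faraday’s law of electrolysis. Zinc is chosen here only as a familiar electrode metal, and the current and duration are arbitrary:

```python
# How much metal does a discharge actually move? A worked example using
# Faraday's law. Zinc electrode, one amp, one hour -- all illustrative choices.

FARADAY = 96_485.0         # coulombs of charge per mole of electrons
ZINC_MOLAR_MASS_G = 65.38  # grams per mole of zinc
ELECTRONS_PER_ION = 2      # each zinc atom gives up two electrons (Zn -> Zn2+)

def metal_dissolved_grams(current_amps, seconds):
    """Mass of zinc stripped from one terminal during a discharge."""
    charge_coulombs = current_amps * seconds
    moles_of_electrons = charge_coulombs / FARADAY
    moles_of_zinc = moles_of_electrons / ELECTRONS_PER_ION
    return moles_of_zinc * ZINC_MOLAR_MASS_G

# One amp flowing for one hour moves about 1.22 grams of zinc:
print(f"{metal_dissolved_grams(1.0, 3600):.2f} g")
```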
distributed battery systems as grid-scale storage. Vermont’s Green Mountain Power will be offering Powerwalls to customers at a very reasonable price (about forty dollars a month) if the customer agrees to share the battery’s storage capacities with the utility. It will serve as host to both homemade power and grid-made power, for rainy days and long, dark nights.
Better still, cars are used the least at night. The mass adoption of electric vehicles would thus help to provide a significant nighttime load that could be discharged during daytime hours when these same vehicles are sitting in parking lots.
This evening peak is much harder for a utility to deal with when the sun is powering more than about 25 percent of the grid.
But in the morning and the evening those same consumers turn to the grid for extra electricity. The result is a demand profile that looks like a duck’s back, rising at the tail and neck and dipping in the middle.
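The duck takes shape as soon as you subtract solar output from demand, hour by hour. The hourly figures below are invented for illustration, not real load data from any utility:

```python
# A toy "duck curve": net load is what the utility must supply after
# rooftop solar has done its midday work. All numbers are hypothetical.

demand = [3.0, 2.8, 2.7, 2.7, 2.9, 3.3, 3.8, 4.2, 4.3, 4.3, 4.2, 4.2,
          4.1, 4.1, 4.2, 4.4, 4.8, 5.4, 5.8, 5.6, 5.1, 4.5, 3.8, 3.3]  # GW
solar  = [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.4, 0.9, 1.5, 2.0, 2.3, 2.5,
          2.5, 2.4, 2.1, 1.6, 1.0, 0.4, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0]  # GW

net_load = [d - s for d, s in zip(demand, solar)]

# The duck's belly: net load sags at midday while the sun is strong...
print(f"midday net load (noon): {net_load[12]:.1f} GW")
# ...and its neck: a steep evening ramp as the sun sets and people come home.
ramp = net_load[19] - net_load[14]
print(f"afternoon-to-evening ramp: +{ramp:.1f} GW in five hours")
```

That evening ramp is the part that makes grid operators nervous: a large amount of conventional generation has to be brought online in a few hours, every single day.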
In this way, owning an electric car becomes like owning a little money factory. All you have to do is make sure it’s always plugged in and the algorithms do the rest. At the end of the month you get a check in the mail. This admittedly has a certain appeal. It’s also largely the same system that led to the rampant adoption of rooftop solar. You make electricity and either pay nothing or get a tiny check every so often from the utility that is buying it from you. It’s a good investment. And yet, so far, it hasn’t worked.
Like the American scheme, this was also a good idea and it also didn’t work, in part because neither the cars nor the batteries were good enough yet. According to the Danish climate minister (yes, they have one of these), the cars themselves are not ready: “We need longer range and lower prices before this becomes a good option,” he said. “Technology needs to save us.”
The easiest way to begin operationalizing “V2G” technologies is to do it with fleets, all of which park together in the evenings, lessening infrastructure costs for the installation of two-directional smart charging and also vastly simplifying the economics of figuring out how to pay a utility customer residing in one service area for the power she supplies to the grid, or takes from it, while parked in a different utility’s service area. Fleets should be the first adopters and yet they aren’t.
As a result, the DoD, which operates a fleet of 200,000 nontactical vehicles, is working to convert them all to electric vehicles with vehicle-to-grid technologies designed in from the start.
As such, it is, according to the air force’s Dr. Gorguinpour, “the largest operational V2G demonstration in the world.” Even these few vehicles can, in a pinch, provide more than 700 kilowatts of power to the grid (about 150 houses’ worth).
Batteries, especially V2G-enabled car batteries, may not turn out to be the answer, despite the constancy of their hold over the minds of today’s dreamers. That we are looking hard for a grail, sketching out and even prototyping a lot of different ideas, is itself an acknowledgment that we stand at the edge of something. Not the end of an infrastructure, though it could go that far, but certainly at the beginning of a new century’s imagination of how that infrastructure might be adapted to suit us better.
Storage, whatever forms it will take in the end, is not the holy grail because it helps to balance the grid we have (though this is the story we like to tell ourselves). It’s the holy grail because it allows us to build an electric world that functions otherwise, that has the flexibility to move and change with whatever the twenty-first century will throw at us. Or, more correctly, whatever remarkable, impossible things we will build into our own near future.
Infrastructure should, according to the design guru Donald Norman, fade into invisibility. It should be made to disappear from sight and equally to disappear from consciousness. It should be quiet, task-specific, and unobtrusive. We shouldn’t notice it, we shouldn’t think about it, and we shouldn’t seek it.
I don’t think we can “get” the grid without an intense care for the cultural context and microdramas that are playing out between technologies, people, government policy, and corporate concern for the bottom line all the time.
The future promises an even more thorough integration of electricity into our lives, more data (which is, after all, just electricity), more “smart” things (coming to populate the Internet of Things), and the elimination of fuel from cars, necessary if we’d like to stop global warming before it exceeds the 2-degrees-Celsius disaster line. Most important, we’d like this means of “being electric” to come from nothing, to be transmitted by nothing, to cause no damage, and to work always and everywhere.
One recent author described the project of overhauling the grid as akin to “rebuilding our entire airplane fleet, along with our runways and air traffic control system while the planes are all up in the air, filled with passengers.”
Given all this, the best routes forward are those that take the mess of competing interests seriously and design for them. This range of interests should not be limited to investors, visionaries, legislators, utilities, regulators, and all the other folks that have an active interest in the grid; it also needs to include people who don’t care a whit for the business of electricity. Figuring out how to design a system for maximum inclusivity is harder than it sounds, in part because it’s difficult for any one player in the giant tangle of our grid to have much comprehension about what motivates the others.
For example, in all my research into the grid I have never heard a utility customer referred to in the feminine. When speaking of the users of electricity, the (mostly) men who make the system work, and the (mostly) men who push at it and try to invent a way beyond it, imagine a nation of users who are also men. This is necessarily only ever half true. But if the quiet but undeniable fact of gender has not percolated up into the consciousness of those who make, and remake, our grid, what else is being lost? Attentiveness to the details, and not just aggregate data, is critical to the effective reform of an…
How to deal with the combined interests of many different players—which does, and should, include global warming. How to deal with the legacy technology, which is to say the grid we’ve got. And how to deal with the fact that it’s made and run by…
a more practicable solution would be to design something radically integrative. To err, at every moment, toward inclusivity and to design for the easy incorporation of as many different interests as possible. This will mean a clear set of obligatory standards that twist the arms of even the most stubborn players toward interoperability. It will probably also require legal and regulatory intervention. Plus we will need to find a way to pay for the most basic elements of the infrastructure, the wires and…
Coming up with a good system for grid-scale storage, with its capacity to unlink generation from consumption, is one way of pushing the grid toward a more open and…
A second, increasingly popular means for translating interests of different kinds into a single system is to rely upon a platform—an integrative computer program rather than a gadget. In order to help ensure that our grid is wrenched out of its current workings, this platform would need to be open to all the strange sorts of things people are dreaming up and building today (from vehicle-to-grid-enabled self-driving car pods to real live nanogrids) and to the boring old stuff we’re stuck with for the moment (like natural gas combustion plants and old coal or nuclear), and also to the desires and activities of regular people. All without letting the basic structures of the grid get too rotten or out of date. A platform is an interesting tool to think with in part because it moves us into a domain where computing, or “digital” systems, becomes the means for solving
(at times) to bring down governments. This pattern of using platforms to organize unrelated and competing interests into new social and economic formations is a comfortable one now, in America. Comfortable enough that some already exist for our grid, and a solid subset of people inside the system are working out how to make these even better…
Across domains one begins to see an abiding concern for ways of reducing the material impacts of infrastructure and counting every zero we make as if it were something substantive.
Renewables, once they have been built and put into operation, are not chemically polluting and they involve neither extraction nor waste—nothing is brought up from below ground, nothing is burned, nothing is boiled, and nothing is released to thicken our atmosphere.
Doing away with heat engines, with their inevitable thermodynamic limitations, is for them a big step in the right direction. Large
Caltech’s Nate Lewis, one of many engineers working on artificial photosynthesis, speaks with the same reiterative stutter. His team’s artificial leaf has “no wires. I mean what I say: no wires. Leaves have no wires. In come sunlight, water, and CO2, and out come fuels.”
The utilities have been quick to recognize the fact that people who are normally quite stingy with their electric company, including those actively opposed to new high-voltage wires, will voluntarily pay a surcharge on their bill for renewable power. This surcharge is usually offered as a “percentage of total consumption” with deals like 85 percent wind or 100 percent renewable (wind, solar, hydro).
The less solid these things, the less visible, and the more thoroughly integrated into the built environment they are, the more likely individuals and companies are to volunteer their money for the cause.
And while some people, mostly those older than forty, still primarily approach problem solving in terms of buying better things, the nonbuyers of things (nonbuying being a habit the millennials have become famous for) take it one step further: Why buy a bulb at all? Why a fridge at all? What might we do to render the bulb and the fridge obsolete entirely? Might not a room be lit by a wall woven of fiber optic cabling controlled from the phone?
This much we have learned from the home solar movement, but conservation and efficiency—ways of causing power not to be made—matter just as much, if not more, to the future well-being of our energy system.
Offices, factories, and other workplaces also need to be retrofitted and outfitted, and so do the places where we shop, socialize, and eat out. These are slower to transform, largely because of the cost.
In proposing better means for making and delivering electricity, we need to ask ourselves: Does this path, the one we labor to produce, the one we legislate, the business plan we follow, cut off a whole set of options, or does it allow these to wrangle on in there with the rest? Ease of governance needs to give way to a means of organizing a diversity of interests around a single vision. This is difficult even when the vision is fundamentally about implementing systems diversity. In the high-stakes, low-sex-appeal battle to ramp up the interoperability of the grid’s thousands upon thousands of subsystems, the most boring, if heated, conversations behind the scenes focus on standardization. A platform is not just a software problem. We actually need the technologies that currently constitute our grid to be able to work with, and communicate with, newer components and newer ways of doing things.
The wise grid, as this option is known, has a thousand opponents, each arming itself in whatever ways it can. Effective systems change can be derailed by any of these. If everyone with a stake in the game chooses to limit, rather than work with, the chaos of the present, we will end up with a balkanized system built of roadblocks and blind alleys. Any action, no matter how small, against interoperability creates new hurdles for anyone hoping for a future grid grounded in flexibility and reliability via diversity.
Even forward-thinking California has proved boneheaded on this point. Late in 2015 the legislature in that state passed an extraordinary new renewable energy standard into law, which (among other things) obliges California to make 50 percent of its electricity from renewables by 2030. As remarkable as it sounds, at the core of this piece of legislation lurks an unexpectedly retrograde logic. The only renewable electricity that will count toward the 50 percent is that produced by central stations. Rooftop solar will not be counted.
In this way the new California law has given utilities free rein not to work out how all the various means for generating, saving, and storing electricity might come profitably together. They have effectively limited systems diversity in favor of securing, more tightly, their own interests.
The utilities are masters of ignoring what people want. And they are practiced in running competitors out of business by controlling the market. California’s lawmakers, in this case, have given them a free hand to continue to do both. The right path was the more difficult one—to ask the utilities to work out a system whereby all renewable power was counted and integrated in the 2030 goal.
More important to the Lovinses, however, was that these technologies, when connected to our existing grid, force it to work differently.
This potential for a better, more robust, more secure grid grounded in technologies we both use and like is made possible by letting people find ways to make and store electricity at the smallest scale without excluding them, structurally or legally, from our common system.
At issue is that as poor as the utilities are at accepting small reforms by small players, Americans, in general, are not especially practiced at ascribing a value to what is not-used, especially when the count is of something as abstract as a watt. And yet conservation and efficiency measures that reduce our need for electricity are as important to reforming our energy system as is the mainstreaming of renewable ways of making that electricity—large and small. Two different sorts of things, then, need to be integrated into our accounting. First, all the electricity made, no matter who is making it. And second, all the electricity not used, no matter who is saving it. If we can work out how to do this, systemically, it will start to matter when a couple of big-box stores, a cement factory, or a subdivision or two are energy-efficient enough that a utility, or anyone else, can avoid building a new power plant.
This is the real story behind contemporary grid reform: not just valuing electrons made by unusual producers, but valuing electrons that we never needed to make at all—the saved power that we shouldn’t even notice has gone missing.
Or, as Amory Lovins (who coined the term “negawatt” way back in 1990) said, “Customers don’t want kilowatt-hours; they want services such as hot showers, cold beer, lit rooms,” and this can “come more cheaply from using less electricity more efficiently.”
This is half the secret, then: to design and to build places, things, and machines in ways that effortlessly and invisibly—from the end user’s point of view—reduce consumption. In some cases this might best be accomplished by reconceiving entirely the thing being built: to take the fridge out of refrigeration, the air conditioner out of air-conditioning, the light out of the lightbulb. In other cases, it might mean producing an identical thing, like a laptop that runs equally well on a quarter of the electricity used by the same model three years earlier or a building that so seamlessly integrates power-saving systems that even its most constant users would be shocked to know that they are moving through a massive negawatt machine.
We don’t use what isn’t made, and (this is the new bit) this non-use will get factored into our financial thinking about the grid, and its reform. If it can be given a stable price, a negawatt will matter as much to how actors big and small choose to reform our grid as a tax cut, a subsidy, or a guaranteed low-interest loan does now.
Negawatts, in other words, can now be ordered up by the utility and delivered by an Albertsons. Network enough of these power-savers into a flexible, smart piece of software, and you have your platform.
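What such a platform actually does is simple enough to sketch. A utility asks for a load reduction; enrolled power-savers each shed what they have committed. The sites, the numbers, and the names below are all hypothetical:

```python
# A minimal sketch of a negawatt platform: committed load reductions,
# dispatched like generation. Every site and figure here is invented.

committed_negawatts_kw = {
    "supermarket_freezers": 300,       # pre-chill, then coast through the peak
    "factory_hvac": 500,               # widen the thermostat deadband
    "subdivision_water_heaters": 200,  # defer heating cycles an hour or two
}

def dispatch_negawatts(request_kw):
    """Call down committed load reductions until the request is met."""
    delivered = {}
    remaining = request_kw
    for site, kw in committed_negawatts_kw.items():
        if remaining <= 0:
            break
        shed = min(kw, remaining)
        delivered[site] = shed
        remaining -= shed
    return delivered, request_kw - remaining

delivered, total = dispatch_negawatts(650)
print(delivered)
print(f"{total} kW saved")  # conservation, made to count like generation
```

The grid operator sees 650 kilowatts it no longer has to generate; the Albertsons sees a check.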
This demand-response capacity, called DR in the business, not only brings energy saved into the mix of resources available to grid operators by literally making conservation count, but it is another non-thing slowly taking grid governance by storm.
When enough of these scattered but existing resources are networked together it is possible to create what is called a virtual power plant. Not a plant for making virtual power, but a platform that connects everything available to it and gets it all to work like a power plant.
A virtual power plant can link, for example, a big coal-burning plant to a local military base’s microgrid to three cogeneration plants to seven smaller natural gas combustion turbines to thirty-five hundred rooftop solar installations (three hundred of which also have deployable battery storage) to fourteen reliable, flexible, medium-scale negawatt producers to thirty thousand electric cars. It can then use the resources of each—generation, deployable efficiency, storage—to balance out demand with production capacity throughout the system by the millisecond.
Such interconnections between resources allow us to keep the idea of a power plant without our necessarily needing the power plant itself. A virtual power plant is thus primarily an organizational tool that uses information about electricity transmitted by electricity (digital smart meters most especially) to respond to the ebb and flow of power on the grid with a degree of timeliness and nuance that a human simply cannot match.
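One balancing step of such an organizational tool might look like this, much simplified: given a demand reading, work down a pool of mixed resources in order until demand is met. The resource names, capacities, and merit order are invented for illustration:

```python
# An illustrative (and much simplified) virtual-power-plant balancing step.
# Resources are listed in an assumed dispatch order; all values are invented.

resources = [  # (name, available MW)
    ("rooftop_solar_aggregate", 40),
    ("negawatt_contracts", 15),
    ("battery_storage", 25),
    ("gas_turbines", 120),
    ("coal_plant", 300),
]

def balance(demand_mw):
    """Match demand against the pool, resource by resource."""
    dispatch = []
    remaining = demand_mw
    for name, available in resources:
        used = min(available, remaining)
        if used > 0:
            dispatch.append((name, used))
        remaining -= used
        if remaining <= 0:
            break
    return dispatch, remaining  # remaining > 0 would mean unserved demand

dispatch, shortfall = balance(170)
for name, mw in dispatch:
    print(f"{name}: {mw} MW")
```

A real platform does this every few milliseconds, with prices, ramp rates, and contracts in the mix; but the core move — treating solar, storage, and saved power as interchangeable entries in one ledger — is just this.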
There are small versions running all over the place. The problem is getting all the necessary components into the mainstream (the cars most notably are still lacking, but smart appliances would make a big difference too), getting them all to speak the same language, and figuring out how to move through regulatory regimes and ownership blockades still in place from the twentieth century’s far more proprietary and centralized grid.
In October 2015, the Supreme Court heard oral arguments in a case that pitted the Federal Energy Regulatory Commission (FERC, the folks charged with managing our grid nationally) against the Electric Power Supply Association (EPSA, which advocates for competition among power providers). At issue was precisely the question of whether a watt saved would be compensated at the same rate as a watt made.
If, in all of this, nothings can be given an agreed-upon transactional value, then virtual power plants might help us integrate all the resources at our disposal, material and immaterial, legislative and corporate, collective and individual—such that the whole system runs more efficiently.
We could probably even keep the utility companies by figuring out how to pay them for something other than how much power we consume. We could pay them for gadgets perhaps, or a basic line fee per connected meter, or as consultants to newly aggregating towns and newly organizing microgrids, or as innovators in the still turbid waters of our energy future. The only thing we lose with this new version of our grid is our reliance on central stations.
Virtual power plants and their kin, the Energy Cloud, do something that microgrids and nanogrids threaten to undo. They keep the grid and its generic power for the people. All of us, together.
It’s complicated and expensive to get off the grid today, but in five years? In ten? If we want to keep America woven into one nation of equal opportunity then the grid, its technologies, and its wardens will need to pay more attention to what
Americans don’t like dealing with remnants or with imperfect things. We don’t want to see that what now serves us perfectly well was once broken or damaged. We want youth and vigor, not old age and a storied life. Repair is not a cultural value. Replacement is. And yet, if we are to maintain a grid with a national, or even half-national, span, we will need to change our minds about what constitutes a “good” thing. A good thing might very well be an old thing with veins of gold pressed into all the cracks. It might be a mended thing with the mends themselves constituting the most precious element of the whole. If we want to keep a grid for all, we might be wise to mend our grid like a Japanese pot. The most valuable bits, the golden threads, the tiny machines, would be rubbed into all the seams; the glue would matter most. In the case of the grid, this glue isn’t real gold but rather millions of tiny machines—microprocessors—that when working together have the reactive capacity to make decisions.
variable generation, distributed generation, small power, big power, negawatts, nanogrids, mobile storage, weird weather—and integrate these into a self-balancing, highly reliable system. Such a grid would be something like a national (or half-national, or regional, or whatever size we choose to make it) computer in which all that is old, rusty, and broken is healed, and held together, by the densest network of intelligent agents anywhere on the planet.
The easiest way to do this is to force upon them, at every possible opportunity, a radical openness to variety—to avoid California’s path, with its eyes closed to small power producers; to avoid England’s path, where a shift away from big coal-burning plants has resulted in a grid-scale reliance on diesel generators; and to avoid Germany’s path, where the exploited (in their case, the companies) have walked away from public power, taking their poolable resources with them. The future we want is one in which difficult things are integrated, even when this is a more troublesome route than excluding them. Let’s take them all, every variation on the theme of “grid”; let’s consider them all, every form of belief about how electric power should be made and used (or not made and not used); and let’s integrate them all in a way that does the least planetary damage over the long term.