The Blackout of 2003 created major and costly inconveniences for an estimated 50 million people from Michigan, across Ohio, into New York, and north into Ontario. Referred to by some media writers as the “Lake Erie Loop,” the interconnected power grid fell victim to a combination of problems and failures that, with few exceptions, interrupted the supply of electric power on Aug. 14, 2003, starting at about 4:10 p.m.

In the ensuing hours, reporters wrote about the system of generating plants, high-voltage lines, substations, and distribution lines connecting generating plants with “Main Street.” What most writers missed until much later was the system of computer controls used to monitor and adjust the delicate balance of generating capacity and demand (load) that must be maintained to keep the system up and running. In most places, the automatic triggering of the grid’s safety controls shut the system down before equipment could be damaged. Had the equipment been damaged, response, recovery, and restoration would have resembled those following a destructive wind or ice storm, possibly taking days if not weeks for full restoration of service.
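
To make that idea concrete, here is a purely illustrative sketch in Python of the kind of automated check such protective controls perform: measure how far conditions have drifted from the balance point, and trip before equipment is damaged. This is not any utility’s actual SCADA or relay logic; the thresholds and names are hypothetical.

# Illustrative only: a toy model of automated protective control.
# Real grid protection uses dedicated relays, not application code,
# and the thresholds below are hypothetical.

NOMINAL_HZ = 60.0      # North American grid frequency
TRIP_BAND_HZ = 0.5     # hypothetical deviation that forces a trip
WARN_BAND_HZ = 0.1     # hypothetical deviation that alerts operators


def check_frequency(measured_hz: float) -> str:
    """Return the action a protective scheme would take for one reading."""
    deviation = abs(measured_hz - NOMINAL_HZ)
    if deviation >= TRIP_BAND_HZ:
        # Generation and load are badly out of balance: disconnect
        # automatically, faster than an operator could react.
        return "TRIP: open breakers, separate from the grid"
    if deviation >= WARN_BAND_HZ:
        return "ALARM: notify control-room operators"
    return "OK: generation and load are in balance"


if __name__ == "__main__":
    for reading in (60.01, 59.88, 59.3):
        print(reading, "->", check_frequency(reading))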

 

The morning after the blackout, once my power was back, I sent an e-mail to a few close associates suggesting the following possible causes:

1. A massive call for increased capacity (too much demand, possibly due to the heat).
2. A computer failure causing transactions to be made in error, disturbing the balance of supply and demand.
3. An inappropriate sale or purchase of power, causing too much capacity to be taken off the grid or put onto it. Power is sold hourly to other systems when there is an abundance available, possibly due to high rainfall creating extra hydro power.
4. A hacker getting into the SCADA system or another computer application. Most utilities have cutting-edge security systems, but attempts have increased lately.
5. Human error. A power broker in a control center could have bought or sold power and put it on the system, or sent it off the system, in error. Such transactions take place minute to minute.
6. An equipment failure, though this seems far-fetched because monitoring systems would have identified it very quickly.
7. On the wild side, voltage variances have been known to occur from solar flares. There is research on this topic, and there are documented cases in eastern Canada where solar flares have allegedly impacted the electric grid.

The jury is still out, but this is my list of possibilities. The engineering audit will discover the cause. It will take time.

Early media coverage hinted at over-demand due to the heat (No. 1). Some spoke of possible terrorism, a potential cause dismissed early by Washington officials (No. 4).

“Maybe people didn’t panic because word went out so quickly from every public official from President Bush on down that there was no evidence of any kind of attack. There was no sign of a bomb or a break-in, and for anyone concerned that it might have been a more subtle, cyber terrorist assault, Michael Gent, president and CEO of the North American Electric Reliability Council (NERC), offered soothing words. ‘It’s virtually impossible to get into a system without leaving some tracks,’ he said.”
– Time Magazine Online, Aug. 17, 2003

Since several power plants went offline (a normal reaction built into the safety system to protect the grid), some looked to place the cause on a lack of power generation (capacity). They later learned that plants must shut down when there is no way to transmit their power to market. Equipment failure was the next suspicion, but which equipment, and where? (No. 6). At least two politicians stated in live interviews that the problem was with the Niagara Mohawk transmission system. Both statements were premature and inaccurate, possibly confusing the corporate name, “Niagara Mohawk,” with Niagara Falls, a widely known source of hydro power. In either case, there was no factual basis for the comments (No. 3 or No. 5).

As time passed, writers and “experts” from around the nation began to speak of the “antiquated electric power grid,” referring to the age of the wires, towers, and poles that make up the transmission infrastructure. In the old days, such systems carried high-voltage lines over reasonably short distances and were owned by the companies within whose service territories they were built.

Not so today. The 21st-century power grid is an interconnected high-voltage system that permits a generation company to sell its megawatts of power to distant markets, sometimes more than a single “grid” away. For example, a nuclear generating station in upstate New York could sell megawatts of power to customers in Ohio, Pennsylvania, Michigan, or New England. The power is “wheeled” by transmission companies and, at the local level, by distribution companies, which today may not own or operate the transmission grid.

Terry Boston, of the Tennessee Valley Authority, told Time Magazine, “Now utilities could get their power wherever it was cheapest, even if that meant it had to travel farther: power generated in Alabama is sold to Vermont. The nation’s power grid – the vast system of lines, transformers and switching stations – was never designed to move electricity long distances, let alone ‘from Maine to Miami.’”

So, at which level of the power transmission/distribution chain should a company invest in upgrading the “antiquated power grid”?

For a time, that’s as far as the reporters went in their discussions. Then, finally, someone discovered that independent system operators, using highly sophisticated computer control and warning systems, controlled the power grid (No. 2).

“Bingo!” There’s more to the grid than wires and transmission towers. To make matters worse, each portion of the grid (usually geographically defined and owned) is operated by an independent system operator following an established set of rules. The rules are quite different from one region to another. In the old days this wasn’t a problem – local rules, local players.

“Michael R. Gent, president and chief executive of the North American Electric Reliability Council, the industry organization that created many of those rules after the 1965 blackout, is not willing to say which possibility he considers most likely at this stage, but his early conclusion has focused attention on the rules governing the power grid, a complex and, in the estimation of some experts, physically inadequate system for moving energy around the country.

“Many of those rules – how much power can move in a line, when systems need to be shut down in an emergency – were drawn up long before deregulation opened the sluice gates and enabled the present transfer of billions of watts of energy around the country daily in wholesale transactions across hundreds or thousands of miles. As detailed as those rules are, according to many people in the industry, they are no match for the overwhelming scale and complexity of the grid that lost power over vast stretches of the Northeast and Canada last week.”
– New York Times, Aug. 19, 2003

Today, some power generators are selling and sending power over multiple systems with multiple rules and regulations.

“Since the deregulation of the energy industry, the section of the Midwest grid identified in the report has become one of the great crossroads in the transmission of power across the nation — a kind of Times Square in the flow of electrical traffic. Power produced as far away as Denver flows through the Midwestern grid on its way to users in New York and elsewhere.”
– New York Times, Aug. 21, 2003

Hence, the computerized controls and warning systems should conform to an industry standard, yet no such standard exists at the national level.

Boston, of the Tennessee Valley Authority, told the New York Times, “We have worked for many years developing the rules of the road. The problem is, we’re asking the system to go into a mode of operation that is far different than what it was designed to do. It was designed for short distances. Now, in the new open market, we’re seeing transactions covering hundreds of miles.”

Can a human being override the computerized warning system? Yes, but in dire emergencies, when the grid is at risk, the automated systems shut things down faster than most humans can act.

Robert Blohm, an energy consultant who serves on a committee advising the reliability council, told the New York Times, “At the same time, the challenge of balancing power flows has continued to grow. There used to be seat-of-the-pants rules for a lot of this stuff, but in a competitive environment that doesn’t work anymore.”

So why, then, did some local distribution companies manage to maintain power when so many others lost it as the grid shut down? Pure chance mixed with a little diligence. Some distribution companies (most are local) had staff in their control rooms who could see the oncoming voltage fluctuations. There is a process known as “islanding” that can be implemented when certain conditions exist.

“Because supply and demand need to match up for the system to work, relay mechanisms throughout the grid continuously monitor the flow of power. If there is a sudden failure for some reason, like a lightning strike on a transmission tower, circuit breakers open, and the sector unhinges itself from the grid. This process is called islanding; the goal is to contain the glitch by sealing it off from the rest of the network.”
– Time Magazine Online, Aug. 17, 2003

Islanding must take place before damage is done, and the local power company must have a variety of power sources off the grid that can be accessed through bilateral power sale agreements. That means it can get enough power through local or alternate transmission lines to meet its present customer demand. It may cost more, but it’s available.
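
As a rough illustration of that go/no-go decision (a hypothetical sketch, not any utility’s actual procedure, with all figures invented), the question reduces to whether local generation plus power available under bilateral agreements can cover present customer demand:

# Hypothetical sketch of the islanding decision described above.
# Names and figures are invented for illustration only.

def can_island(local_generation_mw: float,
               bilateral_supply_mw: float,
               customer_demand_mw: float) -> bool:
    """True if off-grid sources can cover present demand."""
    return (local_generation_mw + bilateral_supply_mw) >= customer_demand_mw


def respond_to_disturbance(demand_mw: float,
                           local_mw: float,
                           contracted_mw: float) -> str:
    if can_island(local_mw, contracted_mw, demand_mw):
        # Open tie breakers before the disturbance propagates;
        # possibly at higher cost, but the lights stay on.
        return "Island: open ties to the grid, serve load locally"
    return "Cannot island: shed load or ride out the grid disturbance"


if __name__ == "__main__":
    # A distribution utility serving 800 MW with 500 MW of local hydro
    # and 350 MW available under bilateral contracts (hypothetical).
    print(respond_to_disturbance(demand_mw=800, local_mw=500, contracted_mw=350))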

Transmission line failures in Ohio, faulty warning systems on the computerized controls across the affected area, and the possibility of power transactions not following the rules have all been discussed as possible root causes of the initial episode. One should realize the Blackout of 2003 was most likely a series of events, not a single root-cause event. Yes, something happened to upset the balance of generation and load, but subsequent actions and reactions by computerized control systems, warning systems, and human operators allowed the cascading effect of multiple city and regional blackouts.
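
That cascading effect can be pictured with a toy model (purely illustrative; the areas, loads, and limits below are invented, and real cascades follow power-flow physics, not this simple rule): when one area trips, its load shifts to its neighbors, and any neighbor pushed past its limit trips in turn.

# Toy cascade model, illustrative only; all numbers are hypothetical.

def simulate_cascade(loads_mw, limits_mw, first_failure):
    """Return the set of areas that end up tripped."""
    loads = dict(loads_mw)
    tripped = set()
    to_trip = [first_failure]
    while to_trip:
        area = to_trip.pop()
        if area in tripped:
            continue
        tripped.add(area)
        survivors = [a for a in loads if a not in tripped]
        if not survivors:
            break
        # Shed the tripped area's load onto the rest, split evenly (toy rule).
        share = loads[area] / len(survivors)
        for a in survivors:
            loads[a] += share
            if loads[a] > limits_mw[a]:
                to_trip.append(a)
    return tripped


if __name__ == "__main__":
    loads = {"Area A": 900, "Area B": 800, "Area C": 950, "Area D": 700}
    limits = {"Area A": 1000, "Area B": 1000, "Area C": 1100, "Area D": 900}
    print(simulate_cascade(loads, limits, "Area A"))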

The investigation started by Department of Energy Secretary Spencer Abraham will no doubt uncover each item in the chain of events that caused such a widespread blackout. Hopefully, it will also identify the steps taken by local power company operators who escaped the blackout by islanding, the rules that need to be rewritten to conform to a national standard, and the actions, if any, taken by electric power buyers or sellers that could have recklessly caused so serious a mishap.

Hoff Stauffer, a senior consultant at Cambridge Energy Research Associates, told the New York Times, “It appears they found evidence to be concerned that the utilities within ECARC [East Central Area Reliability Council] were not going to be properly coordinated.”

Two separate entities coordinate power flow on that part of the grid: the Midwest Independent System Operator for some areas and PJM Interconnection for others, creating a “fragmented situation,” according to Stauffer.

This isn’t rocket science, but it’s close. Power utility engineers, operators, regulators, and field crews should be applauded for the extremely high reliability we all enjoy, with the exception of the two or three major problems that have occurred since 1965. Count them – one, two, three. Amazing!

Dr. Thomas D. Phelan is president of Strategic Teaching Associates. He is a board member and training director for PPBI, a member of the Disaster Recovery Journal Editorial Advisory Board, and disaster chair of the Onondaga-Oswego Chapter of the American Red Cross.