
A recent windstorm that left much of western Washington groping in the dark without electrical power also taught a lot of people an important lesson about backup power supplies.

In Washington, a substantial number of the batteries installed as part of emergency power systems failed to operate, potentially leaving inoperable substations, telecommunications switching equipment, cellular and microwave networks, backup power generators, and a wide variety of microprocessor-based equipment thought to be protected by uninterruptible power supplies (UPS).

Those responsible for equipment, building or office operations thought they were protected where necessary. What happened?

The answer is that the potential for major problems has increased over the last ten years. The older flooded lead-acid technology used see-through containers, allowing for a number of different and easy tests; the valve regulated lead acid (VRLA) technology that has largely replaced it is sealed.

New equipment has been sold as "bullet proof" and "maintenance free" by manufacturers eager for a major share of the burgeoning, highly competitive backup power market.

This has led to a general complacency regarding testing and maintenance of a wide variety of critical emergency power supplies. That complacency is reinforced by manufacturers' equipment instructions, which relegate testing and maintenance issues to a brief mention at the end of the manual.

All this makes it difficult to get testing funds budgeted and lulls those responsible into a false sense of security. Many people tend to equate all batteries with car batteries. In the case of standby batteries, this is a critical mistake. Standby batteries are on charge 24 hours a day, year in and year out and are only used when a power outage occurs. An automobile battery generally is used daily, providing quick regular feedback as to its health and is only on charge when the vehicle is being driven.

Seventy-five percent of standby power system failures are due to batteries not operating correctly, problems that could be avoided with periodic testing.

VRLAs supply varying amounts of direct current. Each cell has a valve that acts as a safety relief: under normal conditions, the explosive gases produced by discharging and recharging are contained within the cell.

Abnormal usage can cause the valve to open and release the excessive internal gas pressure. If the valve doesn't reseat properly or continues to vent gases, the cell dries out, destroying its storage capability. Too much heat (generally over 90 degrees) or overcharging causes the battery to develop high internal pressure, opening the valve.

Many of today's most popular cheaper VRLA storage batteries have a "design life" of five years but a real-world life expectancy of two to three years. In the rush to be competitive, components have been made less heavy-duty, and battery use under less-than-ideal conditions can shorten life further.

With the old flooded battery design, we can measure specific gravity, open cell voltage, float voltage and temperature as well as doing discharge testing.

Today, depending on the application, up to 90 percent of storage systems use VRLA technology because it is smaller and safer and can be used in a greater variety of surroundings. However, its maintenance requirements differ from those of flooded-technology systems.

Since we can neither see into nor open the cells, testing options are limited to float and open-cell voltage measurement, discharge testing, and a newer conductance/impedance measurement test. Conductance/impedance testing has only become available in the last three years and requires special equipment.

If the internal impedance of the battery is too high, it can't deliver the required current during a discharge.
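One way to picture the impedance issue is with a rough calculation: a cell's internal resistance can be approximated from the voltage sag it shows under a known load, and a reading that has drifted well above the cell's baseline is a warning sign. The Python sketch below is illustrative only; the example readings, the 30 percent drift threshold and the function names are assumptions, not part of the author's test procedure or any manufacturer's specification.

    # Minimal sketch: estimate a cell's internal resistance from a brief load
    # step and flag it against a baseline value. All numbers are illustrative.

    def internal_resistance_ohms(open_circuit_v, loaded_v, load_current_a):
        """Approximate internal resistance from the voltage drop under a known load."""
        if load_current_a <= 0:
            raise ValueError("load current must be positive")
        return (open_circuit_v - loaded_v) / load_current_a

    def needs_attention(measured_ohms, baseline_ohms, drift_limit=0.30):
        """Flag a cell whose resistance has risen more than drift_limit above its baseline."""
        return measured_ohms > baseline_ohms * (1.0 + drift_limit)

    if __name__ == "__main__":
        r = internal_resistance_ohms(open_circuit_v=2.14, loaded_v=2.05, load_current_a=25.0)
        print(f"estimated internal resistance: {r * 1000:.1f} milliohms")
        print("flag for maintenance:", needs_attention(r, baseline_ohms=0.0025))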

To be sure, testing standby power generally requires knowledgeable technicians, and the cost needs to be weighed against the direct and indirect losses that would result if the batteries fail to deliver.

Generally, however, it's safe to say that if the system was important enough to back up in the first place, the potential loss is too great to leave standby power equipment maintenance to chance.

As they say in the tire business, you need to assess what is riding on those tires: the direct and indirect costs associated with failure.

Furthermore, battery warranties generally are prorated. We recently were asked to test a five-year-old standby power installation. It was defective and the warranty provided only five percent of the replacement cost. It's more cost effective to find problems early.

Since so-called five year batteries generally begin to fail after two or three years, a maintenance program probably should kick in during the second year.

Some sophisticated battery plants can have hundreds of batteries. In these cases, you may want to install some monitoring equipment and train your staff in regular testing procedures.
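For a plant with hundreds of batteries, even a very simple in-house routine helps: log periodic float-voltage readings and flag any cell that drifts outside an expected band so a technician can follow up. The Python sketch below illustrates the idea only; the 2.20-2.30 volt band, the cell identifiers and the data layout are example assumptions, not values from this article or from any manufacturer.

    # Illustrative fleet check: flag cells whose float voltage is outside an
    # expected band. Thresholds and readings are example assumptions.

    from dataclasses import dataclass

    @dataclass
    class CellReading:
        cell_id: str
        float_voltage: float  # volts, measured while on float charge

    def flag_out_of_band(readings, low=2.20, high=2.30):
        """Return the IDs of cells whose float voltage falls outside the band."""
        return [r.cell_id for r in readings if not (low <= r.float_voltage <= high)]

    if __name__ == "__main__":
        plant = [
            CellReading("A-01", 2.25),
            CellReading("A-02", 2.12),  # low reading, should be flagged
            CellReading("A-03", 2.34),  # high reading, should be flagged
        ]
        print("cells needing inspection:", flag_out_of_band(plant))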

We have developed some simple tests that can be performed on many of the more commonly available standby batteries. Even so, it still makes sense to contract for regular maintenance.

In many cases, the results of backup power failure can be embarrassing, as in the case of telephone switching or utility substation equipment.

In some cases, however, the failure of a battery to provide the electrical start-up needed to fire a generator can be disastrous, as in the case of a hospital.


Steve Gomes is a power technician with Portland General Energy Systems, a non-regulated division of Portland General Electric, in Portland, Ore.

In recent months, the world has witnessed a flurry of powerful disasters.

Articles in this issue examine these disasters individually; taken together, the disasters are changing the disaster recovery industry. They have focused the general public and the business world on disaster recovery planning and on what organizations must do to survive in the long run.

Chicago Flood—Business’ Biggest Disaster Ever

Corporate executives, building managers and contingency planners will long remember the Great Chicago Flood. Fortunately, the flood caused no injuries, but the toll to businesses and the city was enormous. The Chicago flood is the biggest business disaster ever, and the corporate recovery effort has taken months. Restoring the city’s infrastructure to prevent future disasters is a formidable task.

The Chicago Board of Trade (CBOT) Building was one of the hardest hit by the flood. Because its air conditioning system was underwater, the computer systems that operated price reporting and wall board equipment were shut down, effectively closing the markets.

The CBOT installed a $2 million air conditioner on La Salle Street next to the building. The system included two giant chillers, diesel generators and a 24-barrel cooling tower. The tower was brought in from Oklahoma.

Although the basements were still filled with water, the CBOT resumed regular trading within a week using temporary power. However, other offices in the building were not open until full power was restored two weeks after the flood began. Lynco Futures, a commodity firm headquartered in the CBOT building, relocated to their hot site by 2 a.m. the day following the disaster, and they were ready for business the next day.

On the positive side, some companies that fared well extended support to affected businesses.

Continental Bank was fortunate—their building’s access to the flooded freight tunnel was sealed in the 1950s. They helped other banks by processing checks at their technical center. According to William Murschel, a corporate relations representative, Continental processed cash letters from four other banks the day after the disaster, so that the letters could be properly dispatched without serious delay.

McCormick Place, the nation’s largest exposition center, offered 60,000 square feet of exhibit space for free, temporary office use by displaced firms. The space was wired for electrical and phone connections, and McCormick Place provided free phone set-up.

“This is a city that pulls together,” said Jim Reilly, CEO of the Metropolitan Pier and Exposition Authority, which runs McCormick Place. “We want to extend our help to those businesses which may be unable to return to their normal office locations for some time.”

Four businesses moved in, bringing a total of 125 employees for three to five weeks. All of these companies were based in the Carson Pirie Scott building, one of the hardest hit by flooding. Articles from people who had to deal with the flood appear on pages 10, 13, 16 and 21. These authors have described many aspects of the flood recovery: dehumidifying flooded buildings, relocating to an alternative site, using backup power, and restoring damaged basements, materials and records.

Riots in Los Angeles Affect Thousands

The rioting in Los Angeles was one of this country's most destructive urban disturbances ever. Thousands of businesses and even more jobs were lost.

A report on page 29 details the impact of the riots to businesses. Consultant Joanne Piersall has written an article on page 34 which discusses disaster recovery planning for small businesses. Piersall emphasizes documenting business procedures. She says, “Once a business owner has a head start on setting up an organized and efficient office, it's a short leap to the realization that having a copy of a well constructed office manual off site will also help them recover from a disaster.”

Although small businesses generally don’t require data processing recovery plans, and rarely have resources to invest in disaster recovery, establishing and documenting essential procedures can help businesses of any size survive a disaster.

The immediate effects of the Los Angeles riots were similar to those of any disaster--property damage, loss of retail sales and temporary paralysis of the business community. But the long-term effects are more serious than those of most natural disasters. Tourist and convention business in Los Angeles is likely to be depressed for some time. Social service spending will increase, while property tax revenues are likely to decline along with property values.

The riots have brought the problems of the American city to the forefront of national discussion; hopefully this attention will yield positive results.

Gas Explosion in Guadalajara

On Wednesday, April 22, Guadalajara, Mexico was hit by a series of underground gas explosions that destroyed 26 downtown blocks. The devastation was enormous: over 200 people were killed, at least 2,000 injured, and over 20,000 left homeless. Property damages are estimated at $300 million. American Medical Search & Rescue Team leader Dr. James Dugal administered emergency medical care to the victims. He reports first-hand on the disaster on page 32.

The explosions came from sewer lines filled with gas that had leaked from a nearby pipeline. The national oil company Pemex has been implicated in the gas leak. Apparently, gas had been leaking from the pipeline for days prior to the explosion.

The gas explosion raises serious issues of corporate accountability in a disaster. At least eleven people are facing federal prosecution. Four top Pemex officials and three officials from the municipal water company have been charged with negligent homicide. The mayor of Guadalajara and the Jalisco state Secretary of Urban Development are also being charged in the disaster.

Environmentalists, opposition politicians and academics blame lax environmental regulation as the long range cause of this and potential future disasters. The responsible government regulatory agency is understaffed, underpaid and prone to corruption.

Industrial growth has far exceeded infrastructure development and maintenance in Mexico. This horrible human disaster stands as a dramatic example of the problems associated with unrestrained economic growth.

The Earthquake Threat

Between Wednesday, April 22 and Sunday, April 26, California experienced four earthquakes registering at least 6.0 on the Richter Scale. The first of these occurred near Palm Springs in southeast California on April 22. Considering the strength of the quake, there was relatively little damage.

Three more destructive earthquakes hit northern California the following weekend. Centered near Petrolia, 30 miles southwest of Eureka in Humboldt County, a 7.0 earthquake struck on Saturday, followed by aftershocks on Sunday measuring 6.0 and 6.5.

Most commercial and public buildings in Petrolia were destroyed or rendered useless. Fires caused by ruptured gas lines ravaged the downtown area. Motion-sensitive shut-off valves, which could have prevented the fires, had not been installed.

California’s new earthquake insurance program will be able to cover homeowners’ claims. Approximately 20 claims were filed in Palm Springs and 100 claims in Humboldt County. The earthquakes in northern California were seismically unrelated to the Palm Springs earthquake.

One of the greatest concerns raised by these earthquakes is that attention is focused on the damage they caused, rather than on preparing for the potential earthquakes they foreshadow. These earthquakes had relatively little impact on business operations in California, but they had even less impact on earthquake preparedness planning.

After the Disasters

According to Dr. Robert Kuntz, President of the California Engineering Foundation, “A predictable cycle of activity occurs after every major disaster. Emergency preparedness systems are tested and found wanting. The print and electronic media move in to cover the disaster for the news hungry public. Political leaders visit disaster sites to express concern for the victims and to capitalize on media opportunities. Universities and research organizations submit proposals for more studies on all aspects of the disaster. Then, media interest diminishes, and life returns to normal for those unaffected by the disaster until another disaster repeats the cycle.”

In a survey of cities published before the Chicago flood in Financial World, Chicago received a C+ on its infrastructure, due to neglected maintenance. Chicago was not the only city to fare poorly; numerous other cities are struggling to support failing infrastructure.

The tunnel failure in Chicago that led to the flood was not unlike the pipeline leak that led to the gas explosion in Guadalajara. Fortunately, Chicago’s toll was primarily economic.

A major earthquake, however, could be both a horrible human tragedy and a devastating business disaster. Hopefully, these recent examples will awaken more people and businesses to involve themselves in disaster recovery planning.


Stuart Johnson was co-editor of Disaster Recovery Journal.

This article adapted from Vol. 5 #3.

The “Blizzard of ’93,” also known as “the worst storm of the century,” blanketed the east coast with 22 inches of snow. One of the many casualties of this blizzard was Electronic Data Systems Corp.’s data center in Clifton, N.J.

On March 13, 1993 at 4:20 p.m. EST, a 100-foot section of the roof collapsed under the weight of the snow, buckling the walls of the 35,000-square-foot data center. The facility housed a large installation of fault-tolerant Tandem computers supporting 5,200 of the roughly 87,000 automated teller machines (ATMs) nationwide, or six percent of the total.

Fortunately, there were no casualties from the collapsed roof, and EDS had the time needed to do a controlled shutdown of all systems while evacuating the 30 on-duty employees from the facility.

Obviously, EDS had a workable, well-rehearsed I/S disaster recovery plan. Despite the conditions, EDS experienced no data loss or damage.

“All financial data is intact,” says EDS spokesman Jon Senderling.

Thanks to the I/S disaster recovery plan that EDS exercises annually, Senderling said “relocating to an alternate site went very well, faster than anticipated.”

EDS was able to occupy a temporary data recovery facility in Franklin Lakes, N.J., and within 48 hours was able to relocate operations one more time to a more permanent site in Rochelle Park, N.J.

EDS had arranged for more than a dozen regional ATM networks to perform stand-in processing until the company could get its own network up and running. Despite the inconvenience, 98 percent of the card holders were able to access their accounts through alternate ATMs by March 23, 1993.

Local authorities condemned the EDS facility in Clifton, N.J., preventing EDS access to the site for four days. While processing at the Franklin Lakes, N.J. location, a recovery team was hastily gearing up the Rochelle Park, N.J. facility with the necessary Tandem equipment and communication lines.

Senderling went on to say, “It was a real team effort, due to the very high morale and commitment of EDS employees from North Jersey and across the country, and of outside vendors such as Tandem, IBM and AT&T. Over 400 people came to help out in whatever way they could. Everyone was dedicated to ensuring quality service to the card holders during this inconvenience.”

LESSONS LEARNED

The “dust” from the Chicago flood hasn’t even settled yet, and now we need to determine the impact of the WTC bombing. In any event, disasters of this magnitude have focused the general public, and hopefully corporate management, on the importance of disaster recovery planning and on what your organization must do to survive in the 1990s and beyond:

  • Move backup generators for emergency power. Generators should perhaps be on upper floors rather than the lower levels of high-rise complexes, or even in a building adjacent to your original facility. Re-route all emergency electrical services accordingly.
  • Install battery powered emergency lights and communication systems.
  • Provide exhaust or ventilation systems for stairwells that aren’t pressurized. When evacuating 40,000 people, pressurized stairwells soon lose their effectiveness. (Whatever happened to good old-fashioned fire escapes?)
  • Because of its bi-state affiliation, the New York Port Authority was exempt from compliance with local municipal codes. What is the code for your high-rise? Who’s in charge of security?
  • Public parking garages should have sprinkler systems, pressurized stairwells or similar systems or barriers that would prevent fires from spreading to adjoining buildings.
  • Train employees on what to do. Many employees walked into toxic smoke. Use wet towels to cover nose and mouth as a filter against thick smoke.
  • Consider second-level risk: the possibility that a company’s backup arrangement will be “full.” We need to plan for a secondary backup site. Whenever there is a disaster like the World Trade Center bombing or any other regional crisis, you should call your disaster recovery provider to see if you are at risk.
  • Obviously LAN recovery plans are essential. The World Trade Center alone housed thousands of islands of networked workstations.
  • Identify vital records, especially “work in progress.” When some employees were finally allowed back into the WTC to retrieve vital equipment and records, limited emergency lighting made it difficult to identify what was what. Perhaps use file folders with night-glow tags, or colored folders: red for high priority, green for second-level priority, and so on.
  • Some 8,371 calls to 911 jammed the communications lines between 12:30 and 4:00 p.m., twice the normal volume serviced by the NYC Telephone Company. This is another example of why communications are vital and should be well planned for in any crisis.

Richard Arnold would like to thank the staff of the Disaster Recovery Journal for their hard work and dedication in preparing the Special Report: Janette Ballman and Mike Beckerle, Editors, and Patti Fitzgerald, CDRP, Advertising Editor for Disaster Recovery Journal.

This article adapted from Vol. 6 # 2.

Severe winter weather in the United States caused more than $250 million in industrial and commercial property damage in the last decade – a number that companies could have dramatically reduced had they implemented proper winter protection measures, according to Factory Mutual Engineering and Research (FME&R), a leading authority on property conservation counseling.

Where do companies face the greatest risk? “Many losses occur in regions that are not accustomed to harsh winters,” says Ray Croteau, senior vice president of Factory Mutual Engineering Association, the division of FME&R that counsels companies on protection strategies.

When a powerful tornado touched down last November in Germantown, a suburb of Memphis, it struck a deadly blow, killing three area residents. In terms of property loss, the twister damaged a total of 400 homes and destroyed 22 throughout Memphis, ultimately causing an estimated $25-$30 million in damage. It also devastated Grace Evangelical Church, Inc., a non-denominational congregation of 1,000 members.

The church, a new structure that was completed in early 1994, sustained extensive damage, including a torn roof. Virtually every area in the 30,500-square-foot structure, including the sanctuary, several offices and a kitchen, was affected. Office equipment, computers, copiers, fax machines and phone systems, as well as chairs used in the sanctuary, were drenched as a result of the storm. All of the carpeting throughout the complex was soaked, as were books, files, sermons, resources and personal effects.

The powerful tornado tore two HVAC systems off their stands on the roof, throwing one into the front yard of the church. Two trailers parked in a lot and housing temporary classrooms were wrapped around the front of the building like aluminum foil.

Reverend Bill Garner, one of the first members of the congregation to arrive on the scene, could see the path of the tornado leading from a nearby forest right to the church. “It seemed to cut right through the building, as it directly hit the building’s southwest side.”

Luckily, no one was injured, and important church records, which are downloaded daily and stored off-site, were not destroyed.

Since the church was relatively new, Reverend Garner initially turned to the building’s architect and general contractor for assistance in digging out damaged objects. However, it soon became apparent this was a unique situation requiring recovery experts.

“We have a good relationship with our general contractor,” said Rev. Garner, “but he didn’t have the ability to do the type of work that needed to get done.”

The disaster struck in the hometown of ServiceMaster Disaster Recovery Service’s corporate headquarters and a congregation member called Rev. Garner to alert him to the company’s services. While ServiceMaster DRS typically works on catastrophic commercial recovery jobs, within 30 minutes after the initial contact, they were at the site, surveying the damaged property.

As a first step, Watford and Rev. Garner conducted the all-important “walk-through.” As they assessed the damage, Rev. Garner explained his initial priorities - sealing the building to mitigate further water damage, determining what was salvageable, deciding which items should be moved to storage, and protecting the church from looters.

The walk-through helped Watford clearly communicate to Rev. Garner what could be restored, to what extent, and what would be most cost-effective to replace. This up-close analysis of the church helped Watford accurately and fairly judge and communicate to Rev. Garner exactly what needed to be done and how the work would be performed. The mobilization effort to begin restoring the church began four hours after the initial walk-through.

Ultimately, the walk-through enabled Rev. Garner to decide to award the restoration contract to ServiceMaster. But perhaps most importantly, his quick decision mitigated costly secondary damage to the church and its contents. Prompt action led to approximately $125,000 in savings, as drying chambers salvaged many items - such as interior walls - that would have been lost if restoration had been delayed further.

With a torn roof, shattered glass, and water-drenched walls, the church required immediate and expert attention to mitigate secondary damage from rain that continued throughout the week following the tornado. The project, which was approved at 11 a.m., was well underway by 3 p.m. by a clean-up crew of 44 workers, including four supervisors.

Since the storm had downed power lines throughout the area, five generators were also brought to the site to power the lights used into the evening. A drying chamber was erected on site to dry the walls, and sophisticated equipment was used to measure humidity in the walls.

The restoration crew worked until midnight of the first day of the project to prevent further damage to the church’s contents - hand-cleaning computers and video equipment and packing out 900 chairs. The crew also tore off soaked wallpaper to prevent further humidity from seeping into the walls.

A construction crew created temporary covering for the roof, and security guards were hired to prevent vandalism of the exposed structure.

On the second day, the restoration process continued with crews packing out the remaining contents of the church from the sanctuary, offices and kitchen. In total, over 1,400 boxes were filled; a 40-foot trailer was filled four times with material and a 26-foot container eight times.

The contents of the church were moved to a 43,000-square-foot warehouse. At the warehouse, drying chambers were created to begin the restoration process. A portion of the warehouse was roped off and secured to serve as the chamber, as workers created walls of six-mil plastic in which they placed the soaked chairs, desks, books and other objects. Huge drying fans and desiccant units pulled dry air through the area, funneling the air through plastic tubing. The burst of forced air into the sealed chamber pushed out the moisture, drying and restoring the objects.

Within two days, Rev. Garner was housed in a temporary work-space, enabling him to prepare the payroll for the church’s 12 full-time and 20 part-time employees. By the end of the first week, the church’s temporary office was established, with a few computers operational and eight employees back to work in this temporary office space.

“Overall, it was a good team effort by all involved - Church board members, insurance adjuster and our construction crew - that helped us get back on our feet in such a short time,” said Rev. Garner.


Keith Mathias is director of ServiceMaster Disaster Recovery Services, Memphis, Tenn.


We Cannot Afford Not To


Law enforcement must train for disaster response, or the bills may not be paid!

There are two questions you must ask yourself and your agency. First, is your agency aware of a new mandate for emergency management that requires a standardized management system be used for state and federal expense reimbursement? Second, and more importantly, are you ready for the next disaster? If the answer to either question is no, it’s time for your agency to make rapid progress in both areas.

As a result of the October 1991 Oakland Hills wildfire, California State Senator Nick Petris authored Senate Bill 1841. The intent of the law is to improve coordination of state and local emergency response in California. It was passed by the legislature and made effective January 1, 1993.

The statute directed the California State Office of Emergency Services to establish the Standardized Emergency Management System (SEMS). The basic framework requires systematic management in responding to multi-agency incidents. This means the use of the Incident Command System (ICS), multi-agency or inter-agency coordination (MACS), the state mutual aid agreement and mutual aid systems, the operational area concept, and the Operational Area Satellite Information System (OASIS).

The Incident Command System is a widely accepted structure for an emergency management organization. It provides unified command and a structure for the operations, planning, intelligence, personnel, equipment, and finance functions necessary to the management of critical incidents. The organization flexes as the incident changes in magnitude, creating an effective framework for accomplishing goals and objectives.

Multi-agency coordination is most effective when a systematic approach is utilized. The multi-agency coordination system (MACS) consists of a coordination group of jurisdictional/agency representatives, facilities, equipment, procedures, information systems, and internal and external communications systems integrated into a common system that ensures effective coordination.

When civil disorders or “unusual occurrences” beyond the resources of the local agency take place, the mutual aid system will respond with additional personnel and resources. Depending on the magnitude of the incident, surrounding jurisdictions with Memoranda of Understanding in place may respond. If additional resources are necessary beyond the local level, the operational or county level is the next stage of response. If the situation requires additional assistance of more than one operational area, the regional coordinator will request resources within a larger area or region. Finally, if the incident is so large that the regional resources are not sufficient, the state coordinator will coordinate resources from the state agencies.
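The escalation path just described is easy to picture as a small model: each tier requests resources from the next only when its own are exhausted. The Python sketch below is purely illustrative; SEMS is a management system, not software, and the level names and function here are assumptions for the example.

    # Toy model of the mutual aid escalation path: local agency -> operational
    # area (county) -> region -> state. Purely illustrative.

    RESPONSE_LEVELS = ["local agency", "operational area (county)", "region", "state"]

    def next_response_level(current_level: str) -> str:
        """Return the next tier to request resources from when the current tier is exhausted."""
        i = RESPONSE_LEVELS.index(current_level)
        if i + 1 >= len(RESPONSE_LEVELS):
            raise RuntimeError("state resources exhausted; coordinate outside assistance")
        return RESPONSE_LEVELS[i + 1]

    if __name__ == "__main__":
        level = "local agency"
        print("incident handled at:", level)
        while level != "state":
            level = next_response_level(level)
            print("resources requested from:", level)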

OASIS is a satellite-based communications system with high frequency radio backup. It provides a rapid transfer of information between user agencies. In SEMS, OASIS can be viewed as both a communications network and information dissemination system linking local, operational, and state organizational levels.

This system must be used in incidents that involve multiple jurisdictions or multiple agencies. If it is not used, beginning December 1996, agencies will not be eligible for state funding of response related personnel costs. Further, if not eligible for state aid, it is unlikely that federal aid will be available.

If you think about it, incidents that require another agency to respond are common. For example, a fire at a plating company in Oakland occurred in 1993, and due to the highly hazardous nature of the chemicals stored inside, a large area around the fire was secured. This required coordination with the California Highway Patrol and Cal-Trans to close a portion of Interstate 880. It also required the response of other fire departments and state and federal agencies. Although it was just a fire, SEMS would be the required management system for this kind of incident. If you think about how often you call for another agency to assist or support, that's how often SEMS will apply.

Research on law enforcement in California shows that very little training is currently conducted in multi-agency incident response. To plan for the future and work through agency-specific coordination problems, now is the time to start this management function.

What’s Being Done Now

Since the legislation was passed in 1993, work has been ongoing to help agencies in the training process. The Commission on Peace Officer Standards and Training (POST) has been working on this project for over a year and a half, led by Senior Consultant Mickey Bennett. A POST telecourse is planned for early 1995 to assist agencies and their employees in understanding and using the system. Several modules of training are planned to address varying levels of need, from the first responder to elected political leaders.

Some agencies that have heard about the mandate are contacting the Office of Emergency Services and POST to see what to do to satisfy the requirement. Courses of instruction are being planned to train officers, but this is only the first step. Use of the system needs to occur on an ongoing basis. It is difficult to use the Incident Command System only in disasters when it has not been applied regularly. An incident may start small, like a fire, but when hazardous chemicals become involved, other agencies will be called to respond to the scene. The task of backtracking and putting together an organizational chart takes attention away from the ongoing incident and the decisions it requires. This process must start at the onset of incidents that have any likelihood of involving other agencies.

Is SEMS enough?

Let’s imagine you have been trained on SEMS. Are you ready to make decisions and direct or perform functional tasks required in the next disaster?

The answer is probably not. SEMS is a management system that improves coordination and communication. It is not a system for emergency, critical incident, or disaster function training.

There are a limited number of courses on critical incident management. Courses are currently available through the Federal Emergency Management Agency (FEMA), the California Specialized Training Institute (CSTI), the Office of Emergency Services, a few local colleges, and private consultants. Attendance at these courses can be costly, and size limitations result in long waiting lists. Especially with the SEMS legislation, it is imperative that agencies look for alternative methods of training.

We can do it better

I was a patrol sergeant the day of the Oakland Hills fire. I responded to the area, directed evacuation and traffic control, and acted as the initial liaison with the fire department. I tracked the fire on a street map, as the fire department was overwhelmed with its own job and unable to establish a liaison. We did our job as best as possible, but training would have made it better.

The functional tasks required in emergency response are similar in all types of incidents.

The actions of an incident commander in a fire are very similar to those in a barricaded-suspect situation or other incident. Training for these basic functions provides an excellent foundation for any emergency management or responder function.

Now is the time to examine our competencies in critical incident management, both individually and departmentally. We are good at what we do, but we can do it better.


Peter Dunbar is a Lieutenant of Police, Oakland Police Department, Oakland, Calif.


Corrosion Control


Corrosion can be a major factor in a loss situation and, if not controlled, it can result in a substantial increase in the dollar amount of the loss.

Corrosion can be controlled if certain actions are taken immediately after the loss and specialist companies are called in for the cleanup.

INDUSTRIAL CORROSION CONTROL

Exposure to fires, floods and chemicals can cause costly and sophisticated equipment to corrode. The resulting damage can make the equipment malfunction. Restoration through corrosion control is, in most cases, faster and more cost-efficient than replacement. The waiting time for new equipment also causes unnecessary downtime, whereas corrosion control can have the company back in production in a relatively short time.

Corrosion control involves assessment of:

  • the cause, extent and type of corrosion
  • base metals involved
  • function, configuration and complexity of corroded item
  • risk of continued corrosion
  • finishes involved
  • accessibility for rework, repair and/or replacement
  • adjacent materials which can influence chemical selection
  • working environment
  • manufacturer’s protective system
  • cost effectiveness

Experience has shown that each job is unique. Analysis is needed to develop the right chemicals, equipment and procedures for each job. Chemists must custom formulate the chemicals for each situation. On-the-job chemical testing assures that cleaning will not damage equipment finishes.

The three major factors to be considered in developing a solution are:

  • metals
  • contaminants
  • environment

An alteration in any of these, such as a change in the humidity level, can cause an immediate change in the nature of the problem and the need for a change in treatment chemicals. If the unforeseen happens, corrosion specialists must reformulate the chemicals and procedures to satisfy the changed conditions. Chemists should work with the owner's chemists or metallurgists to satisfy all quality control requirements set by the owner, equipment manufacturer or government regulation. Cleaning crews work around the clock if necessary to minimize downtime.

Although each situation is unique, there is a general plan that can be adopted to slow further corrosive action by disrupting the mode of corrosion. Removing or eliminating one or more of the elements of corrosion - moisture, oxygen or corrosive media - makes a temporary reduction in corrosion action. This is done by moving the equipment from a corrosive environment or removing the corrosion-causing elements from the equipment. Water can be removed by draining, vacuuming, forced air drying, heat or solvent displacement. If some lag time is expected before thorough cleaning and treating can begin, a suitable corrosion inhibitor can be used to restrict oxygen exposure. This preliminary action can reduce further detrimental effects until the more time-consuming cleaning and treating process is accomplished.

Next, a detailed plan, precisely oriented for a particular set of conditions, must be initiated. Specialized cleaning solutions are utilized. If structural problems could occur from contamination to the steel building housing the equipment, all steel members must be decontaminated and a rust-inhibiting finish applied to affected metal parts.
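The preliminary triage described above can be summarized as a simple mapping from observed conditions to stop-gap actions. The Python sketch below merely restates the list in code form; the condition labels and the catch-all action are illustrative assumptions, not a vendor procedure.

    # Map observed conditions to the stop-gap measures listed above.
    PRELIMINARY_ACTIONS = {
        "standing water": ["drain", "vacuum", "forced-air drying", "heat", "solvent displacement"],
        "delay before full cleaning": ["apply a suitable corrosion inhibitor to restrict oxygen exposure"],
        "corrosive environment": ["move the equipment away from the contamination source"],
    }

    def triage(observed_conditions):
        """Return the preliminary actions suggested for each observed condition."""
        return {
            condition: PRELIMINARY_ACTIONS.get(condition, ["escalate to a corrosion specialist"])
            for condition in observed_conditions
        }

    if __name__ == "__main__":
        for condition, actions in triage(["standing water", "delay before full cleaning"]).items():
            print(f"{condition}: {', '.join(actions)}")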

Following the mechanical and chemical cleaning step, equipment should be closely inspected for pitting, erosion, scratches and other damage to the base metal that can interfere with its functioning or lead to future deterioration.

HIGH-TECH CORROSION CONTROL

Smoke, water and the heat, humidity and corrosive chemicals resulting from a fire can damage all metal surfaces, especially electrical connections, electronic contact points and fine-tooled precision parts. Electronic data processing equipment, electronic office equipment, elevator controllers, electrical motors, and similar items are highly susceptible to this type of damage.

Most such damage results from airborne contaminants that go undetected and untreated, causing corrosion. Often it cannot be seen in the initial stages, and if corrosion is not arrested, damage can become irreparable.

A three phase approach should be considered to control corrosion and restore damaged equipment to a pre-catastrophe condition.

EMERGENCY TREATMENT - Because corrosion normally starts immediately, emergency treatment should be performed as quickly as possible to preserve the equipment. A suitable corrosion inhibiting chemical should be applied to each potentially affected surface. This emergency treatment can be accomplished very quickly and inexpensively, creating a state of preservation until the decision is made to proceed with the more time-consuming processes of restoration and recertification.

RESTORATION - The unit is disassembled and the housing and chassis are cleaned in detail. Different cleaning solutions may be needed for each electronic surface. Some detachable parts are cleaned in ultrasound tanks. Other parts are cleaned by hand, using small cotton swabs, air brushes and other specialized tools. After restoration, a protective coating is applied to prevent further damage. This greaseless, non-conductive coating usually lasts one to two years. Following the application of the protective coating, the units are reassembled.

RECERTIFICATION - The specialist cleaning contractors should work with the manufacturer’s service representative to protect the customer’s warranty and take the necessary action to recertify the equipment.
Remember - corrosion begins immediately after a disaster, and its control depends on how soon these steps are taken.


Written by Melvyn Musson, M&M Protection Consultants, Marsh & McLennan. Information in this article was provided by BMS Catastrophe, Inc., Fort Worth, Texas.

This article adapted from Vol. 2 No. 1, p. 7.


Small Leak Teaches BIG LESSON


At 12:52 a.m., a flood alarm was triggered on the third floor of the Minneapolis Fed as water poured from the ceiling and onto the bank’s mainframe computer.

  • Within seconds of the third-floor downpour, four employees began covering the mainframe computer and other equipment with plastic. But the water came fast and the damage was swift: the $3 million mainframe was shut down.
  • Located between the third and fourth floors, on the south side of the building, the flood’s source was a small hole in a pipe that carries well-water for the bank’s air conditioning system. The hole occurred at an elbow in the two-inch pipe, and eventually allowed between 1,000 and 2,000 gallons of water to escape. The cause of the hole is unknown, although the hardness of the well-water, the velocity of the water’s passage through the piping and the presence of sand in the water may all have been contributing factors.
  • The water spread over about 80 percent of the third-floor ceiling, soaking the fiberglass atop the ceiling panels. When the water became too much for the fiberglass to hold, the water poured down through the ceiling in a torrent and eventually soaked through to the second floor, forcing the personnel department to move to another site within the bank. Some water also leaked onto the exterior plaza from the second floor.

There it was...the “perfect” pay station location, at a gas station on the corner of a busy intersection, but off on the corner of the property so people could drive up to it and make their calls. It was so accessible and visible it should have attracted several times the average number of local calls. It should have returned its investment in one-third the typical time.

But it didn’t. In fact, it was out of order more often than it was in service. No physical damage occurred. No vandalism, no cars driving into the booth, no floods. It never had a real line outage. Its own advanced electronics never reported a failure to you. You never found out it failed until someone at the location called to complain.

The electronics kept dying. And every time you replaced them, the new ones went dead. No smoke, no flame, no flash. Just dead. Well, somebody said it might be lightning, so you bought surge protectors for the power line and phone line, but to no avail.

Somebody said you must have a bad ground, so you hired the oldest, most skilled electrician you knew, and he thoroughly inspected the power line ground and declared that it fully met the National Electrical Code. He even knew how to use a “megger” and confirmed the ground resistance was so low it was almost unreadable. The installation met every safety requirement in the book, he told you. But the phone still kept dying.

The words the electrician used contain the seed of what is wrong. By making the installation safe for people, he also created an enormous hazard for solid-state electronics. The reason: the ground those electronics need has to be intimately connected to the potential of the ground right where the phone booth is, neither back at the building’s power panel nor back at the telephone exchange.

You suggested driving a ground rod at the booth, and the electrician told you that you’d violate the code if you did that. Yes, if you did that and that alone, you would be violating the code and creating a hazard for people. The white or green wire ground used for power distribution and safety purposes, if broken and replaced by another path to ground (like your proposed ground rod), could let the power ground in your equipment rise 115 volts above the ground that people stand on in the booth and shock them just as if they stuck a hairpin in a light socket.

What the electrician didn’t remember from his exam days were the special conditions in the National Electrical Code for communications equipment. They take the form of footnotes that refer to the use of an “alternative earthing electrode.” That’s the fancy name for your ground rod at the isolated booth.

When you read about the alternative earthing electrode, you find that the code simply requires that you “bond” that rod to the power ground point back in the building with a No. 6 insulated “bonding conductor.” It is very large to ensure that it is physically durable and won’t break...or at least will break last. Its size also reduces its inductive reactance to make it rapidly follow changes in the ground potential.

“Changes in the ground potential”? But isn’t the ground always “at ground”? Isn’t it by definition a potential of 0 volts? Don’t we always rely on it to be an unchanging electrical platform? Well, yes, we do. But the truth is that when a huge amount of current passes through the earth right beneath us, the ground takes on a very jelly-like electrical consistency for those thousandths of a second while the current is passing by. For a time shorter than the blink of an eye, the earth we rely on to be so stable simply cannot stay at 0 volts. A pay station relying on a ground at the other end of a power line that is 50, 85, or 120 feet long to the power panel is connected to the end of an electrical “whipsaw” whenever a huge current passes through the earth.

When can this happen? Whenever a lightning bolt strikes anywhere within a mile or so of the pay station. The effect is just like a huge invisible wave radiating outward for up to a mile, until it dissipates. For that instant, everything in the vicinity is at a different ground potential than the power safety ground.
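A rough calculation shows why “the vicinity” can sit at a very different potential for that instant. A common textbook simplification treats the strike point as a hemispherical electrode in uniform soil, giving a surface potential of V = rho * I / (2 * pi * r) at distance r. The Python sketch below uses that approximation with assumed values for stroke current and soil resistivity; none of the numbers come from this article.

    # Ground potential rise under a simple hemispherical-electrode model:
    # V = rho * I / (2 * pi * r). Stroke current and soil resistivity are
    # assumed example values.

    import math

    def ground_potential_rise_v(stroke_current_a, soil_resistivity_ohm_m, distance_m):
        """Approximate earth-surface potential (volts) at distance_m from the strike point."""
        return soil_resistivity_ohm_m * stroke_current_a / (2 * math.pi * distance_m)

    if __name__ == "__main__":
        for r in (10, 100, 1000):  # metres from the strike
            v = ground_potential_rise_v(stroke_current_a=20_000, soil_resistivity_ohm_m=100, distance_m=r)
            print(f"{r:>5} m: roughly {v:,.0f} V above remote earth")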

Visualize being in a small boat with a rope tied back to the shore. Suddenly, a load of rocks gets dumped in the pond and creates a 6-foot wave. You know at that time the wave will swamp your boat when it pulls the rope tight. It would be better to cut the rope and try to float on the wave. But a smart sailor knows how to use a “sea anchor,” a device that works to keep your boat still by connecting it to the surface near the boat.

Electrical power engineers know the condition well, and they call a similar electrical wave ground potential rise. The sea anchor is analogous to the alternative earthing electrode, your ground rod at the phone booth. But, because power safety prohibits us from cutting loose from the shore, the code then requires us to back up the electrode with the bonding conductor.

In fact, NFPA 78, the national lightning protection code, requires the alternative earthing electrode. If you don’t provide it and someone is hurt in or near your booth, you can be liable for some very large damage costs because you didn’t take “prudent measures” to protect people against hazards from ground potential rise.

Besides, it just makes good business sense to protect the microelectronics boards in your phone. What’s happening to those microelectronics? It takes a surgeon’s microtome and a microscope to see it. Right where the 5-volt Zener diode is on the integrated circuit chips, a sliced-open chip will reveal a microscopic volcano where the diode used to be. Why? Because one end of that diode has to be connected to what it calls “ground” to make the circuit work. The amount of current the diode has to conduct for millionths of a second when its ground floats to thousands--perhaps only hundreds--of volts from normal is so large that the Zener diode explodes. All it takes is one hit and your smart pay station never gets a chance to dial you up for its own obituary. It just dies instantly.

So why didn’t this affect pay stations before? That’s easy: the electromechanical devices of the monopoly era, those simple relays and switches, could take repeated hits before they would fail. The dissipating voltage, if high enough to puncture a relay’s varnished cotton or even plastic insulation, simply leaves a pinhole in the insulation, and as long as the operating voltage of 48 volts is still much lower than the general insulation value of 600 volts or so, the relay keeps working. In fact, it works until enough pinholes, perhaps dozens or hundreds of them, cause so much insulation loss that the relay has enough shorted turns that it will not work anymore. Technicians curious enough to unwind a failed relay often wonder what causes the wire’s peppered appearance.

For this reason, many “experienced” installers who put in electromechanical pay phones for years never believed the one lesson they got in bonding and grounding. It seemed to be a theoretical thing that the teacher just wanted to use for an exam question. They had many years of practical experience that led them to believe an extra ground was not necessary.

But a ground rod is not forever, either. To do its job, it must be in contact with moist soil. That sets up a constant corrosive action that eventually forms an insulating layer of corrosion on the surface of the rod, so it loses contact with the soil. An annual maintenance check using a megger--or the more sensitive devices we now have--can show deterioration of the grounding path.

The Rural Electrification Administration charted the United States years ago to show that the entire country, except for a narrow strip at the Pacific Ocean, has more than five lightning days a year, enough to put 95 percent of the nation at risk for lightning damage. (It’s a kind of natural tradeoff for earthquakes). But, even at the Pacific shoreline, with only one day of lightning per year, one lightning bolt nearby can kill your modern microelectronic pay station.

As our good ol’ boys near Southeast lightning centers say, “You’d better get your ground down to earth, son!”


Written by Donald E. Kimberlin, principal consultant of Telecommunications Network Architects in Safety Harbor, Florida.

This article is adapted from Vol. 3, No. 1, p. 50.

At approximately 9:20 p.m. on Tuesday, August 31, 1994, a spectacular fire destroyed the 3000 m2 ground floor of Knox Civic Centre.

The City of Knox is one of Melbourne’s largest municipalities with a population of approximately 130,000. It is located in the outer eastern suburbs and is responsible for an area of 11,000 hectares and an annual expenditure budget of around $60 million.

The building provided approximately 3000 m2 of office space, 1300 m2 of storage, and an 1100 m2 Council Suite. Approximately 160 staff were accommodated in the building.

City of Knox Mayor Cr. Tom Blaze described the fire “as a great tragedy,” but added that “it was fortunate no injuries occurred on what was a busy Council meeting night.” He praised the fire brigades and other emergency services who attended the scene and said he was overwhelmed by the offers of support from neighboring municipalities.

Chief Executive Officer Bob Seiffert was extremely pleased with the disaster recovery plan, which, although only in draft form, proved invaluable in providing a “step by step” implementation strategy. Mr. Seiffert described the initial recovery phase as a real challenge that was successfully overcome due to the clear understanding of delegated responsibilities.

“Everyone was able to swing into action immediately in a coordinated and calm manner,” said Mr. Seiffert.

Peter Marke of Fire and Recovery Planning Pty. Ltd., who had previously conducted a disaster exercise for the Council, was contacted immediately (during the fire) to assist with the initial stages of the recovery phase. Mr. Marke commended the Council’s executive personnel and staff for the expedient initiation of the plan and described the progress made in the first 20 hours as “outstanding.”

“For example, we had a specialist restoration and recovery company on site and an initial briefing of Councilors and senior personnel was conducted before the fire was extinguished,” said Mr. Marke. “Mr. Seiffert had also contacted the Loss Assessors and Security Company during the fire.”

Mr. Seiffert recalls how one of the managers had the foresight to collect his copy of the disaster recovery plan while collecting files, etc., during the evacuation. “This was obviously beneficial as we would have been working from memory, because the plan was yet to be approved by Council - hence copies were not distributed.”

“A more detailed strategy was formulated early the following morning which included: salvage; alternative accommodation (immediate and long term); insurance liaison; communications; press briefings; EDP assessment; cost monitoring; trauma counseling; staff and union briefings; and restoration of full service delivery and business resumption,” said Mr. Seiffert. “It was also very impressive that a reduced telephone service and many counter functions were restored in relocated facilities by 9:00 a.m.”

Emphasis was given during the management and staff briefings to the likelihood of delayed reactions to the fire and the typical stress situations that would be encountered. Details of crisis counseling arrangements were outlined during encouraging addresses by the Mayor and Mr. Seiffert.

Peter Marke also commended Mr. Seiffert on the manner in which tasks were allocated. “Tasks were delegated to the personnel who had the relevant experience or were familiar with what had to be done,” he said.

An interesting observation was the continual frustration caused by the inability to gain access to the building for safety reasons. “Managers and staff alike were understandably anxious to ascertain the fate of their files and equipment, etc.,” he said.

It was therefore decided that, once the relevant approvals were obtained (regarding OH&S matters), a team of executives would conduct an audit of their respective areas and document what they found to be safe or retrievable. This helped to allay concerns and plan the salvage operation in more detail.

A nearby office building of 3,000 m2 was leased after Council approval was obtained at a special council meeting, which convened at 6 p.m. that day to provide a briefing to the Council and to obtain delegation approvals for the CEO to continue with the restoration strategy.

“Everything was so well organized that by 10:30 a.m. on Thursday my task was completed - only 36 hours after the fire started,” said Mr. Marke.

Council were totally relocated in the new (temporary) building with full EDP support by the following Monday (Sept. 5) and demolition arrangements were being made.

A reassuring observation was made by Mr. Seiffert on the stress management and behavioral characteristics of management and staff. “There was only one incident where tempers started to fray, and that was seven days after the fire. Even then it was only a minor situation. It certainly justified the priority we placed on our counseling and related welfare strategies. We ensured as much information as possible on symptoms and solutions for stress relief was given to everyone.”

Mr. Marke described the recovery and restoration project as “a great operational and logistical success.” Approximately 4,500 to 5,000 cartons of files, documents and records, plus the mainframe and associated peripheral equipment, were transported to a warehouse and laboratory for decontamination and cleaning. Numerous other cartons of material and “work in progress” documents were treated on site, in addition to the cleaning of furniture, etc.


Peter Marke is with Fire & Recovery Planning PTY. Ltd. in Australia. This article was submitted by Mark Fischer of the Disaster Recovery Journal Editorial Advisory Board.