
Worst Wildfires in California History Provide Valuable Lessons for Continuity Planners


Emergency responders and continuity planners had their hands full when nearly a dozen wildfires erupted in California in mid-October. Strong Santa Ana winds and record heat combined to ignite overgrown brush and thick timber and send thousands of people from their homes and businesses.
The fast-moving flames burned 740,000 acres, destroyed nearly 3,600 buildings, and killed 22 people, including one firefighter.

Planners at businesses in the fire zones rushed to implement continuity plans, while emergency responders dealt with the worst outbreak of wildfires in the state’s history. In total, more than 2,000 firefighters and hundreds of medical personnel battled the blazes. More than 50 businesses were burned or damaged, with hundreds more on guard as the erratic fires threatened the smoke-filled area.

The largest fire, known as the Cedar Fire, burned in the Cleveland National Forest in southern San Diego County. The fire, which began Oct. 25 and wasn’t contained until Nov. 4, was responsible for 15 of the deaths and destroyed nearly 300,000 acres. At one point, the fire was burning more than 6,000 acres per hour. It is the worst wildfire in California history.

President George W. Bush declared five counties in Southern California as disaster areas because of the widespread destruction. The declaration paved the way for low-interest loans and federal assistance to residents and businesses. As of mid-November, the Small Business Administration had distributed some $22 million in loans in the Southern California region.

While the road to recovery is under way, so is the evaluation of the disaster. Many businesses are now examining their continuity plans to see what worked and what did not. At the same time, residents and officials are analyzing the emergency response to the fires. In both cases, lessons learned will result in procedural changes and better handling of such situations in the future.



Difficult economic conditions lead to fiscal belt-tightening. The ever-increasing demand for data accelerates the growth of storage and makes these costs look like ripe, low-hanging fruit to many cost-cutters. Buying low-priced, “good enough,” or mediocre storage appears to be an opportunity to reduce a large and growing budgetary item. This, however, is only one part of the cost equation.

Low-cost gear costs less not only because of limited functionality but also because a number of engineering shortcuts were taken during manufacturing. For example, using lower-tolerance components that have higher failure rates or removing redundant components are common ways to reduce production costs. These shortcuts, however, negatively impact overall reliability.

Lower reliability means a greater number of outages that require restores, rebuilds, restarts, and reboots. The extra expense of these recovery actions as well as the lost productivity of diverting attention from more important productive activities can quickly exceed the one-time savings gained from buying cheap storage.
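This trade-off is easy to put into numbers. The figures below are hypothetical assumptions, not from the article, chosen only to sketch how a planner might compare the one-time purchase savings against the recurring cost of recovery actions:

```python
# Illustrative sketch only: every dollar figure and failure rate here is a
# hypothetical assumption, chosen to show the shape of the comparison.

def total_cost(purchase_price, outages_per_year, hours_per_outage,
               recovery_cost_per_hour, years):
    """Purchase price plus the cumulative cost of outage recovery."""
    downtime_cost = (outages_per_year * hours_per_outage *
                     recovery_cost_per_hour * years)
    return purchase_price + downtime_cost

# Assume the cheap array saves $40,000 up front but fails four times a
# year instead of once, with each outage costing $5,000 per hour to
# recover (restores, rebuilds, and diverted staff time).
cheap = total_cost(60_000, outages_per_year=4, hours_per_outage=6,
                   recovery_cost_per_hour=5_000, years=3)
reliable = total_cost(100_000, outages_per_year=1, hours_per_outage=6,
                      recovery_cost_per_hour=5_000, years=3)

print(f"cheap: ${cheap:,}  reliable: ${reliable:,}")
# Over three years the up-front "savings" are dwarfed by recovery costs.
```

Under these assumed numbers the cheap array costs $420,000 over three years against $190,000 for the reliable one; the point is not the specific figures but that recovery costs compound annually while the purchase savings happen once.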

Mediocre storage poses a much greater danger, however, than increased operating expense. Mediocre storage devices are more vulnerable to reliability problems and therefore expose the organization to a higher level of data integrity risk and, more seriously, to the risk of outright data loss.

Data is not an off-the-shelf commodity; you can’t buy replacement data if it is lost. Without a duplicate copy of critical data, the loss is irreversible and permanent. In addition, transactional data has increased in both value and volume, and because the paper source has been eliminated, reconstructing it without a duplicate copy is difficult if not impossible.

Ian I. Mitroff, professor of business policy at the Marshall School of Business at the University of Southern California, said in a recent interview, “Corporations – or IT departments – tend to focus on crises they know about. That doesn’t serve them well. It’s not the crisis you know that will kill you; it’s the one you don’t know. There are all sorts of crises: economic, reputational, human resources. Organizations are susceptible to a wider array of crises than 30 years ago, and any one can be the cause or the effect of any other. Something in IT could trigger something elsewhere, or vice versa. Crises don’t give a damn about the silos and walls we set up.”

Unfortunately, Professor Mitroff’s warning may well go unheeded if senior management fails to listen, and to listen well. If the corporate audit function is truly to serve and to represent management as its “eyes and ears,” then it becomes the duty of the audit function to look at this world through different lenses. You have to ask the questions no one would think of asking, even when those around you may scoff at your particular view of the world. For if those of us tasked with assessing the security and controls within organizations do not ask these questions, or do not have this view, who will?


In today’s technology-driven corporate world, more people communicate through e-mail than by any other method, including the telephone. Today “mail-tone” is more important to the continuation of your business than “dial-tone.” And it’s not only person-to-person communication that relies on e-mail. Today more than 90 percent of all business documents are created electronically and 60 percent of those are transmitted as e-mail attachments, but as few as 30 percent are ever stored or printed offline as hard copy. In other words, your e-mail system is also becoming the storehouse for your company’s knowledge base.

Yet e-mail systems are notoriously vulnerable and are prone to more outages than most other mission-critical applications. In a recent survey, more than half (56 percent) of companies polled reported a disruption in e-mail service, while far fewer cited a disruption in financial management systems (25 percent), enterprise resource planning systems (23 percent), order entry (21 percent), or their supply chain (21 percent).
So, how do you ensure that your e-mail system is protected from these sorts of interruptions and that it can be continuously available no matter what the circumstances?
Traditional disaster recovery and business continuity methodology revolves around the twin concepts of making sure your data is safe and then being able to recover it once something happens to interrupt your applications and e-mail service. Wouldn’t it be better to not let the interruption happen in the first place?

One answer is to develop a holistic approach toward addressing all the potential causes of both planned and unplanned downtime and impairment of your company’s e-mail system. The following 10-step strategy proposes just such an approach.
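The article does not reproduce the 10 steps here, but the prevention-over-recovery argument rests on a standard piece of availability arithmetic: redundant, independent components fail together far less often than any one of them fails alone. A minimal sketch, with assumed availability figures, not numbers from the article:

```python
# Illustrative sketch: standard availability math showing why redundancy
# prevents outages rather than merely recovering from them. The 99%
# single-server figure is an assumption for demonstration.

def combined_availability(single_node_availability, replicas):
    """Availability of N independent replicas: the service is down only
    when every replica is down at the same time."""
    downtime_fraction = (1 - single_node_availability) ** replicas
    return 1 - downtime_fraction

one_server = combined_availability(0.99, 1)    # ~3.7 days of downtime/year
two_servers = combined_availability(0.99, 2)   # ~53 minutes of downtime/year

print(one_server, two_servers)
```

Under the assumed 99 percent figure, a single mail server is unavailable about 3.7 days a year; two independent replicas cut that to roughly 53 minutes, because both must fail simultaneously. The model assumes independent failures, which real shared infrastructure (power, network, site) can violate.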


A recent fatal fire at a Chicago office building demonstrates the need for established evacuation plans and clear communication with emergency personnel. The fire, which occurred at the Cook County Administration Building on Oct. 17, 2003, killed six people and injured 15 who were trapped in a smoke-filled stairwell.

Questions surround the rescue procedures used by the fire department and the methods of evacuating the building’s occupants, according to published reports.

“I wish I could tell you that every aspect of this event went as well as we liked,” Cortez Trotter, Chicago’s emergency management director, told the Chicago Herald. “That is simply not the case.”

Edward S. Devlin, a leading contingency planning consultant, said, “From reading the newspaper reports it appears a lot of mistakes were made. We don’t know all the details of what went on, but it sounds like there was a breakdown in communications.”

Despite numerous 911 calls from the trapped employees and on-scene reports from other employees, the fire victims were not found for nearly 90 minutes after firefighters arrived on the scene.