Winter Journal, Volume 30, Issue 4


Efficiently Building a Preparedness Capability While You Manage the Program

Consultants are a key staffing tool that should be part of every organization’s toolbox to address resource constraints. Clearly, some organizations are appropriately staffed to address the development, management, and continuous improvement of business continuity programs. However, many have too much work for too few people. For these organizations, or for those lacking a specific skill necessary to meet organizational objectives, consultants provide a powerful opportunity to optimize staffing and enable rapid program development and improvement.

Why Might You Need Consultants?

It’s important to recognize that business continuity planning is a complex endeavor. Yes, business continuity is a relatively straightforward concept, but it takes a unique set of skills and experiences to be successful. These skills and experiences include:

  • “Strategic” thinking
  • Project and task management
  • Analytics
  • Communications (verbal and written), and in many cases, with an emphasis on selling ideas and concepts
  • Process understanding
  • Business and/or technical acumen
  • Specialized concepts, such as Six Sigma, Lean, and ITIL

Can any one person have all these skills? Maybe (but most likely not). Are all of these skills needed in each and every organization 100 percent of the time? Most likely not. This is where consulting organizations offer a cost-effective opportunity for many organizations. Instead of acquiring or taking the time to build these skills and experiences, consulting organizations offer the opportunity to quickly meet short-term objectives.

Comparing the FTE and Consultant

There is a distinct difference between full-time employees charged with long-term business continuity planning in an organization and consultants often engaged to perform specific tasks over a defined period of time. The following table highlights the comparison.

Failure to recognize these differences can lead to:

  1. Hesitation to engage a consultant, thus leading to delayed program performance improvements when a specific skill or experience does not exist in the organization
  2. Unclear roles and responsibilities, which may impact the success of the preparedness initiative

Recognizing these differences, as well as the diverse skills necessary to achieve success in most organizations, it’s not hard to argue that organizations – even those that employ full-time resources dedicated to business continuity planning – should evaluate consultative assistance in certain circumstances. In most cases, supplementing the planning team is significantly more cost-effective when compared to sourcing and retaining permanent personnel necessary to build preparedness capabilities.

Above all, consultants should not be viewed as competition for full-time business continuity professionals; rather, they are one way to achieve more with less.

Ideal Consultant Focus Areas

Organizations engage the services offered by consultants to:

  • Get started doing something that is outside their core competency;
  • Tackle an urgent task when internal staff are resource-constrained;
  • Mature a process or solution and achieve a level of performance that is eluding the organization;
  • Bring focus to an urgent need that is presently failing to meet management expectations.

Successfully Engaging a Consulting Organization

It’s not an accident when an organization successfully employs a consulting organization. Although many articles highlight the attributes associated with a strong consultant, I’d like to offer four keys to success in evaluating the need for consulting assistance and the process to use to choose an appropriate consulting partner.

  1. Assess the skills, experiences, and competencies you feel are necessary to achieve success over the short- and long-term in your organization.
  2. Perform a cost-benefit analysis regarding a decision to hire full-time resources to close preparedness gaps versus retaining the services of consultants to do the same over the short- to medium-term.
  3. Where it makes business sense, investigate consulting organization options – as well as independent consultants – that appear to share the perspectives you feel are key to success in your organization.
  4. Evaluate potential consultants based on skills, experiences, and, of equal importance, cultural fit; discuss how the consulting organization approaches knowledge transfer and how they approach performing as a member of an internal preparedness team.
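To illustrate step 2, the cost-benefit comparison can be sketched in a few lines of Python. All of the figures below (salary, benefits rate, overhead, consultant rate, and hours) are hypothetical placeholders, not recommendations; substitute your organization's real numbers:

```python
# Illustrative cost-benefit sketch for a hire-vs.-consult decision.
# Every number here is a hypothetical placeholder.

def annual_fte_cost(salary, benefits_rate=0.30, overhead=15_000):
    """Fully loaded annual cost of a full-time hire."""
    return salary * (1 + benefits_rate) + overhead

def consultant_cost(hourly_rate, hours):
    """Cost of a fixed-scope consulting engagement."""
    return hourly_rate * hours

# Example: a $95,000 planner vs. a six-month, half-time engagement at $175/hr.
fte = annual_fte_cost(95_000)               # recurring, every year
engagement = consultant_cost(175, 20 * 26)  # one-time, fixed scope

print(f"FTE (annual, recurring): ${fte:,.0f}")
print(f"Consultant (one-time):   ${engagement:,.0f}")
```

A real analysis would extend this over the short- to medium-term horizon discussed above and include recruiting, training, and knowledge-transfer costs on each side.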

Overall, build a business case that addresses:

  1. Speed of program implementation
  2. Resource spend (short through long-term), including an investment in developing the skills and experiences necessary to be successful
  3. Organizational focus

An Example

Three months ago, Organization X hired John to develop, implement, and manage its business continuity program. John (CBCP, MBCI) has 15 years of business continuity experience – about half of which was as a consulting firm employee. During his first three months with Organization X, he did what most new employees do:

  1. Participated in new hire orientation, including required training and paperwork;
  2. Established personal employment objectives that align to his new manager’s expectations;
  3. Learned the organization and met with various managers to get a sense of the culture;
  4. Read and studied organizational charts, annual reports, and other business documentation; and
  5. Attended staff meetings and other discussions designed to understand business continuity-related needs and requirements.

Collectively, these five activities consumed a significant portion of his time, which left very little opportunity for John to establish the foundational elements of the new business continuity program. He quickly realized that he needed some help over the short term in order to build the program. He was confident he could eventually manage the program day-to-day. However, if he were asked to both build and manage the program, it would take a long time to meet even his short-term objectives.

As a result, John began building a business case to acquire consulting services to assist him with building the business continuity program, with the objective of transitioning management and maintenance activities from the consulting organization to internal resources within nine months.

Conclusions

I’m not arguing that consultants are the solution to all issues, nor am I advocating that consultants can replace the need for in-house business continuity professionals. Rather, I am advocating that consultants offer a cost-effective means of meeting management objectives.

Recently, I’ve heard several internal professionals argue that consultants are competition and it’s their responsibility to build, manage and maintain the program – even when resource-constrained. I would agree it is their responsibility. I would also argue there are times when additional resources, possessing unique skills, are necessary to achieve success.

Unfortunately, business continuity professionals often hesitate to ask or make the case for consultants because of the fear that they will be perceived as inadequate in their own job. In reality, with the right business case, in-house business continuity professionals can actually be perceived by management as finding and managing the right mix of resources to achieve program – and organizational – goals.

Overall, I think it’s important to point out that the best consultants are those that work to complement and elevate the work done by the in-house personnel, deliver on assigned objectives, transfer as much information to the organization as possible, and eventually – unless they are assigned a long-term supporting role – do everything possible to work themselves out of a job!

Brian Zawada, MBCP, MBCI, is a co-founder and the director of consulting services for Avalution Consulting, a firm specializing in business continuity solution design, development, implementation and long-term program maintenance efforts. In addition to having served as both a consultant and internal business continuity professional, Zawada is a frequent author and speaker, a member of the US Technical Advisory Group to ISO, a former member of the ASIS Standard Development Technical Committee, and the former President of the Northern Ohio Chapter of the Association of Contingency Planners.

DRI International has been highly involved in the development of standards in emergency/disaster and business continuity management in the US and abroad. As part of our mission, DRI International serves as an information center, providing the best information on this subject for our certified professionals, auditors, and vendors, as well as public- and private-sector organizations and educational institutions.

Even before Public Law 110-53 (commonly referred to as Private Sector Preparedness, or PS-Prep) was passed and signed by President Bush, DRI International had an active role in the creation of standards by various Standards Development Organizations (SDOs). Along with the call for voluntary adoption of standards, a need was identified for the certification of business and industry preparedness plans by trained certification bodies. The latest news regarding certification standards is very good and looks like it will align well with our position.

In November, the American National Standards Institute – Certificate Accreditation Program (ANSI-CAP) accepted the preliminary application of the National Fire Protection Association’s (NFPA) “Emergency Management/Business Continuity Professional Auditor Training” as a training course for business continuity program auditors. This is the training program that was jointly developed by DRI International and NFPA.

A major benefit of that joint development project is that students who take the course, pass the exam, and have their online application accepted can certify plans against NFPA 1600, the Standard for Disaster/Emergency Management and Business Continuity Programs.

NFPA 1600 is the one standard referenced in the legislation, the standard referenced in the 9/11 Commission Report, and the US national standard since 2004.

The ANSI-CAP was launched to accredit training programs that qualify certification bodies to certify a company’s compliance with the standard. This puts the full weight of ANSI behind the Certified Business Continuity Auditor (CBCA) and Certified Business Continuity Lead Auditor (CBCLA) certifications introduced by DRI International.

DRI International has been a member of the Council of Experts for ANSI-ANAB since its inception. This is the organization within ANSI that will set the accreditation standard for certifying bodies for PS-Prep, and through our participation, our certified professionals have a seat at that table.

By seeking accreditation under ANSI-CAP, NFPA has demonstrated its position that, in business continuity management, NFPA 1600 is the US and Canadian standard.

But the involvement doesn’t begin and end with the NFPA standard. DRI International is involved in technical committees on other business continuity standards as well, such as ASIS and BS 25999.

On a more global front, our audit course for the Singapore National Standard (SS540) has been designated by the Singapore Business Federation as the official training course for those seeking to audit Singapore companies against the SS540 standard. Regardless of the standard, the interests of business continuity management are more than served by the involvement of DRI International.

Alan Berman, MBCP, is a member of the ASIS BS25999 technical committee, a member of the Committee of Experts for ANSI-ANAB, a former member of the NY City Partnership for Security and Risk Management, executive director for Disaster Recovery Institute and the co-chair for the Alfred P. Sloan Foundation committee to create the new standard for the US Private Sector Preparedness Act (PL 110-53).

The longevity of your financial institution, as well as the livelihoods and safety of you and your employees, isn’t something to be left to chance. According to a recent study, 90 percent of companies that experience a disaster will fail within a year unless they’re able to resume operations within five business days. A disaster doesn’t have to be a hurricane, earthquake, or tornado. Often, it’s something as simple as a burst pipe, small fire, or even a utility outage.

In order to save the business you have worked so hard to build, it is critical to plan for unlikely events. The first decision is the one you make on planning to stay in business. Start with the basics and keep it simple. Look at how your company functions and which of those functions it cannot do without. Identify potential disasters and plan for the impact and duration of the event, so that your plans will work in any scenario. The duration of the event will affect which portions of the plans you activate. For instance, a burst pipe in your building may only shut you down for days or weeks, while a complete loss of your building, such as a fire, can potentially shut you down long term.

How and where will you get your employees back to work if your building is inaccessible or no longer there? When developing these plans, talk to your employees about the organization’s recovery planning process, so that everyone is aware of and involved in how they will participate in protecting the company and their jobs.

Planning Your Testing Process

Testing is the crux of keeping your plan up to date and increasing the resiliency of your institution. Yet, the fact remains that the majority of businesses have never tested their recovery plan. How can you be secure in your organization’s ability to function as expected during an actual crisis without first putting it to the test? Did you test at your recovery site? How did it go?

Perhaps the word “test” attaches a “pass/fail” criterion; it should instead be looked at as more of a practice or drill. Most businesses that actively test their recovery plans will tell you that there is no such thing as a “failed” test, unless you fail to act upon the deficiencies discovered during the test. These exercises are designed to bring certain realities to the surface that may not have been thought about without going through a dry run.

Your test process should start with the creation of a recovery testing team and then determine what you would like to test and against what type of situation.

Consider phasing your experimentation:

  1. Phase I: Test server and communications recovery
  2. Phase II: Test customer service and internal process recovery
  3. Phase III: Test recovery of complete facility loss, such as power and location

You should:

  • Determine priorities and objectives among on-site and off-site staff
  • Determine realistic recovery time objectives (RTO)
  • Build outcomes of your continuity plan
  • Challenge all aspects of your recovery operation
  • Evaluate how much generator power is needed to transition seamlessly from land power to generator power and back again
  • Conduct off-site server restoration and reconstruction and measure against your RTO
  • Establish workspace and workflow of your recovery site
  • Improve cross-departmental and partner communication
  • Confirm connectivity to your recovery network
  • Simulate real-time business transactions
  • Improve upon areas that do not meet the RTO and business objectives

Testing may seem to be a monumental task, but we encourage you to test annually, ensuring your plan stays current, well documented, and effective. With staff dedicated solely to assisting members with these needs, we are constantly striving for ways to encourage businesses to test their continuity initiatives. Testing should be a vital component of your continuity plan, not an afterthought.

When developing your plan, remember these words from President Dwight D. Eisenhower, “In preparing for battle I have always found that plans are useless, but planning is indispensable.”

Exercising your plan and involving staff in the planning process will ensure you leave as little to chance as possible.

Gregory R. Tellone, chief operating officer at Continuity Centers, is a well-known entrepreneur, speaker, and disaster recovery expert in the New York area, whose illustrious career spans more than 20 years in the business continuity and disaster recovery (BCDR) field.

Energy consumption has become a hot topic in the data center industry over the past few years. According to survey results from the Data Center Users’ Group – an organization of data center managers and decision-makers – power usage of data centers (average kW use per rack) jumped 23 percent from 2006 to 2009, and respondents predict per-rack averages of 10 kW by 2012. The Uptime Institute reported data center energy use doubled between 2000 and 2006 and predicts it will double again by 2012. Rising energy costs, coupled with a move toward environmental responsibility, have pushed many companies to look at energy efficiency as a way to cut data center operation costs.

More recently, however, many high-profile data center outages have proved that availability cannot be sacrificed when maximizing efficiency. Availability was the No. 1 concern reported by respondents in the fall 2009 Data Center Users’ Group survey, having dropped out of the top three concerns behind energy efficiency and heat density in previous years. Depending on the industry, downtime can cost a business hundreds of thousands – if not millions – of dollars per hour.

Modern data centers have evolved as a result of new technologies, but in the process the business world has become increasingly dependent on the IT infrastructure that supports those applications. With the progression of technology and unprecedented business demands, a new challenge has emerged: maintaining availability while improving efficiency in an environment where computing demand is growing and IT budgets are shrinking.

Tactics to Increase Efficiency Without Compromising Availability

In a world where businesses depend on access to technology despite natural disasters (such as storm surges) or man-made disasters that may interrupt continuity, there are various tactics to optimize energy efficiency without compromising availability. Here are a few of the best practices:

High-density design

Data centers are moving toward high-density computing environments as newer, more dense servers are deployed. Sixty-three percent of the respondents to the fall 2009 DCUG survey indicated they plan to make their next data center new-build or expansion a high-density (>10kW/rack) facility. This indicates that although there is growing understanding of the savings that can be achieved through efficiency, the magnitude of the savings available through increasing density continues to be underestimated.

The average cost to build a data center shell is $200 to $400 per sq ft. By building a data center with 2,500 sq ft of raised floor space operating at 20kW per rack versus a data center with 10,000 sq ft of raised floor space at 5 kW per rack, the capital savings could reach $1-3 million. Operational savings also are impressive, with about 30 percent of the cost of cooling the data center eliminated by the high-density cooling infrastructure.
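As a quick sanity check, the capital-savings math above can be reproduced directly. A minimal sketch in Python, using only the figures cited in this article:

```python
# Back-of-the-envelope check of the capital savings cited above:
# shell cost of $200-$400 per sq ft, and a 2,500 sq ft high-density
# room (20 kW/rack) replacing a 10,000 sq ft low-density one (5 kW/rack).

SHELL_COST_PER_SQFT = (200, 400)   # low and high construction estimates
LOW_DENSITY_SQFT = 10_000          # 5 kW per rack
HIGH_DENSITY_SQFT = 2_500          # 20 kW per rack

saved_sqft = LOW_DENSITY_SQFT - HIGH_DENSITY_SQFT
savings = [saved_sqft * cost for cost in SHELL_COST_PER_SQFT]

# 7,500 sq ft avoided at $200-$400/sq ft -> $1.5M to $3.0M,
# consistent with the $1-3 million range cited above.
print(f"Capital savings: ${savings[0]:,} to ${savings[1]:,}")
```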

It’s important to note that moving to a high-density computing environment does require a different approach to infrastructure design, including:

  • High-density cooling: This approach brings cooling closer to the source of heat through high-efficiency cooling units located near the rack to complement the base room air conditioning. These systems can reduce cooling power consumption by as much as 30 percent compared to traditional room-only designs.
  • Intelligent aisle containment: Aisle containment prevents the mixing of hot and cold air to improve cooling efficiency. While hot-aisle and cold-aisle containment systems are available, cold aisle containment presents some clear advantages. By integrating the cold-aisle containment with the cooling system and leveraging intelligent controls to closely monitor the contained environment, systems can automatically adjust the temperature and airflow to match server requirements, resulting in optimal performance and energy efficiency.
  • High-density power distribution: Power distribution has evolved from single-stage to two-stage designs to enable increased density, reduced cabling, and more effective use of data center space. Single-stage distribution often is unable to support the number of devices in today’s data center as breaker space is expended long before system capacity is reached. Two-stage distribution eliminates this limitation by separating deliverable capacity and physical distribution capability into subsystems. The first stage receives high-voltage power from the UPS and can be configured with a mix of circuit and branch-level distribution breakers. The second stage or load-level units can be tailored to the requirements of specific racks or rows. Growing density can be supported by adding breakers to the primary distribution unit and adding additional load-level distribution units.

Ensuring availability

Availability was a major concern for 56 percent of respondents to the fall 2009 DCUG survey versus just 41 percent in the spring 2009 edition. Understanding that a large percentage of outages are triggered either by electrical or thermal issues, the challenge is optimizing the efficiency gains related to power and cooling approaches while understanding IT criticality and the need for availability. Some of the choices to be made and the potential trade-offs between efficiency and availability include:
  • Uninterruptible power supply: Data center managers should consider the power topology and the availability requirements when selecting a UPS. In terms of topology, online double conversion systems provide better protection than other types of UPS because they completely isolate sensitive electronics from the incoming power source, remove a wider range of disturbances and provide a seamless transition to backup power sources.
  • Energy optimization features can help minimize the amount of energy being lost, by allowing data center managers to tailor the performance of the UPS system to the specific efficiency and availability requirements of the site. Energy optimization modes enable the UPS to switch to static bypass during normal operation. When power problems are detected, the UPS automatically switches back to double conversion mode. This allows double conversion UPS systems to achieve 97 percent full-load operating efficiency; however, it could also allow certain faults and conditions to be exposed to the load.
  • Economization: Economizers, which use outside air to reduce work required by the cooling system, can be an effective approach to lowering energy consumption if they are properly applied. Two base methods exist: air side and water side. Water-side economization allows organizations to achieve the benefits of economization without the risks of contaminants presented by air-side approaches. All approaches have pros and cons. Data center professionals should discuss the appropriate applications with local experts.
  • Service: A proactive view of service and preventive maintenance in the data center can deliver additional efficiencies. Making business decisions with the goal of minimizing service-related issues may result in additional expense up front, but it can reduce life cycle costs. Meanwhile, establishing and following a comprehensive service and preventive maintenance program can extend the life cycle of IT equipment and delay major capital investments.
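To put the UPS trade-off in concrete terms, here is a rough sketch of annual losses at each efficiency point. The 97 percent eco-mode figure comes from the discussion above; the 94 percent double-conversion efficiency and the 500 kW load are assumptions chosen only for illustration:

```python
# Rough annual energy-loss comparison for the two UPS operating modes.
# 97% eco-mode efficiency is from the article; the 94% double-conversion
# efficiency and the constant 500 kW load are illustrative assumptions.

HOURS_PER_YEAR = 8_760

def annual_loss_kwh(load_kw, efficiency):
    """Energy dissipated by the UPS over a year at a constant load."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * HOURS_PER_YEAR

load = 500  # kW, assumed constant IT load
double_conversion = annual_loss_kwh(load, 0.94)
eco_mode = annual_loss_kwh(load, 0.97)

print(f"Double conversion losses: {double_conversion:,.0f} kWh/yr")
print(f"Eco-mode losses:          {eco_mode:,.0f} kWh/yr")
print(f"Difference:               {double_conversion - eco_mode:,.0f} kWh/yr")
```

Under these assumptions the efficiency gap amounts to roughly 144,000 kWh per year, which is the quantity a data center manager would weigh against the fault-exposure risk of running in static bypass.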

Providing flexible support

IT demand can fluctuate depending on everything from weather disasters to strategic organizational changes and new applications. Responding to those swings without compromising efficiency requires infrastructure technologies capable of dynamically adapting to short-term changes while providing the scalability to support long-term changes. Previous generations of infrastructure systems were unable to adjust to variations in load. Cooling systems had to operate at full capacity all the time, regardless of actual load demands. UPS systems, meanwhile, operated most efficiently at full load, but full load operation is the exception rather than norm. The lack of flexibility in the power and cooling systems led to inherent energy inefficiency.

There are now technologies available that enable the infrastructure to adapt to those changes. Where previous generation data centers were unable to achieve optimum efficiency at anything less than full load, today’s facilities can take full advantage of these innovative technologies to match the data center’s power and cooling needs more precisely, regardless of the load demands and operating conditions.
  • Cooling systems: Newer data center cooling technologies can adapt to change and deliver high-efficiency at reduced loads. Specifically, digital scroll compressors allow the capacity of room air conditioners to be dynamically matched to room conditions, minimizing compressor cycling, which reduces wear and creates energy savings of up to 30 percent over traditional technologies. Variable speed drive fans allow fan speed and power draw to be increased or reduced to match the load resulting in fan energy savings of 50 percent or more.
  • Power systems: New designs in power systems allow improved component performance at 40 to 60 percent load compared to full load. Power curves that once showed efficiency increasing with load now have been effectively flattened as peak efficiencies can be achieved at important 40 to 50 percent load thresholds. Scalable UPS solutions also allow data center managers to add capacity when needed.
  • Distribution systems: Modular in-rack PDUs allow rack power distribution systems to adapt to changing technology requirements through the addition of snap-in modules. They also provide monitoring at the receptacle level to give data center and IT managers the ability to proactively manage changes.
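The variable-speed fan savings mentioned above follow from the fan affinity laws, under which fan power scales roughly with the cube of fan speed. A minimal sketch; the 80 percent speed point is an assumed example, not a figure from this article:

```python
# Fan affinity law: power draw varies with roughly the cube of fan speed.

def fan_power_fraction(speed_fraction):
    """Approximate power draw relative to full speed (cube law)."""
    return speed_fraction ** 3

# Running a variable-speed fan at an assumed 80% of full speed:
power = fan_power_fraction(0.80)
savings = 1 - power
print(f"Power draw: {power:.1%} of full speed -> {savings:.1%} savings")
# 0.8 cubed is 51.2% of full power -- roughly the 50% savings cited above.
```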

Visibility and control enable optimization

Monitoring and controlling infrastructure performance is vital to making system improvements. Management systems that provide a holistic view of the entire data center are key to ensuring availability, improving efficiency, planning for the future, and managing change. Today’s data center supports more critical, interdependent devices and IT systems in higher-density environments than ever before. This fact has increased the complexity of data center management and created the need for more sophisticated and automated approaches to IT infrastructure management.

Gaining control of the infrastructure environment leads to an optimized data center that improves availability and energy efficiency, extends equipment life, proactively manages the inventory and capacity of the IT operation, increases the effectiveness of staff, and decreases the consumption of resources. The key to achieving these performance optimization benefits is a comprehensive infrastructure management solution.
  • Data center assessment: The first phase should involve a data center assessment to provide insight into current conditions in the data center and opportunities for improvement. After establishing that baseline, a sensor network is strategically deployed to collect power, temperature, and equipment status for critical devices in the rack, row, and room. Data from the sensor network is continuously collected by centralized monitoring systems to not only provide a window into equipment and facility performance, but point out trends and prevent problems wherever they may be located.
  • Optimization: A comprehensive infrastructure management system can reduce operating and capital expenses by helping data center managers improve equipment utilization, reducing server deployment times, and more accurately forecasting future equipment requirements. Managers not only improve inventory and capacity management, but also process management, ensuring all assets are performing at optimum levels. Effective optimization can provide a common window into the data center, improving forecasts, managing supply and demand, and improving levels of efficiency and availability.
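The threshold-based alerting that such a sensor network enables can be sketched in a few lines. The rack names, readings, and limits below are all hypothetical:

```python
# Minimal sketch of sensor-network monitoring: collect rack-level
# readings and flag any metric that breaches its alert threshold.
# All rack names, readings, and limits here are hypothetical.

THRESHOLDS = {"temp_c": 27.0, "power_kw": 8.0}  # assumed alert limits

readings = [
    {"rack": "A1", "temp_c": 24.5, "power_kw": 6.2},
    {"rack": "A2", "temp_c": 28.1, "power_kw": 7.9},  # over temperature
    {"rack": "B1", "temp_c": 25.0, "power_kw": 8.4},  # over power budget
]

def find_alerts(readings, thresholds):
    """Return (rack, metric, value) for every reading above its limit."""
    return [
        (r["rack"], metric, r[metric])
        for r in readings
        for metric, limit in thresholds.items()
        if r[metric] > limit
    ]

for rack, metric, value in find_alerts(readings, THRESHOLDS):
    print(f"ALERT: rack {rack} {metric}={value} exceeds threshold")
```

A production monitoring system would add trending and forecasting on top of point-in-time checks like this, which is what allows problems to be prevented rather than merely reported.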

Conclusion

Although energy consumption has been a key concern for data center managers, the trend of using efficiency as a cost-cutting tactic has led to the reemergence of availability as a priority in the business world. Despite this dependence on continuity through natural or man-made interruptions, there are tactics – such as high-density power and cooling, flexible/scalable infrastructure, and data center assessment and optimization – to help improve efficiency while maintaining the 24x7 availability that businesses require.

Ron Bednar leads the strategic marketing and marketing services teams for the Liebert division of Emerson Network Power. Additionally, he is the chairman of the Green Grid’s Data Collection and Analysis (DC&A) working group, and also manages programs and industry research for the Data Center Users’ Group (DCUG).

Recently I was having a discussion with a fellow crisis management and business continuity professional regarding the challenges that organizations face with respect to testing plans. Testing BC/DR, crisis management, or any response plan and personnel is often low down on the list of "things to do." However, with recent crises such as the Gulf of Mexico oil disaster, will we see a shift in the way that corporations test their plans? Here are the details of my discussion with Jonathan Bernstein of Bernstein Crisis Management.

Bernstein: In my experience, a lot of organizations create crisis and business continuity plans of various types but then never test them. Why is that?

Burton: It really varies from one industry to the next, but generally an organization believes that if it has a plan, it is prepared for a crisis and the "box" has been checked. We initially observed this problem in the maritime sector after 9/11, when vessels, ports, and port facilities were required to develop security plans to comply with a new maritime security federal regulation. Upon approval by the USCG, certain vessels, port facilities, and ports were required to conduct an annual exercise, but the owners of the plans (the ports and terminal operators) didn't believe that it was a real necessity even though the regulation required it. This was mainly due to poor training and a lack of ownership from most terminal operators. It was a few years before they really started to conduct these tests. The marine industry, in my opinion, is still very short of best practice in terms of really testing its ability to respond to crises.

So, where an industry is required (regulated or publicly traded companies) to have a response plan (be it BC/DR, crisis, emergency, or security), it is easy for the entity to hire a consultant or write the plan internally and place it on the shelf to gather dust. The plan is never complete without testing it from a number of directions to ensure it is put "through the mill," and even then these programs should continue to mature. Maturity comes with regular testing, and it is extremely important, as an environment may change, a procedure or plan may get updated, or personnel may get replaced. Continually assessing risk is essential.

I will finish this question with the word "ownership." Organizations that evaluate their employees' BC/DR or crisis responsibilities on their annual report cards will see better results in all areas of the preparedness process. As an employee, if I'm taking on the important role (ownership) of a BC/DR or crisis management process, then I would like to be credited for it. It also works both ways if something goes wrong: the system has failed, and someone might be fired. Where organizations have dedicated teams for preparedness, this is not the case. However, for many other organizations it's often a role that is given to employees as an add-on to their existing responsibilities.

Bernstein: If you have a multi-location business, how can you engage all the members of your crisis response teams in a simulation exercise without incurring a huge expense?

Burton: It can be difficult because you use the word "engage." Do we really engage during a conference call or an online PowerPoint presentation that some would classify as a table-top exercise? I will let your readers answer that one the next time they are sitting twiddling their thumbs during such events. So, to get buy-in and be successful with these types of events, you first need an interactive Internet tool that is secure, scalable, and easy to use. Multiple locations may also mean testing partners and other stakeholders to ensure they are prepared to meet the needs during the time of a crisis. Take BP, for example: they are now offering employee bonuses based on safety results. In our opinion, there are five main components that an evaluation tool must have:

  1. Communications and collaboration – During the exercise design, the delivery, and after the exercise, it is critical that teams can easily communicate and collaborate. A good communication tool should always be backed up by a second and, where possible, a third. These tools should give participants a few options for collaboration, so IM chat, video conferencing, and e-mail are all good to have. We find that keeping people engaged by video conferencing works extremely well and, with today's technology, is cost effective and easily achievable.
  2. Situation awareness – During an exercise it's important that the tool provides a synopsis of current exercise activities, which is what we call the situation awareness dashboard. This should be easily accessible and provide a 1,000-foot view of what has happened and what the current state of play is. This is very important especially for regulators, senior management, board members, and any other observers who only have a few minutes to check in and see how the exercise is progressing. Gantt charts and widgets provide real-time information on critical activities such as facility, data center, and other key recovery activities.
  3. Easy to use and easy to follow – The tool has to be easy to use at the front end (much like a simple website) or you will run the risk of losing your constituents before the exercise even starts. Once a team has coordinated a response to a scenario, the tool should aggregate the content and accept various types of data and media including audio, video, and various documents. The gathering of responses in these various formats allows for better record keeping and evaluation, which leads to continual improvement.
  4. Immediate results – The tool should enable an organization to get immediate feedback from the exercise. This can be done in a number of ways depending on the evaluation criteria and scoring methodology. Immediate results give an organization defensible evidence when demonstrating to a regulator or internal management that a team is prepared for an incident. This is also useful when preparing for last-minute operations where a team needs a quick evaluation of its readiness.
  5. Training – It's vital that teams are provided with the basic information regarding the plan and their roles. Having access to training on a regular basis can only be achieved cost effectively in an eLearning format, especially when it comes to organizations that are dispersed across a city, region, or the globe. Having a training component built into a crisis management and business continuity evaluation tool provides standardized training that can be easily modified, updated and shared with the local, regional and global teams and partners.
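As an illustration only (this reflects no specific product, and the component names and threshold are invented for the sketch), the five components above could be modeled as a simple scorecard that aggregates per-component evaluator scores and produces the kind of immediate summary described in item 4:

```python
from dataclasses import dataclass, field

# Hypothetical component names mirroring the five points above.
COMPONENTS = [
    "communications",       # communications and collaboration
    "situation_awareness",  # dashboard / state of play
    "usability",            # easy to use, easy to follow
    "immediate_results",    # feedback available right after the exercise
    "training",             # eLearning / role familiarity
]

@dataclass
class ExerciseScorecard:
    """Collects 0-100 evaluator scores per component for one exercise."""
    scores: dict = field(default_factory=dict)

    def record(self, component: str, score: int) -> None:
        if component not in COMPONENTS:
            raise ValueError(f"unknown component: {component}")
        self.scores.setdefault(component, []).append(score)

    def summary(self, threshold: int = 70) -> dict:
        """Immediate results: per-component averages plus a list of gaps."""
        averages = {c: sum(v) / len(v) for c, v in self.scores.items()}
        gaps = [c for c, avg in averages.items() if avg < threshold]
        return {"averages": averages, "gaps": gaps}
```

A tool built this way can surface the "gaps" list the moment an exercise ends, which is what lets observers and regulators check readiness at a glance.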

Bernstein: I think a lot of C-suite executives are somewhat technophobic. Do you know any tricks for getting them comfortable with using any of the web-based tools for crisis management?

Burton: We are seeing a shift due to easy-to-use social media sites such as LinkedIn, Facebook, and Twitter, and also the fact that younger executives are now entering the C-suite. Ease of use is key. Will an organization purchase a tool that is difficult to use and time-consuming because of technical training requirements? So, easy to use and no technical training, but with all the bells and whistles, is what you want.

Bernstein: How often does an organization need to run simulations in order to truly be ready for a breaking crisis?

Burton: It depends on a number of specifics, but the general rule of thumb is that the plan should be tested when a key procedure or other major part of the plan changes, when personnel who might be impacted by the plan change, when new personnel join a team, when a regulatory body requires it, when an incident has occurred that may have required changes, and as often as is determined in the organization's policies and procedures. A policy for exercise regularity should be written based on an assessment of the program. Generally, an organization should conduct more exercises in the first three years of a new plan to ensure all the kinks are ironed out, and then reduce the frequency after that point. If an organization is continually responding to incidents, the plan will be indirectly tested, which may reduce the requirement to run regular exercises.
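The rule of thumb above is essentially a checklist of triggers plus a fallback policy interval. A minimal sketch (the trigger names and the 365-day default are illustrative assumptions, not drawn from any standard or from Burton's remarks) might look like:

```python
from datetime import date, timedelta

# Illustrative triggers drawn from the rule of thumb above.
TRIGGERS = {
    "procedure_changed",   # a key procedure or major plan section changed
    "personnel_changed",   # impacted personnel replaced, or new joiners
    "regulator_requires",  # a regulatory body mandates a test
    "post_incident",       # a real incident surfaced required changes
}

def exercise_due(last_exercise: date, today: date,
                 events: set, policy_interval_days: int = 365) -> bool:
    """An exercise is due if any trigger fired or the policy interval lapsed."""
    if events & TRIGGERS:
        return True
    return today - last_exercise >= timedelta(days=policy_interval_days)
```

In a young program, `policy_interval_days` would simply be set lower for the first few years and relaxed once the kinks are ironed out.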

Bernstein: What do you say to organizations that do horribly on their first exercise?

Burton: "Rome wasn't built in a day." No, seriously, testing a plan for the first time always has the potential for something to go wrong. However, what I will say is that if you have built up to the first simulation exercise by training personnel on their roles and running through a number of potential scenarios in meetings, then you should at least be prepared to respond in a coordinated and efficient manner. Working through a problem for the first time with new plans and personnel will be a learning experience for all and ultimately lead to more successful exercises in the future. Organizations should focus on an exercise program with a goal of conducting a certain number of exercises over a period of time to ensure any gaps are filled.

Jonathan L. Bernstein, president of Bernstein Crisis Management, Inc., has more than 25 years of experience meeting clients' needs in all aspects of crisis management – crisis response, vulnerability assessment, planning, training, and simulations.

Robert A. Burton is a principal of Blue Water Partners Global, where he specializes in crisis and security management and business continuity. Burton assists organizations of all sizes with risk assessment, plan development, training program design, the testing of plans and personnel, and integrated software solutions.