
When your company experiences a security breach, whether it's a result of fraud, theft, or human error, how you respond can make a world of difference to your shareholders, your customers, and your overall corporate image. An Identity Theft Resource Center poll states that "failure to communicate clearly in a crisis can increase (customer) turnover rates by up to 80 percent."

Tuesday, 08 December 2009 14:54

Working from home since 1980


As I sit here on my fiery little laptop using high-speed internet, I remember the days when working from home meant something a little different.

I’m not quite as old as dirt, I’ve never had a dinosaur burger, I didn’t start and stop my car with my feet, and a well known figure ranked a little higher than corporal when I was born. The Cold War was a teenager, the music had already died, but Miranda Rights hadn’t made it to the Supreme Court.

After graduating from high school I took the county bus to all the major factories and applied for jobs, hoping to get into one of the big names – AO Smith, P&H, Allis Chalmers, Rexnord, Miller Brewing. Buddies got together and drove dad's car to neighboring counties where firms like American Motors once lived. I learned that I lived on the edge of what would come to be known as the rust belt. There were no jobs for high school grads without some inside advantage. Getting "laid off" was a brand-new term; it meant there was a chance at getting your job back again, which was much better than getting fired.

My buddy said we should get into computers. So I took out a school loan and went to a local quick-start training program hosted by Manpower Business Training Institute (MBTI). With a 97 percent placement rate, it was easy to get a job after seven and a half months of book work with some hands-on computer operations.

Mainframe computer operator terminal.

After a few years I found myself a systems programmer specializing in the installation and maintenance of the mainframe operating systems (OS) named Virtual Machine (VM) and Virtual Storage Extended (VSE), as well as all the applications that ran under the two OSes. VM even had predecessors dating back to the early '60s.

There's nothing more fun than modifying an OS using a bunch of assembler instructions, as gracefully as a BALR-ina. Talk tech now and ask about control registers or chaining channel command words (CCWs) together and you get the deer-in-the-headlights look. I used to joke that when I died a core dump would pass before my eyes. Running the RDEV blocks to find a broken chain when a device doesn't respond. Using the program status word (PSW) to identify the failing instruction in a thousand pages of system dump. Techies say, "What are you talking about?" Today, the most complex problem is understanding the Windows registry and rebooting.

Okay, some actual years. I graduated high school in 1977 and started in IT in 1980, then became a systems programmer in 1983.

Working from home

While some big companies had modems and terminals, I was still required to talk computer operators through what I wanted done over the phone. They’d say, “Bill, the printer is running slow.” I’d say, “OK, type in CP SET PRI VSEPRT 5 and when you’re done printing set it back to 99.”

Starting in 1980, I used a neat little feature of the OS where I could type information into a file and then send that file to a user on the system. They could edit the file I sent them, add some text, and send it back to me. Although it wasn't called "e-mail" at that time, it was quite handy in a noisy computer room where telephone conversations were difficult. Instead of hearing, "You've got mail!" you'd get "00023 PUN FILE RCV" and have to type in RDRLIST and press a few PF keys to get your mail.

Another neat little command allowed you to actually send some text to others on the system. You’d type in “CP M OPERATOR I NEED TAPE XYZ ON DRIVE 580” and the operator would type “CP M BLANG TAPE MOUNTED.”

No, using all caps wasn’t yelling back then, it was just easier to read on the screen. Interesting that I was texting back and forth with people in 1980. I thought texting was such a new technology.

Another cool piece of software took control of your terminal and actually allowed you to have multiple terminals (or windows) all running separate programs – in 1980. I could draft letters in a word processor, compile and run programs, view reports, all kinds of activities on a single terminal in multiple sessions.

In 1984, I received my first remote terminal, a Silent 700. About the size of a laptop, it had a small keyboard and used paper tape and electric typewriter technology to print what would ordinarily come up on the screen. It operated at a speedy 300 baud over an acoustic coupler, which is about 300 bits per second (no, not Kbps) or roughly 30 characters a second once you count start and stop bits. Oh yeah, an acoustic coupler takes the phone handset and uses it to transmit data via sound.

Since the phone line was taken up with the terminal, I had to communicate with computer operators using that brand new technology, texting. “CP M OPERATOR …” The operator would message back, “CP M BLANG ….”

My company bought a brand-new large PC that was capable of running PC VM, but at 20 MHz it simply wasn't fast enough to run any meaningful business applications. Interesting that in 1984 we were running VM on a PC.

I got sick and tired of restoring files for people from tape, so I set up a virtual machine to automatically back up everyone's changed files to a special drive where I maintained daily copies for one week. For some programmers who heavily modified files all day long, I had to modify the XEDIT program's "file" and "save" commands so that I would be sent a copy to be archived every time they filed or saved something. That made it easy and fast when they called and said the last good file was from 10 a.m., and I could send them back their 10 a.m. file.

In 1985, the IT director sprang for new Informer terminals in carrying cases a little larger than a bowling ball bag. No more paper tape; all green screen, or amber screen in this case. Just like using a 3278 terminal at work, but much slower at 1200 baud. Bringing up files in full screen took forever, but using line-mode commands worked really well. Line mode is a lot like the DOS prompt, where all the commands are entered on a single line and all the responses come back as if they were going to a printer. Later I upgraded to 2400 baud, twice as fast.

 

In 1986, a new job downgraded me to an ADM3A terminal a little larger than a 13-inch TV and back down to about 300 baud. The white screen seemed nicer than green or amber, but it sure took up table space. And what a horrible case color: teal blue over medium blue.

We had two data centers with pretty much the same technology, one in Milwaukee and the other in York, Penn. We agreed to exchange OS files weekly over a 9600 baud dial-up line so that either data center could run the other's operations under VM. Since very few business people had terminal access to the systems, they would really never know where the systems were running. Each data center had banks of dial-in lines for programmers so they could work at either building or even some satellite buildings.

In 1988, a new job brought a PC dual diskette system with an IRMA card for 3270 mainframe connectivity to my desk.

For remote connectivity I had a 3276 remote control unit with 4800 baud dial-in capability, so I took a 3278 terminal and modem home.
The 3278-2 looked much like a massive mainframe operator console, just with fewer colored buttons on the keyboard.

Eventually, I got a nice little laptop that ran off of two 3.5-inch diskettes. The screen was painfully small, but with a megabyte-plus of disk space I could save some files to disk and use Edlin to make changes.

In 1989, I got a laptop with a 20 MB hard drive built in.

I began to spend more and more time dialed in, so I got a second phone line.

From 1990 to 1998, I worked through a series of laptops and eventually worked up to 9600 baud, then 14.4 Kbps, then 28.8 Kbps, then 56 Kbps. External modems and internal modems (ATZ, ATDT… if you know what I mean).

When 9600 baud modems were hundreds of dollars each, you had to do the ROI on the time savings from the faster speed: the number of characters transmitted, how long transfers took, and how much sooner the faster speed would let you get things done. Today we are again getting to the point where we just can't throw hardware at problems, much like the old days when careful calculation and planning were needed to succeed.

I connected the mainframe up to our hardware vendor's global network, bringing internet email to the company as well as internet faxing. Terminal connectivity came through the vendor's dialer and later through Systems Network Architecture (SNA) and Internet Protocol (IP), with and without dial-up. I administered more than 1,800 login IDs for the vendor's network using the vendor's nifty little terminal emulator package with scripting to upload/download production files and mainframe email.

Our vendor's terminal emulator had nice compression that made 9600 baud and anything higher almost as fast as being at work on a directly connected terminal, and you could access the mainframe as well as several midrange computer systems.

In 1995, I got into project management and traveled around the country implementing systems and conducting training, remotely connecting to training computer systems using a laptop and a large hardware/software vendor's global network.

In 1998, we had a DR contract with a large vendor and I bullied them into allowing us to also use their technical network link to connect to their DR site from work and home, using the vendor's dialer and terminal emulator product for remote DR.

From 1998 to 2005 I changed careers to BCM and didn't need a laptop, but I did use my high-speed internet and home computer to connect to work's web-based e-mail. I brought work diskettes and zip drives home to do work or just emailed files around.

I now use a laptop and VPN in to work over my high-speed internet to access everything as if I were sitting at my desk – e-mail, VoIP phone, shared drives, messaging. There's even a VPN to our DR site so all our technicians can work from wherever they are over high-speed internet.

A laptop backup client runs in the background and copies my changed files to the office servers as I update them. I can restore to just about any point in time. Or I can choose to work with files directly from the office servers, like I used to do when the server was a mainframe.

Indeed, everyone in my current company can work from home if necessary, and some even rotate working offsite to eliminate the need for new office space or to take care of the kids when the schools decide to close. Or when the winter flu comes around and people decide that it's not worth coughing and sneezing all over each other to get the job done.


We've come a long way since the days of the Silent 700 and 300 baud. Slowly but surely we are moving from a brick-and-mortar business world to one where people can work from anywhere and still get their jobs done effectively.

The recent inauguration, along with concerted efforts by telecom providers to prevent outages, has made the internet more robust. It is by no means at the level of being termed truly resilient and probably won't be for years. But in the meantime we can use our network infrastructure and technology to work remotely and improve our work environment, preventing or mitigating the risks that come with working in close groups.

 

Bill Lang, CBCP, MBCI, CBCV, is the business continuity program manager for VCPI and its clients (www.vcpi.com). Lang uses more than 25 years of IT experience to implement BCM techniques that have been proven successful in disasters. He is an active contributor to many online forums, long-term care and BCM conferences, and holds memberships in several emergency management associations and BCM-related standards committees.


An exercise of the elements of a business continuity or disaster recovery (BC/DR) plan is an important aspect of an organization's emergency preparedness. A BC/DR plan exercise validates the plan and procedures, tests and trains responders in simulated real conditions, and provides feedback to the plan developers. An exercise also can explore the ramifications of the crisis on the organizations involved. More importantly, an exercise helps answer the question: "Will my BC (or DR) plan actually work?"

With H1N1 now present in all parts of the world and the annual flu season underway, health experts believe as much as 40 percent of the global workforce could be affected at any given time. Not since 1968 have we faced such a bona fide threat from a pandemic outbreak. Obviously, this has prompted business continuity/disaster recovery planners to take comprehensive, if not extreme, measures to help their respective organizations prevent illness or, more importantly, prevail in its presence.


 

When you imagine the organization you're working to protect, how do you see it in your mind's eye?  How do you develop a sense for this complex set of buildings, people, computers, software, vendors and more?  And once you see the big picture, how do you record it and share it with others to improve reliability and recoverability?  In this short article I want to make the case for blueprinting your organization's dependencies, and explain how this consistent approach to understanding and documenting is key to successful planning and response.

A firefighter sees a mountain in terms of tinder, canyons, and prevailing winds by which a wildfire might spread.  Within a business, the impact of an event spreads via critical dependencies between such resources as people, business processes, and services.  Lose a datacenter, and the fire spreads up the chain of dependencies.  Unavailable servers mean downed applications and downed business processes.  Without functional business processes, the ability to conduct business goes up in smoke.  Like the firefighter assessing the mountain, the DR planner must grasp the critical dependencies that could hamstring an organization and use this knowledge to increase reliability and recoverability.

Fortunately, understanding and documenting dependencies, though not a small task, is consistent, repeatable, and produces tangible blueprints used by anyone in the organization who wants to "see" the business more clearly.

It's as simple as asking yourself (and the business units), "What does that need?" and documenting the answers.  If you do this graphically and consistently while keeping the results together, you're on your way to compiling dependency blueprints.  Use that same single-minded focus of recording what needs what, and the big picture reveals itself.  You can start anywhere, though it's wise to begin with higher level abstractions first, like critical business processes.  Record a business process by naming it and asking, "What does this process need?"  Record all the answers: people, hardware, computer applications, vendor provided services, or other business processes.  Then take it further and ask what those resources need.  Keep walking down the chain of dependencies and build a diagram detailing everything involved in supporting that high level business process.
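To make the "what does that need?" walk concrete, here is a minimal sketch in Python of a dependency map and the recursive walk down it. The resource names and the structure of the map are hypothetical illustrations, not taken from any real blueprint or tool.

from typing import Dict, List

# Hypothetical dependency map built by repeatedly asking "what does that need?"
DEPENDENCIES: Dict[str, List[str]] = {
    "Order Fulfillment": ["Warehouse Staff", "Order Entry App"],
    "Order Entry App": ["App Server", "Customer Database"],
    "Customer Database": ["DB Server", "Nightly Backups"],
    "App Server": ["Primary Data Center"],
    "DB Server": ["Primary Data Center"],
}

def walk(resource: str, depth: int = 0, seen: frozenset = frozenset()) -> None:
    """Print the chain of dependencies beneath a resource, one level per indent."""
    print("  " * depth + resource)
    for need in DEPENDENCIES.get(resource, []):
        if need not in seen:  # guard against circular dependencies
            walk(need, depth + 1, seen | {resource})

walk("Order Fulfillment")  # start from a high-level business process

Starting from a high-level business process and recording every answer in one consistent place is what turns scattered tribal knowledge into a blueprint anyone in the organization can read.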

Like a web crawler following links among web pages, tracing and blueprinting dependencies is a simple strategy for searching out important resources of a company and putting them into a greater context.  The blueprints detail what an individual resource of the company requires to function, as well as why that resource is necessary to the company.  

Here are seven reasons why you should consider blueprinting your organization’s dependencies:

1. A Picture is Worth a Thousand Words:
Cliché or not, the statement really does characterize the goals of visualization where large amounts of data must be absorbed quickly. Traditional textual BC/DR plans are insufficient. This is especially important when the information must be dissected and discussed within groups of people. The visual representation of an organization allows experts to agree that dependencies are accurately documented.

2. Bridge the Gap Between Business and IT:

In most companies a large disconnect exists between business units and IT workers, creating communication challenges between the users of IT services and the IT service providers. A critical function of the BC/DR planner is to bridge this gap between the business and IT ends of the organization. The blueprint connects the business process to the IT resources.

3. Resource Planning:
Comprehensive blueprints allow for intelligent resource planning. During a rebuild they detail the order of restoration and help the parties involved divide the work. The blueprints reveal gaps in understanding of the current environment, and they enable planners to easily identify critical sub-resources such as backups, vendors, and software keys by clearly linking them to critical business processes.

4. Consistency of Information:

Consistent presentation aids understanding. During a crisis, when minds are already overwhelmed, you don't want inconsistencies in the way critical documentation is presented and interpreted.

5. Accurate Impact Analysis:

A visual blueprint enables accurate “what-if” analysis of IT reliability, environmental hazards, and both intentional and unintentional human-caused events.

6. Less Downtime:

Staying up when others may be down is good business – not to mention good public relations. Comprehensive blueprints allow organizations to reduce downtime during routine events through dramatically better troubleshooting.

7. Rebuilding at a Hot Site:
Much like a craftsman needs a good set of plans to work from when building a house, your organization needs a comprehensive set of dependency blueprints to rebuild from at a hot site following an event.

Tom Normand is the CEO of Pathway Systems. Tom has recently been elected to serve on DRJ’s Editorial Advisory Board. Tom can be reached at tnormand@pathwaysystems.com.

Wednesday, 09 September 2009 10:59

The Essential Components of Data Center Design


The modern data center runs the gamut from the tiny "cargo container" style to gigantic data center campuses that sprawl across hundreds of acres, and from energy gluttons to theorized electricity sippers. The best design is not only focused on energy efficiency but must also consider the flexibility to meet a constantly changing environment. Since requirements are constantly changing, we are continually shooting at a moving target. Data centers built only a few years back weren't designed for cooling with outside air or hot- and cold-aisle containment, and didn't anticipate the kind of power densities required of a modern-day facility. If you're building a data center today, you want to maximize profit by minimizing operational costs while at the same time providing a bulletproof solution for your customers. Consequently, solid initial design must be paired with the ability to adapt to new customer requirements and emerging technologies.


Communication failures have historically plagued organizations, limiting their ability to respond to and minimize the human, operational, and financial impact of a crisis. When disaster strikes and every second counts, organizations need to focus on the mission-critical tasks of ensuring the safety of their people and continuing operations. One of the backbones of accomplishing these tasks is ensuring sustained, ongoing communication with key personnel, constituents, and partners, with high levels of mutual understanding, comprehension, and coordinated behavior. The need for such high-performance communication is often at its peak during the most challenging and confusing moments of a crisis, making sustainable crisis communication one of the most important priorities.

Transforming Business Continuity Capabilities by Targeting Actionable Information to Thousands

This article will discuss best practices around emergency notification and list the requirements an organization should demand before choosing an effective business continuity planning (BCP) or emergency notification system. It covers the downfalls of mass notification that lacks the ability to target actionable information through role-specific delivery. Additionally, the article addresses how an organization can recover from disasters faster and in a more organized manner, saving the organization time, resources, and reputation as well as ensuring the safety of personnel.

As I mentioned in my recent DRJ article, there are many kinds of virtualization, and all of them can be used to support your disaster recovery or business continuity plan. When you mention the word, most IT staff tend to think of server virtualization. However, application and desktop virtualization can also help in your BC planning process. I will first describe how applications and desktops can be virtualized, then show you how they can be used as part of your BC program.



Application Virtualization

A virtualized application is not installed in the traditional sense, although it still may be executed as if it were. The application is fooled at runtime into believing that it is directly interfacing with the original operating system and all of the resources managed by it, when in reality it is not. Application virtualization can improve the portability, manageability, and compatibility of an application by decoupling it from the underlying operating system on which it is executed.

Server Side Virtualization
There are multiple ways of virtualizing applications. With server side application virtualization (Figure 1), applications run in the data center and are displayed on the user’s PC through a browser or specialized client. The application does not need to be compatible with the operating system running on the PC because the PC is just displaying a window into the application. The beauty of this is that just about any computer system with a browser can be used to access the application, and most malware will not have any effect on the application. I say most because a keystroke logger still could be used to capture information between the PC and the application.

Streaming Virtualization
With streaming or client-side virtualization, the application resides in the data center but is delivered to the user's computer to be run locally (Figure 2). Because it is running locally, the resources that normally would be installed into the OS, such as dynamic link libraries (DLLs), code frameworks, control panels, and registry entries, are installed into an application container and the entire container is streamed. Because each application is in its own container, negative interaction between applications is prevented.

The container can be sent to the PC every time that it is needed, or it can be stored on the user’s PC for a specific period of time before it expires and needs to be streamed again. The latter method allows for use of the application even when not connected to the network, for example, while on an airplane.

As with server-side virtualization, application updates are easy since there is only one copy of each application and it resides in the data center. This means that only one copy gets updated, rather than needing to push updates out to hundreds or thousands of PCs on your corporate network. From a business continuity perspective, this means that you can store laptops for a long period of time without needing to fire them up periodically for updates.

Another way to virtualize an application is similar to the previous approach in that the application is still packaged into its own container, but it permanently resides on the user’s PC instead of being streamed. When the application needs to be updated, a new container is downloaded to the PC.

An immediate benefit to virtualizing an application in any of the ways shown above is the elimination of DLL hell, which happens when incompatible applications are installed on the same OS. A common and troublesome problem occurs when a newly installed program overwrites a working system file with an incompatible version and breaks the existing applications.


Desktop Virtualization
Desktop virtualization or virtual desktop infrastructure (VDI) provides a personalized PC desktop experience to the end user while allowing the IT department to centrally run and manage the desktops. Desktop virtualization is an extension of the thin client model and provides a ‘desktop as a service’ which runs in the data center.

The user does not know and does not care where their desktop is running. They access it through a window, which may be a specialized client or web browser. In fact, depending on the security policy they may be able to access their desktop from anywhere using any device, even one that is not compatible with the desktop OS being served.

Since virtualized desktops are centralized, it is easy to keep them patched, prevent users from installing software or making configuration changes that they shouldn't, and load balance the users or upgrade their OS as needed without needing to upgrade the users' endpoint hardware.

When you virtualize a desktop and add virtualized applications on top of it, the user is provided with a brand new PC experience every time that they connect to their desktop. The well-known problem of PCs slowing down as they are used becomes a thing of the past.

And when the user leaves, you don’t need to worry about them taking the data with them as it is in the data center. As part of your termination process, simply remove access to the virtual desktop.



Disaster Recovery and Virtualized Applications
While desktop virtualization can be used to provide protection against information leakage, desktop and application virtualization also can be used for disaster recovery purposes. Since server-side virtualized applications or desktops run in the data center, theft or destruction of the employee's PC will not cause loss of data, since the data usually is stored within the corporate network as well.

However, if the applications are streamed or locked down on the PC, the chances are high that the data will be there too. Your information security policy should require periodic backups of PC data files onto corporate storage, where the information can be stored safely with other corporate assets.

The Hybrid Approach
An interesting hybrid approach would combine streamed or local applications with server side virtualized applications or a virtual desktop.

That is, instead of taking backups of user data to static disk or tape, the user’s local data and preferences are merged with a compatible virtual desktop on a periodic basis. After the user’s data and application preferences are captured, they can be served up securely to any PC which the user has access to, whether it be in a work area recovery center, hotel business center, or at a relative’s house.

The opposite can be done as well, where data from virtualized applications can be synced with a user’s local PC. Imagine using Google apps in the cloud on an everyday basis, but when Google is unavailable or you are on an airplane, you can use a local copy until you can reconnect.

Summary
When you think virtualization, don’t just think of server virtualization. Application and desktop virtualization can provide powerful tools for both information security and business continuity. Not only do your corporate applications need to be available after an event, but your employees need the resources and infrastructure to be able to get to them. And if your company is like many others, critical data is on employees’ desktops and laptops. Backing up data on employee PCs is not enough; employees may need access to this data within a very short period of time and from a system which either may not be compatible or doesn’t have the proper applications installed to access the information. One of the most flexible and secure ways to deliver applications and data to your employees is to deliver it via a virtualized application or desktop.

Ron LaPedis is a trusted advisor at Sea Cliff Partners which brings together business continuity and security disciplines. He has taught and consulted in these fields around the world for more than 20 years and has published many articles.  Ron has two virtualization patents pending and is a licensed amateur (ham) radio operator, instructor, and volunteer examiner. He can be reached at rlapedis@seacliffpartners.com.

Monday, 27 July 2009 14:11

By The Numbers: Risk Analysis 90-10


Thirty-four million nine hundred thousand results. That was the immediate result of Googling "risk analysis." Twenty-five million eight hundred thousand results. That is the Google result for "risk assessment."

These numbers are overwhelming. Given their magnitude, one could conclude that everything that could be written has been written about risk analysis and risk assessment.

In the spirit of these numbers, this article is not an attempt to add one more promotion of the added value or the basic planning foundation provided by a risk assessment, a risk analysis, or both, but rather to present to the reader a technique this author has used, in many cases, to obtain 90 percent of the benefit of a full risk analysis with 10 percent of the effort. The author is the first to concede that this technique is not 100 percent all-inclusive and is not foolproof; however, over years and multiple client circumstances it has provided a substantial and dependable building block with which to construct emergency response plans, and it has potential for disaster recovery and business continuity plans too. The author leaves it to the reader to determine the value this technique brings to each situation and its appropriateness for use.

Step I
A list of risks must be established in order to prioritize and classify their severity to the organization. The team must consider not only those risks internal to the organization, such as illness or lack of air conditioning, over which they may have some element of control, but also external influences from the outside environment, such as blizzards, airplane crashes, and labor disputes, over which they will have little or no control. Appendix A displays a one-page listing of a variety of risks. The author provides this list to his clients, not as an all-inclusive list of everything that could happen, but to nurture the client's thought process in reviewing their business environment. As the list of risks is built, it is imperative that the participants understand and agree on the scope and definition of each risk added to the list.

To achieve the maximum benefit of this exercise, a representative population of the organization and site personnel should participate in the selection process in order to have the most accurate list. Consider including in the risk analysis exercise (as appropriate) facilities, emergency response, security, business process owners, system owners, IT, and the right audit, quality, or validation representatives, in addition to those who will first come to mind.

The two most important factors in performing this process are the perspectives of the participants around the table and the risks itemized on the list to be considered and analyzed.

The risk analysis exercise facilitator will want to have all potential risks identified. Using this method, the maximum number of risks that can be successfully addressed at any one sitting is 20; however, it has been this author's experience that a quantity between seven and 14 risks usually works best. More than 14 requires a greater piece of the participants' time than is usually available, and a larger number of risks makes it more difficult to retain the participants' attention and participation in the process. There have been instances where the author has combined similar risks to be considered as one risk. Keep in mind that a major goal of this process is to keep it short, effective, focused, and accomplishable in one sitting, thereby obtaining the 90 percent benefit with 10 percent of the normal overhead of time and effort.

Step II
To perform the risk analysis process as this paper describes, the team will want to have at least three copies of the worksheet displayed in Appendix B, because the exercise, as described herein, will require three passes over the list of risks. A suitable substitute is a writing surface, such as coordinate graph paper, that can be laid out in a manner similar to Appendix B. The author initially used graph charts in a manual process but now uses a multi-tabbed spreadsheet which remembers, counts, calculates and sorts values. The reader will understand that Appendix B was originally designed as a demonstration tool (for the conservation of time) and not for actual use.

The reader may at first be confused by the appearance of two numbers for each intersection of a column and row. Remembering that this is a demonstration model, each pair of numbers represents the finite selection from which choices will be made later in the process. Using the author’s spreadsheet, the selected number is typed into the cell.

Step III
Down the diagonal line from upper left to lower right, enter the list of risks determined in Step I. This is displayed in Appendix C. Again, Appendix C was built as a demonstration tool. The risks displayed were randomly selected and do not represent any specific situation or condition. Optimistically, your list will not have as many as 20 entries. For this example, 20 risks provide the fullest execution of the process.

Step IV
Under the direction of a facilitator, the assembled team makes three passes across a sheet or work space that resembles Appendix C, thus the need for three copies. With the exception of the facilitator, all participants vote by raising their hand or some other signal. A very basic but important rule of this process is that "the majority rules." When there is an even number of participants and tie votes happen, allow a fixed time for discussion, and then vote again. An egg timer works well in this circumstance. A representation of the first pass is displayed in Appendix D. Beginning in the upper left corner, ask the question, "Which is more likely to happen? A fire (No. 1) or an earthquake (No. 2)?" As indicated on the sheet by the bold italic number 1 (for this demonstration), a fire was voted to be more likely than an earthquake. A key point to the simplicity of this process is that it is not necessary for the participants to know the exact mathematical probability of either event; only that one is more likely than the other to happen. Ask the question a second time: "Which is more likely to happen? A fire (No. 1) or sabotage (No. 3)?" Sabotage was selected to be more likely to happen, so No. 3 was marked in bold italic. And again: "Which is more likely to happen? A fire (No. 1) or a disgruntled employee (No. 4)?" The disgruntled employee was selected, so the 4 was marked in bold italic. Continue this process, comparing the likelihood of a fire to the rest of the risk items. When FIRE is completed, move to the next column to the right and compare EARTHQUAKE against the remaining risk items just as done previously for FIRE: "Which is more likely to happen? An earthquake (No. 2) or sabotage (No. 3)?" Continue this process until the last question asked is "Which is more likely to happen? Embezzlement (No. 19) or war (No. 20)?"

Step V
Using the T-table at the bottom of the grid, count the number of times each risk was selected. In the example, FIRE was selected six times; EARTHQUAKE, one time; SABOTAGE, eight times; and so on, as displayed in Appendix D. Using charts or graph paper, this counting is a manual process. The use of a multi-tabbed spreadsheet has the potential to make the remembering, counting, sorting, and calculating automatic.
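For readers who like to see the bookkeeping spelled out, the following Python sketch shows one pass of the pairwise voting and tallying. The function names and the winner_of callback are illustrative assumptions standing in for the facilitated group vote, not part of the author's spreadsheet.

from itertools import combinations

def tally_pass(risks, winner_of):
    """Make one pass over every pair of risks; winner_of(a, b) returns whichever
    risk the group voted for (e.g. the one more likely to happen)."""
    counts = {risk: 0 for risk in risks}
    for a, b in combinations(risks, 2):
        counts[winner_of(a, b)] += 1
    return counts

# Toy example: a vote in which the group always picks the first risk offered.
risks = ["Fire", "Earthquake", "Sabotage"]
print(tally_pass(risks, lambda a, b: a))  # {'Fire': 2, 'Earthquake': 1, 'Sabotage': 0}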

Step VI
The risk analysis participants then complete two more passes of the grid. On the second pass, ask the question, "Which will cost more when it happens?" This cost is not only in lost revenue, but also in lost discounts, overtime, fines, etc. This is displayed in Appendix E. Again, it is not necessary that the participants know exact dollar amounts, only which event they would expect to cost more when it happens.

Step VII
Again, tally the occurrences.

Step VIII
On the third pass through the risks, ask the question, "Which will have a greater impact on the organization when it happens?" Examples of impact are also displayed in Appendix E.

Step IX
Having completed the three passes, the review team’s worksheet or collection of worksheets should look something like Appendix F. The three T-tables show the number of times each risk was selected for (1) likelihood to happen, (2) costs, and (3) impact on the organization.

Step X
The fourth T-table in Appendix F, labeled RISK #/SUM, is obtained by adding across the previous three T-tables. For the first entry, (6) plus (18) plus (2) equals 26. For the second entry, (1) plus (18) plus (16) equals 35, and so on.

Step XI
After all addition sums are completed, the fifth T-table, labeled, RISK #/SUM SORTED, is obtained by sorting the SUMs in descending order. At this point the list of risks has been prioritized from most severe to least severe, considering (1) Likelihood to Happen, (2) Costs, and (3) Impact on the Organization.

Step XII
The last step in the process is to determine and assign the risk priority number (RPN). To determine the RPN:

  1. Multiply (N-1) by 3, where N is the number of risks being analyzed and 3 is the number of passes made. In the case of this demonstration, it would be twenty risks minus one, or 19. Nineteen multiplied by three is 57. Nineteen is the number because 19 is the maximum number of times any one risk on the list of 20 can be selected when compared to the other 19.
  2. Then calculate the percentage the sum is of 57 to obtain the RPN. For the purpose of this demonstration, an RPN of 60 or higher is a "High Risk," 40-59 is a "Medium Risk," and less than 40 is a "Low Risk." The results of this demonstration example are displayed in Appendix F. As an example of the power and flexibility of this process, referencing the RPN permits the practitioner to compare results across multiple risk analysis exercises using a common unit of comparison. (A short sketch of this calculation appears after this list.)
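As a rough sketch of Steps X through XII, the following Python fragment (building on the tally_pass sketch above) sums each risk's counts across the three passes, converts the sums to RPNs, and applies the 60/40 thresholds from the demonstration. The names are again illustrative, not the author's spreadsheet.

def risk_priority_numbers(risks, passes):
    """Sum each risk's counts across all passes and express the result as a
    percentage of the maximum possible score, (N - 1) * number of passes."""
    max_score = (len(risks) - 1) * len(passes)
    sums = {risk: sum(p[risk] for p in passes) for risk in risks}
    rpns = {risk: round(100.0 * total / max_score, 1) for risk, total in sums.items()}
    return sorted(rpns.items(), key=lambda pair: pair[1], reverse=True)

def rating(rpn):
    """Demonstration thresholds: 60 or higher is High, 40-59 is Medium, below 40 is Low."""
    return "High" if rpn >= 60 else "Medium" if rpn >= 40 else "Low"

# passes would be the three dictionaries returned by tally_pass for
# likelihood, cost, and organizational impact, e.g.:
# for risk, rpn in risk_priority_numbers(risks, [likelihood, cost, impact]):
#     print(risk, rpn, rating(rpn))

With 20 risks and three passes the maximum score is 57, so a summed count of 35 works out to an RPN of 61.4 percent, matching the tie example in Appendix F.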

At times, two or more risks will have identical RPNs. The immediate situation may require an ability to break ties. For the example displayed in Appendix F, risks 2, 3, and 4 have an RPN of 61.4 percent. By reviewing Appendix D, the reader will observe that risk 3 was selected over risk 2. Risk 4 was selected over both risk 3 and risk 2. For the purpose of this example, even though they have equal RPNs, within this tie, these risks prioritize 4, 3, and then 2.

Flexibility
The 12 steps described represent a basic execution of the 90-10 process. Flexibility is a key to the success and power of this process. While the three questions demonstrated in this paper could be replaced by other questions, these three seem to generally be the questions of highest concern and attention. The risk analysis team does not necessarily need to make three passes. They may decide that two passes are appropriate for their situation. It is certainly possible they could have different questions more specific to their local condition. Or, they may think of a fourth question to be asked and make four passes. It may be that the risk analysis team prefers to weight one question heavier than the others. To do that, they would accordingly adjust the counts. The No. 3 in the example above would be increased or decreased proportionally with the weighting factor.

It may be that the situation requires threshold percentages different from 60 percent and 40 percent. These too can be adjusted to fit the business condition. The reader should keep in mind that consistency across the organization must be maintained to obtain meaningful results. Maintain the same questions and the same thresholds across the organization.

At the end of the process the risk analysis team has a prioritized, finite list of risks. Each risk is assigned a label of "High," "Medium," or "Low." While the risk analysis team could more quickly and with less effort get to a high-medium-low rating by using a subjective three-by-three square, this 90-10 process provides detailed support for the rating, an understanding of how the rating came to be, and answers to the relative questions of "High, with respect to what? Medium, with respect to what? Low, with respect to what?" This process allows the team to look inside and interpret the results. The RPN provides a method to observe how the risks cluster together or separate themselves from each other. The RPN feeds management an understandable number it can use to elect to address or accept a risk.

The power of this process lies within its flexibility. The end result is a risk analysis process that is relatively straightforward, easy to execute, and highly fruitful to the planning process. It is short, effective, focused, detailed, and accomplishable in one sitting.

For the past 16 years, Gary G. Wyne, CBCP, has been the business continuity planning coordinator for enterprise information services of Eli Lilly and Company, Indianapolis, Ind. He is a past president of Midwest Contingency Planners, Inc. and is currently a member of the DRII Certification Commission, chairing the recertification committee.