
In late 1988, the Disaster Recovery Journal featured an article by William Bedsole discussing some of the complications arising from today’s PCs as compared with those of eight years ago. “In most cases,” he asserted, “corporate management and MIS don’t understand what today’s PCs are being used for or how dependent their organizations have become on their availability” (“Are Your PCs Protected?”, page 189).

Bedsole defined the main problem that arises from these complications—namely, protecting the vast amounts of vital corporate data. “Users of these systems often are not from the DP ranks and have not accumulated the hard-earned backup disciplines of their mainframe counterparts,” he stated.

These problems, however, can now be brought to an end. Secure Data Network, Inc. has developed the first online data backup and retrieval service for IBM-compatible PCs and PC-based local area networks (LANs). The recent onslaught of natural disasters, along with the profusion of computer viruses, makes this sophisticated system of PC data backup well timed.

“With the SDN Backup System, PC users can have the kind of off-site, out of state, secure backup of data that might cost hundreds of thousands of dollars for a mainframe computer—at a fraction of the cost,” said Frank Reed, Executive Vice President of Secure Data Network, Inc.

The SDN Backup System, through automatic dial-up connections, compresses and encrypts files, then relays the data to two SDN-maintained remote sites at some of the highest attainable transmission rates. Built on SDN’s custom-designed communications software and hardware, this method of off-site, multiple-location storage is not only secure but unique.

Perhaps most importantly, at least with respect to Bedsole’s concerns, the SDN Backup Software requires a minimum of training. After the user and SDN select the data to be backed up and specify the user-defined time intervals, the rest of the backup process is entirely automatic.

At the designated time, the SDN Backup System automatically scans hard disks for new or altered files. This data is then compressed by roughly 65% and relayed through standard telephone lines or by satellite to the two remote sites. One of these sites is an SDN “Substation,” usually within the user’s area code or at least accessible through a toll-free number. To ensure the utmost data security, however, the SDN Station itself is located out of state.
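To illustrate the kind of change detection and compression the system performs, here is a minimal sketch in Python. The SDN product itself was proprietary PC software, so the directory path, the timestamp marker file, and the use of zlib here are illustrative assumptions, not the actual implementation.

```python
import os
import time
import zlib

STATE_FILE = "last_backup_time.txt"   # hypothetical marker left by the previous run
BACKUP_ROOT = "C:/data"               # illustrative directory to protect

def last_backup_time():
    """Return the timestamp of the previous backup, or 0 if none exists."""
    try:
        with open(STATE_FILE) as f:
            return float(f.read())
    except FileNotFoundError:
        return 0.0

def changed_files(root, since):
    """Yield paths of files created or altered since the last backup."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > since:
                yield path

def run_backup():
    since = last_backup_time()
    for path in changed_files(BACKUP_ROOT, since):
        with open(path, "rb") as f:
            raw = f.read()
        packed = zlib.compress(raw, level=9)
        ratio = 100 * (1 - len(packed) / max(len(raw), 1))
        # In the real service the compressed (and encrypted) stream would be
        # relayed by modem or satellite to the two SDN sites; here we only
        # report the compression achieved.
        print(f"{path}: {len(raw)} -> {len(packed)} bytes ({ratio:.0f}% smaller)")
    with open(STATE_FILE, "w") as f:
        f.write(str(time.time()))

if __name__ == "__main__":
    run_backup()
```

Run unattended at the scheduled interval, a loop of this shape is what makes the process “entirely automatic” once the data and time intervals have been chosen.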

The SDN Backup System has several built-in security measures. The system compresses and encrypts files using Defense Department-approved Data Encryption Standard (DES) security, so that only the owner, with a personal password, can retrieve and decrypt the data.
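Below is a minimal sketch of password-keyed DES encryption in the spirit of the paragraph above, assuming the pycryptodome package is installed. DES appears only for fidelity to the 1989 text; it is far too weak for modern use, and the key-derivation step is an illustrative invention rather than SDN’s actual scheme.

```python
import hashlib
from Crypto.Cipher import DES          # pycryptodome, assumed available
from Crypto.Util.Padding import pad, unpad

def derive_key(password: str) -> bytes:
    """Reduce a personal password to the 8-byte key DES requires (illustrative KDF)."""
    return hashlib.sha256(password.encode()).digest()[:8]

def encrypt(data: bytes, password: str) -> bytes:
    cipher = DES.new(derive_key(password), DES.MODE_CBC)
    # Prepend the random IV so the owner can decrypt later.
    return cipher.iv + cipher.encrypt(pad(data, DES.block_size))

def decrypt(blob: bytes, password: str) -> bytes:
    iv, body = blob[:8], blob[8:]
    cipher = DES.new(derive_key(password), DES.MODE_CBC, iv)
    return unpad(cipher.decrypt(body), DES.block_size)

secret = encrypt(b"quarterly ledger", "my-personal-password")
assert decrypt(secret, "my-personal-password") == b"quarterly ledger"
```

The property the article emphasizes is visible here: without the personal password, the stored blob cannot be turned back into usable data.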

Another security precaution is the automatic detection and isolation of computer viruses. If one is discovered during the 18-point check for known viruses, the user is notified and can remove it from the system. During the backup process, however, the virus is compressed and encrypted along with the files, rendering it harmless. According to Reed, “It can’t do anything else to the system or to anyone else’s data.”
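The article does not describe how the 18-point check worked internally; the sketch below shows a generic signature-based scan of the sort such checks used at the time. The byte patterns are hypothetical placeholders, not real virus signatures.

```python
# Hypothetical signature table; a real scanner would ship a curated database.
KNOWN_SIGNATURES = {
    "hypothetical-boot-virus": b"\xde\xad\xbe\xef",
    "hypothetical-file-virus": b"EVILCODE",
}

def scan_file(path: str) -> list[str]:
    """Return the names of any known signatures found in the file."""
    with open(path, "rb") as f:
        contents = f.read()
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in contents]
```

A hit would trigger the user notification the article describes, while the backup itself proceeds with the infected file neutralized inside its compressed, encrypted container.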

The SDN Subscriber System 1000 is now available at low cost. The subscription fee includes the custom SDN backup software and 60 minutes of backup time, with additional backup time available.

Subscribers are provided with 24-hour online access to stored files. Upon request, SDN will deliver backup files on various forms of duplicate media, such as floppy disk, tape, optical storage, or hard disk drive.

In the future, Secure Data Network, Inc. plans to add higher-speed modem options, such as satellite and fiber-optic modems, and the capability of backing up scanned images, such as documents and engineering drawings.

This backup system not only accepts responsibility for data protection, but even establishes the frequency of data backup.

Bedsole’s concerns have finally been taken care of—and then some.


Richard Newman is a staff writer for the Disaster Recovery Journal.

This article was adapted from Vol. 3, No. 1, p. 53.

As businesses and other organizations become increasingly reliant upon computerized systems, particularly online transaction processing (OLTP) systems, MIS executives are required to develop backup plans that ensure the survival of critical data. In a number of industries, the notion of relying on last night’s backup of a critical database has become obsolete. Users cannot be expected to re-enter lost transactions after recovering from a disaster; instead, a technology-based solution can address the requirement of remote recoverability for critical databases.

In the IBM mainframe world, a number of possible configurations exist to support online transaction processing. Generally speaking, mainframe sites use a full-function DBMS such as IMS/VS or IDMS/R, or possibly a teleprocessing monitor system such as CICS/VS, to manage the corporate database. DB2 is emerging as the relational DBMS for future production applications; online transactions are processed using either CICS or IMS as a front end to DB2. In many large installations, today’s reality is frequently some combination of the systems listed here.

Online and batch processing of databases requires full integrity and recoverability, which is ensured through the use of log or journal files. As transactions are processed, the DBMS writes log or journal records that capture the changes in a consistent format. In the event of a hardware failure, a forward recovery utility rolls forward from a starting backup copy of the entire database. Users are already familiar with these techniques, so it would be desirable to extend the concept to remote recovery situations. Thus, remote logging (or remote journaling) in conjunction with off-site backups of complete databases would seem the most logical and straightforward approach to solving the problem.
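To make the replay concrete, here is a minimal sketch in Python of forward recovery as the paragraph describes it: journaled after-images are reapplied, in order, on top of a backup copy. The record layout and account keys are illustrative inventions, not actual IMS, CICS, or DB2 log formats.

```python
# Image copy of the database taken at backup time (illustrative content).
backup_copy = {"acct-1001": 500, "acct-1002": 250}

# After-images written by the DBMS to the log/journal as transactions committed.
journal = [
    {"key": "acct-1001", "after_image": 450},
    {"key": "acct-1003", "after_image": 75},
    {"key": "acct-1002", "after_image": 300},
]

def forward_recover(backup, log_records):
    """Apply each journal record, in order, on top of the backup copy."""
    db = dict(backup)
    for record in log_records:
        db[record["key"]] = record["after_image"]
    return db

recovered = forward_recover(backup_copy, journal)
# If the journal stream is also written to a remote site as it is produced
# (remote journaling), this same replay can run at the recovery center.
print(recovered)  # {'acct-1001': 450, 'acct-1002': 300, 'acct-1003': 75}
```

The appeal of remote journaling follows directly: the recovery site needs only the periodic image copy plus the continuously shipped journal to reconstruct the database up to the moment of failure.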

This article will describe a number of approaches currently available and analyze each alternative from several perspectives. Some of these alternatives use log or journal data and some do not. The important point to keep in mind is the tradeoff between cost and risk: spending more money should lower the risk of data loss. How you assess this tradeoff will affect your decision on which alternative to pursue.

Hurricane Hugo’s appearance on the East Coast in September 1989 and the earthquake on the West Coast in October 1989 remind us all of how vulnerable we are to the threats Mother Nature can visit upon us. The widespread devastation to life and property emphasizes the need to plan for recovery and loss prevention in case of a disaster. This planning applies not only to personal property, but also to the property of the organization.

In the event of a disaster, various organizations play a key role in the recovery process. However, before an organization can assist others in the recovery process, it must first recover itself.

Some organizations, such as financial institutions, face regulations that require an organization recovery plan. Many other organizations have no such regulatory requirement. Whether mandatory or not, the development of a comprehensive organization recovery plan yields the following benefits:

  • Minimizes economic loss
  • Reduces disruptions to operations
  • Provides organizational stability
  • Achieves an orderly recovery
  • Reduces legal liability
  • Limits potential exposure
  • Lowers the probability of occurrence
  • Minimizes insurance premiums
  • Reduces reliance on key personnel
  • Protects assets
  • Ensures the safety of personnel and customers
  • Minimizes decision-making during a disaster
  • Reduces delays during the recovery process
  • Provides a sense of security

THE ANSWER IS: Any organization implementing a disaster recovery plan should select the appropriate techniques and technologies for collection, report generation and maintenance of the requisite information as a part of the plan development process.

WHAT’S THE QUESTION? It should be, “Is relational database technology appropriate for use in development of a disaster recovery plan?”

It is my opinion that the use of a relational database, or any other technique and supporting technology, should be determined by the organization developing the plan, and not be embedded in the planning product. The reality of disaster recovery plan development is that:

1. It should be approached like any other project, remembering that selecting a technical solution before developing the project requirements most often fails.
2. It is a process that lacks “glamour” at best, and for most organizations is done to “get it over with” rather than gaining corporate visibility. To this end, the easier the development and maintenance process, the better it is received by those involved.
3. The personnel involved, along with the technology used in the computer center, will change over time. The recovery plan must “survive” these changes, and therefore should be done in a manner that is as independent of them as possible.

The approach that we have taken in developing our family of disaster recovery planning products for computer center and business unit environments has been, to some extent, anti-consultant in nature. By that I mean that our plan development methodologies were developed independent of any PC or mainframe software and focus on the process of developing the plan itself. We have made the process as easy to learn and implement as we possibly could to expedite and simplify the development effort. We believe that with the right planning methodology in use, most end users can plan, manage and implement the recovery plan with minimal outside consulting assistance. We suggest that the user of the planning methodology select the best approach for collecting the information in an automated manner AFTER the development process is understood by all involved in the project. While our planning products have PC support modules available, they are separate from the base product and offer the user just one PC-based option among other alternatives that we suggest they evaluate.

We chose to separate the disaster recovery plan development methodology from the PC technology for a number of reasons:

1. The only thing clear about PC hardware and software technology is that it is unclear just what the life of any product is in today’s rapidly evolving world. I am not convinced that any one product will fit a majority of user environments, any more than we know what OS/2 Extended will mean to the databases in use today. Certainly the use of an SQL capability, compatible with the mainframe, is most desirable, but hardly here today. We have chosen, for today, not to complicate the disaster recovery plan development process by forcing a product decision at the outset of the project, when better options may evolve as the project progresses.

2. When an organization makes a commitment to integrate a PC software product in the disaster recovery plan, they are making a number of commitments. If the PC product is already in use by many people in the organization, there is a shorter learning curve as the plan itself becomes one more application. However, that is seldom the case. When a PC product is integral to developing, maintaining, and possibly using the plan, the organization must have a number of people very knowledgeable in the use of the PC product for the disaster recovery application. This requires training, cross-training, and the inevitable problems encountered when software and hardware technology changes.
We recommend that an organization that has already committed to standardizing on a PC database package take a hard look at using that product for disaster recovery. With personnel skilled in the application-generation process of most PC database products, developing the database structure, screens, and reports is not a lengthy process, and it leaves the user with a package that can be supported by their own staff. If a PC package is brought in and implemented for disaster recovery only, and the source code for that application is not available, the end user has created an exposure in the plan development process that for most is unacceptable.

3. People are people, and while some packages have much “gee whiz” about them, reality sets in when the Disaster Recovery Coordinator must get things done. Often the capabilities of a software package are useful only to a proficient programmer, a resource not usually available to those responsible for the project. Look very carefully beyond the demonstration provided by the salesman and fully understand how the product will work in your environment: buyer beware!
It is our experience that few of the elements required in the recovery plan benefit from a relational database. Beyond the personnel data (name, address, title, phone number, etc.), not much in the plan really requires the data element management capability of the DBMS. You should understand the actual requirements before committing the entire project to a certain software/hardware solution.

4. Seldom does the end user have the developers of the methodology and/or software available during the project to make customized changes to the recovery plan being developed. Without that ability, many organizations must adapt to the software, complicating the development process and compromising the plan’s usability.

In summary, I would reiterate that the best technology for collecting, reporting, and maintaining a disaster recovery plan is the one selected during the plan development process, after those involved understand the information and tasks required of them. For most it will be a combination of information already maintained on some automated system(s), possibly in conjunction with some information maintained on a PC. All plans must keep some of the information in hardcopy form (contact lists, plan action steps, etc.) for when “the call” comes. At that time you won’t be able to count on the technology to get you started; in reality, it will be information on paper that you use to initiate the recovery plan. Don’t get tangled up in the technology or you will never get to the real issue: having the needed elements in offsite storage and the recovery plan in place, testable and maintainable!


Written by Jim Mannion, M-Plus Consulting Service

This article was adapted from Vol. 2, No. 1, p. 52.

“Electronic Vaulting” is a term that has recently joined the armada of buzzwords associated with disaster recovery. The intent of this article is to remove some of the mystery shrouding this evolving concept and to explain the capabilities and benefits that apply to both contingency plans and production data center operations.

How Does it Work?

Envision a scenario where online mass-storage operations (such as tape subsystems) are relocated from the computer room to a dedicated, secure facility within a commercial or internal disaster recovery center. Although the devices are located away from the production CPU complex, advanced channel extension and communications technology allows them to operate as if they were still resident at the data center. Systems and application programs use the devices at the extended data center without change and continue to perform in a manner consistent with former production operations. Programs that manage system storage, create dataset or database backups, or preserve critical real-time update transactions now direct their output to the recovery site instead of the production center.
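As a conceptual sketch of that redirection, the Python fragment below sends a backup stream to a vault at the recovery site rather than writing it to a local device. The host name, port, and framing are hypothetical, and real channel extension operates in hardware beneath the application layer rather than in application code like this.

```python
import socket

# Hypothetical vault endpoint at the recovery center.
VAULT_HOST, VAULT_PORT = "vault.recovery.example", 4000

def write_backup(data: bytes, remote: bool = True):
    if remote:
        # Electronic vaulting: the output stream lands on the extended
        # "device" at the recovery site.
        with socket.create_connection((VAULT_HOST, VAULT_PORT)) as s:
            s.sendall(len(data).to_bytes(8, "big") + data)
    else:
        # Former behavior: the same stream lands on a local device in the
        # production computer room.
        with open("local_backup.dat", "wb") as f:
            f.write(data)
```

The point mirrored here is the one the paragraph makes: the programs producing the backup are unchanged either way; only the destination of their output moves off-site.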