
Many companies have a significant investment in large mainframe systems and communications networks. Over the past five to ten years, major efforts in contingency planning have provided reliable, secure and automated disaster recovery plans for the corporate data stored on the mainframe platform.

However, just as we see the emergence of mature, well tested plans for mainframe recovery, much of the critical data in large organizations is moving to the personal computer. Will this require years of effort to develop totally new processes for the backup and recovery of PC data? Or, can we benefit from past efforts and use the mainframe for disaster recovery? This article will explore the need for effective backup of all the important data across the organization and a new solution to enhance existing disaster recovery plans to cover valuable data stored on micro computers.


Every company, large or small, private or public, operates under a basic and common business platform. This platform, simply stated, views each organization as a combination of different areas that interact together. These areas have sets of critical functions and activities that must be performed to ensure the continued viable operation of the organization. To perform these functions, resources of various kinds are needed. However, these resources face daily events such as natural disasters, hostile activities, human errors, equipment malfunctions, and so on. These events undermine the critical resources on which your business depends. With varying severity, they affect the availability, integrity, and confidentiality of those resources.

The basic business needs are to clearly understand the critical functions and resources, and to select the cost-effective strategies that best deal with the impact of these events. While some risks are generally dealt with effectively through insurance and physical security measures, other major technological risks, such as information processing risks, are generally handled inadequately, if at all.


Surveys of North American businesses have consistently shown that most organizations have a significant level of computerization and are highly dependent on information technology, but have not adequately addressed information control issues. This illustrates the dramatic advances made in information technology, and the lag in addressing the risks posed by the heavy dependence on computers for business decisions. If your organization is to be resilient to events which threaten profit maximization, growth, and successful functioning, all critical data processing resources must be adequately protected. To do this requires management programs and appropriate tools that are integrated across the organization, balancing preventive, detective, and corrective strategies against impacts of events and exposures. Management strategies can only be effective when the controls are established to cover all critical data, the data that produces information necessary to run the organization on a daily basis.


Five or ten years ago, contingency planning projects were often regarded as a dark cloud that everyone wished would just disappear. Pressure from auditors and security officers often succeeded in establishing some recognition of the need for a contingency plan; however, in most cases such projects remained fairly low on the priority list. Some executives continued to ignore the fact that their organizations had become extensively dependent on data processing. Others continued to refuse to believe that a disaster could actually occur, leaving their data processing resources unavailable or drastically degraded for an extended period of time. With this perception, many executives continued to regard contingency planning projects as an unnecessary expense, a waste of time, and an interference with the schedules of daily activities and operational projects.

Over the last decade, a large number of reported disasters have directly caused severe financial impact to many businesses that were unable to recover their critical functions within an acceptable time frame. This has led to the recognition that contingency planning projects can no longer be ignored and must carry top priority. The most effective plans continue to be centered on the fact that no matter how well prepared an organization is for a disaster, recovering and restoring its critical resources would be quite costly. The best solution is to prevent, or detect at an early stage, any event that negatively impacts critical resources.

A typical data processing contingency plan consists of a four-step approach:

  • Introduce the scope of the project
  • Define critical functions and assess events and exposures
  • Identify and implement prevention, detection, and response strategies
  • Identify and implement recovery, restoration, or replacement strategies
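The four-step approach above is sequential: each phase depends on the one before it. As a purely hypothetical illustration (the phase and task names below are paraphrased from this article, not from any formal methodology), the sequence can be sketched as a checklist structure:

```python
# Hypothetical outline of the four-step contingency planning approach;
# phase and task names are illustrative only.
PLAN_PHASES = [
    ("Scope", ["Introduce the scope of the project"]),
    ("Assessment", ["Define critical functions",
                    "Assess events and exposures"]),
    ("Protection", ["Identify and implement prevention strategies",
                    "Identify and implement detection and response strategies"]),
    ("Recovery", ["Identify and implement recovery, restoration,"
                  " or replacement strategies"]),
]

def next_phase(completed_tasks):
    """Return the first phase that still has an unfinished task, or None
    when every task in every phase has been completed."""
    for phase, tasks in PLAN_PHASES:
        if any(task not in completed_tasks for task in tasks):
            return phase
    return None
```

For example, with no tasks completed the plan is still in the "Scope" phase; once scoping is done, `next_phase` moves on to "Assessment", reflecting the step-by-step nature of the approach.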

In almost all cases, the scope of contingency planning was limited to the mainframe data processing operation which was where all the critical applications were located. Contingency planning activities spanned the areas of security (physical and logical), management controls, back-up procedures, maintenance procedures, documentation and audit trails.


While the contingency planning activities for the central (mainframe) computers were being formulated with proper controls and checklists, micro computers were being installed throughout the organization, and critical information started to appear on a variety of platforms. Micro computers have become entrenched in many facets of the business environment and have evolved from stand-alone productivity tools into essential resources that play a significant part in supporting critical functions.

Many organizations moved quickly to distributed processing with micro computers, work stations, and local area networks handling critical and sensitive data.

The flexibility and simplicity of most micro computer software enhanced their popularity and encouraged the development of many business applications within micro computer environments with significant shifts from central (mainframe) processing. Managers are now faced with the dilemma of how to maintain an optimal balance between the flexibility and freedom brought forward with the use of micro computers and the need for on-going controls and operational efficiency.

In light of the rapid spread of micro computers, organizations had to reassess their strategic systems plans and rethink their contingency planning process. Many feared that all their investment in mainframe controls would no longer provide the anticipated returns and that new strategies would have to be introduced. To some, the use of micro computers became part of the problem rather than part of the solution. They either shied away from taking advantage of micro computer benefits or totally ignored the need to expand the scope of controls and contingency planning to cover corporate-wide computer usage encompassing micro computers and work stations. There are several causes for the negative and inconsistent approaches to micro computer usage. These include:

  • Lack of recognition that critical applications are no longer centralized in the main frame environment and that many end user applications support strategic operational functions
  • Inconsistency among business units in their approach to controls within micro computer environments
  • Unclear responsibility on the part of the central Information Services department: “out of sight, out of mind”
  • Lack of separation of duties in the micro computer application development environment and limited awareness of the need for controls
  • Reluctance to accept the fact that micro computer environments face the same events and impacts as a main frame environment: back to square one in selling the need for contingency planning
  • Difficulty in maintaining coordination, communication and overall coverage among all micro computer environments for security, back-up and recovery activities
  • Difficulty in justifying another contingency planning project for micro computers after having expended significant dollars in establishing a main frame contingency plan.


In addressing the above issues, company executives wanted to introduce into their organizations a simple, globally integrated solution for backing up and recovering critical micro computer data. On recent contingency planning assignments, we have come across companies such as Amoco Canada, Union Pacific, and PanCanadian who have shown leadership in this area. Their solution was to implement a new product called HARBOR, introduced by New Era Systems Services Ltd. With this system they can automatically back up and restore PC data to their MVS mainframe, which means that PCs and workstations are provided disaster recovery protection using existing resources and plans. This global control provides the micro computer environment with many benefits, such as:

  • Ability to take advantage of the power of the mainframe to perform critical back-ups
  • Automatic off-prime scheduling of back-ups
  • Ability to centralize back-up and recovery while maintaining decentralized data and applications
  • Simplified and consistent process in identifying and securing critical applications, which is the basis for successful contingency planning efforts
  • Ability to take advantage of pre-established back-up and recovery procedures for the mainframe, such as off-site storage and hot-site recovery
  • Global data management and classification
  • Consistent access control and multi-security layers.

At Amoco Canada, the implementation uses their MVS host environment to offer backup to DOS and OS/2 workstations. This includes scheduled nightly backups, data compression, incremental backups, and a unique function that stores only one copy of common PC files to reduce data volumes. Audit trails and detailed information on backup files are provided throughout the process. In the future, Amoco Canada will expand its backup system to include Novell and UNIX systems and to provide automatic virus protection.
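Storing only one copy of common PC files is what is now called single-instance storage, or content deduplication: when hundreds of workstations back up identical DOS system files, only one copy needs to reach the mainframe. A minimal sketch of the idea, assuming a content-hash scheme (the article does not describe HARBOR's actual mechanism, and all names here are illustrative):

```python
import hashlib

def backup(files, store):
    """Back up files, keeping each unique content exactly once.

    files: {path: bytes} for one workstation's files.
    store: {content_hash: bytes}, shared across all workstations.
    Returns a catalog mapping every path to its content hash, so
    any individual file can still be restored even though identical
    files (e.g. common system files) are stored only one time.
    """
    catalog = {}
    for path, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)  # common content stored only once
        catalog[path] = digest
    return catalog

def restore(catalog, store, path):
    """Recover one file's contents from the shared store."""
    return store[catalog[path]]
```

With two workstations each holding an identical `command.com`, the shared store ends up with a single copy of that content, yet either workstation's file can be restored by path.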

With a solution like the one at Amoco Canada, companies can now safely put in the hands of users the tool that provides the ability to take advantage of micro computer flexibility and ease of use while maintaining reliance on proven controls. An investment in such a system introduces immediate payback with tangible, ongoing, and incremental benefits.

Fadi J. Nasr, B.Sc., CISA, CDP, CIA, is a Managing Director of CRISP Management Ltd., a firm specializing in Risk Management, Contingency Planning, and Auditing.

This article adapted from Vol. 5 #4.