Alternative Sites
Determining the best form of alternative processing for your data center, should it become incapacitated, is a difficult task, and it is never completed. The conditions that influenced yesterday’s alternative processing decision have changed! Is that change sufficient to cause the alternative processing question to be reevaluated? YES!
The first and most important aspect of the alternative processing decision is that it must be fully documented, ideally in some form of matrix that assigns a value to each of the factors considered. Then, as new factors are added or the value of an original factor requires adjustment, it is a smoother (and swifter) process to measure the impact of those changes upon the current strategies.
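The weighted matrix described above can be sketched in a few lines. The factor names, weights, and scores below are purely illustrative assumptions, not values from the article; the point is that rescoring after a change is a single rerun.

```python
# Hypothetical decision matrix: weights and 1-5 scores are invented for
# illustration. When a factor changes, adjust one number and rescore.
FACTORS = {
    "recovery_time": 0.35,
    "annual_cost":   0.25,
    "capacity_fit":  0.20,
    "test_access":   0.20,
}

STRATEGIES = {
    "hot_site":   {"recovery_time": 5, "annual_cost": 1, "capacity_fit": 4, "test_access": 4},
    "warm_site":  {"recovery_time": 3, "annual_cost": 3, "capacity_fit": 4, "test_access": 3},
    "reciprocal": {"recovery_time": 2, "annual_cost": 5, "capacity_fit": 2, "test_access": 2},
}

def score(strategy: dict) -> float:
    """Weighted sum of factor scores; higher is better."""
    return sum(FACTORS[f] * strategy[f] for f in FACTORS)

# Rank the strategies under the current weights.
for name, s in sorted(STRATEGIES.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(s):.2f}")
```

Reevaluating then means editing one weight or score and rerunning, rather than rebuilding the analysis from scratch.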
Whatever method you currently utilize or choose to employ, you must be prepared to reevaluate. Many feel that either Hot or Captive sites (see Table 1, Definitions) are the pinnacle. This simply is not the case. While they are the most responsive, several companies have switched from one to the other, and other organizations have downsized their needs by shifting less critical and non-critical workloads to less responsive (and less costly) alternative processing methods.
Do reciprocal agreements work? The answer to this question is a definite yes--maybe! To see if they will work for you, first ask yourself the bigger question--what do you mean by “work”?
In my conversations with people in a variety of business endeavors, one of the most frequent responses I hear is “We’ve got a reciprocal agreement with another (bank, savings and loan, credit union, business, etc.), we’ve tested it, it works, and it satisfies the regulators, so we’re not interested in a formal disaster recovery or hot-site contract.”
On the other hand, I sometimes hear this response: “Yes, we have a reciprocal agreement but we know that ultimately, it is not our best option for alternative backup.”
The most common method of “testing” in the reciprocal agreement world appears to be transporting a tape or tapes to the reciprocal site after hours, loading the database tapes onto the other computer, and determining if they can be read. If so, the “test” is declared a success and recorded as such to “prove” to the regulators that compliance has been achieved. This is a relatively easy and definitely inexpensive answer to the disaster recovery quandary. The harsh reality of this arrangement is that the disaster recovery plan is worth nothing more than an attempt to satisfy regulators and convince ourselves that we do have a plan.
There is a wide variety of companies and software programs providing disaster recovery solutions with a wide range of features and costs. Whatever option a firm chooses, it needs to take into consideration such factors as data communications, personnel, equipment replacement, facility repair or replacement, etc.
If I were a party to a reciprocal agreement and had the privilege of “hosting” my reciprocal partner, I can imagine many uncomfortable feelings that would arise. The idea of having two or three unknown people in my computer facility from eight o’clock in the evening until three in the morning would be very unsettling. And because I would have one of my own people there during that time to assuage my uneasiness, I would have to work shorthanded.
Each night my database of customer files, parameters, and system software would need to be unloaded and reloaded. The additional time required to do this is significant but relatively minor compared to the possibility of something going wrong during the load/unload process. I must back up ALL of my software and files daily to be completely covered.
Another point of vulnerability in a reciprocal agreement is the possibility of my reciprocal partner upgrading his hardware or software, going to another hardware or software vendor, merging with another institution and changing to the acquirer’s system, or even going out of business. If any of this comes to pass, I have to go find another partner and conduct another “test.”
Imagine that I am using the computer during the day and my partner is using it at night. If we have an equipment breakdown, we are both going to be in a very difficult situation. Under normal circumstances, I may be able to work around such a breakdown if an engineer can’t respond quickly. However, with two firms on the system for twenty hours a day, a routine breakdown becomes a crisis, and operating for these extra hours may itself bring about more equipment failures. Each partner in a reciprocal agreement will also incur significant costs in the event of a disaster: overtime for the staff members who sit up with the reciprocating organization, extra personnel costs to cover for those unavailable because of night work, additional maintenance charges from your hardware vendor for after-hours operation, and additional utility costs for air conditioning, light, heat, and power.
I can easily envision the disaster situation causing a lot of frayed nerves and short tempers among the staffs of the reciprocal partners as time goes on. Conflicts over time windows, supplies usage, equipment breakdowns, or simply general resentment can all begin to wear as days stretch into weeks.
A disaster recovery center or hot-site location should be able to assist you in avoiding these drawbacks found in a reciprocal agreement. With the possible exception of a multiple disaster, you should have the equipment and facility to yourself. This fact alone should eliminate a great deal of the problems found with the sharing aspect of reciprocal agreements. In some cases, the disaster recovery center may provide assistance by having personnel available to load tapes, change printer paper, etc. This would allow you to operate the disaster recovery center computer remotely from your home city and keep your key data processing personnel at home to help rebuild your data center. This would be very difficult, if not impossible, for a reciprocal partner to provide.
Inexpensive reciprocal agreements are a poor alternative to a dedicated disaster recovery center. As long as the partners are aware of the risks and limitations to such an agreement, they can get by. However, if their goal is a truly effective disaster recovery plan, they need to go further.
It is clearly becoming the trend that regulators and insurance companies are going to demand more from companies. It may very well be that insurance carriers will soon insist on having disaster recovery plans on file in order to obtain business continuation coverage. Regulators will also probably insist on a more significant test than the mere process of reading a tape. Disaster recovery planning is easily pushed down to a low priority when the day to day deadlines draw nigh, but don’t ever believe it will go away!
Richard Snyder is the Director of Marketing at Oklahoma Hotsite, Inc.
This article adapted from Vol. 3 No. 4, p. 54.
Business Continuity is the underlying objective of all Contingency Plans; however, the requirements for Business Continuity vary by application within an organization, as well as from company to company.
Solutions employing electronic technology range from immediate synchronized recovery of “most critical” applications to recovery of “less critical” applications in days or weeks. In all cases, data integrity is the first and foremost concern.
It has long been recognized that recovery begins with corporate data. For most organizations, the vital records program has three key components:
- Archive Data
- Operational Data
- Transaction Data
Off-site tapes provide a basis for recovery to some prior point in time by providing access to Archive and Operational data. On-site transaction “Journals” or “Logs” provide protection in the event of system or disk failure at the primary processing site. Neither, however, provides for the protection of daily transactions in the event of a serious facility outage.
CONTINUOUS AVAILABILITY SERVICES
Solutions are being implemented today which employ communications and systems technology to provide data integrity and reduced recovery time.
ELECTRONIC VAULTING, the bulk transfer of backup data over communications facilities, can simplify the backup process and provide more timely offsite “Operational” data protection. This process can be facilitated with a “host-to-host” or “channel extension” connection and typically serves to reduce, but not eliminate, the exposure to loss of data.
REMOTE JOURNALING delivers realtime data integrity by capturing and transmitting the Journal and Transaction Log data offsite as it is created. This solution utilizes a software product known as ENET, which interacts with standard database journal and logging facilities in IMS, CICS, IDMS, CPCS and other DBMSs. Just prior to the writing of the Journal, ENET, utilizing a user exit and Cross Memory Services, copies the Journal record into its own address space and immediately lets the Journal “write” continue. ENET then transmits the Journal data offsite using SNA as a standard VTAM application. This technique requires a “host-to-host” communications link.
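The intercept-and-forward flow described above can be sketched in miniature. This is an illustrative stand-in (the class and function names are invented), not a description of ENET’s actual mainframe implementation, which uses a user exit and Cross Memory Services rather than threads and queues.

```python
import queue
import threading

class RemoteJournal:
    """Illustrative sketch: copy each journal record just before the local
    write completes, then ship the copy off-site asynchronously so the
    database's own write path is never delayed by the network."""

    def __init__(self, transmit):
        self._outbound = queue.Queue()
        self._transmit = transmit  # callable that sends one record off-site
        threading.Thread(target=self._drain, daemon=True).start()

    def on_journal_write(self, record: bytes) -> bytes:
        # The "user exit": grab a copy, queue it, let the write continue.
        self._outbound.put(bytes(record))
        return record

    def _drain(self):
        while True:
            rec = self._outbound.get()
            self._transmit(rec)
            self._outbound.task_done()

# Usage sketch: collect what would have been transmitted off-site.
received = []
rj = RemoteJournal(transmit=received.append)
for rec in (b"TXN 001 DEBIT  100", b"TXN 002 CREDIT 100"):
    rj.on_journal_write(rec)   # local journal write proceeds immediately
rj._outbound.join()            # demo only: wait for the background sender
```

The essential property is that the copy step is cheap and synchronous while transmission is asynchronous, so the primary system’s journal writes are not held up by the off-site link.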
In addition to being the only realtime data protection product available today, Remote Journaling provides the basis for in-place “Forward Recovery” procedures and for Database Shadowing. Additionally, there are no application changes and minimal communications requirements involved with the use of this technology.
Used in conjunction with traditional alternate site recovery programs, Remote Journaling enables a customer to validate the complete recovery process including “Transaction” data recovery without disrupting the user community.
AN INTEGRATED SOLUTION
For some companies, data integrity is only the tip of the iceberg. Their requirements include recovery of “most critical” applications in hours or even minutes. Even so, these applications represent a relatively small percentage of the total recovery requirement.
An integrated solution is required to maintain price/performance of the recovery program and adds to the equation:
DATABASE SHADOWING, which reduces recovery time by staging the database restore and roll-forward process, enabling recovery within hours.
STANDBY SERVICES, which provide recovery of most critical applications in a matter of minutes and GUARANTEED access to an alternate processor.
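The staged restore-and-roll-forward idea behind Database Shadowing can be illustrated with a toy key-value store. The data structures and operation names below are invented for illustration; real products operate on DBMS journal records, not Python tuples.

```python
# Toy roll-forward: restore a shadow copy from the last full backup,
# then replay journal records so the shadow reflects all transactions
# captured since that backup. Staging this work ahead of time is what
# lets Database Shadowing cut recovery to hours.
def roll_forward(backup: dict, journal: list) -> dict:
    shadow = dict(backup)              # stage 1: restore from backup
    for op, key, value in journal:     # stage 2: replay journal records
        if op == "put":
            shadow[key] = value
        elif op == "delete":
            shadow.pop(key, None)
    return shadow

backup = {"acct:1": 500, "acct:2": 200}
journal = [("put", "acct:1", 400), ("put", "acct:3", 50), ("delete", "acct:2", None)]
recovered = roll_forward(backup, journal)
# recovered now reflects every journaled transaction since the backup
```

At disaster time only the unshipped tail of the journal remains to replay, which is where the hours-versus-days difference comes from.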
There are two key elements in all cases: all of the Operational and Transaction data is available, and the solutions are integrated into the hot site recovery capability. The Continuous Availability Services enhance and refine the typical disaster recovery process by adding flexibility, data integrity and reduced recovery time to subscribers.
As the business world advances the use of technology for competitive advantage, traditional disaster recovery programs will have to evolve to keep pace. While faster recovery times and automated off-site backup are tempting targets for disaster recovery program enhancements, backup of transaction data represents the biggest exposure for most companies.
David Nolan is the Assistant Vice President of Marketing and Sales at Comdisco Computing Services Corporation.
This article adapted from Vol. 3 No. 3, p. 19.
The concept of disaster recovery, clearly a topic coming to the forefront of critical issues for enlightened management, is nonetheless a fairly recent phenomenon. As a result, its place in assuring a company’s continued health is often misunderstood. A good example of this misunderstanding is the frequent comparison of hot-sites to insurance policies. While a hot-site subscription is a form of risk management, for many reasons it is not merely an insurance policy. Among the reasons are:
- Insurance policies are largely passive in nature; hot-site subscriptions are proactive.
- Insurance policies are fundamentally actuarially-based and function by paying covered losses; hot-site subscriptions have, by definition, a business continuation orientation.
- Insurance policies may be able to compensate for management’s personal liability exposure/losses, while hot-site participation has, to date, allowed management to avoid the personal liability issue.
The evaluation of hot-site backup processing locations, capabilities, and contract terms and conditions is perhaps one of the most crucial aspects of completing a disaster recovery plan. Yet hot-site evaluation and selection often suffers the lowest priority of any of the contingency planning efforts.
Many individuals, groups, or teams have spent countless hours identifying the crucial processing requirements of your company. All critical files and programs have been identified and documented. Fail-safe techniques have been implemented. Off-site storage of critical file backups has been contracted and utilized. There remains one major task to complete the contingency plan: the selection of an appropriate place to process data and carry on the company business in the event of a major interruption or disaster. This effort may determine the corporate viability (survival) of your company.
Many data processing professionals and corporate executive officers face difficult business decisions that will affect the cost of providing information processing disaster contingency plans in the future. The time has come for managers to take a proactive approach toward managing one of their company’s biggest assets — computerized corporate data. Twelve disaster contingency processing options are considered below.
When analyzing this list of options, each company must balance immediate disaster contingency expenses with the strategic planning goals of the company. Many corporate strategic plans may include such considerations as market flexibility, corporate positioning for acquisition or divestiture, real estate investment and certain tax strategies such as investment tax credit or asset depreciation or appreciation.
Due to the inherent difficulty — and in some cases, virtual impossibility—of quantifying financial exposures resulting from a data center disaster, an analysis of the following options should aim at providing a logical business solution that includes estimated costs, quantifiable benefits and addresses the strategic goals of the corporation as well.
As more businesses realize the importance of contingency planning and off-site data storage, the question of how to find a proper media protection facility must be answered. The facility chosen should have a comprehensive program which offers more than simply storage. The following is a listing of features which you should look for in choosing a company to store your media:
First and foremost, you should look for a company with a strong, well-built vault. The best types of vaults for media storage are constructed of steel and concrete. The combination of these two materials provides dual reinforcement, making the vault less likely to collapse in the event of a natural disaster.
Surveillance and Categorization
In addition to physical security, your media should be well guarded. No matter how strong any vault is, there is no substitute for human surveillance. Your data should be watched by trained guards and monitored by closed-circuit video cameras. Also, your data should be categorized as to its importance. This process allows classified or highly sensitive documents to be handled with extra care, reducing the possibility of unwarranted access or employee tampering.
Data tapes are highly susceptible to atmospheric extremes. The climate within a storage vault should be maintained between 60 and 70 degrees Fahrenheit to prevent tape damage. Humidity must be regulated as well, with levels held between 40% and 50% to prevent harmful condensation. Another atmospheric hazard that must be considered when storing data is dust. Dust particles can cause false readings or even deform your tapes. The vault you choose should be equipped with a ventilation system that eliminates impurities from the air. With the proper storage surroundings, you are assured of the best possible reading when information is retrieved from your backup files.
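The environmental ranges above lend themselves to an automated check. The function below is a minimal sketch using exactly the thresholds quoted in the text (60 to 70 degrees F, 40% to 50% relative humidity); the function name and message formats are invented.

```python
def vault_climate_ok(temp_f: float, humidity_pct: float) -> list:
    """Return a list of violations of the ranges quoted above:
    60-70 degrees F and 40-50% relative humidity."""
    problems = []
    if not 60 <= temp_f <= 70:
        problems.append(f"temperature {temp_f}F outside 60-70F range")
    if not 40 <= humidity_pct <= 50:
        problems.append(f"humidity {humidity_pct}% outside 40-50% range")
    return problems

print(vault_climate_ok(65, 45))   # within both ranges: []
print(vault_climate_ok(75, 35))   # two violations reported
```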
Modern Fire Protection
One of the biggest threats to a data bank is fire. When combated with conventional methods--water or foam--media cannot survive. Quite often, in the case of small fires in particular, water and smoke can cause more damage than the actual fire. When looking for an off-site storage facility, always look for a company which uses Halon gas for fire protection. Halon gas eliminates the oxygen necessary for the fire to burn, killing the flames while leaving your data unharmed. This is one feature which you should absolutely demand when storing your backup tapes.
Look for a company which will cater to your needs. Because of the importance of your data files, your off-site shelter should offer 24-hour access, courier service, free photocopying, notary services, conference rooms and a convenient location. The combination of these elements will make storage and retrieval easier, minimizing your down time.
As anyone who handles media knows, data tapes are fragile items. To get the best possible readings, tapes and cartridges should be cleaned after every 8 to 10 uses. This is necessary because dust and other foreign materials that are magnetically drawn to media can cause permanent errors. Amazingly, microscopic particles can tear tiny holes in data tapes, rendering the damaged area useless.
Until recently, tape cleaning was a costly, time-consuming chore. However, modern technology has made proper tape maintenance more practical. In a process similar to shaving, particles can now be removed from tapes, then gently whisked away, preventing recontamination.
Companies spend millions on computers and data records, yet they often neglect basic tape maintenance. Tape cleaning is an affordable necessity. Logically, any prudent executive should have a standardized maintenance program.
By following these simple guidelines, you should find it easier to locate the right off-site home for your backups. Don’t be fooled by companies claiming to specialize in media protection who don’t use the aforementioned techniques.
Remember, the cost of storing and cleaning your media is an inexpensive form of insurance that you can’t afford to be without. So find a facility which offers you maximum protection and start securing your business’ future.
This article adapted from Vol. 1 No. 1, p. 17.
Whether it's lease agreements stashed in file cabinets or mailing lists on a computer file, your company's records are the heart and soul of your business.
A study by the University of Minnesota concluded that 93% of businesses that lost their data center for 10 days or more filed for bankruptcy within one year. Of those businesses, 50% filed for bankruptcy immediately.
These businesses learned a valuable lesson concerning data and record storage: It's a vital part of business.
'Why take the risk with your computer data?' asked Bill Cannon, president of The Safe Deposit Co. 'Your computer media is far more valuable than the company delivery truck that is usually insured with a $500 deductible.'
'One would not imagine leaving the company plant unprotected by failing to buy insurance to cover unforeseen disasters that could destroy all important physical assets.'
For years, companies attempted to store documents and computer disks internally, but the costs have been high.
Steven Davis, general manager of Data Safe Storage, Inc., said most of his clients have stored their files internally at one time, but they opted for an outside firm when the costs became too high.
'Most companies just run out of space,' Davis said. 'We helped a client that owns a factory that had a 1,000-square-foot room filled with paper files. Once they let us keep their files, they were able to utilize that space by turning it into a production room.'
Thanks to today's technology, some outsourcing firms can take paper files, scan them onto a computer disk and thereby turn a company's large file cabinets into small computer disk holders, making storage much simpler.
No matter what documents need to be stored, professional storage companies have become popular throughout the country. Tragedies like the flood of 1993, the World Trade Center and Oklahoma City bombings, and disasters such as tornadoes and earthquakes have forced companies to seek out record storage companies.
Davis said even firms with two employees have embraced the idea of record storage. 'We have clients that have only two personal computers,' Davis said. 'Why? Because those computers are running the business. These companies can't afford to lose that information.'
Most of the magnetic files stored with professional storage firms are company backup files, according to Davis. 'Storing these backup files in a safe place is crucial,' he said. 'If a disaster strikes, a company can get hardware quickly, but the software took years to develop and will take years to build back up.'
When looking for the right storage company, look for the following:
- A secure site. Normally a building of steel and/or concrete construction protected by a state-of-the-art burglar alarm system operating around the clock is essential. When it comes to storing magnetic media, stringent environment controls are also essential.
'It is vital to minimize temperature fluctuations and maintain a temperature of 68 to 72 degrees,' Davis said.
- Access. Depending on your firm's needs, 24-hour, 365-days-a-year access might be needed.
'Access to files for routine and emergency service should be defined before commencement of service,' Davis said. 'Rules must be established as to who can have access to files and under what conditions.'
- Pickup and delivery. One of the main reasons for the growth of record centers can be directly attributed to quick, convenient pick-up and delivery that most record centers offer.
- Location of facility. High risk areas such as flood zones and tornado lanes should be avoided. The distance between your location and off-site storage has to be far enough apart to protect you against natural disasters. Yet proximity is critical, allowing you convenient and timely retrieval of data. Most companies recommend a distance of at least five miles.
Ron Ameln is a staff writer for the St. Louis Small Business Monthly. Reprinted by permission of the St. Louis Small Business Monthly.
Solving Storage Management Issues for Today's Open, Networked Computing Environment:
Security Standards Top the List
The '90s data explosion and open systems revolution have spread mission-critical data across the enterprise in a heterogeneous, networked computing environment. As businesses increasingly rely on information as a competitive advantage, demand for non-mainframe, secure, network-based storage is escalating, expected to grow more than 30 percent in '96 to a level now totaling $6.1 billion. These trends, combined with the emergence of data warehousing, data mining, and ubiquitous intranet- and Internet-type networks, have forced companies to consider how to provide reliable, fault-tolerant, and comprehensive protection from data loss.
This dilemma requires an economical network storage management solution, preferably network-attached rather than the traditional storage server. The device connects directly to the network and does not require a separate server or network operating system. As a result, the network-attached solution delivers:
- optimized performance with easy, non-disruptive installation
- pre-configured and pre-tested components in one box supporting multiple environments
The network-attached approach compares favorably to the storage server which must connect to the network through another general purpose server and share resources with other applications. The new configuration typically results in streamlined connectivity with fewer components to administer.
Marquette Medical Systems Relies on Storage Management System To Protect Its Customers Through Lengthy Product Lifecycle
While storage management is critical for any business, when people's lives depend on a company's software and information resources, there simply can be no margin for backup or restore errors or delays. In short, if an information system needs to be rebuilt for any reason, at any time, and a person's life depends on that system, then only a storage management solution that can quickly, easily, and accurately access and restore the needed files with absolutely no delay, and without impacting other day-to-day operations, is acceptable.
At Marquette Medical Systems, we rely on just such a solution. The patient diagnostic and monitoring systems we manufacture are essential for the treatment and care of patients. As a result, the software code that drives these systems is the real essence of our business. If we lose this code, or any part of it, not only do we lose our business, but lives may be at stake as well.
To ensure complete protection of software resources, as well as the ability to reliably restore them in the event of a mishap, we upgraded the company's entire storage management solution. This upgrade was necessary because our business had expanded so rapidly that the existing architecture was no longer adequate to handle the backup throughput we required.
In fact, that early architecture was so overloaded that completing a monthly full backup of 20GB took three days, during which time end-users had no access to the storage management software and so could not do any file restores that might have been required. With our new system, by comparison, the time it takes to complete a full backup has been slashed 88 percent, to just eight hours.
The net benefit of this performance boost is that we can now complete two full backups each month without affecting end-users whatsoever. In addition, because the system is so fast, our incremental protection has been dramatically increased through the use of level eight type incremental backups that encompass all system changes since the last similar backup. By completing one of these backups every three days, we have significantly reduced the number of tapes required to rebuild the system. With level eight incrementals we can now rebuild a partition with one or two tapes rather than the five or six that would otherwise have been needed. As a result, restores can be done much faster and with a lower level of risk.
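The tape-count benefit of level-style incrementals follows from how a restore chain is assembled: each backup at level N captures everything changed since the most recent backup at a lower level, so a restore needs only the last full backup plus the most recent backup at each successively deeper level. A sketch of that selection logic (the backup history below is hypothetical, not Marquette's actual schedule):

```python
def restore_chain(history):
    """history: list of (day, level) in chronological order; level 0 = full.
    Returns the minimal backups a restore must read, per classic dump-level
    semantics: start at the last full backup, then keep each later backup
    that is deeper than everything retained after it."""
    # Start from the most recent full backup.
    start = max(i for i, (_, lvl) in enumerate(history) if lvl == 0)
    chain = [history[start]]
    for day, lvl in history[start + 1:]:
        # A newer backup supersedes older ones at the same or deeper level.
        while len(chain) > 1 and chain[-1][1] >= lvl:
            chain.pop()
        if lvl > chain[-1][1]:
            chain.append((day, lvl))
    return chain

# Hypothetical month: one full (level 0), then level-8 incrementals
# every three days. A restore needs just two backups: the full plus
# the latest level 8, which covers all changes since the full.
history = [(1, 0), (4, 8), (7, 8), (10, 8), (13, 8)]
print(restore_chain(history))
```

Because each level-8 backup re-captures everything since the last lower-level backup, older level-8 tapes drop out of the chain, which is why one or two tapes suffice where five or six were needed before.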
Our new storage management architecture is based on NetWorker 4.2.5 for UNIX from Legato Systems running on a SUN Enterprise 3000 UNIX server. To achieve the bandwidth we required without bogging down the network, we have outfitted this server with five 100 megabit/second Ethernet cards and linked it to an ATL 6176 (ATL Products, Inc. Anaheim, California) autoloader equipped with 176 tape slots and six DLT tape drives, each having its own SCSI bus. Together, these storage management system elements provide a significant performance gain over our previous architecture which relied on an earlier version of NetWorker running on a SUN Sparc 10 server and an 8mm tape autoloader equipped with only two tape drives and 60 tape slots.
Despite the fact that we had been completely satisfied with the NetWorker product in our earlier architecture, and recognized that the limitations of the system were primarily hardware related, we still embarked on a comprehensive five-month, in-depth evaluation process before selecting our current system. The reason is that, with such a large investment at stake, we simply wanted to consider all options to be absolutely certain that we were selecting the best solution.
We began our evaluation process by implementing three different storage management software solutions. We then benchmarked performance parameters, compared feature sets, and considered end-user usability issues.
There were several key areas that led us to our final decision. For one, the product we selected supports the largest number of servers and clients, an important concern as we look to the future and the possibility of continued expansion through acquisitions. Since we cannot be sure what platforms might be used in these environments, we wanted to have the flexibility of a storage management solution that can support whatever hardware we might encounter in the future. Also, we felt that with its history of supporting so many disparate platforms, the vendor would continue to do so in the future as new hardware comes to market.
With our rapid growth and need to keep our storage management system optimized at all times, we also viewed the system's support of remote tape devices to be a significant advantage. With this capability, we have the option of supporting remote storage devices, located at other Marquette facilities, from our central operational headquarters here in Milwaukee.
Another critical factor for us in our storage management software selection was its ease of use for both system administrators and end-users. We did not want a system that required extensive technical training to administer, nor did we want users to be beleaguered with a complex process each time they wanted to restore files. With NetWorker's graphical user interface, our solution satisfies both these concerns. In fact, the system is so easy to use that end-users completed their own backups and restores with a process so intuitive that they required no training at all.
People's lives depend on our systems, so we must always stand ready to assist our customers with their software issues. Our approach to storage management comes down to this: if Marquette devices are in use, even if more than a decade has elapsed since they were installed, then we must support them. Only by having reliable access to stored information resources through a storage management solution that can effectively back up our growing volumes of data, and restore them if needed, can we meet this objective.
Mr. Goodman is a systems engineer for the Monitoring Engineering Division, Marquette Medical Systems, Inc., Milwaukee, Wisconsin.