Software To Save Your Skin: Using Software to Back Up and Recover Your Company's Data
- Published on October 30, 2007
The increase in client/server computing and distributed systems is causing an explosion in LAN disk capacity. Growing companies typically add 30 to 50% more storage to their networks each year, and newer image-based applications will accelerate this trend.
The cost of storing magnetic data is gaining the network limelight, as is the potential cost of lost data. Organizations without a backup and recovery plan for what may be their most precious asset are susceptible to the ravages of disaster, both natural and man-made. Major catastrophes over the past few years, such as Hurricane Andrew, the Midwest Flood of ‘93, the Los Angeles earthquake, and the World Trade Center bombing, have been front-page reminders that valuable information should be protected.
Concern is usually generated at the highest level of the corporation. The corporate officers of public companies could be held liable if their negligence resulted in data loss that affected profits.
Few sizeable companies are without some kind of data backup and recovery plan. However, there are plans and there are plans. At the weak end of the planning spectrum is the company whose management simply feels that a pinch on staff resources precludes developing and implementing a full-blown program. As a result, this type of company may issue a policy that employees must perform regular backups. But without documentation and accountability, this plan doesn’t have much chance of saving the corporate jewels. The sad fact is that failure of this plan will not be identified until significant data loss occurs.
Unfortunately, this “independent” approach also increases the workload of every employee who is responsible for their own backup. And without an administrator to control the process or be responsible for an ongoing strategy, the backup process is often at risk and backup media is not protected. An employee’s idea of safeguarding backup media could be storage in the trunk of a car or in a closet at home, where it is subject to many other kinds of all-too-familiar disaster.
In the middle of the plan continuum is management that directs a committee composed of representatives from various departments to devise a plan. Documented procedures are essential: the people who devise the plan may not be present when the recovery portion is executed.
Documentation doesn’t guarantee data recovery. Most solutions today entail data backup on site and most are not as automatic as they should be. When deciding on a location for housing data to protect for recovery, companies should not forget that disaster can strike anywhere, including data centers. That’s why a hotsite should be considered. If a hotsite is not a viable alternative, an onsite storage vault should be as secure as possible.
Companies at the strong end of the planning spectrum usually have a network backup and recovery plan with centralized backup ownership and administration. In the past, companies thought that the volumes of backup tapes they created would preserve their data. They learned too late that those piles of tapes couldn’t be read. Therefore, it’s a good idea to verify that tapes are readable. Beyond readability, restoration is the most critical part of the plan. The only way to ensure that recovery will work when needed is to test the process.
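One simple form of readability verification is to re-read each backup copy and compare it against a digest recorded at backup time. The sketch below illustrates the idea only; it is not any vendor's mechanism, and the function names are hypothetical:

```python
import hashlib
from pathlib import Path

def checksum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest by streaming the file in chunks,
    so even large backup files don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source: Path, backup: Path) -> bool:
    """Re-read the backup copy in full and compare its digest to the
    source's -- an unreadable or corrupted copy fails the comparison."""
    return checksum(source) == checksum(backup)
```

Re-reading the media in full is the point: a verification pass that merely checks a directory or catalog would not have caught the unreadable tapes described above.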
Tape verification is one aspect of the “fire drill” approach. In any procedure, practice is essential for ensuring that the documented process does what it’s supposed to do and that employees have been trained properly. There are various degrees of fire drills, some with warning and some without. But the organization must build its expertise in executing the network backup and recovery plan through practice.
Some organizations are looking for ways to simplify and improve control of network backup and recovery. They want to centralize both planning and administration but support distributed backup using a variety of different backup strategies, systems and media in different parts of the organization. A centralized body develops standards and implements them across the entire organization. A small centralized staff can cost-effectively manage the network backup requirement, and the standards it sets narrow the field of qualifying products on the market today.
Although standardizing a level of service may simplify initial planning and documentation, a uniform minimum requirement does not address diverse departmental and workgroup needs. A system that can be tailored to different departments within the organization best serves its diverse users.
Custom handling of applications on the client end and backup data on the server end are two areas where versatility of service is important. The concept that a system should offer the flexibility to handle data on PCs within one department differently from PCs in other departments may be expanded to vary data backup from application to application on PCs. A database application may need to be either shut down or placed in a special state prior to backup of the database file. After backup is complete, the application must be restarted. The process should be automated along with the actual backup.
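The shut-down/back-up/restart sequence described above can be automated with a simple wrapper. This is a generic sketch, not any backup product's actual interface; the three command lists are placeholders supplied by the administrator:

```python
import subprocess

def backup_database(stop_cmd, backup_cmd, start_cmd):
    """Quiesce the application, run the backup, then restart it.
    The restart runs even if the backup step fails, so the
    application is never left down because of a backup error.
    Each argument is a command list, e.g. ["dbadmin", "stop"]."""
    subprocess.run(stop_cmd, check=True)      # place app in backup state
    try:
        subprocess.run(backup_cmd, check=True)  # copy the quiesced files
    finally:
        subprocess.run(start_cmd, check=True)   # always restart the app
```

The `try`/`finally` is the design point: a failed backup should raise an error for the administrator's report, but only after the application has been brought back up.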
On the server side, it is advantageous for the system to offer customized handling of data backup files. This would allow mounting and dismounting of special media for specific departments or projects.
Consideration should also be given to the amount of flexibility in how the system backs up data, including full and incremental options, when and how frequently it does so, what data may be selected for backup, and options regarding storage media, tape rotation, and storage locations.
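A tailorable per-department policy of the kind described above might be modeled as a small data structure. The fields and defaults here are hypothetical, chosen only to illustrate the dimensions of flexibility the text lists:

```python
from dataclasses import dataclass

@dataclass
class BackupPolicy:
    """Hypothetical per-department backup policy: when full and
    incremental jobs run, what is selected, and how media rotate."""
    full_day: str = "Sunday"                        # weekly full backup
    incremental_days: tuple = ("Monday", "Tuesday",
                               "Wednesday", "Thursday", "Friday")
    include: tuple = ("*",)                         # patterns selected for backup
    exclude: tuple = ()                             # patterns skipped
    media: str = "tape"
    rotation: str = "grandfather-father-son"        # common tape-rotation scheme
    offsite: bool = True                            # copy stored at another location

    def job_type(self, day: str) -> str:
        """Which kind of backup job runs on a given day under this policy."""
        if day == self.full_day:
            return "full"
        if day in self.incremental_days:
            return "incremental"
        return "none"
```

Engineering might override `include` to cover only CAD directories and rotate to optical media, while administration keeps the defaults; the point is that each department gets its own instance rather than one uniform minimum.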
A single enterprise network with distributed subnetworks is likely to include a variety of hardware platforms: VAXes and midrange computers in manufacturing, OS/2 and DOS in administration, Macs and UNIX workstations for various user groups. Add a variety of departments with differing functions, procedures, and of course backup requirements, and you can begin to see why companies are overwhelmed by the daunting task of developing a backup and recovery plan.
The determination of where to store backup data is a major issue impacted by the physical nature of the organization. For remote locations with large volumes of data to back up, moving that data over a WAN to a centrally managed backup program can be tedious and time consuming. Backing up locally at the remote site is an alternative that eliminates the need to move backup data over the WAN, while centralized administration can still eliminate the need for a dedicated resource at the remote office to oversee the process. This alternative may come in handy when today’s plan falls short in six months as backup data volume exceeds capacity.
The experts point to a few key criteria that a backup and recovery solution should meet. First of all, it must be an automated, multi-vendor, multi-protocol network backup and recovery solution in order to support a variety of operating systems and platforms (including proprietary and UNIX) that comprise current client systems and adapt to the computing environment as it changes. These factors eliminate quite a few solutions.
Automation is crucial for optimal reliability and efficiency. Automated synthesized backup begins with a complete image copy of all data in the organization. After the original image copy is complete, the system automatically produces backup tapes of only the changes made since the imaged data was captured. The original image copy is merged with backups of all changed data sets on a regular basis to recreate an image of the current environment.
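The merge step can be sketched in miniature: start from the baseline image and apply each incremental change set in order. This toy model (plain dictionaries standing in for tape contents, with `None` marking a deletion) is an illustration of the synthesis idea, not any product's implementation:

```python
def synthesize_full(full_image: dict, incrementals: list) -> dict:
    """Merge a baseline image with successive incremental change sets
    to produce an up-to-date synthetic full backup. Each change set
    maps a file path to its new contents, or to None for a deletion.
    Later change sets override earlier ones, so order matters."""
    image = dict(full_image)          # never mutate the original baseline
    for changes in incrementals:      # apply incrementals oldest-first
        for path, contents in changes.items():
            if contents is None:
                image.pop(path, None)  # file was deleted since baseline
            else:
                image[path] = contents # file was added or modified
    return image
```

The merged result then becomes the new baseline, so a restore needs only one synthetic full rather than a baseline plus a long chain of incrementals.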
The process is likely to take several days or weeks, so the system should offer backup that is interruptible and restartable in order to accommodate daily use of the network. The trade-off to this process is its inability to restore data at any given point in time. For instance, engineering could not restore a CAD drawing to its status as of a particular day to eliminate an error and subsequent unusable data. The drawing as it appeared that day would have to be recreated from scratch.
Another issue for consideration is that network transmission for backup uses a lot of bandwidth. An intelligent product can implement more efficient movement using compression. A caution: compression algorithms that run on an average server PC lengthen the backup process by adding compression time to transmission time. However, compression does pay off if the computers performing the compression are sufficiently powerful.
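The trade-off reduces to simple arithmetic: compression wins only when the time spent compressing is smaller than the transmission time it saves. A back-of-the-envelope model (the throughput and ratio figures in the comments are illustrative assumptions, not measurements):

```python
def transfer_time(size_bytes, bandwidth_bps, compress_bps=None, ratio=1.0):
    """Estimated time in seconds to move a backup over the network.
    Without compression: size / bandwidth.
    With compression: time to compress at compress_bps, plus time to
    transmit the reduced payload (size * ratio, where ratio < 1
    means the data got smaller)."""
    if compress_bps is None:
        return size_bytes / bandwidth_bps
    return size_bytes / compress_bps + (size_bytes * ratio) / bandwidth_bps

# 1 GB over a ~12.5 MB/s link, assuming 2:1 compression:
raw  = transfer_time(1e9, 12.5e6)                            # no compression
fast = transfer_time(1e9, 12.5e6, compress_bps=100e6, ratio=0.5)  # powerful CPU
slow = transfer_time(1e9, 12.5e6, compress_bps=5e6,   ratio=0.5)  # average server
```

With the fast compressor the total drops below the raw transmission time; with the slow one it balloons past it, which is exactly the caution above.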
Although the requirement may sound trite, ease of use for both users restoring data and for network backup and recovery administrators is imperative. The system cannot live by a point-and-click graphical interface alone. The system must be easy to install, administer and monitor through comprehensive reporting. It must define concrete parameters (the who, what, where, when and how) of data handling and adapt easily to a wide range of disaster recovery procedures.
Karen Winner is a consultant for Winners Marketing Communications in Minneapolis, Minn.