
Fall Journal

Volume 30, Issue 3


The regular daily backup is still the primary method most businesses use to protect their vital data and ensure they can resume operations after a disaster. Backups protect equally well against natural disasters such as floods and earthquakes, power outages, hardware malfunctions such as the failure of a hard disk, and human or software errors that corrupt critical business information. Backup technology is well understood and stable and, as long as the vital procedures that create and verify the integrity of the daily backup are in place, it will long continue to provide the foundation for effective protection of business information infrastructure.

Increasingly, however, the rising cost of downtime in lost productivity, missed opportunity and dissatisfied customers is forcing many businesses to recognize the traditional daily backup as only one element of an effective strategy. Indeed, the backup serves best as the final, last-resort layer of protection. When backup is used by itself as the primary data protection method, the recovery process is very long, the hours of data actually lost can be quite costly, and the chances of an invalid backup due to faulty media or other reasons are often unacceptably high.

In short, traditional backups, used alone to recover from a disaster, can cost precious downtime and serious data loss, which translate directly into lost productivity and profits.

Snapshots, Disk-based Backup, Replication

A variety of technologies have evolved to supplement or replace traditional tape backup.

For example, disk-based backup systems speed both the backup process and recovery, but do not address the loss of data between the time of failure and the last backup. In order to ensure higher data availability, and faster recovery from data corruption, IT-dependent organizations have exhibited growing interest in snapshot-based backup solutions. A complement to conventional periodic backups (which typically occur once a day), these solutions provide the ability to store the state of your data more frequently, typically once every few hours. Since they are online, restoration of data can also be significantly faster than with traditional backup. If a corruption should occur, data can be restored and the system recovered with relative ease.

On the downside, disk activity typically needs to be suspended while snapshots are taken. It is also important that snapshots be well integrated with the specific application server being protected so that data integrity can be assured – taking a snapshot of the application data at a point when it is not in a consistent state may provide only a false sense of security. If the most recently recorded snapshot is out of sync with your application's latest consistent state, restoring from that snapshot may have unpredictable results. Finally, most reasonably priced snapshot solutions provide a local backup copy of the data to address disk failures and corruption, but are inadequate against the loss of an entire site.
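The consistency risk described above can be made concrete with a small sketch. This is purely illustrative (the class and hook names are invented, not any vendor's API): a snapshot taken in the middle of an application transaction captures a state the application itself never considered valid.

```python
import copy

# Illustrative sketch (hypothetical names): a snapshot taken mid-transaction
# captures a state the application never considered consistent.

class Bank:
    def __init__(self):
        self.balances = {"a": 100, "b": 0}  # invariant: total is always 100

    def transfer(self, src, dst, amount, snapshot_hook=None):
        self.balances[src] -= amount
        if snapshot_hook:
            snapshot_hook()           # snapshot fires in the middle of the transfer
        self.balances[dst] += amount  # invariant restored only after this line

snapshots = []
bank = Bank()
bank.transfer("a", "b", 50,
              snapshot_hook=lambda: snapshots.append(copy.deepcopy(bank.balances)))

print(snapshots[0])                # {'a': 50, 'b': 0}
print(sum(snapshots[0].values()))  # 50 – the 100-unit invariant is broken
```

Restoring from such a snapshot would reintroduce the half-finished transfer, which is why snapshot tools must coordinate with the application (quiescing it or waiting for a transaction boundary) before capturing state.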

Many of the weaknesses of backup and snapshot technologies are effectively addressed by replication technologies. In particular, asynchronous replication is typically not overly expensive, and provides the ability to maintain a constantly up-to-date offsite copy of the data for rapid recovery or failover to a secondary server.
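The basic shape of asynchronous replication can be sketched as follows – a minimal, illustrative model (all names here are invented for illustration) in which writes are acknowledged locally at once and shipped to the secondary copy in the background:

```python
import queue
import threading

# Minimal sketch of asynchronous replication: writes are acknowledged
# locally and shipped to a secondary copy in the background.
# Class and method names are illustrative, not any vendor's API.

class AsyncReplicator:
    def __init__(self):
        self.primary = {}
        self.secondary = {}
        self._q = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, key, value):
        self.primary[key] = value  # acknowledged to the application immediately
        self._q.put((key, value))  # replicated later, out of band

    def _drain(self):
        while True:
            key, value = self._q.get()
            self.secondary[key] = value  # secondary lags the primary slightly
            self._q.task_done()

    def flush(self):
        self._q.join()  # wait until the secondary has caught up


r = AsyncReplicator()
r.write("order-1", "shipped")
r.flush()
print(r.secondary)  # {'order-1': 'shipped'}
```

Note that the sketch also exposes the weakness discussed below: a corrupt write is queued and applied to the secondary just as faithfully as a good one.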

Each of these technologies has an important role in the arsenal of data protection methods. In fact, it is a good idea, if possible, to use all three in order to provide multiple layers of protection, avoiding a single point of failure. Even in combination, however, these technologies leave a critical vulnerability without protection: the occurrence of data corruption due to operator error, software error, or a malicious attack. Backup and snapshot technologies can restore data from before the corruption, but risk the loss of hours of information. Standard replication technologies will, of course, simply replicate the corruption to the secondary system, rendering it equally useless.

Continuous Backup Gets the Job Done

Users interested in the highest data integrity and fastest recovery are probably best served by augmenting their backup strategy with the latest continuous backup solutions. These can be added to your existing backup infrastructure (tape-based, snapshot-based, or a combination of both) and will monitor your application servers continuously, recording every operation applied to them (writes, deletes, copies, and so on) in a journal. No data is actually moved: only the operations carried out on the data, not the data itself, are logged.

Should data corruption occur, affected servers may simply be “rewound” by playing back an opposite operation (or “counter-event”) for each operation previously logged in the journal. Not only does this carry the benefit of allowing you to back up vast amounts of data accumulated over a long period of time (remember, it’s not the data itself that gets backed up but the actions taken to create or modify it), but it also means that recovery will be practically instantaneous.
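The journal-and-counter-event idea can be sketched as follows. This is an illustrative model, not any product's implementation (all names are hypothetical): each operation is logged with enough information to undo it, and a "rewind" plays back the inverses in reverse order.

```python
# Illustrative sketch of operation journaling with counter-events.
# Names (Journal, apply, rewind) are hypothetical, not from any product.

class Journal:
    """Records each operation applied to a store, together with the
    information needed to undo it."""

    def __init__(self):
        self.entries = []  # list of (op, key, old_value, new_value)

    def apply(self, store, op, key, value=None):
        old = store.get(key)
        if op == "write":
            store[key] = value
        elif op == "delete":
            store.pop(key, None)
        self.entries.append((op, key, old, value))

    def rewind(self, store, steps):
        """Play back a counter-event for each of the last `steps` operations."""
        for op, key, old, _new in reversed(self.entries[-steps:]):
            if old is None:
                store.pop(key, None)  # undo a create: remove the key
            else:
                store[key] = old      # undo a write or delete: restore old value
        del self.entries[-steps:]


store = {}
journal = Journal()
journal.apply(store, "write", "order-1", "pending")
journal.apply(store, "write", "order-1", "corrupted!")  # simulated corruption
journal.rewind(store, 1)
print(store["order-1"])  # "pending"
```

Because rewinding only replays the counter-events since the chosen restore point, its cost depends on the number of logged operations to undo, not on the total size of the data – which is why recovery can be near-instantaneous even on very large servers.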

The figure on the previous page shows the synergistic relationship between the three backup technologies. Snapshots are typically taken once every few hours, and may additionally be used to create offline backups (most likely on a daily basis). Should disaster strike when only tape backups and/or snapshots are available, you stand to lose as many as 24 hours of updates – not counting the time required to carry out the restore operations. At the very least, you will lose every update made since the last snapshot or backup. If the corruption occurred even earlier than that, you will have to go back further and lose even more data. Also note that rapid restoration from snapshots is only possible when restoring entire volumes; restoring individual files, directories or databases is significantly slower.

Ultimately, continuous backup may be added and configured to monitor every update made to your servers, either all the time or, if you prefer, in between the two most recent snapshots.

Should your data be corrupted, you will be able to choose between virtually unlimited restore points. Continuous backup is the only solution that will allow you to restore single or multiple databases to the most recent consistent state, sometimes logged just minutes before corruption occurred, so that the highest data integrity and negligible data loss, if any, are ensured.

As the duration of restore operations only depends on the volume of changes applied since the most recently journaled consistent state, and no actual data is moved, users will be able to simply “rewind” either single-megabyte or multi-terabyte servers in seconds!

Eric Jackson, vice president of products for XOsoft, has nearly 20 years of experience in the development and commercialization of advanced software technology. Prior to joining XOsoft, Jackson co-founded two technology companies, Ibrix and DeepWeave.