
Near the beginning of the digital revolution, computing devices relied on punch cards and other physical media to house information. This data was not stored solely in digitized form; rather, it was converted to machine-readable information when required, so the actual data remained in some other format. Ledgers, files, and other information storage systems were kept in secure locations, and backup meant copying the data to more files, ledgers, and storerooms.

Then computers began to acquire permanent storage devices, and the data held on them was no longer simply a temporary copy of information stored in hardcopy. Huge tape reels were safeguarded and copied to ensure data was held on more than one piece of storage equipment, and vaults were established to house data on tape and other media. The paradigm created in the physical, hardcopy world was ported over to the digital world. For a while this worked, but eventually the technology outgrew the simple idea of locking away the media for safekeeping. The change most likely began the first time a tape was pulled back from the vault after years locked away, when two major issues would have been noticed.

First, any physical medium is subject to degradation over time. Tape can break if overused, but it can also succumb to the eventual breakdown of the physical materials from which it is made. Though it takes a good deal of time for a plastic-based tape substrate to fall apart, it happens. Mold, mildew, bacteria, and other common agents in the air can be devastating to tape media if they're permitted to begin growing on it. Dank, dark basements (where vaults usually ended up) accelerated the decomposition of the tape itself, and that meant data loss.

The second issue was that technology changes over time. The huge reel-to-reel tape systems used years ago are nowhere to be found today. Even a difference of one or two years can be the difference between using a backup tape successfully and finding your data in an unrecognizable format. So when data from several years back was recalled from the vault, even if the tape was sound, there was no guarantee you would have the proper device to read it.

These issues led to the idea of keeping large amounts of data on spinning disk. Tape backup was used to keep only a secondary copy of the data, not the primary copy. It was here that backup really took shape as a protocol of its own, as opposed to just another copy of the data that might be used as the primary copy at some point. Each month, or week, or night, a copy of all data changed since the last backup would be committed to tape and stored someplace safe, so that if the primary copy was destroyed or removed from the disk, the tape could be used to replay the data back onto a new disk.
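For the sake of illustration, here is a minimal sketch of that nightly incremental pass in Python: copy only the files changed since the last run, then record the new run time. The paths and the state file are hypothetical placeholders, not any particular product's layout.

```python
# Minimal sketch of an incremental backup pass (illustrative only).
import os
import shutil
import time

STATE_FILE = "/var/backup/last_run"  # assumed location of the last-run timestamp
SOURCE = "/data"                     # assumed primary data volume
TARGET = "/mnt/backup"               # assumed backup target (disk or tape staging)

def last_run_time():
    """Return the epoch time of the previous backup, or 0 if none exists."""
    try:
        with open(STATE_FILE) as f:
            return float(f.read().strip())
    except FileNotFoundError:
        return 0.0

def incremental_backup():
    since = last_run_time()
    for root, _dirs, files in os.walk(SOURCE):
        for name in files:
            src = os.path.join(root, name)
            # Only files modified since the last run are copied.
            if os.path.getmtime(src) > since:
                dest = os.path.join(TARGET, os.path.relpath(src, SOURCE))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copy2(src, dest)
    with open(STATE_FILE, "w") as f:
        f.write(str(time.time()))

if __name__ == "__main__":
    incremental_backup()
```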

Tapes were made more reliable and long-lasting. As much as possible, tape drives were made backwards-compatible. Even when that wasn’t possible, tape devices had become small enough that keeping one of the old ones around after an upgrade wasn’t a hardship. This let data sit longer on tape safely and gave a better chance that the tape could be used even years after it was finalized.

However, as with all things, technology marches on. The demands of end users and management quickly outpaced the ability of a tape-based system to keep up. Today, it's expected that an entire data system can be brought back online with all data and applications quickly, often within the same business day as the outage or even sooner. The idea of recalling certain archived data from a backup system is still very much alive, but organizations now demand that same recall ability for the entire system, not just individual data objects. While a fast tape-based system could probably re-create a single server quickly enough, if a data system comprises multiple physical or virtual systems, tape recovery can take far longer than the business is willing to tolerate.

To meet the current goals of most organizations, vendors began creating hybrid solutions that let IT staff keep a near-line copy of the data on spinning disk, then shuttle the older backup information off to tape. They also created the ability to protect and restore an entire server, including applications and OS components. These solutions worked well if you could get the original hardware working or could acquire a nearly identical piece of hardware, but they offered little flexibility otherwise. Modern organizations began to desire a higher level of flexibility: protection that could move from system to system as required, without limitations on hardware and without the need to collect several incremental backups to restore the latest data set.
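The disk-then-tape tiering can be pictured as a simple aging policy: recent backups stay on fast disk, and anything past a cutoff is shuttled to the archive tier. The sketch below assumes a 30-day cutoff and invented directory names; real products manage this with catalogs rather than file moves.

```python
# Rough illustration of disk-to-disk-to-tape tiering (assumptions throughout).
import os
import shutil
import time

DISK_TIER = "/mnt/backup/disk"     # assumed fast spinning-disk tier
TAPE_TIER = "/mnt/backup/archive"  # assumed staging area for the tape library
CUTOFF_DAYS = 30                   # assumed retention on the disk tier

def age_out_to_tape():
    cutoff = time.time() - CUTOFF_DAYS * 86400
    for root, _dirs, files in os.walk(DISK_TIER):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) < cutoff:
                dest = os.path.join(TAPE_TIER, os.path.relpath(src, DISK_TIER))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(src, dest)  # older copies leave the disk tier

if __name__ == "__main__":
    age_out_to_tape()
```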

The latest backup systems allow not only for data components to be restored on demand at various points in the data lifecycle, but for entire data systems to be restored to one or more different servers when necessary. The idea is to protect the data using a methodology that checkpoints every change to the information, allowing object-level recovery of anything from a single file to all data on a server. This is paired with the ability to return that data to either the original server or any other system capable of reading that data type.
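A simplified way to picture checkpointed, object-level recovery: each backup run lands in a timestamped checkpoint directory, and a single file can be pulled back as of any point in time. The layout and names below are assumptions for the example, not any vendor's actual format.

```python
# Sketch of object-level restore from timestamped checkpoints (illustrative).
import os
import shutil

CHECKPOINT_ROOT = "/mnt/backup/checkpoints"  # assumed layout, e.g. .../2017-06-01T02:00/...

def restore_object(rel_path, as_of, dest):
    """Restore one file from the newest checkpoint at or before `as_of`."""
    # ISO-8601 timestamps sort lexicographically, so string comparison works.
    candidates = sorted(d for d in os.listdir(CHECKPOINT_ROOT) if d <= as_of)
    # Walk backward from the requested point in time until a copy is found.
    for checkpoint in reversed(candidates):
        src = os.path.join(CHECKPOINT_ROOT, checkpoint, rel_path)
        if os.path.exists(src):
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy2(src, dest)
            return checkpoint
    raise FileNotFoundError(f"{rel_path} not found in any checkpoint up to {as_of}")

# Example: recover a single file as it existed at the June 1 checkpoint.
# restore_object("accounts/ledger.db", "2017-06-01T02:00", "/data/accounts/ledger.db")
```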

If no system capable of reading that data type remains, due to the loss of one or more servers, these tools can also restore the entire server with applications, switches, and settings intact. Solutions vary in their flexibility, with some able to restore the system to any piece of hardware capable of running the base OS instruction set that the failed server held, regardless of the underlying hardware.

The latest solutions for data protection go well beyond what our predecessors simply called “backup,” but the idea is the same. Nothing should exist at a single point, and everything should be able to change as the times demand it.

Mike Talon is a technology professional living and working in New York City. Currently a subject matter expert in Microsoft Exchange technologies for Double-Take Software, Talon has worked for companies ranging from individual consulting firms to Fortune 500 organizations, and has had the opportunity to design systems from across the technological spectrum.