
As we quickly approach the next millennium, all thoughts turn to ensuring our readiness for January 1, 2000 and its potentially disastrous impact. Fortunately for us, it falls on a weekend, so we will have time to react. Now would be a good time to take a look at how ready we stand for this potential disaster, and for others that haven’t given us the courtesy of announcing their arrival time. To get an accurate picture of where we stand, it is always best to use where we came from as a point of reference. We can look at where we want to be, but hindsight is 20/20 and history is our best teacher.

In the past decade, the computer industry has gone through significant changes that have evolved at turbo-charged speeds. Let us take a look at some of these changes. A decade ago, we worked in a modern data center, which typically ran MVS on an IBM 3084Q processor. This processor ran at 25 MIPS and sat atop a DASD farm consisting of 3380-D, -E, and -K drives. Our typical DASD farm was measured in gigabytes, and 500 was usually the size of a large organization’s. The tape library consisted of 3420 and 3480 tape volumes. Our backups were produced on the latter of the two and rotated every morning to the vault. Many of us still worked on 3270 dumb terminals. For the lucky few using PC technology ten years ago, a 386 PC running DOS eagerly performed our tasks. Our hard disk capacity seldom exceeded 100 megabytes. Back in 1988, most of us couldn’t spell TCP/IP; e-mail and the Internet were still foreign concepts to many of us. The true techies were strictly mainframe people who had stacks of Mainframe Journal on their desks.

In 1988, we were beginning to realize that our needs were becoming more complex. Our backup windows were shrinking, and the data we needed to back up was growing. Twenty-four hours was not long enough, and the 24x7 shop was conceived. For some, this happened a little earlier. We needed to account for all this change. Since the 30-hour day was not an option, mechanisms were devised to ensure that what was most critical to our business was accounted for and recoverable to a certain point in time. Many shops devised their own tool sets to accomplish this. Third-party products began to appear to assist a data center with identifying its critical data. The term "enterprise" hadn’t arrived yet. On a weekly basis, we backed up all of our volumes and sent this data offsite. During the week, we took incremental copies of our changed data and sent this information offsite as well. With the assistance of our automated software, we would narrow our daily backups to only critical production data. In this way, we could make our backup windows and bring up our online regions as fast as possible.

DASD managers were a skilled group of people who were experts at recovery. They had to be. A typical HDA failure would wipe out not one but two volumes. The affected volumes would usually consist of a system pack as well as an online volume. Our spare volumes would never be hit until they themselves became production volumes. We started to get a real handle on how things were happening. With the assistance of our automated tools, we would comfortably make the bi-annual pilgrimage to our local hot site, recover our data, and execute a successful disaster recovery test. The interesting thing was that we always had difficulty with one item or another. Somehow, we always devised a workaround. Our technical staff was excellent, and we were proud. We entered the 1990s eager and anxious for what faced us; no task was too large for us.
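The rotation described above — full copies of every volume weekly, incrementals of only critical changed data the rest of the week — can be sketched in modern terms. This is a hypothetical illustration, not any vendor’s tool; the dataset names and the `critical`/`changed` flags are invented stand-ins for a shop’s own catalog attributes:

```python
# Hypothetical sketch of the 1988-era backup rotation: a full dump of all
# volumes weekly, daily incrementals of critical production data only.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    critical: bool   # production data needed for recovery
    changed: bool    # changed since the last backup cycle

def select_for_backup(datasets, weekly_full):
    """Return names of the datasets to copy offsite tonight."""
    if weekly_full:
        return [d.name for d in datasets]   # weekly: everything goes
    # daily incremental: only critical production data that changed,
    # so the backup window stays short and online regions come up fast
    return [d.name for d in datasets if d.critical and d.changed]

catalog = [
    Dataset("PROD.PAYROLL.MASTER", critical=True,  changed=True),
    Dataset("PROD.ORDERS.DB",      critical=True,  changed=False),
    Dataset("TEST.SANDBOX",        critical=False, changed=True),
]
print(select_for_backup(catalog, weekly_full=False))  # ['PROD.PAYROLL.MASTER']
```

The design point is the same one the article makes: the daily pass trades completeness for a backup window the shop can actually meet.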

Downsizing, consolidation and reduction in force…well, almost no task was too large for us. Over the past decade, the size of our data centers grew as our companies absorbed some of the small-fish competitors. If you were the lucky one in a larger shop and you got to keep your job, your responsibility now included managing a "stepchild" LPAR and somehow assimilating it into your primary environment. In addition, the new environment had to be recoverable as well. Of course, the other company did things a little differently. The old "experts" were let go and were working at their new careers in UNIX and client/server. Eventually the pieces were placed together, and we adjusted to the dynamic pace of life in the 90s. Change, and our adapting to it, became a crucial element of our success.

As we approach 1999, we have become acclimated to the rate of change, and we are better able to adjust to the ever-changing dynamic environment. It is important that we do not throw away the lessons we have learned from the past but instead apply them to the tasks of today. Just as hardware and software have evolved, so have we all. It is crucial that our DR plans keep pace as well. Let us look at a typical data center today.

Going into 1999, the enterprise consists of many platforms. No longer is the mainframe the sole workhorse of an organization’s data processing needs. Although a mainframe is perhaps the most powerful processor, it is typically one component of an array of mainframe, UNIX, AS/400 and NT servers. Today’s selection of mainframe processors is now complemented by CMOS processors that make yesterday’s 308x machines pale in comparison. An IBM 9672-YX6 CMOS processor runs at 1,068 MIPS, over 40 times the MIPS of the 3084Q! Besides the exponential growth in processor capability, the DASD farm has grown, and most organizations now manage terabytes of data. Fortunately for us, a revolution in DASD technology has occurred with the introduction of RAID devices from vendors such as IBM, StorageTek, and EMC. The HDA failure of today is a hot-swappable fix with no significant impact on our online users. In addition, emerging technology has introduced remote mirroring of data as well as offsite journaling of database activity. Backup strategies today can take advantage of tools such as snapshot and concurrent copy.

The tape library has undergone a revolution as well. In the early 1990s, StorageTek’s tape silos revolutionized the way we handle tape. With the introduction of 3490 and 3590 devices, tape capacity has increased significantly. It is not uncommon today for an organization to utilize a tape silo downtown as a remote repository of critical data. Additionally, hybrid technologies such as virtual tape, which combines DASD and tape technologies to improve the throughput and performance of tape data, have emerged as well.

An automated approach to recovery can deliver some of the following benefits:

  • Simplifying the hot site preparation process;
  • Detecting any changes to the application environment;
  • Synchronizing backups and the recovery process;
  • Making the overall backup process more efficient and reducing tape vaulting costs;
  • Speeding recovery and improving resource utilization at the hot site;
  • Standardizing the overall corporate approach to disaster recovery.
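The second item above — detecting changes to the application environment — comes down to comparing inventories over time so the DR plan is updated before, not after, the next test. The sketch below is a hypothetical illustration of that idea (the inventory format and dataset names are invented), not the behavior of any particular product:

```python
# Hypothetical sketch of change detection for a DR plan: diff yesterday's
# dataset inventory against today's to flag additions, deletions, and
# attribute changes that the recovery plan must absorb.
def diff_inventory(old, new):
    """Compare two {dataset_name: attributes} inventories."""
    added   = sorted(set(new) - set(old))                       # new datasets
    dropped = sorted(set(old) - set(new))                       # retired datasets
    altered = sorted(n for n in old.keys() & new.keys()
                     if old[n] != new[n])                       # changed attributes
    return added, dropped, altered

yesterday = {"PROD.GL.DB": "3390 cyl 500", "PROD.AR.DB": "3390 cyl 200"}
today     = {"PROD.GL.DB": "3390 cyl 750", "PROD.AP.DB": "3390 cyl 100"}
print(diff_inventory(yesterday, today))
# (['PROD.AP.DB'], ['PROD.AR.DB'], ['PROD.GL.DB'])
```

Anything in the three result lists is a candidate for a plan update — exactly the drift that, unnoticed, turns a hot-site test into a scramble.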

So what challenges do we face today with regard to data recoverability? Today we have even smaller backup windows, more common 24x7 operations, and even larger amounts of data to recover. Data going offsite no longer necessarily leaves on a truck; this data must be tracked. Today, applications change on a daily basis, and the application personnel are not necessarily down the hall or, for that matter, even in the same country. In fact, they might not even speak the same language. You must keep up with the dynamic changes. What was true yesterday is false today and even more false tomorrow. Although the task remains the same (recover the data), the hurdles have been raised a few feet. How are you going to achieve a successful disaster recovery plan? If history is a great teacher, we will have learned that a focused approach is key to our recovery. We cannot back it "all" up in a reasonable amount of time. Automation of our DR process is going to be extremely important. Just as storage management has been automated with the concept of SMS, so has disaster recovery been automated by tools from vendors like 21st Century Software, Mainstar, and Sentryl.
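Tracking data that no longer "leaves on a truck" means the plan must record, for every backup copy, where it lives and which generation is current. A minimal sketch of such a tracker, assuming an invented log format and dataset names (this is an illustration, not any vendor’s vaulting tool):

```python
# Hypothetical sketch of offsite-copy tracking: one log entry per copy,
# whether it rode a truck to the vault or was transmitted to a remote silo.
from datetime import date

vault_log = []  # one entry per offsite copy

def record_offsite_copy(dataset, generation, location, sent=None):
    """Log that a backup generation of a dataset went offsite."""
    entry = {"dataset": dataset, "generation": generation,
             "location": location, "sent": sent or date.today().isoformat()}
    vault_log.append(entry)
    return entry

def latest_copy(dataset):
    """Most recent offsite generation of a dataset, wherever it lives."""
    copies = [e for e in vault_log if e["dataset"] == dataset]
    return max(copies, key=lambda e: e["generation"]) if copies else None

record_offsite_copy("PROD.PAYROLL.MASTER", 1, "vault truck", sent="1998-11-02")
record_offsite_copy("PROD.PAYROLL.MASTER", 2, "remote tape silo", sent="1998-11-03")
print(latest_copy("PROD.PAYROLL.MASTER")["location"])  # remote tape silo
```

The point of the log is the recovery side: at the hot site, `latest_copy` answers the only question that matters — where is the newest usable copy?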

The good news is that the technology of today is ready to help you with the task at hand. The bad news is that getting the tools to work in harmony is not necessarily easy. Manual processes are going the way the typewriter went with the advent of word processing. Automation software will greatly facilitate the process while taking advantage of emerging technologies. Developing a dynamic plan will also prove invaluable; overly rigid plans will become obsolete overnight. A little forethought goes a long way.

The bi-annual pilgrimages must continue. Although our hardware and software have come a long way, testing of our methodologies remains a crucial factor in our success. Testing helps maintain our sharpness and allows us to fine-tune our process. We no longer just recover full volumes and critical applications; many other components are involved, and synchronization is a more daunting task. In many cases, a test is the only opportunity a DASD manager ever gets to recover a volume, so the test becomes a skill-sharpening lesson. Without a full-scale test, you cannot locate all the weaknesses in your plan. Other skill-sharpening ideas include the always-popular unannounced test, as well as rotating the players on occasion. Your ability to pass or fail reflects your company’s preparedness for the inconceivable. It is very much like a soccer team that practices for the playoffs and is prepared to win; in this case, substitute the IS organization for the soccer team and a cataclysmic disaster for the playoffs. Management is the team sponsor, and without its commitment the process will not command the respect needed to succeed. The DR plan and the team are integral components of an IS organization that faces an ever-growing task. The playing field is changing and growing constantly, and the players must be properly equipped to harness this dynamic environment.

A company’s data is its most precious resource. Shouldn’t that resource be protected with a proper amount of zeal? Looking back at a disaster is 20/20 as well; no matter what happens, a learning opportunity will emerge. Successfully recovering, however, will make looking back much easier to handle.

Paul Eckert is Vice President and Director of Development for 21st Century Software, located in Wayne, PA. Mr. Eckert has over 15 years of experience in Storage Administration, Technical Services, and Disaster Recovery Planning and Implementation. He has also designed and developed automated tools for Storage Management and Disaster Recovery. For more information, call (800) 555-6845.