Two Canadian companies are living proof that being prepared pays off. When crippling ice storms struck last January, QL Systems Limited, a Kingston, Ontario provider of legal data, and Pictet (Canada), a Montreal, Quebec brokerage and portfolio management firm, had disaster recovery plans that got them back up and running, saving hundreds of thousands of dollars in lost revenues and maintaining customer confidence.

The ice storms that swept across the Northeast began in Ontario on Wednesday, January 7. Moving east in the days that followed, the storms also hit Quebec and the northeastern U.S., leaving devastating power outages in their wake. With customers who rely on their services all day, every day, staying online was critical for both QL Systems and Pictet.

"Without our disaster recovery plan we could easily have lost a half million in business and, worse, we could have lost customer confidence. But our plan was in place, we had practiced it, and we didn't panic," said Donna Ashton, Director of QL's Computing Centre. "Once we declared a disaster, we were back up and running within 24 hours and our customers were very pleased with how quickly that happened."

QL Systems Limited is the major Canadian supplier of computerized legal information and retrieval services. It operates more than 1,000 online databases and bulletin boards for customers worldwide, providing information typically used for research in preparing court cases. Pictet, a branch business of a Geneva, Switzerland bank, is a brokerage and a portfolio management firm for private and institutional investors. It monitors the stock market and buys and sells shares on a daily basis.

"We must be prepared should the main office in Geneva ask us to buy and sell on the North American Stock Exchanges," said Joseph Bassil, Assistant Vice President, Division Informatique, Pictet. "Without the capability to monitor the market and answer our customers, we would have lost a large volume of work. This is why our disaster recovery plan was so essential for us."

Both companies subscribe to IBM's Business Recovery Services. In the event of a disaster, companies generally issue an "alert" to IBM before "declaring" a disaster. Once on alert, IBM prepares the recovery site so it will be ready when the subscriber "declares." QL's business recovery subscription with IBM calls for systems that are identical to its home office environment. This includes hardware as well as software and data (including manuals and CDs), which QL backs up daily to magnetic tape stored offsite. Pictet's business recovery subscription with IBM provides it with a workplace, equipment, service and support in the event of a disaster. During the ice storms, QL used IBM's Toronto disaster recovery site only, while Pictet used the Toronto site for AS400 operations and the IBM Montreal site for all other operations.
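
The alert-then-declare arrangement described above amounts to a simple two-step escalation: an alert asks IBM to ready the recovery site, and a declaration commits the subscriber to operating from it. The sketch below illustrates that sequence only, assuming nothing about IBM's actual interfaces; all names are ours.

```python
from enum import Enum, auto


class RecoveryState(Enum):
    NORMAL = auto()
    ALERT = auto()      # recovery site is being prepared
    DECLARED = auto()   # subscriber is operating from the recovery site


class RecoverySubscription:
    """Illustrative model of the alert -> declare workflow; not IBM's API."""

    def __init__(self, subscriber: str, site: str):
        self.subscriber = subscriber
        self.site = site
        self.state = RecoveryState.NORMAL

    def alert(self) -> None:
        # An alert asks the provider to ready the site without
        # committing the subscriber to a full declaration.
        if self.state is RecoveryState.NORMAL:
            self.state = RecoveryState.ALERT

    def declare(self) -> None:
        # Declaring a disaster moves the subscriber onto the prepared site.
        self.state = RecoveryState.DECLARED


# QL's sequence during the ice storm: alert on Friday, declare on Saturday.
ql = RecoverySubscription("QL Systems", "IBM Toronto")
ql.alert()
ql.declare()
assert ql.state is RecoveryState.DECLARED
```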

"IBM Business Resumption Services are designed to get businesses back in operation as soon as possible. That's the bottom line. The QL and Pictet experiences underscore the value of having this capability in place when disaster strikes," said Ralph Dunham, Manager of IBM's Business Recovery Services. "We were at the ready for QL and Pictet, with a base of operations and a team of experts at their disposal for as long as they needed."

Experts agree that effective disaster recovery plans must begin with an analysis of the customer facility, covering buildings, surrounding hazards, electrical and mechanical facilities, geological conditions, logistics, communications and safety. Plans should be tailored to meet the requirements of the specific business and address vital business processes and applications, future operating environment requirements, financial impact, acceptable recovery windows, and recovery mode performance, among other factors. Plans should also provide subscribers with services that include assessing damage and risk, coordinating recovery site resources, establishing communication and documentation procedures, managing and resolving recovery conflicts, and conducting crisis status meetings.
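
Purely as an illustration, the elements listed above can be captured as a structured checklist. The field names below are ours, not a standard, and the 24-hour window is simply QL's own target quoted earlier.

```python
# Illustrative checklist of the plan elements listed above; not a standard schema.
recovery_plan = {
    "facility_analysis": [
        "buildings", "surrounding hazards",
        "electrical and mechanical facilities", "geological conditions",
        "logistics", "communications", "safety",
    ],
    "business_requirements": {
        "vital_processes_and_applications": [],   # filled in per business
        "future_operating_environment": [],
        "financial_impact": None,
        "acceptable_recovery_window_hours": 24,   # e.g. QL's 24-hour recovery
        "recovery_mode_performance": None,
    },
    "subscriber_services": [
        "assess damage and risk",
        "coordinate recovery site resources",
        "establish communication and documentation procedures",
        "manage and resolve recovery conflicts",
        "conduct crisis status meetings",
    ],
}
```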

On Wednesday, January 7, the storm had already begun by closing time at the QL Systems Kingston headquarters. When power went out at 10:30 p.m. that night, the company had no inkling of what lay ahead. Except for a two-and-a-half-hour period when a backup power source kicked in, QL ultimately was without power from January 7 at 10:30 p.m. until Tuesday, January 13, at 5 a.m.

"Certainly ice, snow and sleet are not unusual for us in the winter, so the ice storm itself didn't take us by surprise," said QL's Ashton. "What we never expected was how long it would last or the impact it would have."

In addition to its Kingston headquarters, QL has nine regional offices across Canada and has a high-speed Frame Relay network in place. It gathers data through arrangements with different courts that send QL electronic or hard copies of information. The bulk of this incoming information goes to QL's regional offices for processing. The company then uploads the data to the IBM mainframe at its headquarters, which in turn provides it to customers. When the crisis hit, the regional offices were able to hold the data and work with it on their PCs, but they were unable to upload it to the headquarters mainframe and, in turn, get it to the customers depending on it to do research in preparation for court.
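
The flow described above - regional offices holding court data locally until the headquarters mainframe can accept it - is, in effect, a hold-and-forward queue. A minimal sketch under that assumption (the class and its methods are illustrative, not QL's actual software):

```python
from collections import deque


class RegionalOffice:
    """Illustrative hold-and-forward behaviour; not QL's actual system."""

    def __init__(self, name: str):
        self.name = name
        self.pending = deque()          # court records processed locally on PCs

    def process(self, record: str) -> None:
        self.pending.append(record)     # work continues even while offline

    def upload(self, mainframe_online: bool) -> list:
        # Records reach customers only once the headquarters (or recovery-site)
        # mainframe is reachable; until then they are simply held.
        if not mainframe_online:
            return []
        sent = list(self.pending)
        self.pending.clear()
        return sent


office = RegionalOffice("Ottawa")
office.process("court ruling, Jan 8")
assert office.upload(mainframe_online=False) == []      # mainframe down: hold
assert office.upload(mainframe_online=True) == ["court ruling, Jan 8"]
```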

QL's backup power supply failed for good at about 1 a.m. on Thursday, January 8. When Ashton arrived later that morning, the company was totally offline, with customers unable to get any data. Ontario Hydro and the Kingston mayor's office were issuing assurances that power soon would be restored and asking that companies have employees stay home. But as the storm continued unabated throughout Thursday, heavy icing kept knocking out power each time it was restored. By now, high winds were worsening the situation, making it dangerous for utility company workers to continue any efforts to restore power.

"On Friday morning, with no letup in sight, city and power company officials were telling us it could be anywhere from four to 48 hours before power was restored," said Ashton. "At this point we put our IBM disaster recovery site on alert but was still difficult to decide if we should move to declare a full disaster and actually go to our recovery site."

QL decided to declare a disaster on Saturday morning, January 10. Although the storm had now stopped and the sun was beginning to melt ice from power lines and trees, the winds remained strong. The company contacted IBM and then selected the team to travel to the disaster recovery site in Toronto and the team to remain at the QL headquarters.

On alert since Friday morning, IBM's Toronto disaster recovery site - two hundred miles away from Kingston and unaffected by the storm - had the QL office area ready and systems configured, and it was prepared to provide technical support, including consultation on methods for completing tasks. The QL disaster team arrived at the Toronto site Saturday afternoon and had the systems up and running by early Sunday morning, January 11. Within three seconds of going back online on Sunday, the first customer got through.

Resuming business operations on Monday morning, January 12, the regional QL offices began uploading the new data to the mainframe at the Toronto site. Because the IBM recovery site was so similar to QL's own headquarters, there was no visible difference between the two for customers and regional offices conducting daily business.

With business operations running smoothly, QL turned to preparing a plan to bring its systems back once power was restored at the company headquarters. It consulted with IBM on how to reconcile the changes made at its home site after the last backup before the power outage with the changes that occurred while it was operating at the disaster recovery site.

To lessen the impact on its customers, QL chose to wait for the weekend of January 17 to make the transition back to headquarters. The team performed a backup at the Toronto site on Friday, January 16, and sent the backup tapes to the headquarters team by taxi. On Saturday, January 17, the QL home team restored that backup to the home system so that it ran in parallel with the Toronto site. At this point, QL's users were still on the Toronto system - but the Kingston site was up to date. By Saturday afternoon, headquarters was back online. To be safe, the disaster team remained in Toronto until Monday morning, January 19, before shutting down the backup site.
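
QL's cutover back to Kingston followed a fixed ordering: back up the live recovery-site system, restore that backup at headquarters, verify the two sites are in step, switch users over, and only then retire the recovery site. The sketch below illustrates that ordering only; the data and function names are ours, not QL's or IBM's.

```python
# Illustrative fail-back sequence; names and data are ours, not QL's or IBM's.

def full_backup(site: dict) -> dict:
    """Friday: take a backup of the live recovery-site system."""
    return dict(site["data"])


def restore(site: dict, backup: dict) -> None:
    """Saturday: rebuild the home system from the tapes sent by taxi."""
    site["data"] = dict(backup)


toronto = {"name": "IBM Toronto", "data": {"databases": "current to Jan 16"}}
kingston = {"name": "QL Kingston", "data": {}}
users = {"customers": "IBM Toronto", "regional offices": "IBM Toronto"}

tapes = full_backup(toronto)                    # backup at the recovery site
restore(kingston, tapes)                        # restore at headquarters
assert kingston["data"] == toronto["data"]      # both sites now run in parallel
users = {k: "QL Kingston" for k in users}       # Saturday afternoon: cut over
# The Toronto site stays up until Monday as a safety net before shutting down.
```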

"In retrospect, I wish we had declared a disaster sooner," said Ashton. "But everything else really went so smoothly that declaring sooner is just about the only thing I would have changed in the way we handled the situation." QL has made very few changes to its recovery plan as a result of the disaster. These changes include making some modifications to documentation and adding plans for transitioning back to and resuming operations at the home site after the disaster.

As it watched the ice storm that crippled Ontario move eastward, Pictet had the advantage both of knowing the storm was on the way and having an idea of its seriousness. The company was aware it had to be ready and that if the power went out it could be some time before it was restored. On Thursday, January 8, Pictet still had power but was getting reports that power was out in different areas of Montreal. Knowing it was simply a matter of time before it also would lose power, the company called IBM with an alert, then performed a backup, storing everything on magnetic tape.

In addition to its Montreal, Quebec headquarters, the company also has an office in Vancouver, British Columbia, and is responsible for keeping systems up and running for both offices as well as for the Geneva bank's offices in Luxembourg and Nassau, Bahamas. It has an NT network on a Token Ring and runs its business applications on an AS400.

"The storm's impact on our business was limited in that we were at the back-up site only one day. But the disaster did provide an excellent test for something that had previously been a plan only and it has given us great confidence," said Bassil. "We proved to ourselves, our colleagues in the Pictet Group and especially to our clients that even if the crisis had continued over the long term, the company could be functional from the back-up location."

The company's Portfolio Management operations were minimal for a day during the disaster, but clients expressed confidence in the fact that Pictet was able to maintain operations despite its downtown office being closed. On January 9, Pictet switched almost all North American trading from European clients to other brokers in other centers. Pictet's Institutional Marketing team was able to operate during the disaster by using a combination of the home office and backup-site systems. Pictet did find that conducting its Swiss Equities Selling business from the backup site was somewhat difficult operationally, although workable.

"The biggest revenue impact was the lost commissions on the North American and Swiss trading desks," said Pictet's Bassil. "But putting the plan into action clearly demonstrated that the intended objective of continuing to serve our customers is possible and that it is unlikely that a disaster situation will result in our being forced out of business."

When Pictet lost power at 10:30 a.m. Friday, January 9, its crisis management team swung into action. The team decided not to reroute calls on Friday but rather to wait until Monday and to remain on alert status for the time being. As at QL Systems, Pictet's backup power supply ran for only a short time before failing. The team decided to declare a disaster if the power was still out on Sunday morning and to leave a message that evening informing customers and employees it would be operating from the IBM recovery center.

Before the end of the day Friday, Pictet chose a team of 12 people who would go to the recovery center, including three people from the computer department and eight people from the various departments who could best help the company continue doing business. At the end of the day Friday, Bassil gathered all the backups and vital records (magnetic tapes and disks) and all necessary papers to bring home. On Sunday morning, with power still out, Pictet notified IBM it was declaring a disaster. On Sunday evening, it informed the Geneva headquarters of the situation and redirected its phone lines to the IBM site in Montreal - so that the change in location would be invisible to people calling in.

In utilizing its IBM Business Recovery Service, Pictet used two IBM sites. Because its AS400 (the mainstay of its data) runs an older version of the operating system than the version available at the Montreal IBM backup site, Pictet had to send Mr. Naji El-Hayek of its IT team to IBM's Toronto site. The company performed all other recovery operations at the IBM recovery center in Montreal, where it had two rooms set up to mirror its home office.

Pictet had its equipment delivered to the Montreal site Sunday night and began building its network. IBM unpacked the PCs and connected them to the Token Ring, while Pictet staff configured the software. The company decided not to restore its NT file server because it did not expect to be out long enough to make it worthwhile. Instead, Pictet used Bloomberg Traveler software on laptop computers to get stock information. Earlier that day at the Toronto site, Pictet had begun restoring its AS400 system. By about 3 a.m. Monday, the AS400 was up and running in parallel with the systems at the Montreal IBM site.

The company conducted business from the IBM sites through Monday, January 12, when the storm stopped and power was restored to Pictet's offices. The Pictet disaster team made the decision to return to its own offices on Tuesday morning, January 13. However, on returning Tuesday, it learned that Hydro Quebec was asking it to operate with less power and with minimum staff through the following Monday (January 19). "In hindsight, we should have stayed at the backup center longer, and next time we will wait for assurance of full power before returning," said Bassil. "But we were able to conduct business, and above all, we showed customers we can operate regardless of conditions. Even though we hadn't done a practice run of our plan before this happened, we think the recovery went quite well."

In addition to not being back at full power for a week, the company also experienced some minor problems routing telephones back to its home office. It is working with the telephone company to avoid this problem in the future. The company is also addressing feedback from the Pictet staff regarding access to the NT file server and Lotus Notes during the disaster. Pictet has now completed technical testing at a backup site for all of its systems and has upgraded its AS400 to version 4.1 of the operating system.