Avoiding Big Risks for Mid-Sized Businesses
- Published on January 31, 2008
These mandatory regulatory impositions are particularly onerous for the midsized business – the company with perhaps as few as 300 employees and certainly no more than 1,000. These are usually businesses on the rise, and rapid growth and success often mean that their administrative and management practices lag behind their expansion curve.
In other words, it is very easy for a midsized company to find itself (albeit unwittingly) trading illegally. Its officers may not fully understand the requirements of Sarbanes-Oxley (SOX), or HIPAA, or Gramm-Leach-Bliley, or Check 21, or any of the dozens of other pieces of regulation that may apply to their particular area of business. However, one thing is certain: whatever nuances may apply to specific verticals, each and every one of these regulations demands that data be protected and retrievable in case of failure.
And yet, according to the Enterprise Strategy Group, more than 60 percent of midsized businesses have no disaster recovery (DR) plan and are inconsistent at best about making backups. In general, they have no contingency plans for restoring critical services and operations, no secure offsite data storage, and certainly no on-demand, exact-copy data retrieval.
So why is this? Who in their right mind wouldn’t want to keep a secure, up-to-the-minute copy of business-critical data? (Never mind what the regulators want.) The answer is one word: complexity.
Until now it has simply been too difficult, especially for the midsized organization. Data storage infrastructures have always been somewhat volatile, and traditionally the solution has been to keep multiple copies of everything – hardware, software, documentation, indeed entire data centers. That catch-all approach is far too expensive for midsized businesses, which have instead watched their data expand with a touching faith in the reliability of disk drives and in human nature, as “the IT guys” struggled with floppies, tapes, and any other medium that might give them a sporting chance of retrieving something – anything – in the face of a massive outage.
That’s the bad news. The good news is that technology has moved on to a point where efficient DR need no longer be the preserve of the few or only the very large. DR for the many is upon us. These new technologies, however, do not eliminate the need for a well-thought-out DR plan.
For one thing, DR is not necessarily an off-the-shelf activity. A well-thought-through DR plan should consider, and perhaps contain, several levels of disaster “category.” For example, a “disaster” may be the loss of a particular network due to human error when manipulating network cabling. It may be the accidental deletion of an important document. But it might also mean the partial or full loss of an entire facility. So while one person’s “disaster” might be another’s “inconvenience,” the relative gravity of the event does not diminish the need for appropriate DR measures. In any event, developing a satisfactory and robust DR scenario is particularly challenging for midsized businesses.
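The tiering of disasters described above can be sketched in code. This is a minimal illustration only – the tier names, example events, and recovery targets below are assumptions for the sake of the sketch, not figures from any plan or standard; real objectives must come from the business itself.

```python
from dataclasses import dataclass

@dataclass
class DisasterTier:
    """One 'category' of disaster and the recovery targets assigned to it."""
    name: str
    example: str
    rto_minutes: int   # recovery time objective: how long until service is back
    rpo_minutes: int   # recovery point objective: how much data loss is tolerable

# Illustrative tiers only -- real targets come from the business's own DR plan.
TIERS = [
    DisasterTier("minor", "accidental deletion of a document",
                 rto_minutes=60, rpo_minutes=15),
    DisasterTier("moderate", "loss of a network segment (cabling error)",
                 rto_minutes=240, rpo_minutes=15),
    DisasterTier("major", "partial or full loss of a facility",
                 rto_minutes=1440, rpo_minutes=60),
]

def required_measures(tier: DisasterTier) -> list[str]:
    """Map a tier to the minimum recovery measures it demands (simplified)."""
    measures = ["documented restore procedure", "regular backups"]
    if tier.rpo_minutes <= 15:
        measures.append("near-continuous replication or frequent snapshots")
    if tier.rto_minutes <= 240:
        measures.append("standby capacity ready to take over")
    return measures
```

The point of writing the tiers down this way is that each category carries its own recovery targets, so an “inconvenience” and a “disaster” each get measures proportionate to their gravity.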
One approach is to ensure, as a bare minimum, redundancy of all technologies involved in securing, recording, transporting, authenticating, and authorizing access to customer record data. This is vital to guard against practices and procedures that might be construed as regulatory violations or threats to customer privacy, triggering potential liability and litigation. It is particularly important in data storage, where multiple, redundant, synchronized copies of the electronic customer record – including all transaction-oriented data – must be stored to prevent a compliance issue from being raised. In turn, this involves constructing storage networks that are simple to maintain, yet flexible and resilient enough in design to accommodate future growth and increasing disaster tolerance without affecting customer operations or customer record data access at any time: in other words, a zero-disruption storage architecture.
According to research by IT industry analyst Gartner, planned downtime is by far the most common cause of downtime in a network, accounting for up to 80 percent. It is followed by application failure, operator error, and operating system failure. Near the end of the list come hardware failures, power outages, and natural disasters. Networked storage technology can reduce or eliminate almost all of these downtime causes. It does not, however, necessarily eliminate complexity or difficulty of operation. To overcome those obstacles, storage virtualization technology is vital: it makes networked storage easy to use and creates opportunities for storage managers to perform real-time testing of software changes and upgrades. Storage virtualization also eliminates much of the tedious, labor-intensive tuning and data management present in nonvirtualized storage environments.
The choice of technologies for a DR solution for customer data is entirely dependent on the answers to two questions: How long is the recovery window, and what constitutes a “recovery”?
If a recovery consists of a complete restoration of operations and all customer record data, then several technologies will be involved, namely server platforms, operating systems, IP networking, storage networking, storage platforms, and all customer-data-related and financial software applications necessary to perform audit and analysis. In addition, trained personnel will be involved to perform the recovery functions, as well as implement any special procedures necessary to ensure complete customer record data restoration.
In terms of time, the optimal set of technologies to ensure a zero-time recovery window – in other words, complete disaster tolerance, or “never being down” – involves geographically dispersed computing and storage facilities, interconnected by several redundant but logically and physically separate networks for both TCP/IP and storage (FCP) protocols. In this architecture, customer record data is kept in multiple locations via synchronous mirroring or other image-consistent replication techniques, ensuring real-time updating of all instances of the customer record. This method obviates the need to rely on any one physical facility for the files or documents tied to or within an electronic customer record.
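The essential property of synchronous mirroring is that a write is acknowledged only after every site has committed it, so all copies are identical at the moment of acknowledgement. The sketch below illustrates that rule in plain Python; the `Replica` class, site names, and function names are inventions for illustration, not any vendor’s replication API.

```python
class Replica:
    """A stand-in for one storage location holding a copy of the customer record."""
    def __init__(self, site: str):
        self.site = site
        self.store: dict[str, bytes] = {}

    def write(self, key: str, data: bytes) -> bool:
        # In a real array this would be a remote commit that can fail or time out.
        self.store[key] = data
        return True

def synchronous_write(replicas: list[Replica], key: str, data: bytes) -> bool:
    """Synchronous mirroring rule: the write completes only after EVERY site
    has confirmed it, so no acknowledged record exists at one site only."""
    acks = [r.write(key, data) for r in replicas]
    if not all(acks):
        raise IOError("mirror write not confirmed at all sites; record not acknowledged")
    return True

# Two geographically separate sites holding the same customer record.
site_a = Replica("metro-A")
site_b = Replica("metro-B")
synchronous_write([site_a, site_b], "cust-001", b"record-v1")
```

The cost of this guarantee is that every write waits on the slowest site, which is why synchronous mirroring is normally paired with the metro-area distances discussed below.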
In addition, zero-time recovery mandates the use of online magnetic disk technologies in all locations to hold enough customer record data to perform all necessary and sufficient examinations and consultations. A typical time period for online customer record data retention is three to five years, after which magnetic tape may be used for archival purposes. However, restoration of customer record data from magnetic tape is a suboptimal process and should only be carried out in parallel with recovered operations via redundant sets of servers, networks, and magnetic disk storage subsystems. In other words, storing customer record data on tape may be sufficient for archival purposes, but not zero-time recovery purposes.
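The disk-then-tape retention policy above amounts to a simple age test. The sketch below assumes a five-year online window (the upper end of the article’s three-to-five-year range) purely for illustration; the function name and constant are hypothetical.

```python
from datetime import date

# Assumed 5-year online window (the article cites three to five years).
ONLINE_RETENTION_DAYS = 5 * 365

def storage_tier(record_date: date, today: date) -> str:
    """Decide where a customer record lives: online magnetic disk while it may
    still be needed for examinations, tape archive once that window passes."""
    age_days = (today - record_date).days
    return "online-disk" if age_days <= ONLINE_RETENTION_DAYS else "tape-archive"
```

Records that the policy sends to tape remain recoverable, but – as the article notes – restoring them is slow and should only supplement, never replace, the disk-based zero-time recovery path.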
A highly available, disaster-tolerant environment for storing customer records should optimally include at least two disk storage subsystems connected to each other via IP (or Fibre Channel with dense wave division multiplexing, at metro-area distances) and then attached to servers via Fibre Channel connections. Subsystem A is mirrored to Subsystem B, which can be connected to a massive array of idle disks (MAID) library, or a tape library, for archival storage. MAID backups or tape archives can be constructed from Subsystem B without impacting the performance of the network. Clinical or financial application servers connected to Subsystem A can be clustered so access to data is not disrupted by a server failure. A fully redundant system means that any single component can fail for any reason and applications still have access to their data. With new technologies including thin-client applications, thin (diskless) rack-mounted or blade servers, and data replication over IP, these solutions are now very cost effective.
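The redundancy claim above – any single component can fail and data stays reachable – reduces to a simple availability condition: at least one storage subsystem and at least one clustered server must survive. A toy check, with invented component names, makes the logic explicit:

```python
def data_accessible(components: dict[str, bool]) -> bool:
    """In the mirrored A/B design, applications keep access to their data as
    long as at least one storage subsystem AND at least one clustered
    application server remain up. Component names are illustrative."""
    storage_ok = components["subsystem_a"] or components["subsystem_b"]
    servers_ok = components["server_1"] or components["server_2"]
    return storage_ok and servers_ok
```

Walking single-failure scenarios through a check like this is a cheap way to confirm, on paper, that a proposed design really has no single point of failure before any hardware is bought.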
Other facets to consider when selecting a solution for customer data storage and DR include choosing one that is standards based and easy to implement and operate, requiring minimal full-time-equivalent (FTE) staff hours. This is particularly important for recovery and restoration procedures in the event of a declared disaster. A number of storage solutions on the market today are highly proprietary – they do not operate according to ANSI open-systems standards – making them at best challenging and at worst impossible to integrate into existing networks. A standards-based, open-systems solution is not only easier to integrate but is also positioned to work with future standards-based technologies.
Storage systems are designed with widely varying levels of complexity. For radiology applications, look for a system that removes complexity from storage management. The complexity of the regulatory requirements creates an opportunity for hospitals to re-examine and upgrade their data management procedures so that they demand less interaction from FTE staff. Electronic data can be made more secure than paper data, is much faster to retrieve, and can be viewed in multiple locations at the same time. However, these benefits are lost if a healthcare organization implements a cumbersome system that integrates poorly with existing applications and networks.
Prior to the recent regulatory requirements, many organizations purged customer data from their corporate archives haphazardly – that is, they had no standardized retention policy. Now certain regulatory bodies require the information to be available for anywhere from five to seven years, up to a lifetime, or even beyond an individual’s lifetime (e.g., HIPAA).
Certainly, in the nonregulated customer service domain (no one has yet mandated that you must give good service!), quick, reliable, nonstop access to customer record data confers highly tangible commercial advantages, and few businesses without such access would survive very long in today’s dog-eat-dog business climate.
Whether mandated by law or by the exigencies of best practice and customer service, the systems discussed above can create an environment that complies with regulatory requirements, serves customers, and makes data management easier and more cost effective for organizations of all sizes.
Rob Peglar is vice president of technology marketing for Xiotech Corporation. A 28-year industry veteran, he has global responsibility for the shaping and delivery of emerging technologies, defining Xiotech’s product and solution portfolio, including business and technology requirements, planning, execution, technology futures, strategic direction, and industry/customer liaison. Peglar holds a bachelor’s degree in computer science from Washington University in St. Louis, Mo., and performed graduate work at Washington University’s Sever Institute of Engineering. His research background includes I/O performance analysis, queuing theory, parallel systems architecture and OS design, storage networking protocols, and virtual systems optimization.
Appeared in DRJ’s Winter 2006 Issue