Recovering an operating data center after a disaster is no trivial task, and it involves far more than simply recovering data. It means re-constituting software, run-time configurations, servers and their settings, network I/O, network topology, cabling, and server storage connections. These configurations change constantly, and they are unique to your enterprise.
The traditional approach of maintaining a duplicate production data center and bringing it online within a recovery time objective (RTO) of under six hours requires a hefty ongoing investment.
The trouble is, most disaster recovery (DR) tools and procedures are either highly specific to one part of the data center, require a complete duplication of infrastructure, or are manually intensive. As a result, these DR plans require frequent drills for verification and are usually limited to a subset of only the most critical tier-1 applications.
With the advent of virtualization, there are actually more silos of data center assets to recover: those that have been virtualized, and those that have not. Unfortunately, tools for virtualization-based DR simply don’t apply to physical software instances. In summary:
Clustering/DR software is often specific to a given server or application. While reliable, it is expensive to install and maintain, and often doesn’t apply beyond a limited set of applications or OSes. Further, it assumes a second set of hardware as a dedicated recovery environment, costing capital, space and power.
Warm or hot DR facilities face similar challenges: they involve sunk costs for hardware and space, must be kept in lock-step configuration with production, and often require periodic DR drills for verification.
DR services are an alternative, but they require staff to follow configuration procedures to duplicate the production environment, a process subject to error and version skew, and usually accompanied by RTOs measured in days.
VM-based DR approaches work well, but they assume that all services have been virtualized and that the recovery environment is pre-configured and “warm” — neither of which is always the case.
Is There an Alternative?
The Holy Grail for the data center DR and CooP professional would be an approach to re-configuring all of the moving pieces, not just software. Software (or OS) virtualization has provided a partial solution: the ability to logically define software stacks that can be moved or re-constituted on different servers, even in different physical locations. But the data center is more than just software, and there are good operational, performance and licensing reasons why software virtualization may not be present in every domain of today’s production data center.
So, instead, consider: what if the infrastructure domain of the data center could be virtualized and re-constituted too? Regardless of the software and OS in use, what if server I/O (Ethernet, Fibre Channel, etc.) could be virtual? What if the network itself (cables, switches) could be virtualized, along with load balancing? What if any storage unit (LUN) could be connected to any server?
That would mean any given data center configuration, including physical and virtual software, servers, I/O, network configurations and storage connections, could be instantly re-constituted somewhere else, on “cold” hardware: a guaranteed carbon copy, right down to the addresses and network names of servers and ports.
Although you may not have heard of it yet, this approach is alive and well in today’s higher-end data centers, and it is now available to more mainstream DR/CooP users.
Enter: Converged Infrastructure
The new approach is termed converged infrastructure (also referred to as unified computing) and focuses on the logical configuration of data center infrastructure. In essence, all server, I/O, network and storage resources are pooled and then assigned as needed. It allows IT operators to use the same software applications (physical and virtual) they’re accustomed to, and lets operations continue to use a SAN to replicate the data and software running on top of the infrastructure.
The beauty of the converged infrastructure approach is that it vastly simplifies how data centers are configured while eliminating much of the physical complexity and diversity. Best of all, existing software applications are indifferent to the technology.
Converged infrastructure can be built from some of the same standard hardware and networking components common to data centers, but these are governed differently, by a unifying layer of software. That software performs the following functions:
Virtualizing server I/O – allowing IT to define as many or as few network and storage connections per server as desired, eliminating multiple physical I/O cards
Virtualizing the network – permitting a single wire to carry both data and storage traffic. This creates a “wire-once” environment, where a single physical cable can act as if it is multiple virtual cables.
Virtualizing storage naming and connections – so that any storage unit can be attached to any server as demands dictate
Defining what software images will run on what servers – simplifying how and where physical and virtual software is placed during configuration or recovery
All of these components are part of individual server “profiles” that can be instantiated anytime, anywhere. And a group of server profiles can be joined to represent an entire data center profile that can be re-constituted onto a “cold” data center in case disaster demands.
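To make the profile idea concrete, here is a minimal sketch in Python. All of the class names, fields and values below are hypothetical illustrations, not any vendor’s actual schema or API; the point is simply that a server’s identity (its addresses, storage bindings and software image) becomes data that can be re-applied, unchanged, to cold hardware at a recovery site.

```python
from dataclasses import dataclass, field

@dataclass
class ServerProfile:
    """Hypothetical logical definition of one server: identity,
    virtual I/O, network and storage bindings, and software image."""
    name: str
    mac_addresses: list   # virtual Ethernet identities (vNICs)
    wwns: list            # virtual Fibre Channel identities (vHBAs)
    vlans: list           # network segments the server joins
    boot_lun: str         # storage unit holding the OS/software image
    image: str            # software stack to run (physical or VM host)

@dataclass
class DataCenterProfile:
    """A group of server profiles that can be re-constituted
    as a unit onto 'cold' hardware."""
    name: str
    servers: list = field(default_factory=list)

    def instantiate(self, hardware_pool):
        """Map each profile onto an available physical server;
        each profile carries its addresses and names with it."""
        return {profile.name: machine
                for profile, machine in zip(self.servers, hardware_pool)}

# Example: re-constituting a two-server "data center" on cold hardware
prod = DataCenterProfile("prod", [
    ServerProfile("app01", ["00:1B:21:AA:00:01"], ["50:01:43:80:00:aa:00:01"],
                  [10], "lun-app01-boot", "sap-app"),
    ServerProfile("db01", ["00:1B:21:AA:00:02"], ["50:01:43:80:00:aa:00:02"],
                  [20], "lun-db01-boot", "oracle-db"),
])
mapping = prod.instantiate(["blade-7", "blade-9"])
print(mapping)  # {'app01': 'blade-7', 'db01': 'blade-9'}
```

The design choice worth noting is that the physical machines appear only at instantiation time; everything that makes a server *that* server lives in the profile, which is why the same profiles can be replayed at a different site.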
Use by Enterprises and Service Providers
The converged infrastructure approach was first developed by Wall Street technologists almost 10 years ago to cope with complex availability and DR issues – and is still in use today by firms demanding the highest levels of universal server fail-over and data center disaster recovery.
Because it works across diverse data center workloads, converged infrastructure management frequently displaces clustering, run-book automation and other manual recovery activities. And, for that reason, it’s often used in heterogeneous and/or complex data centers needing a simplified DR/CooP policy.
For example, consider Military Health Services, a federal health care provider that operates dozens of critical facilities around the globe, each with somewhat different IT configurations and workloads. To meet RTOs, an ordinary DR approach would require multiple “warm” recovery facilities. Instead, it uses a converged infrastructure approach in which a single recovery location is shared among all facilities while maintaining an RTO on the order of a few hours. Alternatively, development and staging facilities can be re-purposed as production environments, still meeting the required RTO without a dedicated recovery facility.
Or consider Brown Shoe Inc., a footwear manufacturer and retailer. Brown Shoe operates an SAP environment to manage corporate enterprise resource planning. The system consists of multiple landscapes, each made up of physical and virtual servers; each landscape is assigned to production, staging, development, and so on, and each may differ slightly from the others. The company turned to converged infrastructure, instead of traditional clustering, to simplify the overall infrastructure and to ease providing availability and disaster recovery services to each landscape.
So, when considering IT DR approaches that involve clustering, manual configuration or “warm” recovery sites, also consider the advantages of converged infrastructure systems. Converged infrastructure is typically more universal across software applications, actually reduces the infrastructure hardware needed for production and recovery environments, and lowers the price point for achieving a given RTO for the majority of IT systems. CooP and IT professionals are increasingly turning to this approach to simplify their jobs and to provide a more reliable recovery tool while decreasing the degree of testing and validation required.
Ken Oestreich is vice president of marketing for Egenera Corporation. His more than 20 years of industry experience spans developing markets for utility computing, cloud computing, and converged infrastructure technologies.