“Despite the fact that wildfires, tornadoes and severe thunderstorms persist, and major storms continue to impact coastal states, the numbers indicate that many families and businesses have not taken the steps to be prepared to handle a disaster,” said Marty Henry, Vice President, Travelers Risk Control. “Now is the time to avoid being complacent and take the necessary steps to get your home and business ready.”
Travelers recommends three steps to help families and business owners prepare for disaster:
· Make a survival kit – Pack enough supplies to last between three and seven days for your family and pet(s). For businesses, make sure you have copies of important documents and contact lists that you can find and reference after the storm.
· Map out an evacuation plan – Have a plan for where your family will evacuate. A solid business continuity plan should include information to share with employees about steps the business would take if it were impacted by a disaster.
· Create an inventory – Be sure to have a copy of your home’s inventory in a separate location. The Insurance Information Institute offers a home inventory app, making it easy for families to create one. For businesses, Travelers’ alliance with the Insurance Institute for Business and Home Safety has Open for Business®, a comprehensive toolkit to help plan ahead.
“While many may think a large-scale catastrophe won’t happen to them, even pop-up thunderstorms can cause significant property damage, and they occurred 28 times more frequently than hurricanes in the first half of 2012,” added Henry. “This is just one example of how vulnerable individuals and business owners may be if they are not prepared.”
For additional information on ways to get prepared, visit the Prepare & Prevent and Protect Your Business pages on Travelers.com.
The Travelers Companies, Inc. (NYSE: TRV) is a leading provider of property casualty insurance for auto, home and business. The company’s diverse business lines offer its global customers a wide range of coverage sold primarily through independent agents and brokers. A component of the Dow Jones Industrial Average, Travelers has more than 30,000 employees and operations in the United States and selected international markets. The company generated revenues of approximately $25 billion in 2011. For more information, visit www.travelers.com.
COLUMBUS, OHIO –Emerson Network Power, a business of Emerson (NYSE:EMR) and a global leader in maximizing availability, capacity and efficiency of critical infrastructure, today announced the three-year performance results of the Emerson Global Data Center, located on Emerson’s corporate campus in St. Louis, Mo. Constructed in 2009 as part of an effort to consolidate Emerson’s data centers globally, the facility has exceeded expectations for availability, capacity and efficiency. Emerson also released a new video today to celebrate the data center’s third anniversary and showcase the results, which include:
- Efficiency 39 percent greater than today’s average data center
- 100 percent availability, even when hit with extreme weather
- 80 percent server virtualization
- LEED Gold recognition
In addition to playing a major role in Emerson’s consolidation efforts, another goal for the 35,000 square-foot data center was to use the same best practices and technologies that Emerson Network Power recommends to customers. The first best practice employed was using Emerson Network Power’s Energy Logic roadmap to guide each decision that was made to ensure efficient energy use and to optimize space, power and cooling. Energy Logic leverages the cascade effect that occurs when lower energy consumption at the component and device level is magnified by reducing demand on support systems.
Following all Energy Logic steps allowed the data center to operate 39 percent more efficiently than the average data center operating today. In fact, Energy Logic played a major role in the data center becoming LEED (Leadership in Energy and Environmental Design) certified by the U.S. Green Building Council. The project surpassed its green building goal of LEED Silver recognition and instead won LEED Gold based on sustainability practices, the materials used, indoor environmental quality, energy use and design innovation. One notable example was deploying a 100-kilowatt solar array, which since beginning operation has generated enough electricity to power the equivalent of 18 homes.
“When we started out, we expected to have virtualization rates of about 70 percent,” said Jake Fritz, vice president of infrastructure and operations for Emerson. “But as we’ve grown over time and consolidated more, we’ve actually reached 80 percent or more.”
Beyond virtualization, the data center’s highly flexible capacity is a result of designing the facility to scale three-fold. The extra space won’t be needed immediately, however: between virtualization and increased equipment efficiency, there is enough raised floor space to handle the next three to five years of expected capacity growth.
The Emerson Global Data Center, which meets the criteria The Uptime Institute has set for a Tier III facility, has operated continuously for the past three years.
“We had an F4 tornado that went through and ripped the St. Louis airport apart, came within a mile of our data center and took out a lot of power to the area,” Fritz said. “But we didn’t see one minute of downtime.” The 100 percent availability experienced by the data center is credited to multiple layers of electrical redundancy, as well as a building design that can withstand an 8.0 earthquake.
STORServer Enterprise Backup Appliance 3100 takes No. 1 spot against competition in DCIG 2012 Backup Appliance Buyer’s Guide
Written by Mike McClain, Senior Web Designer & Site Manager
The STORServer EBA 3100, which represents a different tier of backup appliances than what other providers on the market currently offer, scored so highly overall that it fell outside of the two standard deviations that DCIG generally uses as a guideline for inclusion and exclusion of products. Its score in the Buyer’s Guide was notably higher than the next closest model, which is significant considering that nearly 70 backup appliances were evaluated in this Buyer’s Guide. The EBA 3100 is the largest model in the STORServer appliance line, offering up to 1 petabyte of data storage.
“We included the EBA 3100 in this Buyer’s Guide because we haven’t come across any other backup appliances that come close to matching its software and hardware attributes, putting the model in a class of its own,” said Jerome Wendt, president and lead analyst for DCIG. “As such, we felt it would be doing the market a disservice by not informing them that this superior backup appliance existed and was generally available for purchase.”
The STORServer EBA 3100, 2100, 1100 and 800 models took four of the top five positions in the Buyer’s Guide.
“To garner four of the top five positions in such a prestigious analysis of backup appliances on the market today stands as the highest compliment of our efforts in providing best-in-class appliances,” said John Pearring, manager of sales and marketing for STORServer. “We are thrilled by the conclusions of DCIG’s comprehensive study.”
STORServer’s appliance solutions offer a robust feature set making them easy to install, use and manage on a daily basis, as well as a support model that enables a stronger total cost of ownership than its competitors. STORServer customers have the ability to incorporate all recent technological advances, such as global deduplication, virtual machine data protection management, block level incrementals and production-available replication, into the backup appliance.
“These hardware features coupled with EBA 3100’s high scores in every other category should give enterprises a high degree of confidence that this model will satisfy and likely exceed whatever backup workloads or requirements they have,” said Wendt. “Those organizations needing a backup appliance with the most robust set of features available on the market may start and stop with the STORServer EBA 3100.”
While the EBA 3100 scored at the top or near the top in every category in the Buyer’s Guide, it was the only backup appliance to earn an “Enterprise” rating in the “Hardware” category. The model distinguished itself with its support for Active-Active controllers, a large amount of storage capacity, multiple storage networking interfaces and high levels of redundancy.
“We have worked diligently with IBM to focus on the enterprise quality of our appliances,” said Bill Smoldt, president and CEO of STORServer. “Their excellence in product development has made possible STORServer’s presentation of our data backup appliance lines. They deserve our recognition and thanks for this honor as the only enterprise level backup appliance in the marketplace.”
Built on IBM Tivoli Storage Manager, STORServer offers a complete suite of enterprise backup appliances, plus software and services that solve today’s backup, archive and disaster recovery challenges. For more information on the company’s line of data backup solutions, visit http://www.storserver.com.
WASHINGTON - As the remnants of Hurricane Isaac continue to impact portions of the country, FEMA has kicked off the ninth annual National Preparedness Month, observed every September.
On Friday, August 31, 2012, President Obama signed a proclamation designating September as National Preparedness Month. The effort is led by FEMA's Ready Campaign in partnership with Citizen Corps and The Ad Council. The campaign is a nationwide effort encouraging individuals, families, businesses and communities to work together and take action to prepare for emergencies. While 60 percent of Americans say preparing for natural or man-made disasters is very important to them, only 17 percent claim to be very prepared for an emergency.
"This year's wildfires, the derecho, and Hurricane Isaac are all important reminders that disasters can happen anytime and anywhere," said FEMA Administrator Craig Fugate. "By taking steps now to prepare for emergencies, we ensure that our families and communities are prepared to respond and recover from all types of disasters and hazards. Together, our efforts will build a stronger and more resilient nation."
Readiness is a shared responsibility, and FEMA asks all Americans to make the pledge to prepare this month and truly help themselves, their neighbors and their communities be Ready. People can get started by visiting www.Ready.gov/today to download a family emergency plan and emergency kit checklists and to get information on how to get involved locally. Be informed about the types of emergencies that can happen in your area and the appropriate ways to respond.
National Preparedness Month is supported by a coalition of public, private and non-profit organizations that help spread the preparedness message. Last year, FEMA had a record number of 8,952 coalition members. This year, FEMA expects to have another record-breaking number of coalition members. By hosting events, promoting volunteer programs and sharing emergency preparedness information, coalition members can help ensure that their communities are prepared for emergencies. To become an NPM Coalition Member and find readiness events that may be taking place near you, visit: http://community.fema.gov.
During National Preparedness Month, and throughout the year, FEMA and the Ad Council encourage Americans to prepare in advance for all types of natural disasters. The Ready Campaign's websites (ready.gov and listo.gov) and toll-free numbers (1-800-BE-READY and 1-888-SE-LISTO) provide free emergency preparedness information and resources available in English and Spanish.
Follow FEMA online at http://blog.fema.gov, www.twitter.com/fema, www.facebook.com/fema, and www.youtube.com/fema. Also, follow FEMA Administrator Craig Fugate's activities at www.twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
WASHINGTON -- Americans are becoming increasingly reliant on mobile devices during emergencies to provide information, useful tools and a way to let loved ones know they are safe, according to a new survey conducted by the American Red Cross.
Mobile apps now tie social media as the fourth-most popular way to get information in an emergency, following TV, radio and online news. The Red Cross survey found that 20 percent of Americans said they have gotten some kind of emergency information from an app, including emergency apps, those sponsored by news outlets and privately developed apps.
"We've monitored more than 100,000 mentions about Hurricane Isaac on social media," said Wendy Harman, director of social strategy of the Red Cross. "People are stressed out, scared and seeking information. Social media and apps become a way to reach out to them with emotional support and tips on staying safe."
The survey also identified a subsection of the population deemed "emergency social users," people who are the most dedicated users of social media during emergencies. These users are likely to take a safety or preparedness action based on the information they see in their social networks. Three out of four of these users say they've contacted friends and family to see if they were safe and more than a third say social information has motivated them to gather supplies or seek safe shelter.
Other key findings include:
- Emergency social users are also most likely to seek and share information during emergencies. While they look for the hard facts—road closures, damage reports and weather conditions—they share personal information about their safety statuses and how they are feeling.
- Three out of four Americans (76 percent) expect help to arrive within three hours of posting a request on social media, up from 68 percent last year.
- Forty percent of those surveyed said they would use social tools to tell others they are safe, up from 24 percent last year.
The Red Cross continues to encourage people to call 9-1-1 as the best first action when in need of emergency assistance. At the same time, the organization is responding to the interest in mobile assistance by releasing a series of free apps for both iPhone and Android users.
The Red Cross introduced apps for shelter locations, first aid tips and instruction, and hurricane preparedness, the last of which also includes a flashlight feature as well as one-touch "I'm safe" messaging that connects directly to the user's social media channels. The Red Cross plans to unveil several other preparedness apps throughout the fall. Links to the apps can be found at redcross.org/prepare/mobile-apps.
For more information and to view the full survey and infographic, visit redcross.org.
Two similar polls were fielded June 14-17, 2012, by ORC International using its CARAVAN® service and two methodologies. The first was an online survey of 1,017 respondents representative of the U.S. population aged 18 and older. Respondents for the online survey were selected from among those who have volunteered to participate in online surveys and polls, and the data have been weighted to reflect the demographic composition of the 18+ population. Because the sample is based on those who initially self-selected for participation, no estimate of sampling error can be calculated. The second was a telephone survey of 1,018 U.S. adults 18 years and older; its margin of error is +/- 3.1 percentage points at the 95% confidence level.
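The quoted +/- 3.1-point figure for the telephone poll follows from the standard formula for the sampling error of a proportion. A quick sanity check (assuming the conventional worst case p = 0.5 and the 95% normal critical value, neither of which the release spells out):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a confidence interval for a sample proportion.

    p = 0.5 is the conservative worst case (largest variance);
    z = 1.96 is the normal critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Telephone sample of 1,018 adults -> roughly +/- 3.1 percentage points
print(round(100 * margin_of_error(1018), 1))  # → 3.1
```

No comparable figure exists for the online panel because, as the release notes, a self-selected sample has no calculable sampling error.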
About the American Red Cross:
The American Red Cross shelters, feeds and provides emotional support to victims of disasters; supplies more than 40 percent of the nation's blood; teaches skills that save lives; provides international humanitarian aid; and supports military members and their families. The Red Cross is a not-for-profit organization that depends on volunteers and the generosity of the American public to perform its mission. For more information, please visit redcross.org or join our blog at blog.redcross.org.
Columbus, Ohio – Emerson Network Power, a business of Emerson (NYSE:EMR) and a global leader in maximizing availability, capacity and efficiency of critical infrastructure, today announced the availability of its Dynamic Cooling Assessment for data centers in the United States. The service employs sensors and wireless communications to track environmental temperature and humidity in order to effectively apply industry-leading engineering expertise to the management of the data center environment. The assessment service is a cost-effective, long-term approach to gaining the benefits of a data center cooling and efficiency assessment.
“Managing the ideal environment in a data center is a never-ending challenge,” said Brian Humes, vice president and general manager, Emerson Network Power’s Liebert Services business. “IT loads change and grow as equipment is replaced or upgraded; airflow and cooling effectiveness are altered when infrastructure ages, moves or gets enhanced with new technologies. The Dynamic Cooling Assessment provides real-time engineering analysis of the cooling infrastructure.”
Continuous, real-time performance tracking enables trends and problems with temperature and airflow to be identified before they jeopardize IT load availability. When issues are identified, solutions are recommended to prevent environmental problems such as hot spots, over-cooling and cooling system inefficiency.
The Emerson Network Power Dynamic Cooling Assessment includes installation of wireless transmitters, Liebert Sensor Network and Ntegrity Gateway without disruption to data center operations. Quarterly reports can be generated to provide trending analysis, identification and prioritization of potential problems affecting availability or efficiency, and detailed recommendations for corrective action. An annual Computational Fluid Dynamics (CFD) model is also generated allowing end-users to plan and optimize future data center changes.
For more information on any other Liebert technologies and services from Emerson Network Power, visit www.EmersonNetworkPower.com.
COLUMBUS, OHIO – Emerson Network Power, a business of Emerson (NYSE:EMR) and a global leader in maximizing availability, capacity and efficiency of critical infrastructure, today introduced Energy Logic 2.0, a vendor-neutral roadmap of 10 strategies that can reduce a data center’s energy use by up to 74 percent. The approach, detailed in a new e-book, updates the original Energy Logic, introduced in 2007, to incorporate the advances in technology and best practices that have emerged in the past five years. The company also launched the Energy Logic 2.0 Cascading Savings Calculator, an online tool that allows data center managers to calculate the approximate energy savings they would capture by employing strategies in the updated approach.
With the inclusion of new technologies and best practices, Energy Logic 2.0 illustrates how the energy consumption of a 5,000 square-foot data center could be cut by up to 74 percent using available technologies. It accomplishes this by leveraging the cascade effect, the cornerstone of the Energy Logic approach.
The cascade effect quantifies how savings at the IT component level are magnified in the supporting systems, recommending an overall approach that focuses on optimizing the efficiency of core IT systems to drive the greatest savings. In a data center with a PUE of 1.9, a 1 W savings at the server processor creates a 2.84 W savings at the facility level as a result of the cascade effect. At higher PUEs, the savings is even greater.
What’s New in Energy Logic 2.0
The original Energy Logic approach in 2007 was designed and tested on a 5,000-square-foot model data center. This year, Emerson Network Power again used a model data center of the same size to build the Energy Logic 2.0 roadmap. This time, however, recent technological advancements have enabled even greater energy savings.
- Energy Logic 2.0 shows how the energy consumption of the base data center can be reduced from 1,543 kW to 408 kW.
- Key strategies, such as high-efficiency server components, power architecture improvements, and temperature and airflow management have been updated to reflect recent technological advances.
- The base data center takes into account the rapid adoption of server virtualization. The optimized Energy Logic data center in 2007 assumed 20 percent virtualization, whereas the un-optimized data center in 2012 assumes 30 percent virtualization. In addition, server consolidation and virtualization are now treated as one strategy because they typically happen in concert.
- Information and Communications Technology (ICT) architecture is highlighted as an emerging best practice that delivers energy savings by optimizing IT and networking architecture.
- The emergence of data center infrastructure management (DCIM) is incorporated throughout the Energy Logic strategy, because DCIM provides the visibility and control required to fully leverage multiple Energy Logic strategies, including server power management, virtualization, power architecture, and temperature and airflow management.
In addition to the cascade effect, Emerson Network Power has quantified the “reverse” cascade effect: the total energy wasted by stranded capacity. Just as 1 W of savings at the server component level can save 2.84 W at the facility level, 1 W of energy wasted on an unproductive server creates an additional 1.95 W of waste at the facility level.
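The cascade-effect figures imply a simple two-stage multiplier: a watt saved at the processor is magnified first inside the server (power supply and fan losses avoided), then by the facility overhead captured in PUE. A sketch of that model (the ~1.49 server-level multiplier is inferred here from the release's 2.84 W figure at PUE 1.9; Emerson does not state it directly):

```python
# Cascade-effect sketch: processor-level savings ripple outward through
# the server's power conversion and the facility's power and cooling.

SERVER_MULTIPLIER = 1.49  # processor watts -> server input watts (inferred)

def facility_savings(processor_watts, pue=1.9):
    """Facility-level watts saved per watt saved at the processor."""
    return processor_watts * SERVER_MULTIPLIER * pue

print(round(facility_savings(1.0), 2))           # ~2.83 W, matching the quoted 2.84 W
print(round(facility_savings(1.0, pue=2.5), 2))  # higher PUE, larger cascade
```

The same multiplier logic runs in reverse for stranded capacity: a watt drawn by an unproductive server drags the same support-system overhead along with it.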
“Energy Logic 2.0 clearly shows there still are great opportunities to optimize the data center,” said Jack Pouchet, vice president of business development and director of energy initiatives for Emerson Network Power. “Energy efficiency remains a priority, and a new generation of management technologies that provide greater visibility and control of data center systems has arrived. The data center industry is better positioned than ever to make a serious impact in reducing overall data center energy consumption.”
Using the Cascading Savings Calculator
Data center managers can use the Cascading Savings Calculator to explore the impact Energy Logic 2.0 strategies might have on their facility. Users enter their compute load and facility PUE, then adjust sliding scales for nine strategies to show percent utilization. Based on this information, the calculator will approximate a strategy’s impact on compute load, PUE, and total energy and cost savings. The strategies data center managers can explore are:
- Low-Power Components
- High-Efficiency Power Supplies
- Server Power Management
- ICT Architecture
- Virtualization and Consolidation
- Power Architecture
- Temperature and Airflow Management
- Variable-Capacity Cooling
- High-Density Cooling
By trying out the Cascading Savings Calculator, data center managers can see how implementing each strategy – and varying degrees of each strategy – might impact their savings.
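As a rough illustration of how such a calculator can work, the sketch below applies IT-side strategies multiplicatively (each strategy shrinks the load left for the next, which is the cascade) and then applies PUE for the facility total. The savings fractions, the 800 kW example load, and the improved PUE are illustrative placeholders, not Emerson's figures:

```python
# Hypothetical cascading-savings model in the spirit of the
# Energy Logic 2.0 calculator. All numbers below are placeholders.

def cascaded_load(compute_load_kw, it_savings):
    """Apply IT-side strategy savings multiplicatively."""
    for fraction in it_savings:
        compute_load_kw *= (1 - fraction)
    return compute_load_kw

def facility_energy(compute_load_kw, pue):
    """Total facility draw implied by an IT load and a PUE."""
    return compute_load_kw * pue

it_strategies = {
    "low-power components": 0.10,
    "high-efficiency power supplies": 0.05,
    "server power management": 0.10,
    "ICT architecture": 0.05,
    "virtualization and consolidation": 0.30,
}

before = facility_energy(800, pue=1.9)      # 1,520 kW facility draw
after_it = cascaded_load(800, it_strategies.values())
after = facility_energy(after_it, pue=1.5)  # cooling/power strategies cut PUE too
print(round(before), round(after))
```

With these placeholder fractions the model yields roughly a 60 percent reduction; the 74 percent figure in the release (1,543 kW to 408 kW) reflects Emerson's own strategy assumptions for the model data center.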
Nimbus Data Launches Powerful New Flash Memory Arrays to Enable Future-proof Ultra-efficient Data Centers
San Francisco, CA – Nimbus Data Systems, Inc., the leader in Sustainable Storage®, today introduced its Gemini flash memory arrays, setting new industry standards in solid state resiliency, performance, and data center efficiency. In unveiling this new platform, Nimbus debuts several patent-pending design achievements in hardware and software that offer compelling operational and economic advantages for virtualization, database, and cloud initiatives:
- 10-year endurance that supports up to 1 PB of weekly data writes without performance loss
- No-single-point-of-failure with hot-swap redundant controllers and self-healing flash drives
- Patent-pending Parallel Memory Architecture capable of 12 GBps and over 1 million IOps
- Software-configurable ports to support the highest Ethernet, Infiniband, and Fibre Channel speeds
- Up to 48 TB of flash capacity in 2U, achieving 1 PB and 20 million IOps per rack
“The new Gemini flash memory array from Nimbus certainly sets a high bar in terms of completeness and vision for flash storage arrays,” stated Mark Peters, senior analyst at ESG. “Gemini stands out from the crowd with its architecture designed from the ground up and seamless multiprotocol versatility. The flexibility, performance and efficiency needs of contemporary and future applications will increasingly require data centers to deploy capabilities inherent in designs such as those demonstrated by Nimbus.”
The new Gemini system meets the rigorous availability and performance requirements of server virtualization, databases, VDI, and IO-intensive data processing applications. The fully-redundant “hot-swap everything” design offers dual controllers, non-disruptive software updates, and intuitive lights-out management in one system – no bulky external controllers, gateways, or cabling required. Unlike IO-bound off-the-shelf servers and disk arrays that fail to harness the performance potential of solid state technology, Gemini offers 6x more bandwidth through a non-blocking design. This Parallel Memory Architecture (PMA) enables all flash drives to operate at full speed in unison, providing superior scalability for virtualized infrastructures that depend on low-latency, high-throughput storage.
Gemini supports all major storage networking technologies at their fastest available data rates, including Ethernet (1/10/40 GbE), Infiniband (20/40/56 Gb), and Fibre Channel (4/8/16 Gb). Gemini’s ports are also software-programmable, switching instantly from Ethernet to Infiniband or from Fibre Channel to Ethernet simply by changing transceivers. This dual-personality capability offers flexibility should storage networking requirements change in the future. All major block and file storage protocols are supported in-the-box, including iSCSI, FCP, SRP, NFS, and SMB, giving customers choice and the investment protection to adapt to changing needs effortlessly.
To enable service providers to defer data center capital expenditures, Gemini packs 8x more capacity per rack than 15K RPM disk arrays, without resorting to cumbersome top-loading designs that complicate field service. Gemini also utilizes a fraction of the power and generates significantly less heat than 15K RPM disk arrays, slashing power and rackspace costs by 85%. Delivering up to 20 million IOps in one rack, Gemini flash memory arrays enable a 30:1 consolidation in rackspace on a performance basis, giving customers a powerful weapon against unrelenting data growth.
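The per-rack claims follow from simple arithmetic on the 2U numbers, assuming a standard 42U rack fully populated (an assumption; the release does not state the rack size):

```python
# Rack-density sketch for the Gemini 2U system figures.
RACK_U = 42                  # standard full-height rack (assumption)
SYSTEM_U = 2                 # rack units per Gemini system
TB_PER_SYSTEM = 48           # maximum flash capacity per system
IOPS_PER_SYSTEM = 1_000_000  # "over 1 million IOps" per system

systems_per_rack = RACK_U // SYSTEM_U           # 21 systems
tb_per_rack = systems_per_rack * TB_PER_SYSTEM  # 1,008 TB, roughly 1 PB
iops_per_rack = systems_per_rack * IOPS_PER_SYSTEM

print(systems_per_rack, tb_per_rack, iops_per_rack)
```

Twenty-one systems at 48 TB each give 1,008 TB, consistent with the "1 PB per rack" claim, and the quoted 20 million IOps per rack is consistent with roughly 1 million IOps per system.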
Gemini incorporates Nimbus’ new Flash Lifecycle Management technology, which builds on over two years of production deployments of Nimbus solutions at some of the most demanding Global 5000 customers. Wear-leveling is performed system-wide, not just at the drive level, eliminating hot-spots and delivering consistent performance. This technology enables Gemini flash arrays to deliver up to 10 years of endurance, surpassing disk arrays in reliability. An intuitive graphical dashboard displays real-time and historical flash utilization, offering customers unique insight into the health of the storage infrastructure.
Gemini flash memory arrays feature Nimbus’ HALO software stack, a total system management solution offering thin provisioning, deduplication, replication, snapshot, and encryption capabilities. Included without any licensing fees, HALO is easy to use, enabling complete system setup in minutes.
“The ground-breaking advances in Gemini are the culmination of two years of nonstop hardware and software engineering and extensive feedback from our loyal customer base,” stated Thomas Isakovich, CEO and founder of Nimbus Data. “We believe that we have set the new standard in flash memory storage, enabling efficiency and performance that unequivocally redefine data center infrastructure.”
Gemini flash memory arrays are available in single and dual-controller configurations with either 4 x QSFP ports per controller (for Ethernet and Infiniband) or 4 x SFP+ ports per controller (for Ethernet and Fibre Channel). Capacity ranges from 6 to 48 TB per 2U system, before data reduction. The new Gemini flash memory arrays will be generally available in Q4 2012.
About Nimbus Data Systems
Nimbus develops award-winning Sustainable Storage® systems, the most intelligent, efficient and fault-tolerant solid state storage platform engineered for server and desktop virtualization, databases, HPC, and next-generation cloud infrastructure. Combining low-latency flash memory hardware, comprehensive data management and protection software, and highly-scalable multiprotocol storage features, Nimbus systems deliver dramatically greater performance at a significantly lower operating cost than conventional disk-based primary storage arrays, all at a comparable acquisition cost. For more information, visit www.nimbusdata.com, or follow Nimbus at twitter.com/nimbusdata.
“Expanding our global storage strategy to include the cloud makes a lot of sense from both a financial and a maintenance perspective,” said Jerry Bogart, Infrastructure Manager at ChemPoint.com. “However, we really needed a solution that was simple to buy, simple to implement and simple to maintain. We found this with the TwinStrata CloudArray Subscription – we get easy, secure access to unlimited cloud storage for each of our offices globally without having to deal with multiple vendors or big upfront costs.”
Designed to address the needs of both small and large organizations, CloudArray Subscription delivers accessibility, security, performance and simplicity, enabling organizations to support multiple concurrent use cases and seamlessly use cloud storage as another tier of storage. Already organizations such as ChemPoint.com and CoreIP Solutions have begun using CloudArray Subscription.
In addition to obtaining a robust storage infrastructure, premium support, cloud snapshots and multi-tenant capabilities, customers of the new offering get tiered pay-as-you-go pricing and access to CloudArray software or hardware cloud storage gateways for the same flat monthly rate, without upcharges for bandwidth usage or other variable fees.
“Our customers turn to TwinStrata because we make cloud storage easy, painless and worry-free,” said Nicos Vekiarides, CEO of TwinStrata. “The introduction of CloudArray Subscription builds further on our vision. Now we can accommodate you no matter how you want to buy cloud storage – as a complete offering or by bringing your own cloud – through a monthly subscription or a perpetual license – ultimately giving you peace of mind.”
Pricing and Availability
The CloudArray Subscription is offered in addition to TwinStrata’s perpetual license cloud storage gateways. With a starting price of just $0.19/GB, CloudArray Subscription is available now and includes both Google Cloud Storage and TwinStrata’s software and/or hardware gateways. More information about CloudArray Subscription can be found at www.twinstrata.com/subscription. Download a 14-day free trial of CloudArray Subscription at: http://www.twinstrata.com/subscription-trial
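At the quoted starting rate, a rough monthly cost estimate scales linearly with provisioned capacity. A sketch (the flat rate and the binary TB-to-GB conversion are assumptions; actual tiered pay-as-you-go pricing may differ):

```python
RATE_PER_GB = 0.19  # quoted starting price, USD per GB per month

def monthly_cost(tb):
    """Estimated monthly subscription cost for tb terabytes of storage."""
    gb = tb * 1024  # binary conversion; some vendors bill per 1,000 GB
    return gb * RATE_PER_GB

print(f"${monthly_cost(10):,.2f}")  # 10 TB at the starting rate
```

At that rate, 10 TB works out to just under $2,000 per month before any tiered discounts.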
About TwinStrata CloudArray
CloudArray virtual or physical appliances take minutes to configure and integrate public cloud, private cloud and local or remote storage devices into flexible “Cloud SANs” that provide unlimited storage, continuous data protection, tapeless offsite backup, instant disaster recovery, and branch office storage consolidation. CloudArray appliances are available from TwinStrata with software appliances available for immediate free download at www.twinstrata.com/CloudArray-download.
About TwinStrata, Inc.
TwinStrata is an innovator in enterprise-class data storage, data protection and disaster recovery/business continuity solutions using cloud storage. With TwinStrata CloudArray®, companies of all sizes can simply and economically leverage the scalability and efficiency of cloud storage while maintaining the availability, performance and security of local storage. CloudArray software and hardware solutions support all file and operating systems, and deliver substantial advantages over traditional off-site storage solutions, including a pay-as-you-go model, unlimited elastic capacity, local performance, in-cloud snapshots and disaster recovery, dynamic caching, automated policies, AES256 encryption, and continuous access to data. For more information visit TwinStrata.com or call +1 508-651-0199.
SunGard Availability Services Outlines How to Address the Challenge of Recovering Hybrid Environments
Written by Mike McClain, Senior Web Designer & Site Manager
Wayne, Pa. — Virtualization technology has changed the landscape of IT and data centers, delivering substantial benefits not only to production environments but also to disaster recovery. However, while data centers are becoming increasingly virtualized, most IT operations are a mix of physical and virtual systems – a hybrid environment. According to Gartner, “as of early 2012, 50 percent of all installed workloads were running in VMs.”*
While newer applications may run exclusively on virtual workloads, there are still many mission-critical applications running on a combination of mainframes, Windows servers, Linux/Unix systems and virtual machines. And managing a recovery site requires enterprises to purchase a whole new set of costly application software licenses for the secondary location.
This reality has created an IT issue that is still flying “under the radar” of many IT organizations: How to best protect and recover applications in hybrid environments – and do it in a way that works within business and cost constraints?
“The challenge of recovering production operations that support heterogeneous mixes of business applications, virtual and physical computing platforms, supporting middleware, storage systems and database managers, all of which have complex interdependencies upon each other, is an extremely complex one,” said John Morency, research vice president, Gartner. “Ensuring that the entire recovery execution is 100 percent consistent with the operations restoration expectations of the business across all recovery tiers raises this bar even further. Currently, there is no one single technology that completely addresses this challenge. This means that world class execution, coupled with the right set of technologies, will continue to be required in order to make recovery management both effective and sustainable.”
The three top challenges for enterprises looking to recover hybrid environments are addressing their needs to:
- Recreate a multi-layer, multi-platform hybrid stack for each mission-critical application.
- Recover mission-critical applications within the time requirements needed to avoid unacceptable consequences to the business (recovery time objective – RTO).
- Avoid busting the IT budget on CAPEX for building a secondary site for recovery and OPEX for maintaining the site.
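The RTO constraint in the second point can be made concrete as a per-application check: each mission-critical application has a maximum tolerable downtime, and a tested recovery either fits inside that window or it does not. A sketch, with hypothetical application names and timings:

```python
# Minimal RTO compliance check. RTO (recovery time objective) is the
# maximum tolerable downtime per application. The application names
# and targets below are hypothetical examples, not SunGard data.

from datetime import timedelta

rto_targets = {
    "e-commerce": timedelta(hours=4),
    "reporting": timedelta(hours=24),
}

def meets_rto(app: str, measured_recovery: timedelta) -> bool:
    """True if a tested recovery finished within the app's RTO."""
    return measured_recovery <= rto_targets[app]

# A 3-hour recovery fits the 4-hour e-commerce RTO;
# a 30-hour recovery misses the 24-hour reporting RTO.
print(meets_rto("e-commerce", timedelta(hours=3)))
print(meets_rto("reporting", timedelta(hours=30)))
```

Multiplied across dozens of applications with different targets, this is the bookkeeping that makes hybrid recovery planning hard to do by hand.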
Why Recovery in Hybrid Environments is So Difficult
To better understand the complexity and difficulty in managing recovery in hybrid environments, let’s examine a typical three-tier web application – for instance, an e-commerce application. The application may have a database layer that runs on two different systems – a Linux system running Oracle and a Microsoft Windows server running SQL. Next, the middleware – or business logic – of the application could be on a Win2K server running WebLogic, and its job is to aggregate data from the Oracle and SQL servers. Lastly, the application has a web layer on an ESX server running Apache.
Add into this scenario some of the hardware supporting the application. For example, the web and middleware tiers are stored on an EMC SAN device, with the Oracle database on a NetApp SAN device and the SQL server on a Dell storage device.
Here is what this enterprise faces: multiple storage platforms, multiple compute platforms, multiple operating systems, and a mix of physical and virtual environments. So when a disaster or outage hits, if the enterprise has not created the identical physical and virtual stacks in its recovery environment to accommodate all three layers, the recovery will fail.
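The stack just described can be written down as a simple inventory, which makes the failure mode explicit: the recovery site must reproduce every layer of production, and any missing layer sinks the recovery. The data structure below is an illustrative sketch based on the example above, not SunGard tooling:

```python
# Inventory of the three-tier e-commerce application described above,
# as (tier, os/software, platform, storage) tuples. Recovery succeeds
# only if the recovery site provides a match for every layer.

production = {
    ("database", "Linux/Oracle", "physical", "NetApp SAN"),
    ("database", "Windows/SQL Server", "physical", "Dell storage"),
    ("middleware", "Win2K/WebLogic", "physical", "EMC SAN"),
    ("web", "ESX/Apache", "virtual", "EMC SAN"),
}

def missing_layers(recovery_site: set) -> set:
    """Layers present in production but absent at the recovery site."""
    return production - recovery_site

# A recovery site that omits the web tier cannot recover the application.
incomplete = production - {("web", "ESX/Apache", "virtual", "EMC SAN")}
print(missing_layers(incomplete))
```

Any non-empty result means a failed recovery for this application; with 50 or 100 applications, each contributes its own inventory that must be kept complete.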
If the enterprise has the wrong version of VMware’s hypervisor running in the recovery environment, the recovery will fail. If it has the wrong hypervisor running in the recovery environment (say, Xen), the recovery will fail. If the enterprise only has the ability to recover the database layer by itself, or both the database and middleware layers without the web layer, the recovery will fail.
And now add in another level of complexity. The previous scenario is just one application. What if the organization has 50, 80 or even more than 100 applications to recover?
As enterprises examine the challenge of recovering a large number of important applications – all with aggressive recovery time objectives – the reason why recovery in hybrid environments is so difficult becomes very clear.
SunGard Availability Services recommends organizations address the following set of questions when developing a recovery strategy for hybrid environments:
- Is your production environment 100 percent virtualized, or do you run a hybrid environment with multiple platforms, operating systems, hypervisors and storage technologies?
- Do you have a full understanding of your recovery environment? Is it compatible with your production environment in terms of platforms, operating systems, hypervisors, storage and application data? Do you understand all the interdependencies within your mission-critical applications?
- Do you have the diverse skills and the automation technologies to be able to recover all of your applications in an application-consistent way and be able to meet the RTOs and recovery point objectives (RPOs) for all of your applications?
- Have you created the processes and procedures to recover your hybrid environment? Have you tested your ability to meet your RTOs?
- Is your disaster recovery runbook current? In particular, have all production configurations been captured in the recovery environment – addressing change management?
What’s Needed to Achieve Recovery in Hybrid Environments
In order to support recovery of a hybrid environment, an enterprise needs to have in place:
- The right technologies for each platform and operating system at a secondary site.
- A well-documented disaster recovery playbook that contains all recovery processes.
- The right staff and expertise (a multi-discipline team skilled in VMware, Oracle, Windows, storage technologies and more) – trained and tested in running the playbook.
- Change management processes in place so all changes in production configurations – which happen frequently in enterprises – make their way into the recovery environment.
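The change-management point above is where recoveries quietly go stale: a version upgrade in production that never reaches the recovery environment is exactly the “wrong hypervisor version” failure described earlier. A hedged sketch of a configuration-drift check (keys and version numbers are hypothetical):

```python
# Sketch of a configuration-drift check between production and the
# recovery environment. Any production change not yet propagated to
# the recovery site shows up as drift. Values are illustrative only.

production_config = {
    "vmware_version": "5.1",
    "oracle_version": "11.2.0.3",
    "apache_version": "2.2.22",
}
recovery_config = {
    "vmware_version": "5.0",  # stale: production was upgraded
    "oracle_version": "11.2.0.3",
    "apache_version": "2.2.22",
}

def drift(prod: dict, recovery: dict) -> dict:
    """Settings whose recovery-site value differs from, or is missing
    relative to, the production value."""
    return {k: (v, recovery.get(k))
            for k, v in prod.items()
            if recovery.get(k) != v}

print(drift(production_config, recovery_config))
```

Running a check like this on every production change, and feeding the differences back into the disaster recovery runbook, is one way to keep the runbook current.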
The challenges created by the complexity of applications-based recovery of hybrid environments often drive enterprises to turn to specialized assistance, such as SunGard Availability Services. SunGard has unmatched operations staff expertise to plan, implement, test and operate recovery processes in multi-layer, multi-platform hybrid environments.
SunGard Site Recovery Manager-as-a-Service is a new SunGard offering launching in Fall 2012 that delivers VMware vCenter Site Recovery Manager as a service to provide vSphere Replication or storage-based replication of applications to a secondary SunGard site.
SunGard Recover2Cloud is a Disaster Recovery-as-a-Service (DRaaS) offering that delivers cloud-based managed recovery services backed by guaranteed service levels.
“As a company which has successfully recovered thousands of environments in the last thirty years, we've seen the range of things that can go wrong, and we try to plan ahead for future issues,” said Michael de la Torre, vice president of product management for recovery services, SunGard Availability Services. “Today, we're seeing that the need to recover multiple applications running in complex production environments – on both physical and virtual workloads, across multiple geographies, with multiple interdependencies, within specific RTOs – is the primary challenge. Our ability to recover complex, hybrid environments is absolutely a key differentiator for us.”
* Gartner, Inc., Top Five Trends for x86 Server Virtualization, Thomas J. Bittman, March 22, 2012.
About SunGard Availability Services
SunGard Availability Services provides disaster recovery services, managed IT services, information availability consulting services and business continuity management software. With approximately five million square feet of datacenter and operations space, SunGard Availability Services helps customers improve the resilience of their mission critical systems by designing, implementing and managing cost-effective solutions using people, process and technology to address enterprise IT availability needs. Through direct sales and channel partners, we help organizations ensure their people and customers have uninterrupted access to the information systems they need in order to do business. To learn more, visit www.sungardas.com or call 1-800-468-7483. Connect with us on Twitter, LinkedIn and Facebook.
SunGard is one of the world’s leading software and technology services companies. SunGard has more than 17,000 employees and serves approximately 25,000 customers in more than 70 countries. SunGard provides software and processing solutions for financial services, education and the public sector. SunGard also provides disaster recovery services, managed IT services, information availability consulting services and business continuity management software. With annual revenue of about $4.5 billion, SunGard is the largest privately held software and services company and was ranked 480 on the Fortune 500 in 2011. For more information, please visit www.sungard.com.