Extends Phoenix Capabilities to Include Disaster Recovery; All-in-One Solution Offers Unprecedented Cost Savings
SUNNYVALE, Calif. – Druva, the leader in converged data protection, today announced new disaster recovery (DR) functionality extending Druva Phoenix, its converged, cloud-based data protection solution for enterprise infrastructure. Introduced a year ago, Phoenix was the first solution to provide backup and archival of both physical and virtual server environments directly to the cloud. With today’s announcement, Druva adds DR functionality, creating a single platform that supports multiple, often disparate data protection capabilities. The first-of-its-kind integrated public cloud offering eliminates software, expensive hardware, tapes and process complexity while saving enterprises anywhere from one-fifth to one-third of their total operating costs. Druva Phoenix is built on Amazon Web Services (AWS) and leverages the public cloud’s elasticity, global presence and security, so companies can store, protect and manage large volumes of data simply, efficiently and effectively.
“Companies have been forced to juggle multiple hardware and software resources – including on-site tape, secondary disk hardware and backup software – to manage, protect and secure data. This has created organizational silos and significant expense. Phoenix has been designed as a one-stop-shop for on-demand, infrastructure data protection services,” said Jaspreet Singh, CEO, Druva. “By adding DR to Phoenix’s existing public cloud backup and archival capabilities, these silos are eliminated, saving enterprises money and resources.”
“The convergence of ‘data protection’ and ‘cloud services’ continues to make increasing sense for organizations of all sizes, all of which are looking for better recovery agility while reducing costs and complexity,” said Jason Buffington, Principal Analyst, Enterprise Strategy Group. “Many IT professionals are asking ‘Why BaaS when you can DRaaS?’ – reflecting a growing recognition that most businesses cannot afford the downtime of waiting for data to be restored before business processes can resume, and cloud services can offer that agility. Druva continues to be an innovation leader in cloud-based data protection, so their broadening approach to cloud-centric backup, archive and disaster recovery within a single framework is a model that many should look at earnestly.”
Phoenix’s new DR capability enables organizations to continuously back up their VMware environments and automatically recover and spin up their virtual machines in the AWS public cloud when disaster strikes. This ensures business continuity and eliminates the need for additional dedicated on-premises software, storage or hardware, significantly reducing cost and improving agility. With its advanced configuration settings, Phoenix lets administrators define detailed policies that automate network and security failover to a DR environment, significantly reducing downtime. Administrators can also automatically spin up multiple copies of virtual machines across geographies and accounts for test and development automation.
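Druva does not publish its policy schema, but conceptually a DR automation policy of this kind pairs each protected VM with a target region, an instance size, and a mapping from production networks to DR networks. A minimal sketch of such a policy evaluator (all field names and values here are hypothetical, invented for illustration):

```python
# Hypothetical sketch of a DR failover-policy evaluator; the schema and
# field names are invented for illustration, not Druva's actual format.
def plan_failover(policy, vms):
    """Return one launch action per VM for the policy's DR target."""
    actions = []
    for vm in vms:
        # Per-VM overrides fall back to the policy-wide default rule.
        rule = policy["vm_rules"].get(vm["name"], policy["default"])
        actions.append({
            "vm": vm["name"],
            "region": rule["target_region"],
            "instance_type": rule["instance_type"],
            # Network failover: map the production subnet to its DR twin.
            "subnet": policy["network_map"][vm["subnet"]],
        })
    return actions

policy = {
    "default": {"target_region": "us-west-2", "instance_type": "m4.large"},
    "vm_rules": {
        "db01": {"target_region": "us-west-2", "instance_type": "r3.xlarge"},
    },
    "network_map": {"prod-subnet": "dr-subnet"},
}
vms = [{"name": "db01", "subnet": "prod-subnet"},
       {"name": "web01", "subnet": "prod-subnet"}]

plan = plan_failover(policy, vms)
```

Evaluating the policy ahead of time like this is what makes the spin-up automatic: when disaster strikes, the orchestrator simply executes the precomputed launch actions instead of waiting on a human runbook.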
Druva Phoenix unifies hot, warm and cold backup and archiving processes, performing backups and restores 20 times faster than competitive solutions to ensure companies meet their recovery objectives. Druva’s cloud architecture creates a single converged platform that scales to support multiple workloads, all while natively using AWS storage technologies. Since only a single copy of data is stored, the risks and costs associated with maintaining multiple copies are eliminated.
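Druva’s implementation details are not public, but single-copy storage of this kind is typically achieved with content-addressed deduplication: each data block is stored once under its content hash, and individual backups merely reference blocks rather than duplicating them. A toy sketch of the idea (class and method names are illustrative):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique block is kept exactly once."""

    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = {}    # hash -> block bytes (stored once)
        self.backups = {}   # backup name -> ordered list of block hashes

    def backup(self, name, data):
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)  # duplicate blocks add no storage
            refs.append(h)
        self.backups[name] = refs

    def restore(self, name):
        # Reassemble the original data from the referenced blocks.
        return b"".join(self.blocks[h] for h in self.backups[name])

store = DedupStore()
store.backup("mon", b"AAAABBBBCCCC")
store.backup("tue", b"AAAABBBBDDDD")  # shares two of three blocks with "mon"
assert store.restore("mon") == b"AAAABBBBCCCC"
# Only 4 unique blocks are stored for the 6 logical blocks across both backups.
```

Because later backups reference existing blocks instead of rewriting them, each incremental backup costs only the storage of genuinely new data – the property behind the “ever-incremental” backups described below.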
In addition to the new DR capabilities, Phoenix also provides:
- Unified Backup & Recovery for Physical and Virtual Environments – Removes the need for costly multi-vendor approaches and provides ever-incremental backups for greater storage flexibility and recovery.
- Seamless Archival – Automates storage management and provides infinite retention, ensuring data can be held and securely stored for decades.
- Test & Dev – Boots a copy of a virtual image in the cloud, allowing companies to create a replica of the production environment. Tests and validation can be run against a copy of the production data.
- Integrated Analytics – Converging multiple services gives administrators unified visibility into data and usage patterns.
In terms of cost, enterprises derive bottom-line benefits from the public cloud’s pay-as-you-go pricing model and a converged, cloud-first approach. Companies save on administration, licensing and maintenance costs, as the cloud removes the need for additional software, hardware, tapes, multiple vendors, gateways and more.
Druva Phoenix backup and archival is available today. Integrated disaster recovery is currently in limited availability, with general availability expected in 60 days at an additional charge. To learn more about Druva Phoenix, visit www.druva.com/phoenix.
It’s cold and flu season, joy of joys. But still, flu and all, That Guy is in the office, sneezing and coughing all over everything and everybody, sharing his germs with the whole team. Ick. Don’t be That Guy.
That Guy should be working at home, hacking and spluttering away from other people. No one wants to catch his flu or live in a full-body shroud of Purell, but due to restrictive IT policies and a dearth of secure remote-work options, he can only complete his work from the PC in his cube.
Modern enterprise data centers are among the most technically sophisticated facilities on earth. Ironically, they are also often bastions of inefficiency, with equipment utilization well below 10 percent and roughly 30 percent of servers comatose (using electricity but performing no useful information services). The operators of these facilities also struggle to keep pace with rapid changes in computing equipment deployments.
These problems have led to much attention being paid to improving data center management. While almost every enterprise data center has taken steps to improve its operations, virtually all are much less efficient, much more costly, and far less flexible than they could be. Those failings ultimately prevent data centers from delivering maximum business value to the companies that own them.
Well-managed data centers use what I call the three pillars of modern data center operations: tracking, procedures, and physical principles.
Doug Cutting, chief architect at Cloudera, and Mike Olson, the company's chief strategy officer and cofounder, were having dinner with their families at a restaurant on Jan. 28, during which Cutting blew out a candle and shared some champagne in honor of Hadoop's 10th anniversary.
Cutting developed Hadoop with Mike Cafarella as the two worked on an open source Web crawler called Nutch, a project they started together in October 2002. In January 2006, Cutting started a sub-project by carving Hadoop code from Nutch. A few months later, in March 2006, Yahoo created its first Hadoop research cluster.
In the 10 years that followed, Hadoop has evolved into an open source ecosystem for handling and analyzing Big Data. The first Apache release of Hadoop came in September 2007, and it soon became a top-level Apache project. Cloudera, the first company to commercialize Hadoop, was founded in August 2008. That might seem like a speedy timeline, but, in fact, Hadoop's evolution was neither simple nor fast.
IT organizations are quickly moving to embrace the notion of having multiple cloud computing options. The challenge now is figuring out which application workload to run where, based on the actual costs of running a workload on a specific cloud platform.
To make that simpler to ascertain, Cloud Cruiser has released a version of its cloud analytics software that can now be invoked as a software-as-a-service (SaaS) application. Rather than requiring customers to go to the trouble of setting up an application that is not going to be used every day, Cloud Cruiser now makes version 16 of its namesake application available as a service, says Andrew Atkinson, senior director for product marketing at Cloud Cruiser.
At present, Cloud Cruiser 16 is designed to make it simpler for IT organizations to identify the true costs of deploying application workloads on Amazon Web Services, Microsoft Azure and Google Cloud Platform. Atkinson says Cloud Cruiser might add support for other clouds down the road, but right now these three represent the lion’s share of customer demand for cloud services.
A data center is very much like a car – it needs maintenance to run smoothly and not break down in the middle of your journey. Resilience measures how well your facility can withstand and recover from failure, and increasing it boosts your uptime.
TechTarget defines data center resilience (or resiliency) as “the ability of a server, network, storage system, or an entire data center, to recover quickly and continue operating even when there has been an equipment failure, power outage or other disruption.”
Here are five ways data center operators can increase the resilience of their facility – and secure smooth operations without failure – by deploying best-of-breed data center infrastructure management (DCIM) solutions.
There is a lot of talk about the commodity data center these days, but this usually refers to the type of hardware that goes into building it.
Increasingly though, as more of the data infrastructure becomes virtualized and portable and enterprises at large gravitate toward cloud and colocation solutions, we are starting to see the data center itself treated as a commodity; that is, a thing to be bought and sold, hopefully for a profit.
Verizon Communications recently embraced this new paradigm by putting its substantial data center assets on the market for an asking price of $2.5 billion. The move is part of a broader strategy to divest itself of its landline businesses, and even a good number of its wireless towers, to concentrate instead on communication services. The nearly 50 data centers up for sale produce estimated annual revenue of about $275 million, and include the collection acquired from Terremark for $1.4 billion several years ago. AT&T is said to be exploring the sale of its data center assets as well.
(TNS) - Pregnant women take heed: You may want to postpone that spring break trip to Mexico or summer getaway to the Caribbean.
Health officials are advising women who are pregnant or trying to become pregnant to avoid traveling to certain parts of Mexico, Central America, South America and the Caribbean due to mosquito transmission of a virus that has been linked to a serious birth defect of the brain.
The Centers for Disease Control and Prevention issued a travel alert two weeks ago after health officials in Brazil reported links between the Zika virus and microcephaly in babies of mothers who were infected with the virus while pregnant.
(TNS) - The World Health Organization declared Monday that explosive growth of the mosquito-borne Zika virus — which has been spreading rapidly in the Americas and may be linked to birth defects — constitutes an international public health emergency, signaling a new phase in the global effort to battle the virus.
The United Nations health agency made the decision after convening a panel of experts in Geneva amid reports from Brazil linking the virus to microcephaly, a birth defect of the brain in which babies are born with abnormally small heads.
The recent “cluster” of microcephaly cases and other neurological disorders reported in Brazil followed a similar “cluster” in French Polynesia in 2014, WHO Director-General Margaret Chan said in a statement.
As our global online world evolves before our eyes, the topic of cybersecurity seems overwhelming to most people. Just as new innovative opportunities are announced daily, emerging cyberthreats can undermine online progress in virtually every area of life.
The official numbers from US-CERT regarding cyberattacks are daunting, with the number of reported incidents rising sharply in 2015 (see chart below).
So how can we get our arms around this problem of protecting the homeland from the bad actors in cyberspace? What issues are most pressing? How is the U.S. Department of Homeland Security addressing these challenges? What partnerships and new developments are important?