Enterprise Solution Increases Total Capacity by More Than 50% to a 448TB Full Backup in a Single Scale-out GRID System
With 14 Appliances in a GRID, the New Appliance Offers 3x the Ingest Performance and 10x the Restore Performance of Large Vendor Solutions at Half the Investment
LONDON – Today, ExaGrid Systems, a leading provider of disk-based backup solutions, announced the largest, most powerful appliance in its arsenal of backup solutions with data deduplication: the EX32000E.
Leveraging the strength of ExaGrid’s scale-out GRID technology, up to 14 EX32000E appliances can be combined in a single scale-out GRID, allowing for a 448TB full backup in a single system – a 52 percent increase in total capacity. This positions the EX32000E as one of the largest full-backup target systems on the market, with 882TB of usable storage and over 1PB of raw storage in a full GRID.
A single EX32000E has an ingest rate of 5.6TB to 7.5TB per hour, depending on the protocol used (CIFS, NFS, Veeam Data Mover, or OST). With OST, a full GRID of 14 appliances reaches an ingest rate of 105TB per hour.
That rate is three times the ingest performance of EMC Data Domain with Boost. And because ExaGrid’s unique landing zone keeps the most recent backups in their full, undeduplicated form, restores, recoveries, and VM boots are up to ten times faster than on inline deduplication appliances such as EMC Data Domain, which store only deduplicated data.
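As a quick arithmetic check, the GRID-level figures compose directly from the per-appliance numbers quoted above – a minimal sketch in Python (the constant names are ours):

```python
# Per-appliance EX32000E figures quoted above (TB, TB per hour).
FULL_BACKUP_TB = 32          # largest full backup one appliance accepts
USABLE_TB = 63               # usable capacity per appliance
RAW_TB = 72                  # raw capacity per appliance
OST_INGEST_TB_PER_HR = 7.5   # top single-appliance ingest rate (OST)

GRID_SIZE = 14               # maximum appliances in one scale-out GRID

print(f"Full backup: {FULL_BACKUP_TB * GRID_SIZE} TB")           # 448 TB
print(f"Usable:      {USABLE_TB * GRID_SIZE} TB")                # 882 TB
print(f"Raw:         {RAW_TB * GRID_SIZE} TB (over 1 PB)")       # 1008 TB
print(f"GRID ingest: {OST_INGEST_TB_PER_HR * GRID_SIZE} TB/hr")  # 105.0 TB/hr
```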
“We are excited to announce the EX32000E with 14 appliances in a single scale-out GRID. We have spoken with many large IT departments that understand the challenges of inline data deduplication with a scale-up storage model and are looking for a solution that provides faster backups, a fixed-length backup window as data grows, and fast restores – especially VM boots in seconds to minutes,” said Bill Andrews, CEO of ExaGrid.
The new appliance houses 72TB of raw and 63TB of usable capacity. It can take in a 32TB full backup, held undeduplicated in a front-end landing zone for fast restores and recoveries, while maintaining long-term historical versions in a deduplicated data repository.
“Our unique landing zone and scale-out approach provide restores and recoveries that are up to ten times faster than the inline deduplication approaches of other vendors, and two to three times the backup (ingest) performance of a scale-up approach, which only adds more disk as the data grows. The backup and restore performance of the EX32000E is unmatched – and at half the price, it is in a league all its own,” said Andrews.
The ExaGrid scale-out approach brings compute with capacity – adding processor, memory and bandwidth as well as disk – allowing the backup window to stay fixed in length even as data grows. This approach is unique to ExaGrid and makes it the only disk-based backup system that maintains a fixed backup window.
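The effect is easy to see if you treat the backup window as data volume divided by aggregate ingest rate. Below is a minimal illustrative sketch in Python: the 7.5TB-per-hour figure is the OST rate quoted above, while the growth scenario and the assumption of perfectly linear scaling are ours.

```python
# Illustrative comparison: scale-up keeps a single controller's
# throughput as data grows; scale-out adds a full appliance (and its
# throughput) for every 32 TB of full backup. Growth figures are
# hypothetical.
PER_APPLIANCE_TB_PER_HR = 7.5   # OST ingest rate per appliance
FULL_BACKUP_PER_APPLIANCE = 32  # TB of full backup per appliance

for data_tb in (32, 64, 128, 256):
    appliances = -(-data_tb // FULL_BACKUP_PER_APPLIANCE)  # ceiling division
    scale_up_hours = data_tb / PER_APPLIANCE_TB_PER_HR
    scale_out_hours = data_tb / (PER_APPLIANCE_TB_PER_HR * appliances)
    print(f"{data_tb:>3} TB -> scale-up {scale_up_hours:5.1f} h, "
          f"scale-out {scale_out_hours:4.1f} h")
```

Under those assumptions the scale-out window holds at roughly 4.3 hours regardless of data growth, while the scale-up window lengthens in step with the data.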
A Growing Network of Partners and Support
Understanding and appreciating the complexity of robust backup at organisations of all kinds, ExaGrid supports a growing number of backup applications and utilities.
At the enterprise level, ExaGrid works with a number of solutions, including Symantec NetBackup, EMC NetWorker, IBM TSM, and CommVault Simpana. Leveraging ExaGrid with any of these applications gives IT departments the best of all worlds with:
- The fastest ingest performance for short backup windows
- The most recent backups in their native, undeduplicated form in the landing zone for fast restores, recoveries, and VM boots
- A fixed backup window as data grows, because full appliances in a scale-out GRID add processor, memory, and bandwidth alongside disk capacity
ExaGrid also announced support for additional backup applications and utilities, increasing the number of supported applications, utilities, and database dumps to more than 25. In addition to backup applications already supported, such as Veeam, Symantec Backup Exec, arcserve, HP Data Protector, Oracle RMAN, SQL Dumps, and many others, ExaGrid has added support for:
- Symantec System Recovery
- Unitrends Enterprise Backup
- Unitrends Virtual Backup
Additionally, ExaGrid has reduced the form factor of two of its models, the EX5000 and EX7000 appliances, from 3U to 2U, saving valuable rack space in the datacentre.
About ExaGrid
Organisations come to us because we are the only company that implemented deduplication in a way that fixed all the challenges of backup storage. ExaGrid’s unique landing zone and scale-out architecture provide the fastest backup – resulting in the shortest fixed backup window, the fastest local restores, the fastest offsite tape copies, and instant VM recoveries – while permanently fixing the backup window length, all with reduced cost up front and over time. Learn how to take the stress out of backup at www.exagrid.com or connect with us on LinkedIn. Read how ExaGrid customers fixed their backup forever.
ACE Jumpstart will optimize data center performance
GLASSBORO, N.J. – DCIM Solutions, LLC (DCIM Solutions), a leading provider of Data Center Assessments and Infrastructure Optimization Solutions, today announced a strategic partnership with Future Facilities to incorporate ACE predictive modeling into DCIM Solutions’ Data Center Assessment Services.
The ACE Jumpstart Service assesses three critical indicators of optimal data center performance: Availability, Capacity and Efficiency. ACE scores a data center on how compromised its availability, physical capacity and cooling efficiency have become by analyzing and mapping the interrelationships among the three variables. The resulting ACE score indicates how well a data center is performing and, in turn, how costly the facility is to build and operate.
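The release does not disclose how the three indicators are actually combined into the ACE score. Purely as a hypothetical illustration of rolling three indicator ratings into one composite – the 0-100 scale and equal weighting here are our assumptions, not Future Facilities’ methodology:

```python
# Hypothetical sketch only: the actual ACE scoring methodology is not
# described in the release. This just illustrates combining three
# 0-100 indicator scores into a single composite rating.
def ace_score(availability: float, capacity: float, efficiency: float) -> float:
    """Equal-weighted composite of the three ACE indicators (assumed)."""
    return (availability + capacity + efficiency) / 3

# A facility strong on availability but with compromised cooling
# efficiency still rates poorly overall:
print(round(ace_score(availability=90, capacity=75, efficiency=40), 1))  # 68.3
```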
“This partnership provides immediate benefits for data center owners and operators that are looking to treat their data centers as a business unit,” said Dan McDougal, Managing Partner, DCIM Solutions, LLC. “Using the ACE Methodology, DCIM Solutions will be well equipped to help data centers of all sizes plan for capacity changes and prevent negative trends before they begin.”
Data inputs from ACE can also be synchronized with any DCIM suite or other system monitoring toolkit and are mapped to a powerful 3D Computational Fluid Dynamics (CFD) model to create a Virtual Facility (VF). The VF allows for precise simulations for a variety of critical operational decisions, including: airflow distribution, temperature, physical resource collision and electrical systems.
DCIM Solutions has more than a decade of experience perfecting the calibration process, which is integral to establishing ACE Goals and maximizing predictability. Through VF simulations, corrective measures are identified to bridge the gap between the data center’s current state and its target ACE Goals.
The calibrated VF produced by ACE Jumpstart will be imported into Future Facilities’ 6SigmaDC software and be available for immediate use, with a 90-day software license and formal training included. By engaging in an ACE Jumpstart assessment, data center owner-operators will be able to use simulation and predictive modeling throughout the life of their data center to stay on track toward their ACE Goals.
“Future Facilities is excited to partner with a data center infrastructure leader like DCIM Solutions,” said Sherman Ikemoto, Director, Future Facilities NA. “Through this partnership, ACE Jumpstart will be further optimized for the data center owner/operator. It’s gratifying to see the ACE Assessment becoming adopted as an important metric for data center efficiency.”
DCIM Solutions’ goal is to educate users on the opportunities to use predictive simulation to recover stranded capacity within the data center and maximize the real estate without sacrificing availability or efficiency.
Visit DCIM Solutions and Future Facilities at Booth 616 during the upcoming AFCOM Data Center World Expo in Orlando, FL, October 19-22. Data center experts will be available to provide information on ACE Jumpstart, the importance of modeling and predictive simulation, and the benefits of increasing data center efficiency through analysis within the Virtual Facility.
About DCIM Solutions, LLC
DCIM Solutions, LLC is the innovative leader for Data Center Infrastructure Optimization Solutions. With a focus on power, cooling, and space utilization, our products and services provide unparalleled optimization and efficiency resulting in cost avoidance, lower operating costs, and better utilization of assets. For more information visit www.nerdata.com.
Battery life is one issue that mobile device vendors, service providers and users are all well aware of. While it is still a hot issue, the dynamics have changed a bit over the last couple of years.
In the past, twin trends were seen as a tremendous problem. On one hand, applications and services were becoming more power-hungry and, on the other, devices were getting smaller. The small size of the device limits the size, and therefore the power, of the battery. This was seen as a looming threat to the very survival of the sector.
The pressure has eased a bit, however: The popularity of video on mobiles has led to a consistent growth in screen size, which means batteries can grow a bit.
Despite the tremendous gains it has made over the past decade, storage is still lagging behind its compute and networking counterparts in terms of speed and performance.
This isn’t an indictment of storage itself, mind you, as technologies like Flash and other forms of solid-state infrastructure have done wonders for both speed and throughput in advanced enterprise settings. Rather, it is in the support infrastructure surrounding physical storage where most of the bottlenecks remain.
Latency in the storage farm, in fact, is increasingly seen as an impediment to many higher-order data center functions, such as virtualization and cloud computing. According to a recent survey from PernixData, a vendor of server-side Flash solutions, about half of respondents said storage performance is a higher priority than additional capacity, while only 21 percent cited capacity as a priority. The survey also found upwards of 70 percent of respondents considering storage acceleration software to help boost performance. A key driver of this performance shortfall continues to be the proliferation of virtual machines, which tends to flood storage infrastructure with more requests than it can handle.
Rapidly developing computer technologies and the unrelenting evolution of cyber risks present one of the biggest challenges to the (re)insurance sector today. Liabilities from cyberattacks and threats to the data security of cloud computing and social media have become key emerging risks for carriers. The unprecedented rise in cyberattacks, together with the threat cyber risk poses to global supply chains, has seen the cyber insurance market grow significantly in recent years.
Client demand for cyber coverage has been growing, on average, 30% annually in the United States over the past several years, according to Marsh. While demand varies by industry, the one constant has been that more clients are investigating and analyzing their existing traditional insurance coverage and whether they need standalone cyber risk insurance coverage.
(MCT) — As scary as the Ebola incidents in Texas and the outbreak in Africa are, it's worth noting that nine years ago this month the country was confronting another outbreak that looked rather ominous, too: a deadly strain of influenza that had originated in birds in Asia.
The so-called bird flu elicited a widespread government response, including a white paper from then-President George W. Bush's White House laying out the strategies should the flu reach pandemic levels in the United States. There were worries at the time that the flu, which was passed from birds to humans, could mutate, turning into a flu pandemic similar to the one at the end of World War I that killed between 20 and 40 million people globally in 1918-1919.
Millions of birds were purposely killed to stop the disease, and the bird flu scare abated over that winter of 2005-2006.
Which disaster recovery measurements do you really need? The answer is the ones that are effective in helping you to plan and execute good DR. So your choice will naturally depend on your IT operations. The two ‘classics’ of the recovery time objective (RTO) and recovery point objective (RPO) are so fundamental that they apply to practically all situations. But suppose your organisation is running a service-oriented IT architecture with business applications like ERP using resources supplied by other servers. If some of the servers cannot be recovered satisfactorily, there may be a secondary impact elsewhere. How can you measure this situation and define a minimum acceptable level of recovery?
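One way to make that secondary impact measurable is to compute an effective RTO that propagates through service dependencies. The sketch below (Python) assumes services recover in parallel and that a service is usable only once everything it depends on is back; the service names, RTO figures, and dependency graph are illustrative only, not from any particular deployment.

```python
# Illustrative only: service names, RTOs, and the dependency graph are
# hypothetical. Assumes parallel recovery, so a service's effective RTO
# is its own RTO or the slowest dependency beneath it, whichever is longer.
RTO_HOURS = {"erp": 4, "db": 6, "auth": 2, "storage": 8}
DEPENDS_ON = {"erp": ["db", "auth"], "db": ["storage"], "auth": [], "storage": []}

def effective_rto(service: str) -> float:
    """Recovery time at which the service is actually usable again."""
    deps = DEPENDS_ON.get(service, [])
    return max([RTO_HOURS[service]] + [effective_rto(d) for d in deps])

# ERP's own RTO is 4 hours, but it is not usable until the storage
# tier (8 hours) underneath its database has been recovered:
print(effective_rto("erp"))  # 8
```

Comparing each service’s effective RTO with its standalone RTO shows exactly where a dependency would drag recovery below the minimum acceptable level.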
DALLAS — As a 26-year-old Dallas nurse lay infected in the same hospital where she treated a dying Ebola patient last week, government officials on Monday said the first transmission of the disease in the United States had revealed systemic failures in preparation that must “substantially” change in coming days.
“We have to rethink the way we address Ebola infection control, because even a single infection is unacceptable,” Thomas Frieden, director of the Centers for Disease Control and Prevention, said in a news conference.
Frieden did not detail precisely how the extensive, government-issued safety protocols in place at many facilities might need to change or in what ways hospitals need to ramp up training for front-line doctors or nurses.
By Matthew Neigh, Global Technology Evangelist, Cherwell Software
Today’s IT environments are complex, and the commoditization of IT is one of the driving forces. That complexity manifests in a variety of ways in the enterprise, but few are as vexing as “bring your own device” (BYOD).
BYOD is not only the future – it is already here. Organizations should expect the trend and its learning curve to steepen, and the time available to adapt to shrink sharply. That means IT organizations are responsible for laying the groundwork for today’s need: the creation and implementation of policy. Listed below are key factors you’ll want to consider as you move toward the creation and implementation phase.
(MCT) — If the Loma Prieta earthquake happened today, Buck Helm might have survived his Nimitz Freeway commute to watch his two youngest children grow up. Donna Marsden could have finished fixing up her Victorian home. Delores Stewart could have cheered on her beloved Oakland A's.
Twenty-five years later, the freeways and bridges that collapsed have been rebuilt to stand up to a quake even more powerful than the 6.9 magnitude Loma Prieta.
More than $22 billion in infrastructure upgrades have built a metropolitan area that is far safer and far more resilient than before. It's a testament to the power of long-term planning, born of the ashes of the tragedy — 25 years ago Friday.