CSO — Over the past twenty years or more, corporations in nearly all industries have been outsourcing and offshoring at a breakneck pace.
Venture capital firms, public shareholders, financial firms of various stripes, and corporate executives are tempted by lower labor expenses, so they delegate accountability and responsibility to foreign parties. Often the money saved by offshoring goes straight into executives' pockets; they frequently receive bonuses, sometimes in the seven or eight figures, for cutting as much domestic labor as possible.
But the costs of this trend are steep.
First of all, with more and more Americans, Canadians, and other people in developed countries out of work, our economies are being hollowed out. That isn't reflected in the stock market -- not yet, anyway. But it will be, probably within the next decade. Many of the millions of chronically unemployed or underemployed (working at McDonald's or Walmart, for example) hold BAs, MAs, or even PhDs. Many more hold significant licenses and certifications in various trades.
"When outage horror stories take over headlines, executives tend to have kneejerk reactions and look to adopt whatever disaster recovery offering they can implement fastest," he says. "But every organization and location is unique, and failing to thoroughly assess your situation may lead you to adopt a solution that is expensive overkill or cheap and inadequate."
And while most IT executives and data management experts acknowledge that there is no single fail-safe solution for protecting and recovering data, they agree that there are certain steps organizations should take.
What precautions should companies take to protect critical files and applications in the event of disaster? Dozens of data storage, data management, and disaster recovery experts share their advice. Here are their top 12 suggestions for disaster-proofing data (files and applications).
CAMBRIDGE, Mass. — With the success of its free open online course system, called MITx, the Massachusetts Institute of Technology finds itself sitting on a wealth of student data that researchers might use to compare the efficacy of virtual teaching methods, and perhaps advance the field of Web-based instruction.
Since its inception several years ago, for instance, MITx has attracted more than 760,000 unique registered users from about 190 countries, university officials said. Those users have generated 700 million interactions with the school’s learning system and have contributed around 423,000 forum entries, many of them quite personal.
As researchers contemplate mining the students’ details, however, the university is grappling with ethical issues raised by the collection and analysis of these huge data sets, known familiarly as Big Data, said L. Rafael Reif, the president of M.I.T.
Interoperability testing reveals 300 percent faster backups vs. local disk
Newark, Calif. – March 5, 2014 – Tegile Systems, the leading provider of flash-driven storage arrays for virtualized server and virtual desktop environments, today announced that it has aligned with Veeam® Software, innovative provider of Protection for the Modern Data Center™, to provide organizations with an easy-to-use, easy-to-manage VM-aware backup and restore solution.
Tegile’s new generation of Zebi flash-driven storage arrays is designed to make server virtualization easier, faster, more reliable, more scalable and less expensive, allowing IT personnel to manage more hypervisors at a significantly lower cost than with standard hard disk-based arrays. In Tegile lab testing, Veeam Backup & Replication™ used with Tegile Zebi storage arrays completed backup jobs 300 percent faster, and restore operations 20 percent faster, than with standard local disk systems, helping organizations meet their most demanding recovery time objectives with as little manual management as possible.
“While virtualization offers a tremendous amount of benefits on the server side, its use can add layers of complexity to the storage and backup side of the equation,” said Warren Adair, Vice President of Information Technology at Donahue Schriber Realty Group. “With both Veeam and Tegile being VM-aware solutions, they simplify the management of virtual machines independently, but when used together, they take it to an even higher level. The combined attributes of Veeam and Tegile make it possible for us to concentrate on managing our business rather than our backup and storage processes.”
Veeam Backup & Replication provides fast, flexible, and reliable recovery of virtualized applications and data. It unifies backup and replication in a single solution, increases the value of backup, and reinvents data protection for VMware vSphere and Microsoft Hyper-V virtual environments.
Zebi arrays leverage the performance of SSDs and the low cost per TB of high-capacity disk drives to deliver five times the performance of legacy arrays while requiring up to 75 percent less capacity. Tegile has architected the performance benefits of SSDs throughout the data path, giving every application a performance boost. One-click creation of virtual machine–optimized storage can deploy hundreds of virtual machines and desktops in minutes, not hours.
“The simple fact is that trying to back up and store virtualized data with 20- to 30-year-old management techniques just isn’t going to cut it,” said Rob Commins, VP of Marketing at Tegile Systems. “The combination of Veeam’s VM-aware backup and Tegile’s VM-aware storage is a nice marriage. It allows IT staff to see which VM is associated with which array, simplifying management and providing ease of use while ensuring that backups and restores can occur orders of magnitude faster than with local disk.”
About Tegile Systems
Tegile Systems is pioneering a new generation of flash-driven enterprise storage arrays that balance performance, capacity, features and price for virtualization, file services and database applications. With Tegile’s Zebi line of hybrid storage arrays, the company is redefining the traditional approach to storage by providing a family of arrays that is significantly faster than all hard disk-based arrays and significantly less expensive than all solid-state disk-based arrays.
Tegile’s patented MASS technology accelerates the Zebi’s performance and enables on-the-fly de-duplication and compression of data so each Zebi has a usable capacity far greater than its raw capacity. Tegile’s award-winning technology solutions enable customers to better address the requirements of server virtualization, virtual desktop integration and database integration than other offerings. Featuring both NAS and SAN connectivity, Tegile arrays are easy-to-use, fully redundant, and highly scalable. They come complete with built-in auto-snapshot, auto-replication, near-instant recovery, onsite or offsite failover, and virtualization management features. Additional information is available at www.tegile.com. Follow Tegile on Twitter @tegile.
Did you know that in six years’ time there will be over 5,000 gigabytes of stored data for every individual on the planet? That’s the estimate from market research company IDC and digital storage enterprise EMC, who see worldwide data holdings doubling roughly every two years to reach 40,000 exabytes (40 trillion gigabytes) by 2020. Right now, in 2014, that means moving to extend and enhance data storage solutions appropriately, and to update those disaster recovery plans too. To store and manage all the data forecast to arrive, new techniques and technologies are available to blend with revamps of existing ones.
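The IDC/EMC figures above are easy to sanity-check with a little arithmetic. The sketch below assumes a world population of roughly 7.6 billion in 2020 (an assumption, not a figure from the article) and a clean two-year doubling:

```python
# Back-of-the-envelope check of the IDC/EMC projection quoted above.
TOTAL_EB_2020 = 40_000        # projected total: 40,000 exabytes by 2020
GB_PER_EB = 1_000_000_000     # 1 exabyte = one billion gigabytes
POPULATION_2020 = 7.6e9       # assumed world population, not from the article

total_gb = TOTAL_EB_2020 * GB_PER_EB        # 4.0e13 GB, i.e. 40 trillion GB
per_person_gb = total_gb / POPULATION_2020  # share of stored data per person

def projected_eb(year, base_eb=TOTAL_EB_2020, base_year=2020):
    """Total data in a given year, assuming a clean two-year doubling."""
    return base_eb * 2 ** ((year - base_year) / 2)

print(round(per_person_gb))  # 5263 GB per person, matching the >5,000 GB claim
print(projected_eb(2014))    # 5000.0 EB implied for 2014 under this model
```

Working the doubling backward from 2020 implies about 5,000 EB in 2014, consistent with the "doubling about every two years" trajectory the article describes.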
By now, cloud computing is a familiar resource at most enterprises. But like any data infrastructure or architecture, good enough won’t do, which is why many organizations are looking beyond mere deployment strategies and into full-blown optimization.
However, optimizing the cloud will not proceed along the same track as optimization of traditional data technology. For one thing, nearly all of the functionality in the cloud, at least as far as the enterprise is concerned, happens at the virtual layer or above. So rather than creating optimal environments through advanced hardware, the play here is in tighter integration of services and applications. At the same time, optimized platforms are no longer focused solely on enhancing PC or desktop productivity, but on mobile devices and both wired and wireless infrastructure.
In the time it took me to write this sentence, approximately 20 networks were hit with a cyberattack. No, it did not take me very long to write that sentence—it’s just that, according to the 2013 threat report from FireEye, a cyberattack is happening every 1.5 seconds.
Or, at least, that’s what happened in 2013, the period the report covered. The rate could be even higher now. After all, in FireEye’s 2012 Advanced Threat Report, companies experienced a malware attack "every three minutes."
Look at that difference over the course of one year. We went from enterprise networks being attacked every three minutes to every second and a half. For those who think the high-profile attacks we’ve seen over the past few months are an anomaly, think again. The enterprise is under attack, pure and simple. As the bad guys become more sophisticated and devise even trickier ways to sneak onto a network, next year’s FireEye report will show numbers that seem unimaginable right now.
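The year-over-year jump in the FireEye figures can be quantified in a couple of lines. This sketch assumes, hypothetically, that the opening sentence took about 30 seconds to type:

```python
# Sanity check on the FireEye attack-rate figures cited in the text.
SECONDS_PER_ATTACK_2012 = 3 * 60   # "every three minutes" (2012 report)
SECONDS_PER_ATTACK_2013 = 1.5      # "every 1.5 seconds"   (2013 report)

# How much more frequent did attacks become in one year?
increase = SECONDS_PER_ATTACK_2012 / SECONDS_PER_ATTACK_2013

# Attacks occurring while typing a sentence, assuming (hypothetically)
# the sentence took about 30 seconds to write.
attacks_while_typing = 30 / SECONDS_PER_ATTACK_2013

print(increase)              # 120.0 -- a 120x jump in frequency
print(attacks_while_typing)  # 20.0 -- matches the "approximately 20 networks" figure
```

A 120-fold increase in attack frequency in a single year is what makes the "unimaginable" projection for the following year's report plausible.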
The steady drip of data breaches on the news and in consumers' lives isn't doing anything to build confidence in the state of today's business environment. At the heart of the matter: data privacy, or perhaps more accurately, the lack of it.
A new report from PwC, "10 Minutes on Data Privacy," points out that privacy is evolving beyond a risk and regulatory issue. Winning consumer trust is essential, and privacy policies directly correlate with brand image. How businesses manage data privacy and communicate with customers goes a long way toward shaping public perceptions of trust.
According to the report, 89 percent of consumers surveyed said they avoid doing business with companies they believe do not protect their privacy online, and 85 percent of investors said boards should be involved in overseeing the risk of compromising customer data.
This week I want to examine in more detail the good news coming out of the 2014 Annual Report on the State of Disaster Recovery Preparedness from the Disaster Recovery Preparedness Council. Based on hundreds of responses from organizations worldwide, the report offers several insights into the best practices of companies that are better prepared to recover from outages and disasters.
You can download the report for free at http://drbenchmark.org/
I want to examine why some companies appear to be doing much better at preparing for outages by implementing more detailed DR plans.
TUCSON, Ariz. – On his 50th birthday, John Halamka, the CIO of Beth Israel Deaconess Medical Center in Boston, was surrounded by his senior staff, having cake. Then his second-in-command came in with "some" news.
A physician had gone to the Apple store and returned with a MacBook, downloaded email, and then left the office. When he returned, the new MacBook was gone. On it was a spreadsheet embedded in a PowerPoint with information on 3,900 patients, data for which the hospital was responsible.
The hospital issued a news release in which Halamka noted that the incident was being treated "extremely seriously," but also being used to bring about change: in this case, accelerating implementation of a program to help employees protect devices they purchase personally.