

Jon Seals

When should you bring in new technology? When it does a better job of meeting your needs, of course. It’s the same for business continuity management. Migrating from in-house physical servers to cloud computing services should be properly justified by, for instance, lower costs, higher reliability and better performance – without sacrificing data confidentiality, control or conformance. While cloud computing makes sense for many organisations, there are cases where it doesn’t (for example, cloud computing isn’t always cheaper). Looking at the following business criteria and then analysing what new-generation technology has to offer may be the smarter way to do things.

...

http://www.opscentre.com.au/blog/business-benefit-checklist-for-new-business-continuity-technology/

Suppose your business suffers a temporary disruption. (The cause of the disruption doesn’t matter; neither, necessarily, does the length of the disruption.) A disruption that impacts customers, prospects or finances (and almost every disruption – even one of a few minutes – will) may trigger compliance obligations. You may need to file an insurance claim. Or you may need to provide government or industry regulators with the details of how your organization dealt with the disruption.

Do your Business Continuity and Incident Management plans lay out the needs and requirements for documenting actions taken during disaster or other disruption?

Any business disruption will generate a flurry of activity. Will you be able to recall all of those actions once order has been restored? Or will you have to spend countless hours reconstructing what happened, who did what and how long each action took? It is unlikely you’ll be able to capture every action by every participant. And the longer the disruption lasts, the longer that list of actions will be.

...

http://www.ebrp.net/event-documentation-dont-leave-it-for-later/
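The record-keeping the article calls for is easiest when actions are timestamped as they happen, not reconstructed later. As a minimal, purely illustrative sketch (class and field names are assumptions, not from any BC tool), an append-only incident log might look like:

```python
from datetime import datetime, timezone

class IncidentLog:
    """Append-only log of actions taken during a disruption (illustrative)."""

    def __init__(self, incident_id):
        self.incident_id = incident_id
        self.entries = []

    def record(self, who, action):
        """Timestamp each action as it happens, not after the fact."""
        self.entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": who,
            "action": action,
        })

    def report(self):
        """Chronological record suitable for insurers or regulators."""
        return [f"{e['when']}  {e['who']}: {e['action']}" for e in self.entries]

log = IncidentLog("2014-10-07-outage")
log.record("j.smith", "Declared incident; activated BC plan")
log.record("ops-team", "Failed over database to DR site")
print("\n".join(log.report()))
```

The point is the discipline, not the tooling: any mechanism that captures who, what and when at the moment of action spares the "countless hours" of reconstruction afterwards.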

Tuesday, 07 October 2014 15:07

Consumers Are Growing Tired of Data Breaches

Two surveys have been released recently that show the way consumers think about enterprise data breaches.

The first survey, conducted by HyTrust, isn’t surprising. It found that the majority of consumers will take their business elsewhere after discovering their information was compromised in a breach. And consumers aren’t patient on this matter. For approximately 45 percent of survey respondents, data security is a one strike and you’re out deal – they aren’t going to wait around for your company to get its act together and fix the security holes.

Also, that 45 percent wants to see companies held liable for criminal negligence when a data breach occurs. Eric Chiu, president and co-founder of HyTrust, told eWeek that this may have been the most surprising statistic to come out of the survey, adding:

...

http://www.itbusinessedge.com/blogs/data-security/consumers-are-growing-tired-of-data-breaches.html

Tuesday, 07 October 2014 15:06

Global Footprints Require Global Storage

One of the primary benefits of the cloud is the ability to distribute data architectures across wide geographic areas. Not only does this protect against failure and loss of service, but it allows the enterprise to locate and provision the lowest-cost resources for any given data load.

But problems arise in the ability, or lack thereof, to manage and monitor these disparate resources, particularly as Big Data and other emerging trends require all enterprise data capabilities to be marshalled into a cohesive whole.

When it comes to storage, many organizations are attempting to do this through global file management, which is essentially putting SAN and NAS capabilities on steroids. The idea, as Nasuni and other promoters point out, is to extend resource connectivity across broadly distributed architectures while maintaining centralized control. This is not as easy as it sounds, however. Traditional snapshot and replication techniques must now work across multiple platforms and be free to make multiple versions of data that would overwhelm standard storage architectures. They must also be flexible enough to accommodate numerous performance levels, but not so unwieldy as to drive up costs by endlessly copying data sets for each new cloud deployment.

...

http://www.itbusinessedge.com/blogs/infrastructure/global-footprints-require-global-storage.html
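The versioning pressure described above – many snapshots without many full copies – is commonly relieved by content-addressed, copy-on-write storage, where snapshots share unchanged blocks. The following is an illustrative sketch of that general idea (not any vendor's actual design; all names are assumptions):

```python
import hashlib

store = {}  # content-addressed block store: hash -> bytes

def put_block(data: bytes) -> str:
    """Store a block once, keyed by its content hash (deduplication)."""
    key = hashlib.sha256(data).hexdigest()
    store.setdefault(key, data)
    return key

def snapshot(blocks):
    """A snapshot is just a list of block hashes; unchanged blocks are shared."""
    return [put_block(b) for b in blocks]

v1 = snapshot([b"header", b"payload-A", b"footer"])
v2 = snapshot([b"header", b"payload-B", b"footer"])  # only one block changed

# Two full snapshot versions exist, but only four unique blocks are stored.
print(len(store))
```

Because each new version costs only its changed blocks, keeping "multiple versions of data" need not overwhelm the underlying storage the way full copies would.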

Data can be a fundamental tool in disaster preparedness, but the insights aren’t always heeded. This was the observation of three emergency management experts from academia, government and the private sector in an exchange last week on natural disaster data.   

The trio, who spoke about data use for city resilience at the Atlantic CityLab Summit in Los Angeles, Sept. 29, said that an analysis of data shows an overwhelming need for infrastructure improvements, but states and cities typically take short-term savings over long-term protections against catastrophe.

Lucy Jones, a seismologist at the U.S. Geological Survey (USGS), is collaborating with Los Angeles to draft a seismic-resilience plan. She said the city is a prime example of what happens when there’s an abundance of data and an absence of investment in disaster preparation. About 85 percent of the city’s water supply is delivered by aqueducts across the southern San Andreas Fault – a fault line the USGS estimates will generate a major earthquake sometime in the next decade or so. The danger centers on indications that city aqueducts will break, leaving only a six-month supply of water reserves for residents, she said.

...

http://www.emergencymgmt.com/disaster/Is-Data-Best-Preparation-Natural-Disasters.html

“What if there was a case of Ebola in my community?” With the growing outbreak in West Africa, public health preparedness planners across the country are mulling this question as news broke that the CDC confirmed a case of Ebola in Texas and concerns grow over the threat posed by Ebola to global health security. This question is inevitably followed up with, “Are we ready?”

These are the types of questions that keep public health preparedness planners up at night. The reason these questions are so pressing right now is not only because of the alarming symptoms and mortality rate of Ebola, but also because of the continuous funding cuts that local health departments have faced since 2007. The United States is not West Africa, and Ebola is unlikely to have sustained transmission here because of better infection control in healthcare facilities, cultural differences, and protocols put in place by the Centers for Disease Control and Prevention (CDC) to stop the spread of the disease.

But while local health departments would do everything in their power to protect lives in the face of a public health emergency like Ebola, there are other consequences to a community tasked with responding to a public health emergency that are complicated by ongoing funding cuts. For example, even the containment, treatment, and contact investigation of a small number of Ebola patients would have the potential to quickly overwhelm local health department budgets, as per capita spending on public health preparedness has decreased by nearly 50 percent in just the past year. Administrative burdens often delay state and federal emergency response funding that supplements local budgets. Additionally, lack of funding has decreased the number of preparedness programs.

...

http://www.emergencymgmt.com/health/Have-Public-Health-Funding-Cuts-Impacted-Response-Capabilities.html

Business Continuity and IT Disaster Recovery planning tends to focus first on system and application recovery (Recovery Time Objective – RTO) and second on data recovery (Recovery Point Objective – RPO). That makes sense when you consider the order in which things are usually recovered – but does it really? Isn’t the data, the information, the lifeblood of the company? Isn’t that why it is called Information Technology and not just technology?

Customer information, financial data, product specifications, research data, procedures, accounts payable, forms – the list could go on and on – are what the company runs on.

I read two articles recently – Michael O’Dwyer’s “How snapshot recovery ensures business continuity” and Marc Staimer’s “Why Business Continuity Processes Fail and How To Recover Them.” Both share a lot of good information about improving data backup methods and timeliness. They explain how important the RPO is to disaster recovery planning and talk about backup and restore procedures, media, storage and locations. I would like to add some additional considerations for determining the RPO and developing recovery strategies that will meet the business need.

...

http://www.strategicbcp.com/blog/recovery-point-objective-rpo-considerations/
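The RPO discussed above can be made concrete with simple arithmetic: the worst-case data loss is the gap between the failure and the last completed backup before it. A minimal sketch, assuming backup completion timestamps are available (function names are illustrative):

```python
from datetime import datetime, timedelta

def worst_case_data_loss(backup_times, failure_time):
    """Data written since the last completed backup before the failure is lost."""
    completed = [t for t in backup_times if t <= failure_time]
    if not completed:
        return None  # no usable backup: loss is unbounded
    return failure_time - max(completed)

def meets_rpo(backup_times, failure_time, rpo):
    loss = worst_case_data_loss(backup_times, failure_time)
    return loss is not None and loss <= rpo

backups = [datetime(2014, 10, 7, h) for h in (0, 6, 12, 18)]  # every 6 hours
failure = datetime(2014, 10, 7, 16, 30)

print(worst_case_data_loss(backups, failure))              # 4.5 hours since the 12:00 backup
print(meets_rpo(backups, failure, timedelta(hours=6)))     # True
print(meets_rpo(backups, failure, timedelta(hours=1)))     # False
```

The same check, run against each data set's actual backup schedule, shows quickly which systems can and cannot meet the RPO the business has asked for.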

SAN MATEO, Calif. – Enterprises are moving more and more applications to the cloud. The use of cloud computing is growing and, by 2016, will become the bulk of new IT spend, according to Gartner, Inc.[i] 2016 will be a defining year for cloud as private cloud begins to give way to hybrid cloud, and nearly half of large enterprises will have hybrid cloud deployments by the end of 2017.

“While the benefits of the cloud may be clear for applications that can tolerate brief periods of downtime, for mission-critical applications, such as SQL Server, Oracle and SAP, companies need a strategy for high availability (HA) and disaster recovery (DR) protection,” said Jerry Melnick, COO of SIOS Technology Corp. (www.us.sios.com), maker of SAN and SANless clustering software. “While traditional SAN-based clusters are not possible in these environments, SANless clusters can provide an easy, cost-efficient alternative.”

According to Gartner, IT service failover automation provides end-to-end IT service startup, shutdown and failover operations for disaster recovery (DR) and continuous availability. It establishes ordering and dependency rules as well as IT service failover policies. The potential business impact of this emerging technology is high, reducing the amount of spare infrastructure that is needed to ensure DR and continuous availability, as well as helping ensure that recovery policies work when failures occur, thus improving business process uptime.[ii]
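The "ordering and dependency rules" Gartner describes amount to a dependency graph over services: failover must bring up each service only after the services it depends on. A purely illustrative sketch (the service names are assumptions, not from any product):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each service lists the services it depends on (illustrative names).
dependencies = {
    "database": set(),
    "app-server": {"database"},
    "web-frontend": {"app-server"},
    "reporting": {"database"},
}

# Failover startup must bring dependencies up first: topological order.
startup_order = list(TopologicalSorter(dependencies).static_order())
print(startup_order)  # e.g. ['database', 'app-server', 'reporting', 'web-frontend']

# An orderly shutdown during failover runs in the reverse order.
shutdown_order = list(reversed(startup_order))
```

Encoding the dependencies once and deriving the order automatically is what lets such tooling "help ensure that recovery policies work when failures occur" instead of relying on a manually maintained runbook sequence.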

Jerry Melnick says separating the truths and myths of HA and DR in cloud deployments can dramatically reduce data center costs and risks. He debunks these five myths:

Myth #1 - Clouds are HA Environments. Public cloud deployments, particularly with leading cloud providers, are high availability environments where application downtime is negligible.

  • The Truth - Redundancy is not the same as HA. Some cloud solutions offer some measure of data protection through redundancy. However, applications such as SQL Server and file servers still need additional configuration for automating and managing high availability and disaster recovery.

Myth #2 - Protecting business critical applications in a cloud with a cluster is impossible without shared storage. You cannot provide HA for Windows applications in a cloud using Windows Server Failover Clustering (WSFC) to create a cluster because it requires a shared storage device, such as a SAN. A SAN to support WSFC is not offered in public clouds, such as Amazon EC2 and Windows Azure.

  • The Truth - You can provide high availability protection for Windows applications in a cloud simply by adding SANless cluster software and configuring a WSFC environment. The SANless software synchronizes local storage in the cloud through real-time, block-level replication, providing applications with immediate access to current data in the event of a failover.

Myth #3 – Remote replication isn’t needed for DR. Applications and data are protected from disaster in the cloud without additional configuration.

  • The Truth - Cloud providers experience downtime and regional disasters like any other large organization. While providing high availability within the cloud will protect data centers from normal hardware failures and other unexpected outages within an availability zone (Amazon) or fault domain (Azure), data centers still need to protect against regional disasters. The easiest solution is to configure a multisite (geographically separated) cluster within a cloud and extend it by adding an additional node(s) in an alternate datacenter or different geographic region.
  • The Truth: Companies can use the

Myth #4 - Using the cloud is “all or nothing.”

Myth #5 - HA in a cloud has to be costly and complicated.

  • The Truth - A cluster for high availability in a cloud can be easily created using SANless clustering software with an intuitive configuration interface that lets users create a standard WSFC in a cloud without specialized skills. SANless clustering software also eliminates the need to buy costly enterprise edition versions of Windows applications to get high availability and added disaster protection, or, as described in Myth 4, to build out a remote recovery site.

[i] Gartner Says Cloud Computing Will Become the Bulk of New IT Spend by 2016. (http://www.gartner.com/newsroom/id/2613015)

[ii] Gartner Hype Cycle for IT Service Continuity Management, 2014. September 10, 2014. Analysts: John P. Morency, Carl Claunch, Pushan Rinnen.

About SIOS Technology Corp.

SIOS Technology Corp. makes SAN and #SANLess software solutions that make clusters easy to use and easy to own. An essential part of any cluster solution, SIOS SAN and #SANLess software provides the flexibility to build Clusters Your Way to protect your choice of Windows or Linux environment in any configuration (or combination) of physical, virtual and cloud (public, private, and hybrid) without sacrificing performance or availability. The unique SIOS #SANLess clustering solution allows you to configure clusters with local storage, eliminating both the cost and the single-point-of-failure risk of traditional shared (SAN) storage.

Founded in 1999, SIOS Technology Corp. (www.us.sios.com) is headquartered in San Mateo, California, and has offices throughout the United States, United Kingdom and Japan.

Tuesday, 07 October 2014 14:39

Permabit SANblox Now Available from EMC

Plug-and-play inline data efficiency appliance for Fibre Channel SANs available through EMC Select
CAMBRIDGE, Mass. – Permabit Technology Corporation, the innovative leader in data efficiency technology, today announced that it has joined the EMC Select Program. Under the partnership, EMC will sell SANblox™ through its sales and reseller channels, providing a ready-to-run, high performance data efficiency appliance for new and existing EMC Fibre Channel SANs. The SANblox appliance, which leverages Permabit’s award-winning Albireo VDO and HIOPS™ Compression software, provides ‘plug and save’ data reduction across a wide range of applications, including mixed virtual server, VDI, database (OLTP and data warehouse) and Big Data environments. EMC customers who purchase SANblox can increase the effective capacity of their SANs by 6X, drop effective cost by up to 85% and increase performance by up to 400%.

“We are committed to helping our partners expand market share by providing field tested data efficiency solutions that deliver competitive advantage and enable them to get to market quickly,” said Tom Cook, Permabit CEO. “We are thrilled to join the EMC Select Program and to make SANblox available to the market via the leading EMC sales force and expansive channel partner network.”

SANblox will become available during Q4 2014 through EMC Select. Look for up to date information at http://permabit.com/partners/oem-partner.

About Permabit

Permabit pioneers development of data efficiency technologies. Our innovative data deduplication, compression and thin provisioning products enable the world’s leading storage OEMs to cut effective cost, accelerate performance, reduce time to market and gain competitive advantage. Just as server virtualization revolutionized the economics of compute, our data reduction technologies are transforming storage economics, today. Permabit is headquartered in Cambridge, Massachusetts with operations in California, Texas, Florida, Korea and Japan. For more information, visit www.permabit.com.

First step in securing Payment Card Data 

DENVER, Colo. – ViaWest, the leading colocation, managed services and cloud provider in North America, today announces the launch of its KINECTed PCI Compliant Cloud, a purpose-built, audit-ready cloud solution created using industry-leading virtualization and security technology. This service offers protection for companies that accept, store, process or transmit credit card data.

In a 2013 study, the Ponemon Institute found that thirty-five percent of data breaches identified were the result of company negligence. The study also confirmed that companies with an incident response plan and a strong security posture can significantly decrease the cost per breached record. 

“The importance of choosing the right providers to help transition your company beyond compliance and into a secure infrastructure cannot be emphasized enough,” states Matt Getzelman, Director PCI Practice at CoalFire. “ViaWest’s attention to detail on their compliant cloud solutions and secure infrastructure provides a level of security in which customers can feel confident.” 

“We’ve designed our PCI Compliant Cloud solution from the ground up to satisfy the needs of customers who want to protect themselves against PCI DSS non-compliance,” says Jason Carolan, Chief Technology Officer at ViaWest. “Our virtual private cloud leverages a dedicated, secure network infrastructure that enables data security in the cloud without having to invest in additional hardware, software, or in-house compliance expertise.”

Backed by ViaWest’s team of security and compliance experts, KINECTed PCI Compliant Cloud comes complete with:

  • A fully audited infrastructure
  • Fully staffed compliance department
  • Security solutions to protect beyond compliance requirements
  • 24x7 customer support with highly trained engineers
  • Dedicated account manager
  • 99.9% availability SLA on compute resources
  • 15+ years of architecting individualized customer solutions

The ramifications of PCI DSS compliance extend far beyond organizations in the financial and e-commerce sectors. Businesses of all sizes that accept, store, process or transmit credit card data are impacted. By offering hybrid solutions, ViaWest enables a company to grow over time while only using what it needs. ViaWest’s security solutions and team of experts help to offer protection beyond compliance. For more information about KINECTed PCI Compliant Cloud, visit http://www.viawest.com/cloud-services/cloud-computing/kinected-pci-compliant-cloud.

 

About ViaWest

ViaWest is the leading colocation, managed services, and cloud provider in North America. We enable businesses to leverage both their existing IT infrastructure and emerging cloud resources to deliver the right balance of cost, scalability and security. Our data center services include a comprehensive suite of fully compliant environments, premium wholesale and retail colocation, private and public clouds and managed services. For additional information on ViaWest, please visit www.viawest.com or call 1-877-448-9378. Follow ViaWest on LinkedIn and Twitter, or visit their YouTube channel.