

Industry Hot News


You’ve taken the time to implement a disaster recovery (DR) plan for your company – you’re prepared for anything. You’ve covered all the milestones, including:

  • Performing a Business Impact Analysis (BIA) to determine the recovery times you’ll need for your applications.
  • Tiering your applications and documenting their interdependencies so you know which order your servers should be restored in.
  • Putting your recovery infrastructure in a geographically-diverse data center.
  • Creating a comprehensive recovery playbook and testing each and every step.

Bring on the storms … the floods … the power outages … you’re ready. But are you really?


To small business owners, the buzz words from the Big Data world (i.e., petabytes, zettabytes, feeds, analytics, etc.) seem very foreign indeed. According to research from the SMB Group, only 18 percent of small businesses currently make use of Big Data analytics and business intelligence solutions. On the other hand, midsize businesses have shown greater adoption, with 57 percent of those surveyed reporting that they use BI and analytics to gain actionable information.

However, many Big Data vendors have begun creating a better story for smaller businesses, focusing more on how they can use their tools to achieve deeper insight into business data to help them make more informed decisions. And the ones that listen to this retooled message will receive a decent payoff for their efforts.


Wednesday, 21 January 2015 00:00

High Performance Data Storage Tips

Talk to many data storage experts about high-performance storage and a good portion will bring up Lustre, which was the subject of a recent Lustre Buying Guide. Some of the tips here, therefore, concern Lustre, but not all.

Use Parallel File Systems

Parallel file systems enable more data transfer in a shorter period of time than their alternatives.

Lustre is an open source parallel file system used heavily in big data workflows in High Performance Computing (HPC). Over half of the largest systems in the world use Lustre, said Laura Shepard, Director of HPC & Life Sciences Marketing at DataDirect Networks (DDN). This includes U.S. government labs such as Oak Ridge National Laboratory’s Titan, as well as British Petroleum’s system in Houston.
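The performance win in a parallel file system comes from striping: a file is broken into fixed-size chunks distributed round-robin across multiple storage targets, so many servers can read and write one file at once. The toy Python sketch below only illustrates the idea; real Lustre striping happens inside the file system itself, and the `stripe_count`/`stripe_size` names here merely echo Lustre terminology.

```python
# Toy illustration of striping a byte stream across several storage
# targets, loosely modeled on how parallel file systems split files.
# This is a conceptual sketch, not how Lustre is actually used.
def stripe(data: bytes, stripe_count: int, stripe_size: int) -> list[bytes]:
    """Distribute data round-robin across stripe_count targets."""
    targets = [bytearray() for _ in range(stripe_count)]
    for i in range(0, len(data), stripe_size):
        targets[(i // stripe_size) % stripe_count].extend(data[i:i + stripe_size])
    return [bytes(t) for t in targets]

def unstripe(targets: list[bytes], stripe_size: int, total_len: int) -> bytes:
    """Reassemble the original byte stream from the stripes."""
    out = bytearray()
    offsets = [0] * len(targets)
    i = 0
    while len(out) < total_len:
        t = i % len(targets)
        out.extend(targets[t][offsets[t]:offsets[t] + stripe_size])
        offsets[t] += stripe_size
        i += 1
    return bytes(out)

data = b"abcdefghij" * 100  # 1000 bytes
stripes = stripe(data, stripe_count=4, stripe_size=64)
assert unstripe(stripes, 64, len(data)) == data
```

With four targets, reads of a large file can proceed from all four in parallel, which is where the aggregate bandwidth gain comes from.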


Whether you are planning a traditional data center build-out or all-new cloud infrastructure, the appeal of white box hardware is difficult to resist.

Provided you need enough of a particular device to benefit from economies of scale, and you have a plan to layer all the functionality you need via software, white box infrastructure can do wonders to reduce the capital costs of any project. Plus, you always have the option to rework the software should data requirements change.

But it isn’t all wine and roses in the white box universe. As IT consultant Keith Townsend noted to Tech Republic recently, white box support costs often emerge as a fly in the ointment. Large organizations like Facebook and Google have the in-house knowledge to deploy, configure and optimize legions of white boxes, but the typical data center does not. It takes a specialized set of skills to implement software-defined server, storage and networking environments, and white box providers as a rule do not offer much support other than to replace entire units, even if only a single component has gone bad. There is also the added cost of implementing highly granular management and monitoring tools to provide the level of visibility needed to gauge a device’s operational status to begin with.


Is your business prepared for IT outages? Disaster preparedness is vital for businesses of all sizes, especially for those that want to avoid prolonged service interruptions, and companies that prioritize disaster preparedness can find ways to protect their critical data during IT outages as well.

Managed service providers (MSPs) can offer data backup and disaster recovery (BDR) solutions to help companies safeguard their sensitive data during IT outages. These service providers also can teach businesses about the different types of IT outages, and ultimately, help them prevent data loss.


Wednesday, 21 January 2015 00:00

Can You Make Disaster Information Go Viral?

What role could social media play in effectively communicating information about breaking news such as natural disasters and disease outbreaks? It’s not a new question, but one that lacks an easy answer. Researchers and emergency response personnel in San Diego plan to spend the next four years exploring the topic, and what they find may eventually serve as a model for other communities looking to better leverage social media for disaster response.

San Diego County and San Diego State University (SDSU) recently formed a partnership to research and develop a new social media-based platform for disseminating emergency warnings to citizens. The project aims to allow San Diego County’s Office of Emergency Services (OES) to spread disaster messages and distress calls quickly and to targeted geographic locations, even when traditional channels such as phone systems and radio stations are overwhelmed.


In a Jan. 13 presentation to the federal Health IT Policy Committee, Annie Fine, M.D., a medical epidemiologist in the New York City Department of Health and Mental Hygiene, described both the sophisticated software used to track disease outbreaks such as Ebola, as well as how better integration with clinicians’ electronic health records (EHRs) would improve her department’s capabilities.

“In New York City, every day we are on the lookout for unusual clusters of illness. And we receive more than 1,000 reports a day just in my program,” Fine said. Epidemiologists run a weekly analysis to detect clusters in space and time, and use analytics and geocoding to compare current four-week periods with baselines of earlier four-week periods.
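As a rough illustration of that kind of comparison (this is not the health department’s actual system; the threshold, area codes, and counts below are invented), a basic cluster flag can be a z-score of the current four-week count against earlier baseline periods:

```python
# Illustrative cluster flagging: mark an area when its current
# four-week case count sits far above the mean of earlier
# four-week baseline counts. All data here is made up.
from statistics import mean, stdev

def flag_clusters(current: dict[str, int],
                  baselines: dict[str, list[int]],
                  z_threshold: float = 2.0) -> list[str]:
    """Return areas whose current count exceeds the baseline mean
    by more than z_threshold sample standard deviations."""
    flagged = []
    for area, count in current.items():
        history = baselines[area]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (count - mu) / sigma > z_threshold:
            flagged.append(area)
    return flagged

current = {"10001": 42, "10002": 9}
baselines = {"10001": [10, 12, 9, 11], "10002": [8, 10, 9, 11]}
print(flag_clusters(current, baselines))  # → ['10001']
```

Real syndromic surveillance adds spatial scan statistics and geocoding on top of this kind of temporal comparison, but the baseline-versus-current logic is the same.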

“We get a large number of suspect cases reported, and they may be way out of proportion to the number of actual cases,” Fine said. Epidemiological investigations require hundreds of phone calls to providers and labs. “That could be made much less burdensome and efficient if we could have improved integration with EHR data.”


Now that the dust has settled on the infamous hack of Sony Pictures Entertainment, it would be prudent to take a look back at how the attack was carried out, consider what lessons IT security professionals can learn from it, and formulate a plan to counter a similar attack.

To that end, I recently conducted an email interview with Gary Miliefsky, an information security specialist and founder and president of SnoopWall, a cybersecurity firm in Nashua, N.H. To kick it off, I asked him what the likelihood is that a Sony insider assisted with the attack, and whether it could have even been carried out without the help of an insider. Miliefsky dismissed the insider theory:

While many speculate that the attack on Sony Pictures Entertainment was done by a malicious insider, I believe that the DPRK carried out the attack themselves, originally initiated from IP addresses they lease from the Chinese government. I believe they initially eavesdropped on emails to learn a pattern of behavior for socially engineering a Remote Access Trojan to be installed via email of an unsuspecting employee, inside the network.


Component distributor partners with DigitasLBi Commerce and hybris to scale its commerce capabilities in global markets


LONDON – DigitasLBi Commerce, the global connected commerce specialist and hybris software, an SAP company and the world’s fastest growing commerce platform provider, have been selected by RS Components (RS), a trading brand of Electrocomponents plc, the global distributor for engineers, to implement a new connected commerce platform. This will enable it to enhance and rapidly scale its B2B eCommerce offerings to an expanding customer base and deliver a highly personalised experience to individual customers in markets around the globe.


Under the agreement, DigitasLBi Commerce will implement the hybris Commerce Suite, a powerful and scalable single-stack commerce platform capable of delivering highly sophisticated B2B features to a global user base. The solution enables RS to further enhance its online B2B functionality while seamless integration with the company’s enterprise architecture, which includes a SAP business intelligence system, will support streamlined business operations and make the faster initiation of go-to-market strategies and new business models possible.


Guy Magrath, Global Head of eCommerce at RS, commented: “eCommerce is a major driver of growth for our business and the entry point for our customers to a long term multi-channel relationship with us. By partnering with DigitasLBi Commerce and hybris we’ll gain the ability to respond faster to new market needs and further exploit the potential of our eCommerce offer to a diverse B2B customer base.”


With operations across 32 countries and a global network of 16 distribution centres worldwide, RS is the world’s largest distributor of electronics and maintenance products, shipping over 44,000 parcels daily. With around 500,000 products available for same day dispatch and serving more than one million customers worldwide, the company is dedicated to helping customers find the right product at the right price.


As a next phase, DigitasLBi Commerce will undertake the global deployment and rollout of a new connected multi-language, multi-currency, multi-site commerce platform that can be adapted quickly to changing market conditions. DigitasLBi Commerce’s robust agile implementation approach will enable RS to incrementally advance its eCommerce capabilities.


With 58 percent of global revenues generated online, RS’s ambition is to build a £1 billion plus connected commerce business, and DigitasLBi Commerce will support the brand in extending its ‘eCommerce with a human touch’ vision to further improve the online customer experience with innovative B2B functionality that makes it even easier for customers to transact.


“With connected commerce at the heart of the company’s operation, RS has to make the online customer experience the best and most relevant in each and every market they do business in,” said Jim Herbert, Managing Partner at DigitasLBi Commerce. “As a leading exponent of global hybris implementations we’re delighted to have been chosen to support RS in extending how it connects to its global audience to reach customers locally, at the point of need.”


The new multi-device optimised commerce platform will power 29 highly localised websites, and finely tune procedures that address specific market requirements. Under the agreement, DigitasLBi Commerce will enable the brand’s global connected commerce team of 100 staff, who oversee online trading, merchandising and behavioural repurchasing (email/offline event triggers across all channels and digital devices), to become fully self-supporting in their utilisation of the hybris Commerce Suite.


“In today’s market where B2B customers expect and are demanding a B2C-like experience, companies - especially industry giants such as RS - require a new breed of solutions that consider the customer interaction across touch points and channels, including that pivotal moment in the journey where a purchase is made,” explained Rob Shaw, Vice President New Business EMEA and MEE, hybris software. “hybris makes it possible to integrate web, customer service, print, mobile and social commerce that will give RS’s customers a more seamless multi-channel shopping experience.”

Tuesday, 20 January 2015 00:00

Defining the Five Lines of Defense

As the Board of Directors focuses its attention on risk oversight, there are many questions to consider. One topic the Board should consider is how the organization safeguards itself against breakdowns in risk management (e.g., when a unit leader runs his or her unit as an opaque fiefdom with little regard for the enterprise’s risk management policies, a chief executive ignores the warning signs posted by the risk management function or management does not involve the Board with strategic issues and important policy matters in a timely manner). As illustrated during the financial crisis, the result of these breakdowns can be the rapid loss of enterprise value that took decades to build.

An effectively designed and implemented lines-of-defense framework can provide strong safeguards against such breakdowns. From the vantage point of shareholders and other external constituencies (an external stakeholders’ view), we see five lines of defense supporting the execution of the organization’s risk management capabilities. They are outlined below.


(TNS) — "A rising tide lifts all boats," John F. Kennedy said, in defense of the government taking on big public works projects for the greater good.

About 10 of Iowa's river towns will share a $600 million pot of state money based on the belief sales tax revenue will rise higher and commercial and residential development will flourish along riverfronts, if protected from flood with sophisticated green and hard infrastructure.

Flooding in Iowa is occurring more often, rendering the “100-year” or “500-year” flood designations meaningless. The city of Burlington had 500-year floods in 1993 and 2008, only 15 years apart.
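The labels are probabilities, not schedules: a “500-year” flood has a 0.2 percent chance of occurring in any given year. A quick calculation of the chance of seeing even one such flood in a 15-year window shows why two of them (as Burlington experienced) strongly suggests the underlying risk has shifted:

```python
# Probability of at least one "500-year" flood in a 15-year span,
# under the standard assumption that flood risk is constant from
# year to year (the assumption repeated flooding calls into question).
p_annual = 1 / 500                      # 0.2% chance in any given year
p_15yr = 1 - (1 - p_annual) ** 15       # at least one in 15 years
print(f"{p_15yr:.1%}")                  # → 3.0%
```

Two occurrences in 15 years is rarer still under that model, which is why hydrologists increasingly argue the baseline itself has changed rather than that a community was simply unlucky.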

Cedar Rapids, which sustained $6 billion of the state's $10 billion flood damage in 2008, led the way in convincing the Legislature to establish a flood mitigation fund.


(TNS) — Until now, North Texas has been one of the least likely places in the country to have an earthquake.

But after the Dallas area suffered a series of more than 120 quakes since 2008, the U.S. Geological Survey is re-evaluating the metroplex’s “seismic hazard” — or the risk of experiencing earthquakes.

This year, for the first time, the USGS will include quakes believed to have been caused by human activity in its National Seismic Hazard Map, which engineers use to write and revise building codes, and which insurers use to set rates.

The map predicts where future earthquakes will occur, how often they will occur and how strongly they will shake the ground.


Tuesday, 20 January 2015 00:00

The Modular Approach to a Scalable Cloud

Following up on my previous post regarding hyperscale infrastructure, I feel I should point out that once the decision to go hyperscale has been made, it will most likely take place in a greenfield hardware environment.

Unless you are already working with a state-of-the-art data facility, any attempt to convert complex, multiformat legacy environments will almost certainly lead to a morass of integration issues. The key benefit to hyperscale is that it is both large and flexible, allowing data executives to craft multiple disparate data architectures completely in software. This is why current hyperscale plants at Google and Facebook rely on bulk commodity hardware.

But as I mentioned last fall, the average enterprise does not have the clout to purchase tens of thousands of stripped down servers and switches at a time, and besides, all those components still need to be deployed, provisioned and integrated into the cluster, which takes time, effort and of course, money.



More than a third (34%) of IT professionals claim that their organization has suffered a major incident that has required them to implement disaster recovery procedures. In the event of such a disaster or other incident occurring, 51% believe they are only ‘somewhat prepared’ to recover their IT and related assets, and of those who had experienced a major incident, more than half lost data and 11% experienced permanent IT losses.

These were some of the findings in a report published by Evolve IP which also showed that the leading causes of such incidents are hardware outages (47%), environmental disasters (34%), power outages (27.5%) and human error (20%). Perhaps surprisingly, a significant number of organizations continue to use legacy methods for disaster recovery. 45% of those surveyed continue to use backup tapes and 41.5% use additional servers at their primary site as a principal method for disaster recovery.

“For many organizations the question isn’t ‘if’ they will suffer a disaster, it’s ‘when,’” says Tim Allen, Chief Sales Officer of Evolve IP. “As we saw in the survey, when disaster hits, it hits hard typically taking over a day to recover and causing financial as well as data losses.”

The results of this survey demonstrate just why the IT related threats are the biggest concern for business continuity professionals as shown in the Business Continuity Institute’s annual Horizon Scan report. The latest report revealed that 77% of respondents to a survey expressed either concern or extreme concern at the prospect of an IT or telecoms outage occurring.

Organizations that reap high return rates on Big Data projects do so by changing operational systems for everybody, rather than “enlightening a few with pretty historical graphs,” according to IT research and consulting firm Wikibon.

How do you do that? You stop using Big Data to drive “using the rear-view mirror.” Instead, you couple Big Data insights with in-memory technologies so you’re “driving with real-time input through the front windshield,” writes David Floyer, a former IDC analyst and co-founder and CTO of Wikibon.

Floyer’s lengthy piece on Big Data ROI goes into the technical details on how you piece this together. His technologist background really shows, though, so here are a few terms you’ll need to know to follow it:


For most organizations, employees, or the human resources, account for the largest percentage of total costs. Dr. Mark Huselid, Distinguished Professor of Workforce Analytics at Northeastern University’s D’Amore-McKim School of Business and Director of the Center for Workforce Analytics, says the workforce often represents fully 60 to 70 percent of all expenses. Quite clearly, the refinement of workforce management, and attempting to “connect human capital and performance with management strategy and business goals,” is a keen point of interest for both HR and upper management.

The fact that a Professor of Workforce Analytics position exists is intriguing, and the sort of academic research that the Center for Workforce Analytics conducts may well result in some rather unexpected outcomes for some industries. Consider this idea, for example: “Most organizations tend to invest in talent hierarchically, where senior-level talent gets the most pay, best development opportunities and other professional perks. However, organizations should be managing vertically in who and what really matters – and in measuring and managing the outcomes associated with these processes.”

In the tech world, the idea of investing a higher percentage of pay and perks in less senior and less experienced employees is not foreign. Raising pay rates and bonuses for, say, highly in-demand developers and designers can often be easily justified in shortened time-to-market or other deliverables. In other areas, though, HR and the business would have a hard time with the concept without some solid predictive numbers.


As the enterprise tries to make the data center more efficient in the face of rising operating costs, one problem keeps recurring: Disparate infrastructure makes it very difficult to determine what systems and solutions are in place and how they interact with each other.

The data center, after all, is a collection of assets, so it only makes sense to have a good idea of what those assets are and how they operate in order to either improve their efficiency or swap them out for new, better assets.

The idea of asset management (AM) in the data center is not new – in fact, it is a bustling business. MarketsandMarkets puts the total value of the AM industry at $565.4 million, with annual growth rates averaging 34 percent between now and 2019 to top out at more than $2 billion. The report segments the market by region, components, services, support and other factors, concluding that efficiency, management, planning and expansion of data footprints are key drivers, while limiting factors include tight budgets, poor awareness of available solutions, and a lack of perceived benefits. And as with most technology solutions these days, established markets in Europe and North America provide the bulk of activity, while emerging markets represent the fastest growth.


Cyberattacks are clearly on the minds of President Barack Obama, Islamic State jihadists, Sony Pictures execs and the CBS producers who are launching a new show this spring called CSI: Cyber. On Jan. 13, Obama announced plans to reboot and strengthen U.S. cybersecurity laws in the wake of the Sony Pictures hack and the one on the Pentagon's Central Command Twitter account from sympathizers of the Islamic State. Whether a real attack or depicted in television and films like Blackhat, this flood of cyberattacks means that hackers are relentless and more sophisticated than ever before.

I’m not a fear monger by trade but want to sound the alarm that there is another cyber-risk that is looming and warrants attention of our emergency management community and government: electronic health records. The American Recovery and Reinvestment Act of 2009 authorized the Centers for Medicare and Medicaid Services to award billions in incentive payments to health professionals (hospitals, long-term care agencies, primary care, etc.) to demonstrate the meaningful use of a certified electronic health record (EHR) system.

The intent of EHR systems is to improve patient care by providing continuity of care from provider to provider through health information exchanges (HIEs) that allow “health-care professionals and patients to appropriately access and securely share a patient’s vital medical information electronically.” In addition, financial penalties are scheduled to take effect in 2015 for Medicare and Medicaid providers who do not transition to electronic health records.


Integration isn’t an excuse to avoid trying SaaS enterprise applications, argues principal cloud architect Mike Kavis.

“Sometimes enterprise IT executives think their requirements are so different than those of other companies that they cannot be met by a SaaS provider. This thought process is often nothing more than a poor excuse …” Kavis writes.

Kavis is now a vice president at Cloud Technology Partners, and I’ve followed his writings for years. He is an industry veteran with extensive experience as an architect and IT analyst.


A new survey from Chicago-based managed security service provider (MSSP) Trustwave revealed that organizations with 1,000 Internet users or fewer spent more than twice as much on IT security on a per-user basis as larger organizations (those with more than 1,000 Internet users).

The survey of 172 IT professionals showed that IT security cost $157 per Internet user in smaller organizations versus $73 per user in larger ones.

Also, Trustwave found that 28 percent of all respondents said they believed they were not getting full value out of their security-related software investments.


(TNS) — In the wake of the March 11, 2011, Great East Japan Earthquake, local and prefectural governments around the country rushed to assist the Tohoku region, sending material aid and personnel, while private firms and individuals arrived to volunteer their services wherever they were needed.

Few were as quick to respond as Hyogo Prefecture and the city of Kobe, which had experienced their own earthquake in January 1995, and had worked in the intervening years to become Japan’s premier center for disaster response-related knowledge, and an example that towns, cities and prefectures in Tohoku could use as they attempted to rebuild.

At a recent symposium, held ahead of the 20th anniversary this Saturday of the Great Hanshin-Awaji Earthquake and attended by officials and representatives of nonprofit organizations from Iwate and Hyogo prefectures, Hyogo Gov. Toshizo Ido and Iwate Gov. Takuya Tasso spoke on the administrative and planning challenges governments face when dealing with a large-scale natural disaster.


How to balance the risks and rewards of emerging technologies is a key underlying theme of the just-released World Economic Forum (WEF) 2015 Global Risks Report.

The rapid pace of innovation in emerging technologies, from synthetic biology to artificial intelligence has far-reaching societal, economic and ethical implications, the report says.

Developing regulatory environments that can adapt to safeguard their rapid development and allow their benefits to be reaped, while also preventing their misuse and any unforeseen negative consequences is a critical challenge for leaders.


By Sue Poremba

Hybrid cloud. BYOD. Big Data. Internet of Things. These terms have become part of the daily lexicon, not only within the information technology (IT) and cyber security world but also in the mainstream. Jargon is integral to IT: it makes complicated concepts more accessible to the non-technical person, even if they aren’t any easier to understand.

Buzzwords are commonplace in IT security, as well, but are they truly understood? As Frank Ohlhorst writes in Tech Republic, “it seems that IT security managers are giving too much power to terms and buzzwords, letting them dictate security best practices.” Ohlhorst goes on to point out that while BYOD is just an acronym that means, simply, Bring Your Own Device (such as when a company allows its employees to use their personally-owned phones, laptops, and other devices to access the network for work purposes), security professionals see it as Bring Your Own Disaster and the beginning of a security nightmare.



Less than twelve months ago the UK suffered severe flooding in many parts of the country and this is not an infrequent occurrence. During the last five years over half (51%) of businesses have experienced some form of damage through floods, wind and thunderstorms alone and this can often prove costly. The situation could be further exacerbated within smaller organizations as a new study has shown that 46% of small to medium sized businesses (SMBs) haven’t considered a business continuity plan to carry on trading or mitigate losses.

There are nearly five million SMBs in the UK and each one risks suffering on average £38,311 worth of damage because of the elements. As a result, the potential cost to the economy could be as high as £86 billion. Weather chaos means small businesses could also lose over three working days (26 hours) of staff time.

Weather is a threat to many organizations, so much so that in the Business Continuity Institute’s Horizon Scan report, adverse weather came fourth in a list of potential threats with 57% of respondents to a survey expressing either concern or extreme concern at the possibility of their organization suffering a disruption as a result.

The findings come from a survey of 1,000 SMBs conducted by Towergate Insurance and designed to ascertain the impact of bad and unexpected weather on the UK’s mass of smaller firms. The nationwide survey also reveals 43% of UK SMBs either simply do not have cover or do not know whether they are covered in the event of serious bad weather.

Commenting on the findings, James Tugendhat of Towergate Direct, said: “Small businesses are a vital part of the UK economy and can’t afford to lose money due to the unpredictable British weather. Whilst the good old British weather has become a joke, losing large sums of money or business days due to damage is no laughing matter. Making sure businesses are aware of the risks bad weather poses and how to mitigate against it means SMBs can be guaranteed peace of mind and get back to the business of business.”

Thursday, 15 January 2015 00:00

Global Risks 2015

The World Economic Forum has published its annual look ahead at the risks that are likely to dominate in the coming years.

The biggest threat to the stability of the world in the next 10 years comes from the risk of international conflict, according to the 10th edition of the World Economic Forum Global Risks report.

The report, which every year features an assessment by experts on the top global risks in terms of likelihood and potential impact over the coming 10 years, finds interstate conflict with regional consequences as the number one global risk in terms of likelihood, and the fourth most serious risk in terms of impact.


All business in a competitive market is risk-based, whether or not enterprises admit it. Positive risk indicates opportunities. Negative risk points to the need to take measures to avoid, transfer or mitigate that risk. Banks are a case in point, with risk analysis at the heart of their daily activities as they continually calculate the probabilities of profitability in investments and loans. For enterprises in other sectors, risk may be less in the spotlight, but no less important. All companies need good disaster recovery and business continuity management for instance. Both depend on properly assessing risks and their impact. So how can you tell if senior management is taking risk management seriously?


It’s a near daily occurrence for most enterprises—a laptop or server becomes obsolete or unusable. But an important step is often forgotten before the replacement is brought in: how do you ensure that the old device is cleansed of all usable traces of important data before it is disposed of?

Many organizations have internal procedures for disposing of technology, and those steps include wiping hard drives of data or restoring a device to its original status before use. But does this alone ensure that no discernible traces of private data are left on the media? Are there ways to absolutely be sure that the organization’s confidential information has been completely and absolutely removed? Or is there a level of data removal that may not be complete, but is acceptable?
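For file-level data on magnetic media, one common (though incomplete) precaution is overwriting contents before deletion. The sketch below is a minimal illustration, not a certified sanitization procedure: SSD wear-leveling can leave copies an overwrite never touches, and guidance such as NIST SP 800-88 distinguishes between clearing, purging, and physical destruction depending on how sensitive the data is.

```python
# Minimal sketch of overwrite-before-delete for a single file.
# NOT a complete sanitization method: SSDs remap blocks, file systems
# may keep journal copies, and standards like NIST SP 800-88 should
# guide real disposal decisions.
import os

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite the file's contents with random bytes, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to the device
    os.remove(path)
```

The gap between this kind of file-level wipe and full media sanitization is exactly the “acceptable level of data removal” question the paragraph above raises.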


Thursday, 15 January 2015 00:00

Mapping for Ebola: A Collaborative Effort

One of the difficulties faced by teams responding to the current Ebola outbreak in West Africa is identifying individuals and communities residing in remote areas. Existing maps of these regions either do not exist or are inadequate or outdated. This means that basic data like location of houses, buildings, villages, and roads are not easily accessible, and case finding and contact tracing can be extremely difficult.

To help aid the outbreak response effort, volunteers from around the world are using an open-source online mapping platform called OpenStreetMap (OSM) to create detailed maps and map data of  Guinea, Sierra Leone, Liberia, and parts of Mali.

Commonly referred to as “Wikipedia for maps,” OSM is working toward the goal of making a map of the world that is freely available to anyone who wants to use it. The Humanitarian OpenStreetMap Team (HOT) is a U.S.-based non-profit organization that represents a subset of the OSM community. HOT’s mission is to use OSM data and tools to help prepare and respond to humanitarian disasters. Because OSM data is available for free download anywhere in the world, volunteer mappers generate data that are useful not only to CDC but also to other agencies involved in the Ebola response, such as Doctors Without Borders (MSF), International Red Cross (IRC), and World Health Organization.

Mappers frequently use satellite images to identify villages, houses, paths, and other details that were previously unmapped. The U.S. State Department’s Humanitarian Information Unit (HIU) is supporting HOT and OSM by creating the website, which provides easy-to-follow instructions on how to begin mapping very quickly. Personnel in CDC’s Division of Global Migration and Quarantine (DGMQ) are coordinating with HIU and HOT to support and promote volunteer mapping in affected West African areas where CDC teams are currently working.

Members of Emory’s Student Outbreak and Response Team (SORT) are some of these volunteer mappers. SORT is a graduate student organization that collaborates with CDC and provides hands-on training in outbreak response and emergency preparedness. Ryan Lash, a mapping scientist in DGMQ’s Travelers’ Health Branch, initially contacted SORT for help in August as the number of Ebola cases in West Africa continued to rise. He has since provided two workshops for SORT members, taught a small number of CDC staff, and trained students at the University of Georgia.


In the 8 months that HOT has been mapping countries with Ebola outbreaks, more than 2,500 volunteers have mapped more than 750,000 buildings and hundreds of kilometers of roads, resulting in detailed maps of affected West African communities. Not only do these maps help first responders and other organizations around the world, they also contribute to the national information infrastructure essential to the recovery and rebuilding of affected regions. The value of OSM was highlighted especially well during the 2010 Haiti earthquake, after which the U.S. State Department decided to promote volunteer mapping as a way for the general public to get involved in humanitarian emergencies.

Volunteer mapping in OSM for HOT can be done by anyone. All you need is a computer, an internet connection, and the time and willingness to learn. Find out more about how you can help here: Learn to Map.

With the arrival of Ebola in the U.S. came public fear, widespread misinformation, and the ever-present danger of contamination and contagion. While the cases have been isolated, the threat of the virus required state and local leaders to assume unprecedented leadership and extreme diplomacy in dealing with the public, the medical community, and even medical suppliers and contractors, who balked at handling blood samples, soiled linens and hospital waste out of fear of the virus.

But when a virus like Ebola hits a jurisdiction, there is a hefty fiscal price as well. In Texas, Dallas County was the first U.S. locality to deal with the sudden challenge of an outbreak. The impact on the budget was not inconsequential. It cost the county a quarter of a million dollars to gut and decontaminate the one small apartment of the nation’s first Ebola victim, Thomas Eric Duncan -- part of the approximately $1 million the county expended in the first weeks of the crisis.

Unlike with some contagions, the unknowns with Ebola could constitute the gravest challenge. There are surprising gaps in scientists’ knowledge about the virus, including the time it can survive in different environments outside the body. That is information vital to EMTs, solid waste departments, hospitals and clinics, and public and private water and wastewater systems -- as well as public transportation agencies.


Forecasting what the IT security landscape will look like in the year ahead has become an annual technology tradition, and following 2014 as the Year of the Data Breach, I think anyone could make a fairly accurate guess as to what the major trend of the New Year will be: more data breaches.

Forty-three percent of organizations reported a data breach in the past year, a figure that Forrester predicts will rise to 60% in 2015. And it’s not just the frequency of breaches that will escalate in the year ahead; malware will also become increasingly difficult to dismantle. P2P, darknet and Tor communications will become more prevalent, and forums selling malware and stolen data will retreat further into hidden corners of the Internet in an attempt to avoid infiltration.

By now, it is no longer a matter of if your business is going to be breached, but when. The last thing any organization needs as we enter another year of risk is to be blindsided. The good news, though, is that there are ways to reduce that risk if we act immediately.



When was the last time you conducted a business continuity exercise? Were your colleagues enthusiastic participants? It’s not always easy to get buy-in, either from top management who don’t want to fund it, or from your non-BC colleagues who don’t have time to take part.

This is why 'testing times' was chosen as the theme for Business Continuity Awareness Week: we want to support you in explaining to your colleagues just how important testing and exercising are to the whole business continuity process. To put it simply, a plan that has not been exercised is not a plan! We also want to support you in organizing your exercises, or, more to the point, we want BC professionals to support each other.

To begin with, the Business Continuity Institute has produced a series of posters that are free to download and can be placed in a prominent location in your workplace to highlight the importance of exercising your plans. Each poster asks the question:

When do you want to find out your business continuity plan doesn’t work?
A) During an exercise?
B) When an incident occurs?

These posters can be found on the BCAW website.


We also plan to post a series of case studies, white papers and other material that would support your case for an exercise or help you in planning one, but for this we need your help. We need the help of those people who do this work on a daily basis. Have you recently run an exercise? Then why not submit a case study? It doesn’t have to be lengthy; just say a little bit about what you did. Have you recently conducted some research? Then perhaps you’d like to submit a white paper; it could provide some great publicity for you and your organization.

As with previous years, we are putting together an extensive webinar programme where business continuity experts will discuss the relevant issues relating to the theme and offer the viewer the opportunity to ask questions.

If you would like to become involved with BCAW, either by submitting material or hosting a webinar, or if you would just like further information, please do get in touch by email.

Post-apocalyptic movies such as “The Road Warrior,” “I Am Legend” and “The Matrix” have long been a Hollywood staple. You will need something more than a backup service to keep your business going in the event of nuclear war or an alien invasion, but for customers of MSPs, disasters are not an all-or-nothing proposition. Instead, they encompass a whole range of large and small incidents that can result in data and service losses. A properly designed disaster recovery system will protect against:

  • Ransomware
  • Accidental deletion
  • Hardware failure
  • Software corruption
  • Power surges, brownouts or outages
  • Lost smartphones, laptops and tablets
  • Fires and fire protection system damage
  • Vandalism
  • Theft
  • And whatever floods, earthquakes, tornados, tsunamis, lightning strikes, hurricanes or blizzards our dear, sweet Mother Nature decides to give us

Here are five critical factors MSPs should keep in mind when setting up their own and their customers’ systems for easy data recovery after a disaster.


Wednesday, 14 January 2015 00:00

Data Storage Benchmarking Guide

Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).

But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more.


It would be interesting to see what would happen if there was another Ebola scare in the U.S. The answer might depend on when it happened and perhaps where the person became infected. But chances are the health infrastructure would handle it, and perhaps respond to another infectious disease outbreak much better, having had the experience that the recent Ebola episodes provided.

That experience included hiccups and communication errors that resulted not in panic but in disagreement on the part of some in the health community and alarm among the public. One target of criticism is the Centers for Disease Control and Prevention (CDC), which from the beginning expressed confidence that hospitals throughout the U.S. were ready to handle Ebola cases and reassured the public about how difficult the infection is to transmit. The CDC chose not to participate in this discussion.

When Thomas Eric Duncan, who eventually died, was first found to have Ebola, the CDC sought to calm fears and educate the public about the likelihood of the disease spreading by normal contact with an infected individual, and about what should be done if someone was thought to have symptoms. It also expressed confidence in the ability of the health infrastructure to deal with an outbreak.


As companies increasingly turn to the public cloud to house various components of their IT infrastructures, it will probably always be the case that other components will remain on-premise. So the question of how to best manage that hybrid environment becomes one that an increasing number of IT pros will have to be able to answer.

I discussed that question in a recent email interview with Lynn LeBlanc, co-founder and CEO of HotLink, a hybrid IT management software provider in Santa Clara. I started by asking LeBlanc what she finds companies tend to keep on-premise, and why they’re going that route. She said the reasons for hybrid cloud deployments vary from organization to organization, but it’s generally more a question of what they want to put in the cloud:



It may not come as a surprise that cyber security incidents are on the rise. Open any newspaper today and you will no doubt come across yet another article highlighting some organization that has become the latest victim of a breach in online security.

Quite how much these incidents are on the rise is perhaps a little more concerning, however. A recent report produced by PwC on the Global State of Information Security has shown that the number of information security incidents reported by survey respondents increased from 28.9 million in 2013 to 42.8 million in 2014 – a 48% increase. The report also cites additional research which suggests that 71% of compromises go undetected, meaning that 42.8 million could be just the tip of the iceberg.

The Business Continuity Institute’s Horizon Scan report has consistently shown that cyber attacks and data breaches are a major concern for business continuity professionals, with the latest survey highlighting that 73% of respondents expressed either concern or extreme concern at the prospect of one of these threats materialising.

The cost of security incidents can be high with the report noting that a recent study by the Center for Strategic and International Studies estimated that the annual cost of cybercrime to the global economy could be somewhere between $375 billion and $575 billion. The report further notes that this doesn’t cover the cost of IP theft which could range from $749 billion to as much as $2.2 trillion.

You might think that with this significant increase in the number of security incidents, and the financial impact these incidents can have, budgets would also be increasing in order to protect against them. However, the opposite appears to be the case: the report reveals that the average security budget among respondents had decreased by 4% from the previous year.

The Global State of Information Security Survey 2015 was a worldwide study by PwC, CIO, and CSO conducted online during the first half of 2014. Readers of CIO, CSO and clients of PwC from around the globe were invited to take the survey and the results discussed in the report are based on the responses of more than 9,700 CEOs, CFOs, CIOs, CISOs, CSOs, VPs and directors of IT and security practices across more than 154 countries.

The concept of digital transformation is not a new one, as technology has been used to augment business functions since the dawn of the computer age. These days, however, digital transformation means different things to different companies, requiring each company to tailor its integration of technology in a way that increases productivity and improves communication with internal and external parties.

Personally, I like the Altimeter Group’s definition of digital transformation, since it is the most appropriate for modern market-focused usage: “The realignment of, or new investment in, technology and business models to more effectively engage digital customers at every touch-point in the customer experience lifecycle.” In most cases, the goals of digital transformation include better engagement with digital customers, greater collaboration with internal resources, and improved efficiency.


There are times when you wish you could undo what you just did. Sometimes, you can’t. Financial investments, office reorganisations and even that too-hasty email you sent often cannot simply be reversed. With IT on the other hand, it’s a different story. From individual PCs to corporate data centres, the ‘Undo’ function has become a standard feature of many computing systems for making errors and problems disappear. As little as one mouse click may be enough to turn back the hands of time and begin again as though a mistake had never been made. But is this disaster recovery capability the magical solution it is often made out to be?


Tuesday, 13 January 2015 00:00

Tackling the Unstructured Data in Big Data

There’s a lot of talk about Big Data as if it is one entity. We hear: How do you manage Big Data? How do you govern Big Data? What’s the ROI for Big Data? The problem with this is that it puts too much focus on the technology, while obscuring one of the major challenges in Big Data sets: the unstructured data. 

I suspect CIOs haven’t forgotten that component since about 80 percent of data in organizations today is unstructured data, according to Gartner. That’s a lot of value currently hiding in social media, customer call transcripts, emails and other text-based or image-based files.

That’s a problem, because that also happens to be where you may find the real value in Big Data. These disparate data sets were previously unanalyzed or sitting in application silos. Obviously, Hadoop will let you migrate that into one location, but what then? How do you turn that into valuable information?
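The first analytical step over such text is often as simple as counting mentions of terms that matter to the business. Here is a minimal, hypothetical sketch of that idea in Python; the transcripts and keyword list below are invented for illustration and are not drawn from any real data set:

```python
from collections import Counter
import re

# Hypothetical example: mining customer call transcripts (unstructured text)
# for issue-related keywords. Both the documents and the keywords are invented.
TRANSCRIPTS = [
    "Caller reported a billing error and asked about a refund.",
    "Customer praised support but mentioned a billing error again.",
    "Question about shipping delay; no billing issue raised.",
]
KEYWORDS = {"billing", "refund", "shipping"}

def keyword_frequencies(docs, keywords):
    """Count how often each keyword of interest appears across all documents."""
    counts = Counter()
    for doc in docs:
        for token in re.findall(r"[a-z]+", doc.lower()):
            if token in keywords:
                counts[token] += 1
    return counts

print(keyword_frequencies(TRANSCRIPTS, KEYWORDS))
```

Real deployments would run a step like this as a distributed job over Hadoop rather than an in-memory loop, but the shape of the computation is the same.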

This recent Datamation column by Salil Godika goes a long way toward answering these questions. Godika is the chief strategy & marketing officer and Industry Group head at Happiest Minds. I admit this gave me pause, because pieces by chief marketing officers can be too self-serving.


Not too long ago, organizations fell into one of two camps when it came to personal mobile devices in the workplace – these devices were either connected to their networks or they weren’t.

But times have changed. Mobile devices have become so ubiquitous that every business has to acknowledge that employees will connect their personal devices to the corporate network, whether there’s a bring-your-own-device (BYOD) policy in place or not. So really, those two camps we mentioned earlier have evolved – the devices are a given, and now, it’s just a question of whether or not you choose to regulate them.

This decision has significant implications for network security. If you aren’t regulating the use of these devices, you could be putting the integrity of your entire network at risk. As data protection specialist Vinod Banerjee told CNBC, “You have employees doing more on a mobile device and doing it ad hoc here and there and perhaps therefore not thinking about some of the risks that are apparent.” What’s worse, this has the potential to happen on a wide scale – Gartner predicted that, by 2018, more than half of all mobile users will turn first to their phone or tablet to complete online tasks. The potential for substantial remote access vulnerabilities is high.


Tuesday, 13 January 2015 00:00

CDC: Flu Season a Bad One

(TNS) — The Centers for Disease Control and Prevention said this year’s flu season is shaping up to be a bad one.

Already there have been 26 confirmed pediatric deaths and flu is widespread in almost “the entire country,” CDC director Tom Frieden said on a conference call with reporters Friday morning.

The number of hospitalizations among adults aged 65 and older is also up sharply, rising from a rate of 52 per 100,000 last week to 92 per 100,000 this week, Frieden said.

And there are likely still more hospitalizations and deaths to come. The nation is about seven weeks into this year’s flu season, and seasons typically last about 13 weeks, Frieden said.

“But flu season is unpredictable,” he said, adding it could last longer than 13 weeks.


Integration isn’t exactly a fast-moving part of IT, so it isn’t usually listed on New Year technology prediction lists. This year, I spied two integration trends among these lists that could potentially shake up IT and the business.

First, one list cites deeper ERP integration as a top trend for enterprise software in 2015. This could be huge for business users, who could then leverage that rich ERP data for other applications — especially CRM. Jeremy Roche, CEO of cloud ERP provider FinancialForce, explained it thusly:


Pricing data backup and disaster recovery (BDR) and business continuity services can be challenging, especially for managed service providers (MSPs) that offer cloud-based storage of customer data.

A time-based cloud retention (TBCR) fixed-pricing model, however, ensures the monthly cost for cloud-based storage of customer data does not vary based on volume.

Also, service providers can use TBCR to offer customers secure, rapidly recoverable off-site backup for a fixed monthly price that is based on how long they need to retain their data.
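To make the model concrete, here is a minimal sketch of TBCR pricing in Python. The tier boundaries and dollar amounts are invented for illustration; the point is only that the fee is a function of how long data is retained, never of how much data is stored:

```python
# Hypothetical TBCR tier table: (maximum retention in days, flat monthly fee).
# These figures are illustrative, not any provider's actual price list.
RETENTION_TIERS = [
    (30, 99.00),     # up to 30 days retained  -> $99/month
    (365, 199.00),   # up to 1 year retained   -> $199/month
    (2555, 349.00),  # up to ~7 years retained -> $349/month (e.g., compliance)
]

def monthly_price(retention_days: int) -> float:
    """Return the flat monthly fee for a retention window, regardless of volume."""
    for max_days, fee in RETENTION_TIERS:
        if retention_days <= max_days:
            return fee
    raise ValueError("retention window exceeds all offered tiers")

# The fee is identical whether the customer stores 50 GB or 50 TB.
print(monthly_price(30), monthly_price(400))
```

This is what makes the model predictable for both sides: the MSP can quote a fixed monthly number up front, and growth in the customer's data volume never changes the bill.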


Never before have there been so many options for alerting the public. In the last few months alone potential for new alerting channels has been unleashed for complementing an already growing array of channels. Names like Google, Twitter, Facebook and the Weather Channel have entered the alerting field. Legacy vendors have enhanced their offerings. The federal government now has impressive alerting success stories to tout. An industry and practice area that once seemed sleepy is wide awake. At the same time, new complexities and challenges have shown themselves.

As part of the move toward ubiquitous alerting, an organization is working to turn online advertisements into emergency alerts. Members of the Federation for Internet Alerts (FIA) are substituting “interest-based advertising” with targeted alerts. Interest-based ads are the ones you see online that know what you’ve been looking for by using Web cookies or mobile service identifiers left behind when you conduct a search or visit a site. Through the FIA plan, interest-based ads would be replaced with emergency alerts for a specific geographic area. The FIA’s Jason Bier, chief privacy officer at the company Conversant, said through a pilot, Amber Alert messages have been exposed via 500 million “impressions” to more than 100 million devices.


(TNS) — There are chilling similarities between the deadly Charlie Hebdo attack in Paris and the Boston Marathon bombings, with lessons to be drawn for law enforcement, terrorism experts say.

Both attacks have been blamed on homegrown terrorist brothers — in each case with a brother who had drawn law enforcement attention for Islamic radical ties before. In both cases, both police and citizens were targeted with equal cold-blooded vigor.

“I think what you’re going to see is governments going through their watch lists to see how many names appear identical. They should have added worry when you have two or three members of the same family giving prior warning, governments should be taking a second and third look at them,” said Victor Davis Hanson of the Hoover Institution. “When you are dealing with familial relations, it means there are fewer people who have privileged information about the ongoing plotting and the secret is reinforced by family ties ... it’s going to be much harder for Western intelligence to break into them.”


“Pandemic” and “panic” sound a lot alike. Certainly, the first can trigger the second in next to no time, as the recent outbreak of Ebola has demonstrated. But as a leader in your company, you can avoid both by encouraging your cross-functional teams to take the following six steps.


Friday, 09 January 2015 00:00

Why CIOs Will Want Data Lakes

Edd Dumbill may have just won the argument over whether data lakes are a practical, achievable idea.

Data lakes are a simple enough idea: You dump a wide range of data into a Hadoop cluster and then leverage that across the enterprise.

The problem is what Gartner calls the “Data Lake Fallacy,” which is the challenge of managing data lakes in a governable and secure way.

Dumbill acknowledges the barriers to data lake adoption in a recent O’Reilly Radar Podcast. Ultimately, though, the VP of strategy at Silicon Valley Data Science says data lakes will happen for one reason: Data lakes free data from enterprise silos.

“One of the hardest things for organizations to get their head around is getting data in the first place,” Dumbill told O’Reilly’s Mac Slocum. “A lot of CIOs will be, ‘Great, I want to do data science but I’ve got this database over here and this one over here and these all need to speak to each other and they’re in different formats and so on.’ In many ways, having data in a data lake provides you with a foundation (with) which you can start to integrate data with and then make it accessible as a building block in an organization.”
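A toy sketch of that "foundation" idea: records from two differently formatted silos (a CSV export and a JSON export, both invented here) are landed in one uniform record structure that downstream tools can share. This illustrates the concept only; it is not any particular product's API:

```python
import csv
import io
import json

# Two invented silos in different formats, standing in for separate systems.
CRM_CSV = "customer_id,name\n1,Ada\n2,Grace\n"
BILLING_JSON = '[{"cust": 1, "owed": 120.0}, {"cust": 2, "owed": 0.0}]'

def load_lake():
    """Land both silos in one list of uniform records: source, key, payload."""
    records = []
    for row in csv.DictReader(io.StringIO(CRM_CSV)):
        records.append({"source": "crm",
                        "customer_id": int(row["customer_id"]),
                        "payload": {"name": row["name"]}})
    for row in json.loads(BILLING_JSON):
        records.append({"source": "billing",
                        "customer_id": row["cust"],
                        "payload": {"owed": row["owed"]}})
    return records

lake = load_lake()
print(len(lake))
```

Once every record carries the same envelope, joining CRM names to billing balances is a matter of grouping on `customer_id` rather than writing a new point-to-point integration for each pair of systems.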


Friday, 09 January 2015 00:00

BYOD: Follow the Money

The topic of money – who pays for what and how to get the best plans when business and consumer activities are mixed – has been a vexing one since Bring Your Own Device (BYOD) emerged. It has taken something of a back seat while companies figured out how to keep data secure and separate in the two spheres.

Those primary tasks are well on their way to being solved. Now, attention is turning, as it eventually always does, back to the money. The industry is getting serious about the issue, at least at the rudimentary level of splitting work and consumer bills.

Mobile Enterprise reports that the AT&T Work Platform will enable organizations to separate work and consumer expenditures. The story says that it is an important task from several points of view. Of course, there is the simple point of figuring out who pays for what. Beyond that are the legal, human resources and tax regulations. AT&T is cooperating with big-name vendors MobileIron, AirWatch by VMware and Good Technology on the platform.


Friday, 09 January 2015 00:00

IBM Stays the Storage Course

The overall storage market has recently faced a number of challenges in achieving desired goals, such as in the number of petabytes vendors actually sell. That has led a few prognosticators to apply a “sky-is-falling” analysis to the situation (as that attracts attention). But that approach is fundamentally wrong.

Now, in any dynamic and rapidly changing market such as storage, where trends such as software-defined solutions and flash technologies are transforming vendor and customer expectations, and where global IT trends like cloud, big data, and mobile also have an immense impact, there are likely to be challenges. That is especially the case where both established vendors and newer players duke it out.

The key is not to panic. And that is why it is so important to IBM’s storage customers that the company is staying the course. This does not mean standing still, but rather progressing in a measured manner. IBM’s recent 4th quarter storage announcements do not contain any blockbusters. For that we can be grateful as blockbusters absorb all the attention and we have to expend a lot of thought, time and energy in trying to understand what impact the blockbuster will have.


Friday, 09 January 2015 00:00

Abiding by the rules of business continuity


There are many 'rules' that govern what we do as business continuity professionals – some are sector specific, some are based on geography. But which of them apply to your organization? When you start to look into it, it's not difficult to become confused as to which you are supposed to abide by.

The Business Continuity Institute now aims to simplify this by publishing what we believe to be the most comprehensive list of legislation, regulations, standards and guidelines in the field of business continuity management. This list was put together based upon information provided by the members of the Institute from all across the world. Some of the items may only be indirectly related to BCM, and should not be interpreted as specifically designed for the industry, but rather they contain sections that could be useful to a BCM practitioner.

The ‘BCM Legislations, Regulations, Standards and Good Practice’ document breaks the list down by country and for each entry provides a brief summary of what the regulation entails, which industries it applies to, what the legal status of it is, who has authority for it and, finally, it provides a link to the full document itself.

The BCI has done its best to check the validity of these details but takes no responsibility for their accuracy and currency at any particular time or in any particular circumstances. To download a copy of the document, click here.

(TNS) — After a series of 13 small earthquakes rattled North Texas from Jan. 1 to Wednesday, a team of scientists is adding 22 seismographs to the Irving area in an effort to learn more.

The team of seismologists from Southern Methodist University, which has studied other quakes in the area since 2008, deployed 15 of the earthquake monitors Wednesday. SMU studies of quakes in the DFW Airport and Cleburne areas have concluded wastewater injection wells created by the natural gas industry after fracking are a plausible reason for the temblors in those areas.

But Craig Pearson, seismologist for the state Railroad Commission, said that is not the case with the Irving quakes.

“There are no oil and gas disposal wells in Dallas County,” he said in a Wednesday email.


The recent Ebola outbreak unearthed an interesting phenomenon. A “mystery hemorrhagic fever” was identified by HealthMap — software that mines government websites, social networks and local news reports to map potential disease outbreaks — a full nine days before the World Health Organization declared the Ebola epidemic. This raised the question: What potential do the vast amounts of data shared through social media hold in identifying outbreaks and controlling disease?

Ming-Hsiang Tsou, a professor at San Diego State University and an author of a recent study titled The Complex Relationship of Realspace Events and Messages in Cyberspace: Case Study of Influenza and Pertussis Using Tweets, believes algorithms that map social media posts and mobile phone data hold enormous potential for helping researchers track epidemics.

“Traditional methods of collecting patient data, reporting to health officials and compiling reports are costly and time consuming,” Tsou said. “In recent years, syndromic surveillance tools have expanded and researchers are able to exploit the vast amount of data available in real time on the Internet at minimal cost.”
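One simple form such a syndromic surveillance tool can take is a baseline-and-spike check over daily keyword counts. The sketch below is a hypothetical illustration (the counts, window and threshold are all invented), not the algorithm used by HealthMap or in Tsou's study:

```python
# Invented daily counts of a symptom keyword (e.g., "fever") in social posts.
DAILY_MENTIONS = [12, 9, 11, 10, 13, 41, 55]

def flag_spikes(series, window=5, factor=2.0):
    """Flag day indices whose count exceeds `factor` times the mean
    of the preceding `window` days (a crude anomaly baseline)."""
    flags = []
    for i in range(window, len(series)):
        baseline = sum(series[i - window:i]) / window
        if series[i] > factor * baseline:
            flags.append(i)
    return flags

# Days 5 and 6 jump well above the trailing five-day average.
print(flag_spikes(DAILY_MENTIONS))
```

Production systems layer geolocation, language filtering and noise correction on top of this, but the core signal is the same: a count of symptom chatter rising sharply above its recent baseline.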


Thursday, 08 January 2015 00:00

Human Error Caused 93% of Data Breaches

Despite greatly increased attention, the number of reported cyberbreach incidents rapidly escalated in 2014. According to Information Commissioner’s Office data collected by Egress Software Technologies, U.K. businesses saw substantially more breaches last year, with industry-wide increases of 101% in healthcare, 200% in insurance, 44% among financial advisers, 200% among lenders, 56% in education and 143% in general business. As a result, these industries also saw notable increases in fines for data protection violations.

The role of employees was equally alarming. “Only 7% of breaches for the period occurred as a result of technical failings,” Egress reported. “The remaining 93% were down to human error, poor processes and systems in place, and lack of care when handling data.”

Check out more of the findings from Egress’ review in the infographic below:


The future of IT infrastructure is changing. My friend, BJ Farmer over at CITOC, is fond of reminding me that Change is the Only Constant (see what CITOC stands for?).

It’s true for most everything in life, and especially true for our industry. You can either embrace the changes that come along, evolving how you present services to your clients, or you can slowly lose relevance and fade out of the big picture. The choice is yours.

Right now, change comes from The Cloud.

Yes, there is definitely a lot of hype about the cloud, and it’s easy to grumble about fads and look at the big cloud migration as a bandwagon everyone’s too eager to jump on. But the plain fact is that the cloud is providing affordable, smart alternatives to the kind of infrastructure that used to be the bread and butter of an MSP, and it’s not going anywhere. So you can either keep railing against the cloud, running your Exchange servers and piecing together various services from different partners, or you can start thinking about how to offer innovative solutions for your clients by STRATEGICALLY leveraging the cloud.


Thursday, 08 January 2015 00:00

Scoping Out Your Program/Risk Assessment

At the PLI Advanced Compliance & Ethics Workshop in NYC in October, Scott Killingsworth of the Bryan Cave law firm noted that each risk assessment should be unique.  I agree, and I believe that the case for uniqueness is even more powerful for the combined program and risk assessments companies sometime undertake.  Given the diversity of possibilities, where should you start in scoping out such an engagement?  Another way of asking this question is “How should you conduct a needs assessment for a program/risk assessment?”

To begin, it may be worth thinking in terms of the following six fields of information which can comprise the subjects of an assessment:


Thursday, 08 January 2015 00:00

Survey: business continuity in 2015

Continuity Central’s annual survey asking business continuity professionals about their expectations for the year ahead is now live.

Please take part at

The survey looks at the trends and changes the profession can expect to see in the year ahead.

Read the results from previous years:

The presence or absence of catastrophes is a defining factor in the financial state of the U.S. property/casualty insurance industry.

At the 2014 Natural Catastrophe Year in Review webinar hosted by Munich Re and the Insurance Information Institute (I.I.I.), we can see just how defining the influence of catastrophes can be.

U.S. property/casualty insurers had their second best year in 2014 since the financial crisis – 2013 was the best – according to estimates presented by I.I.I. president Dr. Robert Hartwig.

P/C industry net income after taxes (profits) is estimated at around $50 billion in 2014, after 2013, when net income rose by 82 percent to $63.8 billion on lower catastrophe losses and capital gains.


Thursday, 08 January 2015 00:00

SMBs Should Consider These Tech Trends in 2015

Of course, the end of 2014 and the beginning of 2015 bring all sorts of articles predicting what will be hot in the coming year. For small to midsize businesses (SMBs), quite a few outlets are reporting their lists of technology trends to watch.

Entrepreneur gave three “promising trends” for 2015, which include creating and leveraging well-designed technology, adopting software as a service (SaaS) and developing “data-driven insights.”

Taking advantage of data to make better informed decisions is also a top trend for SMBs to watch from the Huffington Post. According to writer Joyce Maroney, “Smaller businesses, swimming in lots of data of their own, will likewise be taking more advantage of that data to bring science as well as art to their decision making.” That likely means delving further into more data sources than just Google Analytics, says Entrepreneur writer Himanshu Sareen, CEO of Icreon Tech.


Thursday, 08 January 2015 00:00

43 States Have 'Widespread' Flu Problems

(TNS) -- Influenza viruses have infiltrated most parts of the United States, with 43 states experiencing "widespread" flu activity and six others reporting "regional" flu activity, according to the Centers for Disease Control and Prevention.

Hawaii was the only state where flu cases were merely "sporadic" during the week that ended Dec. 27, the CDC said in its latest FluView report. One week earlier, California also had been in the "sporadic" category, and Alaska and Oregon reported "local" flu outbreaks. Now all three states have been upgraded to "regional" flu activity, along with Arizona, Maine and Nevada.

The rest of the states are dealing with "widespread" outbreaks, according to the CDC.


New data from IBM (IBM) showed that despite a decline in cyber attack incidents against U.S. retailers, the number of customer records stolen during cyber attacks remained near record highs in 2014.

IBM reported that cyber attackers stole more than 61 million retail customer records in 2014, down from almost 73 million in 2013.

When IBM narrowed its data down to only incidents involving less than 10 million customer records (which excludes the top two attacks over this timeframe, Target Corporation and The Home Depot), the number of records compromised last year increased by more than 43 percent over 2013. IBM said that cyber criminals have become more sophisticated in reaching customer records.


(TNS) — When Glynn County, Ga., Police Chief Matt Doering began his career nearly three decades ago, the thought of holding an interactive map in his hand would have been like something out of a science fiction novel.

He and the rest of the Glynn County public safety community will see fiction become reality when the county’s new $485,000 computer aided dispatch, or CAD, system goes online next Monday. The county spent an additional $1.1 million to convert decades’ worth of reports and other information kept in a separate records management system that works with the new software.

“We wouldn’t have dreamed of this,” Doering said. “It is going to be a new mindset.”

His excitement is shared by others because it has been 12 years since the system that helps disseminate information about emergency calls has been updated. In technological terms, that is like a century.


(TNS) — The humble infusion pump: It stands sentinel in the hospital room, injecting patients with measured doses of drugs and writing information to their electronic medical records.

But what if hackers and identity thieves could hijack a pump on a hospital’s information network and use it to eavesdrop on sensitive data like patient identity and billing data for the entire hospital?

It is not a far-fetched scenario. Though it hasn’t happened yet, the hacking of wireless infusion pumps is considered a critical cybersecurity vulnerability in hospitals — so much so that federal authorities are focusing on the pumps as part of a wide-ranging effort to develop guidelines to prevent cyberattacks against medical devices.


Enterprise organizations are looking to partner with MSPs as they move to the cloud. The key to success is to develop an engagement plan using a high-touch process to ensure a smooth experience during all three phases of the onboarding process: the assessment; the transition plan and cutover; and ongoing performance analysis. Like most new technologies, cloud computing can require significant changes in business processes, application architectures, technology infrastructure, and operating models that must be properly understood before embarking on any new initiative. Having a well thought out strategy can mean the difference between success and failure.


(TNS) -- Hydraulic fracturing at two well pads in Mahoning County caused 77 small earthquakes last March along a previously unknown geologic fault, a new scientific study says.

The series of temblors included one quake of magnitude 3 -- rare in Ohio -- that was strong enough to be felt by neighbors, according to the study by three researchers from Miami University.

At the time of the quakes, only five were reported, ranging from magnitude 2.1 to 3.

The new research was published online Tuesday in the Bulletin of the Seismological Society of America. It will be printed in the February-March issue of the bulletin.

The peer-reviewed study of the quakes, which occurred in Poland Township southeast of Youngstown, appears to strengthen the link between small- and medium-sized earthquakes and both hydraulic fracturing (also known as fracking) and the use of injection wells for drilling wastes.


Industrial-organizational (I-O) psychologists are all about what makes us tick in the workplace, so it’s unsurprising that the Society for Industrial and Organizational Psychology (SIOP) releases an annual “Top 10 Workplace Trends” list. Equally unsurprising, but interesting nonetheless, is that the list for 2015 is highly tech-focused.

Judging from the list, which was compiled on the basis of a survey of SIOP’s 8,000 industrial-organizational psychologists, these folks appear to have a pretty good handle on technology trends, which clearly have had a significant impact on their views of the workplace in the coming year. Here’s their Top 10 list:


On January 1, 2015, version 3.0 of the PCI (Payment Card Industry) Data Security Standards replaced version 2.0 as the standard. In other words, what some financial institutions, merchants, and other credit card payments industry members already saw as an onerous process—complying with PCI standards and possibly being audited—is about to get even harder. While I can’t take the blood, sweat and tears out of PCI compliance, as an experienced Qualified Security Assessor (QSA) I can give you some context for why PCI is issuing a new version of its standards, and why 3.0 is a good thing for your business in the end.


By David Honour

As we enter a new year it’s always a good exercise to look ahead at potential changes in the coming 12 months and what these might mean for existing business continuity plans and systems. Will the strategies you had in place in 2014 remain fit for purpose, or will some reworking be necessary? What emerging threats need to be considered to ensure that new exposures are not developing? In this article I highlight three areas which are likely to be the biggest generic business continuity challenges in 2015.

The rise and rise of information security threats

2014 was the year that information security related incidents took many of the business continuity headlines, with attacks increasing in sophistication, magnitude and impact. This situation is only going to get worse during 2015.

The greatest risk is that of a full-on cyber war breaking out, which would inevitably result in collateral damage to businesses. The first salvoes have been seen in a potential United States versus North Korea cyber war; but other state actors are also well geared up for cyber battle, including Israel, Russia, China and India. The cyber-warfare skills of terrorist groups such as ISIS should also not be under-estimated.


Wednesday, 07 January 2015 00:00

How We Get Work Done: Good Old Email

While attention is focused this week on the CES 2015 show in Las Vegas and all the new technology, gadgets and apps that may change the way we work in the near future, Pew Research has a reminder of the technology that we truly consider indispensable at work: Email and the Internet.

After a survey of 1,066 adult Internet users, Pew Research analyzed results from those who have full- or part-time jobs. When it comes to the digital work lives of these respondents, the findings indicate, the tools designated as “very important” are nothing new. Sixty-one percent named email, 54 percent “the Internet,” and 35 percent a landline phone. Cell phones and smartphones trailed at 24 percent, and social networking sites grabbed a measly 4 percent.

Pew notes that email is still king despite increasing awareness of drawbacks, including “phishing, hacking and spam, and dire warnings about lost productivity and email overuse.” In fact, 46 percent of respondents said they think they are more productive with their use of email and other digital tools; 7 percent say they are less productive. Being more productive, these workers report, includes communicating with more contacts outside the company, more flexible work hours, and more hours worked.


Wednesday, 07 January 2015 00:00

Frigid Weather Heightens Ice Hazards

Freezing weather now sweeping across much of the U.S. brings a greater risk of ice storms and underlines the need for careful planning and heightened safety measures.

In fact, it does not take much ice to create disaster conditions. Even a thin coat of ice can create dangerous conditions on roads. Add strong winds and you have a recipe for downed trees and power lines, bringing outages that can last for days.


When it comes to mobile computing, MSPs should be gearing up for a lot more complexity going into 2015. For all practical purposes, usage of mobile computing devices has been fairly limited to accessing email and using browsers to surf the web. But by the end of this year most employees will probably have as many as five to 10 applications developed by the companies they work for running on their devices. For MSPs, that means developing the capability to manage mobile applications, not just the devices they run on, will be a critical requirement in 2015.

According to Phil Redman, vice president of mobile solutions and strategy for Citrix, mobile applications almost by definition will be accessing a mix of backend services running on premises and in the cloud. As such, IT organizations will be looking to work with MSPs that not only have application management expertise, but also familiarity with the entire scope of their enterprise IT operations.


Policy uncertainty at home and economic and geopolitical risks overseas are the central challenges facing chief financial officers (CFOs) of the UK’s largest companies as they enter 2015, according to a survey by Deloitte.

Deloitte’s latest CFO Survey gauged the views of 119 CFOs of FTSE 350 and other large private UK companies. It found that risk appetite among CFOs fell in Q4 2014. 56 percent of CFOs say that now is a good time to take greater risk onto their balance sheets, down from a record reading of 71 percent in Q3 2014 but still well above the long-term average. The change was driven by concerns over political and economic risk uncertainties: when asked to rate the level of risk posed between 0 and 100, CFOs attached a 63 rating to the UK General Election and 56 to deflation and weakness in the Euro area and to a possible referendum on the UK’s membership of the EU. The level of risk posed by each factor has risen in the last three months. 60 percent of CFOs enter 2015 with above normal, high or very high levels of uncertainty facing their businesses, up from a low of 49 percent in Q2 2014 but at the same level seen 12 months ago.

Ian Stewart, chief economist at Deloitte, said: “The central challenges facing the UK’s largest companies as they enter 2015 are policy uncertainty at home and economic and geopolitical risks overseas. Rising levels of uncertainty have caused a weakening of corporate risk appetite which, nonetheless, remains well above the long-term average.”


According to preliminary estimates, total economic losses from natural catastrophes and man-made disasters were USD 113 billion in 2014, down from USD 135 billion in 2013. Of the total economic losses, insurers covered USD 34 billion in 2014, down 24 percent from USD 45 billion in 2013. Disaster events claimed around 11,000 lives in 2014.

Of the estimated total economic losses of USD 113 billion in 2014, natural catastrophes caused USD 106 billion, down from USD 126 billion in 2013. The outcome is well below the average annual USD 188 billion loss figure of the previous 10 years. The total loss of life of 11,000 from natural catastrophe and man-made disaster events this year is down from the more than 27,000 fatalities in 2013.

Insured losses for 2014 are estimated to be USD 34 billion, of which USD 29 billion were triggered by natural catastrophe events compared with USD 37 billion in 2013. Man-made disasters generated the additional USD 5 billion in insurance losses in 2014.


The BCI has published an updated version of its guide to business continuity legislation, regulation, standards and guidance around the world.

Although not completely comprehensive, the guide is probably the best currently available.

The guide starts by listing current and projected international initiatives, particularly those supported by the International Organization for Standardization (ISO), the European Union (EU) and the Basel Committee on Banking Supervision.

Each entry is categorized into one of four headings:

Legislation: Government laws which include aspects of business continuity management by name or are sufficiently similar in nature (disaster recovery, emergency response, crisis management) to be treated as BCM legislation. To be included in this category they must be legally enforceable legislation passed by a national, federal, state or provincial government.

Regulations: Mandatory rules or audited guidance documents from official regulatory bodies.

Standards: Official standards from national (and international) accredited standards bodies which relate to business continuity as a whole or to a specific related subset such as IT service continuity.

Good practice: Guidelines published as good (or best) practice by various authoritative bodies.


Tuesday, 06 January 2015 00:00

From the Extreme to the Mean

By 2050, most of the US coast can expect to see 30 or more days a year of floods up to two feet above high tide levels, says a new NOAA study.

The study, ‘From the Extreme to the Mean: Acceleration and Tipping Points for Coastal Inundation due to Sea Level Rise’, has been published in the American Geophysical Union’s online peer-reviewed journal Earth’s Future.

NOAA scientists William Sweet and Joseph Park established a frequency-based benchmark for ‘tipping points’: when so-called nuisance flooding, defined by NOAA’s National Weather Service as one to two feet above local high tide, occurs 30 or more times a year.

Based on that standard, the NOAA team found that these tipping points will be met or exceeded by 2050 at most of the US coastal areas studied, regardless of sea level rise likely to occur this century. In their study, Sweet and Park used a 1.5 to 4 foot set of recent projections for global sea level rise by year 2100 similar to the rise projections of the Intergovernmental Panel for Climate Change, but also accounting for local factors such as the settlement of land, known as subsidence.


People who manage a functional department or a business process may find it tough to set recovery objectives for what they manage so devotedly, day in and day out. That does not necessarily mean that they are not objective. Instead, they may not know how critical their part of the business is to the rest of the organisation. Without a measuring stick, they cannot confidently make recommendations or requests about suitable recovery times. So when the next business continuity planning moment comes along, BC managers may find that they have some handholding and educating to do to bring different organisational units up to speed.


Tuesday, 06 January 2015 00:00

Are We Closing in on the Quantum Enterprise?

The prevailing narrative in enterprise circles these days is that things will keep getting bigger: Big Data, regional data centers, hyperscale … everything is aimed at finding the magic formula that allows organizations to deal with larger workloads at less cost.

It is ironic, then, that one of the ways researchers are hoping to tackle this problem is by shrinking the basic computing elements – processing, storage and networking – to atomic and even sub-atomic levels in order to derive greater power and efficiency from available resources.

So-called quantum computing (QC) has been a facet of high-performance architectures for some time, but lately there has been steadily increasing buzz about enterprise applications as well.


I’m back at my desk after a relaxing holiday vacation. It was a pretty quiet time for cybersecurity, too. The only really disturbing news I saw during my holiday involved a data breach at Chick-fil-A and the new theory that the Sony breach likely wasn’t done by North Korea but by an insider (but then again, some of us were questioning insider involvement from the beginning).

You and I know too well that this little lull in cybersecurity news won’t last very long, but I do think that this is a good time for companies to review their cybersecurity procedures and policies. We saw the damage from the fallout after the Sony incident and I think Target is still picking up the pieces from its breach a year ago.

Near the end of 2014, Ponemon released a study, “2014 Cost of Cyber Crime Study: United States,” that shows just how expensive and damaging a breach can be: It revealed that it can cost upwards of $20,000 a day for incidents that may take, on average, a month to fix. Jon Oberheide of Duo Security pointed out that SMBs need to be especially concerned about these breach costs, telling me in an email:

While the mega-breach-du-jour gets the most media attention, Ponemon's study calls out an important distinction: The impact of breaches is much greater on small and medium businesses than on large enterprises. The real challenge in cybersecurity is how to protect the millions of businesses who don't have an enormous security budget or a large roster of top security talent to defend their organization. And yet, they face the same attacks and adversaries as the big guys. So while companies like Sony face dramatic consequences in the short-term, they will rebuild, recover, and revisit their security strategy to continue their operations in the long-term. But if you're not a Sony-scale company, you may just have your business effectively wiped out.
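Taken at face value, the Ponemon figures quoted above imply a sobering per-incident price tag. A quick sketch (the numbers below simply combine the daily rate and the average resolution time cited in the study; the result is a rough lower bound, not a figure from the report itself):

```python
# Implied lower-bound cost of one incident, from the figures quoted above:
# "upwards of $20,000 a day" for incidents that take "on average, a month" to fix.
daily_cost = 20_000       # dollars per day (Ponemon lower-bound estimate)
days_to_resolve = 30      # roughly a month

implied_incident_cost = daily_cost * days_to_resolve
print(f"Implied lower-bound cost per incident: ${implied_incident_cost:,}")
# -> Implied lower-bound cost per incident: $600,000
```

For an SMB without a large security budget, a single incident on that scale can be existential, which is precisely Oberheide's point.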


Tuesday, 06 January 2015 00:00

Winter Weather and Cat Losses

With frigid temperatures and snow expected to fall around the New York City area and other parts of the United States this week, it’s a good time to review how winter storms can impact catastrophe losses.

For insurers, winter storms are historically very expensive and the third-largest cause of catastrophe losses, behind only hurricanes and tornadoes, according to the I.I.I.

Despite below average catastrophe losses overall in 2014, insured losses from winter storms were significant. In fact winter storms in the U.S. and Japan accounted for two of the most costly insured catastrophe losses in 2014.

According to preliminary estimates from sigma, extreme winter storms in the U.S. at the beginning of 2014 caused insured losses of $1.7 billion, above the average full-year winter storm loss number of $1.1 billion of the previous 10 years.


Increased supercomputing capacity will improve accuracy of weather forecasts


Today, NOAA announced the next phase in the agency’s efforts to increase supercomputing capacity to provide more timely, accurate, reliable, and detailed forecasts. By October 2015, the capacity of each of NOAA’s two operational supercomputers will jump to 2.5 petaflops, for a total of 5 petaflops – a nearly tenfold increase from the current capacity.

“NOAA is America’s environmental intelligence agency; we provide the information, data, and services communities need to become resilient to significant and severe weather, water, and climate events,” said Kathryn Sullivan, Ph.D., NOAA’s Administrator. “These supercomputing upgrades will significantly improve our ability to translate data into actionable information, which in turn will lead to more timely, accurate, and reliable forecasts.”

Ahead of this upgrade, each of the two operational supercomputers will first more than triple their current capacity later this month (to at least 0.776 petaflops for a total capacity of 1.552 petaflops). With this larger capacity, NOAA’s National Weather Service in January will begin running an upgraded version of the Global Forecast System (GFS) with greater resolution that extends further out in time – the new GFS will increase resolution from 27km to 13km out to 10 days and 55km to 33km for 11 to 16 days. In addition, the Global Ensemble Forecast System (GEFS) will be upgraded by increasing the number of vertical levels from 42 to 64 and increasing the horizontal resolution from 55km to 27km out to eight days and 70km to 33km from days nine to 16.
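The quoted resolution changes imply a substantial jump in computational work. A rough sketch makes this concrete; the arithmetic below is derived only from the grid spacings and vertical levels stated above (it is an illustration, not NOAA's own accounting, and ignores time-step and other costs):

```python
# Rough estimate of how many more grid points the upgraded models need,
# derived only from the resolutions quoted above.

def horizontal_point_factor(old_km: float, new_km: float) -> float:
    """Shrinking grid spacing in both horizontal directions squares the ratio."""
    return (old_km / new_km) ** 2

# GFS out to 10 days: 27 km -> 13 km horizontal resolution
gfs_factor = horizontal_point_factor(27, 13)

# GEFS: 55 km -> 27 km horizontally, plus 42 -> 64 vertical levels
gefs_factor = horizontal_point_factor(55, 27) * (64 / 42)

print(f"GFS horizontal grid points: ~{gfs_factor:.1f}x more")
print(f"GEFS grid points (incl. vertical levels): ~{gefs_factor:.1f}x more")
```

Even these rough factors (roughly 4x for GFS, 6x for GEFS, before accounting for the shorter time steps finer grids require) show why the jump to multi-petaflop capacity is a prerequisite for the model upgrades.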

Computing capacity upgrades scheduled for this month and later this year are part of ongoing computing and modeling upgrades that began in July 2013. NOAA’s National Weather Service has upgraded existing models – such as the Hurricane Weather Research and Forecasting model, which did exceptionally well this hurricane season, including for Hurricane Arthur which struck North Carolina. And NOAA’s National Weather Service has operationalized the widely acclaimed High-Resolution Rapid Refresh model, which delivers 15-hour numerical forecasts every hour of the day.

“We continue to make significant, critical investments in our supercomputers and observational platforms,” said Louis Uccellini, Ph.D., director, NOAA’s National Weather Service. “By increasing our overall capacity, we’ll be able to process quadrillions of calculations per second that all feed into our forecasts and predictions. This boost in processing power is essential as we work to improve our numerical prediction models for more accurate and consistent forecasts required to build a Weather Ready Nation.”

The increase in supercomputing capacity comes via a $44.5 million investment using NOAA's operational high performance computing contract with IBM, $25 million of which was provided through the Disaster Relief Appropriations Act of 2013 related to the consequences of Hurricane Sandy. Cray Inc., headquartered in Seattle, plans to serve as a subcontractor for IBM to provide the new systems to NOAA.

“We are excited to provide NOAA’s National Weather Service with advanced supercomputing capabilities for running operational weather forecasts with greater detail and precision,” said Peter Ungaro, president and CEO of Cray. “This investment to increase their supercomputing capacity will allow the National Weather Service to both augment current capabilities and run more advanced models. We are honored these forecasts will be prepared using Cray supercomputers.”

"As a valued provider to NOAA since 2000, IBM is proud to continue helping NOAA achieve its vital mission," said Anne Altman, General Manager, IBM Federal. "These capabilities enable NOAA experts and researchers to make forecasts that help inform and protect citizens. We are pleased to partner in NOAA's ongoing transformation."

NOAA's mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources.

After deciding to focus its efforts squarely on the mainframe at the end of 2014, Compuware is starting 2015 off with the launch today of Topaz, a data virtualization framework that makes mainframe data more accessible.

Compuware CEO Chris O’Malley says that with the vast amounts of enterprise data that reside on the mainframe, one of the core challenges organizations face is finding ways to make that information accessible to the entire organization. Topaz, says O’Malley, provides a layer of abstraction that makes that data accessible without having to intimately understand how, for example, a COBOL application was constructed.

O’Malley says Topaz will enable IT organizations that still depend on mainframes to run their most mission-critical applications to introduce more flexibility by not only making that data available via a single user interface, but also enabling users to copy that data using a simple drag-and-drop file transfer utility.


(TNS) — An ice storm 10 years ago proved a learning experience for some local agencies, and proof of proper preparedness for others.

The ice storm of 2005 left more than 75,000 residents without power for several days, killed four people and devastated the city and county.

Looking back, Russ Decker, director of the Allen County Emergency Management Agency, said the neat thing about the storm was Allen County’s actions after it.

“When it was over, the first thing everybody wanted to do was get together and figure out what we can do” better next time, he said.

The results left more municipalities and county agencies ready in case there is a repeat of 2005’s disaster.


High-profile data breaches at well-known companies such as Home Depot, Staples and Sony have shined a bright spotlight on data security, or the lack of it. But these breaches have also raised an alarm within these public companies and other organizations. Many more companies, including big IT service providers, have elevated the job of IT security to the C-level, a highly visible response to what is now a highly visible issue.

“Security jobs are being moved to the C-suite because the billions lost to data breaches are a C-level problem,” said Arthur Zilberman, CEO of a New York-based computer repair company.


Traditionally, insurance agencies do not reward companies that stay out of trouble. The idea is to split the cost of compensation to a few unfortunate enterprises among the larger number of all enterprises that take out an insurance policy. Compensation is paid according to the nature of the insurance claim presented and the terms of the policy. However, it can only be made if risks can be evaluated and damage calculated. Some aspects such as damage to a company’s brand may be impossible to assess, even if they have a major negative impact. Insureds and insurers try to work with quantifiable factors. But smart enterprises know there is additional leverage to be gained when putting insurance in place.


Business Continuity planning and maintenance cycles often leave little time and few resources for planning how the organization will react if the unexpected actually occurs (its Incident Response).

An analogy can be drawn between healthcare and Business Continuity Management: planning and plan exercise cycles are analogous to maintaining a healthy lifestyle and having regular medical checkups. And when something serious occurs, the medical care system is prepared to react. So should your BCM program be.


Trapp Technology has unveiled disaster recovery (DR) services that are designed to deliver physical or virtual data replication, redundant connectivity and high availability of IT infrastructure during downtime recovery.

The Scottsdale, Arizona-based managed service provider (MSP) said its DR services can instantly initiate seamless recovery of applications and data.

"Our clients consistently asked us to assist them with more of their technology needs, disaster recovery being one of the most common requests," DJ Jones, Trapp's vice president of sales and marketing, told MSPmentor. "With the high demand for disaster recovery services, we made sure [these were] a priority."


By Rachel Weingarten

Some brands stay fresh and relevant generation after generation. What makes certain corporate branding strategies timeless while others come and go?

Take Brooks Brothers. No less a person than Abraham Lincoln was one of the brand’s most loyal customers. So how does a nearly 200-year-old company not only stick around, but remain relevant and even cutting-edge?


By Andrew Hiles

Service level agreements (SLAs) and business continuity go hand-in-hand: or they should do!

Whether SLAs are implemented in support of a balanced scorecard to align information and communications technology with business mission achievement, or as a stand-alone initiative, the strategic use of service level agreements can be a perfect solution to the justification of investment in resilience and business continuity: an approach I have been advocating for over ten years.

How does it work?

First, define the business mission.

Take, as an example, a multinational company – call it Klenehost - selling miniature packs of soap, shampoo, hair conditioner and shower gel to the hotel industry. These are packaged in different ways and customized for specific hotel chains.


Monday, 05 January 2015 00:00

Cybersecurity predictions for 2015

Proofpoint looks at how information security threats are likely to evolve during the coming year.

2014 was a year in which information security vaulted into the public eye, driven by a surge in both the number and the visibility of data breaches and compromises. This new attention will bring greater scrutiny in 2015, just as the nature and severity of threats continue to evolve for the worse.

Cyberextortion will be the most rapidly growing new threat family

Beginning with the rapid rise of CryptoLocker in late 2013, the threat from ransomware expanded rapidly in 2014, adding not only other ‘extortion malware’ but also spreading to mobile platforms such as Android. Paying the ransom remains arguably a popular option despite its risks, and the estimated $3 million in ransoms generated by CryptoLocker alone has shown cybercriminals the revenue potential of digital extortion schemes. These attacks are difficult to defend against and costly to recover from, and lead to business disruption that extends far beyond the loss of data.


Monday, 05 January 2015 00:00

The cloud in 2015

Steven Harrison predicts how business use of cloud computing will develop and change during the next 12 months:

Hybrid is the equaliser

Whilst cloud computing has become an integral part of IT systems, concerns around vendor lock-in, licensing restrictions and security mean that businesses are still resistant to moving all IT operations into a hosted environment. As a result, the hybrid cloud will become the deployment model of choice for those organizations that want to leverage the elasticity of the cloud in tandem with existing infrastructure. The challenge for organizations adopting a hybrid approach is ensuring that systems can run in parallel and operate as one environment to guarantee performance and uptime.


A recent survey from antivirus software provider ThreatTrack Security showed that 81 percent of IT security professionals said they would "personally guarantee that their company's customer data will be safe in 2015."

The ThreatTrack Security survey, titled "2015 Predictions from the Front Lines," also revealed 94 percent of respondents said they are optimistic that their organization's ability to prevent data breaches will improve next year.


Monday, 05 January 2015 00:00

A New Year to Prepare

It is that time of year again, a time to reflect on another year gone by and prepare for the new year to come. It is time to dust off last year’s resolutions and come up with a new list of things to accomplish in 2015. While researching the latest diet trend and signing up for the newest exercise class or in between swearing off your guilty pleasures, vowing to set your alarm earlier, and promising to be better at staying in touch, do yourself a favor and add these five simple preparedness resolutions to your list.

1.  Make or update your emergency kit.

If you don’t have an emergency preparedness kit in your house and car, it’s time to get one.

Gather water, food, flashlights, batteries, and a first aid kit into a container or bag and store it in an easy-to-access area of your house or car.

If you already have an emergency kit, take time to review what is in it. Does your extra pair of clothes still fit? Do the flashlights need new batteries? Are all your important documents up to date? Having an emergency kit in your home or car will not be of use during an emergency if your kit is out of date or missing adequate supplies.

For more information on what to include in your emergency kit, visit CDC’s webpage.

2.  Form a support network (talk to your neighbors).

New Year’s Eve parties are a great time to catch up with friends and family. Why not use this time surrounded by those you love to talk about preparing for an emergency? Talk to your neighbors about forming a support network and make a plan to check on each other after a disaster occurs. Talk to people close to you about any physical limitations or special medical needs you may have during an emergency. During an emergency it is usually the people in closest proximity that are first to offer aid, and while it may not be the typical topic of conversation at your New Year’s Eve bash, it is an important discussion to have.

3.  Prepare your family (older adults, kids and pets).

When making all your plans to prepare, don’t forget your family. Talk to older adults in your life about their emergency preparedness plans, and ask them how you can help. Make sure your kids are involved in your emergency preparedness planning. Help them understand and be part of natural disaster planning with CDC’s Ready Wrigley. Also, don’t forget your pets. Include food and water for your furry friends in your emergency kit, and identify pet friendly evacuation shelters in your area.

4.  Join an alert network (app, weather radio, email updates).

It’s 2015 and even though we may not have flying cars or time machines, we do have some great technology for tracking and alerting us to natural disasters that may be in our area. Rather than downloading the latest video game or dating app, make sure your phone and computer have alert systems set up to notify you when dangerous weather is in your area. Consider setting up push notifications or email alerts that let you know when a natural disaster may be coming.

5.  Weatherize your home and review your insurance.


The New Year is a perfect time to review your insurance plan and evaluate your home. Install or check smoke detectors and carbon-monoxide alarms in your house. Make sure you know where the utility off and on switches are located. During leaks or when evacuating your home, knowing how to turn off your gas, water, and electricity could help prevent damage to your home and protect your health. Also, check your insurance policy and make sure you are covered for possible flooding or structural damage to your home and property.

Taking time to prepare for emergencies and natural disasters now could be the most important thing you do this year.

Winter has officially begun, and the next few months could create both opportunities and challenges for many managed service providers (MSPs).

While winter can bring snow, sleet and other inclement weather, MSPs can provide data backup and disaster recovery (BDR) solutions to help businesses safeguard data and ensure that companies can access this information in any conditions.

And with December quickly drawing to a close, and winter weather on the horizon in cities and towns across the country, now is the perfect time to review this month's top BDR lessons for MSPs.


Few things are as integral to the data center as the server. I know, technically it is only one leg of the three-legged data stool, along with storage and networking, but the server is where the actual processing takes place – the brains of the operation, so to say – so it is understandable that data executives are a little apprehensive about the divergent path that server development is taking.

On the one hand, mainframes still run a fair amount of the enterprise load, despite numerous calls for the technology to be put out to pasture. At the same time, blade and microservers are showing that they are equally adept at handling massive data loads, particularly when it comes to the parallel processing of multiple data streams that characterize modern Web-facing applications.

In between, there is a plethora of high-power, medium-power and low-power solutions, not to mention the rise of modular infrastructure that could make the whole idea of disparate systems obsolete. So it is no wonder the data center executive is having trouble seeing the future.


In 2013 Continuity Central conducted a survey to explore quality control methods that are being used within business continuity management systems. This survey has now been repeated to see how the trends in this area have changed.

The 2014 Quality control and measurement of business continuity management systems survey was conducted online using SurveyMonkey and received 142 responses in total. 84.5 percent of respondents were from large organizations (those with more than 250 employees). Respondents came from around the world, with the most coming from the US (34 percent), the UK (25 percent) and Australia (6 percent).

The survey initially asked “Does your organization have clear processes or methods for the quality control of business continuity plans and systems?” 66.9 percent of respondents said yes, their organization did have clear processes or methods, while 29.6 percent said it did not. This was a very similar result to the 2013 survey, where 64.9 percent answered ‘yes’ and 30.2 percent answered ‘no.’


They say that information drives business. Actually, it’s electricity: your data will most likely be useless if you have no power. On the other hand, if you can turn the lights on, you can start working, one way or another. But now, in a kind of millennial Möbius loop, information is also increasingly driving power distribution. Smart grids are a case in point. The benefits are higher power transmission efficiency, reduced costs, better peak load handling and better integration of customer-owned generating systems. The risk is in the network security.


Information technology downtime is a costly proposition. Based on industry surveys, it can cost an organization as much as $5,600 a minute, or well over $300,000 per hour in losses, according to IT research firm Gartner. But the costs and complexities of traditional approaches to disaster recovery can be expensive too, especially for smaller jurisdictions. As a result, some cities are leveraging the cloud to provide a cost-effective way to maintain services in the event of a local or regional emergency.
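As a quick sanity check, the per-minute and per-hour figures cited above scale as you would expect; a minimal sketch, assuming Gartner's survey average as the rate (the outage lengths below are hypothetical illustrations):

```python
# Rough downtime-cost estimate from the per-minute industry average cited above.
# COST_PER_MINUTE is Gartner's survey figure; outage lengths are hypothetical.

COST_PER_MINUTE = 5_600  # USD per minute of IT downtime

def downtime_cost(minutes: float) -> float:
    """Estimated loss in USD for an outage of the given duration in minutes."""
    return COST_PER_MINUTE * minutes

print(downtime_cost(60))       # one hour -> 336000
print(downtime_cost(8 * 60))   # a full working day -> 2688000
```

The one-hour figure of $336,000 matches the article's "well over $300,000 per hour."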

Asheville, N.C., historically maintained its data center redundancy through a local disaster recovery center, located just two blocks from the city’s primary data center. But when Asheville CIO Jonathan Feldman came on board, that scenario made him uncomfortable. 

“Anything that can take out City Hall can probably impact something that’s two blocks away as well,” he said. “It was sort of a thorn in my side. But disaster recovery is not the easiest thing to get money for, so we struggled a bit to find a solution.”


In 2015, cybercriminals will increasingly be non-state actors who monitor and collect data through extended, targeted attack campaigns, McAfee Labs predicts. In the group’s 2015 Threats Predictions, Intel Security identified internet trust exploits, mobile, internet of things and cyber espionage as the key vulnerabilities on next year’s threat landscape.

“The year 2014 will be remembered as ‘the Year of Shaken Trust,’” said Vincent Weafer, senior vice president of McAfee Labs. “This unprecedented series of events shook industry confidence in long-standing Internet trust models, consumer confidence in organizations’ abilities to protect their data, and organizations’ confidence in their ability to detect and deflect targeted attacks in a timely manner. Restoring trust in 2015 will require stronger industry collaboration, new standards for a new threat landscape, and new security postures that shrink time-to-detection through the superior use of threat data. Ultimately, we need to get to a security model that’s built-in by design, seamlessly integrated into every device at every layer of the compute stack.”

McAfee Labs predicts the top cybersecurity threats in 2015 will be:


Tuesday, 23 December 2014 00:00

Data Center 2015: Where Do We Go from Here?

’Tis the season for year-end wrap-ups and year-ahead predictions, so as in past years I will take a look at what some of the key industry players are saying and then offer my own take as to what looks real and what looks imaginary.

One of the broadest discussions of late is the future of the data center itself. As virtualization, the cloud and software-defined architectures gain in popularity, it is not hard to imagine a software-defined data center (SDDC) consisting of an end-to-end data environment sitting entirely atop the virtual layer with nearly all hardware, save the client device outsourced to a third-party provider.

This is part of what IDC describes as the 3rd Platform of innovation and growth. Accompanied by advances like mobile computing, Big Data analytics and social networking, the 3rd Platform characterizes what the firm says is the “new core of ICT market growth” and is already responsible for about a third of the total IT spend. For 2015, IDC expects raw compute and storage capacity to shift to cloud-based resources optimized for mobile and Big Data applications, and this will lead to the rise of “cloud first” hardware development – particularly consolidated solutions that cater to hyperscale infrastructure.


By Natalie Burg

“You’ll shoot your eye out!”

Just when you thought that much-loved line couldn’t mean any more or less than it did the last 500 times you heard it, the popular movie A Christmas Story includes some business lessons you may have overlooked.

Business lessons in a holiday movie? You bet your Red Ryder, carbine action, 200-shot, range model air rifle. In this post we revisit good ol’ Cleveland Street to uncover five business lessons that can be learned from the cinematic classic.


Insurance companies face strict business uptime, data management and data protection requirements, and as a result, these businesses need data backup and disaster recovery (BDR) and business continuity solutions that fulfill these needs.

However, managed service providers (MSPs) can offer data BDR and business continuity solutions with image-based backup to help insurance companies back up files, programs and other important information quickly and easily.


The Sony hack, and the subsequent threats to the company and its supply chain, has become the biggest information security story of 2014, in a year of many high-profile incidents. What started out as ‘yet another breach story’ a few weeks ago rapidly developed into a very real business continuity and reputation-threatening incident.

On December 19th the FBI published an update on the Sony cyber attack. The highlights include:


WatchGuard Technologies is urging organizations to use the nearly epic scale of the Sony cyber attacks to spur their companies into action rather than panicking about potential risks.

"A year ago, we predicted major state-sponsored attacks may bring a Hollywood movie hack to life that exploits a flaw against critical infrastructure – we just didn't predict it would happen to Hollywood itself," said WatchGuard's Global Director of Security Strategy, Corey Nachreiner. "It's important that IT pros use this opportunity to upgrade what is often five-year-old technology to defend against five-day-old threats."

"The FBI is right when it says that less than 10 percent of companies could survive an attack like the one on Sony," continued Nachreiner. "And, unfortunately, it's not a question of if, but when for these kinds of attacks."

Nachreiner recommends five immediate actions that organizations can take to make sure they have the best possible chance of preventing attacks, and seven actions to minimize damage if cyber criminals do get in:


Monday, 22 December 2014 00:00

2015: The Year of Agile Data Warehousing

2015 will be the year that agile data warehouse (DW)/business intelligence (BI) takes off.  Traditional strategies for DW/BI have been challenged at best, with the running joke being that a DW/BI team will build the first release and nobody will come. On average, Agile strategies provide better time to market, improved stakeholder satisfaction, greater levels of quality, and better return on investment (ROI) than do traditional strategies. The DW/BI community has finally started to accept this reality, and it is now starting to shift gears and adopt agile ways of working. My expectation is that 2015 will see a plethora of books, case studies, and blog postings describing people’s experiences in this area.


Monday, 22 December 2014 00:00

Data Center Efficiency: Look Before You Leap

Efficiency in the data center is a big thing now, with organizations of all sizes working to develop both the infrastructure and the practices that can help lower the energy bill. But while analysis of data flows and operating characteristics within equipment racks is fairly advanced, the ability to peek under the covers to see how energy is actually being used is still very new.

To be sure, there is a variety of tools on the market these days, from simple measurement devices to full Data Center Infrastructure Management (DCIM) platforms, but more often than not the question revolves around not only what to measure, but how.

Without adequate insight into what is going on, it is nearly impossible to execute an effective energy management plan, says UK power efficiency expert Dave Wolfenden. Many standard tests, in fact, fail in this regard because they attempt to gauge the upper capabilities of power and cooling equipment, not how to maintain maximum efficiency during normal operation. New techniques like computational fluid dynamics (CFD) can help in this regard, but they must be employed with proper baselines in order to give a realistic indication of actual vs. projected results.


In 2015, almost every CIO will be tasked with assessing their organizations and technology to ensure data and confidential information is protected.

Current Situation

Target, Home Depot, Staples, who’s next? These are just the most recent retail outlets that made the news. What is not making the headlines are the multitude of private- and public-sector organizations that have been hacked and lost data and information — many times totally unaware until after the fact.


As the Ebola outbreak in West Africa led many to be concerned about U.S. capability to respond to infectious disease threats, an annual report shows only half of states score well on 10 key public health measures.

Many states scored poorly on measures of communication and coordination responses to threats, vaccination rates and infections from contact with the health care system, according to the report, released annually by the Robert Wood Johnson Foundation and Trust for America’s Health. 

"Over the last decade, we have seen dramatic improvements in state and local capacity to respond to outbreaks and emergencies,” said Jeffrey Levi, executive director of the Trust, in a statement. “But we also saw during the recent Ebola outbreak that some of the most basic infectious disease controls failed when tested.”


Friday, 19 December 2014 00:00

Cyber Risk on the Inside

While the Sony cyber attack has put the spotlight on sophisticated external attacks, a new report suggests that insiders with too much access to sensitive data are a growing risk as well.

According to the survey conducted by the Ponemon Institute, some 71 percent of employees report that they have access to data they should not see, and more than half say this access is frequent or very frequent.

In the words of Dr. Larry Ponemon, chairman and founder of The Ponemon Institute:

“This research surfaces an important factor that is often overlooked: employees commonly have too much access to data, beyond what they need to do their jobs, and when that access is not tracked or audited, an attack that gains access to employee accounts can have devastating consequences.”


One of the side effects of the consumerization of IT is that some end customers are feeling more empowered than ever to take IT matters into their own hands rather than seek the help of IT solution providers. This is especially true when it comes to cloud services, where business owners (or their employees) can self-install a cloud backup product and instantly have access to 5 GB or more of free cloud storage. Even if business owners aren't actively involved in using or promoting DIY (do-it-yourself) cloud services, research shows their employees are. A study from Skyhigh Networks, which monitors the use of cloud services for businesses, found that the average enterprise uses 545 cloud services, which is approximately 500 more than the average CIO is aware of!

Besides the loss of control of corporate data, DIY cloud services play into the hands of cybercriminals who exploit business owners through ransomware. Like other malware, ransomware infects corporate networks through unpatched computers or when a user clicks on an infected email attachment. Once launched, the ransomware program encrypts common user files on the network--such as documents, spreadsheets and database files--and the victim is required to pay a ransom to decrypt the files.


Friday, 19 December 2014 00:00

2014: The Perfect Malware Storm

IT security may be an MSP’s core offering or one of several lines of business. But regardless of its business model, a service provider should take stock of the current threat landscape. MSPs need to know what’s out there if they hope to help clients mitigate their security risks.

What are your customers up against? In 2014, they endured the perfect malware storm. Consider the following:


Friday, 19 December 2014 00:00

How to Turn Open Data into Real Money

I recently interviewed a technology start-up that claimed they were already profitable, with only a few clients and a few months out the door. I have no way to verify or deny that, but I can tell you this: The entire product is built around open data.

In fact, its founders adamantly refused to let me call it a technology company, which is just one of many reasons I’m not revealing its name.

“Our product is the data,” one VP repeatedly told me.

That’s a bit of a bold claim for a company based on government-released data and other open data sets. If it were really the data, and everybody has access to the data, then what’s the point?


Security pros got the Target breach for Christmas last year. The breach hit the retailer during its busiest time of the year and cost it millions in lost business. For security pros desperate for more budget and business prioritization, you couldn’t have asked for a more perfect present - it’s as if Santa himself came down the chimney and placed a beautifully wrapped gift box topped with a bow right under your own tree. This year it looked as if all we were getting was a lump of coal - but then Sony swooped in to save us like a Grinch realizing the true meaning of Christmas.
The Sony Picture Entertainment (SPE) breach is still unfolding, but what we know so far is that a hacktivist group calling themselves the Guardians of Peace (GoP) attacked Sony in retribution for the production of a movie, “The Interview,” which uses the planned assassination of North Korea’s leader as comedic fodder. The hacktivists supposedly stole 100 TBs of data that they are gleefully leaking bit by bit (imagine Jingle Bells as the soundtrack). The attack itself affected the availability of SPE’s IT infrastructure, forcing the company to halt production on several movies.
We’ll be releasing a more detailed analysis for clients later this afternoon, but at a high level, there are several reasons why this attack is in the news every day, why it will prove to be yet another turning point in the security industry, and why security is so integral to the business technology (BT) agenda:

(TNS) — Think the Napa fault stopped moving after producing a 6.0 earthquake in August? Think again.

The fault that caused that Napa quake is forecast to move an additional 2 to 6 inches in the next three years in a hard-hit residential area, a top federal scientist said at a meeting of the American Geophysical Union in San Francisco on Tuesday.

It is the first time scientists have formally forecast the gradual shifting of the ground in a residential area after an earthquake.

“Until the South Napa earthquake happened, we had not clearly foreseen just what a problem that could be,” U.S. Geological Survey geophysicist Ken Hudnut said.


It is fascinating to watch a new class of software be born. This doesn’t seem to happen that often anymore, but every once in a while a customer or a vendor discovers a gap in the current offerings and fills that gap with something we have never seen before. I recently ran into an event like this at BMC Engage. BMC has a write-up that subtly points to the impending creation of this new security automation product class. And last week, I spoke to Tony Stevens, who works for the Department of Technology, Management and Budget at the State of Michigan and is helping shepherd the birth of this class. Let’s talk about that this week.


Have you ever thought about all the information your appliances tell you? The world is moving toward presenting instant data about every aspect of life. For example, there is now an electric toothbrush with Bluetooth capabilities that can record your brush strokes and let you chart your dental hygiene activities on a smartphone app. Home sensor products not only tell you if your teenager is trying to sneak out at night, but also how many times someone has been dipping into the cookie jar. And many of us can’t even exercise anymore without a fitness band and apps that record every step, every calorie expended, and every turn in our sleep.

While some of that real-time data is great to have, we’re also reaching a point of TMI … “too much information,” or data overload. How much is too much real-time data? Only you can answer that for your personal data needs, but I do know there is one area where there is never enough real-time data. That is in your company’s disaster recovery plan.

Think about a disaster striking your business. You could have all your subject matter experts in place, but if they can’t access data or if your recovery strategy isn’t complete, nothing will work. The consequences could be nothing short of catastrophic: for the vast majority of companies, once they have to shut down because of server problems or another disaster, they aren’t able to recover in a timely fashion. And let’s face it … a faltering or incomplete recovery can spell death for a business.


To customers, the cloud often seems like an ideally flexible application and data storage solution. On the other hand, starting as a cloud provider often requires very deep pockets. As a result, not every provider stays the course. And if under-capitalisation doesn’t kill a provider off, there is always the danger of a marketing failure that persuades backers to pull the plug. The irony of the situation is that many customers want to make their cloud provider a strategic part of their disaster planning. However, customers must then also extend their plan to include the possibility that the provider itself is the disaster.


Thursday, 18 December 2014 00:00

Even in the Cloud B&R Still Needs TLC

Data is the lifeblood of the modern enterprise, and as with most complex organisms, loss of blood can lead to weakness and death.

So it is no wonder that data recovery has emerged as a top priority as the enterprise finds itself trusting third-party providers for the care and maintenance of their lifeblood to an ever greater degree.

According to Veeam Software, application and data downtime is costing the average enterprise about $2 million per year, with the vast majority of that cost attributed to the failure to recover data in a reasonable amount of time. This presents a double-edged sword for IT: the pressure to improve recovery times is often accompanied by the front office’s reluctance to invest in adequate backup and recovery (B&R) infrastructure. Permanent data loss compounds the problem, as many organizations maintain backup windows and restore points that fail to account for how quickly potentially critical data can accumulate.
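The restore-point arithmetic behind that last point can be sketched simply; the backup interval and ingest rate below are hypothetical illustrations, not figures from the Veeam study:

```python
# Data written since the last successful backup is exposed to permanent loss.
# Both input values are hypothetical, chosen only to illustrate the arithmetic.

def data_at_risk_gb(hours_since_backup: float, ingest_gb_per_hour: float) -> float:
    """GB of new data that would be lost if primary storage failed right now."""
    return hours_since_backup * ingest_gb_per_hour

# A nightly backup window with a modest 50 GB/hour ingest rate:
print(data_at_risk_gb(24, 50))  # -> 1200
```

Even at modest ingest rates, a daily backup window leaves over a terabyte of new data unprotected, which is the gap the article says many organizations fail to account for.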

The cloud has done a lot to relieve the burden, financial and otherwise, of wide-scale B&R. In fact, this is one of the primary drivers of IaaS, according to ResearchandMarkets, in that it provides a ready platform to not only integrate backed-up data into dynamic production environments, but to maintain a duplicate IT infrastructure should primary resources go dark. IaaS also puts these capabilities within reach of the small-to-midsize enterprise.


Thursday, 18 December 2014 00:00

It’s 2015 – Do You Know Where Your Data Is?

The “Internet of Things” will take further hold and become more fully embedded as a reality in our society. However, a tipping point is likely to be reached in 2015 as public awareness of the potential for these technologies to violate personal privacy increases. This will lead to an associated public outcry for stricter controls and government legislation regarding how people, organizations and government collect and use this information. The public will no longer be satisfied to leave technology companies and users to self-police their uses of their personal data.

Surveillance and other technologies that permit the collection of data about people will continue to proliferate. Analytical tools are emerging to interpret this information, and to merge and use it in an increasingly integrated fashion to permit continuous monitoring of locational and other information about specific people and groups. Drones that are freely available in the open marketplace can be programmed to follow people and objects using GSM and other technologies as tracking beacons. Miniature homing devices that will facilitate tracking of locational information of objects and people are also freely available. Phone companies routinely collect data from everyone making cell calls on their networks. Because many phones have chips that stay on even after a battery has been removed, tracking powered-down phones is within the realm of possibility.


VMware predicted software-defined data centers (SDDC) would “hit it big” in 2013. Spoiler: That didn’t happen.

Nonetheless, the concept hasn’t gone away. In fact, IT Business Edge’s Infrastructure blogger, Arthur Cole, wrote about SDDCs several times this year, including a November article in which he called the idea “a work in progress.” He did a great job of summing up SDDCs and the current opinion of them.

Still, it raises the question: could 2015 be the year that SDDCs actually, finally, take off? Michael Hay thinks so.

Hay is the vice president of Product Planning at Hitachi Data Systems and chief engineer for the Information Technology Platform Division (ITPD). In a recent Information Week column, Hay predicted that SDDCs will be one of three disruptive trends in the coming year.


Wednesday, 17 December 2014 00:00

2015 Technology Predictions: An MSP Perspective

Somehow it got to be the week of December 15, which seems crazy to me because wasn’t it just last week that I was already breaking my week-old New Year’s resolutions? But the end of the year means that it’s time for predictions about 2015. What events and trends will rock the managed services world in 2015? Here’s this humble blogger’s take.


(TNS) — The hacking group behind the Sony cybersecurity attack has made its first physical threat.

In a message sent at around 9:30 a.m., the group — calling itself Guardians of Peace — issued a warning along with what appears to be files related to Sony Pictures CEO and Chairman Michael Lynton.

“We will clearly show it (our Christmas gift) to you at the very time and places ‘The Interview’ be shown, including the premiere, how bitter fate those who seek fun in terror should be doomed to,” the hackers wrote.

The hackers also invoked the Sept. 11, 2001, attacks, urging people to keep themselves “distant from the places at that time.”

“The world will be full of fear,” they wrote. “Whatever comes in the coming days is called by the greed of Sony Pictures Entertainment. All the world will denounce the SONY.”


(TNS) — The hostage crisis at the Lindt Chocolat Cafe in Sydney, Australia, unfolded in a way impossible a decade ago.

Much of it played out on Facebook and text messaging (already there as of 2004), and on YouTube, Twitter, and other social media as yet unborn in 2004. To be a hostage-taker or hostage as of 2014, it seems, you need good social-media skills.

"There's an unprecedented degree of immediacy to such crises now," says Lawrence Husick, senior fellow at the Foreign Policy Research Institute and co-director for the Center for the Study of Terrorism. "All the players are so acutely aware they're being watched. It's Shakespearean: All are walking that stage."


Rentsys Recovery Services plans to offer its BlackCloud Virtual Office business continuity solution to managed service providers (MSPs).

The College Station, Texas-based business continuity solutions company today announced it will work with MSPs, healthcare software companies and regional data center providers.

Rentsys introduced BlackCloud Virtual Office last month, and now, MSPs can offer this business continuity solution to their customers.


Wednesday, 17 December 2014 00:00

6 Corporate Holiday Gift-Giving Tips and Ideas

By Rachel Weingarten

If the idea of buying holiday gifts for your friends and family isn’t enough to send you into a tailspin, there’s the added pressure of trying to figure out what to buy for those you work with. With so many rules, both written and unwritten, it’s all too easy to make a corporate gift-giving gaffe. But with some thoughtfulness and a little planning, it is possible to give just the right gift to just the right person.

“Remember, gifts are a form of communication in the same way as what you write and what you say,” says Stephen Paskoff, CEO of workplace learning company ELI. “Give clients, employees and colleagues gifts in line with your organization’s values and standards, and keep in mind that what you give directly reflects your judgment and professionalism.”

Sure, you could always give a gift card, but they can feel too impersonal. The trick is to balance thoughtfulness and appropriateness. Here are some tips for giving corporate gifts that strike that balance, along with some suggestions to help inspire you:


Natural catastrophes and man-made disasters cost insurers $34 billion in 2014, down 24 percent from $45 billion in 2013, according to just-released Swiss Re sigma preliminary estimates.

Of the $34 billion tab for insurers, some $29 billion was triggered by natural catastrophe events (compared with $37 billion in 2013), while man-made disasters generated the additional $5 billion in insured losses in 2014.

Despite total losses coming in at below annual averages, the United States still accounted for three of the most costly insured catastrophe losses for the year, with two thunderstorm events and one winter storm event causing just shy of $6 billion in insured losses (see chart below).


Wednesday, 17 December 2014 00:00

Using a Risk Model as a Common Language

The central purpose of a common risk language is to assist management with evaluating the completeness of its efforts to identify events and scenarios that merit consideration in a risk assessment. Either management begins a risk assessment with (a) a blank sheet of paper with all of the start-up that choice entails, or (b) a common language that enables busy people with diverse backgrounds and experience to communicate more effectively with each other and identify relevant issues more quickly.

In a Corporate Compliance Insights column earlier this year, we provided a suggested language for executive management and directors to use in the Boardroom to focus the board risk oversight process. This month, we discuss the merits of a common language for use by the entire organization.

The sources of uncertainty an enterprise must understand and manage may be external or internal. Risk is about knowledge. When management lacks knowledge, there is greater uncertainty. Thus sources of uncertainty also relate to the relevance and reliability of information about the external and internal environment. These three broad groups – environment, process and information for decision making – provide the basis for an enabling framework summarizing the sources of uncertainty in a business.


Wednesday, 17 December 2014 00:00

2015 risk predictions

What emerging risks are likely to have an impact on organizations during 2015? Experts from The Institute of Risk Management give their views.

Political instability caused by low oil prices, increased shareholder activism and the business threat posed by a potential UK exit from the EU are among the chief concerns voiced by some of the UK’s leading risk experts for 2015.

As the year comes to a close, members of the Institute of Risk Management (IRM) were asked to identify key risk areas for 2015. A broad range of oil and gas, political, healthcare, regulatory and insurance risks were highlighted as potential flashpoints.


Wednesday, 17 December 2014 00:00

2015 cyber risk and data protection predictions

Businesses in 2015 are expected to experience increasing challenges as they struggle to contend with the burgeoning threat of complex cybercrime. EY analysis has outlined some of the key areas that cyber risks threaten to impact in the coming year, including the difficulties in the insurance sector of underwriting cyber risk, the raft of regulation coming out of both the EU and the UK, the importance of integrated risk functions in firms, and the cyber risk of supply chains moving to the cloud.

Insuring against cyber risk

Cyber risk poses a serious and growing threat to businesses across the UK, and companies are increasingly looking to insurers for protection against financial losses in the face of attacks. Certain sectors already require firms to take out cyber risk insurance for regulatory compliance. However, cybercrime is not a traditional area of risk for insurers, and the burden of underwriting the risk is proving to be very difficult.

Shaun Crawford, Global Head of Insurance at EY, comments: “Cyber risk will certainly be one of the biggest challenges to the insurance market in 2015. Cybercrime is a moving beast, making it impossible to quantify the risks neatly or to calculate them in an informed or consistent manner. With so much unknown, it’s not surprising that premiums are wildly different across the market, and without cross-market stability, the industry will most likely be operating on significant indemnity losses.


MSPs specializing in cloud-based file sharing may not be shocked to discover that end users frequently share data via insecure means. What might come as a surprise, however, is the fact that 20 percent of those files contain data directly related to compliance (or the lack thereof).

This statistic comes from a recent study that analyzed roughly 100 million files shared through public-cloud applications. You can see all the findings in this infographic, but here are a few key takeaways, specifically for MSPs:

Non-compliance is the norm, not the exception

Based on the numbers, most businesses are struggling to stay compliant. Some have not made it a priority at all. The compliance data that was shared on public clouds included personally identifiable information (PII), personal health information (PHI), and customer payment card information.

This fact presents opportunities for MSPs. Conveying to companies the importance of compliance, and the risk of leaving their data vulnerable, gives you the opportunity to bring them a solution. Creating a cloud file sharing system that stays compliant can bring them significant value.


Wednesday, 17 December 2014 00:00

Lessons Learned from Data Breaches

Recent data breaches have left some large organizations reeling as they deal with the aftermath. They include the Target data breach, compromises at Home Depot, JP Morgan, USPS (which exposed employee Social Security Numbers and other data) and, most recently, Sony Pictures. The Sony hack also proved to be embarrassing to some of the company’s executives, as private email correspondences were exposed.

Collateral damage from data breach is significant: one in nine customers affected by a data breach stopped shopping at a particular retailer. According to LifeLock, a recent survey of corporate executive decision-makers found that while concern for a breach is 4 or 5 on a 5-point scale, only 10% to 20% of their total cyber security budgets go to breach remediation. Establishing an incident response plan in advance can reduce the cost per compromised record by $17.
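To put that $17-per-record figure in perspective, a quick back-of-the-envelope calculation shows how the savings scale with breach size. The per-record savings is the survey figure cited above; the breach size here is a hypothetical, purely for illustration:

```python
# Hypothetical breach affecting 100,000 compromised records.
records = 100_000

# Per-record remediation savings attributed to having an incident
# response plan established in advance (survey figure cited above).
savings_per_record = 17

plan_savings = records * savings_per_record
print(f"Estimated remediation savings: ${plan_savings:,}")
# prints: Estimated remediation savings: $1,700,000
```

Even for a mid-sized breach, a pre-established response plan pays for itself many times over.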


Wednesday, 17 December 2014 00:00

Here Comes the Big Data

About two decades ago I thought I had a handle on big data. I was doing some data warehousing work with a telephone utility that had about 100 million transactions. That was a lot of data, I said to myself. Then, about 10 years ago, I was doing a review of a firm that audited financial trading on one of the major stock markets and I asked its big data guy how many transactions the company processed. His initial answer was, “On a slow day we get about 2.5 billion transactions.” “How many do you have on a busy day?” I asked with an air of shock. “4 or 5 billion,” he responded. Now that was really a lot of data.

Jump ahead a decade or so, and on 24 July 2014, Facebook announced that it was processing 1 trillion transactions per day. Now that is really, really big data. If you are a CEO, that is just one of the reasons why you should worry about having a big data strategy. Even if your organization isn’t a telecom utility or a financial institution, the amount of data you’re going to have to process is shooting up, what with all the smart (wireless) devices your customers and employees use heavily, plus the volumes of data beginning to flood the organization from all the IoT devices/systems that increasingly control any number of real-time systems.


Tuesday, 16 December 2014 00:00

Shaping mobile security

Keith Bird shows how a new approach to mobile security can help organizations achieve the right balance of protection, mobility and productivity.

Most of us are familiar with the ‘triangle’ project management model, which highlights the constraints on delivering results in projects. The three corners of the triangle are fast, good and cheap, showing that in any given project, all three attributes cannot be optimised: one will inevitably be compromised to maximise the other two. You can have a good project delivered quickly, but not cheaply, and so on.

It’s traditionally been the same in IT security, especially when it comes to mobility. In this case, the three corners of the triangle are security, mobility and productivity. Usually, organizations have taken one of two approaches: either enabled mobility to boost productivity, with security inevitably being compromised; or they’ve tried to deliver more effective security for mobile fleets, compromising productivity.

Recent research shows that a majority of organizations have used the first approach, with mobility racing ahead of security. We (Check Point) surveyed over 700 IT professionals worldwide about mobility and mobile device usage in their organizations, and 72 percent said the number of personal mobile devices connecting to their organizations' networks had more than doubled in the past two years. 82 percent expected mobile security incidents to grow over the next 12 months, with higher costs of remediation.


A Johns Hopkins University analysis has looked at how climate change will increase the risk of power outages for various major US metro areas.

Johns Hopkins engineers created a computer model to predict the increasing vulnerability of power grids in major coastal cities during hurricanes. By factoring historical hurricane information with plausible scenarios for future storm behavior, the team could pinpoint which of 27 cities, from Texas to Maine, will become more susceptible to blackouts from future hurricanes.

Topping the list of cities most likely to see big increases in their power outage risk are New York City; Philadelphia; Jacksonville, Fla.; Virginia Beach, Va.; and Hartford, Conn. Cities at the bottom of the list, whose future risk of outages is unlikely to dramatically change, include Memphis, Dallas, Pittsburgh, Atlanta and Buffalo.

Seth Guikema, an associate professor in the university’s Department of Geography and Environmental Engineering, said his team’s analysis could help metropolitan areas better plan for climate change.


Ever since the cloud burst onto the IT consciousness, the primary focus of most organizations has been to prepare for this new data paradigm. The thinking has been that the enterprise needs to be ready for the cloud or risk being left behind.

Lately, however, we’ve seen a subtle shift in attitude on the part of both the enterprise and the nascent cloud industry: It’s not the enterprise that needs to adapt to the cloud, but the cloud that needs to adapt to the enterprise. Across the board, from the large players like Amazon and Google to smaller ones like CloudSigma and DigitalOcean, the goal has shifted from providing the commodity resources that appeal to consumers to more specialized offerings that the enterprise values.

To be sure, there is no shortage of enterprise interest in the cloud already. According to IDG, nearly 70 percent of organizations today utilize cloud-based infrastructure or applications in some way, and IT spending on the cloud is currently averaging about 20 percent growth per year. The thing is, the vast majority of that activity consists of low-level workloads and bulk storage applications that generally go to the lowest bidder, which is usually one of the hyperscale players that can shave margins to the bone and still turn out a decent profit.


Tuesday, 16 December 2014 00:00

The Insider Risk of Temporary Employees

Almost all businesses need temporary workers at some time or another, but December is an especially popular time to bring in extra help.

Of course, if you are hiring temporary employees, you will likely need to set them up with access to your company network, maybe give them an email address, and possibly even authorize them to work with databases that contain sensitive information.

In fact, according to a new study by Avecto, 72 percent of temporary hires are given admin privileges on the company network. We already know that insider threats are a serious concern to cybersecurity. When temporary employees are given network privileges, companies could be unwittingly setting themselves up for a serious security failure. As Paul Kenyon, EVP of global sales at Avecto, stated in a release:

Giving any worker admin rights is akin to giving them the keys to the kingdom. The insider threat has been well documented, but this research demonstrates that businesses clearly haven't got the message.


(TNS) — California has received congressional funding to begin rolling out an earthquake early warning system next year, capping nearly a decade of planning, setbacks and technological breakthroughs, officials said Sunday.

Scientists have long planned to make such a system available to some schools, fire stations, and more private businesses in 2015, but their effort hinged on Congress providing $5 million. The system would give as much as a minute's warning before shaking is felt in metropolitan areas, a margin that experts say would increase survival.

The U.S. Senate approved the allocation this weekend as part of the $1.1-trillion spending package, passed by the House of Representatives on Thursday, that will fund most of the U.S. government through the rest of the fiscal year. Officials plan to announce the funding at a news conference at Caltech on Monday.


Risk management executives are charged with preparing companies for, and protecting them from, a broad array of emerging risks. Today, there is perhaps no threat that poses more danger than a cyberattack, which could result in a data breach or compromising sensitive information. Given the rapid increase in frequency and severity of high-profile cyberattacks in recent months, organizations must confront cybersecurity issues with greater focus, specificity and commitment.

Of note, an astounding 43% of U.S. companies experienced a data breach in the past year, according to the Ponemon Institute’s 2014 annual study on data breach preparedness, a 10% increase from 2013. These alarming trends are compelling companies to create programs centered on cyber risk awareness, education and preparedness. These programs are vital to the company’s performance and growth; the 2014 Cost of Data Breach Study by IBM and the Ponemon Institute reveals that the average cost to a company from a data breach was about $3.5 million per breach in 2014 – a 15% increase since last year. A company’s intellectual property and customer data may also be compromised in a cyberattack, expanding potential casualties beyond financial losses.
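As a sanity check on those study figures, the reported 15% year-over-year increase lets us back out the implied 2014-over-2013 averages. This is simple arithmetic on the numbers quoted above, not an additional data point from the study:

```python
# Average cost of a data breach in 2014, per the IBM/Ponemon study.
avg_2014 = 3.5e6

# Reported year-over-year increase.
increase = 0.15

# Implied 2013 average, backing out the 15% rise.
avg_2013 = avg_2014 / (1 + increase)
print(f"Implied 2013 average breach cost: ${avg_2013:,.0f}")
# prints: Implied 2013 average breach cost: $3,043,478
```

In other words, the average breach grew roughly half a million dollars more expensive in a single year.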


It’s probably safe to say that if one facet of your IT operation needs to be as fail-safe as you can possibly make it, it’s your disaster recovery/business continuity (DR/BC) setup. Is that something you can reliably entrust to the cloud? It’s one thing to use a cloud-based service like, say, Dropbox to back up the files on your PC. But is the cloud the way to go to back up your entire IT operation?

I recently had the opportunity to address that question with Lynn LeBlanc, co-founder and CEO of HotLink, a hybrid IT management software provider in Santa Clara. In what turned out to be an enlightening email interview, I asked LeBlanc if there’s any legitimate argument against leveraging the public cloud for disaster recovery. She said there is none:

In fact, the public cloud lends itself very well to disaster recovery. It’s one of its best use cases. Amazon Web Services is the largest and most available infrastructure in the world, and its scale and economics allow IT teams to easily and cost-effectively protect their on-premise workloads from disasters. In fact, some solutions, such as HotLink DR Express, also enable business continuity for a full recovery in the public cloud at a price point that was inconceivable only a few years ago.


(TNS) — In the nuclear plant control room with wall-to-wall panels of colorful knobs, levers and switches, one might think a wrong flip or a misplaced twist could become disaster, showering neighboring communities with radioactivity.

That would be a tricky feat, and an unlikely one, nuclear inspectors say.

“At nuclear power plants there are backups to backups to backups. There are so many redundant systems for a single purpose it would take multiple failures and kind of a completely unlikely scenario in order for a consequence to actually occur,” said Brandon Reyes, a Nuclear Regulatory Commission resident inspector at the Beaver Valley nuclear power plant in Shippingport.

Nuclear power is an industry in the headlines more for the potential dangers it poses to the public than for the energy it affords their lifestyles. It receives a disproportionate amount of scrutiny and concern, say Reyes and his partner, senior inspector Jim Krafty.


Slowly but surely, a more secure credit card is making its way toward your wallet – if it hasn’t already. Whether you call it a smart card, chip card, credit card chip or EMV (Europay, MasterCard, Visa) card, you may have heard that the United States is on the verge of adopting a new breed of credit card. These cards will fight fraud, secure your personal information, and protect you from credit card theft.

If you don’t have an EMV card in your wallet yet—and maybe even if you already do—you might be wondering what the new EMV cards are all about, and when they’ll finally become widespread in the United States. Wonder no longer…here is a quick primer on what EMV cards are, why the new smart chip is important, and when American banks, merchants, and consumers will adopt more secure credit cards once and for all.


Is your company prepared for a cyber attack? This is a question that every director should be asking, and management should be providing regular updates to the Board on its level of preparedness. Cyber attacks are running rampant, and no company is exempt from an attack. If your company thinks it is, then brace yourself for a rude awakening.

Cyber attacks can cause serious damage to a company’s reputation, which says nothing of the financial impact that accompanies such an event. According to the National Association of Corporate Directors, if companies and governments are unable to effectively combat cyber threats, between $9 and $21 trillion of global economic value creation could be at risk.

Due to the growing volume and sophistication of cyber attacks, cybersecurity is an issue that every Board should be actively grappling with in order to mitigate the pitfalls associated with a breach. For companies and Boards, it is not the time or place for complacency when it comes to cybersecurity. Just because a company is small doesn’t mean that it is insulated against an attack.


Monday, 15 December 2014 00:00

At Big Banks, a Lesson Not Learned

Are the colossal regulatory fines extracted from big banks today likely to deter their officials from violating the same rules tomorrow? Or are these billion-dollar settlements viewed simply as a cost of doing business, and not a very large one at that?

Judging from a regulatory action brought last week against 10 mostly large financial firms, the answers are “no” and “yes.”

The case, brought on Thursday by the Financial Industry Regulatory Authority, is striking. It takes us back to the financial scandal of the early 2000s involving corrupt Wall Street research.

Remember that mess? Firms whose analysts were supposed to be impartial instead used their bullish stock recommendations to attract investment-banking business. The losers in the situation were investors who didn’t know that the analysts were biased and who heeded their calls to buy the shares. In 2003, 10 firms and two analysts struck a settlement with regulators over these practices, paying $1.4 billion. That was real money back then, and it was hoped that such a hefty fine, along with new research rules, might keep Wall Street analysts conflict-free.


Monday, 15 December 2014 00:00

The Path to Zero Ebola Cases

MONROVIA, Liberia — In my career as a medical doctor and global health policy maker, I have been in the middle of monumental struggles, including fights to make treatment accessible in the developing world for those living with H.I.V./AIDS as well as multi-drug resistant tuberculosis. But the Ebola epidemic is the worst I’ve ever seen.

More than 11 months into the crisis, thousands of people are dead and more than 17,000 have been infected. The virus kills quickly, spreads fear even faster, alters human relationships, devastates economies and threatens to cruelly extinguish hope in three fragile countries that were on the rebound after years of misery. No other modern epidemic has been so destructive so fast.

Monday, 15 December 2014 00:00

How One CIO Rescued a Failed ERP Deployment

Imagine you’re a CIO, and you just hired on with a $600 million publicly traded technology company. You walk into work the first day on the job, and you find yourself in the throes of an ERP deployment that—well, let’s just say, it isn’t going so well. The previous CIO, who had been with the company for 10 years, left two months ago, so the hand-off wasn’t as smooth as it could have been. You know if you don’t act fast, the deployment is going to spin irreversibly out of control, which would put your CEO in the lousy position of having to explain to shareholders why a technology company failed so miserably with a technology implementation, and threw a boatload of money away in the process. Just try to imagine the pressure you’d be under.

Dave Brady doesn’t have to imagine it. He lived it.

Brady is the CIO at Datalink, a cloud services provider in Eden Prairie, Minn. When he joined the company in March 2013, that bleak scenario was precisely the one he faced. I recently had the opportunity to speak with him about it, and one of the things that struck me was the even-keeled manner in which he recounted the story. There was no embellishment, no woe-is-me vibe, no self-aggrandizement. If anything, he downplayed the whole mess. This is how he brought it up: 


There’s an interesting moment in a report on the current state of cyber security leadership from International Business Machines Corp (IBM).

For those who haven’t seen it yet, the report identifies growing concerns over cyber security with almost 60 percent of Chief Information Security Officers (CISOs) saying the sophistication of attackers is outstripping the sophistication of their organization’s defenses.

But as security leaders and their organizations attempt to fight what many feel is a losing battle against hackers and other cyber criminals, there is growing awareness that greater collaboration is necessary.

As IBM puts it: “Protection through isolation is less and less realistic in today’s world.”

Consider this: some 62 percent of security leaders strongly agreed that the risk level to their organization was increasing due to the number of interactions and connections with customers, suppliers and partners.


This last week has been quite the week for pedestrian and vehicle collisions and accidents. We even had a few people die this week due to such incidents. Yes, I feel for the friends and families of those that have been impacted, yet what struck me most about each situation was the communication messages being conveyed.

It’s easy to blame one side of the situation, and in many cases that might be reality. But just like in BCM and DR, we must convey a message that everyone can understand. The communications have to be straight to the point, yet articulate enough for people from any walk of life to understand the message – and retain it. They can’t be directed at just one side of the situation. Here’s what I mean.

Immediately after the first accident, the police and responding Emergency Medical Services (EMS) personnel placed the blame for the traffic incidents on the shoulders of those driving; no responsibility was placed on the side of the pedestrian. I found this odd because it was clear in some of the situations that the pedestrian wasn’t following the rules set out for them, and the reminder about the rules wasn’t coming from the police or EMS; it was only directed at the vehicle operators.


Casual spectators of business behavior can't help being jaded; every day they see news stories about corporate fraud, security breaches, delayed safety recalls, and other sorts of general malfeasance. But what they don't see is the renewed time and investment companies around the world are putting toward implementing and reporting on responsible behavior (this less sensational side of the story gets far less coverage).

This week, Nick Hayes and I published an exciting new report, Meet Customers' Demands For Corporate Responsibility, which looks at the corporate responsibility reporting habits of the world's largest companies. While it's easy to think that the business community is as dirty as ever, we actually found a substantial increase over the past 6 years in what these companies included in their CSR and sustainability reports.


It’s that time of year again…most people are slowing down for the Christmas break. The raft of out-of-office replies from the second week in December seems to increase by the hour as people begin to use up the last dregs of annual leave and head out into the busy shops. Others are using this time of year as an opportunity to reflect on the previous 12 months. As it’s BlueyedBC’s 1st birthday, I thought it was only right to get all reflective on you guys!

The Birth of BlueyedBC

Okay, so in the autumn of 2013, professionally, I was not in a very good place at all. I was unqualified, on to my 3rd BC job in less than 12 months and deeply lacking in confidence. My peer group networks were virtually non-existent because I hadn’t built them up yet, and if I’m being honest, I was quite angry and frustrated with the way things were going.

So I decided in my wisdom to pick up a pen and paper and write some of my thoughts down. It started by blaming virtually everyone else except myself for the recent challenges in my career. Once I started writing I found that I couldn’t stop…venting my frustrations became like an addiction to me. I had spent several difficult years trying to make it as a professional post-university, with all this pent-up feeling inside of me, and I was rapidly running out of ink! It wasn’t long before my scribbles became small chapters in their own right, and this is when I submitted my first (rather unfair) scathing review of my experience in the industry to Continuity Central, who kindly released it to the BC world.


Friday, 12 December 2014 00:00

Data Analytics as a Risk Management Strategy

In our increasingly competitive business environment, companies everywhere are looking for the next new thing to give them a competitive edge. But perhaps the next new thing is applying new techniques and capabilities to existing concepts such as risk management. The exponential growth of data as well as recent technologies and techniques for managing and analyzing data create more opportunities.

Enterprise risk management can encompass so much more than merely making sure your business has purchased the right types and amounts of insurance. With the tools now available, businesses can quantify and model the risks they face to enable smarter mitigation strategies and better strategic decisions.

The discipline of risk management in general and the increasingly popular field of enterprise risk management have been around for years. But several recent trends and developments have increased the ability to execute on the concept of enterprise risk management.


Friday, 12 December 2014 00:00

Security predictions for 2015

As the complexity and diversity of devices, platforms and modes of interaction advance, so do the associated risks from malicious individuals, criminal organisations and states that wish to exploit technology for their own purposes. Below, Michael Fimin, CEO at Netwrix, provides his major observations of IT security trends and the most crucial areas to keep watch over in 2015:

Many individuals and enterprises are already using cloud technologies to store sensitive information and perform business critical tasks. In response to security concerns, cloud technologies will continue to develop in 2015, focusing on improved data encryption; the ability to view audit trails for configuration management and secure access of data; and the development of security brokers for cloud access, allowing for user access control as a security enforcement point between a user and cloud service provider.

As the adoption and standardisation of a few select mobile OS platforms grows, the opportunity for attack also increases. We can expect to see further growth in smartphone malware, increases in mobile phishing attacks and fake apps making their way into app stores. Targeted attacks on mobile payment technologies can also be expected. In response, 2015 will see various solutions introduced to improve mobile protection, including the development of patch management across multiple devices and platforms, the blocking of apps from unknown sources and anti-malware protection.

Software defined data centre
‘Software defined’ usually refers to the decoupling and abstracting of infrastructure elements followed by a centralised control. Software defined networking (SDN) and software defined storage (SDS) are clearly trending and we can expect this to expand in 2015. But while these modular software defined infrastructures improve operational efficiency, they also create new security risks. In particular, centralised controllers can become a single point of attack. While the adoption of this approach is not widespread enough to become a common target for attacks, as more companies run SDN and SDS pilots in 2015, we expect their security concerns will be raised. This will result in more of a focus on security from manufacturers, as well as new solutions from third party vendors.

Internet of things
The Internet of things (IoT) universe is expanding with a growing diversity of devices connecting to the network and/or holding sensitive data - from smart TVs and Wi-Fi-connected light bulbs to complex industrial operational technology systems.

With the IoT likely to play a more significant role in 2015 and beyond, devices and systems require proper management, as well as security policies and provisions. While the IoT security ecosystem has not yet developed, we do not expect attacks on the IoT to become widespread in 2015.

Most attacks are likely to be ’whitehat’ hacks to report vulnerabilities and proof of concept exploits. That being said, sophisticated targeted attacks may go beyond traditional networks and PCs.

Next generation security platforms
In 2015 and beyond, we can expect to see more vendors in the information security industry talking about integration, security analytics and the leveraging of big data. Security analytics platforms have to take into account more internal data sources as well as the external feeds, such as online reputation services and third party threat intelligence feeds. The role of context and risk assessment will also become more important. The focus of defence systems becomes more about minimising attack surfaces, isolating and segmenting the infrastructure to reduce potential damage and identifying the most business critical components to protect.

Looking back at previous years, new security challenges will continue to arise, so IT professionals should be armed with mission critical information and be prepared to defend against them.

Friday, 12 December 2014 00:00

Do You Have a Cybersecurity Problem?

When the topic of cybersecurity comes up at your organization, I’m guessing your executives immediately look to the CIO – yourself included. After all, when you’re talking about data, about information access and about the technology needed to keep both safe from unwanted activities, you assume IT has it covered. And your organization isn’t the only one operating under this assumption – far from it.

According to a report by Kroll and Compliance Week, three-quarters of Compliance Officers have no involvement in managing cybersecurity risk. Plus, 44 percent of respondents revealed that their Chief Compliance Officer is only given responsibility for privacy compliance and breach disclosure after a security incident has taken place and plays zero part in addressing the risks beforehand.

Here’s the problem with that approach: many breaches are preventable. According to the 2013 Verizon “Data Breach Investigations Report,” 78 percent of initial intrusions are rated as “low difficulty.” Now, don’t get me wrong: hackers are extremely crafty and are scheming new tactics as I write this. But part of the reason they are able to get their hands on data that isn’t theirs is because organizations simply aren’t prepared.


Friday, 12 December 2014 00:00

Good tidings we bring

The festive season is upon us and, assuming there are no postal strikes, Christmas Cards in their billions will be delivered to homes across the world spreading peace, joy and goodwill. Of course the Business Continuity Institute shares those same sentiments but, as has become tradition, we have decided not to send cards. Instead we will donate the money to those who need it more than we do.

This year, with the deadly virus Ebola high on our radar, we will be supporting Unicef in fighting this outbreak. As of the 1st December 2014, the total reported number of confirmed, probable, and suspected cases in the West African epidemic was 15,935 with 5,689 deaths. "Thousands of children are living through the deaths of their mother, father or family members from Ebola" said Manuel Fontaine, UNICEF Regional Director for West and Central Africa. "These children urgently need special attention and support; yet many of them feel unwanted and even abandoned. Orphans are usually taken in by a member of the extended family, but in some communities, the fear surrounding Ebola is becoming stronger than family ties."

As business continuity professionals, our role is to make sure that our organizations can continue to operate in the event of a 'disruption' but how would you prepare for a crisis of this magnitude? Can you prepare for a crisis of this magnitude? How do you continue to operate when death lurks around every corner and lives are consumed by fear? Fortunately most of us will never have to experience this, but we can play our part in helping those who do, which is why we are making this donation. If you would also like to make a donation to Unicef and help fight the spread of Ebola then please click here.

The BCI wishes all our Chapter Leaders, Forum Leaders, the BCI Board, Global Membership Council and fellow business continuity practitioners around the world, Season's Greetings and a healthy 2015.

Note that the BCI Central Office will be closed on the 25th and 26th December and the 1st January 2015, re-opening on Friday 2nd January 2015. On the days between Christmas and New Year, the office will be staffed between 10am and 3pm only (GMT).

A recent court decision about the Target breach should have businesses of all sizes taking note.

A Minnesota judge found Target negligent in the breach and said it can be held responsible for financial damages. Infosecurity Magazine quoted the judge:

“Although the third-party hackers’ activities caused harm, Target played a key role in allowing the harm to occur,” Magnuson wrote in his ruling. “Indeed, Plaintiffs’ allegation that Target purposely disabled one of the security features that would have prevented the harm is itself sufficient to plead a direct negligence case.”


While most risk professionals are satisfied with their insurers and brokers, those from organizations with enterprise risk management (ERM) programs were the least content, according to the inaugural J.D. Power and Risk and Insurance Management Society (RIMS) 2014 Large Commercial Insurance Report.

The full report, based on findings of the J.D. Power 2014 Large Business Commercial Study, slated for release in February 2015, examines industry-level performance metrics among large business commercial insurers and brokers. The study, which interviewed almost 1,000 risk professionals, highlights best practices that are critical to satisfying them.


As an information technology (IT) leader dealing with the intricacies and complexities of enterprise technology every day, I can tell you this: it’s not the technology that is the toughest thing to change in IT. It’s the people. Here’s my personal take on 4 of the hardest IT transformations to implement – and how people make or break those changes.

1. Going global

There’s no question that transforming your company from regional-based systems to global systems is a big job. Global applications, global processes, global networks … that takes tech expertise to the nth degree. You need to talk with the regions, departments, and teams to ensure that you have all the business requirements clear and know how the end-to-end processes now need to work before you can consolidate disparate systems or stand up new ones.

That being said, chances are that you’ll find those separate regions have their own cultures, methodologies, goals, and initiatives … and they like it that way. It often works, and works well – for them. The most important thing when talking to these regions is to remember that people want to be heard and valued for their expertise. This doesn’t mean they’re absolutely tied to the old way of doing things. Most likely, they simply want to provide context so that their voices and inputs are considered in the new direction.

So as you transform your ERP apps to span the world, or plug in new SaaS apps to transform the user experience, you simultaneously need to build a culture that helps people move out of their regional silos. Hear what they have to say before you encourage them to embrace a new perspective. Having listened, you can then encourage them to look at what is best for the company and for the customer overall. Let them see the benefits that will come from globalization, such as the removal of inconsistencies or duplication. Acknowledge that they are giving up something when they lose their regional approach, but assure them that there are great answers to the ever present question “WIIFM”: “What’s in it for me?”


Online giant Google raised eyebrows recently when it stated that it was starting up two billion containers a week in its computing infrastructure. But the type of containers the company was talking about were logical instances inside its computers, not the mammoth steel boxes that are shipped by truck, rail and ship. Google’s containers are its solution to an issue concerning conventional server virtualisation, which involves more overhead than the provider is prepared to accept. A new development in IT, its new ‘lightweight virtualisation’ may be attractive to other organisations too. Yet, in certain circumstances, a real steel container may also hold the solution for business continuity.


Thursday, 11 December 2014 00:00


Resiliency is about bouncing back from something. It doesn’t always mean a catastrophe. It can also mean recovering from the simple annoyances of life. Most people are resilient, but have different levels, styles and speeds of their bounce-backs. Think of it as elastic. It can stretch but comes back to essentially its original shape. When it doesn’t, you know it is time to do something about it. Research shows that resiliency is learned. So you can learn and do more to become more resilient. I’ll be sharing MUCH more about this as the new year unfolds. My new website will reflect this and I have several fun projects in the works for 2015. In the meantime, practice managing your daily stressors by becoming a Weeble®…you know…they “wobble but they don’t fall down.”

Chuck Wallace is the deputy director of emergency management for Grays Harbor County, Wash., a Pacific Ocean-facing county. He is a 31-year veteran of the fire service, retiring from the Philadelphia Fire Department in 2007. In addition to his duties in emergency management, he also serves as the fire chief at Grays Harbor County Fire Protection District #11 and as an elected fire commissioner for Grays Harbor County Fire Protection District #11. In addition to his county duties, he serves on a number of regional emergency management committees. Currently attending Evergreen State College, Wallace expects to graduate in June 2015 with a master’s degree in public administration.

Wallace participated in an interview with Emergency Management to share the challenges and success he has had in promoting tsunami mitigation measures in his county. Wallace also addresses the county’s vertical evacuation, tsunami-engineered, safe haven building, which he says is the first in North America.


By Gail Dutton

Virtual reality (VR) finally is on the verge of becoming practical. With new, lightweight headsets that provide immediate response times coming onto the market, VR advocates say almost every industry could benefit by immersing their employees or clients into virtual worlds for some activities.

Real estate sales is a prime example. Sacramento broke ground in October for the Sacramento Entertainment and Sports Center (ESC), replacing the Sacramento Kings’ Sleep Train arena. With completion still two years away, selling the high-end Kings suites normally would rely on architectural renderings and floor plans. Instead, potential buyers can strap on a VR headset and tour a realistic virtual model that has the same look and feel as the finished, amenity-rich suites. Cha-ching!

From a participant’s perspective, being in a virtual world is like being in a real world. Perspective is determined by position trackers linked to the goggles, so when you turn around, you see what’s behind you. The result is a very realistic, immersive experience.
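The head-tracking idea can be sketched in a few lines (a simplification with hypothetical values, not any headset vendor's actual code): the yaw angle reported by the position tracker rotates the camera's forward vector, so a 180-degree turn points the view at what was behind the user.

```python
import math

def yaw_view(forward, yaw_degrees):
    """Rotate a 2D forward vector (x, z) about the vertical axis by the
    yaw angle reported by a head tracker."""
    x, z = forward
    a = math.radians(yaw_degrees)
    return (x * math.cos(a) - z * math.sin(a),
            x * math.sin(a) + z * math.cos(a))

# Facing (0, 1); after a tracked 180-degree turn the camera faces (0, -1),
# so the renderer draws what was behind the user.
behind = yaw_view((0.0, 1.0), 180)
```

Real headsets track full 6-degree-of-freedom pose (orientation plus position), but the principle is the same: tracker data drives the view transform each frame.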


Wednesday, 10 December 2014 00:00

Will resilience replace risk and continuity?

By David Evans

Is the world of risk, continuity and crisis about to change as new concepts and approaches linked to resilience gain momentum, or are we seeking solutions to the same old stories repackaged through a different language?

Protecting organizations is big business, or at least it should be, as no one wants to fail and few if any executives can wish to face the negative impact of serious disruption or crises. In general, crises are expensive for organizations to handle, derail the best-laid plans and generally threaten the reputation of the top people in the business. Added to which there is a mix of guidance, regulatory requirements, employee concerns and shareholder expectations to address.


Patrick Alcantara explains why the BCI sees organizational resilience as an important framework that brings together various ‘protective disciplines’ and provides a strategic goal for organizations.

Resilience is fast becoming an industry buzzword which reveals underlying changes in the way practitioners view business continuity and other ‘protective disciplines’ such as emergency planning, risk management and cyber/physical security. From the development of clear boundaries which separate disciplines in the last decade or so, work is now underway to bring these fields together into a framework of organizational resilience. However, more than just thinking about it merely as the sum of ‘protective disciplines’, organizational resilience is thought of as a strategic goal that must be driven by top management. The quality of resilience is rooted in a series of capabilities that allow organizations to get through bad times (continuity) and thrive in good/changing times (adaptability). Organizational resilience involves a coherent approach ‘from the boardroom to the storeroom’ that requires strong governance and accountability among other ‘soft’ factors.

In the UK, this development in thinking culminates with the recent launch of the new British Standard 65000 (BS 65000) which outlines the principles and provides guidance behind organizational resilience. This parallels the development of global guidance on organizational resilience, ISO 22316, which is due in April 2017.


The ISP and hosting sectors were the most targeted industries of cyber-crime in 2014, and the trend is likely to continue in 2015. That’s according to Radware. The findings from its fourth annual ‘Global application and security report’, which surveyed 330 companies globally on cyber attacks on networks and applications, act as a strong warning to companies that depend on a hosting provider or ISP to ensure they do not become a ‘cyber-domino’ as a result of the security failings of their suppliers.

As part of the report, Radware has published a ‘Ring of Fire’, which tracks cyber attacks and predicts the likelihood of attack on major industries. In the last 12 months, ISPs have moved up the risk rankings to become some of the most at-risk companies, joining the gambling sector and government at the centre of the ‘Ring of Fire’. Hosting companies have jumped from ‘low risk’ on the outside of the ring to just outside the ‘high risk’ centre.

Adrian Crawley, UK & Ireland regional director for Radware, says: “The news presents a stark reality for thousands of British businesses that rely heavily on ISP and hosting provision to host their website and network operations. If companies fail to ensure their network security planning includes that of their ISP and hosting partners then there’s no doubt that 2015 will see a great number of ‘cyber-dominoes’ fall.”


Wednesday, 10 December 2014 00:00

Bitcoin, the Solution to Consumer Data Protection

Despite all the news headlines around data breaches, hackers and identity theft, it is a little known fact that since 2013 over 1 billion consumer records have been stolen by hackers. The estimated cost of this data theft is a staggering $5 billion a year, which inevitably gets passed down to consumers and merchants in the form of higher prices and fees. No doubt, there is a global data security crisis, indeed a war being waged, that is getting harder and harder for the good guys to win.

The hackers only have to succeed a small percentage of the time to make a very big dent on our society. As a result, we are in an era where securing personal information requires more and more complex security and surveillance, by merchants, banks and the government agencies. The system of credit card processing introduced in the 1940s and 1950s and perfected in the 1970s and 1980s was just never designed for the 21st century, a century in which the Internet, the open source community and the dark web accelerate technology innovation at a pace far more rapid than slow-moving merchant and banking infrastructure can keep up with. There is a need to address this global data security crisis, and this requires us to fundamentally rethink what it means for a consumer to spend money.


Retail companies have Big Data capabilities, but they’re not sure what to do with them. It’s just too… big, according to a special report released today by Brick Meets Clicks (available for free download with registration).

“Discussions about Big Data and retail often bog down in the vastness of its potential, leaving retailers with only the vaguest guidance as they try to figure out where and how to invest in this powerful tool,” states the report.

That seems to be a common theme with Big Data right now. As I shared in my previous post on analytics, Dr. Shawna Thayer talked about executive paralysis with Big Data during the recent Data Strategy Symposium.


It’s been said that the cloud represents a fundamental shift in the relationship between users, the enterprise, and the data with which they work.

A key facet of this change is the ability to spin up virtual and even physical data center environments on a whim, which leads to the interesting notion of how these resources are developed and deployed. It is reasonable to assume that with the cloud as the new data center, traditional resources will no longer be purchased and provisioned on a piecemeal basis. Rather, entire data centers will be implemented all at once. This is the same dynamic behind today’s hardware deployment, where whole servers or PCs are implemented, rather than individual boards, fans and chip sets.

The vendor community, in fact, has been prepping itself for this reality for some time. Nearly all of the major players have offered turnkey solutions for decades, but these usually represent pre-integrated components from their various product lines. Lately, however, vendors have been teaming up with newly minted software-defined networking (SDN) and other platforms in order to provide end-to-end data center products that do away with systems integration, testing and other complex processes.


By Adam Wren

When it comes to the workplace, what do millennials want? If you want your company to thrive, that’s a question that you should be asking on a regular basis to attract the future of your firm.

The good news: You don’t have to be the next Apple, Google, Facebook or even a cool startup to get millennial talent flocking to your business. Money isn’t the only attraction, either.

To succeed as an employer, you’ll need to hire millennial workers. Surveys show they are bright, innovative, talented and want to make a difference. But there’s also the sheer demographic reality that it will soon be hard not to hire millennials.


Tuesday, 09 December 2014 00:00

App risk management advice

Espion is calling on organizations not to overlook the risks posed by workers increasingly packing their own clouds and apps into their virtual briefcase without consulting their IT department.

The growth of ‘shadow IT’ products (non-approved SaaS applications) has skyrocketed in recent years, with the latest research revealing that 81 percent of enterprise employees admit to using unauthorised applications. The scale of this was also highlighted at Espion’s recent 101 Series on App Security, with attendees agreeing it is a growing concern in their organization.

Without doubt apps and cloud solutions such as Basecamp, Salesforce, Dropbox and Google Apps are great for productivity and flexible working. However, organizations need to be highly cognisant of the potential downside these time-saving, skill-boosting, collaboration-enhancing, process-streamlining (and more) apps and software pose to corporate information.


UK organizations are struggling to stay on top of costly technology risks, according to a new report by KPMG. The Technology Risk Radar, which tracks the major technology incidents faced by businesses and public sector bodies, reveals the cost of IT failures over the last 12 months. It found that, on average, employers had to pay an unplanned £410,000 for each technology-related problem they faced. The report also reveals that an average of 776,000 individuals were affected, and around 4 million bank and credit card accounts were compromised, by each IT failure.

Incidents caused by ‘avoidable’ problems such as software coding errors or failed IT changes accounted for over 50 percent of the IT incidents reported over the past year. Of these, 7.3 percent of reported events were the fault of human error: a figure which shows that basic investments in training are being ignored at the employers’ cost. Further, while data loss related incidents continued to be a major problem for all industries, a significant number of those (16 percent) were unintentional.

KPMG’s Tech Risk Radar reveals that customer-facing organizations are quickly realising the true cost of systems failures if they are left unchecked. For instance, a utility company faced a £10 million fine when technical glitches during the transfer to a new billing system meant customers did not receive bills for months and were then sent inaccurate payment demands or refused prompt refunds when errors were eventually acknowledged.

Commenting on the findings of the Technology Risk Radar report, Jon Dowie, Partner in KPMG’s Technology Risk practice, said: “Technology is no longer a function within a business which operates largely in isolation. It is at the heart of everything a company does and, when it goes wrong, it affects an organization’s bottom line, its relationship with customers and its wider reputation.

“Investment in technology will continue to rise as businesses embrace digital and other opportunities, but this needs to be matched by investments in assessing, managing and monitoring the associated risks. At a time when even our regulators have shown themselves to be vulnerable to technology risk, no one can afford to be complacent.”

With financial services under enormous pressure to maintain highly secure technology infrastructure, KPMG predicts IT complexity will continue to be the single biggest risk to financial services organizations in the coming year. This is closely followed by ineffective governance, risk and non-compliance with regulations. Security risks, such as cyber-crime and unauthorised access, are rated fifth.

Jon Dowie adds: “With ever greater complexity in IT systems – not to mention the challenge of implementing IT transformational change – companies are running to stand still in managing their IT risks. The cost of failure is all too clear. It is crucial for both public and private sector organizations to understand the risks associated with IT and how they can be managed, mitigated and avoided.”

The Australian Prudential Regulation Authority (APRA) has released the final version of its new risk management standard, and associated guidance.

APRA consulted extensively during 2013 and 2014 on both the risk management standard and prudential practice guide. The package released includes final versions of Prudential Standard CPS 220 Risk Management (CPS 220) and Prudential Practice Guide CPG 220 Risk Management (CPG 220) as well as a letter to industry summarising APRA’s response to submissions on the most recent consultation, which commenced on 7th October 2014. The letter sets out a small number of minor refinements that were made to the prudential practice guide as a result of the submissions received; there were no further changes to the prudential standard.

The new requirements are applicable to authorised deposit-taking institutions (ADIs), general insurers and life companies, and authorised non-operating holding companies (authorised NOHCs), and take effect from 1st January 2015.

APRA Chairman Wayne Byres said the new standard harmonises risk management requirements across the banking and insurance industries, bringing together a range of risk management requirements into a single standard.

‘The new standard, together with the new practice guide, reflect APRA’s heightened expectations with regards to risk management, consistent with the increased emphasis that has been placed on sound governance and robust risk management practices in response to the global financial crisis.’

More details here.

Early data suggests that the current 2014-2015 flu season could be severe, with related human resource business continuity issues for organizations.

The Centers for Disease Control and Prevention (CDC) urges immediate vaccination for anyone still unvaccinated this season and recommends prompt treatment with antiviral drugs for people at high risk of complications who develop flu.

So far this year, seasonal influenza A H3N2 viruses have been most common. There often are more severe flu illnesses, hospitalizations, and deaths during seasons when these viruses predominate. For example, H3N2 viruses were predominant during the 2012-2013, 2007-2008, and 2003-2004 seasons, the three seasons with the highest mortality levels in the past decade. All were characterized as ‘moderately severe.’



Earlier this year, Steelhenge launched the Crisis Management Survey 2014 with the aim of developing a better picture of how organizations are building their preparedness for a crisis. Questions ranged from strategic ownership of the crisis management capability through plan development and training to the tools used to support the crisis management team. Respondents were also asked about the challenges they face in creating a crisis management capability and how they rate their overall level of preparedness.

One of the most striking results from the survey, published in 'Preparing for crisis: safeguarding your future', was that less than half of the respondents rated the overall crisis preparedness of their organization as ‘very well prepared’ with 13% responding that they were either ‘not well prepared’ or ‘not prepared at all’. The greatest challenges to crisis preparedness cited by the survey respondents were lack of budget, lack of senior management buy-in, time constraints, operational issues taking precedence and employees not seeing crisis preparedness activities as a priority.

The crisis communications function was found to be lagging behind when it comes to crisis preparedness: while 84% of organizations surveyed had a documented Crisis Management Plan, less than a quarter of respondents reported that they do not have a documented plan for how they will communicate in a crisis, and 41% responded that they do not have guidance on handling social media in a crisis.

In the Business Continuity Institute's 2014 Horizon Scan report, the influence of social media came second in the list of emerging trends or uncertainties with 63% of respondents to the survey identifying it as something to look out for.

Other key themes to emerge from the Crisis Management Survey include:

  • Embedding – Less than half of the respondents had a programme of regular reviews, training and exercising that would help embed crisis management within an organization and create a genuinely sustainable crisis management capability.
  • Engagement – In the face of high profile crises befalling major organizations year after year, 29% of organizations taking part in the survey still waited for the brutal experience of a crisis before creating a plan. Crisis preparedness is still a work in progress for many, particularly with regard to crisis communications planning.
  • Ownership – Ownership of crisis management at the strategic level amongst the survey population lay predominantly with the Chief Executive. However, responsibility for day-to-day management of the crisis management capability was spread widely across a broad range of functional roles with business continuity/disaster recovery and incident/emergency management featuring most with 50% between them.

The report concludes that the fact that a large number of organizations still do not have plans, and such a large percentage of organizations do not run a programme of development to maintain and improve their crisis management capability, suggests that too many organizations are not yet taking crisis management seriously enough. Any doubters as to the value of crisis management only have to speak to organizations who have suffered a crisis. As one survey respondent said: "We have suffered a number of potential crisis situations including an actual terrorist attack. Good planning and preparation has stood us in good stead."

Would you put all your investment into shares in just one company? Or into just one piece of property? Or even just into gold? While people are free to put their money where they please, many financial investors have identified diversification of investment as a better solution. Similarly, in business continuity the right mix of safer measures with lower returns and more innovative strategies with higher returns can optimise resilience without requiring unduly heavy expenditure (which in itself could threaten business continuity). This portfolio approach requires a certain attitude and tools, but can pay dividends.
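The portfolio analogy can be made concrete with a toy calculation (the figures below are illustrative only, not from the article): blending a safe, low-return measure with a riskier, higher-return one gives an expected 'resilience return' between the two, with far lower variance than the risky strategy alone.

```python
def portfolio(w_safe, safe, risky):
    """Expected value and variance of a two-strategy mix, assuming the
    strategies' outcomes are independent. safe/risky are (mean, stdev)."""
    mean = w_safe * safe[0] + (1 - w_safe) * risky[0]
    var = (w_safe ** 2) * safe[1] ** 2 + ((1 - w_safe) ** 2) * risky[1] ** 2
    return mean, var

# Hypothetical figures: a safe measure returning 3% +/- 1%, and a more
# innovative strategy returning 9% +/- 6%, mixed 60/40 in favour of safety.
mean, var = portfolio(0.6, (0.03, 0.01), (0.09, 0.06))
```

The 60/40 mix yields a 5.4% expected return with a variance well below the innovative strategy's on its own, which is the diversification effect the article appeals to.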


LOS ANGELES — In the most sweeping campaign directed at earthquake safety ever attempted in California, Los Angeles officials proposed Monday to require the owners of thousands of small, wooden apartment buildings and big concrete offices to invest millions of dollars in strengthening them to guard against catastrophic damage in a powerful earthquake.

The mandate to retrofit buildings was part of a raft of proposals made by Mayor Eric M. Garcetti to deal with what is widely viewed as a longtime failure of Southern California to prepare for a damaging earthquake. In a report issued Monday, Mr. Garcetti also proposed that the city take steps to create a new firefighting water supply system, using ocean and waste water, to help battle as many as 1,500 fires that could break out in a major earthquake. Such a temblor is likely to leave large parts of this region without water or power.

The retrofitting requirements must be approved by the City Council, and would have to be paid for by the building owners, with the costs presumably passed on to tenants and renters. The costs could be significant: $5,000 per unit in vulnerable wooden buildings and $15 per square foot for office buildings, Mr. Garcetti said.
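Using the rates quoted by Mr. Garcetti, the per-building outlay is straightforward to estimate (the building sizes below are hypothetical examples, not from the proposal):

```python
def retrofit_cost(wood_units=0, office_sq_ft=0,
                  per_unit=5_000, per_sq_ft=15):
    """Estimated retrofit cost using the proposal's quoted rates:
    $5,000 per unit for vulnerable wooden apartment buildings and
    $15 per square foot for concrete office buildings."""
    return wood_units * per_unit + office_sq_ft * per_sq_ft

# A hypothetical 20-unit wooden apartment building: $100,000.
apartments = retrofit_cost(wood_units=20)
# A hypothetical 50,000 sq ft concrete office building: $750,000.
office = retrofit_cost(office_sq_ft=50_000)
```

Costs of this order make clear why the proposal anticipates owners passing expenses on to tenants.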


It’s that time of year—security experts are looking ahead to the coming months and discussing their predictions. I have seen a number of predictions that I believe deserve further discussion, so over the month of December, I’ll be looking at some of those issues more in depth. Today, I’m going to take a look at cloud security.

A recent IBM study found that 75 percent of security decision makers expect their cloud security budgets to increase in the next five years. At the same time, according to MSP Mentor, 86 percent of CISOs say their companies are adopting cloud computing. So it makes sense that there will also be a greater interest in funding cloud security efforts.

But it isn’t just a matter of securing the data in the cloud. The cloud is also going to have a much stronger influence on the way we approach overall security practices, says Paul Lipman, CEO of iSheriff. That’s because the cloud is changing the entire business computing structure, which will cause it to have a ripple effect into security concerns. In an email conversation, Lipman provided his five predictions for the future of cloud security. In a nutshell, they are:


Knowledge Vault today announced the general availability of its namesake analytics-as-a-service platform that provides more insight into how documents are being consumed and shared beyond anything IT organizations could hope to accomplish on premise.

Knowledge Vault CEO Christian Ehrenthal says that starting with Microsoft Office 365 deployments, IT organizations can use Knowledge Vault to discover and audit content and apply governance policies to documents stored in the cloud. Next up, says Ehrenthal, will be support for Dropbox, Microsoft OneDrive and

Knowledge Vault itself makes use of a Big Data analytics engine based on Hadoop that runs on Microsoft Azure to analyze the content of documents that it accesses via the application programming interfaces (APIs) that various cloud service providers expose. That data then gets stored on top of Hadoop as a Knowledge Vault object.


(TNS) — The Federal Emergency Management Agency unveiled a broad series of reforms Friday to address concerns contractors conspired to underpay flood insurance settlements to homeowners after superstorm Sandy.

In a strongly worded letter to private companies that work for the government-run National Flood Insurance Program, FEMA administrator W. Craig Fugate said he had "deep concern" over allegations engineers falsified documents to deny claims.

"We must do better," Fugate wrote. "Policyholders deserve to be paid for every dollar of their covered flood loss."

The reforms include:


Building on previous suggestions, including the establishment of two specialized Ebola treatment centers, a task force on Thursday released its full report on how the state could better handle an outbreak of an infectious disease.

The Texas Task Force on Infectious Disease Preparedness and Response, created in October by Gov. Rick Perry after a man was diagnosed with Ebola in Dallas, called for new education efforts to help health care providers be better prepared to identify new diseases.

The panel’s 174-page report also recommended the creation of guidelines for handling pets that may have been exposed to infectious diseases, a mobile app to help monitor potentially exposed individuals, and the establishment of a treatment facility specifically for children and infants.

"The recommendations contained in this report represent a major step forward in protecting the people of Texas in the event of an outbreak of Ebola or other virulent disease," Perry said in a statement.


Making the case that the time has come for building a more efficient way to manage data center environments, Mesosphere today announced what it is calling the first data center operating system (DCOS) that turns everything in the data center into a shared programmable resource.

Mesosphere CEO Florian Leibert says Mesosphere DCOS is based on an open source distributed Apache Mesos kernel project that turns virtual and physical IT infrastructure into a common pool of resources. At present, Mesosphere DCOS can be deployed on Red Hat, CentOS, Ubuntu and CoreOS distributions of Linux running on bare-metal servers or VMware or KVM virtual machine environments running on premise or in Amazon Web Services, Google, DigitalOcean, Microsoft, Rackspace and VMware cloud computing environments.

Leibert says it takes too much effort these days to deploy distributed computing applications. By abstracting away the underlying physical and virtual infrastructure, Mesosphere presents services and application programming interfaces (APIs) that ultimately serve to dramatically increase overall utilization of IT infrastructure.
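To make the pooling idea concrete: treating the data center as one shared resource pool means a scheduler, not a human, decides where each task runs. The toy first-fit scheduler below illustrates the concept only; Mesos itself uses a more sophisticated two-level resource-offer model, and the node and task names are invented:

```python
def schedule(tasks, nodes):
    """Place each task on the first node with enough free CPU and memory.
    A minimal sketch of scheduling against a pooled cluster."""
    placements = {}
    for name, (cpu, mem) in tasks.items():
        for node, free in nodes.items():
            if free["cpu"] >= cpu and free["mem"] >= mem:
                free["cpu"] -= cpu  # reserve the resources
                free["mem"] -= mem
                placements[name] = node
                break
        else:
            placements[name] = None  # no node can fit this task
    return placements

# Hypothetical cluster: free (cpu cores, GB memory) per node.
nodes = {"node1": {"cpu": 4, "mem": 8}, "node2": {"cpu": 2, "mem": 4}}
tasks = {"web": (2, 4), "db": (2, 4), "cache": (2, 2)}

placements = schedule(tasks, nodes)
print(placements)  # {'web': 'node1', 'db': 'node1', 'cache': 'node2'}
```

The payoff is the utilization gain the article mentions: tasks pack onto whichever machines have capacity, instead of each application owning statically provisioned servers.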


Monday, 08 December 2014 00:00

The future of business continuity


The sun is now setting on 2014 and we can look forward to welcoming in the new year. 2014 was the 20th anniversary of the Business Continuity Institute but the commemorations were never about reflecting back on the previous 20 years, but rather looking to the future and the new horizon that awaits us all. The BCI’s outgoing Chairman – Steve Mellish FBCI – referred to this as his 2020 Mission.

Who better to write about where the industry is heading than those who will perhaps be doing most to shape that future, those who are just starting out in their careers? '20 in their 20s' is a series of essays written by business continuity professionals from across the world who are all still aged in their twenties, so all still with a long career ahead of them.

This publication presents what these twenty young professionals feel are the challenges that the business continuity industry will face in the future. Some relate to the particular industry they work in and some relate to the region they are based in, but they all give an idea of what may lie ahead.

To read '20 in their 20s: The future of business continuity', click here.

On October 20, 2014, Wyndham Worldwide Corporation won dismissal of a shareholder derivative suit seeking damages arising out of three data breaches that occurred between 2008 and 2010.  Dennis Palkon, et al. v. Stephen P. Holmes, et al., Case No. 2:14-cv-01234 (D. N.J. Oct. 20, 2014). Wyndham prevailed, but the litigation carries key cybersecurity warnings for officers and directors.

Businesses suffering data breaches end up litigating on multiple fronts. Wyndham had to defend itself against the shareholder derivative action and against a Federal Trade Commission action.  In other data breach-related cases, the Securities & Exchange Commission, the Department of Justice and state regulatory agencies have asserted jurisdiction. Regulatory actions only compound exposure from private civil actions.

Officers and directors play a key role in cybersecurity. Wyndham’s directors supported the company as it defended its conduct and procedures before the FTC. However, they also had to satisfy their fiduciary duties to assess whether the breaches were the result of negligent or reckless conduct by Wyndham’s officers, which may have required the company to file its own civil action against its officers. It is not difficult to imagine a situation in which a board of directors determines that the company’s officers acted wrongfully or negligently and ends up choosing between suing the company’s own officers for their conduct or forgoing such a lawsuit and facing derivative litigation from shareholders.


The hyperscale data center industry has made no secret of its desire to leverage renewable energy to the greatest extent possible. When you start measuring density in megawatts, any solution that helps cut the power bill is welcome.

Lately, much of the activity has centered on wind, with top-tier data producers signing long-term agreements with wind farms near their newest plants, or in some cases building capacity on-site.

Google, for example, recently teamed up with Dutch utility Eneco to provide wind energy to the company’s new facility in Eemshaven in the Netherlands. The goal is to run the plant on 100 percent wind power sourced from Eneco’s farm in nearby Delfzijl; in fact, the data center is expected to draw the full output of that facility for the 10-year lifespan of the contract. The data center is expected to go online in mid-2016.


Earlier today, we published a report that dissects global risk perceptions of business and technology management leaders. One of the most eye-popping observations from our analysis is how customer obsession dramatically alters the risk mindset of business decision-makers.

Out of seven strategic initiatives -- including “grow revenues,” “reduce costs,” and “better comply with regulations” -- “improve the experience of our customers” is the most frequently cited priority for business and IT decision-makers over the next 12 months. When you compare those “customer-obsessed” decision-makers (i.e., those who believe customer experience is a critical priority) with others who view customer experience as a lower priority, drastic differences appear in how they view, prioritize, and manage risk.

Customer obsession has the following effects on business decision-makers’ risk perceptions:


(TNS) — A powerful storm is bearing down on the Philippines, prompting residents to flee their homes in some central coastal regions still recovering from last year's deadly Typhoon Haiyan.

Typhoon Hagupit, which was packing winds as high as 149 mph over the Pacific Ocean on Thursday, is expected to make landfall Saturday, bringing heavy rain and storm surges of up to 13 feet.

Although there is uncertainty about the storm's route, forecasts by the Philippines weather agency show it hitting the eastern coast and barreling west along a trajectory similar to that of Haiyan, which destroyed about 1 million homes, displaced 4 million people and left more than 7,300 dead and missing in November 2013.


Do you remember the scene from The Empire Strikes Back where the Millennium Falcon is trying to escape an Imperial Star Destroyer? Han Solo says, “Let’s get out of here, ready for light-speed? One… two… three!” Han pulls back on the hyperspace throttle and nothing happens. He then says, “It’s not fair! It’s not my fault! It’s not my fault!” 

Later in the movie, when Lando and Leia are trying to escape Bespin, the hyperdrive fails yet again. Lando exclaims, “They told me they fixed it. I trusted them to fix it. It's not my fault!” In the first case, the transfer circuits were damaged; in the second, stormtroopers had disabled the hyperdrive.

Ultimately, they were at fault: they were the captains of the ship, and the buck stops with them. It doesn't matter what caused the problems; they were responsible. Excuses don't matter when a Sith Lord is in pursuit.

I am seeing a trend where breached companies might be heading down a similar “it’s not my fault” path. Consider these examples:


Keith Fehr wants to be ready for anything when the Super Bowl comes to the University of Phoenix Stadium in February. “We trained on structural collapse, on foodborne illness. We practiced a biological agent release, a chemical warfare release, explosions, multi-vehicle accidents,” he said.

As director of emergency management for the Maricopa Integrated Health System, an Arizona system that encompasses an adult trauma center, pediatric trauma, a regional burn center and two behavioral health facilities, Fehr said he has his bases covered. “The big game may never see a chemical weapons attack,” he said, “but you always want to push to the point of failure, to see where you could do better.”

Fehr got his right-to-the-edge training this fall at the Center for Domestic Preparedness (CDP), a FEMA teaching facility where some 14,000 first responders and emergency managers come each year to drill, pairing classroom time with intensely realistic exercises. Walking wounded stagger through a mock downtown. Radiation victims crowd the halls of a full-scale hospital. Hazmat teams deal with actual anthrax and ricin. It’s a hardcore program, with FEMA picking up all participants’ costs.


When a technology company does well, more power to it. When it does good at the same time, it warrants our attention. So when TCN, a provider of cloud-based call center technology in St. George, Utah, announced that it was releasing technology that would help visually impaired people get jobs in call centers, my attention was immediately grabbed.

On Tuesday, TCN announced the release of Platform 3 VocalVision, technology that enables visually impaired people to navigate TCN’s Platform 3.0 call center suite. The approach was to optimize the platform to be compatible with Job Access with Speech (JAWS), a popular screen reader that assists users whose vision impairment prevents them from seeing screen content or using a mouse.

In an email interview, Terrel Bird, co-founder and CEO of TCN, explained the roots of the project.


(TNS) — This is a test of the region's preparedness for sea level rise and climate change. This is only a test:

It's Aug. 19, 2044. Hurricane Elvis, a Category 3 storm, is bearing down on Hampton Roads.

Sea levels are 1.5 feet higher than today. Because of climate change, the region has had 60 days of 90-degree weather this year. The National Weather Service is forecasting storm surges from Elvis of 3 to 8 feet.

What does Hampton Roads need to do to prepare for something like this?

The scenario was part of a federally led exercise held this week at Old Dominion University.


The year is 2015. You walk into your bank to make a withdrawal, hold your smartphone to the terminal with one hand, and put the fingers of your other hand on the small green-glowing window.

A buzzer sounds and the words “IDENTITY REJECTED” flash onto the screen. A security guard appears from nowhere.

You begin the first of many long, frustrating protestations. You are who you say you are, but you can’t prove it.

Your identity has been snatched.


BSI has announced the availability of a revised version of PAS 96, which helps companies safeguard food and drink against malicious tampering and food terrorism. PAS 96 ‘Defending food and drink’ was first published in 2008 as a guide to Hazard Analysis Critical Control Point (HACCP) which identifies and manages risks in supply chains.

The food and drinks industry is used to handling natural errors or mishaps within the food supply chain, but the threat of deliberate attack, although not new, is growing with the changing political climate. Ideological groups can see this as an entry point to commit sabotage or further criminal activity.

The impacts of threats to the food supply chain are therefore great. They can include direct losses incurred in responding to an act of sabotage and compensation paid to affected producers, suppliers, customers and distributors. Trade embargoes can be imposed by trading partners, and long-term reputational damage can occur as a result of an attack.



Businesses in the UK are at risk of sleepwalking into a reputational time bomb due to a lack of awareness on how to protect their data assets, according to new research by BSI. As cyber hackers become more complex and sophisticated in their methods, UK organizations are being urged to strengthen their security systems to protect both themselves and consumers.

The BSI survey of IT decision makers found that cyber security is a growing concern with over half (56%) of UK businesses being more concerned than 12 months ago. 7 in 10 (70%) attribute this to hackers becoming more skilled and better at targeting businesses. However, whilst the vast majority (98%) of organizations have taken measures to minimize risks to their information security, only 12% are extremely confident about the security measures their organization has in place to defend against these attacks.

These concerns echo those in the annual Horizon Scan survey carried out by the Business Continuity Institute and sponsored by BSI, which showed that cyber attacks and data breaches are the joint second biggest concern for business continuity practitioners. In the 2014 report, 73% of respondents to a global survey expressed either concern or extreme concern about each of these threats materialising.

Worryingly, IT Directors appear to have accepted the risks to their information security, with 9 in 10 (91%) admitting their organization has been a victim of a cyber-attack. Around half have experienced an attempted hack, and/or suffered from malware (49% in both instances). Around four in ten (42%) have experienced the installation of unauthorized software by trusted insiders, and nearly a third (30%) have suffered a loss of confidential information.

Organizations need to safeguard themselves and their customer data; however, there is an inherent lack of trust from consumers over how their data is handled, with a third of consumers admitting they do not trust organizations with their data. There have been many high-profile data breaches in the last few years that demonstrate why this lack of trust is justified. On the other hand, there is a level of acceptance that nothing online will ever be safe, leading to a false sense of security that ‘this will not happen to me’ amongst those who have not suffered from a cyber-attack or cyber-crime.

Maureen Sumner Smith, UK Managing Director at BSI added: “Consumers want their information to be confidential and not shared or sold. Those who want to be reassured that their data is safe and secure are looking to organizations who are willing to go the extra mile to protect and look after their data. Best practice security frameworks, such as ISO 27001 and easily recognizable consumer icons such as the BSI Kitemark for Secure Digital Transactions can help organizations benefit from increased sales, fewer security breaches and protected reputations. The research shows that the onus is on businesses to wake up and take responsibility if they want to continue to be profitable and protect their brand reputations.”

Efforts continue to stop the spread of the Ebola outbreak and to find vaccines to defeat the virus. However, businesses need to be prepared in more ways than one. Although the risk of a widespread Ebola infection occurring outside West African countries is considered low, the potential consequences could be catastrophic and deadly. Like other epidemics that became pandemics, precautions against Ebola can start with common-sense instructions to prevent infection and to react appropriately if it is detected. But they cannot end there. Organisations must make sure that additional protection is in place both for their employees and their business activities.


When creating a business continuity (BC) or disaster recovery (DR) plan, I say “begin with the end in mind.”

A BC / DR plan’s primary goal is to help prepare an organization so it can respond to and fully recover from any disaster, as quickly as possible. But how many organizations actually get to the end with a fully functional, integrated, easy-to-use crisis management plan (or incident management / continuity of operations plan)? How many still have a big, thick binder with multiple pages you have to flip through to find the information you need?

The point of this article is to map out elements of an effective crisis management plan with the goal of helping you avoid recovery delays and potential financial or operational disasters. Having an effective crisis management plan with each action mapped out prior to an incident is essential. Without it, your emergency response might lead to catastrophic consequences for your employees, your business and your customers.


2014 saw continued use of buzzwords like cloud, wearables, BYOD and IoT, but conversations around what these will mean for business if we don’t evolve and prepare our IT infrastructures were significantly lacking.

There’ll always be some level of disconnect between maintaining IT and maintaining business productivity; both have very different deliverables. However, the two must be interlinked, as there are key areas where IT and business objectives overlap. Understanding the ICT environment in depth is important to improving business resilience and the efficiency of the ICT infrastructure.

In this article Patrick Hubbard highlights emerging areas where greater understanding is required to enable organizations to maintain current levels of ICT availability and resiliency.


EMC Corporation has published the findings of a new global data protection study that reveals that data loss and downtime cost enterprises more than $1.7 trillion in the last twelve months. Data loss is up by 400 percent since 2012 while, surprisingly, 71 percent of organizations are still not fully confident in their ability to recover after a disruption.

The EMC Global Data Protection Index, conducted by Vanson Bourne, surveyed 3,300 IT decision makers from mid-size to enterprise-class businesses across 24 countries.

Impact of data loss and downtime
The good news is that the number of data loss incidents is decreasing overall. However, the volume of data lost during an incident is growing exponentially:

  • 64 percent of enterprises surveyed experienced data loss or downtime in the last 12 months;
  • The average business experienced more than three working days (25 hours) of unexpected downtime in the last 12 months;
  • Other commercial consequences of disruptions were loss of revenue (36 percent) and delays to product development (34 percent).

New wave of data protection challenges
Business trends, such as big data, mobile and hybrid cloud are creating new challenges for data protection:

  • 51 percent of businesses lack a disaster recovery plan for any of these environments and just 6 percent have a plan for all three;
  • In fact, 62 percent rated big data, mobile and hybrid cloud as 'difficult' to protect;
  • With 30 percent of all primary data located in some form of cloud storage, this could result in substantial loss.

The protection paradox
Adopting advanced data protection technologies dramatically decreases the likelihood of disruption. And, many companies turn to multiple IT vendors to solve their data protection challenges. However, a siloed approach to deploying these can increase risks:

  • Enterprises that have not deployed a continuous availability strategy were twice as likely to suffer data loss as those that had;
  • Businesses using three or more vendors to supply data protection solutions lost three times as much data as those who unified their data protection strategy around a single vendor;
  • Those enterprises with three vendors were also likely to spend an average of $3 million more on their data protection infrastructure compared to those with just one.

More details:

There are a great many challenges to overcome to prepare a sizable organization for crises, emergencies or reputation disasters. But one seems nearly intractable: the ignorance of those in high places. The very ones who will make the big decisions when push comes to shove. The lawyers, the CEOs, the regional execs, the Incident Commanders, the chiefs, the directors, the presidents.

If the ones who call the shots during a response do not understand the water they are swimming in, the effort is doomed–despite all the preparation that communication and public relations leaders may put in place.

A week or so ago I had the privilege of presenting to the Washington State Sheriffs and Police Chiefs’ association training meeting. Chief Bill Boyd and I were to give a four-hour presentation to these law enforcement leaders. Bill did the bulk of the work on the presentation, but had a medical emergency and couldn’t present with me. One item he had gathered really hit me–and those present. The Boston Police radio message from the Incident Commander on the scene just after the bombing occurred included the calm but clearly adrenalin-filled IC’s details on what actions the police on the scene were taking. Then he said, “And I need someone to get on social media and tell everyone what we are doing.” That’s correct: one of the top priorities of this Commander was to inform the public of police actions, and the way he knew to do that was through the agency’s social media channels.


Boards, regulators and leadership teams are demanding more and more of risk, compliance, audit, IT and security teams. They are asking them to collaboratively focus on identifying, analyzing and managing the portfolio of risks that really matter to the business.

As risk management programs evolve to more formal processes aligned with business objectives, leaders are realizing that by developing a proactive mindset in risk and compliance management, teams can provide added value to help the organization gain agility by identifying new opportunities as well as managing down-side risk. Organizations with this new perspective are more successful in orchestrating change to provide a 360-degree view of both risk and opportunity.

Risk teams that are further along on the journey of leveraging proactive approaches to risk management look not only within the organization but beyond to supplier, third party and customer ecosystems. This means developing a view across the larger enterprise infocosm, to ensure alignment of people, processes and technologies.


(TNS) — In baseball, when a slugger has been slumping for a few years in a row, the pundits in the upper deck will be quick to declare a trend; “the bum’s done,” they’ll assert.

Weather forecasters are a little more retrospective.

In 2001, forecasters announced that they believed that, since 1995, the tropics had been in a cycle of more and stronger storms. Such periods can last 25 to 40 years.

The hurricane season that ended Sunday, Nov. 30, was quiet. So was the year before that. Only three seasons since 1995 have been below average. We just went through two of them.

This followed some of the busiest, and most damaging, years on record.



Tuesday, 02 December 2014 00:00

Business Continuity Management and ERM Tools

In theory, BCM and ERM should get along just fine. ERM, or enterprise risk management, is concerned with identifying both positive and negative risk for an organisation – or opportunities as well as threats, if you prefer. Business continuity management is about keeping a business in operation in the face of adversity. It’s also about enhancing the value and profitability of operations, thanks to a better corporate image towards customers, banks, insurers and the like. Effective BCM depends on good risk analysis of the kind that ERM is designed to do. With a wide selection of ERM software tools available to automate risk management, how can organisations find out if there’s one that’s right for them?


Tuesday, 02 December 2014 00:00

Working Our Way Toward the Federated Cloud

Some interesting research came out last month regarding the enterprise’s attitude toward the cloud and what it will take to push more of the data load, and mission-critical functions in particular, off of local infrastructure. It turns out that while security and availability are still prime concerns, flexibility and federation across multiple cloud architectures are equally important.

In IDG’s most recent Enterprise Cloud Computing Study, more than a third of IT respondents say they are comfortable with the current state of cloud technology, with about two thirds saying the cloud increases agility and employee collaboration. The key data, however, comes in the attitude toward advanced networking technologies like software-defined networking (SDN) and network functions virtualization (NFV), with more than 60 percent saying they plan to increase their investment in these areas specifically to enhance their ability to access and manage disparate cloud environments.


Where does your business stand on security readiness?

If you are like the majority of small businesses, you are pretty nervous about your cybersecurity efforts and ability to thwart and/or react to a threat.

In October, e-Management asked attendees at the CyberMaryland Conference about their cybersecurity policies. The CyberRX survey found that 63 percent of small businesses aren’t very confident about their continuous security monitoring capabilities and nearly a quarter don’t provide any type of security training for their employees. Of those that do provide some sort of training, it is mostly periodic—and we’ve learned that cybersecurity education and training needs to be a constantly evolving effort because the threat landscape is always changing.


(TNS) — Tornado Alley is undergoing a transformation.

The number of days that damaging tornadoes occur has fallen sharply over the past 40 years, a study published recently in the journal Science shows. But the number of days on which large outbreaks occur has climbed dramatically.

“It’s really pretty shocking,” said Greg Carbin, warning coordination meteorologist with the Storm Prediction Center in Norman, Okla.

In the early 1970s, there was an average of 150 days each year with at least one F1 tornado. That number has dropped to about 100 days each year now.

There were just six days in all of the 1970s with at least 30 F1 tornadoes. Now, such large-outbreak days occur about three times a year.


(TNS) — The nation's top housing official recently toured the core of a house in Brownsville that holds the promise of returning people quickly to their homes after a major disaster. What he didn't know was that it had been partially put up in an afternoon by a group of unskilled teenagers.

The house inspected Monday by Housing and Urban Development Secretary Julian Castro is part of a $2 million pilot project that envisions the construction of less-expensive, structurally sound housing within days of a disaster instead of years. Although hundreds of low-income homes have been rebuilt since Hurricanes Dolly and Ike laid waste to the Texas Gulf Coast in 2008, many families are still waiting for housing already funded with federal disaster money.

The RAPIDO project, to build 20 prefabricated homes in the Rio Grande Valley, is the first of two projects that its originators hope will revolutionize not only the way housing is built after disasters, but as a way to provide low-income housing everywhere in Texas. A similar $4 million project to build 20 homes in Harris and Galveston counties is in its early stages and expected to produce its first house by March.


Remember the business aftermath of Hurricanes Katrina and Sandy? In each case, companies far and wide scrambled to put business continuity/disaster recovery (BC/DR) plans in place if they didn’t already have them – whether or not they had felt so much as a raindrop from the super-storms.

But human memory is short-lived. As incredible as it may seem, some people have already forgotten the devastation and destruction caused by disasters such as Hurricanes Sandy and Katrina. The problem, of course, is that the risk of disasters hasn’t gone down, even if our alertness to them has. All you need to do is take a look at data such as Sperling’s natural disaster map to see that the next disaster could be just around the corner … with the risks notably higher depending on where you are.

So now – in between crises – is a great time to figure out how to mitigate the risk associated with natural disasters. And one of the foremost ways to do so is to consider the location of your secondary or backup data center.


Monday, 01 December 2014 00:00

Seven crisis management tips

By Charlie Maclean-Bristol, FBCI

Recently I conducted three strategic level exercises and thought I would share some of the lessons learned. The exercises involved two public sector executive teams and a manufacturer.

The following are the main lessons learned.