Industry Hot News


“Pandemic” and “panic” sound a lot alike. Certainly, the first can trigger the second in next to no time, as the recent outbreak of Ebola has demonstrated. But as a leader in your company, you can avoid both by encouraging your cross-functional teams to take the following six steps.



(TNS) — There are chilling similarities between the deadly Charlie Hebdo attack in Paris and the Boston Marathon bombings, with lessons to be drawn for law enforcement, terrorism experts say.

Both attacks have been blamed on homegrown terrorist brothers — in each case with a brother who had previously drawn law enforcement attention for Islamic radical ties. In both cases, police and citizens were targeted with equal cold-blooded vigor.

“I think what you’re going to see is governments going through their watch lists to see how many names appear identical. They should have added worry when you have two or three members of the same family giving prior warning, governments should be taking a second and third look at them,” said Victor Davis Hanson of the Hoover Institution. “When you are dealing with familial relations, it means there are fewer people who have privileged information about the ongoing plotting and the secret is reinforced by family ties ... it’s going to be much harder for Western intelligence to break into them.”



Thursday, 08 January 2015 00:00

43 States Have 'Widespread' Flu Problems

(TNS) -- Influenza viruses have infiltrated most parts of the United States, with 43 states experiencing "widespread" flu activity and six others reporting "regional" flu activity, according to the Centers for Disease Control and Prevention.

Hawaii was the only state where flu cases were merely "sporadic" during the week that ended Dec. 27, the CDC said in its latest FluView report. One week earlier, California also had been in the "sporadic" category, and Alaska and Oregon reported "local" flu outbreaks. Now all three states have been upgraded to "regional" flu activity, along with Arizona, Maine and Nevada.

The rest of the states are dealing with "widespread" outbreaks, according to the CDC.



Thursday, 08 January 2015 00:00

SMBs Should Consider These Tech Trends in 2015

Of course, the end of 2014 and the beginning of 2015 bring all sorts of articles predicting what will be hot in the coming year. For small to midsize businesses (SMBs), quite a few outlets are reporting their lists of technology trends to watch.

Entrepreneur gave three “promising trends” for 2015, which include creating and leveraging well-designed technology, adopting software as a service (SaaS) and developing “data-driven insights.”

Taking advantage of data to make better informed decisions is also a top trend for SMBs to watch from the Huffington Post. According to writer Joyce Maroney, “Smaller businesses, swimming in lots of data of their own, will likewise be taking more advantage of that data to bring science as well as art to their decision making.” That likely means delving further into more data sources than just Google Analytics, says Entrepreneur writer Himanshu Sareen, CEO of Icreon Tech.



The presence or absence of catastrophes is a defining factor in the financial state of the U.S. property/casualty insurance industry.

At the 2014 Natural Catastrophe Year in Review webinar hosted by Munich Re and the Insurance Information Institute (I.I.I.), we can see just how defining the influence of catastrophes can be.

U.S. property/casualty insurers had their second best year in 2014 since the financial crisis – 2013 was the best – according to estimates presented by I.I.I. president Dr. Robert Hartwig.

P/C industry net income after taxes (profits) is estimated at around $50 billion for 2014, following 2013, when net income rose by 82 percent to $63.8 billion on lower catastrophe losses and capital gains.



Thursday, 08 January 2015 00:00

Survey: business continuity in 2015

Continuity Central’s annual survey asking business continuity professionals about their expectations for the year ahead is now live.

Please take part at https://www.surveymonkey.com/r/businesscontinuityin2015

The survey looks at the trends and changes the profession can expect to see in the year ahead.

Read the results from previous years.

Thursday, 08 January 2015 00:00

Scoping Out Your Program/Risk Assessment

At the PLI Advanced Compliance & Ethics Workshop in NYC in October, Scott Killingsworth of the Bryan Cave law firm noted that each risk assessment should be unique. I agree, and I believe that the case for uniqueness is even more powerful for the combined program and risk assessments companies sometimes undertake. Given the diversity of possibilities, where should you start in scoping out such an engagement? Another way of asking this question is: “How should you conduct a needs assessment for a program/risk assessment?”

To begin, it may be worth thinking in terms of the following six fields of information which can comprise the subjects of an assessment:



The future of IT infrastructure is changing. My friend, BJ Farmer over at CITOC, is fond of reminding me that Change is the Only Constant (see what CITOC stands for?).

It’s true for most everything in life, and especially true for our industry. You can either embrace the changes that come along, evolving how you present services to your clients, or you can slowly lose relevance and fade out of the big picture. The choice is yours.

Right now, change comes from The Cloud.

Yes, there is definitely a lot of hype about the cloud, and it’s easy to grumble about fads and look at the big cloud migration as a bandwagon everyone’s too eager to jump on. But the plain fact is that the cloud is providing affordable, smart alternatives to the kind of infrastructure that used to be the bread and butter of an MSP, and it’s not going anywhere. So you can either keep railing against the cloud, running your Exchange servers and piecing together various services from different partners, or you can start thinking about how to offer innovative solutions for your clients by STRATEGICALLY leveraging the cloud.



Thursday, 08 January 2015 00:00

Human Error Caused 93% of Data Breaches

Despite tremendously increased attention, the number of reported cyberbreach incidents escalated rapidly in 2014. According to Information Commissioner’s Office data collected by Egress Software Technologies, U.K. businesses saw substantially more breaches last year, with industry-wide increases of 101% in healthcare, 200% in insurance, 44% among financial advisers, 200% among lenders, 56% in education and 143% in general business. As a result, these industries also saw notable increases in fines for data protection violations.

The role of employees was equally alarming. “Only 7% of breaches for the period occurred as a result of technical failings,” Egress reported. “The remaining 93% were down to human error, poor processes and systems in place, and lack of care when handling data.”

Check out more of the findings from Egress’ review in the infographic below:



The recent Ebola outbreak unearthed an interesting phenomenon. A “mystery hemorrhagic fever” was identified by HealthMap — software that mines government websites, social networks and local news reports to map potential disease outbreaks — a full nine days before the World Health Organization declared the Ebola epidemic. This raised the question: What potential do the vast amounts of data shared through social media hold in identifying outbreaks and controlling disease?

Ming-Hsiang Tsou, a professor at San Diego State University and an author of a recent study titled The Complex Relationship of Realspace Events and Messages in Cyberspace: Case Study of Influenza and Pertussis Using Tweets, believes algorithms that map social media posts and mobile phone data hold enormous potential for helping researchers track epidemics.

“Traditional methods of collecting patient data, reporting to health officials and compiling reports are costly and time consuming,” Tsou said. “In recent years, syndromic surveillance tools have expanded and researchers are able to exploit the vast amount of data available in real time on the Internet at minimal cost.”
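As a rough illustration of the keyword-based filtering that such syndromic surveillance tools build on, the minimal sketch below counts symptom-related posts per day and region and flags unusual clusters. The keywords, posts and threshold are hypothetical; this is not HealthMap’s or Tsou’s actual pipeline, which draws on far richer sources and classifiers.

```python
from collections import Counter
from datetime import date

# Hypothetical symptom keywords and geotagged posts; real systems use far
# larger vocabularies, trained classifiers and live social media feeds.
SYMPTOM_KEYWORDS = {"fever", "hemorrhagic", "vomiting", "coughing"}

posts = [
    {"day": date(2014, 3, 10), "region": "Gueckedou", "text": "mystery fever cases reported"},
    {"day": date(2014, 3, 10), "region": "Gueckedou", "text": "neighbor has fever and vomiting"},
    {"day": date(2014, 3, 11), "region": "Conakry",   "text": "traffic is terrible today"},
]

def flag_signals(posts, threshold=2):
    """Count keyword-matching posts per (day, region) and flag possible clusters."""
    counts = Counter()
    for p in posts:
        words = set(p["text"].lower().split())
        if words & SYMPTOM_KEYWORDS:
            counts[(p["day"], p["region"])] += 1
    return {key: n for key, n in counts.items() if n >= threshold}

print(flag_signals(posts))
# {(datetime.date(2014, 3, 10), 'Gueckedou'): 2}
```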



(TNS) — After a series of 13 small earthquakes rattled North Texas from Jan. 1 to Wednesday, a team of scientists is adding 22 seismographs to the Irving area in an effort to learn more.

The team of seismologists from Southern Methodist University, which has studied other quakes in the area since 2008, deployed 15 of the earthquake monitors Wednesday. SMU studies of quakes in the DFW Airport and Cleburne areas have concluded wastewater injection wells created by the natural gas industry after fracking are a plausible reason for the temblors in those areas.

But Craig Pearson, seismologist for the state Railroad Commission, said that is not the case with the Irving quakes.

“There are no oil and gas disposal wells in Dallas County,” Pearson said in a Wednesday email.



Wednesday, 07 January 2015 00:00

Frigid Weather Heightens Ice Hazards

Freezing weather now sweeping across much of the U.S. brings a greater risk of ice storms and underlines the need for careful planning and heightened safety measures.

In fact, it does not take much ice to create disaster conditions. Even a thin coat of ice can create dangerous conditions on roads. Add strong winds and you have a recipe for downed trees and power lines, bringing outages that can last for days.



Wednesday, 07 January 2015 00:00

How We Get Work Done: Good Old Email

While attention is focused this week on the CES 2015 show in Las Vegas and all the new technology, gadgets and apps that may change the way we work in the near future, Pew Research has a reminder of the technology that we truly consider indispensable at work: Email and the Internet.

After a survey of 1,066 adult Internet users, Pew Research analyzed results from those who have full- or part-time jobs. When it comes to the digital work lives of these respondents, the findings indicate, the tools designated as “very important” are nothing new. Sixty-one percent named email, 54 percent “the Internet,” and 35 percent a landline phone. Cell phones and smartphones trailed at 24 percent, and social networking sites grabbed a measly 4 percent.

Pew notes that email is still king despite increasing awareness of drawbacks, including “phishing, hacking and spam, and dire warnings about lost productivity and email overuse.” In fact, 46 percent of respondents said they think they are more productive with their use of email and other digital tools; 7 percent said they are less productive. Being more productive, these workers report, includes communicating with more contacts outside the company, working more flexible hours, and working more hours overall.



By David Honour

As we enter a new year it’s always a good exercise to look ahead at potential changes in the coming 12 months and what these might mean for existing business continuity plans and systems. Will the strategies you had in place in 2014 remain fit for purpose, or will some reworking be necessary? What emerging threats need to be considered to ensure that new exposures are not developing? In this article I highlight three areas which are likely to be the biggest generic business continuity challenges in 2015.

The rise and rise of information security threats

2014 was the year that information security-related incidents took many of the business continuity headlines, with attacks increasing in sophistication, magnitude and impact. This situation is only going to get worse during 2015.

The greatest risk is that of a full-on cyber war breaking out, which would inevitably result in collateral damage to businesses. The first salvoes have been seen in a potential United States versus North Korea cyber war, but other state actors are also well geared up for cyber battle, including Israel, Russia, China and India. The cyber-warfare skills of terrorist groups such as ISIS should also not be underestimated.



On January 1, 2015, version 3.0 of the PCI (Payment Card Industry) Data Security Standards replaced version 2.0 as the standard. In other words, what some financial institutions, merchants, and other credit card payments industry members already saw as an onerous process—complying with PCI standards and possibly being audited—has just become even harder. While I can’t take the blood, sweat and tears out of PCI compliance, as an experienced Qualified Security Assessor (QSA) I can give you some context for why PCI issued a new version of its standards, and why 3.0 is a good thing for your business in the end.



Industrial-organizational (I-O) psychologists are all about what makes us tick in the workplace, so it’s unsurprising that the Society for Industrial and Organizational Psychology (SIOP) releases an annual “Top 10 Workplace Trends” list. Equally unsurprising, but interesting nonetheless, is that the list for 2015 is highly tech-focused.

Judging from the list, which was compiled on the basis of a survey of SIOP’s 8,000 industrial-organizational psychologists, these folks appear to have a pretty good handle on technology trends, which clearly have had a significant impact on their views of the workplace in the coming year. Here’s their Top 10 list:



(TNS) -- Hydraulic fracturing at two well pads in Mahoning County caused 77 small earthquakes last March along a previously unknown geologic fault, a new scientific study says.

The series of temblors included one quake of magnitude 3 -- rare in Ohio -- that was strong enough to be felt by neighbors, according to the study by three researchers from Miami University.

At the time of the quakes, only five were reported, ranging from magnitude 2.1 to 3.

The new research was published online Tuesday in the Bulletin of the Seismological Society of America. It will be printed in the February-March issue of the bulletin.

The peer-reviewed study of the quakes, which occurred in Poland Township southeast of Youngstown, appears to strengthen the link between small- and medium-sized earthquakes and both hydraulic fracturing (also known as fracking) and the use of injection wells for drilling wastes.



Enterprise organizations are looking to partner with MSPs as they move to the cloud. The key to success is to develop an engagement plan built around a high-touch process that ensures a smooth experience during all three phases of onboarding: the assessment; the transition plan and cutover; and ongoing performance analysis. Like most new technologies, cloud computing can require significant changes in business processes, application architectures, technology infrastructure, and operating models that must be properly understood before embarking on any new initiative. Having a well-thought-out strategy can mean the difference between success and failure.



(TNS) — The humble infusion pump: It stands sentinel in the hospital room, injecting patients with measured doses of drugs and writing information to their electronic medical records.

But what if hackers and identity thieves could hijack a pump on a hospital’s information network and use it to eavesdrop on sensitive data like patient identity and billing data for the entire hospital?

It is not a far-fetched scenario. Though it hasn’t happened yet, the hacking of wireless infusion pumps is considered a critical cybersecurity vulnerability in hospitals — so much so that federal authorities are focusing on the pumps as part of a wide-ranging effort to develop guidelines to prevent cyberattacks against medical devices.



(TNS) — When Glynn County, Ga., Police Chief Matt Doering began his career nearly three decades ago, the thought of holding an interactive map in his hand would have been like something out of a science fiction novel.

He and the rest of the Glynn County public safety community will see fiction become reality when the county’s new $485,000 computer aided dispatch, or CAD, system goes online next Monday. The county spent an additional $1.1 million to convert decades’ worth of reports and other information kept in a separate records management system that works with the new software.

“We wouldn’t have dreamed of this,” Doering said. “It is going to be a new mindset.”

His excitement is shared by others because it has been 12 years since the system that helps disseminate information about emergency calls has been updated. In technological terms, that is like a century.



New data from IBM (IBM) showed that despite a decline in cyber attack incidents against U.S. retailers, the number of customer records stolen during cyber attacks remained near record highs in 2014.

IBM reported that cyber attackers secured more than 61 million retail customer records in 2014, down from almost 73 million in 2013.

When IBM narrowed its data down to only incidents involving less than 10 million customer records (which excludes the top two attacks over this timeframe, Target Corporation and The Home Depot), the number of records compromised last year increased by more than 43 percent over 2013. IBM said that cyber criminals have become more sophisticated in reaching customer records.



Traditionally, insurance agencies do not reward companies that stay out of trouble. The idea is to split the cost of compensation to a few unfortunate enterprises among the larger number of all enterprises that take out an insurance policy. Compensation is paid according to the nature of the insurance claim presented and the terms of the policy. However, it can only be made if risks can be evaluated and damage calculated. Some aspects such as damage to a company’s brand may be impossible to assess, even if they have a major negative impact. Insureds and insurers try to work with quantifiable factors. But smart enterprises know there is additional leverage to be gained when putting insurance in place.



High-profile data breaches at well-known companies such as Home Depot, Staples and Sony have shined a bright spotlight on data security, or the lack of it. But these breaches have also raised an alarm within these public companies and other organizations. Many more companies, including big IT service providers, have elevated the job of IT security to the C-level, a highly visible response to what is now a highly visible issue.

“Security jobs are being moved to the C-suite because the billions lost to data breaches are a C-level problem,” said Arthur Zilberman, CEO, LaptopMD.com, a New York-based computer repair company.



(TNS) — An ice storm 10 years ago proved a learning experience for some local agencies, and proof of proper preparedness for others.

The ice storm of 2005 left more than 75,000 residents without power for several days, killed four people and devastated the city and county.

Looking back, Russ Decker, director of the Allen County Emergency Management Agency, said the neat thing about the storm was Allen County’s actions after it.

“When it was over, the first thing everybody wanted to do was get together and figure out what we can do” better next time, he said.

The results left more municipalities and county agencies ready in case there is a repeat of 2005’s disaster.



After deciding to focus its efforts squarely on the mainframe at the end of 2014, Compuware is starting 2015 off with the launch today of Topaz, a data virtualization framework that makes mainframe data more accessible.

Compuware CEO Chris O’Malley says that with the vast amounts of enterprise data that reside on the mainframe, one of the core challenges organizations face is finding ways to make that information accessible to the entire organization. Topaz, says O’Malley, provides a layer of abstraction that makes that data accessible without having to intimately understand how, for example, a COBOL application was constructed.

O’Malley says Topaz will enable IT organizations that still depend on mainframes to run their most mission-critical applications to introduce more flexibility by not only making that data available via a single user interface, but also enabling users to copy that data using a simple drag-and-drop file transfer utility.



Increased supercomputing capacity will improve accuracy of weather forecasts


Today, NOAA announced the next phase in the agency’s efforts to increase supercomputing capacity to provide more timely, accurate, reliable, and detailed forecasts. By October 2015, the capacity of each of NOAA’s two operational supercomputers will jump to 2.5 petaflops, for a total of 5 petaflops – a nearly tenfold increase from the current capacity.

“NOAA is America’s environmental intelligence agency; we provide the information, data, and services communities need to become resilient to significant and severe weather, water, and climate events,” said Kathryn Sullivan, Ph.D., NOAA’s Administrator. “These supercomputing upgrades will significantly improve our ability to translate data into actionable information, which in turn will lead to more timely, accurate, and reliable forecasts.”

Ahead of this upgrade, each of the two operational supercomputers will first more than triple their current capacity later this month (to at least 0.776 petaflops for a total capacity of 1.552 petaflops). With this larger capacity, NOAA’s National Weather Service in January will begin running an upgraded version of the Global Forecast System (GFS) with greater resolution that extends further out in time – the new GFS will increase resolution from 27km to 13km out to 10 days and 55km to 33km for 11 to 16 days. In addition, the Global Ensemble Forecast System (GEFS) will be upgraded by increasing the number of vertical levels from 42 to 64 and increasing the horizontal resolution from 55km to 27km out to eight days and 70km to 33km from days nine to 16.
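To get a feel for why these resolution changes demand so much more computing capacity, the back-of-envelope sketch below compares the relative cost of the old and new grids, assuming cost scales with the number of grid columns (inversely with the square of the horizontal grid spacing) times the number of vertical levels. This is only a rough illustration under that assumption, not NOAA’s own sizing; it ignores time-step changes, model physics and I/O.

```python
def relative_cost(dx_km, levels, dx_km_ref, levels_ref):
    """Relative computational cost versus a reference configuration,
    assuming cost ~ (1 / grid spacing)^2 * vertical levels."""
    return (dx_km_ref / dx_km) ** 2 * (levels / levels_ref)

# GFS: 27 km -> 13 km horizontal resolution
# (treating vertical levels as unchanged, an assumption made only to isolate
# the horizontal refinement)
print(round(relative_cost(13, 64, 27, 64), 1))   # ~4.3x more grid columns

# GEFS: 55 km -> 27 km horizontal resolution and 42 -> 64 vertical levels
print(round(relative_cost(27, 64, 55, 42), 1))   # ~6.3x
```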

Computing capacity upgrades scheduled for this month and later this year are part of ongoing computing and modeling upgrades that began in July 2013. NOAA’s National Weather Service has upgraded existing models – such as the Hurricane Weather Research and Forecasting model, which did exceptionally well this hurricane season, including for Hurricane Arthur which struck North Carolina. And NOAA’s National Weather Service has operationalized the widely acclaimed High-Resolution Rapid Refresh model, which delivers 15-hour numerical forecasts every hour of the day.

“We continue to make significant, critical investments in our supercomputers and observational platforms,” said Louis Uccellini, Ph.D., director, NOAA’s National Weather Service. “By increasing our overall capacity, we’ll be able to process quadrillions of calculations per second that all feed into our forecasts and predictions. This boost in processing power is essential as we work to improve our numerical prediction models for more accurate and consistent forecasts required to build a Weather Ready Nation.”

The increase in supercomputing capacity comes via a $44.5 million investment using NOAA's operational high performance computing contract with IBM, $25 million of which was provided through the Disaster Relief Appropriations Act of 2013 related to the consequences of Hurricane Sandy. Cray Inc., headquartered in Seattle, plans to serve as a subcontractor for IBM to provide the new systems to NOAA.

“We are excited to provide NOAA’s National Weather Service with advanced supercomputing capabilities for running operational weather forecasts with greater detail and precision,” said Peter Ungaro, president and CEO of Cray. “This investment to increase their supercomputing capacity will allow the National Weather Service to both augment current capabilities and run more advanced models. We are honored these forecasts will be prepared using Cray supercomputers.”

"As a valued provider to NOAA since 2000, IBM is proud to continue helping NOAA achieve its vital mission," said Anne Altman, General Manager, IBM Federal. "These capabilities enable NOAA experts and researchers to make forecasts that help inform and protect citizens. We are pleased to partner in NOAA's ongoing transformation."

NOAA's mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources.

Tuesday, 06 January 2015 00:00

Winter Weather and Cat Losses

With frigid temperatures and snow expected to fall around the New York City area and other parts of the United States this week, it’s a good time to review how winter storms can impact catastrophe losses.

For insurers, winter storms are historically very expensive and the third-largest cause of catastrophe losses, behind only hurricanes and tornadoes, according to the I.I.I.

Despite below-average catastrophe losses overall in 2014, insured losses from winter storms were significant. In fact, winter storms in the U.S. and Japan accounted for two of the most costly insured catastrophe losses in 2014.

According to preliminary estimates from sigma, extreme winter storms in the U.S. at the beginning of 2014 caused insured losses of $1.7 billion, above the $1.1 billion average full-year winter storm loss of the previous 10 years.



When it comes to mobile computing, MSPs should be gearing up for a lot more complexity going into 2015. For all practical purposes, usage of mobile computing devices has been fairly limited to accessing email and using browsers to surf the web. But by the end of this year most employees will probably have as many as five to ten applications developed by the companies they work for running on their devices. For MSPs, that means developing a capability to manage mobile applications, not just the devices they run on, will be a critical requirement in 2015.

According to Phil Redman, vice president of mobile solutions and strategy for Citrix, mobile applications almost by definition will be accessing a mix of backend services running on premises and in the cloud. As such, IT organizations will be looking to work with MSPs that not only have application management expertise, but also familiarity with the entire scope of their enterprise IT operations.



I’m back at my desk after a relaxing holiday vacation. It was a pretty quiet time for cybersecurity, too. The only really disturbing news I saw during my holiday involved a data breach at Chick-fil-A and the new theory that the Sony breach likely wasn’t done by North Korea but by an insider (but then again, some of us were questioning insider involvement from the beginning).

You and I know too well that this little lull in cybersecurity news won’t last very long, but I do think that this is a good time for companies to review their cybersecurity procedures and policies. We saw the damage from the fallout after the Sony incident and I think Target is still picking up the pieces from its breach a year ago.

Near the end of 2014, Ponemon released a study, “2014 Cost of Cyber Crime Study: United States,” that shows just how expensive and damaging a breach can be: It revealed that it can cost upwards of $20,000 a day for incidents that may take, on average, a month to fix. Jon Oberheide of Duo Security pointed out that SMBs need to be especially concerned about these breach costs, telling me in an email:

While the mega-breach-du-jour gets the most media attention, Ponemon's study calls out an important distinction: The impact of breaches is much greater on small and medium businesses than the large enterprises. The real challenge in cybersecurity is how to protect the millions of businesses who don't have an enormous security budget or a large roster of top security talent to defend their organization. And yet, they face the same attacks and adversaries as the big guys. So while companies like Sony face dramatic consequences in the short-term, they will rebuild, recover, and revisit their security strategy to continue their operations in the long-term. But if you're not a Sony-scale company ... you may just have your business effectively wiped out.



Tuesday, 06 January 2015 00:00

Are We Closing in on the Quantum Enterprise?

The prevailing narrative in enterprise circles these days is that things will keep getting bigger: Big Data, regional data centers, hyperscale … everything is aimed at finding the magic formula that allows organizations to deal with larger workloads at less cost.

It is ironic, then, that one of the ways researchers are hoping to tackle this problem is by shrinking the basic computing elements – processing, storage and networking – to atomic and even sub-atomic levels in order to derive greater power and efficiency from available resources.

So-called quantum computing (QC) has been a facet of high-performance architectures for some time, but lately there has been steadily increasing buzz about enterprise applications as well.



People who manage a functional department or a business process may find it tough to set recovery objectives for what they manage so devotedly, day in and day out. That does not necessarily mean that they are not objective. Instead, they may not know how critical their part of the business is to the rest of the organisation. Without a measuring stick, they cannot confidently make recommendations or requests about suitable recovery times. So when the next business continuity planning moment comes along, BC managers may find that they have some handholding and educating to do to bring different organisational units up to speed.



Tuesday, 06 January 2015 00:00

From the Extreme to the Mean

By 2050, most of the US coast can expect to see 30 or more days a year of floods up to two feet above high tide levels, says a new NOAA study.

The study, ‘From the Extreme to the Mean: Acceleration and Tipping Points for Coastal Inundation due to Sea Level Rise’, has been published in the American Geophysical Union’s online peer-reviewed journal Earth’s Future.

NOAA scientists William Sweet and Joseph Park established a frequency-based benchmark for ‘tipping points’: when so-called nuisance flooding, defined by NOAA’s National Weather Service as one to two feet above local high tide, occurs 30 or more times a year.

Based on that standard, the NOAA team found that these tipping points will be met or exceeded by 2050 at most of the US coastal areas studied, regardless of the rate of sea level rise likely to occur this century. In their study, Sweet and Park used a 1.5 to 4 foot set of recent projections for global sea level rise by the year 2100, similar to the rise projections of the Intergovernmental Panel on Climate Change, but also accounting for local factors such as the settlement of land, known as subsidence.
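The benchmark lends itself to a very simple calculation once daily water-level observations or projections are available: count the days each year above the local nuisance-flood threshold and check whether the count reaches 30. The sketch below uses hypothetical numbers purely to illustrate the bookkeeping, not the study’s actual data or methodology.

```python
from collections import defaultdict

def flood_days_per_year(daily_levels_ft, threshold_ft):
    """daily_levels_ft: iterable of (year, water level in feet above local high tide)."""
    days = defaultdict(int)
    for year, level in daily_levels_ft:
        if level >= threshold_ft:
            days[year] += 1
    return dict(days)

def past_tipping_point(days_by_year, benchmark=30):
    """Flag years with 30 or more nuisance-flood days, per the NOAA benchmark."""
    return {year: n >= benchmark for year, n in days_by_year.items()}

# Hypothetical observations for one tide gauge: 35 days exceed a 1 ft threshold
observations = [(2049, 0.4)] * 330 + [(2049, 1.3)] * 35
counts = flood_days_per_year(observations, threshold_ft=1.0)
print(counts, past_tipping_point(counts))   # {2049: 35} {2049: True}
```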



The BCI has published an updated version of its guide to business continuity legislation, regulation, standards and guidance around the world.

Although not completely comprehensive, the guide is probably the best currently available.

The guide starts by listing current and projected international initiatives, particularly those supported by the International Organization for Standardization (ISO), the European Union (EU) and the Basel Committee on Banking Supervision.

Each entry is categorized into one of four headings:

Legislation: Government laws which include aspects of business continuity management by name or are sufficiently similar in nature (disaster recovery, emergency response, crisis management) to be treated as BCM legislation. To be included in this category they must be legally enforceable legislation passed by a national, federal, state or provincial government.

Regulations: Mandatory rules or audited guidance documents from official regulatory bodies.

Standards: Official standards from national (and international) accredited standards bodies which relate to business continuity as a whole or to a specific related subset such as IT service continuity.

Good practice: Guidelines published as good (or best) practice by various authoritative bodies.

Obtain the document.

Policy uncertainty at home and economic and geopolitical risks overseas are the central challenges facing chief financial officers (CFOs) of the UK’s largest companies as they enter 2015, according to a survey by Deloitte.

Deloitte’s latest CFO Survey gauged the views of 119 CFOs of FTSE 350 and other large private UK companies. It found that risk appetite among CFOs fell in Q4 2014. Fifty-six percent of CFOs say that now is a good time to take greater risk onto their balance sheets, down from a record reading of 71 percent in Q3 2014 but still well above the long-term average. The change was driven by concerns over political and economic risk uncertainties: when asked to rate the level of risk posed between 0 and 100, CFOs attached a 63 rating to the UK General Election and 56 to deflation and weakness in the Euro area and to a possible referendum on the UK’s membership of the EU. The level of risk posed by each factor has risen in the last three months. Sixty percent of CFOs enter 2015 with above normal, high or very high levels of uncertainty facing their businesses, up from a low of 49 percent in Q2 2014 but at the same level seen 12 months ago.

Ian Stewart, chief economist at Deloitte, said: “The central challenges facing the UK’s largest companies as they enter 2015 are policy uncertainty at home and economic and geopolitical risks overseas. Rising levels of uncertainty have caused a weakening of corporate risk appetite which, nonetheless, remains well above the long-term average.”



According to preliminary estimates, total economic losses from natural catastrophes and man-made disasters were USD 113 billion in 2014, down from USD 135 billion in 2013. Of the total economic losses, insurers covered USD 34 billion in 2014, down 24 percent from USD 45 billion in 2013. This year’s disaster events have claimed around 11,000 lives.

Of the estimated total economic losses of USD 113 billion in 2014, natural catastrophes caused USD 106 billion, down from USD 126 billion in 2013. The outcome is well below the average annual USD 188 billion loss figure of the previous 10 years. The total loss of life of 11,000 from natural catastrophe and man-made disaster events this year is down from the more than 27,000 fatalities in 2013.

Insured losses for 2014 are estimated to be USD 34 billion, of which USD 29 billion were triggered by natural catastrophe events compared with USD 37 billion in 2013. Man-made disasters generated the additional USD 5 billion in insurance losses in 2014.



Winter has officially begun, and the next few months could create both opportunities and challenges for many managed service providers (MSPs).

While winter can bring snow, sleet and other inclement weather, MSPs can provide data backup and disaster recovery (BDR) solutions to help businesses safeguard data and ensure that companies can access this information in any conditions.

And with December quickly drawing to a close, and winter weather on the horizon in cities and towns across the country, now is the perfect time to review this month's top BDR lessons for MSPs.



Monday, 05 January 2015 00:00

A New Year to Prepare

It is that time of year again, a time to reflect on another year gone by and prepare for the new year to come. It is time to dust off last year’s resolutions and come up with a new list of things to accomplish in 2015. While researching the latest diet trend and signing up for the newest exercise class or in between swearing off your guilty pleasures, vowing to set your alarm earlier, and promising to be better at staying in touch, do yourself a favor and add these five simple preparedness resolutions to your list.

1.  Make or update your emergency kit.

If you don’t have an emergency preparedness kit in your house and car, it’s time to get one.

Gather water, food, flashlights, batteries, and a first aid kit into a container or bag and store it in an easy-to-access area of your house or car.

If you already have an emergency kit, take time to review what is in it. Does your extra pair of clothes still fit? Do the flashlights need new batteries? Are all your important documents up to date? Having an emergency kit in your home or car will not be of use during an emergency if your kit is out of date or missing adequate supplies.

For more information on what to include in your emergency kit, visit CDC’s webpage: http://emergency.cdc.gov/preparedness/kit/disasters/.

2.  Form a support network (talk to your neighbors).

New Year’s Eve parties are a great time to catch up with friends and family. Why not use this time surrounded by those you love to talk about preparing for an emergency? Talk to your neighbors about forming a support network and make a plan to check on each other after a disaster occurs. Talk to people close to you about any physical limitations or special medical needs you may have during an emergency. During an emergency it is usually the people in closest proximity that are first to offer aid, and while it may not be the typical topic of conversation at your New Year’s Eve bash, it is an important discussion to have.

3.  Prepare your family (older adults, kids and pets).

When making all your plans to prepare, don’t forget your family. Talk to older adults in your life about their emergency preparedness plans, and ask them how you can help. Make sure your kids are involved in your emergency preparedness planning. Help them understand and be part of natural disaster planning with CDC’s Ready Wrigley. Also, don’t forget your pets. Include food and water for your furry friends in your emergency kit, and identify pet friendly evacuation shelters in your area.

4.  Join an alert network (app, weather radio, email updates).

It’s 2015 and even though we may not have flying cars or time machines, we do have some great technology for tracking and alerting us to natural disasters that may be in our area. Rather than downloading the latest video game or dating app, make sure your phone and computer have alert systems set up to notify you when dangerous weather is in your area. Consider setting up push notifications or email alerts that let you know when a natural disaster may be coming.

5.  Weatherize your home and review your insurance.


The New Year is a perfect time to review your insurance plan and evaluate your home. Install or check smoke detectors and carbon monoxide alarms in your house. Make sure you know where your utility shut-off switches are located. During leaks or when evacuating your home, knowing how to turn off your gas, water, and electricity could help prevent damage to your home and protect your health. Also, check your insurance policy and make sure you are covered for possible flooding or structural damage to your home and property.

Taking time to prepare for emergencies and natural disasters now could be the most important thing you do this year.


A recent survey from antivirus software provider ThreatTrack Security showed that 81 percent of IT security professionals said they would "personally guarantee that their company's customer data will be safe in 2015."

The ThreatTrack Security survey, titled "2015 Predictions from the Front Lines," also revealed 94 percent of respondents said they are optimistic that their organization's ability to prevent data breaches will improve next year.



Monday, 05 January 2015 00:00

The cloud in 2015

Steven Harrison predicts how business use of cloud computing will develop and change during the next 12 months:

Hybrid is the equaliser

Whilst cloud computing has become an integral part of IT systems, concerns around vendor lock-in, licensing restrictions and security mean that businesses are still resistant to moving all IT operations into a hosted environment. As a result, the hybrid cloud will become the deployment model of choice for those organizations that want to leverage the elasticity of the cloud in tandem with existing infrastructure. The challenge for organizations adopting a hybrid approach is ensuring that systems can run in parallel and operate as one environment to guarantee performance and uptime.



Monday, 05 January 2015 00:00

Cybersecurity predictions for 2015

Proofpoint looks at how information security threats are likely to evolve during the coming year.

2014 was a year in which information security vaulted into the public eye, driven by a surge in both the number and the visibility of data breaches and compromises. This new attention will bring greater scrutiny in 2015, just as the nature and severity of threats continue to evolve for the worse.

Cyberextortion will be the most rapidly growing new threat family

Beginning with the rapid rise of CryptoLocker in late 2013, the threat from ransomware expanded rapidly in 2014, adding not only other ‘extortion malware’ but also spreading to mobile platforms such as Android. Paying the ransom arguably remains a popular option despite its risks, and the estimated $3 million in ransoms generated by CryptoLocker alone has shown cybercriminals the revenue potential of digital extortion schemes. These attacks are difficult to defend against and costly to recover from, and lead to business disruption that extends far beyond the loss of data.



By Andrew Hiles

Service level agreements (SLAs) and business continuity go hand-in-hand: or they should do!

Whether SLAs are implemented in support of a balanced scorecard to align information and communications technology with business mission achievement, or as a stand-alone initiative, the strategic use of service level agreements can be a perfect solution to the justification of investment in resilience and business continuity: an approach I have been advocating for over ten years.

How does it work?

First, define the business mission.

Take, as an example, a multinational company, call it Klenehost, selling miniature packs of soap, shampoo, hair conditioner and shower gel to the hotel industry. These are packaged in different ways and customized for specific hotel chains.



By Rachel Weingarten

Some brands stay fresh and relevant generation after generation. What makes certain corporate branding strategies timeless while others come and go?

Take Brooks Brothers. No less a person than Abraham Lincoln was one of the brand’s most loyal customers. So how does a nearly 200-year-old company not only stick around, but remain relevant and even cutting-edge?



Trapp Technology has unveiled disaster recovery (DR) services that are designed to deliver physical or virtual data replication, redundant connectivity and high availability of IT infrastructure during downtime recovery.

The Scottsdale, Arizona-based managed service provider (MSP) said its DR services can instantly initiate seamless recovery of applications and data.

"Our clients consistently asked us to assist them with more of their technology needs, disaster recovery being one of the most common requests," DJ Jones, Trapp's vice president of sales and marketing, told MSPmentor. "With the high demand for disaster recovery services, we made sure [these were] a priority."



Business Continuity planning and maintenance cycles often leave little time and few resources for planning how the organization will react if the unexpected actually occurs (their Incident Response).

An analogy can be drawn between healthcare and Business Continuity Management: planning and plan exercise cycles are analogous to maintaining a healthy lifestyle and having regular medical checkups. And when something serious occurs, the medical care system is prepared to react. So should your BCM program be.



By Natalie Burg

“You’ll shoot your eye out!”

Just when you thought that much-loved line couldn’t mean any more or less than it did the last 500 times you heard it, the popular movie A Christmas Story includes some business lessons you may have overlooked.

Business lessons in a holiday movie? You bet your Red Ryder, carbine action, 200-shot, range model air rifle. In this post we revisit good ol’ Cleveland Street to uncover five business lessons that can be learned from the cinematic classic.



Tuesday, 23 December 2014 00:00

Data Center 2015: Where Do We Go from Here?

’Tis the season for year-end wrap-ups and year-ahead predictions, so as in past years I will take a look at what some of the key industry players are saying and then offer my own take as to what looks real and what looks imaginary.

One of the broadest discussions of late is the future of the data center itself. As virtualization, the cloud and software-defined architectures gain in popularity, it is not hard to imagine a software-defined data center (SDDC) consisting of an end-to-end data environment sitting entirely atop the virtual layer, with nearly all hardware, save the client device, outsourced to a third-party provider.

This is part of what IDC describes as the 3rd Platform of innovation and growth. Accompanied by advances like mobile computing, Big Data analytics and social networking, the 3rd Platform characterizes what the firm says is the “new core of ICT market growth” and is already responsible for about a third of the total IT spend. For 2015, IDC expects raw compute and storage capacity to shift to cloud-based resources optimized for mobile and Big Data applications, and this will lead to the rise of “cloud first” hardware development – particularly consolidated solutions that cater to hyperscale infrastructure.



In 2015, cybercriminals will increasingly be non-state actors who monitor and collect data through extended, targeted attack campaigns, McAfee Labs predicts. In the group’s 2015 Threats Predictions, Intel Security identified internet trust exploits, mobile, internet of things and cyber espionage as the key vulnerabilities on next year’s threat landscape.

“The year 2014 will be remembered as ‘the Year of Shaken Trust,’” said Vincent Weafer, senior vice president of McAfee Labs. “This unprecedented series of events shook industry confidence in long-standing Internet trust models, consumer confidence in organizations’ abilities to protect their data, and organizations’ confidence in their ability to detect and deflect targeted attacks in a timely manner. Restoring trust in 2015 will require stronger industry collaboration, new standards for a new threat landscape, and new security postures that shrink time-to-detection through the superior use of threat data. Ultimately, we need to get to a security model that’s built-in by design, seamlessly integrated into every device at every layer of the compute stack.”

McAfee Labs predicts the top cybersecurity threats in 2015 will be:



Information technology downtime is a costly proposition. Based on industry surveys, it can cost an organization as much as $5,600 a minute, or well over $300,000 per hour in losses, according to IT research firm Gartner. But the costs and complexities of traditional approaches to disaster recovery can be expensive too, especially for smaller jurisdictions. As a result, some cities are leveraging the cloud to provide a cost-effective way to maintain services in the event of a local or regional emergency.

Asheville, N.C., historically maintained its data center redundancy through a local disaster recovery center, located just two blocks from the city’s primary data center. But when Asheville CIO Jonathan Feldman came on board, that scenario made him uncomfortable. 

“Anything that can take out City Hall can probably impact something that’s two blocks away as well,” he said. “It was sort of a thorn in my side. But disaster recovery is not the easiest thing to get money for, so we struggled a bit to find a solution.”



They say that information drives business. Actually, it’s electricity. Your data will most likely be useless if you have no power. On the other hand, if you can turn the lights on, you can start working, one way or another. But now in a kind of millennial Mobius loop, information is also increasingly driving power distribution. Smart grids are a case in point. The benefits are in higher power transmission efficiency, reduced costs, better peak load handling and better integration of customer-owned generating systems. The risk is in the network security.



In 2013 Continuity Central conducted a survey to explore quality control methods that are being used within business continuity management systems. This survey has now been repeated to see how the trends in this area have changed.

The 2014 Quality control and measurement of business continuity management systems survey was conducted online using SurveyMonkey and received 142 responses in total. 84.5 percent of respondents were from large organizations (those with more than 250 employees). Respondents came from around the world, with the most coming from the US (34 percent), the UK (25 percent) and Australia (6 percent).

The survey initially asked: “Does your organization have clear processes or methods for the quality control of business continuity plans and systems?” 66.9 percent of respondents said that, yes, their organization did have clear processes or methods, while 29.6 percent said that their organization didn’t. This was a very similar result to the 2013 survey, where 64.9 percent answered ‘yes’ and 30.2 percent answered ‘no.’



Few things are as integral to the data center as the server. I know, technically it is only one leg of the three-legged data stool, along with storage and networking, but the server is where the actual processing takes place – the brains of the operation, so to say – so it is understandable that data executives are a little apprehensive about the divergent path that server development is taking.

On the one hand, mainframes still run a fair amount of the enterprise load, despite numerous calls for the technology to be put out to pasture. At the same time, blade and microservers are showing that they are equally adept at handling massive data loads, particularly when it comes to the parallel processing of multiple data streams that characterize modern Web-facing applications.

In between, there is a plethora of high-power, medium-power and low-power solutions, not to mention the rise of modular infrastructure that could make the whole idea of disparate systems obsolete. So it is no wonder the data center executive is having trouble seeing the future.



As the Ebola outbreak in West Africa led many to question the U.S. capability to respond to infectious disease threats, an annual report shows that only half of states score well on 10 key public health measures.

Many states scored poorly on measures of communication and coordination responses to threats, vaccination rates and infections from contact with the health care system, according to the report, released annually by the Robert Wood Johnson Foundation and Trust for America’s Health. 

"Over the last decade, we have seen dramatic improvements in state and local capacity to respond to outbreaks and emergencies,” said Jeffrey Levi, executive director of the Trust, in a statement. “But we also saw during the recent Ebola outbreak that some of the most basic infectious disease controls failed when tested.”



In 2015, almost every CIO will be tasked with assessing their organizations and technology to ensure data and confidential information is protected.

Current Situation

Target, Home Depot, Staples, who’s next? These are just the most recent retail outlets that made the news. What is not making the headlines is the multitude of private- and public-sector organizations that have been hacked and lost data and information — many times totally unaware until after the fact.



Monday, 22 December 2014 00:00

Data Center Efficiency: Look Before You Leap

Efficiency in the data center is a big thing now, with organizations of all sizes working to develop both the infrastructure and the practices that can help lower the energy bill. But while analysis of data flows and operating characteristics within equipment racks is fairly advanced, the ability to peek under the covers to see how energy is actually being used is still very new.

To be sure, there is a variety of tools on the market these days, from simple measurement devices to full Data Center Infrastructure Management (DCIM) platforms, but more often than not the question revolves around not only what to measure, but how.

Without adequate insight into what is going on, it is nearly impossible to execute an effective energy management plan, says UK power efficiency expert Dave Wolfenden. Many standard tests, in fact, fail in this regard because they attempt to gauge the upper capabilities of power and cooling equipment, not how to maintain maximum efficiency during normal operation. New techniques like computational fluid dynamics (CFD) can help in this regard, but they must be employed with proper baselines in order to give a realistic indication of actual vs. projected results.



Monday, 22 December 2014 00:00

2015: The Year of Agile Data Warehousing

2015 will be the year that agile data warehouse (DW)/business intelligence (BI) takes off.  Traditional strategies for DW/BI have been challenged at best, with the running joke being that a DW/BI team will build the first release and nobody will come. On average, Agile strategies provide better time to market, improved stakeholder satisfaction, greater levels of quality, and better return on investment (ROI) than do traditional strategies. The DW/BI community has finally started to accept this reality, and it is now starting to shift gears and adopt agile ways of working. My expectation is that 2015 will see a plethora of books, case studies, and blog postings describing people’s experiences in this area.



WatchGuard Technologies is urging organizations to use the nearly epic scale of the Sony cyber attacks to spur their companies into action rather than panicking about potential risks.

"A year ago, we predicted major state-sponsored attacks may bring a Hollywood movie hack to life that exploits a flaw against critical infrastructure – we just didn't predict it would happen to Hollywood itself," said WatchGuard's Global Director of Security Strategy, Corey Nachreiner. "It's important that IT pros use this opportunity to upgrade what is often five-year-old technology to defend against five-day-old threats."

"The FBI is right when it says that less than 10 percent of companies could survive an attack like the one on Sony," continued Nachreiner. "And, unfortunately, it's not a question of if, but when for these kinds of attacks."

Nachreiner recommends five immediate actions that organizations can take to make sure they have the best possible chance of preventing attacks, and seven actions to minimize damage if cyber criminals do get in:



The Sony hacking, and the subsequent threats to the company and its supply chain, has become the biggest information security story of 2014, in a year of many high-profile incidents. What started out as ‘yet another breach story’ a few weeks ago rapidly developed into a very real business continuity and reputation-threatening incident.

On December 19th the FBI published an update on the Sony cyber attack. The highlights include:



Insurance companies face strict business uptime, data management and data protection requirements, and as a result, these businesses need data backup and disaster recovery (BDR) and business continuity solutions that fulfill these needs.

Fortunately, managed service providers (MSPs) can offer data BDR and business continuity solutions with image-based backup to help insurance companies back up files, programs and other important information quickly and easily.



Have you ever thought about all the information your appliances tell you? The world is moving toward presenting instant data about every aspect of life. For example, there is now an electric toothbrush with Bluetooth capabilities that can record your brush strokes and let you chart your dental hygiene activities on a smartphone app. Home sensor products not only tell you if your teenager is trying to sneak out at night, but also how many times someone has been dipping into the cookie jar. And many of us can’t even exercise anymore without a fitness band and apps that record every step, every calorie expended, and every turn in our sleep.

While some of that real-time data is great to have, we’re also reaching a point of TMI … “too much information,” or data overload. How much is too much real-time data? Only you can answer that for your personal data needs, but I do know there is one area where there is never enough real-time data. That is in your company’s disaster recovery plan.

Think about a disaster striking your business. You could have all your subject matter experts in place, but if they can’t access data or if your recovery strategy isn’t complete, nothing will work. The consequences could be nothing short of catastrophic: for the vast majority of companies, once they have to shut down because of server problems or another disaster, they aren’t able to recover in a timely fashion. And let’s face it … a faltering or incomplete recovery can spell death for a business.



It is fascinating to watch a new class of software be born. This doesn’t seem to happen that often anymore, but every once in a while a customer or a vendor discovers a gap in the current offerings and fills that gap with something we have never seen before. I recently ran into an event like this at BMC Engage. BMC has a write-up that subtly points to the impending creation of this new security automation product class. And last week, I spoke to Tony Stevens, who works for the Department of Technology, Management and Budget at the State of Michigan and is helping to shepherd the birth of this class. Let’s talk about that this week.



(TNS) — Think the Napa fault stopped moving after producing a 6.0 earthquake in August? Think again.

The fault that caused that Napa quake is forecast to move an additional 2 to 6 inches in the next three years in a hard-hit residential area, a top federal scientist said at a meeting of the American Geophysical Union in San Francisco on Tuesday.

It is the first time scientists have formally forecast the gradual shifting of the ground in a residential area after an earthquake.

“Until the South Napa earthquake happened, we had not clearly foreseen just what a problem that could be,” U.S. Geological Survey geophysicist Ken Hudnut said.



Security pros got the Target breach for Christmas last year. The breach hit the retailer during its busiest time of the year and cost it millions in lost business. For security pros desperate for more budget and business prioritization, you couldn’t have asked for a more perfect present - it’s as if Santa himself came down the chimney and placed a beautifully wrapped gift box topped with a bow right under your own tree. This year it looked as if all we were getting was a lump of coal - but then Sony swooped in to save us like a Grinch realizing the true meaning of Christmas.

The Sony Pictures Entertainment (SPE) breach is still unfolding, but what we know so far is that a hacktivist group calling itself the Guardians of Peace (GoP) attacked Sony in retribution for the production of a movie, “The Interview,” which uses the planned assassination of North Korea’s leader as comedic fodder. The hacktivists supposedly stole 100 TB of data that they are gleefully leaking bit by bit (imagine Jingle Bells as the soundtrack). The attack itself affected the availability of SPE’s IT infrastructure, forcing the company to halt production on several movies.

We’ll be releasing a more detailed analysis for clients later this afternoon, but at a high level, there are several reasons why this attack is in the news every day, why it will prove to be yet another turning point in the security industry, and why security is so integral to the business technology (BT) agenda:


Friday, 19 December 2014 00:00

How to Turn Open Data into Real Money

I recently interviewed a technology start-up that claimed they were already profitable, with only a few clients and a few months out the door. I have no way to verify or deny that, but I can tell you this: The entire product is built around open data.

In fact, its founders adamantly refused to let me call it a technology company, which is just one of many reasons I’m not revealing its name.

“Our product is the data,” one VP repeatedly told me.

That’s a bit of a bold claim for a company based on government-released data and other open data sets. If it were really the data, and everybody has access to the data, then what’s the point?



Friday, 19 December 2014 00:00

2014: The Perfect Malware Storm

IT security may be an MSP’s core offering or one of several lines of business. But regardless of its business model, a service provider should take stock of the current threat landscape. MSPs need to know what’s out there if they hope to help clients mitigate their security risks.

What are your customers up against? In 2014, they endured the perfect malware storm. Consider the following:



One of the side effects of the consumerization of IT is that some end customers are feeling more empowered than ever to take IT matters into their own hands rather than seek the help of IT solution providers. This is especially true when it comes to cloud services, where business owners (or their employees) can self-install a cloud backup product and instantly have access to 5 GB or more of free cloud storage. Even if business owners aren't actively involved in using or promoting DIY (do-it-yourself) cloud services, research shows their employees are. A study from Skyhigh Networks, which monitors the use of cloud services for businesses, found that the average enterprise uses 545 cloud services, which is approximately 500 more than the average CIO is aware of!

Besides the loss of control of corporate data, DIY cloud services play into the hands of cybercriminals who exploit business owners through ransomware. Like other malware, ransomware infects corporate networks through unpatched computers or when a user clicks on an infected email attachment. Once launched, the ransomware program encrypts common user files on the network--such as documents, spreadsheets and database files--and the victim is required to pay a ransom to decrypt the files.



Friday, 19 December 2014 00:00

Cyber Risk on the Inside

While the Sony cyber attack has put the spotlight on sophisticated external attacks, a new report suggests that insiders with too much access to sensitive data are a growing risk as well.

According to the survey conducted by the Ponemon Institute, some 71 percent of employees report that they have access to data they should not see, and more than half say this access is frequent or very frequent.

In the words of Dr. Larry Ponemon, chairman and founder of The Ponemon Institute:

“This research surfaces an important factor that is often overlooked: employees commonly have too much access to data, beyond what they need to do their jobs, and when that access is not tracked or audited, an attack that gains access to employee accounts can have devastating consequences.”



VMware predicted software-defined data centers (SDDC) would “hit it big” in 2013. Spoiler: That didn’t happen.

Nonetheless, the concept hasn’t gone away. In fact, IT Business Edge’s Infrastructure blogger, Arthur Cole, wrote about SDDCs several times this year, including a November article in which he called the idea “a work in progress.” He did a great job of summing up SDDCs and the current opinion of them.

Still, it raises the question: Could 2015 be the year that SDDCs actually, finally, take off? Michael Hay thinks so.

Hay is the vice president of Product Planning at Hitachi Data Systems and chief engineer for the Information Technology Platform Division (ITPD). In a recent Information Week column, Hay predicted that SDDCs will be one of three disruptive trends in the coming year.



Thursday, 18 December 2014 00:00

It’s 2015 – Do You Know Where Your Data Is?

The “Internet of Things” will take further hold and become more fully embedded as a reality in our society. However, a tipping point is likely to be reached in 2015 as public awareness of the potential for these technologies to violate personal privacy increases. This will lead to an associated public outcry for stricter controls and government legislation regarding how people, organizations and government collect and use this information. The public will no longer be satisfied to leave technology companies and users to self-police their uses of their personal data.

Surveillance and other technologies that permit the collection of data about people will continue to proliferate. Analytical tools are emerging to interpret this information, and to merge and use it in an increasingly integrated fashion to permit continuous monitoring of locational and other information about specific people and groups. Drones that are freely available in the open marketplace can be programmed to follow people and objects using GSM and other technologies as tracking beacons. Miniature homing devices that will facilitate tracking of locational information of objects and people are also freely available. Phone companies routinely collect data from everyone making cell calls on their networks. Because many phones have chips that stay on even after a battery has been removed, tracking powered-down phones is within the realm of possibility.



Thursday, 18 December 2014 00:00

Even in the Cloud B&R Still Needs TLC

Data is the lifeblood of the modern enterprise, and as with most complex organisms, loss of blood can lead to weakness and death.

So it is no wonder that data recovery has emerged as a top priority as the enterprise finds itself trusting third-party providers with the care and maintenance of its lifeblood to an ever greater degree.

According to Veeam Software, application and data downtime is costing the average enterprise about $2 million per year, with the vast majority of that cost attributed to the failure to recover data in a reasonable amount of time. This usually presents a double-edged sword for IT, though, as the pressure to improve recovery times is often accompanied by the reluctance of the front office to invest in adequate backup and recovery (B&R) infrastructure. This also affects permanent data loss, as many organizations maintain backup windows and restore points that fail to account for the massive accumulation of potentially critical data in a relatively short time.

The cloud has done a lot to relieve the burden, financial and otherwise, of wide-scale B&R. In fact, this is one of the primary drivers of IaaS, according to ResearchandMarkets, in that it provides a ready platform to not only integrate backed-up data into dynamic production environments, but to maintain a duplicate IT infrastructure should primary resources go dark. IaaS also puts these capabilities within reach of the small-to-midsize enterprise.



To customers, the cloud often seems like an ideally flexible application and data storage solution. On the other hand, starting as a cloud provider often requires very deep pockets. As a result, not every provider stays the course. And if under-capitalisation doesn’t kill a provider off, there is always the danger of a marketing failure that persuades backers to pull the plug. The irony of the situation is that many customers want to make their cloud provider a strategic part of their disaster planning. However, customers must then also extend their plan to include the possibility that the provider itself is the disaster.



Wednesday, 17 December 2014 00:00

Here Comes the Big Data

About two decades ago I thought I had a handle on big data. I was doing some data warehousing work with a telephone utility that had about 100 million transactions. That was a lot of data, I said to myself. Then, about 10 years ago, I was doing a review of a firm that audited financial trading on one of the major stock markets and I asked its big data guy how many transactions the company processed. His initial answer was, “On a slow day we get about 2.5 billion transactions.” “How many do you have on a busy day?” I asked with an air of shock. “4 or 5 billion,” he responded. Now that was really a lot of data.

Jump ahead a decade or so, and on 24 July 2014 Facebook announced that it was processing 1 trillion transactions per day. Now “that” is really, really big data. If you are a CEO, that is just one of the reasons why you should worry about having a big data strategy. Even if your organization isn’t a telecom utility or a financial institution, the amount of data you’re going to have to process is shooting up, what with all the smart (wireless) devices your customers and employees use heavily, plus the volumes of data beginning to flood the organization from all the IoT devices/systems that increasingly control any number of real-time systems.
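For a sense of scale, here is a minimal back-of-the-envelope sketch converting the per-day figures quoted above into per-second rates; the only inputs are the volumes mentioned in this piece, and the conversion is plain arithmetic:

```python
# Back-of-the-envelope conversion of the per-day volumes quoted above.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

per_day = {
    "trading audit, slow day": 2.5e9,
    "trading audit, busy day": 5e9,
    "Facebook (reported, July 2014)": 1e12,
}

for label, n in per_day.items():
    print(f"{label}: ~{n / SECONDS_PER_DAY:,.0f} transactions per second")
# Facebook's figure works out to roughly 11.6 million transactions every second.
```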



Wednesday, 17 December 2014 00:00

Lessons Learned from Data Breaches

Recent data breaches have left some large organizations reeling as they deal with the aftermath. They include the Target data breach, compromises at Home Depot, JP Morgan and USPS (which exposed employee Social Security numbers and other data) and, most recently, Sony Pictures. The Sony hack also proved embarrassing to some of the company’s executives, as private email correspondence was exposed.

Collateral damage from a data breach is significant: one in nine customers affected by a data breach stopped shopping at that retailer. According to LifeLock, a recent survey of corporate executive decision-makers found that while concern about a breach rates 4 or 5 on a 5-point scale, only 10% to 20% of total cyber security budgets go to breach remediation. Establishing an incident response plan in advance can reduce the cost per compromised record by $17.
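As a rough illustration of why planning ahead pays, the short sketch below applies the $17-per-record savings figure cited above to a hypothetical breach; the breach size is an assumption chosen purely for illustration, not a number from the survey:

```python
# Hypothetical illustration of the "$17 per compromised record" savings figure.
SAVINGS_PER_RECORD = 17          # USD, figure cited above
compromised_records = 50_000     # assumed breach size, for illustration only

savings = SAVINGS_PER_RECORD * compromised_records
print(f"Estimated savings from planning ahead: ${savings:,}")
# -> Estimated savings from planning ahead: $850,000
```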



MSPs specializing in cloud-based file sharing may not be shocked to discover that end users frequently share data via insecure means. What might come as a surprise, however, is that 20 percent of those files contain data directly related to compliance (or the lack thereof).

This statistic comes from a recent study that analyzed roughly 100 million files shared through public-cloud applications. You can see all the findings in this infographic, but here are a few key takeaways, specifically for MSPs:

Non-compliance is the norm, not the exception

Based on the numbers, most businesses are struggling to stay compliant, and some have not made it a priority at all. The compliance-related data shared on public clouds included personally identifiable information (PII), personal health information (PHI), and customer payment card information.

This presents an opportunity for MSPs. Conveying to companies the importance of compliance and the risk of leaving their data vulnerable gives you the opening to bring them a solution. Building a cloud file sharing system that stays compliant can deliver significant value.
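To make the opportunity concrete, here is a minimal, hypothetical sketch of the kind of pattern-based scan an MSP might run against shared files to flag obvious PII or payment card data; the patterns are deliberately simplistic, and a real compliance scanner would do far more (validation, context, PHI dictionaries, card check digits):

```python
import re

# Deliberately simplistic patterns, for illustration only.
PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number":       re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address":             re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_text(name: str, text: str) -> list[str]:
    """Return descriptions of possible PII found in the given text."""
    return [
        f"{name}: possible {label}"
        for label, pattern in PATTERNS.items()
        if pattern.search(text)
    ]

# Example with a hypothetical shared document.
sample = "Contact jane.doe@example.com, SSN 123-45-6789, card 4111 1111 1111 1111"
for finding in scan_text("shared_doc.txt", sample):
    print(finding)
```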



Wednesday, 17 December 2014 00:00

2015 cyber risk and data protection predictions

Businesses in 2015 are expected to experience increasing challenges as they struggle to contend with the burgeoning threat of complex cybercrime. EY analysis has outlined some of the key areas that cyber risks threaten to impact in the coming year, including the difficulties in the insurance sector of underwriting cyber risk, the raft of regulation coming out of both the EU and the UK, the importance of integrated risk functions in firms, and the cyber risk of supply chains moving to the cloud.

Insuring against cyber risk

Cyber risk poses a serious and growing threat to businesses across the UK, and companies are increasingly looking to insurers for protection against financial losses in the face of attacks. In certain sectors, regulators already require firms to take out cyber risk cover. However, cybercrime is not a traditional area of risk for insurers, and the burden of underwriting the risk is proving very difficult.

Shaun Crawford, Global Head of Insurance at EY, comments: “Cyber risk will certainly be one of the biggest challenges to the insurance market in 2015. Cybercrime is a moving beast, making it impossible to quantify the risks neatly or to calculate them in an informed or consistent manner. With so much unknown, it’s not surprising that premiums are wildly different across the market, and without cross-market stability, the industry will most likely be operating on significant indemnity losses.



Wednesday, 17 December 2014 00:00

2015 risk predictions

What emerging risks are likely to have an impact on organizations during 2015? Experts from The Institute of Risk Management give their views.

Political instability caused by low oil prices, increased shareholder activism and the business threat posed by a potential UK exit from the EU are among the chief concerns voiced by some of the UK’s leading risk experts for 2015.

As the year comes to a close, members of the Institute of Risk Management (IRM) were asked to identify key risk areas for 2015. A broad range of oil and gas, political, healthcare, regulatory and insurance risks were highlighted as potential flashpoints.



Wednesday, 17 December 2014 00:00

Using a Risk Model as a Common Language

The central purpose of a common risk language is to assist management with evaluating the completeness of its efforts to identify events and scenarios that merit consideration in a risk assessment. Either management begins a risk assessment with (a) a blank sheet of paper, with all of the start-up effort that choice entails, or (b) a common language that enables busy people with diverse backgrounds and experience to communicate more effectively with each other and identify relevant issues more quickly.

In a Corporate Compliance Insights column earlier this year, we provided a suggested language for executive management and directors to use in the Boardroom to focus the board risk oversight process. This month, we discuss the merits of a common language for use by the entire organization.

The sources of uncertainty an enterprise must understand and manage may be external or internal. Risk is about knowledge. When management lacks knowledge, there is greater uncertainty. Thus sources of uncertainty also relate to the relevance and reliability of information about the external and internal environment. These three broad groups – environment, process and information for decision making – provide the basis for an enabling framework summarizing the sources of uncertainty in a business.



Natural catastrophes and man-made disasters cost insurers $34 billion in 2014, down 24 percent from $45 billion in 2013, according to just-released Swiss Re sigma preliminary estimates.

Of the $34 billion tab for insurers, some $29 billion was triggered by natural catastrophe events (compared with $37 billion in 2013), while man-made disasters generated the additional $5 billion in insured losses in 2014.

Despite total losses coming in at below annual averages, the United States still accounted for three of the most costly insured catastrophe losses for the year, with two thunderstorm events and one winter storm event causing just shy of $6 billion in insured losses (see chart below).



Wednesday, 17 December 2014 00:00

6 Corporate Holiday Gift-Giving Tips and Ideas

By Rachel Weingarten

If the idea of buying holiday gifts for your friends and family isn’t enough to send you into a tailspin, there’s the added pressure of trying to figure out what to buy for those you work with. With so many rules, both written and unwritten, it’s all too easy to make a corporate gift-giving gaffe. But, with some thoughtfulness and a little planning, it is possible to give just the right gift to just the right person.

“Remember, gifts are a form of communication in the same way as what you write and what you say,” says Stephen Paskoff, CEO of workplace learning company ELI. “Give clients, employees and colleagues gifts in line with your organization’s values and standards, and keep in mind that what you give directly reflects your judgment and professionalism.”

Sure, you could always give a gift card, but they can feel too impersonal. The trick is to balance thoughtfulness and appropriateness. Here are some tips for giving corporate gifts that strike that balance, along with some suggestions to help inspire you:



Rentsys Recovery Services plans to offer its BlackCloud Virtual Office business continuity solution to managed service providers (MSPs).

The College Station, Texas-based business continuity solutions company today announced it will work with MSPs, healthcare software companies and regional data center providers.

Rentsys introduced BlackCloud Virtual Office last month, and now, MSPs can offer this business continuity solution to their customers.



(TNS) — The hostage crisis at the Lindt Chocolat Cafe in Sydney, Australia, unfolded in a way impossible a decade ago.

Much of it played out on Facebook and text messaging (already there as of 2004), and on YouTube, Twitter, and other social media as yet unborn in 2004. To be a hostage-taker or hostage as of 2014, it seems, you need good social-media skills.

"There's an unprecedented degree of immediacy to such crises now," says Lawrence Husick, senior fellow at the Foreign Policy Research Institute and co-director for the Center for the Study of Terrorism. "All the players are so acutely aware they're being watched. It's Shakespearean: All are walking that stage."



(TNS) — The hacking group behind the Sony cybersecurity attack has made its first physical threat.

In a message sent at around 9:30 a.m., the group — calling itself Guardians of Peace — issued a warning along with what appears to be files related to Sony Pictures CEO and Chairman Michael Lynton.

“We will clearly show it (our Christmas gift) to you at the very time and places ‘The Interview’ be shown, including the premiere, how bitter fate those who seek fun in terror should be doomed to,” the hackers wrote.

The hackers also invoked the Sept. 11, 2001, attacks, urging people to keep themselves “distant from the places at that time.”

“The world will be full of fear,” they wrote. “Whatever comes in the coming days is called by the greed of Sony Pictures Entertainment. All the world will denounce the SONY.”



Wednesday, 17 December 2014 00:00

2015 Technology Predictions: An MSP Perspective

Somehow it got to be the week of December 15, which seems crazy to me because wasn’t it just last week that I was already breaking my week-old New Year’s resolutions? But the end of the year means it’s time for predictions about 2015. What events and trends will rock the managed services world in 2015? Here’s this humble blogger’s take.



Slowly but surely, a more secure credit card is making its way toward your wallet – if it hasn’t already. Whether you call it a smart card, chip card, credit card chip or EMV (Europay, MasterCard, Visa) card, you may have heard that the United States is on the verge of adopting a new breed of credit card. These cards will fight fraud, secure your personal information, and protect you from credit card theft.

If you don’t have an EMV card in your wallet yet—and maybe even if you already do—you might be wondering what the new EMV cards are all about, and when they’ll finally become widespread in the United States. Wonder no longer…here is a quick primer on what EMV cards are, why the new smart chip is important, and when American banks, merchants, and consumers will adopt more secure credit cards once and for all.



(TNS) — In the nuclear plant control room with wall-to-wall panels of colorful knobs, levers and switches, one might think a wrong flip or a misplaced twist could become disaster, showering neighboring communities with radioactivity.

That would be a tricky feat, and an unlikely one, nuclear inspectors say.

“At nuclear power plants there are backups to backups to backups. There are so many redundant systems for a single purpose it would take multiple failures and kind of a completely unlikely scenario in order for a consequence to actually occur,” said Brandon Reyes, a Nuclear Regulatory Commission resident inspector at the Beaver Valley nuclear power plant in Shippingport.

Nuclear power is an industry that makes headlines more for the potential dangers it poses to the public than for the energy it affords their lifestyles. It receives a disproportionate amount of scrutiny and concern, say Reyes and his partner, senior inspector Jim Krafty.



It’s probably safe to say that if one facet of your IT operation needs to be as fail-safe as you can possibly make it, it’s your disaster recovery/business continuity (DR/BC) setup. Is that something you can reliably entrust to the cloud? It’s one thing to use a cloud-based service like, say, Dropbox to back up the files on your PC. But is the cloud the way to go to back up your entire IT operation?

I recently had the opportunity to address that question with Lynn LeBlanc, co-founder and CEO of HotLink, a hybrid IT management software provider in Santa Clara. In what turned out to be an enlightening email interview, I asked LeBlanc if there’s any legitimate argument against leveraging the public cloud for disaster recovery. She said there is none:

In fact, the public cloud lends itself very well to disaster recovery. It’s one of its best use cases. Amazon Web Services is the largest and most available infrastructure in the world, and its scale and economics allow IT teams to easily and cost-effectively protect their on-premise workloads from disasters. In fact, some solutions, such as HotLink DR Express, also enable business continuity for a full recovery in the public cloud at a price point that was inconceivable only a few years ago.
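As a minimal sketch of the mechanics behind that argument, the snippet below copies local backup images into Amazon S3 as an off-site copy using boto3; the bucket name and paths are placeholders, and a production DR setup would add encryption, lifecycle policies, cross-region replication and automated recovery testing. This illustrates the general approach only, not HotLink’s or AWS’s prescribed method:

```python
import boto3
from pathlib import Path

# Hypothetical names; substitute your own bucket and backup directory.
BUCKET = "example-dr-backups"
BACKUP_DIR = Path("/var/backups/images")

s3 = boto3.client("s3")

# Copy each local backup image to object storage as an off-site DR copy.
for image in BACKUP_DIR.glob("*.img"):
    key = f"dr/{image.name}"
    s3.upload_file(str(image), BUCKET, key)
    print(f"Uploaded {image} to s3://{BUCKET}/{key}")
```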



Risk management executives are charged with preparing companies for, and protecting them from, a broad array of emerging risks. Today, there is perhaps no threat that poses more danger than a cyberattack, which could result in a data breach or compromising sensitive information. Given the rapid increase in frequency and severity of high-profile cyberattacks in recent months, organizations must confront cybersecurity issues with greater focus, specificity and commitment.

Of note, an astounding 43% of U.S. companies experienced a data breach in the past year, according to the Ponemon Institute’s 2014 annual study on data breach preparedness – a 10% increase from 2013. These alarming trends are compelling companies to create programs centered on cyber risk awareness, education and preparedness. Such programs are vital to a company’s performance and growth; the 2014 Cost of Data Breach Study by IBM and the Ponemon Institute reveals that the average cost to a company from a data breach was about $3.5 million per breach in 2014 – a 15% increase over the previous year. A company’s intellectual property and customer data may also be compromised in a cyberattack, extending the potential damage beyond immediate financial losses.



(TNS) — California has received congressional funding to begin rolling out an earthquake early warning system next year, capping nearly a decade of planning, setbacks and technological breakthroughs, officials said Sunday.

Scientists have long planned to make such a system available to some schools, fire stations and private businesses in 2015, but their effort hinged on Congress providing $5 million. The system would give as much as a minute's warning before shaking is felt in metropolitan areas, a margin that experts say would improve the chances of survival.

The U.S. Senate approved the allocation this weekend as part of the $1.1-trillion spending package, passed by the House of Representatives on Thursday, that will fund most of the U.S. government through the rest of the fiscal year. Officials plan to announce the funding at a news conference at Caltech on Monday.
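To see why even a minute of warning is physically possible, here is a hedged, simplified calculation: sensors near the fault detect a quake within seconds, while the damaging S-waves travel at only a few kilometres per second, so warning time grows with distance from the epicenter. The wave speed and detection delay below are generic textbook-style assumptions, not parameters of the California system:

```python
# Simplified model: an alert is issued DETECTION_DELAY_S after rupture; damaging
# S-waves then take distance / S_WAVE_KM_S seconds to reach a given city.
S_WAVE_KM_S = 3.5        # assumed typical crustal S-wave speed
DETECTION_DELAY_S = 10.0 # assumed time to detect the quake and issue an alert

def warning_seconds(distance_km: float) -> float:
    """Approximate seconds of warning before strong shaking arrives."""
    return max(0.0, distance_km / S_WAVE_KM_S - DETECTION_DELAY_S)

for d in (20, 50, 100, 200):
    print(f"{d:>3} km from epicenter: ~{warning_seconds(d):.0f} s of warning")
```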



Tuesday, 16 December 2014 00:00

The Insider Risk of Temporary Employees

Almost all businesses need temporary workers at some time or another, but December is an especially popular time to bring in extra help.

Of course, if you are hiring temporary employees, you will likely need to set them up with access to your company network, maybe give them an email address, and possibly even authorize them to work with databases that contain sensitive information.

In fact, according to a new study by Avecto, 72 percent of temporary hires are given admin privileges on the company network. We already know that insider threats are a serious cybersecurity concern. When temporary employees are given network privileges, companies could be unwittingly setting themselves up for a serious security failure. As Paul Kenyon, EVP of global sales at Avecto, stated in a release:

Giving any worker admin rights is akin to giving them the keys to the kingdom. The insider threat has been well documented, but this research demonstrates that businesses clearly haven't got the message.
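In the spirit of those findings, here is a minimal, hypothetical sketch of a least-privilege audit: compare the accounts that actually hold admin rights against a short approved list and flag the rest, paying special attention to temporary hires. The account data is hard-coded for illustration; in practice it would come from a directory service or endpoint management tool:

```python
# Minimal least-privilege audit: flag admin accounts not on the approved list.
# In practice the account data would come from Active Directory or an
# endpoint management tool rather than a hard-coded dict.
admin_accounts = {
    "alice":   {"role": "sysadmin",  "temporary": False},
    "bob":     {"role": "developer", "temporary": False},
    "t-carol": {"role": "seasonal",  "temporary": True},
}
approved_admins = {"alice"}

for account, info in admin_accounts.items():
    if account not in approved_admins:
        note = " (temporary hire!)" if info["temporary"] else ""
        print(f"Review admin rights for '{account}' ({info['role']}){note}")
```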



Ever since the cloud burst onto the IT consciousness, the primary focus of most organizations has been to prepare for this new data paradigm. The thinking has been that the enterprise needs to be ready for the cloud or risk being left behind.

Lately, however, we’ve seen a subtle shift in attitude on the part of both the enterprise and the nascent cloud industry: It’s not the enterprise that needs to adapt to the cloud, but the cloud that needs to adapt to the enterprise. Across the board, from the large players like Amazon and Google to smaller ones like CloudSigma and DigitalOcean, the goal has shifted from providing the commodity resources that appeal to consumers to more specialized offerings that the enterprise values.

To be sure, there is no shortage of enterprise interest in the cloud already. According to IDG, nearly 70 percent of organizations today utilize cloud-based infrastructure or applications in some way, and IT spending on the cloud is currently averaging about 20 percent growth per year. The thing is, the vast majority of that activity consists of low-level workloads and bulk storage applications that generally go to the lowest bidder, which is usually one of the hyperscale players that can shave margins to the bone and still turn out a decent profit.



A Johns Hopkins University analysis has looked at how climate change will increase the risk of power outages for various major US metro areas.

Johns Hopkins engineers created a computer model to predict the increasing vulnerability of power grids in major coastal cities during hurricanes. By combining historical hurricane information with plausible scenarios for future storm behavior, the team could pinpoint which of 27 cities, from Texas to Maine, will become more susceptible to blackouts from future hurricanes.

Topping the list of cities most likely to see big increases in their power outage risk are New York City; Philadelphia; Jacksonville, Fla.; Virginia Beach, Va.; and Hartford, Conn. Cities at the bottom of the list, whose future risk of outages is unlikely to change dramatically, include Memphis, Dallas, Pittsburgh, Atlanta and Buffalo.

Seth Guikema, an associate professor in the university’s Department of Geography and Environmental Engineering, said his team’s analysis could help metropolitan areas better plan for climate change.



Tuesday, 16 December 2014 00:00

Shaping mobile security

Keith Bird shows how a new approach to mobile security can help organizations achieve the right balance of protection, mobility and productivity.

Most of us are familiar with the ‘triangle’ project management model, which highlights the constraints on delivering results in projects. The three corners of the triangle are fast, good and cheap, showing that in any given project not all three attributes can be optimised: one will inevitably be compromised to maximise the other two. You can have a good project delivered quickly, but not cheaply, and so on.

It’s traditionally been the same in IT security, especially when it comes to mobility. In this case, the three corners of the triangle are security, mobility and productivity. Usually, organizations have taken one of two approaches: either enabled mobility to boost productivity, with security inevitably being compromised; or they’ve tried to deliver more effective security for mobile fleets, compromising productivity.

Recent research shows that a majority of organizations have used the first approach, with mobility racing ahead of security. We (Check Point) surveyed over 700 IT professionals worldwide about mobility and mobile device usage in their organizations, and 72 percent said the number of personal mobile devices connecting to their organizations' networks had more than doubled in the past two years. 82 percent expected mobile security incidents to grow over the next 12 months, with higher costs of remediation.



Monday, 15 December 2014 00:00

How One CIO Rescued a Failed ERP Deployment

Imagine you’re a CIO, and you just hired on with a $600 million publicly traded technology company. You walk into work the first day on the job, and you find yourself in the throes of an ERP deployment that—well, let’s just say, it isn’t going so well. The previous CIO, who had been with the company for 10 years, left two months ago, so the hand-off wasn’t as smooth as it could have been. You know if you don’t act fast, the deployment is going to spin irreversibly out of control, which would put your CEO in the lousy position of having to explain to shareholders why a technology company failed so miserably with a technology implementation, and threw a boatload of money away in the process. Just try to imagine the pressure you’d be under.

Dave Brady doesn’t have to imagine it. He lived it.

Brady is the CIO at Datalink, a cloud services provider in Eden Prairie, Minn. When he joined the company in March 2013, that bleak scenario was precisely the one he faced. I recently had the opportunity to speak with him about it, and one of the things that struck me was the even-keeled manner in which he recounted the story. There was no embellishment, no woe-is-me vibe, no self-aggrandizement. If anything, he downplayed the whole mess. This is how he brought it up: 



Monday, 15 December 2014 00:00

The Path to Zero Ebola Cases

MONROVIA, Liberia — In my career as a medical doctor and global health policy maker, I have been in the middle of monumental struggles, including fights to make treatment accessible in the developing world for those living with H.I.V./AIDS as well as multi-drug resistant tuberculosis. But the Ebola epidemic is the worst I’ve ever seen.

More than 11 months into the crisis, thousands of people are dead and more than 17,000 have been infected. The virus kills quickly, spreads fear even faster, alters human relationships, devastates economies and threatens to cruelly extinguish hope in three fragile countries that were on the rebound after years of misery. No other modern epidemic has been so destructive so fast.

Monday, 15 December 2014 00:00

At Big Banks, a Lesson Not Learned

Are the colossal regulatory fines extracted from big banks today likely to deter their officials from violating the same rules tomorrow? Or are these billion-dollar settlements viewed simply as a cost of doing business, and not a very large one at that?

Judging from a regulatory action brought last week against 10 mostly large financial firms, the answers are “no” and “yes.”

The case, brought on Thursday by the Financial Industry Regulatory Authority, is striking. It takes us back to the financial scandal of the early 2000s involving corrupt Wall Street research.

Remember that mess? Firms whose analysts were supposed to be impartial instead used their bullish stock recommendations to attract investment-banking business. The losers in the situation were investors who didn’t know that the analysts were biased and who heeded their calls to buy the shares. In 2003, 10 firms and two analysts struck a settlement with regulators over these practices, paying $1.4 billion. That was real money back then, and it was hoped that such a hefty fine, along with new research rules, might keep Wall Street analysts conflict-free.



Is your company prepared for a cyber attack? This is a question that every director should be asking, and management should be providing regular updates to the Board on its level of preparedness. Cyber attacks are running rampant, and no company is exempt. If your company thinks it is, then brace yourself for a rude awakening.

Cyber attacks can cause serious damage to a company’s reputation, which says nothing of the financial impact that accompanies such an event. According to the National Association of Corporate Directors, if companies and governments are unable to effectively combat cyber threats, between $9 and $21 trillion of global economic value creation could be at risk.

Due to the growing volume and sophistication of cyber attacks, cybersecurity is an issue that every Board should be actively grappling with in order to mitigate the pitfalls associated with a breach. For companies and Boards, this is no time for complacency when it comes to cybersecurity. Just because a company is small doesn’t mean that it is insulated against an attack.



As an information technology (IT) leader dealing with the intricacies and complexities of enterprise technology every day, I can tell you this: it’s not the technology that is the toughest thing to change in IT. It’s the people. Here’s my personal take on 4 of the hardest IT transformations to implement – and how people make or break those changes.

1. Going global

There’s no question that transforming your company from regional-based systems to global systems is a big job. Global applications, global processes, global networks … that takes tech expertise to the nth degree. You need to talk with the regions, departments, and teams to ensure that you have all the business requirements clear and know how the end-to-end processes now need to work before you can consolidate disparate systems or stand up new ones.

That being said, chances are that you’ll find those separate regions have their own cultures, methodologies, goals, and initiatives … and they like it that way. It often works, and works well – for them. The most important thing when talking to these regions is to remember that people want to be heard and valued for their expertise. This doesn’t mean they’re absolutely tied to the old way of doing things. Most likely, they simply want to provide context so that their voices and inputs are considered in the new direction.

So as you transform your ERP apps to span the world, or plug in new SaaS apps to transform the user experience, you simultaneously need to build a culture that helps people move out of their regional silos. Hear what they have to say before you encourage them to embrace a new perspective. Having listened, you can then encourage them to look at what is best for the company and for the customer overall. Let them see the benefits that will come from globalization, such as the removal of inconsistencies or duplication. Acknowledge that they are giving up something when they lose their regional approach, but assure them that there are great answers to the ever present question “WIIFM”: “What’s in it for me?”



While most risk professionals are satisfied with their insurers and brokers, those from organizations with enterprise risk management (ERM) programs were the least content, according to the inaugural J.D. Power and Risk and Insurance Management Society (RIMS) 2014 Large Commercial Insurance Report.

The full report, based on findings of the J.D. Power 2014 Large Business Commercial Study, slated for release in February 2015, examines industry-level performance metrics among large business commercial insurers and brokers. The study, which interviewed almost 1,000 risk professionals, highlights best practices that are critical to satisfying them.



A recent court decision about the Target breach should have businesses of all sizes taking note.

A Minnesota judge found Target negligent in the breach and said it can be held responsible for financial damages. Infosecurity Magazine quoted the judge:

“Although the third-party hackers’ activities caused harm, Target played a key role in allowing the harm to occur,” Magnuson wrote in his ruling. “Indeed, Plaintiffs’ allegation that Target purposely disabled one of the security features that would have prevented the harm is itself sufficient to plead a direct negligence case.”



Friday, 12 December 2014 00:00

Good tidings we bring

The festive season is upon us and, assuming there are no postal strikes, Christmas Cards in their billions will be delivered to homes across the world spreading peace, joy and goodwill. Of course the Business Continuity Institute shares those same sentiments but, as has become tradition, we have decided not to send cards. Instead we will donate the money to those who need it more than we do.

This year, with the deadly virus Ebola high on our radar, we will be supporting Unicef in fighting this outbreak. As of the 1st December 2014, the total reported number of confirmed, probable, and suspected cases in the West African epidemic was 15,935 with 5,689 deaths. "Thousands of children are living through the deaths of their mother, father or family members from Ebola" said Manuel Fontaine, UNICEF Regional Director for West and Central Africa. "These children urgently need special attention and support; yet many of them feel unwanted and even abandoned. Orphans are usually taken in by a member of the extended family, but in some communities, the fear surrounding Ebola is becoming stronger than family ties."

As business continuity professionals, our role is to make sure that our organizations can continue to operate in the event of a 'disruption' but how would you prepare for a crisis of this magnitude? Can you prepare for a crisis of this magnitude? How do you continue to operate when death lurks around every corner and lives are consumed by fear? Fortunately most of us will never have to experience this, but we can play our part in helping those who do, which is why we are making this donation. If you would also like to make a donation to Unicef and help fight the spread of Ebola then please click here.

The BCI wishes all our Chapter Leaders, Forum Leaders, the BCI Board, the Global Membership Council and fellow business continuity practitioners around the world Season's Greetings and a healthy 2015.

Note that the BCI Central Office will be closed on the 25th and 26th December and the 1st January 2015, re-opening on Friday 2nd January 2015. On the days between Christmas and New Year, the office will be staffed between 10am and 3pm only (GMT).

Friday, 12 December 2014 00:00

Do You Have a Cybersecurity Problem?

When the topic of cybersecurity comes up at your organization, I’m guessing your executives immediately look to the CIO – yourself included. After all, when you’re talking about data, about information access and about the technology needed to keep both safe from unwanted activities, you assume IT has it covered. And your organization isn’t the only one operating under this assumption – far from it.

According to a report by Kroll and Compliance Week, three-quarters of Compliance Officers have no involvement in managing cybersecurity risk. Plus, 44 percent of respondents revealed that their Chief Compliance Officer is only given responsibility for privacy compliance and breach disclosure after a security incident has taken place and plays zero part in addressing the risks beforehand.

Here’s the problem with that approach: many breaches are preventable. According to the 2013 Verizon “Data Breach Investigations Report,” 78 percent of initial intrusions are rated as “low difficulty.” Now, don’t get me wrong: hackers are extremely crafty and are scheming new tactics as I write this. But part of the reason they are able to get their hands on data that isn’t theirs is because organizations simply aren’t prepared.



Friday, 12 December 2014 00:00

Security predictions for 2015

As the complexity and diversity of devices, platforms and modes of interaction advance, so do the associated risks from malicious individuals, criminal organisations and states that wish to exploit technology for their own purposes. Below, Michael Fimin, CEO at Netwrix, provides his major observations of IT security trends and the most crucial areas to keep watch over in 2015:

Many individuals and enterprises are already using cloud technologies to store sensitive information and perform business critical tasks. In response to security concerns, cloud technologies will continue to develop in 2015, focusing on improved data encryption; the ability to view audit trails for configuration management and secure access of data; and the development of security brokers for cloud access, allowing for user access control as a security enforcement point between a user and cloud service provider.
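As a small illustration of the client-side encryption idea, the sketch below encrypts data locally before it would be handed to a cloud provider, using the widely used Python cryptography package’s Fernet recipe; key management, which is the genuinely hard part, is deliberately left out:

```python
from cryptography.fernet import Fernet

# Encrypt data on the client before it is handed to a cloud storage provider.
# Key management (storage, rotation, escrow) is the hard part and is omitted.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"customer ledger, Q4 figures"
ciphertext = fernet.encrypt(plaintext)   # this is what would be uploaded
restored = fernet.decrypt(ciphertext)    # only possible with the local key

assert restored == plaintext
print(f"Uploading {len(ciphertext)} encrypted bytes; the key never leaves the client.")
```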

As the adoption and standardisation of a few select mobile OS platforms grows, the opportunity for attack also increases. We can expect to see further growth in smartphone malware, increases in mobile phishing attacks and fake apps making their way into app stores. Targeted attacks on mobile payment technologies can also be expected. In response, 2015 will see various solutions introduced to improve mobile protection, including the development of patch management across multiple devices and platforms, the blocking of apps from unknown sources and anti-malware protection.

Software defined data centre
‘Software defined’ usually refers to the decoupling and abstracting of infrastructure elements followed by a centralised control. Software defined networking (SDN) and software defined storage (SDS) are clearly trending and we can expect this to expand in 2015. But while these modular software defined infrastructures improve operational efficiency, they also create new security risks. In particular, centralised controllers can become a single point of attack. While the adoption of this approach is not widespread enough to become a common target for attacks, as more companies run SDN and SDS pilots in 2015, we expect their security concerns will be raised. This will result in more of a focus on security from manufacturers, as well as new solutions from third party vendors.

Internet of things
The Internet of things (IoT) universe is expanding with a growing diversity of devices connecting to the network and/or holding sensitive data - from smart TVs and Wi-Fi-connected light bulbs to complex industrial operational technology systems.

With the IoT likely to play a more significant role in 2015 and beyond, devices and systems require proper management, as well as security policies and provisions. While the IoT security ecosystem has not yet developed, we do not expect attacks on the IoT to become widespread in 2015.

Most attacks are likely to be ‘whitehat’ hacks to report vulnerabilities and proof of concept exploits. That being said, sophisticated targeted attacks may go beyond traditional networks and PCs.

Next generation security platforms
In 2015 and beyond, we can expect to see more vendors in the information security industry talking about integration, security analytics and the leveraging of big data. Security analytics platforms have to take into account more internal data sources as well as the external feeds, such as online reputation services and third party threat intelligence feeds. The role of context and risk assessment will also become more important. The focus of defence systems becomes more about minimising attack surfaces, isolating and segmenting the infrastructure to reduce potential damage and identifying the most business critical components to protect.

If previous years are any guide, new security challenges will continue to arise, so IT professionals should arm themselves with mission critical information and be prepared to defend against them.


Friday, 12 December 2014 00:00

Data Analytics as a Risk Management Strategy

In our increasingly competitive business environment, companies everywhere are looking for the next new thing to give them a competitive edge. But perhaps the next new thing is applying new techniques and capabilities to existing concepts such as risk management. The exponential growth of data as well as recent technologies and techniques for managing and analyzing data create more opportunities.

Enterprise risk management can encompass so much more than merely making sure your business has purchased the right types and amounts of insurance. With the tools now available, businesses can quantify and model the risks they face to enable smarter mitigation strategies and better strategic decisions.

The discipline of risk management in general and the increasingly popular field of enterprise risk management have been around for years. But several recent trends and developments have increased the ability to execute on the concept of enterprise risk management.
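As one illustration of what “quantify and model” can look like in practice, here is a toy Monte Carlo sketch that combines an assumed event frequency with an assumed loss-severity distribution to estimate annual loss percentiles; every parameter is invented for illustration, and a real ERM model would be calibrated to the organization’s own data:

```python
import numpy as np

# Toy model of one risk: annual event count is Poisson, loss per event is
# lognormal. All parameters are invented purely for illustration.
rng = np.random.default_rng(42)
N_YEARS = 10_000
MEAN_EVENTS_PER_YEAR = 2.0
SEVERITY_MU, SEVERITY_SIGMA = 11.0, 1.2   # lognormal parameters (~$60k median loss)

annual_losses = np.array([
    rng.lognormal(SEVERITY_MU, SEVERITY_SIGMA, rng.poisson(MEAN_EVENTS_PER_YEAR)).sum()
    for _ in range(N_YEARS)
])

print(f"Expected annual loss: ~${annual_losses.mean():,.0f}")
print(f"95th percentile (a planning figure): ~${np.percentile(annual_losses, 95):,.0f}")
```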



It’s that time of year again… most people are slowing down for the Christmas break. The raft of out-of-office replies from the second week in December seems to increase by the hour as people begin to use up the last dregs of annual leave and head out into the busy shops. Others are using this time of year as an opportunity to reflect on the previous 12 months. As it’s BlueyedBC’s 1st birthday, I thought it was only right to get all reflective on you guys!

The Birth of BlueyedBC

Okay, so in the autumn of 2013, professionally, I was not in a very good place at all. I was unqualified, on to my 3rd BC job in less than 12 months and deeply lacking in confidence. My peer group networks were virtually non-existent because I hadn’t built them up yet, and if I’m being honest I was quite angry and frustrated with the way things were going.

So I decided in my wisdom to pick up a pen and paper and write some of my thoughts down. It started by blaming virtually everyone else except myself for the recent challenges in my career. Once I started writing I found that I couldn’t stop… venting my frustrations became like an addiction to me. I’d had several difficult years of trying to make it as a professional after university, with all this pent-up feeling inside of me, and I was rapidly running out of ink! It wasn’t long before my scribbles became small chapters in their own right, and this is when I submitted my first (rather unfair) scathing review of my experience in the industry to Continuity Central, who kindly released it to the BC world.



Casual spectators of business behavior can't help being jaded; every day they see news stories about corporate fraud, security breaches, delayed safety recalls, and other sorts of general malfeasance. But what they don't see is the renewed time and investment companies around the world are putting  toward implementing and reporting on responsible behavior (this less sensational side of the story gets far less coverage).

This week, Nick Hayes and I published an exciting new report, Meet Customers' Demands For Corporate Responsibility, which looks at the corporate responsibility reporting habits of the world's largest companies. While it's easy to think that the business community is as dirty as ever, we actually found a substantial increase over the past 6 years in what these companies included in their CSR and sustainability reports.



This last week has been quite the week for pedestrian and vehicle collisions and accidents. We even had a few people die this week due to such incidents. Yes, I feel for the friends and families of those who have been impacted, yet what struck me most about each situation was the communication messages being conveyed.

It’s easy to blame one side of the situation, and in many cases that might be reality. But just like in BCM and DR, we must convey a message that everyone can understand. The communications have to be straight to the point and yet articulate enough for people from any walk of life to understand the message – and to retain it. They can’t just be aimed at one side of the situation. Here’s what I mean.

Immediately after the first accident, the police and responding Emergency Medical Services (EMS) personnel were placing the blame for the traffic incidents on the shoulders of those driving; no responsibility was placed on the side of the pedestrian. I found this odd because it was clear in some of the situations that the pedestrian wasn’t following the rules set out for them, and the reminder about those rules wasn’t coming from the police or EMS; it was directed only at the vehicle operators.



There’s an interesting moment in a report on the current state of cyber security leadership from International Business Machines Corp (IBM).

For those who haven’t seen it yet, the report identifies growing concerns over cyber security, with almost 60 percent of Chief Information Security Officers (CISOs) saying the sophistication of attackers is outstripping the sophistication of their organizations’ defenses.

But as security leaders and their organizations attempt to fight what many feel is a losing battle against hackers and other cyber criminals, there is growing awareness that greater collaboration is necessary.

As IBM puts it: “Protection through isolation is less and less realistic in today’s world.”

Consider this: some 62 percent of security leaders strongly agreed that the risk level to their organization was increasing due to the number of interactions and connections with customers, suppliers and partners.



By Gail Dutton

Virtual reality (VR) is finally on the verge of becoming practical. With new, lightweight headsets that provide immediate response times coming onto the market, VR advocates say almost every industry could benefit by immersing its employees or clients in virtual worlds for some activities.

Real estate sales is a prime example. Sacramento broke ground in October for the Sacramento Entertainment and Sports Center (ESC), replacing the Sacramento Kings’ Sleep Train arena. With completion still two years away, selling the high-end Kings suites normally would rely on architectural renderings and floor plans. Instead, potential buyers can strap on a VR headset and tour a realistic virtual model that has the same look and feel as the finished, amenity-rich suites. Cha-ching!

From a participant’s perspective, being in a virtual world is like being in a real world. Perspective is determined by position trackers linked to the goggles, so when you turn around, you see what’s behind you. The result is a very realistic, immersive experience.



Chuck Wallace is the deputy director of emergency management for Grays Harbor County, Wash., a Pacific Ocean-facing county. He is a 31-year veteran of the fire service, retiring from the Philadelphia Fire Department in 2007. In addition to his duties in emergency management, he also serves as the fire chief at Grays Harbor County Fire Protection District #11 and as an elected fire commissioner for Grays Harbor County Fire Protection District #11. In addition to his county duties, he serves on a number of regional emergency management committees. Currently attending Evergreen State College, Wallace expects to graduate in June 2015 with a master’s degree in public administration.

Wallace participated in an interview with Emergency Management to share the challenges and success he has had in promoting tsunami mitigation measures in his county. Wallace also addresses the county’s vertical evacuation, tsunami-engineered, safe haven building, which he says is the first in North America.



Thursday, 11 December 2014 00:00


Resiliency is about bouncing back from something. It doesn’t always mean a catastrophe; it can also mean recovering from the simple annoyances of life. Most people are resilient, but they have different levels, styles and speeds of bounce-back. Think of it as elastic: it can stretch but comes back to essentially its original shape. When it doesn’t, you know it is time to do something about it. Research shows that resiliency is learned, so you can learn and do more to become more resilient. I’ll be sharing MUCH more about this as the new year unfolds. My new website will reflect this, and I have several fun projects in the works for 2015. In the meantime, practice managing your daily stressors by becoming a Weeble®… you know… they “wobble but they don’t fall down.”


Online giant Google raised eyebrows recently when it stated that it was starting up two billion containers a week in its computing infrastructure. But the type of containers the company was talking about were logical instances inside its computers, not the mammoth steel boxes that are shipped by truck, rail and ship. Google’s containers are its solution to an issue with conventional server virtualisation, which involves more overhead than the provider is prepared to accept. A new development in IT, this ‘lightweight virtualisation’ may be attractive to other organisations too. Yet, in certain circumstances, a real steel container may also hold the solution for business continuity.



It’s been said that the cloud represents a fundamental shift in the relationship between users, the enterprise, and the data with which they work.

A key facet of this change is the ability to spin up virtual and even physical data center environments on a whim, which leads to the interesting notion of how these resources are developed and deployed. It is reasonable to assume that with the cloud as the new data center, traditional resources will no longer be purchased and provisioned on a piecemeal basis. Rather, entire data centers will be implemented all at once. This is the same dynamic behind today’s hardware deployment, where whole servers or PCs are implemented, rather than individual boards, fans and chip sets.

The vendor community, in fact, has been prepping itself for this reality for some time. Nearly all of the major players have offered turnkey solutions for decades, but these usually represent pre-integrated components from their various product lines. Lately, however, vendors have been teaming up with newly minted software-defined networking (SDN) and other platforms in order to provide end-to-end data center products that do away with systems integration, testing and other complex processes.
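
The idea of spinning up data center environments “on a whim” is easier to picture with a small example. The sketch below uses the AWS boto3 SDK purely as a stand-in (the article names no provider or tool), and the image ID is a placeholder; the point is that a whole small environment can be declared and launched in a single API call rather than provisioned piecemeal.

    # Illustrative sketch of spinning up compute on demand rather than
    # provisioning hardware piecemeal. boto3 and AWS EC2 are used purely as
    # an example; the article names no provider, and the AMI ID below is a
    # placeholder, not a real image.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder image ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=3,                        # a small environment in one call
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "environment", "Value": "virtual-data-center-demo"}],
        }],
    )
    print([i.id for i in instances])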



Retail companies have Big Data capabilities, but they’re not sure what to do with them. It’s just too… big, according to a special report released today by Brick Meets Clicks (available for free download with registration).

“Discussions about Big Data and retail often bog down in the vastness of its potential, leaving retailers with only the vaguest guidance as they try to figure out where and how to invest in this powerful tool,” states the report.

That seems to be a common theme with Big Data right now. As I shared in my previous post on analytics, Dr. Shawna Thayer talked about executive paralysis with Big Data during the recent Data Strategy Symposium.



Wednesday, 10 December 2014 00:00

Bitcoin, the Solution to Consumer Data Protection

Despite all the news headlines around data breaches, hackers and identity theft, it is a little-known fact that since 2013 over 1 billion consumer records have been stolen by hackers. The estimated cost of this data theft is a staggering $5 billion a year, which inevitably gets passed down to consumers and merchants in the form of higher prices and fees. No doubt, there is a global data security crisis, indeed a war being waged, that is getting harder and harder for the good guys to win.

The hackers only have to succeed a small percentage of the time to make a very big dent in our society. As a result, we are in an era where securing personal information requires more and more complex security and surveillance by merchants, banks and government agencies. The system of credit card processing introduced in the 1940s and 1950s and perfected in the 1970s and 1980s was just never designed for the 21st century, a century in which the Internet, the open source community and the dark web accelerate technology innovation at a pace far more rapid than slow-moving merchant and banking infrastructure can keep up with. There is a need to address this global data security crisis, and this requires us to fundamentally rethink what it means for a consumer to spend money.



The ISP and hosting sectors were the most targeted industries of cyber-crime in 2014, and the trend is likely to continue in 2015. That’s according to Radware. The findings from its fourth annual ‘Global application and security report’, which surveyed 330 companies globally on cyber attacks on networks and applications, act as a strong warning to companies that depend on a hosting provider or ISP to ensure they do not become a ‘cyber-domino’ as a result of the security failings of their suppliers.

As part of the report, Radware has published a ‘Ring of Fire’, which tracks cyber attacks and predicts the likelihood of attack on major industries. In the last 12 months, ISPs have moved up the risk rankings to become some of the most at-risk companies, joining the gambling sector and government at the centre of the ‘Ring of Fire’. Hosting companies have jumped from ‘low risk’ on the outside of the ring to just outside the ‘high risk’ centre.

Adrian Crawley, UK & Ireland regional director for Radware, says: “The news presents a stark reality for thousands of British businesses that rely heavily on ISP and hosting provision to host their website and network operations. If companies fail to ensure their network security planning includes that of their ISP and hosting partners then there’s no doubt that 2015 will see a great number of ‘cyber-dominoes’ fall.”



Patrick Alcantara explains why the BCI sees organizational resilience as an important framework that brings together various ‘protective disciplines’ and provides a strategic goal for organizations.

Resilience is fast becoming an industry buzzword which reveals underlying changes in the way practitioners view business continuity and other ‘protective disciplines’ such as emergency planning, risk management and cyber/physical security. From the development of clear boundaries which separate disciplines in the last decade or so, work is now underway to bring these fields together into a framework of organizational resilience. However, more than just thinking about it merely as the sum of ‘protective disciplines’, organizational resilience is thought of as a strategic goal that must be driven by top management. The quality of resilience is rooted in a series of capabilities that allow organizations to get through bad times (continuity) and thrive in good/changing times (adaptability). Organizational resilience involves a coherent approach ‘from the boardroom to the storeroom’ that requires strong governance and accountability among other ‘soft’ factors.

In the UK, this development in thinking culminates in the recent launch of the new British Standard 65000 (BS 65000), which outlines the principles of and provides guidance on organizational resilience. This parallels the development of global guidance on organizational resilience, ISO 22316, which is due in April 2017.



Wednesday, 10 December 2014 00:00

Will resilience replace risk and continuity?

By David Evans

Is the world of risk, continuity and crisis about to change as new concepts and approaches linked to resilience gain momentum, or are we seeking solutions to the same old stories, repackaged in a different language?

Protecting organizations is big business, or at least it should be: no one wants to fail, and few if any executives wish to face the negative impact of serious disruption or crises. In general, crises are expensive for organizations to handle, derail the best-laid plans and generally threaten the reputation of the top people in the business. Added to which, there is a mix of guidance, regulatory requirements, employee concerns and shareholder expectations to address.



Making the case that the time has come for building a more efficient way to manage data center environments, Mesosphere today announced what it is calling the first data center operating system (DCOS) that turns everything in the data center into a shared programmable resource.

Mesosphere CEO Florian Leibert says Mesosphere DCOS is based on an open source distributed Apache Mesos kernel project that turns virtual and physical IT infrastructure into a common pool of resources. At present, Mesosphere DCOS can be deployed on Red Hat, CentOS, Ubuntu and CoreOS distributions of Linux running on bare-metal servers or VMware or KVM virtual machine environments running on premise or in Amazon Web Services, Google, DigitalOcean, Microsoft, Rackspace and VMware cloud computing environments.

Leibert says it takes too much effort these days to deploy distributed computing applications. By abstracting away the underlying physical and virtual infrastructure, Mesosphere presents services and application programming interfaces (APIs) that ultimately serve to dramatically increase overall utilization of IT infrastructure.
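
In the open source Mesos ecosystem that Mesosphere builds on, such APIs typically look like the Marathon scheduler's REST interface: you describe a service and how much of the pooled resources it needs, and the scheduler decides where it runs. The sketch below is a hedged illustration of that style of call; the endpoint address is a placeholder and the app definition is invented for the example.

    # Hedged sketch: deploying a long-running service against a pooled cluster
    # through a Marathon-style REST API (part of the open source Mesos
    # ecosystem). The endpoint URL is a placeholder, not a real host.
    import json
    import requests

    app_definition = {
        "id": "/demo/web",
        "cmd": "python3 -m http.server 8080",
        "cpus": 0.25,       # fractional share of the pooled CPU resources
        "mem": 128,         # MB of the pooled memory
        "instances": 3,     # the scheduler places these anywhere in the pool
    }

    resp = requests.post(
        "http://marathon.example.local:8080/v2/apps",   # placeholder address
        data=json.dumps(app_definition),
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())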



Building on previous suggestions, including the establishment of two specialized Ebola treatment centers, a task force on Thursday released its full report on how the state could better handle an outbreak of an infectious disease.

The Texas Task Force on Infectious Disease Preparedness and Response, created in October by Gov. Rick Perry after a man was diagnosed with Ebola in Dallas, called for new education efforts to help health care providers be better prepared to identify new diseases.

The panel’s 174-page report also recommended the creation of guidelines for handling pets that may have been exposed to infectious diseases, a mobile app to help monitor potentially exposed individuals, and the establishment of a treatment facility specifically for children and infants.

"The recommendations contained in this report represent a major step forward in protecting the people of Texas in the event of an outbreak of Ebola or other virulent disease," Perry said in a statement.



(TNS) — The Federal Emergency Management Agency unveiled a broad series of reforms Friday to address concerns contractors conspired to underpay flood insurance settlements to homeowners after superstorm Sandy.

In a strongly worded letter to private companies that work for the government-run National Flood Insurance Program, FEMA administrator W. Craig Fugate said he had "deep concern" over allegations engineers falsified documents to deny claims.

"We must do better," Fugate wrote. "Policyholders deserve to be paid for every dollar of their covered flood loss."

The reforms include:



Knowledge Vault today announced the general availability of its namesake analytics-as-a-service platform that provides more insight into how documents are being consumed and shared beyond anything IT organizations could hope to accomplish on premise.

Knowledge Vault CEO Christian Ehrenthal says that starting with Microsoft Office 365 deployments, IT organizations can use Knowledge Vault to discover and audit content and apply governance policies to documents stored in the cloud. Next up, says Ehrenthal, will be support for Dropbox, Microsoft OneDrive and Box.net.

Knowledge Vault itself makes use of a Big Data analytics engine based on Hadoop that runs on Microsoft Azure to analyze the content of documents that it accesses via the application programming interfaces (APIs) that various cloud service providers expose. That data then gets stored on top of Hadoop as a Knowledge Vault object.



It’s that time of year—security experts are looking ahead to the coming months and discussing their predictions. I have seen a number of predictions that I believe deserve further discussion, so over the month of December, I’ll be looking at some of those issues more in depth. Today, I’m going to take a look at cloud security.

A recent IBM study found that 75 percent of security decision makers expect their cloud security budgets to increase in the next five years. At the same time, according to MSP Mentor, 86 percent of CISOs say their companies are adopting cloud computing. So it makes sense that there will also be a greater interest in funding cloud security efforts.

But it isn’t just a matter of securing the data in the cloud. The cloud is also going to have a much stronger influence on the way we approach overall security practices, says Paul Lipman, CEO of iSheriff. That’s because the cloud is changing the entire business computing structure, which will cause it to have a ripple effect into security concerns. In an email conversation, Lipman provided his five predictions for the future of cloud security. In a nutshell, they are:



LOS ANGELES — In the most sweeping campaign directed at earthquake safety ever attempted in California, Los Angeles officials proposed Monday to require the owners of thousands of small, wooden apartment buildings and big concrete offices to invest millions of dollars in strengthening them to guard against catastrophic damage in a powerful earthquake.

The mandate to retrofit buildings was part of a raft of proposals made by Mayor Eric M. Garcetti to deal with what is widely viewed as a longtime failure of Southern California to prepare for a damaging earthquake. In a report issued Monday, Mr. Garcetti also proposed that the city take steps to create a new firefighting water supply system, using ocean and waste water, to help battle as many as 1,500 fires that could break out in a major earthquake. Such a temblor is likely to leave large parts of this region without water or power.

The retrofitting requirements must be approved by the City Council, and would have to be paid for by the building owners, with the costs presumably passed on to tenants and renters. The costs could be significant: $5,000 per unit in vulnerable wooden buildings and $15 per square foot for office buildings, Mr. Garcetti said.



Would you put all your investment into shares in just one company? Or into just one piece of property? Or even just into gold? While people are free to put their money where they please, many financial investors have identified diversification of investment as a better solution. Similarly, in business continuity the right mix of safer measures with lower returns and more innovative strategies with higher returns can optimise resilience without requiring unduly heavy expenditure (which in itself could threaten business continuity). This portfolio approach requires a certain attitude and tools, but can pay dividends.
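
As a back-of-the-envelope illustration of the portfolio idea, the sketch below weighs a mix of continuity measures by cost against the annual loss each is expected to avoid, and picks a mix within a fixed budget. All of the figures and measure names are invented for the example; a real analysis would be considerably more sophisticated.

    # Toy illustration of a "portfolio" view of continuity spending.
    # All figures are invented for illustration; they are not from the article.

    measures = [
        # (name, annual cost, expected annual loss avoided)
        ("Offsite backups (safe, modest return)", 20_000, 60_000),
        ("Second data centre (safe, costly)", 150_000, 200_000),
        ("Cloud failover pilot (innovative, higher return)", 40_000, 130_000),
    ]

    budget = 200_000
    chosen, spend, benefit = [], 0, 0

    # Greedy selection by benefit-to-cost ratio: a crude stand-in for the
    # trade-off a real portfolio analysis would formalise.
    for name, cost, avoided in sorted(measures, key=lambda m: m[2] / m[1], reverse=True):
        if spend + cost <= budget:
            chosen.append(name)
            spend += cost
            benefit += avoided

    print("Chosen mix:", chosen)
    print("Total spend:", spend, "Expected loss avoided:", benefit)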




Earlier this year, Steelhenge launched the Crisis Management Survey 2014 with the aim of developing a better picture of how organizations are building their preparedness for a crisis. Questions ranged from strategic ownership of the crisis management capability through plan development and training to the tools used to support the crisis management team. Respondents were also asked about the challenges they face in creating a crisis management capability and how they rate their overall level of preparedness.

One of the most striking results from the survey, published in 'Preparing for crisis: safeguarding your future', was that less than half of the respondents rated the overall crisis preparedness of their organization as ‘very well prepared’ with 13% responding that they were either ‘not well prepared’ or ‘not prepared at all’. The greatest challenges to crisis preparedness cited by the survey respondents were lack of budget, lack of senior management buy-in, time constraints, operational issues taking precedence and employees not seeing crisis preparedness activities as a priority.

The crisis communications function was found to be lagging behind when it comes to crisis preparedness: while 84% of organizations surveyed had a documented Crisis Management Plan, less than a quarter of respondents recorded that they have a documented plan for how they will communicate in a crisis, and 41% responded that they do not have guidance on handling social media in a crisis.

In the Business Continuity Institute's 2014 Horizon Scan report, the influence of social media came second in the list of emerging trends or uncertainties with 63% of respondents to the survey identifying it as something to look out for.

Other key themes to emerge from the Crisis Management Survey include:

  • Embedding – Less than half of the respondents had a programme of regular reviews, training and exercising that would help embed crisis management within an organization and create a genuinely sustainable crisis management capability.
  • Engagement – In the face of high profile crises befalling major organizations year after year, 29% of organizations taking part in the survey still waited for the brutal experience of a crisis before creating a plan. Crisis preparedness is still a work in progress for many, particularly with regard to crisis communications planning.
  • Ownership – Ownership of crisis management at the strategic level amongst the survey population lay predominantly with the Chief Executive. However, responsibility for day-to-day management of the crisis management capability was spread widely across a broad range of functional roles with business continuity/disaster recovery and incident/emergency management featuring most with 50% between them.

The report concludes that the fact that a large number of organizations still do not have plans, and that such a large percentage of organizations do not run a programme of development to maintain and improve their crisis management capability, suggests that too many organizations are not yet taking crisis management seriously enough. Any doubters as to the value of crisis management only have to speak to organizations that have suffered a crisis. As one survey respondent said: "We have suffered a number of potential crisis situations including an actual terrorist attack. Good planning and preparation has stood us in good stead."

Early data suggests that the current 2014-2015 flu season could be severe, with related human resource business continuity issues for organizations.

The Centers for Disease Control and Prevention (CDC) urges immediate vaccination for anyone still unvaccinated this season and recommends prompt treatment with antiviral drugs for people at high risk of complications who develop flu.

So far this year, seasonal influenza A H3N2 viruses have been most common. There often are more severe flu illnesses, hospitalizations, and deaths during seasons when these viruses predominate. For example, H3N2 viruses were predominant during the 2012-2013, 2007-2008, and 2003-2004 seasons, the three seasons with the highest mortality levels in the past decade. All were characterized as ‘moderately severe.’



The Australian Prudential Regulation Authority (APRA) has released the final version of its new risk management standard, and associated guidance.

APRA consulted extensively during 2013 and 2014 on both the risk management standard and prudential practice guide. The package released includes final versions of Prudential Standard CPS 220 Risk Management (CPS 220) and Prudential Practice Guide CPG 220 Risk Management (CPG 220) as well as a letter to industry summarising APRA’s response to submissions on the most recent consultation, which commenced on 7th October 2014. The letter sets out a small number of minor refinements that were made to the prudential practice guide as a result of the submissions received; there were no further changes to the prudential standard.

The new requirements are applicable to authorised deposit-taking institutions (ADIs), general insurers and life companies, and authorised non-operating holding companies (authorised NOHCs), and take effect from 1st January 2015.

APRA Chairman Wayne Byres said the new standard harmonises risk management requirements across the banking and insurance industries, bringing together a range of risk management requirements into a single standard.

‘The new standard, together with the new practice guide, reflect APRA’s heightened expectations with regards to risk management, consistent with the increased emphasis that has been placed on sound governance and robust risk management practices in response to the global financial crisis.’

More details here.

UK organizations are struggling to stay on top of costly technology risks, according to a new report by KPMG. The Technology Risk Radar, which tracks the major technology incidents faced by businesses and public sector bodies, reveals the cost of IT failures over the last 12 months. It found that, on average, employers had to pay an unplanned £410,000 for each technology-related problem they faced. The report also reveals that, on average, 776,000 individuals were affected and around 4 million bank and credit card accounts were compromised by each IT failure.

Incidents caused by ‘avoidable’ problems such as software coding errors or failed IT changes accounted for over 50 percent of the IT incidents reported over the past year. Of these, 7.3 percent of reported events were the fault of human error: a figure which shows that basic investments in training are being ignored at the employers’ cost. Further, while data loss related incidents continued to be a major problem for all industries, a significant number of those (16 percent) were unintentional.

KPMG’s Tech Risk Radar reveals that customer-facing organizations are quickly realising the true cost of systems failures if they are left unchecked. For instance, a utility company faced a £10 million fine when technical glitches during the transfer to a new billing system meant customers did not receive bills for months and were then sent inaccurate payment demands or refused prompt refunds when errors were eventually acknowledged.

Commenting on the findings of the Technology Risk Radar report, Jon Dowie, Partner in KPMG’s Technology Risk practice, said: “Technology is no longer a function within a business which operates largely in isolation. It is at the heart of everything a company does and, when it goes wrong, it affects an organization’s bottom line, its relationship with customers and its wider reputation.

“Investment in technology will continue to rise as businesses embrace digital and other opportunities, but this needs to be matched by investments in assessing, managing and monitoring the associated risks. At a time when even our regulators have shown themselves to be vulnerable to technology risk, no one can afford to be complacent.”

With financial services under enormous pressure to maintain highly secure technology infrastructure, KPMG predicts IT complexity will continue to be the single biggest risk to financial services organizations in the coming year. This is closely followed by ineffective governance, risk and non-compliance with regulations. Security risks – such as cyber-crime and unauthorised access - are rated fifth.

Jon Dowie adds: “With ever greater complexity in IT systems – not to mention the challenge of implementing IT transformational change – companies are running to stand still in managing their IT risks. The cost of failure is all too clear. It is crucial for both public and private sector organizations to understand the risks associated with IT and how they can be managed, mitigated and avoided.”


Tuesday, 09 December 2014 00:00

App risk management advice

Espion is calling on organizations not to overlook the risks posed by workers increasingly packing their own clouds and apps into their virtual briefcase without consulting their IT department.

The growth of ‘shadow IT products’ (non-approved SaaS applications) has skyrocketed in recent years, with the latest research revealing that 81 percent of enterprise employees admit to using unauthorised applications. The scale of this was also highlighted at Espion’s recent 101 Series on App Security, with attendees agreeing it is a growing concern in their organization.

Without doubt apps and cloud solutions such as Basecamp, Salesforce, Dropbox and Google Apps are great for productivity and flexible working. However, organizations need to be highly cognisant of the potential downside these time-saving, skill-boosting, collaboration-enhancing, process-streamlining (and more) apps and software pose to corporate information.



By Adam Wren

When it comes to the workplace, what do millennials want? If you want your company to thrive, that’s a question that you should be asking on a regular basis to attract the future of your firm.

The good news: You don’t have to be the next Apple, Google, Facebook or even a cool startup to get millennial talent flocking to your business. Money isn’t the only attraction, either.

To succeed as an employer, you’ll need to hire millennial workers. Surveys show they are bright, innovative, talented and want to make a difference. But there’s also the sheer demographic reality that it will soon be hard not to hire millennials.



Earlier today, we published a report that dissects global risk perceptions of business and technology management leaders. One of the most eye-popping observations from our analysis is how customer obsession dramatically alters the risk mindset of business decision-makers.

Out of seven strategic initiatives -- including “grow revenues,” “reduce costs” and “better comply with regulations” -- “improve the experience of our customers” is the most frequently cited priority for business and IT decision-makers over the next 12 months. When you compare those “customer-obsessed” decision-makers (i.e. those who believe customer experience is a critical priority) with others who view customer experience as a lower priority, drastic differences appear in how they view, prioritize and manage risk.

Customer obsession has the following effects on business decision-makers’ risk perceptions:



The hyperscale data center industry has made no secret of its desire to leverage renewable energy to the greatest extent possible. When you start measuring density in megawatts, any solution that helps cut the power bill is welcome.

Lately, much of the activity has centered on wind, with top-tier data producers signing long-term agreements with wind farms near their newest plants, or in some cases building capacity on-site.

Google, for example, recently teamed up with Dutch utility Eneco to provide wind energy to the company’s new facility in Eemshaven in the Netherlands. The goal is to run the plant on 100 percent wind that is sourced from Eneco’s farm in nearby Delfzijl, and in fact, the data center is expected to draw the full output of the facility for the 10-year lifespan of the contract. The data center is expected to go on-line in mid-2016.



On October 20, 2014, Wyndham Worldwide Corporation won dismissal of a shareholder derivative suit seeking damages arising out of three data breaches that occurred between 2008 and 2010.  Dennis Palkon, et al. v. Stephen P. Holmes, et al., Case No. 2:14-cv-01234 (D. N.J. Oct. 20, 2014). Wyndham prevailed, but the litigation carries key cybersecurity warnings for officers and directors.

Businesses suffering data breaches end up litigating on multiple fronts. Wyndham had to defend itself against the shareholder derivative action and against a Federal Trade Commission action.  In other data breach-related cases, the Securities & Exchange Commission, the Department of Justice and state regulatory agencies have asserted jurisdiction. Regulatory actions only compound exposure from private civil actions.

Officers and directors play a key role in cybersecurity. Wyndham’s directors supported the company as it defended its conduct and procedures before the FTC. However, they also had to satisfy their fiduciary duties to assess whether the breaches were the result of negligent or reckless conduct by Wyndham’s officers, which may have required the company to file its own civil action against its officers. It is not difficult to imagine situations in which a Board of Directors determines that the company’s officers acted wrongfully or negligently and end up with a choice between suing the company’s own officers for their conduct or foregoing such a lawsuit and facing derivative litigation from shareholders.



Monday, 08 December 2014 00:00

The future of business continuity


The sun is now setting on 2014 and we can look forward to welcoming in the new year. 2014 was the 20th anniversary of the Business Continuity Institute but the commemorations were never about reflecting back on the previous 20 years, but rather looking to the future and the new horizon that awaits us all. The BCI’s outgoing Chairman – Steve Mellish FBCI – referred to this as his 2020 Mission.

Who better to write about where the industry is heading than those who will perhaps be doing most to shape that future, those who are just starting out in their careers? '20 in their 20s' is a series of essays written by business continuity professionals from across the world who are all still aged in their twenties, so all still with a long career ahead of them.

This publication sets out what these twenty young professionals feel are the challenges that the business continuity industry will face in the future. Some relate to the particular industry they work in and some relate to the region they are based in, but together they all give an idea of what may lie ahead.

To read '20 in their 20s: The future of business continuity', click here.

Keith Fehr wants to be ready for anything when the Super Bowl comes to the University of Phoenix Stadium in February. “We trained on structural collapse, on foodborne illness. We practiced a biological agent release, a chemical warfare release, explosions, multi-vehicle accidents,” he said.

As director of emergency management for the Maricopa Integrated Health System, an Arizona system that encompasses an adult trauma center, pediatric trauma, a regional burn center and two behavioral health facilities, Fehr said he has his bases covered. “The big game may never see a chemical weapons attack,” he said, “but you always want to push to the point of failure, to see where you could do better.”

Fehr got his right-to-the-edge training this fall at the Center for Domestic Preparedness (CDP), a FEMA teaching facility where some 14,000 first responders and emergency managers come each year to drill, pairing classroom time with intensely realistic exercises. Walking wounded stagger through a mock downtown. Radiation victims crowd the halls of a full-scale hospital. Hazmat teams deal with actual anthrax and ricin. It’s a hardcore program, with FEMA picking up all participants’ costs.



Do you remember the scene from The Empire Strikes Back where the Millennium Falcon is trying to escape an Imperial Star Destroyer? Han Solo says, “Let’s get out of here, ready for light-speed? One… two… three!” Han pulls back on the hyperspace throttle and nothing happens. He then says, “It’s not fair! It’s not my fault! It’s not my fault!” 

Later in the movie, when Lando and Leia are trying to escape Bespin, the hyperdrive fails yet again. Lando exclaims, “They told me they fixed it. I trusted them to fix it. It's not my fault!” In the first case, the transfer circuits were damaged; in the second, stormtroopers had disabled the hyperdrive.

Ultimately, they were at fault; they were the captains of the ship, and the buck stops with them. It doesn't matter what caused the problems; they were responsible, and excuses don't matter when a Sith Lord is in pursuit.

I am seeing a trend where breached companies might be heading down a similar “it’s not my fault” path. Consider these examples:



(TNS) — A powerful storm is bearing down on the Philippines, prompting residents to flee their homes in some central coastal regions still recovering from last year's deadly Typhoon Haiyan.

Typhoon Hagupit, which was packing winds as high as 149 mph over the Pacific Ocean on Thursday, is expected to make landfall Saturday, bringing heavy rain and storm surges of up to 13 feet.

Although there is uncertainty about the storm's route, forecasts by the Philippines weather agency show it hitting the eastern coast and barreling west along a trajectory similar to that of Haiyan, which destroyed about 1 million homes, displaced 4 million people and left more than 7,300 dead and missing in November 2013.



When creating a business continuity (BC) or disaster recovery (DR) plan, I say “begin with the end in mind.”

A BC / DR plan’s primary goal is to help prepare an organization so it can respond to and fully recover from any disaster as quickly as possible. But how many actually get to the end with a fully functional, integrated, easy-to-use crisis management plan (or incident management or continuity of operations plan)? How many still have a big, thick binder with multiple pages you have to flip through to find the information you need?

The point of this article is to map out elements of an effective crisis management plan with the goal of helping you avoid recovery delays and potential financial or operational disasters. Having an effective crisis management plan with each action mapped out prior to an incident is essential. Without it, your emergency response might lead to catastrophic consequences for your employees, your business and your customers.



Efforts continue to stop the spread of the Ebola outbreak and to find vaccines that can defeat the virus. However, businesses need to be prepared in more ways than one. Although the risk is considered low that a widespread Ebola infection would occur outside West African countries, the potential consequences could be catastrophic and deadly. As with other epidemics that became pandemics, precautions against Ebola can start with common sense instructions to prevent infection and to react appropriately if it is detected. But they cannot end there. Organisations must make sure that additional protection is in place both for their employees and their business activities.




Businesses in the UK are at risk of sleepwalking into a reputational time bomb due to a lack of awareness on how to protect their data assets, according to new research by BSI. As cyber hackers become more complex and sophisticated in their methods, UK organizations are being urged to strengthen their security systems to protect both themselves and consumers.

The BSI survey of IT decision makers found that cyber security is a growing concern with over half (56%) of UK businesses being more concerned than 12 months ago. 7 in 10 (70%) attribute this to hackers becoming more skilled and better at targeting businesses. However, whilst the vast majority (98%) of organizations have taken measures to minimize risks to their information security, only 12% are extremely confident about the security measures their organization has in place to defend against these attacks.

These concerns echo those in the annual Horizon Scan survey carried out by the Business Continuity Institute and sponsored by BSI, which showed that cyber attacks and data breaches are the joint second biggest concern for business continuity practitioners. In the 2014 report, 73% of respondents to a global survey expressed either concern or extreme concern about each of these threats materialising.

Worryingly, IT Directors appear to have accepted the risks to their information security, with 9 in 10 (91%) admitting their organization has been a victim of a cyber-attack. Around half have experienced an attempted hack, and/or suffered from malware (49% in both instances). Around four in ten (42%) have experienced the installation of unauthorized software by trusted insiders, and nearly a third (30%) have suffered a loss of confidential information.

Organizations need to safeguard themselves and their customer data; however, there is an inherent lack of trust from consumers about how their data is handled, with a third of consumers admitting they do not trust organizations with their data. There have been many high profile data breaches in the last few years that help demonstrate just why this lack of trust is justified. On the other hand, there is a level of acceptance that nothing online will ever be safe, leading to a false sense of security that ‘this will not happen to me’ amongst those who have not suffered from a cyber-attack or cyber-crime.

Maureen Sumner Smith, UK Managing Director at BSI added: “Consumers want their information to be confidential and not shared or sold. Those who want to be reassured that their data is safe and secure are looking to organizations who are willing to go the extra mile to protect and look after their data. Best practice security frameworks, such as ISO 27001 and easily recognizable consumer icons such as the BSI Kitemark for Secure Digital Transactions can help organizations benefit from increased sales, fewer security breaches and protected reputations. The research shows that the onus is on businesses to wake up and take responsibility if they want to continue to be profitable and protect their brand reputations.”

BSI has announced the availability of a revised version of PAS 96, which helps companies safeguard food and drink against malicious tampering and food terrorism. PAS 96 ‘Defending food and drink’ was first published in 2008 as a guide to Hazard Analysis Critical Control Point (HACCP) which identifies and manages risks in supply chains.

The food and drinks industry is used to handling natural errors or mishaps within the food supply chain, but the threat of deliberate attack, although not new, is growing with the changing political climate. Ideological groups can see this as an entry point to commit sabotage or further criminal activity.

The impacts of threats to the food supply chain can therefore be significant. They can include direct losses from responding to an act of sabotage and compensation paid to affected producers, suppliers, customers and distributors. Trading partners can impose trade embargoes, and long-term reputational damage can follow an attack.



The year is 2015. You walk into your bank to make a withdrawal, hold your smartphone to the terminal with one hand, and put the fingers of your other hand on the small green-glowing window.

A buzzer sounds and the words “IDENTITY REJECTED” flash onto the screen. A security guard appears from nowhere.

You begin the first of many long, frustrating protestations. You are who you say you are, but you can’t prove it.

Your identity has been snatched.



(TNS) — This is a test of the region's preparedness for sea level rise and climate change. This is only a test:

It's Aug. 19, 2044. Hurricane Elvis, a Category 3 storm, is bearing down on Hampton Roads.

Sea levels are 1.5 feet higher than today. Because of climate change, the region has had 60 days of 90-degree temperatures this year. The National Weather Service is forecasting storm surges from Elvis of 3 to 8 feet.

What does Hampton Roads need to do to prepare for something like this?

The scenario was part of a federally led exercise held this week at Old Dominion University.



When a technology company does well, more power to it. When it does good at the same time, it warrants our attention. So when TCN, a provider of cloud-based call center technology in St. George, Utah, announced that it was releasing technology that would help visually impaired people get jobs in call centers, my attention was immediately grabbed.

On Tuesday, TCN announced the release of Platform 3 VocalVision, technology that enables visually impaired people to navigate TCN’s Platform 3.0 call center suite. The approach was to optimize the platform to be compatible with Job Access with Speech (JAWS), a popular screen reader that assists users whose vision impairment prevents them from seeing screen content or using a mouse.

In an email interview, Terrel Bird, co-founder and CEO of TCN, explained the roots of the project.



(TNS) — In baseball, when a slugger has been slumping for a few years in a row, the pundits in the upper deck will be quick to declare a trend: “the bum’s done,” they’ll assert.

Weather forecasters are a little more retrospective.

In 2001, forecasters had announced that they believed that since 1995, the tropics had been in a cycle of more and stronger storms. Such periods can last 25 to 40 years.

The hurricane season that ended Sunday, Nov. 30, was quiet. So was the year before that. Only three seasons since 1995 have been below average. We just went through two of them.

This followed some of the busiest, and most damaging, years on record.



Boards, regulators and leadership teams are demanding more and more of risk, compliance, audit, IT and security teams. They are asking them to collaboratively focus on identifying, analyzing and managing the portfolio of risks that really matter to the business.

As risk management programs evolve to more formal processes aligned with business objectives, leaders are realizing that by developing a proactive mindset in risk and compliance management, teams can provide added value to help the organization gain agility by identifying new opportunities as well as managing down-side risk. Organizations with this new perspective are more successful in orchestrating change to provide a 360-degree view of both risk and opportunity.

Risk teams that are further along on the journey of leveraging proactive approaches to risk management look not only within the organization but beyond to supplier, third party and customer ecosystems. This means developing a view across the larger enterprise infocosm, to ensure alignment of people, processes and technologies.



There are a great many challenges to overcome to prepare a sizable organization for crises, emergencies or reputation disasters. But one seems nearly intractable: the ignorance of those in high places. The very ones who will make the big decisions when push comes to shove. The lawyers, the CEOs, the regional execs, the Incident Commanders, the chiefs, the directors, the presidents.

If the ones who call the shots during a response do not understand the water they are swimming in, the effort is doomed–despite all the preparation that communication and public relations leaders may put in place.

A week or so ago I had the privilege of presenting to the Washington State Sheriffs and Police Chiefs association training meeting. Chief Bill Boyd and I were to give a four-hour presentation to these law enforcement leaders. Bill did the bulk of the work on the presentation, but had a medical emergency and couldn’t present with me. One item he had gathered really hit me–and those present. The Boston Police radio message from the Incident Commander on the scene just after the bombing occurred captured the calm but clearly adrenalin-filled IC detailing what actions the police on the scene were taking. Then he said, “And I need someone to get on social media and tell everyone what we are doing.” That’s correct: one of the top priorities of this Commander was to inform the public of police actions, and the way he knew to do that was through the agency’s social media channels.



EMC Corporation has published the findings of a new global data protection study that reveals that data loss and downtime cost enterprises more than $1.7 trillion in the last twelve months. Data loss is up by 400 percent since 2012 while, surprisingly, 71 percent of organizations are still not fully confident in their ability to recover after a disruption.

The EMC Global Data Protection Index, conducted by Vanson Bourne, surveyed 3,300 IT decision makers from mid-size to enterprise-class businesses across 24 countries.

Impact of data loss and downtime
The good news is that the number of data loss incidents is decreasing overall. However, the volume of data lost during an incident is growing exponentially:

  • 64 percent of enterprises surveyed experienced data loss or downtime in the last 12 months;
  • The average business experienced more than three working days (25 hours) of unexpected downtime in the last 12 months;
  • Other commercial consequences of disruptions were loss of revenue (36 percent) and delays to product development (34 percent).

New wave of data protection challenges
Business trends, such as big data, mobile and hybrid cloud are creating new challenges for data protection:

  • 51 percent of businesses lack a disaster recovery plan for any of these environments and just 6 percent have a plan for all three;
  • In fact, 62 percent rated big data, mobile and hybrid cloud as 'difficult' to protect
  • With 30 percent of all primary data located in some form of cloud storage, this could result in substantial loss.

The protection paradox
Adopting advanced data protection technologies dramatically decreases the likelihood of disruption. And, many companies turn to multiple IT vendors to solve their data protection challenges. However, a siloed approach to deploying these can increase risks:

  • Enterprises that have not deployed a continuous availability strategy were twice as likely to suffer data loss as those that had;
  • Businesses using three or more vendors to supply data protection solutions lost three times as much data as those who unified their data protection strategy around a single vendor;
  • Those enterprises with three vendors were also likely to spend an average of $3 million more on their data protection infrastructure compared to those with just one.

More details: http://emc.im/DPindex

2014 saw continued use of buzzwords like cloud, wearables, BYOD and IoT, but conversations about what these will mean for business if we don’t evolve and prepare our IT infrastructures were significantly lacking.

There’ll always be some level of disconnect between maintaining IT and maintaining business productivity; both have very different deliverables. However, the two must be interlinked, as there are key areas where IT and business objectives overlap. Understanding the ICT environment in depth is important to improving business resilience and the efficiency of the ICT infrastructure.

In this article Patrick Hubbard highlights emerging areas where greater understanding is required to enable organizations to maintain current levels of ICT availability and resiliency.



Remember the business aftermath of Hurricanes Katrina and Sandy? In each case, companies far and wide scrambled to put business continuity/disaster recovery (BC/DR) plans in place if they didn’t already have them – whether or not they had felt so much as a raindrop from the super-storms.

But human memory is short-lived. As incredible as it may seem, some people have already forgotten the devastation and destruction caused by disasters such as Hurricanes Sandy and Katrina. The problem, of course, is that the risk of disasters hasn’t gone down, even if our alertness to them has. All you need to do is take a look at data such as Sperling’s natural disaster map to see that the next disaster could be just around the corner … with the risks notably higher depending on where you are.

So now – in between crises – is a great time to figure out how to mitigate the risk associated with natural disasters. And one of the foremost ways to do so is to consider the location of your secondary or backup data center.



(TNS) — The nation's top housing official recently toured the core of a house in Brownsville that holds the promise of returning people quickly to their homes after a major disaster. What he didn't know was that it had been partially put up in an afternoon by a group of unskilled teenagers.

The house inspected Monday by Housing and Urban Development Secretary Julian Castro is part of a $2 million pilot project that envisions the construction of less-expensive, structurally sound housing within days of a disaster instead of years. Although hundreds of low-income homes have been rebuilt since Hurricanes Dolly and Ike laid waste to the Texas Gulf Coast in 2008, many families are still waiting for housing already funded with federal disaster money.

The RAPIDO project, to build 20 prefabricated homes in the Rio Grande Valley, is the first of two projects that its originators hope will revolutionize not only the way housing is built after disasters, but as a way to provide low-income housing everywhere in Texas. A similar $4 million project to build 20 homes in Harris and Galveston counties is in its early stages and expected to produce its first house by March.



(TNS) — Tornado Alley is undergoing a transformation.

The number of days that damaging tornadoes occur has fallen sharply over the past 40 years, a study published recently in the journal Science shows. But the number of days on which large outbreaks occur has climbed dramatically.

“It’s really pretty shocking,” said Greg Carbin, warning coordination meteorologist with the Storm Prediction Center in Norman, Okla.

In the early 1970s, there was an average of 150 days each year with at least one F1 tornado. That number has dropped to about 100 days each year now.

There were just six days in all of the 1970s with at least 30 F1 tornadoes. But that number has jumped to three a year now.



Where does your business stand on security readiness?

If you are like the majority of small businesses, you are pretty nervous about your cybersecurity efforts and ability to thwart and/or react to a threat.

In October, e-Management asked attendees at the CyberMaryland Conference about their cybersecurity policies. What the CyberRX survey found was that 63 percent of small businesses aren’t very confident about their continuous security monitoring capabilities and nearly a quarter don’t provide any type of security training for their employees. Of those that do provide some sort of training, it is mostly periodic—and we’ve learned that cybersecurity education and training needs to be a constant evolving effort because the threat landscape is always changing.



Tuesday, 02 December 2014 00:00

Working Our Way Toward the Federated Cloud

Some interesting research came out last month regarding the enterprise’s attitude toward the cloud and what it will take to push more of the data load, and mission-critical functions in particular, off of local infrastructure. It turns out that while security and availability are still prime concerns, flexibility and federation across multiple cloud architectures are equally important.

In IDG’s most recent Enterprise Cloud Computing Study, more than a third of IT respondents say they are comfortable with the current state of cloud technology, with about two thirds saying the cloud increases agility and employee collaboration. The key data, however, comes in the attitude toward advanced networking technologies like software-defined networking (SDN) and network functions virtualization (NFV), with more than 60 percent saying they plan to increase their investment in these areas specifically to enhance their ability to access and manage disparate cloud environments.



Tuesday, 02 December 2014 00:00

Business Continuity Management and ERM Tools

In theory, BCM and ERM should get along just fine. ERM, or enterprise risk management, is concerned with identifying both positive and negative risk for an organisation – or opportunities as well as threats, if you prefer. Business continuity management is about keeping a business in operation in the face of adversity. It is also about enhancing the value and profitability of operations, thanks to a better corporate image with customers, banks, insurers and the like. Effective BCM depends on good risk analysis of the kind that ERM is designed to do. With a wide selection of ERM software tools available to automate risk management, how can organisations find out whether there is one that is right for them?




CHICAGO – With the holidays fast approaching, the U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) Region V office encourages everyone to consider giving gifts that will help protect their family members and friends during a future emergency.

“A gift to help prepare for emergencies could be life-saving for friends and family,” said FEMA Region V acting regional administrator, Janet Odeshoo. “These gift ideas provide a great starting point for being prepared for an emergency or disaster.”

Supplies for an emergency preparedness kit can make unique—and potentially life-saving—holiday gifts, such as:

  • Battery-powered or hand-crank radio and a NOAA Weather Radio with tone alert.
  • A flashlight with extra batteries.
  • Solar-powered cell phone charger.
  • Smoke detector and/or carbon monoxide detectors.
  • First aid kit.
  • Fire extinguisher and fire escape ladder.
  • Enrollment in a CPR or first aid class.
  • Books, coloring books, crayons and board games for the kids, in case the power goes out.
  • Personal hygiene comfort kit, including shampoo, body wash, wash cloth, hairbrush, comb, toothbrush, toothpaste and deodorant.
  • A waterproof pouch or backpack containing any of the above items, or with such things as a rain poncho, moist towelettes, work gloves, batteries, duct tape, whistle, food bars, etc.

Holiday shoppers might also consider giving a winter car kit, equipped with a shovel, ice scraper, emergency flares, fluorescent distress flags and jumper cables. For animal lovers, a pet disaster kit with emergency food, bottled water, toys and a leash is also a good gift.

The gift of preparedness might just save the life of a friend or family member. For more information, preparedness tips or other gift ideas, visit www.Ready.gov.

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at . The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

The enterprise is poised to embark on a number of data and infrastructure initiatives in the coming years, almost all of which are focused on the capture and analysis of Big Data.

But while the term “Big Data” is appropriate to describe the scale of the challenge ahead, it leaves the impression that the solution is simply to deploy more resources to accommodate larger workloads. As many early adopters are finding out, however, Big Data is not just big, it’s also complex and nuanced — and that spells trouble for anyone who thinks they can just throw resources at Big Data and make it work.

As MarkLogic’s Jon Bakke points out, Big Data can encompass everything from large text and database files to audio/video and real-time data streams tracking changes to complex systems and environments. To handle this, the enterprise will need to mount a multi-pronged approach that encompasses not just advanced database systems and emerging infrastructure technologies, but legacy systems as well. A key strategy in squaring this circle is the logical data warehouse (LDW), which encompasses two or more physical database platforms united under a common access and control mechanism. In this way, the enterprise can take advantage of existing capabilities like RDBMS while employing state-of-the-art capabilities for the specific functions that need them.
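
A minimal sketch of the logical data warehouse idea follows: two separate physical stores fronted by one access routine. SQLite files stand in for the legacy RDBMS and the newer platform, and the table names and queries are invented purely to illustrate the "common access and control mechanism" described above.

    # Minimal sketch of a "logical data warehouse": one access layer over two
    # separate physical stores. SQLite stands in for both back ends here;
    # table names and data are invented for illustration.
    from sqlalchemy import create_engine, text

    legacy_rdbms = create_engine("sqlite:///legacy.db")    # stand-in for an existing RDBMS
    new_platform = create_engine("sqlite:///bigdata.db")   # stand-in for a newer engine

    for engine, ddl in [
        (legacy_rdbms, "CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)"),
        (new_platform, "CREATE TABLE IF NOT EXISTS clickstream (id INTEGER, events INTEGER)"),
    ]:
        with engine.begin() as conn:
            conn.execute(text(ddl))

    def query_all(sql_by_store):
        """The common access mechanism: fan a query out to each back end."""
        results = {}
        for name, (engine, sql) in sql_by_store.items():
            with engine.connect() as conn:
                results[name] = conn.execute(text(sql)).fetchall()
        return results

    print(query_all({
        "orders":      (legacy_rdbms, "SELECT COUNT(*) FROM orders"),
        "clickstream": (new_platform, "SELECT COUNT(*) FROM clickstream"),
    }))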




Did you know that by making a few simple changes to your CV and LinkedIn profile you can increase the number of interviews you secure by up to 50%?

Having an effective CV and LinkedIn profile is absolutely critical, so, in conjunction with the CV & Interview Advisors, the Business Continuity Institute is inviting you to attend a free webinar to help you significantly enhance your CV and LinkedIn profile as you prepare for the new year.

The webinar will be delivered by one of the UK's leading authorities on personal branding and career enhancement and previous events have been described as "outstanding" and "truly inspirational".

In this lively one hour session, you will learn:

  • How to assess the effectiveness of your current CV
  • The things that you should never do on your CV
  • How to transform your CV and LinkedIn profile into a powerful business case
  • How to use case studies on your CV and LinkedIn profile to differentiate you from other candidates

The webinar is not your typical boring top 10 tips; it is a leading-edge session for professionals and is packed with practical advice that really works, as this candidate recently confirmed: “Following the webinar, I have spent the last week re-writing my CV in the format you discussed, then put it online last night. Today, I have received three emails from agencies who want to deliver my CV to their clients. Alongside this I have had two calls from companies who have invited me in for a chat about vacancies. This is more interest than I have had in the last three years combined! Testament to the success of your webinar.”

If you want more interviews and job offers, then investing one hour of your life watching this webinar is an absolute must. The webinar takes place on Monday 8th December at 1915 GMT and to register, all you need to do is click here and fill in your details.

Monday, 01 December 2014 00:00

Do Data Lakes Need a ‘Refinement Layer?’

Okay, sure, maybe Gartner has a point about this whole “data lake becoming a data swamp” problem. But a recent Information Age piece proposes that organizations can get around all that — and the need for data scientists — with a “data refinery layer.”

Haven’t heard of such a thing? Neither have I, and Google seems to only have heard of it twice, including this article and an unsourced Word document.

“As data is consolidated, the refinement layer would process, evaluate, correlate and learn from the information passing through it, essentially generating additional insights and information from the data, and also linking to the aforementioned applications to drive value,” the article explains.

That sounds wonderful. Let’s do it! The problem is, after reading the article, I’m still not exactly sure what it is or if it exists or if it could exist.



I’ve pointed out many times over the years that everyone has their own perception of green. To a coal plant operator, a 20 percent reduction in emissions is cause for celebration, while the environmentalist still frets over the 80 percent still coming out of the stack.

So it is understandable that the data center industry – arguably the top energy consumer on the planet – is both the hero and the villain when it comes to greening up the world’s digital infrastructure. And in time-honored tradition, the biggest targets are always first on the hit list, which in this case would be the hyperscale providers like Google, Facebook and Amazon.

But as Data Center Dynamics’ Peter Judge points out, criticism of the web-scale providers actually misses the mark when it comes to environmental friendliness because their facilities, while massive, are also among the most efficient on the planet. According to a recent breakdown from the Natural Resources Defense Council, hyperscale infrastructure consumes about 5 percent of total data center energy draw, and is probably responsible for even less of the emissions due to its state-of-the-art power capabilities. The largest consumers of data center power are the small-to-mid-sized facilities, which account for about half of total consumption. Large enterprises take up another quarter or so, followed by the colocation industry, which draws another 20 percent.




One of the highlights of the Business Continuity Institute’s World Conference in November was BSI’s announcement of their new guidance for organizational resilience – BS 65000. Richard Taylor from BSI highlighted the benefits of organizational resilience and explained what this standard can do to support organizations aiming to achieve this by providing an overview of resilience, describing the foundations required and explaining how to build resilience. This standard, one that deals with an organization’s capacity to anticipate, respond and adapt, has now been published and officially launched at an event in London on the 27th November.

It is argued that by following this guidance, an organization is better able to adapt successfully to unforeseen disruptions and changing environments, perhaps not dissimilar to the benefits of an effective business continuity programme. The standard arguably takes this a step further by enabling an organization to gain a competitive edge: identifying gaps in the market, better understanding risks and opportunities, and being more agile and innovative in order to exploit them. It could also help the organization to reduce costs and increase efficiency by avoiding potential pitfalls.

More and more these days we talk about the value of reputation and BS 65000 provides guidance that can help an organisation preserve or improve its reputation by being seen as vigilant and robust, while also engendering trust amongst its internal and external stakeholders. All of this can help cultivate a culture of shared purpose and values.

Patrick Alcantara, Research Associate at the BCI and author of the Institute’s Working Paper on conceptualising organizational resilience, commented: “We see the launch of the BS 65000 as the next step towards building a more resilient world. As one of the institutions who collaborated in developing this standard, we subscribe to its vision of putting resilience as a strategic goal for top management. This standard adds more value to BC and the work its practitioners do as one of the integral ‘protective disciplines’ within its scope.”

Anne Hayes, Head of Market Development for Governance and Risk at BSI, said: “Organizations that are resilient behave in a very specific way and have long understood what this means to their long term success. They take a proactive approach to governing themselves and have pinpointed the importance of being forewarned. BS 65000 can work alongside their existing risk, crisis and business continuity management strategies to provide a solid defence against weathering a tough business climate.”

A wide range of experts and representatives from a cross-section of industry, trade bodies and academia were involved in the consensus-based process for developing the standard. Deborah Higgins MBCI, Head of Learning and Development at the BCI, played an important role in this as part of the group that developed the standard, representing the BCI Membership and encouraging Members to comment as part of the public consultation process.

To purchase your copy of BS 65000, access the BSI shop by clicking here and then following the link for BS 65000.

By Steve Salinas

For those of us in the technology industry, comparing Moore's Law to technology advancement is nothing new. Moore's Law holds that computer processing power will double every two years. Aside from a few peaks and valleys, I think most would agree that this is true. I contend that Moore's Law, at least in principle, holds true for malware and attack methods as well.
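
As a rough sense of scale for that doubling claim, the short calculation below compounds an arbitrary starting capability over a few example periods; the time spans are chosen purely for illustration and are not figures from this article.

    # Rough arithmetic behind "doubling every two years": capability after
    # t years is 2 ** (t / 2) times the starting point. Illustrative only.
    def growth_factor(years, doubling_period=2):
        return 2 ** (years / doubling_period)

    for years in (2, 6, 10):
        print(f"after {years} years: ~{growth_factor(years):.0f}x")
    # after 2 years: ~2x; after 6 years: ~8x; after 10 years: ~32x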

Unless you have been hiding under a rock, you will be fully aware that cybercrime has exploded in recent years. Hackers, who once had to build their own malware from scratch, now have access to numerous toolkits that make developing their own variant of malware easy. For the hacker who would rather spend their money than their time on malware, there are even malware exchanges where anyone can buy malware built for anything from controlling a webcam to siphoning credit card information, and anything in between.

Combine the ease with which hackers can access malware with the way social media makes it easy to organize groups of people around the world, and you have a dangerous new frontier: attackers who can work together to target an organization, steal data and cover their tracks, all under the cover of anonymity. How can you defend yourself against this new breed of attacker?



Monday, 01 December 2014 00:00

Seven crisis management tips

By Charlie Maclean-Bristol, FBCI

Recently I conducted three strategic-level exercises and thought I would share some of the lessons learned. The exercises involved two public sector executive teams and a manufacturer.

The following are the main lessons learned.



Former FBI Director Robert Mueller once said, “There are only two types of companies: those that have been hacked and those that will be. Even that is merging into one category: those that have been hacked and will be again.” This is the environment in which risk managers must protect their businesses, and it isn’t easy.

Cyber risk is not an IT issue; it’s a business problem. As such, risk management strategies must include cyber risk insurance protection. Until recently, cyber insurance was considered a nice-to-have supplement to existing insurance coverage. However, following in the wake of numerous, high-profile data breaches, cyber coverage is fast becoming a must-have. In fact, new data from The Ponemon Institute indicates that policy purchases have more than doubled in the past year, and insiders estimate U.S. premiums at around $1 billion today and rising.

But is a cyber policy really necessary? In short, yes. As P.F. Chang’s China Bistro recently discovered, commercial general liability (CGL) policies generally do not include liability coverage to protect against cyber-related losses. CGL policies are intended to provide broad coverage, not necessarily deep coverage. Considering the complexity of cyber risks, there is a real and legitimate need for specialized policies that indemnify the insured against cyber-related loss and liability.



While organizations of just about any size have an interest in tapping into the potential of Big Data, the vast majority of them won’t have the resources required to actually do that any time soon unless they get some external help.

With that issue in mind, First Data, a provider of credit card processing services, has been building out an Insightics analytics service in the cloud that aggregates both internal data collected by First Data and external data sources. The latest external data source that First Data is including comes from Factual, provider of a location-based service that helps organizations deliver mobile experiences based on the physical location of a mobile computing device.

Sandeep Garg, vice president of information and analytics at First Data, says that rather than requiring small to midsize businesses (SMBs) to build their own Big Data applications and acquire the associated infrastructure, First Data has created an application that they can either use directly or address programmatically via application programming interfaces (APIs).



Application development is a vital and ever-changing part of the mobile ecosystem. Now, there are rumblings that a new approach is necessary. Research sponsored by Kinvey points to dissatisfaction on the part of CIOs about mobile app creation. Half of those surveyed, according to the story at Associations Now, think that it takes too long to build an app. More than half say it takes seven months to a year, and 35 percent think it takes less than six months.

A big problem, according to the survey, is lack of a cohesive central strategy. Seventy-five percent of respondents say that product lines and “individual functions” drive development. The process may be changing, however: 54 percent of those who answered the survey say they will standardize development and 63 percent will utilize cloud approaches.

The call to change is being heard. Forrester released a report on the transitions occurring in the mobile app development sector. It identifies eight. The top four: Standalone apps will fade; hardware changes will create new opportunities; and mobile competition will shift to both accessories and ecosystems. The other four changes and details on all of them are available at the ReadWrite story on the Forrester research.



Recently the US law firm of Foley and Lardner LLP and MZM Legal, Advocates & Legal Consultants in India jointly released a white paper, entitled “Anti-Bribery and Foreign Corrupt Practices Act Compliance Guide for U.S. Companies Doing Business in India”. For any compliance practitioner it is a welcome addition to country specific literature on the Foreign Corrupt Practices Act (FCPA), UK Bribery Act and other anti-corruption legislation and includes a section on India’s anti-corruption laws and regulations.

FCPA Enforcement Actions for Conduct Centered in India

Under the FCPA, several notable US companies have been through enforcement actions related to conduct in India. Although not labelled a ‘Box Score’, the authors provide a handy chart listing the companies involved, a description of the conduct and the fine or penalty imposed.



A lack of widespread adherence to best practices, combined with the number of organizations that have suffered a significant cyber attack, potentially indicates a false sense of security.

SolarWinds has released the results of its Information Security Confidence Survey, which explored IT professionals’ confidence in their organizations’ security measures and processes. The survey found that while confidence is notably high, likely the result of several key factors, widespread adherence to security best practices is lacking and significant, damaging attacks continue, potentially indicating that this confidence amounts to a false sense of security.

“Organizations are taking positive steps toward improving their information security; most notably in terms of budget and resources,” said Mav Turner, director of security, SolarWinds. “It’s important, however, to never fall into the trap of over-confidence. IT pros should do everything they can to ensure the best defences possible, but never actually think they’ve done everything they can. This approach will ensure they are proactively taking all the steps necessary to truly protect their organizations’ infrastructures and sensitive data.”

Conducted in October 2014 in conjunction with Enterprise Management Associates, the survey yielded responses from 168 IT practitioners, managers, directors and executives in the UK from small and midsize enterprise companies.



Wednesday, 26 November 2014 00:00

Crisis Management Survey 2014 Results

Steelhenge Consulting has published the results of its Crisis Management Survey 2014: ‘Preparing for Crisis, Safeguarding Your Future’.

The aim of the Crisis Management Survey was to build a better picture of how organizations are preparing themselves to manage crises effectively in order to protect their reputation and performance. It asked the 375 participants, drawn from organizations around the world, what they are doing to prepare to manage crises, what challenges they face in creating a crisis management capability, and how they assess their overall level of crisis preparedness.

Over half rated themselves as less than very well prepared, with 13 percent responding that they were either not well prepared or not prepared at all.

The crisis communications function was shown to be lagging behind when it comes to crisis preparedness; while 84 percent of organizations surveyed had a documented crisis management plan, over a quarter of respondents recorded that they do not have a documented plan for how they will communicate in a crisis and 41 percent responded that they do not have guidance on handling social media in a crisis.

Other key themes from the survey results include:

Embedding: less than half of the respondents had a programme of regular reviews, training and exercising that would help embed crisis management within an organization and create a genuinely sustainable crisis management capability.

Engagement: in the face of high profile crises befalling major organizations year after year, 29 percent of organizations taking part in the survey still waited for the brutal experience of a crisis before creating a plan. Crisis preparedness is still a work in progress, particularly with regard to crisis communications planning.

Ownership: ownership of crisis management at the strategic level amongst the survey population lay predominantly with the chief executive. However, responsibility for day-to-day management of the crisis management capability was spread widely across a broad range of functional roles.

For the full results of the Crisis Management Survey, please click here (PDF).

Wednesday, 26 November 2014 00:00

Thanksgiving Crowd Control

As holiday shopping gets underway, several major retailers are opening even earlier this year, offering the prospect of deep discounts and large crowds to an ever-growing number of shoppers.

The National Retail Federation (NRF) notes that 140 million holiday shoppers are likely to take advantage of Thanksgiving weekend deals in stores and online.

Millennials are most eager to shop, with the NRF survey showing 8 in 10 (79.6 percent) of 18-24 year olds will or may shop over the weekend, the highest of any age group.

Much has been written about the risks of online shopping, but for those who still head to the stores, there are dangers there too.



After covering tips for small to midsize businesses (SMBs) to minimize data loss, it makes sense to also delve further into disaster recovery. For those businesses that have a mix of infrastructures, including those that store information both onsite and in the cloud, it can be extremely complex to ensure that the data remains available after a disaster.

Unitrends, provider of industry-leading backup, archiving and disaster recovery solutions, offers one simple solution that many SMBs find attractive: Disaster Recovery as a Service (DRaaS). According to Subo Guha, vice president of product management for Unitrends, the cloud is helping to make disaster recovery options more attainable for SMBs. In an email interview, Guha explained why disaster recovery is integral for even small businesses:

Disaster recovery (DR) is crucial for any size business. However, due to limited resources, an SMB’s ability to quickly recover (from outages, disaster and/or catastrophic failure) can be the sole factor in their survival or failure (according to a recent IDC study, 80% of SMB respondents reported that network downtime costs their organizations at least $20,000 per hour). Although many SMBs acknowledge the importance of protecting their data, DR continues to be a major challenge; IT environments are more complex than ever, as critical data resides across virtual, physical and cloud infrastructures and IT staffing and budgets are constrained. And, SMBs are overwhelmed by the time, money and personnel required to build physical failover environments for disaster recovery purposes. SMBs simply can’t afford large scale disaster recovery policies and facilities; therefore, they are continuously striving for more economical means to manage DR.
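
As a back-of-the-envelope illustration of why that figure matters, the sketch below multiplies the $20,000-per-hour cost cited from the IDC study by a few hypothetical outage durations; the durations themselves are invented for the example.

    # Back-of-the-envelope downtime cost using the $20,000/hour figure cited
    # above. The outage durations are hypothetical examples.
    HOURLY_DOWNTIME_COST = 20_000  # USD per hour, per the cited IDC figure

    def outage_cost(hours, hourly_cost=HOURLY_DOWNTIME_COST):
        return hours * hourly_cost

    for hours in (1, 4, 24):
        print(f"{hours:>2} hour(s) of downtime: ${outage_cost(hours):,}")
    # 1 hour: $20,000; 4 hours: $80,000; 24 hours: $480,000

Numbers like these are behind the appeal of cloud-based DR for organizations that cannot justify building a physical failover site of their own.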



Tuesday, 25 November 2014 00:00

Meet the Challenges of a Changing Climate

The Climate Resilience Toolkit provides resources and a framework for understanding and addressing the climate issues that impact people and their communities.



Following its success in 2014, the Business Continuity Institute will again be hosting the BCI Middle East Conference in 2015, this time in Doha, Qatar on the 11th and 12th May.

Over the two days, the conference will focus on the latest thinking and best practice in continuity and resilience. There will be plenary sessions from leading local and international experts with opportunities to break out into streams and examine the key themes in more depth – either from a business continuity or enterprise risk perspective.

Chris Green FBCI, Head of the Business Continuity Programme at Qatar Airways, commented: "There are many benefits to being active in the business continuity industry. There is a huge advantage in meeting and in being connected to other people who can act as a valuable resource for information and ideas. The BCI Middle East Conference will give you an opportunity to network with business continuity professionals from different countries and industry sectors. All of them work in the region so you can share with them your common areas of interest, thoughts and ideas. In return, you are sure to gain inspiration from the experience and knowledge available from the speakers and delegates. Alongside this great-value conference will be an exhibition of products and services from some innovative and influential companies in the Middle East.”

Thomas Keegan FBCI, Middle East Enterprise Resilience Leader at PwC, added: “No matter how experienced you are in business continuity or the wider field of organisational resilience, there is always something new to learn. The educational aspect of the BCI Middle East Conference will expose you to new ways of conducting your business – informing you of the latest research, keeping you up-to-date on best practice and demonstrating the latest tools and techniques that are available to you, all of which are designed to assist you in your role.”

As well as the main conference, delegates will be able to register for site visits to high profile organisations and discover how they put business continuity theory into practice. For those who wish to focus on developing their practical skills, the BCI will be running a selection of training courses – perhaps an ideal opportunity to get certified in business continuity. Coinciding with the conference will be the BCI’s Middle East Awards where individuals and organisations from across the region will have their outstanding contribution to the industry recognised in front of their colleagues.

The full programme is coming out soon and once confirmed will be made available on the BCI website. The programme will provide something of interest to all business continuity professionals, newcomers and experienced professionals alike, as well as those working in associated disciplines who have an interest in building and improving resilience within their organisation. To register your interest for the BCI Middle East Conference, please email the BCI events team at events@thebci.org.

By Rose Jacobs

As we move into the holiday season, the idea of travel begins to cause many of us sleepless nights: We have visions of highway traffic jams, long airport layovers, even longer flights and, that nightmare of nightmares, the winter storm that leaves our best-laid plans in total disarray.

There is some comfort to be had, however, and it comes in the form of technology. Gadgets and apps can brighten the best of trips and make the worst bearable. I’ve learned this the hard way, through countless hours on the road for work and in the skies to see my family overseas. And from that trial by fire, I can offer 10 tried and tested technology tools that are as necessary as a passport and as comforting as a first-class lounge.



With the biggest shopping events of the season upon them, retailers face tremendous risk and reward as sales and door-busters draw in eager consumers all week. In 2013, Thanksgiving deals brought in 92.1 million shoppers to spend over $50 billion in a single weekend, the National Retail Federation reports.

The National Retail Federation issued crowd management guidelines for retailers and mall management officials to use when planning special events, including Black Friday, product launches, celebrity appearances and promotional sales. General considerations to plan for and curtail any crowd control issues include:



(TNS) — Police officers have become as visible on college campuses as students and professors, as schools respond to the early Thursday morning shooting at Florida State University.

The incident, in which FSU alumnus Myron May injured three students in a campus library before being killed by police, has alarmed students and employees at colleges throughout the state. Schools are now reviewing their own security procedures.

"Incidents like this remind us we can never be too cautious," said Alexander Casas, police chief at Florida International University, west of Miami.

Campus safety has been a high priority for most Florida colleges and universities since the Virginia Tech massacre in 2007. Many schools have added sirens with speakers as well as text, email and social media alert systems. They've also increased the number of counselors to deal with mental health issues.



Improved model, new surge forecast products and research projects debuted

NOAA satellite image of Hurricane Arthur, July 3, 2014. (Credit: NOAA.)

The Atlantic hurricane season will officially end November 30, and will be remembered as a relatively quiet season, as was predicted. Still, the season afforded NOAA scientists opportunities to produce new forecast products, showcase successful modeling advancements, and conduct research to benefit future forecasts.

“Fortunately, much of the U.S. coastline was spared this year with only one landfalling hurricane along the East Coast. Nevertheless, we know that’s not always going to be the case,” said Louis Uccellini, Ph.D., director of NOAA’s National Weather Service. “The ‘off season’ between now and the start of next year’s hurricane season is the best time for communities to refine their response plans and for businesses and individuals to make sure they’re prepared for any potential storm.”

How the Atlantic Basin seasonal outlooks from NOAA’s Climate Prediction Center verified:



[Table: verification of the August and May outlooks for named storms (top winds of 39 mph or higher), hurricanes (top winds of 74 mph or higher) and major hurricanes (Category 3, 4, 5; winds of at least 111 mph)]
“A combination of atmospheric conditions acted to suppress the Atlantic hurricane season, including very strong vertical wind shear, combined with increased atmospheric stability, stronger sinking motion and drier air across the tropical Atlantic,” said Gerry Bell, Ph.D., lead hurricane forecaster at NOAA’s Climate Prediction Center. “Also, the West African monsoon was near- to below average, making it more difficult for African easterly waves to develop.”

Meanwhile, the eastern North Pacific hurricane season met or exceeded expectations with 20 named storms – the busiest since 1992. Of those, 14 became hurricanes and eight were major hurricanes. NOAA’s seasonal hurricane outlook called for 14 to 20 named storms, including seven to 11 hurricanes, of which three to six were expected to become major hurricanes. Two hurricanes (Odile and Simon) brought much-needed moisture to parts of the southwestern U.S., with very heavy rain from Simon causing flooding in some areas.

“Conditions that favored an above-normal eastern Pacific hurricane season included weak vertical wind shear, exceptionally moist and unstable air, and a strong ridge of high pressure in the upper atmosphere that helped to keep storms in a conducive environment for extended periods,” added Bell.

In the central North Pacific hurricane basin, there were five named storms (four hurricanes, including a major hurricane, and one tropical storm). NOAA’s seasonal hurricane outlook called for four to seven tropical cyclones to affect the central Pacific this season. The most notable storm was major Hurricane Iselle, which hit the Big Island of Hawaii in early August as a tropical storm, and was the first tropical cyclone to make landfall in the main Hawaiian Islands since Hurricane Iniki in 1992. Hurricane Ana was also notable in that it was the longest-lived tropical cyclone (13 days) of the season and the longest-lived central Pacific storm of the satellite era.

New & improved products this year

As part of its efforts to provide better products and services, NOAA's National Weather Service introduced many new and experimental products that are already paying off.

The upgrade of the Hurricane Weather Research and Forecasting (HWRF) model in June with increased vertical resolution and improved physics produced excellent forecasts for Hurricane Arthur’s landfall in the Outer Banks of North Carolina, and provided outstanding track forecasts in the Atlantic basin through the season. The model, developed by NOAA researchers, is also providing guidance on tropical cyclones in other basins globally, including the Western Pacific and North Indian Ocean basins, benefiting the Joint Typhoon Warning Center and several international operational forecast agencies. The Global Forecast System (GFS) model has also been a valuable tool over the last couple of hurricane seasons, providing excellent guidance in track forecasts out to 120 hours.

In 2014, NOAA's National Hurricane Center introduced an experimental five-day Graphical Tropical Weather Outlook to accompany its text product for both the Atlantic and eastern North Pacific basins. The new graphics indicate the likelihood of development and the potential formation areas of new tropical cyclones during the next five days. NHC also introduced an experimental Potential Storm Surge Flooding Map for those areas along the Gulf and Atlantic coasts of the United States at risk of storm surge from an approaching tropical cyclone. First used on July 1 as a strengthening Tropical Storm Arthur targeted the North Carolina coastline, the map highlights those geographical areas where inundation from storm surge could occur and the height above ground that the water could reach. 

Beginning with the 2015 hurricane season, NHC plans to offer a real-time experimental storm surge watch/warning graphic for areas along the Gulf and Atlantic coasts of the United States where there is a danger of life-threatening storm surge inundation from an approaching tropical cyclone.

Fostering further improvements

While this year’s hurricane season was fairly quiet, NOAA scientists used new tools that have the potential to improve hurricane track and intensity forecasts. Several of these tools resulted from research projects supported by the Disaster Relief Appropriations Act of 2013, which was passed by Congress in the wake of Hurricane Sandy.

Among the highlights were both manned and unmanned aircraft missions in Atlantic hurricanes to collect data and evaluate forecast models. NOAA and NASA’s missions involving the Global Hawk, an unmanned aircraft that flies at higher altitudes and for longer periods of time than manned aircraft, allowed scientists to sample weather information off the west coast of Africa where hurricanes form, and also to investigate Hurricane Edouard’s inner core with eight crossings over the hurricane’s eye. NOAA launched a three-year project to assess the impact of data collected by the Global Hawk on forecast models and to design sampling strategies to improve model forecasts of hurricane track and intensity.

While the Global Hawk flew high above hurricanes, NOAA used the much smaller Coyote, an unmanned aircraft system released from NOAA’s hurricane hunter manned aircraft, to collect wind, temperature and other weather data in hurricane force winds during Edouard. The Coyote flew into areas of the storm that would be too dangerous for manned aircraft, sampling weather in and around the eyewall at very low altitudes. In addition, NOAA’s hurricane hunters gathered data in Hurricanes Arthur, Bertha and Cristobal, providing information to improve forecasts and to test, refine and improve forecast models. The missions were directed by research meteorologists from NOAA’s Hurricane Research Division, a part of the Atlantic Oceanographic and Meteorological Laboratory in Miami, and the NOAA Aircraft Operations Center in Tampa.

In addition, increased research and operational computing capacity planned in 2015 will facilitate future model upgrades to the GFS and HWRF to include better model physics and higher resolution predictions. These upgraded models will provide improved guidance to forecasters leading to better hurricane track and intensity predictions.

The 2015 hurricane season begins June 1 for the Atlantic Basin and central North Pacific, and on May 15 for the eastern North Pacific. NOAA will issue seasonal outlooks for all three basins in May. Learn how to prepare at hurricanes.gov/prepare and FEMA’s Ready.gov.

NOAA's mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Twitter, Facebook, Instagram and our other social media channels.

(TNS) — Officials are planning the first major rollout of California's earthquake early warning system next year, providing access to some schools, fire stations and more private companies.

The ambitious plan highlights the progress scientists have made in building out the system, which can give as much as a minute of warning before a major earthquake is felt in metropolitan areas.

Until now, only academics, select government agencies and a few private firms have received the alerts. But officials said they are building a new, robust central processing system and now have enough ground sensors in the Los Angeles and San Francisco areas to widen access. They stressed the system is far from perfected but said expanded access will help determine how it works and identify problems.



(TNS) — Signs were already brewing for last week’s devastating lake effect snowfall as early as Nov. 15, when the National Weather Service issued its first watches for a couple of feet of snow — and maybe more.

Over the following two days leading up to the storm, the watches were upgraded to warnings as weather service forecasts called for “near blizzard conditions” across Erie County with “around two feet in the most persistent bands” that could leave “some roads ... nearly impassable.”

The weather service also accurately pegged accumulating snows at almost unheard of “rates of 3 to 5 inches per hour in the most intense portion of the band.”

According to state and Erie County officials, however, not only did the information come too late for them to adequately prepare, but the national forecasting service also failed to project the ferocity and exact locations of the tandem lake-effect storms that dumped 7 feet or more of snow in just 72 hours.



One of the main problems in introducing scale-out architecture to legacy data environments is the sheer number of incompatible formats, platforms and vendor solutions that have infiltrated the data center over the years.

The drive to remove these silos and federate the data environment under either a single proprietary solution or the myriad open platforms currently available is well underway. But in many cases the transition is happening too slowly, given that the need to scale out is immediate as enterprises attempt to cope with issues like Big Data and the Internet of Things.

This is why many researchers are looking to move the concept of virtualization to an entirely new level. Rather than focus on infrastructure like servers, storage and networking, virtualization on the data plane introduces a level of abstraction that allows data and applications to sit on any hardware, and thus interact with other data sets across the enterprise and into the cloud. And as tech author Anne Buff points out, it would also optimize hardware utilization and reduce system complexity, as well as offer more centralized security and control.
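
As a minimal sketch of what abstraction at the data plane might look like, the Python below gives two very different sources, a CSV document and a SQLite table, one common record interface; the class and function names are illustrative and do not refer to any particular product.

    # Minimal sketch of data-plane virtualization: adapters hide where the
    # data physically lives, and consumers iterate one uniform record view.
    import csv, io, sqlite3

    class CsvSource:
        def __init__(self, text):
            self._text = text
        def records(self):
            return list(csv.DictReader(io.StringIO(self._text)))

    class SqliteSource:
        def __init__(self, conn, table):
            self._conn, self._table = conn, table
        def records(self):
            self._conn.row_factory = sqlite3.Row
            return [dict(row) for row in
                    self._conn.execute("SELECT * FROM " + self._table)]

    def virtual_view(*sources):
        """The 'virtualized' layer: every source looks the same to the caller."""
        for source in sources:
            for record in source.records():
                yield record

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sensors (device TEXT, reading REAL)")
    conn.execute("INSERT INTO sensors VALUES ('edge-01', 21.4)")
    csv_data = "device,reading\nedge-02,19.8\n"

    for record in virtual_view(SqliteSource(conn, "sensors"), CsvSource(csv_data)):
        print(record)

Production data virtualization layers add far more than this (query optimization, centralized security and control), but the uniform interface is the core of the abstraction described above.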



One of the attributes of most advanced analytics applications is that they assume the organization or person invoking them actually knows which questions are worth asking. Most organizations, however, are still trying to figure out the questions they should be asking.

With that goal in mind, BeyondCore has made available a production release of BeyondCore V, an analytics application that is designed to discover patterns in data in minutes using new data visualization tools.

BeyondCore CEO Arijit Sengupta says while analytics applications can be helpful, most organizations need help framing the question to ask. The end result is major investments in hiring everybody from SQL programmers to data scientists.



If you want to achieve an enterprise view of your data, your solution options basically fall into one of two camps:

  • Move it and integrate.
  • Leave it and virtualize.

Metanautix’s co-founder, Theo Vassilakis, contends that both add unnecessary complexity to enterprise data analytics.

“A lot of the times, that's where the complexity comes from: Oh hold on, let me do a little Informatica here, let me do a little virtualization here, and let me do a little Teradata there,” Vassilakis said during a recent interview. “So, solving the same business problem, some of the data sets you’ll have to move and some of the data sets you're not going to be able to move. Additionally, you end up having to do the moving with one system and then the querying with another system."



Tuesday, 25 November 2014 00:00

Current Australian Preparedness against Ebola

As efforts to contain and eliminate the current Ebola outbreak in West Africa continue, countries around the world are making preparations to be ready in case the virus arrives. The Australian government is also making plans to deal with such an event. Ebola already exists in Australia – but fortunately (so far) only as the subject of research to develop a vaccine in the high-security Australian Animal Health and Research Centre in Geelong. But how does Australian preparedness compare with that of other countries? And what would happen if Ebola cases were declared in Australia in the way they have already occurred in Spain and in the United States?



Despite over half of companies wanting to retain control of their IT disaster recovery in-house, a lack of frequent testing is putting these businesses more at risk of IT downtime than companies which outsource. The mismatch between the high levels of confidence that in-house disaster recovery yields and the high test failure rates indicates that either testing needs to be stepped up or companies would do better to outsource.

This was one of the key findings of research carried out by Plan B, through surveying 150 contacts that attended the BCI World conference in November 2014. All contacts interviewed were within an IT function of their business, with knowledge of the disaster recovery strategy and solution for their business.

Other findings include:



By Mark Kedgley

December 15th marks the anniversary of the discovery of Target's infamous security breach, but has anything really changed in the year that has gone by? Retailer after retailer is still falling foul of the same form of malware attack. So just what is going wrong?

The truth is that there is never going to be a 100 percent guarantee of security, and with today's carefully focused zero-day attacks, the continued reliance on prevention rather than cure is obviously not working. Organizations are blithely continuing day-to-day operations while an attack is in progress because they are simply not spotting the breaches as they occur.

If an organization wants to maintain security and minimise the financial fallout of these attacks, the emphasis has to change. Accept it: stopping every breach is unlikely at best with a prevention-only strategy. Instead, with non-stop, continuous visibility of what is going on in the IT estate, an organization can at least spot in real time the unusual changes that may represent a breach, and take action before it is too late.



As the U.S. begins to feel winter’s icy grasp, a number of cities are turning to GPS data and the Internet of Things to help keep the roads clear during snowstorms.

Boston, Minneapolis and Buffalo, N.Y. (parts of which received 60 inches of snow on Tuesday, according to AccuWeather), are among the many municipalities using machine-to-machine communication and engagement tools to modernize snow removal and other inclement weather requests from citizens. With sensors attached to snowplows and interactive mapping technology, residents stay better informed about travel conditions, while public works departments are seeing gains in efficiency.

Buffalo’s Division of Citizen Services teamed up with the city’s public works department to speed the process of addressing service calls for salting and snow issues. The plowing and salting strategy hasn’t changed – plows still clear the main roads, followed by secondary and side streets. But GPS sensors now attached to the city’s snowplow fleet have made the entire operation a lot more transparent.



Friday, 21 November 2014 00:00

DDoS Attacks Cost Businesses $40,000 an Hour

One of the most common weapons in the cybercriminal’s arsenal is the DDoS attack. According to the network security experts at Digital Attack Map, “A Distributed Denial of Service (DDoS) attack is an attempt to make an online service unavailable by overwhelming it with traffic from multiple sources. They target a wide variety of important resources, from banks to news websites, and present a major challenge to making sure people can publish and access important information.”

While many have heard of these attacks or suffered from the outages they cause, most people do not understand the true business risks these incidents pose. To get a better picture of the threat, Internet security firm Incapsula surveyed 270 firms across the U.S. and Canada about their experiences with DDoS attacks. They found that 49% of DDoS attacks last between 6 and 24 hours. “This means that, with an estimated cost of $40,000 per hour, the average DDoS cost can be assessed at about $500,000—with some running significantly higher,” the company reported. “Costs are not limited to the IT group; they also have a large impact on units such as security and risk management, customer service, and sales.”
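
A quick check of the arithmetic behind those figures, using only the numbers quoted above; the 12.5-hour duration is simply what the quoted $500,000 average implies at $40,000 per hour, not a figure reported by the survey itself.

    # Quick check of the quoted figures: $40,000/hour over the reported
    # 6-24 hour band, plus the duration implied by the ~$500,000 average.
    HOURLY_COST = 40_000  # USD per hour, per Incapsula

    for hours in (6, 12.5, 24):
        print(f"{hours:>4} hours: ${hours * HOURLY_COST:,.0f}")
    # 6 hours: $240,000 | 12.5 hours: $500,000 | 24 hours: $960,000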

Check out the infographic below for more of Incapsula’s findings on the actual costs of DDoS attacks:



The shooting at Sandy Hook Elementary School nearly two years ago shook Newtown, Conn., and has had far-flung reverberations. Tech companies have continued the push for gun control, the Center for Health Care Services launched a crisis intervention app that provides resources for early intervention and treatment of mental illness, and an app launched in January 2014 aims to give law enforcement a 60-second head start on school shootings.

Some jurisdictions even installed mobile panic alarms in schools. Take Ohio, where the tragedy pushed state government to expand its wireless emergency communications by offering radios for schools to communicate directly with local law enforcement during a life-threatening situation.  

The idea of the school radios with emergency buttons -- like "fire alarms," but for police -- came up the day of the Sandy Hook tragedy at a meeting addressing the upgrade of Ohio's Multi-Agency Radio Communication System, or MARCS.



In case you haven’t seen, Uber, the controversial (for taxi companies anyway) new contract ride service, is in trouble. Seems they have a way of knowing where everyone who uses their service goes. It’s available to those inside the company. It’s called “God View.”

Obviously there is considerable power in having such a God view. As Lord Acton reminded us, power tends to corrupt. All it would take would be for someone not using their head to use it for bad reasons. Buzzfeed broke a story about the New York executive for Uber using the God View to track the movements of a reporter and others. Another executive said that Uber might use the tracking information to smear reporters who wrote critically of the company. He, of course, apologized and admitted that saying so was “wrong.”



When Hurricane Sandy came to town, it blew through a slew of cracks in New York’s building infrastructure. Millions of people sat in the dark for days, many unable to wash their hands or flush their toilets. Backup generators, which sat in flooding basements, broke before they had a chance to help. Sewer systems overflowed.

In the months that followed, in an effort to protect its residents from future bouts of city-wide paralysis, the city of New York asked for help safeguarding its buildings from future storms. It called on Russell Unger, the executive director of a nonprofit called Urban Green Council, to create a task force of building experts, property owners and city officials some 200 strong. After six months and more than 5,500 hours of donated time, the task force released a report recommending 33 changes that would make buildings safer. That was in June 2013. So far, the city has passed and implemented 16 of the recommendations.



Friday, 21 November 2014 00:00

A fork in the road

Continuity Central's 2014 Business Continuity Paper of the Year competition is open for entries and, to mark this, we are publishing the winning entry from the 2013 competition. This was first published in the Q1 2013 issue of the Business Continuity and Resiliency Journal.

The paper, entitled 'A fork in the road' was submitted by Ken Simpson. Although it was written in 2013 the issues that it raises are still very pertinent to the position the business continuity profession currently finds itself in.


In 2013 we find ourselves at a collective fork in the road, once again considering the path we should collectively take to the future of the discipline. The current choice is between a wider-focused discipline called business continuity, and the 'management systems' highway known as business continuity management.

Moving forward may require embracing multiple alternative paths and destinations. To grow towards a wider focus we need to become a learning discipline. A wider focus on learning means we reflect on what we need to learn and how we facilitate that learning as a holistic discipline.

This paper discusses three ideas that challenge business continuity (management) professionals to think differently about learning, what it means to learn and ways that we can shape future practice.

Read the paper (PDF)

Thursday, 20 November 2014 15:33

Take Time to Check for Earthquake Damage

SACRAMENTO, Calif.  – When earthquakes occur, some of the damage happens in areas of our homes and businesses that may be nearly impossible to spot without close attention. Residents and business owners in Napa and Solano Counties continue to discover damage from the South Napa Earthquake.

The California Governor’s Office of Emergency Services (Cal OES) and the Federal Emergency Management Agency (FEMA) urge people in those counties to take time to check for any signs of potential damage and register for assistance as soon as possible.

"Earthquake damage sometimes goes unnoticed," said Federal Coordinating Officer Steve DeBlasio. "Earthquakes are different from other disasters, because damages can mimic regular wear and tear or be so subtle that they are hard to find at first. A new crack or stuck door, for example, could be the sign of a serious problem."

Homeowners and renters in Napa and Solano Counties who had damage from the South Napa Earthquake have until Dec. 29, 2014 to apply for disaster assistance from FEMA. Disaster assistance includes grants to help pay for temporary housing, essential home repairs and other serious disaster-related needs not covered by insurance or other sources.

“Every resident and business should take the necessary time to do a thorough double check for damages of their property,” said Cal OES Director Mark Ghilarducci. “It’s important for homeowners and businesses to take advantage of available federal assistance and register as soon as possible.”

Cal OES and FEMA offer the following questions and tips to help everyone spot potential damage:

Exterior Structure:
• Has the house shifted off its foundation? Has it fallen away from the foundation in any place?
• Is the structure noticeably leaning? When looked at from a distance, does it look tilted?
• Do you see severe cracks or openings between the structure and outdoor steps or porches?
• Do you experience seriously increased vibrations from passing trucks and buses?
• Do you see severe cracks in external walls or foundation?
• Are there any breaks in fence lines or other structures that might indicate nearby damage?
• Did you check for damage to ceilings, partitions, light fixtures, the roof, fuel tanks and other attachments to the main frame of the structure?

Chimney:
• Are there cracks between the chimney and the exterior wall or the roof?
• Are there cracks in the liner?
• Did you find unexplained debris in the fireplace?

Utilities:
• Are power lines to your house noticeably sagging?
• Is your hot water heater leaning or tilted?
• Are all the water connections secure including those for pipes, toilets, faucets?

Interior:
• Are any doors and windows more difficult to open or close?
• Is the roof leaking? Is there water damage to the ceiling?
• Has the furnace shifted in any way? Are ducts and exhaust pipes connected and undamaged?
• Do you feel unexplained draftiness? Are any cracks in the walls, poorly aligned window frames or loosened exterior sidings letting in breezes?
• Has the floor separated from walls or stairwells anywhere inside the house?
• Are there cracks between walls and built-in fixtures such as lights, cupboards or bookcases?
• Does the floor feel "bouncy" or "soggy" when you walk on it?
• Have you checked crawl spaces, stairwells, basements, attics and other exposed areas for signs of damage such as exposed or cracked beams, roof leaks and foundation cracks?

Low-interest disaster loans are also available from the U.S. Small Business Administration (SBA) for homeowners, renters, businesses of all sizes, and private non-profit organizations that had damage or loss as a result of the South Napa Earthquake. Disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations.

To apply for disaster assistance, register online at DisasterAssistance.gov or via smartphone or tablet at m.fema.gov. Applicants may also call FEMA at 800-621-3362 or (TTY) 800-462-7585.  People who use 711-Relay or VRS may call 800-621-3362.

FEMA must verify damages for every application. FEMA inspectors have completed more than 2,600 inspections in Napa and Solano Counties. FEMA inspectors display photo identification badges.

Damage inspections by FEMA are free and generally take 30 to 45 minutes, and they are conducted by FEMA contract inspectors who have construction or appraisal expertise and have received disaster-specific training. Inspectors document the damage by checking the building structure and its systems, major appliances and any damaged septic systems and wells.

If applicants discover additional damage to their property after the inspection takes place, they can request another one by calling FEMA at 800-621-FEMA (3362) or (TTY) 800-462-7585.

Additional information on California disaster recovery is available at www.fema.gov/disaster/4193.

Thursday, 20 November 2014 15:32

Guerrilla Business Continuity Management

Guerrilla warfare, guerrilla marketing, guerrilla negotiating – if all these things can benefit from a ‘guerrilla’ point of view, how about business continuity management? The basic concept is to get bigger results from a smaller amount of resources, possibly supplemented by some lateral thinking. Guerrilla soldiers don’t have the big guns and tanks of their adversaries. Guerrilla marketers don’t have the big television and print budgets of their competitors. And guerrilla negotiators learn to think around business deals to turn losing propositions into winning ones. Guerrilla business continuity management can draw on each of these areas to help BCM move forward.



Thursday, 20 November 2014 15:31

Expert Gives Tips for SMBs to Prevent Data Loss

Data loss in any shape or form can prove disastrous for business—especially small to midsize businesses (SMBs). Depending on the occurrence, data recovery can cost from $100 for a commercial data recovery product to thousands for hard drive crashes or catastrophic events such as flood, fire or tornado.

According to David Zimmerman, president of LC Technology International, a global leader in file and data recovery, a big mistake that SMBs make in regard to data protection is that they don’t create and test a formal plan because they don’t expect a big data loss to happen. In an email interview, Zimmerman explained that it’s important for all businesses to at least plan for a disaster:

You have to expect bad things to happen, prepare for the worst. Having a disaster recovery plan in place is mandatory for successfully restoring backups and recovering lost data. Off-site storage that is readily accessible is also essential to help protect data and get the business running after a disaster.



Not all organizations are moving to the external cloud. Some data and applications are going from public to private clouds, said Seth Robinson, senior director of technology analysis at CompTIA.

As I wrote yesterday, CompTIA released its Fifth Annual Trends in the Cloud report, which queried 400 businesses and 400 individuals on cloud adoption. I’ve covered the integration aspects, but here’s something else worth noting: While cloud adoption is becoming more mainstream, at least some adopters are opting to move data to internal clouds.

“It's not that everything is funneling into major cloud providers,” Robinson said. “Companies have different requirements for different pieces of their architecture, and they are finding where those pieces fit best between all these models that are out there. Companies are going to keep moving that way.”



Thursday, 20 November 2014 15:29

The SDDC: Still a Work in Progress

It’s funny how technology always progresses to a higher state even before the current state has made its way to widespread use. First blade servers, then virtualization and then the cloud all made their way into the collective IT consciousness while most enterprise managers were still getting their feet wet with the current “state of the art” technology.

These days, the buzz is all about the software-defined data center (SDDC), which is an amalgam of nearly everything that has happened to IT over the past decade cobbled together into one glorious, trouble-free computing environment. And if you believe that last part, I have a bridge to sell you.

What is clear is that by virtualizing the three pillars of data infrastructure – compute, storage and networking – entire data environments could potentially be created and dismissed at whim. I say “potentially” because the technology to do this simply does not exist yet, at least not in the way that people expect: quickly, easily and with little or no training.



(TNS) — As nationwide alarm over Ebola fades, hospital officials and public health professionals are trying to ensure that lessons learned don’t disappear along with it.

After a Liberian man carrying the disease died last month in a hospital in Dallas and two of his nurses became infected, facilities stepped up training and planning for Ebola cases.

“The mantra is, ‘Don’t be the next Dallas,’ ” said Dr. Andrew Pavia, chief of pediatric infectious diseases for the University of Utah health system.

But as the situation abates, so does the urgency to act. With a quarter of American hospitals losing money in day-to-day operations, according to the American Hospital Association, expensive and time-consuming training for unknown future outbreaks is not always a top priority, experts say.



(TNS) — On the coldest morning since last winter, officials with numerous state agencies gathered Tuesday morning to practice ways to avoid a repeat of last winter’s memorable “Snowmageddon.”

On that cold January day, heavy snow moved into metro Atlanta just as businesses and government agencies sent workers home, and thousands of motorists were stranded overnight — and well into the next day — on jammed, ice- and snow-laden streets and interstates.

Tuesday, the Georgia Emergency Management Agency opened its Emergency Operations Center for a coordination exercise that involved GEMA, the Georgia Department of Transportation, the Georgia Department of Public Safety, the Georgia Department of Natural Resources, the Georgia Forestry Commission and the Georgia National Guard.



The Insurance Institute for Business & Home Safety’s (IBHS) free business continuity planning toolkit, OFB-EZ (Open for Business-EZ), is now available as a free, mobile app.

IBHS member company, EMC Insurance Companies, partnered with IBHS to develop the new app, OFB-EZ Mobile, which guides users through an easy process to create a recovery plan that will help even the smallest business recover and re-open quickly after a disaster.

OFB-EZ Mobile, available for Android devices in the Google Play Store and for Apple devices in the App Store, includes several helpful planning tools, such as evaluation checklists to help business users understand their risks, and forms for users to enter and store important contact information for employees, key customers, suppliers, and vendors.

OFB-EZ is also available at no charge in Adobe Acrobat (pdf) and Microsoft Word formats on the IBHS website at: http://www.disastersafety.org/open-for-business.
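
As a rough illustration of the kind of contact records those planning forms capture, here is a small sketch in Python; the field names are assumptions made for illustration only and do not reflect the app’s actual data model.

    from dataclasses import dataclass

    @dataclass
    class Contact:
        """One entry in a business continuity contact list (illustrative shape only)."""
        name: str
        role: str            # e.g. employee, key customer, supplier or vendor
        phone: str
        email: str
        alternate_phone: str = ""

    # A recovery plan keeps these grouped so the right people can be reached
    # quickly when normal channels are disrupted.
    contact_list = [
        Contact("A. Rivera", "employee", "555-0100", "arivera@example.com"),
        Contact("Acme Parts", "supplier", "555-0142", "orders@example.com"),
    ]

    for entry in contact_list:
        print(f"{entry.role:>12}: {entry.name}, {entry.phone}")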

A recent poll by the Security Executive Council set out to discover which business continuity standards are being used when organizations are developing their business continuity programs.

The results show that ISO 22301 is used most often, with 34 percent of poll respondents benchmarking against it. Surprisingly, however, 30 percent stated that they do not benchmark their business continuity program against any standard.

The other standards in use are:

  • NFPA 1600: 12 percent
  • ISO/IEC 27001: 8 percent
  • BS 25999: 6 percent
  • ISO/PAS 22399: 4 percent
  • Other: 6 percent

The ‘Other’ category included write-in votes for other business continuity-related standards, the most popular being CSA Z1600, HB 221/292, and NIST 800-53.


Blue Coat Systems has published research results that show that the growing use of encryption to address privacy concerns is creating perfect conditions for cyber criminals to hide malware inside encrypted transactions, and even reducing the level of sophistication required for malware to avoid detection.

The use of encryption across a wide variety of websites — both business and consumer — is increasing as concerns around personal privacy grow. In fact, eight of the top 10 global websites as ranked by Alexa deploy SSL encryption technology throughout all or portions of their sites. For example, technology goliaths Google, Amazon and Facebook have switched to an ‘always on HTTPS’ model to secure all data in transit using SSL encryption.

Business-critical applications, such as file storage, search, cloud-based business software and social media, have long used encryption to protect data in transit. However, the lack of visibility into SSL traffic is a potential vulnerability for many enterprises, because benign and hostile uses of SSL are indistinguishable to many security devices. As a result, encryption enables threats to bypass network security and allows sensitive employee or corporate data to leak from anywhere inside the enterprise.
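
As a rough illustration of why that visibility gap exists, the short Python sketch below opens an ordinary TLS connection using only the standard library. Everything sent after the handshake is encrypted before it leaves the host, so a passive security device in the network path sees only ciphertext; www.example.com is used purely as a placeholder host.

    import socket
    import ssl

    HOST = "www.example.com"  # placeholder host for illustration

    context = ssl.create_default_context()

    with socket.create_connection((HOST, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            # The request below is encrypted before transmission. A device sniffing
            # this traffic sees TLS records, not the HTTP headers or body, which is
            # why benign and hostile SSL sessions look alike to tools that do not
            # decrypt the traffic.
            request = b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n"
            tls_sock.sendall(request)
            data = tls_sock.recv(4096)
            print(tls_sock.version(), len(data), "bytes of decrypted response")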



If your employees travel on behalf of your business – whether in the U.S. or abroad – you are legally responsible for their health and safety. In fact, Duty of Care legislation has become increasingly important in the corporate travel world.  Companies that fail to safeguard their employees not only risk the health and safety of their people, but also can face legal, financial and reputational consequences.

Someone in your company must be responsible for ensuring the safety and health of traveling employees (usually, this falls to an administrator in the human resources or risk management department). This should include implementing a well-balanced, company-wide travel risk management plan.



Wednesday, 19 November 2014 16:12

Why Incident Management Matters

Throughout its history, the Business Continuity industry has maintained a steady focus on Preparedness – understanding the organization’s most critical business functions (both technological and operational) and developing Plans to respond to any disruption of those critical functions. That makes sense. How that can be accomplished has been refined and tweaked over time through various ‘standards’ and ‘best practices’. Those activities answer some basic questions:

  • What do we need to protect?
  • How will we prepare to respond to a disruption of those critical functions?

What has always been omitted from that analysis is the third major question:

  • How will we manage that response?

If you ask 20 BCM practitioners that question, you will get a wide variety of answers:



Integration permeates all four stages of cloud adoption, from experimenters to companies that are “brutally transforming” their business and workflows through cloud, a recent report by CompTIA shows. In other words, it’s not so much a barrier to cloud adoption as it is a “hidden challenge,” according to Seth Robinson, senior director of Technology Analysis for the firm.

“Integration pops up in every stage; it's the one that runs through everything,” said Robinson via a call this week. “Even as, in general, the early stages see more technical challenges and the leaders see more behavioral or culture challenge, that challenge of integration — which is more of a technical challenge — does run through every stage.

“And that really goes back to what was known for a long time, that integration tends to be the lion's share of the cost or effort in an IT project.”