Industry Hot News


Everyone makes mistakes, but for social media teams, one wrong click can mean catastrophe. @USAirways experienced this yesterday when it responded to a customer complaint on Twitter with a pornographic image, quickly escalating into every social media manager’s worst nightmare.

Not only is this one of the most obscene social media #fails to date, but the marketers operating the airline’s Twitter handle left the post online for close to an hour. In the age of social media, it might as well have remained up there for a decade. Regardless of how or why this happened, this event immediately paints a picture of incompetence at US Airways, as well as the newly merged American Airlines brand.

It also indicates a lack of effective oversight and governance.

While details are still emerging, initial reports indicate that human error was the cause of the errant US Airways tweet, which likely means it was a copy and paste mistake or the image was saved incorrectly and selected from the wrong stream. In any case, basic controls could have prevented this brand disaster:



When it comes to IT security, the complexity of managing all the technologies involved often seems like a clear-cut case of insult being continuously added to injury.

Looking to address that complexity, Trend Micro today announced an upgrade to the Trend Micro Complete User Protection suite of endpoint security software that makes it much easier both to deploy a mix of IT security technologies and to acquire them in the age of the cloud.

Confronted with a dizzying array of security products in and out of the cloud, Eric Skinner, vice president of solutions marketing for Trend Micro, concedes it’s very likely that customers are unprotected simply because they failed to acquire the right type of security product to address a particular class of known threats. The primary reason for that failure, says Skinner, is often the complex line card of products that security vendors present to customers. Presented with a raft of options and a limited budget, customers often wind up making a best guess as to which endpoint software to deploy.



With so many data centers making up the firmament of the cloud these days, it’s only natural that a pantheon of service providers would emerge to offer disaster recovery as a cloud service.

The latest cloud service provider to join the list of vendors offering such services is VMware, which today is unfurling the VMware vCloud Hybrid Service Disaster Recovery offering as part of its public cloud service.

Angelos Kottas, director of product marketing for the VMware Hybrid Cloud unit, says the VMware disaster recovery service is designed to replicate virtual machines over a wide area network every 15 minutes. Recovery point objectives (RPOs) for the service can be set anywhere from 15 minutes to 24 hours.
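As a rough illustration of what such a setting implies, here is a minimal sketch (a hypothetical helper, not VMware's actual API) showing how the 15-minute-to-24-hour window reduces to a simple bounds check:

    # Hypothetical sketch, not VMware's API: validate a requested recovery
    # point objective (RPO) against the supported 15-minute-to-24-hour window.
    from datetime import timedelta

    RPO_MIN = timedelta(minutes=15)
    RPO_MAX = timedelta(hours=24)

    def validate_rpo(rpo: timedelta) -> timedelta:
        if not (RPO_MIN <= rpo <= RPO_MAX):
            raise ValueError(f"RPO must be between {RPO_MIN} and {RPO_MAX}, got {rpo}")
        return rpo

    validate_rpo(timedelta(hours=1))      # accepted
    # validate_rpo(timedelta(minutes=5))  # would raise: below the 15-minute floor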



It is hard to imagine any people, collectively, being better prepared for earthquakes than the Japanese. Their country is one long seismic zone, which at any moment could, literally, rock and roll. Every Sept. 1, across the archipelago, Japanese engage in exercises devoted to disaster awareness: what to do should the worst happen. The occasion resonates with history. On that date in 1923, the Great Kanto earthquake devastated Tokyo and nearby Yokohama, unleashing fire and fury that left more than 100,000 people dead. After that, Japan resolved that it would prepare for whatever cataclysm nature might throw at it.

And yet.

When a huge earthquake struck Kobe in southern Japan in January 1995, killing more than 6,400, the national government and local officials stood accused of foot-dragging — a slow response that, among other failings, cost some people their lives and left as many as 300,000 others out in the cold, homeless for far too long. Comparable indictments of the authorities were heard in 2011 after the Tohoku earthquake and tsunami, which overwhelmed parts of northeastern Japan and created the enduring nuclear nightmare at the crippled Fukushima Daiichi power plant.



The MSc Organisational Resilience (OR) at Buckinghamshire New University is filling up with students very rapidly. The MSc OR is designed to meet the requirements of business, public and private sectors globally, and of the professionals who are either currently employed in its disciplines or who seek to develop advanced capability. Our approach has been to design and deliver an accessible postgraduate programme that reflects sector currency and assists in the drive towards further professionalism and research capability. This, we believe, is crucial to developing fluency in what is becoming recognised as a coherent, rather than distinct and completely separable, group of linked subjects. In this programme, the development of mastery in understanding these links and their applicability to organisations and business, and of the high-level knowledge, confidence and capability necessary to be fully effective as an OR professional, are considered essential and explicit educational outcomes.

To support these requirements, the MSc OR is also designed to meet the needs of students who are, or who aspire to be, employed as managers and as sector influencers in the wide subject area of OR. Many of those currently working in the sector have long-term experience and are seeking validation and evidence of it through a postgraduate qualification. Applied postgraduate programmes and awards, in particular, are considered the most desirable by companies and employers. Because of the growing inter-relationship and blurred boundaries between the various elements of OR, and the constant development of new risks and the need to mitigate them, industry also requires the development of organisational and individual capability and knowledge across a range of contributing areas. Therefore, this programme is designed to educate those with a specialist interest in the following areas and sub-disciplines:



CIO — It's hard to resist the sparkly nirvana that big data, leveraged appropriately, promises to those who choose to embrace it. You can transform your business, become more relevant to your customers, increase your profits and target efficiencies in your market all by simply taking a look at the data you probably already have in your possession but have been ignoring due to a lack of qualified talent to glean value from it.

Enter the data scientist — arguably one of the hottest jobs on the market. The perfect candidate is a numbers whiz and savant at office politics who plays statistical computing languages like a skilled pianist. But it can be hard to translate that ideal into an actionable job description and screening criteria.

This article explains several virtues to look for when identifying suitable candidates for an open data scientist position on your team. It also notes some market dynamics when it comes to establishing compensation packages for data scientists.



Computerworld — The IT response to Heartbleed is almost as scary as the hole itself. Patching it, installing new certificates and then changing all passwords is fine as far as it goes, but a critical follow-up step is missing. We have to fundamentally rethink how the security of mission-critical software is handled.

Viewed properly, Heartbleed is a gift to IT: an urgent wake-up call to fundamental problems with how Internet security is addressed. If the call is heeded, we could see major improvements. If the flaw is just patched and then ignored, we're doomed. (I think we've all been doomed for years, but now I have more proof.)

Let's start with how Heartbleed happened. It was apparently created accidentally two years ago by German software developer Robin Seggelmann. In an interview with the Sydney Morning Herald, Seggelmann said, "I was working on improving OpenSSL and submitted numerous bug fixes and added new features. In one of the new features, unfortunately, I missed validating a variable containing a length."
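To see why a missed length check matters, here is a deliberately simplified Python sketch of the bug pattern (OpenSSL itself is C code and differs in detail): an echo routine that trusts a caller-supplied length field will return bytes that were never part of the request.

    # Illustrative sketch of the Heartbleed bug pattern only; not OpenSSL's code.
    ADJACENT_MEMORY = b"...session keys, passwords, other secrets..."

    def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
        # BUG: claimed_len is never checked against len(payload), so the
        # reply can include bytes that were never part of the request.
        buf = payload + ADJACENT_MEMORY   # simulates neighboring process memory
        return buf[:claimed_len]

    def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
        # FIX: validate the length field before trusting it.
        if claimed_len > len(payload):
            raise ValueError("claimed length exceeds actual payload")
        return payload[:claimed_len]

    print(heartbeat_vulnerable(b"hello", 40))  # b"hello" plus 35 leaked bytes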



So what will you choose: public cloud, private cloud – or perhaps a solution in between? The flexibility and scalability of the cloud have also made it well suited to partial use, namely the hybrid cloud solution. Those who can’t quite make up their mind can have as much or as little of the cloud as suits them. However, it’s better still to approach this resource with a clear IT strategy in mind and to make a hybrid cloud solution a deliberate choice, rather than a vague default. Here are two possibilities that could drive a hybrid cloud decision.



IDG News Service (San Francisco Bureau) — Canada's tax authority and a popular British parenting website both lost user data after attackers exploited the Heartbleed SSL vulnerability, they said Monday.

The admissions are thought to be the first from websites confirming data loss as a result of Heartbleed, which was first publicized last Tuesday. The flaw existed in OpenSSL, a cryptographic library used by thousands of websites to enable encryption, and was quickly labeled one of the most serious security vulnerabilities in years.

The Canada Revenue Agency (CRA) blocked public access to its online services last Tuesday in reaction to the announcement, but that wasn't fast enough to stop attackers from stealing information, it said on its website.



IDG News Service (Washington, D.C., Bureau) — More U.S. Internet users report they have been victims of data breaches, while 80 percent want additional restrictions against the sharing of online data, according to two surveys released Monday.

While nearly half of all U.S. Internet users avoid at least one type of online service because of privacy concerns, according to a survey by marketing research firm GfK, 18 percent reported as of January that important personal information was stolen from them online, a poll from the Pew Research Center's Internet and American Life Project found. That's an increase from 11 percent last July.

"As online Americans have become ever more engaged with online life, their concerns about the amount of personal information available about them online have shifted as well," Mary Madden, a senior researcher at Pew, wrote in a blog post. "When we look at how broad measures of concern among adults have changed over the past five years, we find that internet users have become more worried about the amount of personal information available about them online."



“We don’t need no education . . .”

I couldn’t help but think of that line from a Pink Floyd song when I saw the headline on an eSecurity Planet article, “Majority of Employees Don’t Receive Security Awareness Training.”

The article goes on to report on a study by Enterprise Management Associates called Security Awareness Training: It's Not Just for Compliance. The study interviewed 600 people at companies of all sizes, from the very small to the very large, and what it found was that more than half of employees not working in IT or security receive no security awareness training. However, business size did make a difference – midsize businesses fared the worst when it comes to security education.



Tuesday, 15 April 2014 14:11

Is the Virtual Data Center Inevitable?

Given the state of virtual and cloud-based infrastructure, it’s almost impossible not to think about end-to-end data environments residing in abstract software layers atop physical infrastructure.

But is the virtual data center (VDC) really in the cards? And if so, does it mean all data environments will soon gravitate toward these ethereal constructs, or will there still be use cases for traditional, on-premises infrastructure?

Undoubtedly, a fully virtualized data operation offers many advantages. Aside from the lower capital and operating costs, it will be much easier to support mobile communications, collaboration, social networking and many of the other trends that are driving the knowledge workforce to new levels of productivity.



I saw an encouraging sign the other day in a TechTarget 2014 Market Intelligence report. It provided a list of the top IT projects for this year based on a survey of IT professionals. Number one on the list was server virtualization. And number two? Business Continuity/Disaster Recovery (BC/DR).

That’s big news for us at the Disaster Recovery Preparedness Council.  It’s our mission to raise awareness of the need for BC/DR planning and help IT professionals to benchmark their current DR practices and implement ways to improve DR planning and recovery in the event of an outage or disaster.

So, given the results of the TechTarget report, you need to ask yourself where BC/DR falls on your list of priorities this year. Maybe you've got a formal plan and a budget for BC/DR, but many companies still do not. That doesn't mean you can't start to develop or improve your business continuity strategy today.



Monday, 14 April 2014 15:08

Take Off the Blinders

It’s been an extraordinary month, with scenarios that include a missing plane (see Divya Yadav’s research note); another round of deaths at Fort Hood just as the report on lessons learned in the Washington Shipyard was released; a Supreme Court decision that makes us wonder if the justices believe that free speech is the same as money; and, right in our backyard, a devastating mudslide from which not all the bodies have been removed.

The month also included the first meeting of the mayor’s City of Seattle Disaster Recovery Plan Executive Advisory Group, of which I am a member. This group is charged with imagining how recovery efforts, not the response itself, might proceed, and with considering how some planning now might make decisions easier after a catastrophic event such as an earthquake:  “what policy changes, planning or other strategies should be acted on now?  How will we ensure we have the necessary resources (staff, equipment, facilities, etc.) to get back to acceptable levels of service and to meet our legal mandates?”



Monday, 14 April 2014 15:05

AI Gets its Groove Back

Computerworld — Try this: Go online to translate.google.com.

In the left-hand input box, type, "The spirit is willing, but the flesh is weak." In the right-hand box, decide which language you want it translated to. After it's translated the first time, copy the translated text and paste it into the left-hand box for conversion back into English.

If you don't get exactly the original text, the back-translation will in all likelihood still reflect at least part of the original thought: That the actions of the subject fell short of his or her intentions and not that the wine was good but the meat was tasteless, which the phrase could mean in a literal translation.



IDG News Service — Four researchers working separately have demonstrated that a server's private encryption key can be obtained using the Heartbleed bug, an attack previously thought possible but unconfirmed.

The findings come shortly after a challenge created by CloudFlare, a San Francisco-based company that runs a security and redundancy service for website operators.

CloudFlare asked the security community if the flaw in the OpenSSL cryptographic library, made public last week, could be used to obtain the private key used to create an encrypted channel between users and websites, known as SSL/TLS (Secure Sockets Layer/Transport Layer Security).



Due to the complexities of making products, most manufacturers are used to having large influxes of data from machines, processes, shipping, etc. What may be new to these companies, though, is having tools to retrieve actionable information from these piles of Big Data.

LNS Research and Mesa International teamed up to compile a survey of manufacturers on how they are using new technologies. Among the information gathered was how these companies felt they could use Big Data from the manufacturing plants and the overall enterprise. Of the more than 200 responses, 46 percent felt that Big Data analysis could help them “better forecast products” and production. Another 39 percent believed that Big Data mining will allow them to “service and support customers faster.” Other metrics from the survey include:



The number of countries with downgraded political risk ratings grew in the last year, as all five emerging market BRICS countries (Brazil, Russia, India, China, South Africa) saw their risk rating increase, according to Aon’s 2014 Political Risk Map.

As a result, countries representing a large share of global output experienced a broad-based increase in political risk including political violence, government interference and sovereign non-payment risk, Aon said.

The 2014 map shows that 16 countries were downgraded in 2014 compared to 12 in 2013. Only six countries experienced upgrades (where the territory risk is rated lower than the previous year), compared to 13 in 2013.

Aon noted that Brazil’s rating was downgraded because political risks have been increasing from moderate levels as economic weakness has increased the role of the government in the economy.



Monday, 14 April 2014 15:01

Business Continuity Flash Blog

On Tuesday 18th March 2014, as part of the Business Continuity Awareness Week activities, we witnessed the first ever BC Flash Blog. This is probably a new term to most readers: it is a virtual flash mob, but instead of a dance routine the participants wrote and published their own blog posts or articles.

The event featured 22 writers, from all sectors of the BC industry – and from various corners of the globe. All the articles were on the same subject, and published at the same time. In keeping with the BCAW theme, the subject was “Counting the costs, and benefits, for business continuity”, with each writer taking their own, unique, perspective on this issue.

If you haven’t already done so, you can find links to all 22 of these blogs here. If we do nothing else, we can at least pay these writers the respect of reading their work.



CSO — Size matters when it comes to security, according to Davi Ottenheimer. Ottenheimer, senior director of trust at EMC, titled his presentation at SOURCE Boston Wednesday "Delivering Security at Big Data Scale," and began with the premise that, "as things get larger, a lot of our assumptions break."

The advertised promise of Big Data is that it will help enterprises make better decisions and more accurate predictions, but Ottenheimer contends that is placing far too much trust in systems that are not well secured. "We're making the same mistakes we've made before," he said. "We're not baking security into Big Data; we're expecting somebody else to do it later on." Ottenheimer, who is completing a book titled "Realities of Big Data Security," said he does defense research and focuses on avoidance and detection. "Avoidance is the best way to escape a damaging attack," he said. "You can move data centers at real-time speeds. You can keep the old one as a honeypot, and just observe what's going on with it without causing any harm. Big Data allows it now more than ever."



Qualification: Diploma

Study mode: Distance learning

Location: High Wycombe

Credits: 90

As a further membership option, the BCI and Bucks New University, via their unique partnership, have designed a programme to develop and deliver this new qualification over three ten-week distance-learning modules.

Is this course for me?

The new qualification – the BCI Diploma – is a 30-week, 90-credit professional course aimed at the following prospective students:



IDG News Service — Much of the talk on the Web this week has focused on the Heartbleed security fiasco. Still unsure as to what's happening with Heartbleed and how it impacts you? Here's our quick-and-dirty guide.

What exactly is Heartbleed?

Heartbleed is a vulnerability in OpenSSL, an open-source implementation of the SSL/TLS encryption protocol. When exploited, the flaw could expose information stored in a server's memory, including not-at-all-trivial things like your username, password, and other bits of personal data. Since OpenSSL is particularly popular among website administrators, a significant number of your favorite websites may be affected by Heartbleed--research firm Netcraft puts the number at half a million sites.

Should I panic?

Panicking is not terribly productive and, since it involves a lot of running around like a chicken with your head cut off, potentially exhausting. That's no way to go through life. Still, this is a serious matter, and it'll require a little more action on your part than adopting a "this too shall pass" mindset.



Network World — The Heartbleed Bug, a flaw in OpenSSL that would let attackers eavesdrop on Web, e-mail and some VPN communications, is a vulnerability that can be found not just in servers using it but also in network gear from Cisco and Juniper Networks. Both vendors say there's still a lot they are investigating about how Heartbleed impacts their products, and to expect updated advisories on a rolling basis.

Juniper detailed a long list in two advisories, one here and the other here. Cisco acted in similar fashion with its advisory.

"Expect a product by product advisory about vulnerabilities," says Cisco spokesman Nigel Glennie, explaining that Cisco engineers are evaluating which Cisco products use the flawed versions of OpenSSL that may need a patch though not all necessarily will. That's because Cisco believes it's a specific feature in OpenSSL that is at the heart of the Heartbleed vulnerability and that it's not always turned on in products.



IDG News Service — Website and server administrators will have to spend considerable time, effort and money to mitigate all the security risks associated with Heartbleed, one of the most severe vulnerabilities to endanger encrypted SSL communications in recent years.

The flaw, which was publicly revealed Monday, is not the result of a cryptographic weakness in the widely used TLS (Transport Layer Security) or SSL (Secure Sockets Layer) communication protocols, but stems from a rather mundane programming error in a popular SSL/TLS library called OpenSSL that's used by various operating systems, Web server software, browsers, mobile applications and even hardware appliances and embedded systems.

Attackers can exploit the vulnerability to force servers that use OpenSSL versions 1.0.1 through 1.0.1f to expose information from their private memory space. That information can include confidential data like passwords, TLS session keys and long-term server private keys that allow decrypting past and future SSL traffic captured from the server.



I don’t think I’ve ever seen the reaction to an Internet security problem like the reaction I’m seeing with the Heartbleed bug. I expected to get email messages from security experts, but not the volume that has been coming in. Then I logged on to Facebook, and my feed was in pandemonium. People are totally freaked out by the news of this vulnerability, but I’m not sure which concerns them more: That their personal information may be compromised or that they are going to have to change a lot of passwords.

Let’s take a deep breath and get some points straight. I reached out to a number of experts to get their insights into this issue.

First, we should all take this very seriously. For those who may not understand what the Heartbleed bug is, the Heartbleed bug website explains it clearly:



If I had a top ten list of PR role models, Tesla and Elon Musk would be on it. Musk got a bum review in the New York Times, and his damage control strategy was to demonstrate that the reviewer was less than honest. I thought there was no way he could win that battle. He did. The US government, typical of government-by-headline, launched a safety investigation into the cars after a battery fire caused lurid news stories. What did Tesla do? It used the opportunity to make clear to the world just how safe its cars actually are. Lemons to lemonade. (I blogged on these stories earlier – just enter Tesla in the search on this blog.)



Computerworld — A federal court in New Jersey this week affirmed the Federal Trade Commission's contention that it can sue companies on charges related to data breaches, a major victory for the agency.

Judge Esther Salas of the U.S. District Court for the District of New Jersey ruled that the FTC can hold companies responsible for failing to use reasonable security practices.

Wyndham Worldwide Corp. had challenged a 2012 FTC lawsuit in connection with a data breach that exposed hundreds of thousands of credit and debit cards and resulted in more than $10.6 million in fraud losses.



CIO — As government CIOs begin consolidating their agency data centers, they should leave the forklift in park.

That was the message senior officials in the government IT sphere delivered in a panel discussion on how to maximize return on investment through overhauling the sprawling federal data center apparatus — which numbers well into the thousands of facilities.

It's not enough simply to pack up one set of servers and reshelve them in another location. Government IT leaders stress that any data center overhaul cannot simply be an IT-driven initiative that amounts to a check-box exercise. The process should entail considered engagement with the business lines of the agency, they say.



Network World — The Heartbleed Bug, basically a flaw in OpenSSL that would let savvy attackers eavesdrop on Web, e-mail and some VPN communications that use OpenSSL, has sent companies scurrying to patch servers and change digital encryption certificates, and users scrambling to change their passwords. But who's to blame for this flaw in the open-source library, which some say could also impact routers and even mobile devices?

German software engineer Robin Seggelmann of Munster has reportedly accepted responsibility for introducing what experts are calling a mistake of catastrophic proportions into OpenSSL, which is used by millions of websites and servers, leaving them open to theft of data and passwords – a flaw that many believe has already been exploited by cyber-criminals and government intelligence agencies.

"Half a million websites are vulnerable, including my own," wrote security expert Bruce Schneier in his blog, pointing to a tool to test for the Heartbleed Bug vulnerability. He described Heartbleed as a "catastrophic bug" in OpenSSL because it "allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software." It compromises secret keys used to identify service providers and encrypt traffic, he pointed out. "This means anything in memory--SSL private keys, user keys, anything--is vulnerable."



By staff reporter

Security experts consider the Heartbleed bug to be a very serious issue, one that will require action by most Internet users – not just businesses – bringing the topic of information security home for web users everywhere.

“It's a pretty significant bug, particularly since it impacts popular open-source web servers such as Apache (the most popular web server) and Nginx,” explains ISACA director of emerging business and technology, Ed Moyle. “One significant area that has been covered less in the industry press is the impact this issue could have outside of the population of vulnerable web servers. Now clearly, the impact to web servers is a big deal. But consider for a moment what else might be impacted by this.”

In other words, he explains, consider the impact on embedded systems and "special purpose" systems (like biomed or ICS). “OpenSSL has a very developer-friendly license, requiring only attribution for it to be linked against, copied/pasted or otherwise incorporated into a derivative software product. It is also free. This makes it compelling for developers to incorporate it into anything they're building that requires SSL functionality: everything from toasters to ICS systems, medical equipment, smoke detectors, remote cameras, consumer-oriented cable routers and wireless access points. It's literally the path of least resistance as a supporting library/toolkit when developing new software that requires SSL.



Friday, 11 April 2014 15:48

Exercise! Exercise! Exercise!

You could say that those of us who work in preparedness are a little obsessed with making sure we’ve got our emergency kits stocked and ready, our emergency plans up to date, and our neighbors are ready too.  So we’ve got a few households in Georgia ready for a public health emergency (and a few others around the country – don’t forget about friends and family!), but how do we get the country ready?  How do we get the government and other response organizations prepared?

The answer, just like learning how to ride a bike, is practice. Practice, practice, and more practice. And this past week, CDC participated in a government-wide exercise that tested our preparedness and response capabilities. The National Exercise Program Capstone Exercise (NEPCE) 2014 is a congressionally mandated preparedness exercise to test, assess, and improve the nation's preparedness and resiliency. CDC's Office of Public Health Preparedness and Response (PHPR) and the National Center for Environmental Health and the Agency for Toxic Substances and Disease Registry (NCEH/ATSDR) worked together to participate in this event.

NEPCE 2014 was designed to educate and prepare the whole community – from schools to businesses and hospitals to families – to prevent, mitigate, protect against, respond to, and recover from terrorist acts and catastrophic incidents. This was the first Capstone Exercise, formerly known as the National Level Exercise, incorporated into the newly revised National Exercise Plan (NEP), concluding and building on two years of smaller-scale exercises. The NEP includes exercises of all types, designed to engage all levels of government, non-governmental organizations and private sector organizations.

This exercise was the culmination of more than nine months of interagency planning among DHS, HHS and CDC, along with our state and local partners. CDC planning officials attended planning meetings in Washington, D.C. to integrate CDC operations into the exercise. Additionally, CDC deployed four public health personnel with the HHS Incident Response Coordination Team to Sacramento, California, during the exercise to simulate coordination activities that CDC would normally provide to the impacted population.

History Repeats Itself for Exercise Purposes

The exercise scenario centered on a 9.2 magnitude earthquake in Alaska that caused catastrophic damage across multiple communities, requiring federal response and recovery assistance. A similar event struck Alaska almost exactly 50 years earlier, in 1964.

As it did 50 years ago, the earthquake resulted in several tsunamis with substantial threat and damage to critical infrastructure like buildings, bridges, and roads, along with injuries, deaths, and population displacement across Alaska and Canada. While national officials confronted earthquake and tsunami impacts, disruption in and around Juneau, the capital, resulted in a requirement for government entities to relocate to alternate sites.

RADM Scott Deitchman, M.D., M.P.H., USPHS, Assistant Surgeon General and Associate Director for Environmental Health Emergencies in NCEH/ATSDR, served as the Incident Manager and lead for the exercise. He remarked, “I appreciated the opportunity the exercise gave us, like the rest of government, to exercise how we would respond to a catastrophic disaster of this magnitude. A real earthquake, like a nuclear detonation, suddenly puts you in a situation where the things we take for granted – communications systems to give messages to the public, transportation systems to send responders to the area, data systems for collecting surveillance data – all are gone. How do we launch a public health response in that setting? In exercises like this, the goal is to ‘test to fail’ – to see where things break down, in a setting where we can learn without failing people in actual need. That gives us the opportunity to strengthen our response systems in anticipation of a real disaster.”

One of CDC’s primary missions is to ensure that we are prepared to assist the nation to respond to, recover from, and alleviate the impacts of public health disasters. Participation in last week’s exercise enhanced our overall ability to support our nation during emergency situations.

During this and other exercises, all aspects of CDC’s response capabilities are tested. Managed out of CDC’s Emergency Operations Center (EOC), this exercise brought together experts in public health preparedness, as well as those with expertise in earthquakes. During a real emergency, CDC would activate the EOC in order to help coordinate the Agency’s response. Although no exercise will truly mimic a real-life emergency, we do everything possible to imagine what could happen – from power outages to delays in supplies reaching affected areas to incorrect media reports and wild rumors – in order to test how we would respond. After the exercise is over, we work with the other organizations involved and analyze what went well and what could be improved upon next time.

David Maples, Exercise Lead for OPHPR’s Division of Exercise Operations, commented, “The Alaska Shield earthquake exercise provided CDC the primary venue to validate our All-Hazards Plan and its Natural Disaster Annex and Earthquake Appendix.  We engaged our whole of community partners in this exercise at the federal, state and local levels, our tribal partners as well as several non-governmental organizations and private public health partners.  Maintaining these relationships is essential to our ability to get our public health guidance and messaging into the hands of those impacted by an event like this.  In a catastrophic natural disaster similar to the one we just exercised, CDC’s mission is just the beginning. Similar to our real world response to Superstorm Sandy, the recovery phase of an event like this will challenge our public health capabilities for some time.  But that is the goodness of our Public Health Preparedness and Response exercise program; it gives us the opportunity to prepare for no-notice disasters and emergent outbreaks before they occur.”


Thursday, 10 April 2014 17:38

Five Questions with a Food Fraud Expert

BALTIMORE—After his Food Safety Summit session on food fraud and economically motivated adulteration, I caught up with Doug Moyer, a pharmaceutical fraud expert and adjunct with Michigan State University’s Food Fraud Initiative. Here are a few of his insights into top challenges for the supply chain, and the biggest risks to be wary of as a consumer.

What are the riskiest foods for fraud?

The most fraudulent are the perennials: olive oil, honey, juices and species swapping in fish. Most people underestimate the amount of olive oil adulteration, but the amount of what is labeled “extra virgin olive oil” that Americans buy is more than Italy could ever produce. I buy certified California olive oil because I’ve sat down with that group and I know that their industry is really concerned about standards and have established a rigorous certification process. I am also really concerned about species swapping in the seafood industry. I love sushi, but I have a lot of concerns eating it, and they are not always about health. I don’t like feeling duped, and a lot of companies now have to contend with that reputation issue after so many studies have found that the odds can be incredibly low that you are eating the fish that you think you ordered—as little as 30% in some sushi restaurants in Los Angeles, for example.



By Geary W. Sikich

The post-crisis recovery phase is one of the least addressed in planning, training and simulations. It is an area that, if not properly managed, can carry financial, reputational and operational costs. Guidelines for post-crisis recovery are lacking, and many entities lose focus when it comes to discussing post-crisis recovery operations. It may be that post-crisis recovery is one of the most complicated elements of the Business Continuity Lifecycle, and that no two recoveries will follow the same pattern. However, the post-crisis recovery process can be segmented into manageable pieces that can be undertaken using a project management approach.

The diagram below provides a top level graphic depiction of the typical cycle of event response, management, recovery and resumption of operations. I have added the emergency response and crisis management elements as they intermingle with business continuity. I have simplified the cycle to four major transition points.
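The diagram itself is not reproduced here, but the segmentation idea is easy to sketch in a few lines; the phase names below are generic placeholders, not the author's own labels for the four transition points.

    # Generic sketch: model the lifecycle as explicit phases so that each
    # handoff can be tracked as a project milestone. Phase names are
    # placeholders, not the author's labels.
    PHASES = [
        "normal operations",
        "emergency response",
        "crisis management",
        "business recovery",
        "resumption of operations",
    ]

    # Four transitions fall between the five phases above.
    for i, (src, dst) in enumerate(zip(PHASES, PHASES[1:]), 1):
        print(f"transition {i}: {src} -> {dst}")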



Andrew Waite gives an overview of the Heartbleed vulnerability.

This week has been an interesting and busy one for those on both sides of the information security fence: a critical vulnerability, dubbed Heartbleed, was publicly disclosed in the widely used library OpenSSL, which forms the core of many SSL/HTTPS provisions.

What is it?

Without getting too technical, the Heartbleed flaw allows a malicious and unauthorised third party to access protected data in memory. The exact data accessed is random, but there have been corroborated reports that it can expose clear-text passwords, private SSL keys and other sensitive data whose exposure would negatively impact the security of your systems, users and clients.

How to determine if you’re vulnerable

The vulnerability affects any service utilising OpenSSL versions 1.0.1 through 1.0.1f. If you (or your in-house sysadmin) can confirm that your SSL implementation isn’t running any of the affected versions, you’re safe from this particular weakness. Unfortunately, OpenSSL is widely used and embedded into many other appliances and application stacks.
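As one concrete example, Python's standard ssl module reports which OpenSSL build the interpreter is linked against, so a minimal local check is possible. Note that this covers only that one runtime, and some distributions backport the fix without changing the reported version string.

    # Minimal sketch: is this Python runtime linked against an OpenSSL
    # in the affected 1.0.1 through 1.0.1f range? Caveats: this checks
    # only the local interpreter, and vendors sometimes backport fixes
    # without bumping the version.
    import ssl

    def locally_affected() -> bool:
        major, minor, fix, patch, _status = ssl.OPENSSL_VERSION_INFO
        # 1.0.1 itself is patch 0; releases 1.0.1a-1.0.1f are patches 1-6
        return (major, minor, fix) == (1, 0, 1) and patch <= 6

    print(ssl.OPENSSL_VERSION)
    print("potentially vulnerable" if locally_affected() else "outside the affected range")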

Since the disclosure, a number of websites have been released that let you enter your system name or IP address and check it for you. However, what a third party may do with the information once it determines your system is vulnerable could be a risk in its own right…



Tamiflu (the antiviral drug oseltamivir) shortens symptoms of influenza by half a day, but there is no good evidence to support claims that it reduces admissions to hospital or complications of influenza. This is according to the updated Cochrane evidence review, published today (10th April 2014) by The Cochrane Collaboration, the independent, global healthcare research network and The BMJ.

Evidence from treatment trials confirms an increased risk of nausea and vomiting. And when Tamiflu was used in prevention trials, there was an increased risk of headaches, psychiatric disturbances, and renal events.

Although the drug can reduce the risk of symptomatic influenza when used preventatively, it is unproven that it can stop people from carrying the influenza virus and spreading it to others.



CIO — In 1998, when Paul Rogers started at GE, implementing optimization software at a coal-fired power plant was easier said than done. Management understood and worked with GE to develop the software. Within the plant itself, though, the vast majority of employees didn't know how to use a computer, let alone software, and were very suspicious of the system.

These days, says Rogers, now GE's chief development officer, the tables have turned. Smartphone-toting plant employees know firsthand how technology changes their lives as consumers — and they want to know why the industrial environment isn't like their home environment.

"They want to optimize equipment, and that's a sign that the world is ready," Rogers says. Put another way: "My daughter has radically different experiences about how the world works."



CIO — The past two weeks brought big news in the public cloud computing market. In the course of four days, three technology giants made bold statements about their intent to be one of the most important public cloud providers — and, indeed, position themselves to be the No. 1 cloud company on the planet.

For anyone using cloud computing, what happened last week indicates how seriously the biggest companies in technology view the market, and how cloud adopters need to evaluate their strategy in light of the ongoing price competition upon which the leaders have embarked.

Here's the high-level overview of what was announced:



Business continuity is often about reinforcing existing infrastructure or eliminating sources of business disruption. Bringing in techniques to accelerate or multiply results thanks to good business continuity may not be so frequent, but here’s one that may well do that. It’s version control, which is used when several knowledge workers need to simultaneously work on the same computer files to create advantage for the organisation – but without stepping on each other’s toes. Version control technology started in software development. However, it can be used for projects to create web content, coordinated product rollouts, corporate business plans and more.



PC World — By now you've likely heard about the Heartbleed bug, a critical vulnerability that exposes potentially millions of passwords to attack and undermines the very security of the Internet. Because the flaw exists in OpenSSL--which is an open source implementation of SSL encryption--many will question whether the nature of open source development is in some way at fault. I touched base with security experts to get their thoughts.

Closed vs. Open Source

First, let's explain the distinction between closed source and open source. Source refers to the source code of a program--the actual text commands that make the application do whatever it does.

Closed source applications don't share the source code with the general public. It is unique, proprietary code created and maintained by internal developers. Commercial, off-the-shelf software like Microsoft Office and Adobe Photoshop are examples of closed source.



A new report from application specialists Camwood reveals that, in the wake of recent migrations following the conclusion of support for the Windows XP operating system, and with the accelerating pace of change in the IT department, IT directors and managers now see near constant change and migration projects as the new norm. Coping with this change has now become a primary concern for IT departments.

According to the report, 90% of IT decision makers believe that the pace of change in IT is accelerating, and that this presents a significant challenge. 72% find the pace of change in IT ‘unsettling’. 93% also agree that, in the new IT environment, a flexible IT infrastructure is key to their organisation’s success, with 79% believing that IT departments that don’t adapt risk demise.



Wednesday, 09 April 2014 18:13

Monitoring Food Safety from Farm to Fork

BALTIMORE—The Food and Drug Administration is increasingly harnessing data-driven, risk-based targeting to examine food processors and suppliers under the Food Safety Modernization Act. At this week’s Food Safety Summit, the FDA’s Roberta Wagner, director of compliance at the Center for Food Safety and Applied Nutrition, emphasized the risk-based, preventative public health focus of FSMA.

While it has long collected extensive data, the agency is now expanding and streamlining analysis from inspections to systematically identify chronic bad actors. FSMA regulations and reporting are transforming how the FDA meets many of its challenges, but so is technology. According to Wagner, whole genome sequencing in particular has tremendous potential to change how authorities and professionals throughout the food chain look at pathogens. WGS offers rapid identification of the sources of foodborne pathogens that cause illness, and can help identify these pathogens as resident or transient. In other words, by sequencing pathogens (and sharing the sequences in GenomeTrakr, a coordinated state and federal database), scientists can track where contamination occurs during or after production.



Hurricane forecasters are sounding a warning bell for the U.S. East Coast in their latest predictions for the 2014 hurricane season, even as overall tropical storm activity is predicted to be much less than normal.

WeatherBell Analytics says the very warm water off of the Eastern Seaboard is a concern, along with the oncoming El Niño conditions.

In its latest commentary, forecaster Joe Bastardi and the WeatherBell team note:

“We think this is a challenging year, one that has a greater threat of higher intensity storms closer to the coast, and where, like 2012, warnings will frequently be issued with the first official NHC advisory.”

WeatherBell Analytics is calling for a total of 8 to 10 named storms, with 3-5 hurricanes and 1-2 major hurricanes.



Wednesday, 09 April 2014 18:12

London’s flood risks reviewed

The London Assembly Environment Committee has published a summary of the flood risks facing the UK capital.

24,000 properties in London are at significant risk of river flooding, and the Environment Agency estimates that plans currently under development could protect 10,000 of these.

The Committee warns that the risks of flooding may be increasing. The effects of climate change in southern England could mean drier summers and wetter winters. More heavy rain in the Thames region would increase surface water risk and may lead to more river flooding in London.

Ways to reduce flood risk include sustainable drainage and river restoration, which create space for flood waters to be held higher in the river catchment and soak back into the ground. Allowing low-lying areas to flood safely at times of high water flow should protect homes, roads and businesses.

Murad Qureshi AM, Chair of the Environment Committee, says:

“London needs to bring back its rivers to protect itself from inevitable flooding in the future. The more we can restore natural banks to London’s rivers, the less likely heavy rain will cause the degree of flooding we saw in the early part of this year.”

“Heavy or prolonged rain locally or upstream can cause rivers to flood. Tens of thousands of properties are at high or medium risk of river flooding. This is not just from the Thames, but also from the many smaller rivers that flow into it. A lot of people don’t know where their local rivers are, until they escape their channels.”

Read Flood Risks in London Summary of Findings (PDF).

Wednesday, 09 April 2014 18:10

Leave the CIO Alone

Computerworld — My son is a chief technology officer. Some companies have a chief digital officer. Can chief data wrangler be far behind?

What's so bad about being a CIO?

There seems to be a trend to come up with a title to replace "CIO" that encompasses the latest direction of the profession. Titles are reflecting an emphasis on big data, social networking and data analytics.

This doesn't happen with other titles. Take the chief financial officer. I have yet to hear of a CFO becoming the chief mergers officer when the company contemplates its first merger or acquisition. The CFO's role changes to encompass some new duties but that officer remains in charge of finance. And I suspect that most CFOs would not appreciate a change in title every time their role was redefined. And yet, add big data to IT's functions and someone says we need a new title to reflect that. But we really don't. The CIO remains in charge of the enterprise's information and data, big or otherwise.



CSO — Symantec has declared 2013 the year of the "mega-breach," placing security pros on notice that they stand to lose big from phishing, spear-phishing and watering-hole attacks.

The company released Tuesday its Internet Security Threat Report for 2013, which found that eight breaches exposed the personal information of more than 10 million identities each. By comparison, 2012 had only one breach that size and in 2011 there were five.

The number of massive data breaches in 2013 made it the "year of the mega-breach," Symantec said. Information stolen included credit card information, government ID numbers, medical records, passwords and other personal data.



Wednesday, 09 April 2014 18:08

Banks Ordered to Add Capital to Limit Risks

Federal regulators on Tuesday approved a simple rule that could do more to rein in Wall Street than most other parts of a sweeping overhaul that has descended on the biggest banks since the financial crisis.

The rule increases to 5 percent, from roughly 3 percent, a threshold called the leverage ratio, which measures the amount of capital that a bank holds against its assets. The requirement — more stringent than that for Wall Street’s rivals in Europe and Asia — could force the eight biggest banks in the United States to find as much as an additional $68 billion to put their operations on firmer financial footing, according to regulators’ estimates.
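The arithmetic behind those figures is simple; here is a worked sketch with invented numbers (not any actual bank's balance sheet):

    # Leverage ratio = capital / assets. Raising the requirement from
    # roughly 3% to 5% raises the capital needed against the same assets.
    # Figures are invented for illustration.
    def additional_capital(assets: float, current_capital: float, new_ratio: float) -> float:
        return max(0.0, assets * new_ratio - current_capital)

    assets = 2_000e9                # $2 trillion of assets
    capital = assets * 0.03         # currently holding about 3%, i.e. $60B
    shortfall = additional_capital(assets, capital, 0.05)
    print(f"extra capital needed at 5%: ${shortfall / 1e9:.0f}B")  # $40B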

Faced with that potentially onerous bill, Wall Street titans are expected to pare back some of their riskiest activities, including trading in credit-default swaps, the financial instruments that destabilized the system during the financial crisis.



Wednesday, 09 April 2014 18:07

Options Abound for the Private Cloud

Mistrust of the public cloud is driving many enterprises toward the pursuit of private clouds. For critical data and applications, this may seem like a no-brainer as it is wiser to keep the important stuff on trusted infrastructure.

Not all private clouds are the same, however, and unless you happen to be a platform developer, you’ll end up placing your trust in someone else’s technology, just as you do with physical and virtual infrastructure.

At the moment, it seems the private cloud is shaping up to be a battle between VMware and the OpenStack community, says cloud broker RightScale. And according to the firm’s latest survey, nearly a third of enterprises are looking to turn legacy vSphere and vCenter environments into private clouds. But that doesn’t mean the market is a lock for VMware. OpenStack deployments are on the rise, driven largely by a desire to avoid vendor lock-in, even as vCloud Director adoption is starting to flag.



LINCROFT, N.J. – In the weeks after a federally declared disaster, emergency teams from government agencies, nonprofits and volunteer organizations work together to help survivors make their way out of danger and find food, clothing and shelter.

After the immediate emergency is over, the long work of recovery begins.

And as New Jersey survivors of Hurricane Sandy have learned over the past 18 months, full recovery from a devastating event like Sandy may take years.

Communities throughout New Jersey have been working hard to repair, rebuild and protect against future storms. In many cases, the challenges they face are formidable.

At the invitation of individual communities and in partnership with the state, FEMA’s office of Federal Disaster Recovery Coordination works with residents and municipal officials in impacted municipalities to develop a strategy for full recovery.

For communities that require assistance, the FDRC can provide a team of recovery specialists with a broad array of skills. Among them: civil engineering, architecture, land-use planning, economic development, environmental science and disabilities integration.

The FDRC is activated under the National Disaster Recovery Framework, which provides a structure for effective collaboration between impacted communities, federal, state, tribal and local governments, the private sector, and voluntary, faith-based and community organizations during the recovery phase of a disaster.

Federal Disaster Recovery Coordinators consult with impacted municipalities and assist with long-term planning, helping these communities determine what their priorities are and what resources they will need to achieve a full recovery.

In major disasters or catastrophic events, the FDRC is empowered to activate six key areas of assistance known as Recovery Support Functions.

The RSFs are led by designated federal coordinating agencies: Housing (U.S. Department of Housing and Urban Development); Infrastructure Systems (U.S. Army Corps of Engineers); Economic (U.S. Department of Commerce); Health and Social Services (U.S. Department of Health and Human Services); Natural and Cultural Resources (U.S. Department of Interior); and Community Planning and Capacity Building (FEMA).

Working in partnership with a State Disaster Recovery Coordinator and a Hazard Mitigation Adviser, the FDRC oversees an assessment of impacted communities and helps to develop a recovery support strategy. That strategy helps these hard-hit communities gain easier access to federal funding, bridge gaps in assistance, and establish goals for recovery that are measurable, achievable and affordable.

Here in New Jersey, approximately 12 communities have partnered with FDRC to prioritize their goals for recovery, locate the resources needed to achieve those goals and rebuild with resiliency.

In the Borough of Highlands, FDRC has assisted this severely impacted community in developing a plan for a direct storm water piping system that will decrease flooding in the low-lying downtown area. FDRC has also collaborated with the community on designing a more resilient, attractive and commercially viable central business district called the Bay Avenue Renaissance Project. The U.S. Army Corps of Engineers has initiated a feasibility study on their plan to protect the town from future flooding via a mitigation effort that includes installing floodwalls, raising bulkheads and building dune barriers.

In the devastated Monmouth County town of Sea Bright, FDRC worked with the community to create a plan for the construction of a beach pavilion that will serve as a year-round community center, library, lifeguard facility and beach badge concession. FDRC is also working with Sea Bright officials to develop a grant application to fund streetscape improvements in the downtown area of this beachfront municipality.

In Tuckerton, FDRC worked with municipal officials on a plan to relocate its heavily damaged police station and borough facilities to a former school building that is much less vulnerable to flooding.

In partner communities throughout the state, FDRC subject matter experts are working to help residents envision a future that incorporates a strong infrastructure, increased storm protection and an enhanced environment that reflects the vision of the community.

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.

DENVER - Crisis counseling services will continue over the next nine months for survivors of the September 2013 Colorado flooding disaster, thanks to a $4 million federal grant. FEMA and the Substance Abuse and Mental Health Services Administration have awarded the $4,058,060 grant to the Colorado Department of Public Health and Environment through the 2014 Crisis Counseling Assistance and Training Program (CCP).

The new grant will allow counselors to continue door-to-door services and community outreach counseling programs. Since the disaster, Colorado Spirit crisis counselors have:

  • Talked directly with 18,178 people and provided referrals and other helpful information to more than 88,000;
  • Met with nearly 1,200 individuals or families in their homes.

CCP was established by the Stafford Disaster Relief and Emergency Assistance Act to provide mental health assistance and training activities in designated disaster areas. The program provides the following services:

  • Individual crisis counseling and group crisis counseling to help survivors understand their reactions and improve coping strategies, review their options and connect with other individuals and agencies that may assist them;
  • Development and distribution of education materials such as flyers, brochures and website information on disaster-related topics and resources;
  • Relationship building with community organizations, faith-based groups and local agencies.

They say that age is only a number, so with that in mind, IBM set out to prove that the 50-year-old mainframe still has what it takes to dominate enterprise computing.

As part of its celebration of the 50th birthday of the mainframe, IBM today unveiled a slew of products and initiatives intended to make sure the mainframe stays relevant through at least the first half of the 21st Century.

The new offerings include the zDoop implementation of Hadoop for mainframes that IBM worked with Veristorm to develop, and an IBM DS8870 flash storage system that IBM says is four times faster than traditional solid-state disk (SSD) technology.

In addition, IBM unveiled an IBM Enterprise Cloud System based on mainframes that has been configured with IBM cloud orchestration and monitoring software.



CSO — In large-scale organizations, implementing mobile device management (MDM) is typically a given. After all, with so many employees using mobile devices that either contain or connect to sources of sensitive information, there needs to be some way to keep everything in check. But what about those companies that aren't big enough to be able to afford an MDM implementation and a full-sized IT department to manage it? Without a means to centralize the control of mobile devices, how can these smaller companies protect their data?

Some SMBs have found ways to help mitigate risk without traditional MDM, but it isn't always easy. Right off the bat, things are tricky given that smaller companies often implement BYOD since they can't afford to provide employees with devices.



I’m excited about the Internet of Things (IoT), and I expect it to create incredible opportunities for companies in almost every industry. But I’m also concerned that the issues of security, data privacy, and our expectations of a right to privacy, in general — unless suitably addressed — could hinder the adoption of the IoT by consumers and businesses and possibly slow innovation. So, with all the hype of the IoT, I’m going to play devil’s advocate, because these issues tend to receive limited coverage when considering the impact of new technology developments on society.

First of all, I am amazed at all the connected products and services that are starting to appear. These include, for example: those for connected buildings and homes, like heating and air conditioning, thermostats, smoke detectors, and so on; entertainment systems; and sensor-enabled pill boxes and remote healthcare monitoring devices. There are also a lot of consumer devices (in addition to smartphones and tablets), such as smart watches and Internet-enabled eye glasses, connected kitchen appliances like crock pots and refrigerators, wearable exercise trackers and pet trackers, and too many more to practically list.



From the title of this post, some people might immediately think of intuition: that vague and rather flaky resource used when that’s all you have. However, we’re actually thinking of something a little more structured in this context. In the coming age of Big Data and associated worldwide online resources, analytical techniques like those used in business intelligence can be used to detect trends and tipping points. They can give individuals and organisations meaningful information about how likely certain disasters are: for example, "there is a 90 percent chance currently that your factory will be flooded out to a depth of eighteen inches of water."
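As a minimal illustration of the kind of estimate described above, the Python sketch below derives an empirical flood probability from historical peak water levels. Every number, threshold and variable name here is invented for illustration; none of it comes from the post.

```python
# Hypothetical sketch: estimate the chance of flooding above a threshold
# from historical annual peak water levels (empirical exceedance probability).
peak_levels_inches = [4, 22, 9, 15, 30, 7, 19, 25, 12, 21]  # invented history

threshold_inches = 18
exceedances = sum(1 for level in peak_levels_inches if level >= threshold_inches)
p_flood = exceedances / len(peak_levels_inches)

print(f"Empirical P(flood depth >= {threshold_inches} in.) = {p_flood:.0%}")
```

A real analysis would draw on far richer data and fit a distribution rather than simply count exceedances, but the shape of the calculation is the same.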



You got a call from a reporter asking for your comment about an issue you were afraid might see the light of day. So, you know they’re onto it and going to run something.

This is a fairly common situation and unfortunately for PR and crisis comms consultants, this is often when you get the call from the client. No time to lose, but what is the strategy?

My thoughts on this were prompted by PR Daily’s post today on “Five Ways to Respond to Bad Press Before the Story Runs.” I have great regard for Brad Phillips, who wrote the post and the book: “The Media Training Bible.”



Without doubt, cloud computing is the future of the enterprise. But clouds come in many varieties – some light and fluffy, others dark and ominous – so the question for CIOs today is what kind of cloud is appropriate, and are there ways to ensure that today’s cloud does not become tomorrow’s storm?

According to IHS Technology, cloud spending is on pace to jump by more than a third over the next three years to $235 billion. Key drivers run the gamut from lower operating costs and more flexible data environments to support for advanced business applications like collaboration and Big Data analytics. As the market matures, then, organizations across multiple industries are likely to shed their concerns about security and management as they strive to turn IT infrastructure from a cost center to a competitive advantage.



PC World — Why should you use open source software? The fact that it's usually free can be an attractive selling point, but that's not the reason most companies choose to use it. Instead, security and quality are the most commonly cited reasons, according to new research.

In fact, a full 72 percent of respondents to the eighth annual Future of Open Source Survey said that they use open source because it provides stronger security than proprietary software does. A full 80 percent reported choosing open source because of its quality over proprietary alternatives.

Sixty-eight percent of respondents said that open source helped improve efficiency and lower costs, while 55 percent also indicated that the software helped create new products and services. A full 50 percent of respondents reported openly contributing to and adopting open source.



Computerworld — A couple of weeks into his job as lead Qt developer at software development consultancy Opensoft, Louis Meadows heard a knock on his door sometime after midnight. On his doorstep was a colleague, cellphone and laptop in hand, ready to launch a Web session with the company CEO and a Japan-based technology partner to kick off the next project.

"It was a little bit of a surprise because I had to immediately get into the conversation, but I had no problem with it because midnight here is work time in Tokyo," says Meadows, who adds that after more than three decades as a developer, he has accepted that being available 24/7 goes with the territory of IT. "It doesn't bother me -- it's like living next to the train tracks. After a while, you forget the train is there."

Not every IT professional is as accepting as Meadows of the growing demand for around-the-clock accessibility, whether the commitment is as simple as fielding emails on weekends or as extreme as attending an impromptu meeting in the middle of the night. With smartphones and Web access pretty much standard fare among business professionals, people in a broad range of IT positions -- not just on-call roles like help desk technician or network administrator -- are expected to be an email or text message away, even during nontraditional working hours.



Monday, 07 April 2014 19:31

What Do IT Workers Want?

Computerworld — As the economy continues to rebound and the competition for qualified IT professionals reaches new heights, employers seeking to attract or retain staffers are increasingly becoming like anxious suitors, desperate to figure out how to please their dates: "What do you want? What will make you stay? What really matters in our relationship?"

According to Computerworld's 2014 IT Salary Survey, tech workers are looking for many traditional benefits of a good partnership: financial security, stability and reliability -- all represented by salary and benefits. But this year's results confirm a growing trend: IT professionals are placing increasing importance on "softer" factors in the workplace, which have less to do with dollars and cents and more to do with corporate culture, personal growth and affirmation.

Read the full report: Computerworld IT Salary Survey 2014



Monday, 07 April 2014 19:30

Why So Certain About Uncertainties?

It must be the human condition that does it; the certainty with which we approach the issues that may affect us. Risk assessment incorporates a requirement to analyse probability or likelihood; we can attach mathematical process to this and I have attached an example – not to critique it – but to illustrate the concept of what I term ‘buffering’. Buffering is something which protects us from actuality, and allows us to distance ourselves from the realities of issues.  In the example, the mathematics are quite simple but convincing to the layman; I term myself a layman in mathematics and I have colleagues who can do this type of thing to a very significant and complicated level indeed.  However, the problem that I have with this is that buffering allows us to interpret what we see and orientate it to our needs.

Risk and uncertainty are not about rolling dice; of course they are linked aspects, and the loss risks associated with the activities of some dice rollers can be extreme. Maths allow calculation of probability – but the die will roll a different way every time due to other unmeasured variables, such as who is throwing, where, and with what degree of energy. There is therefore uncertainty that is additional even to the study and assessment of random variables.
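To make that distinction concrete, here is a small Python sketch (ours, not the author’s): the analytic probability of a six is fixed at 1/6, yet every finite run of real rolls scatters around it, with the simulation’s randomness standing in for the unmeasured variables mentioned above.

```python
# The analytic probability of rolling a six never changes, but the observed
# frequency in any finite run of rolls varies from trial to trial.
import random

analytic_p = 1 / 6
for trial in range(3):
    rolls = [random.randint(1, 6) for _ in range(100)]
    observed_p = rolls.count(6) / len(rolls)
    print(f"trial {trial + 1}: observed {observed_p:.2f} vs analytic {analytic_p:.2f}")
```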



The shooting rampage at Fort Hood has once again focused attention on the military’s mental-health system, which, despite improvement efforts, has struggled to address a tide of psychological problems brought on by more than a decade of war.

Military leaders have tried to understand and deal with mounting troop suicides, worrying psychological disorders among returning soldiers, and high-profile violent incidents on military installations such as the one that left four people dead and more than 16 injured at the Army post in Texas on Wednesday.

But experts say problems persist. A nationwide shortage of mental-health providers has made it difficult for the military to hire enough psychiatrists and counselors. The technology and science for reliably identifying people at risk of doing harm to themselves or others are lacking.



A discussion is going on right now about the role of the enterprise service bus in cloud integration. Does it matter?

I’m not convinced it does. Most of the discussion seems to be coming from vendors, and while it’s probably good thought fodder for architects, I’m unconvinced there’s much of a strategic case for caring here.

One recent example, “Why Buses Don't Fly in the Cloud: Thoughts on ESBs,” appeared on Wired Innovation Insights and was written by Maneesh Joshi, the senior director of Product Marketing at SnapLogic.



Monday, 07 April 2014 19:27

Energy Metrics: No Easy Answers

One of the reasons energy conservation is such a hot-button issue in the data center these days is that no one has a clear idea how to assess the situation.

To be sure, metrics like PUE (Power Usage Effectiveness) are a step in the right direction, but even its backers will admit that it is not a perfect solution and should not even be used to compare one facility against another. And as I pointed out last month, newer metrics like Data Center Energy Productivity (DCeP) provide a deeper dive into data operations but ultimately rely largely on subjective analysis in order to gauge the extent to which energy is being put to good use.
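For readers new to the metric, PUE is simply total facility energy divided by the energy delivered to IT equipment, with 1.0 as the unreachable ideal. The short Python sketch below walks through the arithmetic; the kWh figures are invented for illustration.

```python
# Illustrative PUE arithmetic (the kWh figures are made up, not measured).
# PUE = total facility energy / IT equipment energy; 1.0 is the ideal.
total_facility_kwh = 1_800_000  # IT load plus cooling, lighting, power losses
it_equipment_kwh = 1_200_000    # servers, storage and network gear alone

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}")  # 1.50 -> half a unit of overhead per unit of IT energy
```

The same arithmetic also hints at why cross-facility comparisons mislead: two sites with identical PUE can differ wildly in climate, workload and measurement boundaries.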



Did you get a boatload of World Backup Day pledge messages through Facebook and Twitter last week? This independent global initiative encourages everyone to back up important data on all computing devices — and spread the word. As they say, “friends don’t let friends go without a backup.” Absolutely right.

As people around the globe were taking the World Backup Day pledge, I was presenting at DRJ Spring World 2014, the world’s largest BC/DR conference. As I reported, the vast majority of organizations are NOT prepared to respond to intentional or accidental threats to IT systems.

  • 73% are failing in terms of disaster readiness (scored a D or F)
  • 60% do not have a documented DR plan
  • For 68%, DR plans either don’t exist or proved not very useful

The news is not much better for the minority of organizations who have a DR plan in place. Again, the 2014 annual report documents that where they exist, DR plans are largely gathering dust:



Friday, 04 April 2014 16:26

DDoS: a seven-point action plan

By Rakesh Shah

Distributed denial of service (DDoS) is no longer just a service provider problem: far from it. It can be a very real business continuity issue for many organizations.

DDoS attacks are what some would consider an epidemic today for all sorts of organizations. Why? The stakes continue to skyrocket. The spotlight continues to shine brightly, attracting attackers looking for attention for many reasons and motivations.

In recent times, attacks have been politically or ideologically motivated. Attackers want to make a statement and to make headlines (and to cause many headaches along the way) – quite similarly to the effect a sit-in or a strike would have in the ‘offline’ world.

This new breed of attacker targets high profile organizations in order to ensure his or her grievances will be heard. Few targets are as high profile or mission critical to the economy as financial services.



Avere Systems has released the findings of its ongoing original study into cloud adoption conducted at the recent Cloud Expo Europe 2014.

Like their US counterparts at the AWS Summit in Vegas last November, the majority of the attendees surveyed in London indicated that they currently use or plan to use cloud within the next two to five years for compute (71 percent), storage (76 percent) and applications (80 percent).

One major difference in response was that 53 percent of US respondents cited organizational resistance as a major barrier to cloud use, compared to just 11 percent in Europe, indicating a potentially less conservative approach in the region.



Today ends my review of what I believe to be the five steps in the management of a third party under an anti-bribery regime such as the Foreign Corrupt Practices Act (FCPA) or UK Bribery Act. On Monday, I reviewed Step 1 – the Business Justification, which should kick off your process with any third party relationship. On Tuesday, I looked at Step 2 – the questionnaire that you should send to the third party and what information you should elicit. On Wednesday, I discussed Step 3 – the due diligence that you should perform based upon the information that you have received from and ascertained on the third party. On Thursday, I examined Step 4 – how you should use the information you obtain in the due diligence process and the compliance terms and conditions which you should place in any commercial agreement with a third party. Today, I will conclude this series by reviewing how you should manage the relationship after the contract is signed.

I often say that after you complete Steps 1-4 in the life cycle management of a third party, the real work begins, and that work is found in Step 5 – the Management of the Relationship. While the work done in Steps 1-4 is absolutely critical, if you do not manage the relationship it can all go downhill very quickly and you might find yourself with a potential FCPA or UK Bribery Act violation. There are several different ways that you should manage your post-contract relationship. This post will explore some of the tools which you can use to help make sure that all the work you have done in Steps 1-4 will not be for naught and that you will have a compliant anti-corruption relationship with your third party going forward.



Computerworld — Although Apple isn't the sole focus of Microsoft's Enterprise Mobility Suite (EMS) or of Satya Nadella's new "mobile-first cloud-first" vision for the company, its iOS devices dominate enterprise mobility, meaning that Apple will play a major role in Microsoft's mobility strategy. In pursuing this strategy, Microsoft is, in a way, copying Apple's approach to business and enterprise iOS customers, albeit from a different perspective.

Microsoft began adding the ability to manage iOS and Android devices to its cloud-based Intune management suite last year. Although initial support for iOS device management was very basic, the company updated Microsoft Intune's iOS capabilities in January. While Microsoft has a ways to go before it catches up to the feature sets of the major mobile device management and enterprise mobility management vendors, the company looks committed to advancing its mobile management tools quickly.



Friday, 04 April 2014 16:19

Putting the 'B' in BRM

Computerworld — The challenge: Justify to the senior management committee the expense of business relationship management (BRM) within the IT function.

Now, there are many ways to do that. All the tools for assessing value can be drawn upon. There's the balanced scorecard, ROI, maturity models (with key performance indicators) and assessments against them, surveys, IT investment ratios, IT productivity over time. All very plausible, given the right circumstances.

But as CIO, I knew that I had to do more than show that BRM made compelling sense from a stockholder perspective. I also had to show how its success would be measured over time.



Do you think your anti-virus software is doing an adequate job in detecting malware and keeping your computers and network safe?

Unfortunately, you may need to re-think your attitudes toward AV software. According to a new report from Solutionary and the NTT Group, AV fails to spot 54 percent of new malware that is collected by honeypots. Also, 71 percent of new malware collected from sandboxes was undetected by over 40 different AV solutions.

The report also found that even a minor SQL injection could result in financial losses upwards of $200,000 – the kind of dollar amount that could cripple a small business.



Everything in IT these days is rapidly moving to be defined by software, including now backup and recovery.

EMC today launched a Data Protection Suite spanning its Avamar, NetWorker, Data Protection Advisor, Mozy and SourceOne products that not only makes them easier to acquire, but also sets the stage for managing them as an integrated set of processes.

Rob Emsley, senior director of product marketing for EMC, says that just like the rest of the enterprise, data protection is moving toward a software-defined model that promises to make it easier to manage backup and recovery, compliance and archiving.

As part of that exercise, Emsley says EMC is moving toward enabling a self-service model under which end users would be able to directly invoke EMC products and services within the policy guidelines set by the internal IT organization across both structured and unstructured data sets.



This week, a new report from the United Nations’ Intergovernmental Panel on Climate Change summarized the ways climate change is already impacting individuals and ecosystems worldwide and strongly cautioned that conditions are getting worse. Focusing on impacts, adaptation and vulnerability, the panel’s latest work offers insight on economic loss and prospective supply chain interruptions that should be of particular note for risk managers—and repeatedly highlights principles of the discipline as critical approaches going forward.

Key risks the report identified with high confidence, spanning sectors and regions, include:



Friday, 04 April 2014 16:15

Earthquakes and Mortgage Markets

The second earthquake to strike the Los Angeles area, on March 28, is a wake-up call and a reminder of the risk to commercial and residential properties in Southern California, according to catastrophe modeling firm EQECAT.

(The M5.1 quake located 1 mile south of La Habra follows the M4.4 earthquake near Beverly Hills (30 miles to the northwest) on March 17.)

In its report on the latest quake, EQECAT notes that most homeowners do not carry earthquake insurance (only about 12 percent of Californians have earthquake coverage, according to I.I.I. stats), and those that do typically carry deductibles ranging from 10 percent to 15 percent of the replacement value of the home. Commercial insurance often carries large deductibles and strict limits on coverage.



CSO — Hacking is no longer just a game for tech-savvy teens looking for bragging rights. It is a for-profit business -- a very big business. Yes, it is employed for corporate and political espionage, activism ("hacktivism") or even acts of cyberwar, but the majority of those in it are in it for the money.

So, security experts say, one good way for enterprises to lower their risk is to lower the return on investment (ROI) of hackers by making themselves more expensive and time-consuming to hack, and therefore a less tempting target. It's a bit like the joke about the two guys fleeing from a hungry lion. "I don't have to outrun him," one says to the other. "I just have to outrun you."

Of course, this only applies to broad-based attacks seeking targets of opportunity -- not an attack focused on a specific enterprise. But, in those cases, being a bit more secure than others is generally enough.



With the anniversary of the Southern Alberta floods looming, are organizations now any better prepared for emergencies?

CALGARY, ALBERTA – From cold snaps and ice storms to polar vortex windchill, Canadians are emerging from one of the coldest and snowiest winters in decades. It has been a long, bitter winter but are we really ready for spring? Questions around emergency preparedness are naturally arising as the record snowfall blanketing cities across the country begins to melt and is already causing flooding in some areas.

A recent Ipsos Reid study reveals critical gaps in emergency response plans following the 2013 Southern Alberta floods and the need to take action to prepare before disaster strikes again. In 2013, severe weather like heavy snow, rain and floods directly affected more than 3.5 million Canadians. Toronto’s ice storm wreaked havoc and cost the city in excess of $100 million. It has been Winnipeg’s second coldest winter on record since 1938, leaving hundreds of homes with frozen pipes. And Canada’s largest natural disaster, the 2013 Southern Alberta floods, is still fresh in the minds of nervous Albertans. There’s plenty of focus on the preparedness of homeowners living in high-risk areas but questions still surround the readiness of corporations across Canada.

According to the “2013 Calgary Flood & State of Emergency Corporate Crisis Communications Study”, 80 per cent of large Calgary organizations surveyed had an Emergency Response Plan (ERP) in place before the floods but just 44 per cent of these plans included emergency communications plans and protocols. The lack of communication systems and limited access to organizational databases hindered the speed and efficiency of several companies' efforts. Email and manual calling were the primary methods of communication used during the flood (92 per cent and 84 per cent, respectively). Of the organizations surveyed, only 20 per cent factored contact lists into their ERP, just 19 per cent said they were able to reach employees and a mere 8 per cent said people clearly knew what to do.

“Spring is a perfect opportunity to take a fresh look at these too-low numbers and see how we can better prepare ourselves with forward-thinking solutions before another flood or crisis,” says Steve Hardy, director of RallyEngine, an app-based internal communications system, which commissioned the independent study.

“Approximately two-thirds of Canadians and more than 90 per cent of business people now use smartphones. Ninety-one per cent of adults are within arm’s reach of their mobile phone 24/7,” he says. “It’s possible now to easily reach and inform far more people – and just the right people – using now-common internet and mobile technology.”

Approximately four in ten surveyed organizations updated their ERPs following the floods, but many overlooked vital information such as contact lists, roles and responsibilities, or steps for business continuity. Hardy says the study revealed that leaders of some of these large organizations either didn’t have up-to-date company-wide directories, couldn’t access their directory physically or virtually, or weren’t able to reach the people responsible for such important but overlooked lists. As a result, communications were more manual and less efficient.

“It can be very difficult to find this information in a crunch. The most important factor in a crisis is an organization’s people. Are they ok? Where are they? Are they available to help? Even the best plan falls apart if the right people can’t be alerted, informed, and rallied when needed.”

Hardy points out that municipalities and emergency management agencies did remarkable work during the 2013 flood. “Over the last several months, they’ve been diligently analyzing what went well and what didn’t, especially with regards to communications, so that they’re even more prepared and resilient next time. There’s no reason why corporations shouldn’t be just as focused and proactive.”

If the 2013 floods taught us anything, it was how resilience and timely responses are critical to ensuring positive outcomes in the face of a crisis. Versatile internal communications systems like RallyEngine facilitate nimble business continuity and can be set up within weeks, not months.

To download the full Ipsos Reid 2013 Calgary Flood & State of Emergency Corporate Crisis Communications Study, visit http://use.rallyengine.com/study/YYCflood.

About RallyEngine

RallyEngine is a powerful and streamlined app-based internal communications system that facilitates nimble business continuity. Designed for organizations with dispersed teams or mobile workforces, the system works by having team members install an app on their smartphone, which connects to the RallyEngine server, providing a channel to transmit location data, important information, and push notifications in real-time.

Let’s proceed by elimination. Servers? Those are the things that fall over when your data centre is hit by lightning and for which you do your disaster recovery planning anyway. Desktop PCs? They’re practically nailed to your desk, so they won’t be going with you as you run for the exit. Laptops? Maybe, although battery power and hard drive fragility may be issues. Smartphone? Compact, highly portable, runs tons of apps but has such a tiny screen. So finally, is the tablet computer the best compromise for IT on the run while you’re trying to get everything else back to normal?



CIO — The concept of a "data lake," sometimes called an "enterprise data hub," is a seductive one.

The data lake is the landing zone for all the data in your organization — structured, unstructured and semi-structured — a central repository where all data is ingested and stored at its original fidelity. All your enterprise workloads, from batch processing and interactive SQL to enterprise search and advanced analytics, then draw upon that data substrate.

Generally, the idea is to use HDFS (Hadoop Distributed File System) to store all your data in a single, large table. But building out such a next-generation data infrastructure requires more than simply deploying Hadoop; there's a whole ecosystem of related technologies that need to integrate with Hadoop to make it happen. And while Hadoop itself is open source, many of the other technologies that can help you build that infrastructure are open core or fully proprietary.
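As a rough sketch of that pattern — land everything at original fidelity, then let each workload draw on the same substrate — the PySpark snippet below is illustrative only; the paths, fields and choice of Parquet are our assumptions, not details from the article.

```python
# Minimal data-lake sketch: ingest raw records into HDFS unchanged, then
# query the shared store with interactive SQL. Paths and fields are invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-lake-sketch").getOrCreate()

# Ingest: land raw JSON in the lake at its original fidelity.
raw = spark.read.json("hdfs:///landing/clickstream/2014-04-07/")
raw.write.mode("append").parquet("hdfs:///lake/clickstream/")

# Consume: one workload (interactive SQL) drawing on the shared substrate.
clicks = spark.read.parquet("hdfs:///lake/clickstream/")
clicks.createOrReplaceTempView("clicks")
spark.sql("SELECT user_id, COUNT(*) AS hits FROM clicks GROUP BY user_id").show()
```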



Thursday, 03 April 2014 15:02

Rise of the Mega Data Center?

It seems the more the enterprise becomes steeped in cloud computing, the more we hear of the end of local infrastructure in favor of utility-style “mega-data centers.” This would constitute a very dramatic change to a long-standing industry that, despite its ups and downs, has functioned primarily as an owned-and-operated resource for many decades.

So naturally, this raises the question: Is this real? And if so, how should the enterprise prepare for the migration?

Earlier this week, I highlighted a recent post from Wikibon CTO David Floyer touting the need for software-defined infrastructure in the development of these mega centers. Floyer’s contention is that “megaDs” are not merely an option for the enterprise, but the inevitable future, in that they will take over virtually all processing, storage and other data functions across the entire data ecosystem. The key driver, of course, is cost, which can be distributed across multiple users to provide a much lower TCO than traditional on-premise infrastructure. At the same time, high-speed networking, 100 Gbps or more, has dramatically reduced latency of distributed operations and is now available at a fraction of the cost of only a few years ago.



Thursday, 03 April 2014 15:01

Plans within business continuity

By Michael Bratton

Even though plans represent just one component of a larger business continuity management system, they are what guide the organization through all phases of response and recovery following the onset of a disruptive incident – from the initial response and assessment to the eventual return to normal operations. Effective planning is meant to ensure that response and recovery efforts align to the expectations of all interested parties and provide a repeatable approach to minimize downtime.

This article explores different types of plans and examines their purpose within a wider business continuity strategy.



Thursday, 03 April 2014 15:00

Sungard Availability Services goes it alone

Sungard Availability Services has announced that it is now a standalone company, following its split-off from SunGard Data Systems Inc. The new company, with annual revenues of approximately $1.4 billion and operations in 11 countries, will remain headquartered in Wayne, PA.

As a result of the split-off, Sungard AS now has its own board of directors and a new brand.

"Now that we are an independent firm, we have the flexibility to evolve our culture, our industry relationships and our investments to maximize our business and best serve customers," said CEO Andrew A. Stern.

"Today's announcement is the next step towards creating a highly-focused IT services business that's dedicated to providing world-class managed / availability services to our customer base," Stern noted. "All of us here at Sungard AS are very excited about the prospects to accelerate our growth, and we look forward to continue partnering with our customers to deliver the business outcomes they need."

Sungard AS today revealed its new brand identity, which includes a new logo. The company, which pioneered the concept of shared IT disaster recovery infrastructure more than 30 years ago, will continue to leverage its ‘always on, always available’ brand positioning. Its new logo represents strength and dynamism. A forward-leaning angle in the logo conveys progression and growth, while a triangle in the logo represents stability and the support that the company will continue to provide its customers.

Sungard AS leverages its scale and global reach to address its approximately 7,000 customers' cloud, managed hosting and recovery-services needs. "Our company will continue to focus investments in our newer service offerings, which include Enterprise Managed Services, Enterprise Cloud Services, Recovery as a Service and Assurance, our next-generation business continuity management software offering," Stern said.


CIO — The perennial data center quest to beat the heat has sparked a wave of innovation in enterprise computing.

Densely packed computing facilities produce a lot of heat. Getting rid of it is a must for boosting the reliability of computing and communications gear. The trick is keeping things cool without running up utility bills and expanding the carbon footprint.

To that end, IT managers have an expanding list of options and measures to consider. Data centers may combine straightforward approaches (such as organizing centers into cold and hot aisles) with more elaborate components (such as cooling towers). Even water-cooled computers, once a staple of the mainframe world, appear to be making a comeback. Immersion cooling, in which servers are bathed in a nonconductive cooling fluid, has made an appearance in a few data centers.



Since the March 22 landslide, the Red Cross has mobilized five response vehicles and more than 300 trained workers – more than half of them from Washington State.

Through Monday (March 31), the Red Cross has served 15,000 meals and snacks in partnership with Southern Baptist Disaster Relief, handed out hundreds of comfort and relief items, and provided nearly 2,400 mental health or health-related contacts. In addition, our shelters have provided more than 130 overnight stays.

Response details:

  • Red Cross mental health and spiritual care volunteers are caring for families who have lost loved ones or are waiting for word on the missing.
  • Red Cross workers are meeting one-on-one with people affected to create recovery plans, navigate paperwork and locate help from other agencies. In some situations, the Red Cross may also provide direct financial support to people who need extra help, including assistance with funeral expenses and mental health counseling.
  • Red Cross Family Care Centers that are open in Darrington and Arlington are places where affected family members can receive emotional and spiritual support, mental health assistance, and care for children after they receive notification of loss of a loved one.
  • Red Cross workers are also providing emotional support and help with creating individual recovery plans at Joint Resource Centers in Darrington and Arlington.

With eight confirmed cases of Ebola reported in the Guinea capital, Conakry, Médecins Sans Frontières (MSF) says that the country is 'facing an unprecedented epidemic in terms of the distribution of cases.'

“We are facing an epidemic of a magnitude never before seen in terms of the distribution of cases in the country: Gueckedou, Macenta, Kissidougou, Nzerekore, and now Conakry,” said Mariano Lugli, coordinator of MSF's project in Conakry.

To date, Guinean health authorities have recorded 122 suspected patients and 78 deaths. Other cases, suspected or diagnosed, were found in Sierra Leone and Liberia.

MSF continues to strengthen its teams on the ground in Guinea. By the end of the week, there will be around 60 international fieldworkers who have experience in working on haemorrhagic fever. The group will be divided between Conakry and the other locations in the south-east of the country.



Just got back from Orlando where I helped kick off the largest BC/DR conference in the world yesterday, Spring World 2014.

I previewed my Sunday talk in Orlando with an online webinar last week. If you were able to participate in last Wednesday’s webinar, The State of Disaster Recovery Preparedness (archived on the Disaster Recovery Journal’s website), you may recall this excellent question posed by one of the attendees:

“How do we convince upper management to fund disaster recovery?”

Getting the executive team on your side is a foundational step toward developing and implementing a sound DR plan. Like most things in life, I think communications is key — both what you say and how you say it.



Thursday, 03 April 2014 14:55

BCI North America Awards presented

The 2014 BCI North America Awards took place on Sunday March 30th as part of the Disaster Recovery Journal (DRJ) Spring World 2014. The awards recognise the outstanding contribution of business continuity professionals and organizations living or operating in the North America Region, including the USA and Canada.

The winners were:

Business Continuity Industry Personality of the Year
Frank Perlmutter MBCI

BCM Newcomer of the Year
Leanne Metz AMBCI, Associate Director, Mead Johnson Nutrition

Business Continuity Innovation of the Year

Public Sector Manager of the Year
Brian Gray MBCI Chief, Business Continuity Management, United Nations

Business Continuity Manager of the Year
Dave Morgan MBCI, Senior BCP Manager, Delta Dental

Business Continuity Team of the Year
Franklin Templeton Investments

Highly Commended:
Kaiser Permanente

Most Effective Recovery of the Year
Telus Communications

Business Continuity Consultant of the Year
Skip Williams, Owner, Kingsbridge Disaster Recovery

Business Continuity Provider of the Year (Product)
ResilienceONE® BCM Software

Highly Commended:

Business Continuity Provider of the Year (Service)


The Intergovernmental Panel on Climate Change (IPCC) has issued a new report that says the effects of climate change are already occurring on all continents and across the oceans. The world, in many cases, is ill-prepared for risks from a changing climate. The report also concludes that there are opportunities to respond to such risks, though the risks will be difficult to manage with high levels of warming.

The report, entitled ‘Climate Change 2014: Impacts, Adaptation, and Vulnerability’, from Working Group II of the IPCC, details the impacts of climate change to date, the future risks from a changing climate, and the opportunities for effective action to reduce risks. A total of 309 coordinating lead authors, lead authors, and review editors, drawn from 70 countries, were selected to produce the report. They enlisted the help of 436 contributing authors, and a total of 1,729 expert and government reviewers.



CloudEndure has published the results of a benchmark survey, entitled ‘2014 State of public cloud disaster recovery’. This presents best practices and success metrics reported by companies that host web applications in the public cloud.

The highlights of the survey report are:

  • When it comes to service availability, there is a clear gap between how organizations perceive their track record and the reality of their capabilities. While almost all respondents claim they meet their availability goals consistently (43 percent) or most of the time (49 percent), 26 percent of the organizations surveyed don’t measure service availability at all. It is hard to tell how these organizations claim to meet their goals when they are not able to measure them.
  • While the vast majority of the organizations surveyed (79 percent) have a service availability goal of 99.9 percent or better, over half of the companies (54 percent) had at least one outage in the past 3 months (see the downtime arithmetic after this list).
  • The top challenges in meeting availability goals are insufficient IT resources, budget limitations, and limited ability to prevent software bugs.
  • Load balancing and local (single region/zone) storage backup are the leading strategies to ensure system availability and data protection cited by 59 percent and 51 percent of the respondents respectively.
  • There is a strong correlation between the cost of downtime and the average hours per week invested in backup / disaster recovery.
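To put the 99.9 percent goal in concrete terms, here is a minimal Python sketch (ours, not the survey’s) that converts an availability target into the downtime budget it implies.

```python
# Downtime budget implied by an availability goal over a given window.
def allowed_downtime_hours(availability: float, days: float) -> float:
    return days * 24 * (1.0 - availability)

print(f"{allowed_downtime_hours(0.999, 90):.2f} hours per quarter")  # ~2.16
print(f"{allowed_downtime_hours(0.999, 365):.2f} hours per year")    # ~8.76
```

Note that meeting a 99.9 percent goal and suffering an outage are not mutually exclusive: a quarterly budget of roughly 2.16 hours can absorb a short incident, which is one more reason actually measuring availability matters.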

Complimentary copies of the report are available for download after free registration.

Avalution Consulting has announced the release of a new feature, ‘Catalyst Insights’, for its Catalyst business continuity software suite.

Catalyst Insights provides automatic business continuity metrics that enable business continuity and IT disaster recovery managers to quickly identify and address preparedness gaps and report on their organization's level of preparedness.

With Catalyst Insights users can:

  • View granular business continuity dashboards, ratings, relationships, and dependencies for each element of the planning lifecycle by department, location, application, IT infrastructure, products and services, or the program as a whole;
  • Examine individual elements of the organization to understand upstream and downstream dependencies, identify and address gaps, and report on their current level of preparedness;
  • Visually map directional relationship dependencies for individual departments, locations, applications, IT infrastructure, products and services, or across the entire organization.

The Catalyst business continuity software suite can be trialled for 30 days before buying.


LOS ANGELES — It has been 20 years since Southern California experienced a major earthquake, a powerful 6.7-magnitude temblor that rolled through Northridge, killing 57 people. But this stretch of seismic calm, though welcome in obvious ways, has undermined efforts to force Los Angeles to deal with what officials describe as potentially lethal deficiencies in earthquake preparation.

That may be changing. Since two back-to-back earthquakes Friday evening — a relatively small one with a magnitude of 3.6, followed by a long and rolling 5.1 quake — Los Angeles has been shaken by nearly 175 smaller aftershocks. It is the first time this area has suffered an earthquake in excess of magnitude 5 since 1997, and it comes two weeks after a 4.4 earthquake jolted residents awake.

None of these quakes caused injuries or widespread damage, other than broken water pipes and some homes that have been declared at least temporarily uninhabitable. But geologists see them as the predictable end of a cycle: a return to what might be an uncomfortable normal in which 5-magnitude earthquakes become routine events.



YOKOHAMA, Japan — Climate change is already having sweeping effects on every continent and throughout the world’s oceans, scientists reported on Monday, and they warned that the problem was likely to grow substantially worse unless greenhouse emissions are brought under control.

The report by the Intergovernmental Panel on Climate Change, a United Nations group that periodically summarizes climate science, concluded that ice caps are melting, sea ice in the Arctic is collapsing, water supplies are coming under stress, heat waves and heavy rains are intensifying, coral reefs are dying, and fish and many other creatures are migrating toward the poles or in some cases going extinct.



Thursday, 03 April 2014 14:40

The Risk Appetite Dialogue

Risk levels and uncertainty change significantly over time. Competitors make new and sometimes unexpected moves on the board, new regulatory mandates complicate the picture, economies fluctuate, disruptive technologies emerge and nations start new conflicts that can escalate quickly and broadly. Not to mention that, quite simply, stuff happens, meaning tsunamis, hurricanes, floods and other catastrophic events can hit at any time. Indeed, the world is a risky place in which to do business.

Yet like everything else, there is always the other side of the equation. Companies and organizations either grow or face inevitable difficulties in sustaining the business. Value creation is a goal many managers seek, and rightfully so, as no one doubts that successful organizations must take risk to create enterprise value and grow. The question is, how much risk should they take? A balanced approach to value creation means the enterprise accepts only those risks that are prudent to undertake and that it can reasonably expect to manage successfully in pursuing its value creation objectives.



Among a whirlwind of course leadership, business development, teaching, writing, and course validations, I found time to present to Thames Valley Chamber of Commerce’s Windsor Debate on ‘The Changing Face of National Security’. My presentation – Cyber Security: Mission Impossible? – was part of a wider programme of discussions by senior military and industry influencers and analysts about the dynamic changes that affect policy and capability.

The event was held at Windsor Castle, a suitable backdrop for discussions concerning the defence and maintenance of the UK’s values and priorities in the face of historic challenges, and looking forward to an uncertain and unpredictable future. The debate was a fantastic opportunity to contribute to and learn from the knowledge and ideas surrounding our resilience for the future. Delegates discussed everything from international stability to aviation security, and from state intelligence to cyber security. As we at Bucks have these subjects firmly in our portfolio, the debates allowed me to contextualise what we think we know about some of these areas – and of course how much we don’t know.



Gavin Butler attended the fourth Future of Cyber Security 2014 conference in London on 20th March 2014 and here’s what he thought about it:

The series of presentations gave a useful overview of the current state of play in cyber security thinking and predictions for the future. Retired Colonel John Doody hosted proceedings and introduced talks from Lord Erroll (Merlin) and Chris Gibson of CERT-UK, along with speakers from IBM, Palo Alto Networks, Barclays Bank, Encode, Allianz and Airwatch. In particular, Chris Gibson’s presentation provided confidence that the future of cyber security is indeed in ‘safe hands’, as CERT-UK seeks to reinforce links with industry and academia and to promote information sharing, such as through CISP, which will enable organisations to take a more ‘resilient’ outlook towards developing their own effective cyber security controls. There is also further scope and recognition for SMEs to adopt concepts from the ‘Cyber Security Strategy’, perhaps because they are now seen as vital to the UK economy and hence ‘critical national infrastructure’.



After disasters like the Oso landslide in Washington State, a common question is why people are allowed to live in such dangerous places. On the website of Scientific American, for example, the blogger Dana Hunter wrote, “It infuriates me when officials know an area is unsafe, and allow people to build there anyway.”

But things are rarely simple when government power meets property rights. The government has broad authority to regulate safety in decisions about where and how to build, but it can count on trouble when it tries to restrict the right to build. “Often, it ends up in court,” said Lynn Highland, a geographer with the United States Geological Survey’s landslide program in Golden, Colo.


Her agency provides scientific information about geologic features and risks, but it has no regulatory authority, and state and local regulations are a patchwork, she said. When disaster strikes, people find that their insurance policies do not cover landslides without special riders that can be ruinously expensive.



Sunday, 30 March 2014 16:12

Time Enough to Choose the Right Cloud

How odd that even though we are this deep into the cloud transition, people are still debating the merits of public vs. private vs. hybrid.

If the latest research is to be believed, however, most enterprises have already moved beyond this debate and are actively seeking a variety of cloud-based solutions that will combine the best of the cloud as well as legacy virtual and even physical infrastructure.

Take, for example, CTERA Networks’ recent Cloud Storage Report, which holds that 63 percent of enterprises prefer internal or hosted virtual private cloud solutions over SaaS offerings like Dropbox for their storage and collaboration needs. This is actually a no-brainer – in fact, I’m surprised the number is that low – considering the advantage of keeping critical data safely tucked behind the firewall rather than on a public service. Public services will have their role to play going forward, but they are not likely to house mission-critical data and applications, at least not for long.



Asia Pacific firms are gradually beginning to understand how important big data is for responding to rising customer expectations and becoming customer-obsessed to gain a competitive edge in the age of the customer. Data from our Forrsights Budgets And Priorities Survey, Q4 2013 shows that 40% of organizations across Asia Pacific expect to increase their spending on big data solutions in 2014.

In addition to traditional structured data (from ERP and other core transactional systems), organizations are increasingly seeking insight from unstructured data originating in both internal (IM, email) and external (social networks, sensors) sources to enhance the business value of data. But these initiatives pose a significant challenge to security and risk professionals:



Sunday, 30 March 2014 16:10

Coping With a Cloud Outage

By Samuel Greengard

In recent years, as organizations have embraced cloud computing, CIOs and other executives have witnessed significant gains. In many cases, their enterprises have boosted IT availability, reduced demands on internal infrastructure and notched productivity improvements along with cost savings. Last October, Gartner reported that cloud computing will emerge as the bulk of IT spend by 2016 and half of all cloud services will take a hybrid cloud approach by 2017.

But as more and more organizations drift into the cloud, one fact is perfectly clear: the risk of an outage or outright failure is real, and such an event could have significant repercussions during and after an event. Already, a number of high-profile cloud providers have endured episodic outages and failures, including Amazon Web Services, Google Drive, Dropbox and Microsoft Azure. In some instances, companies using these products and services haven't just endured downtime, they've also lost data.



Sunday, 30 March 2014 16:09


By Nathaniel Forbes, MBCI, CBCP

Late in 2013 the head of BCM for one of Asia’s largest banks voluntarily transferred within his bank to a job entirely unrelated to BCM. He is the most-experienced, knowledgeable and highest-paid non-expatriate BCM professional I know in Asia.

I wondered why anyone with eleven years of full-time BCM experience and a compensation package the envy of his peers would make such a move. He agreed to answer my questions on the record if I didn’t use his name or identify his employer.



By Paul Kirvan, FBCI

In March 2014 the business continuity profession lost one of its founding fathers, Ron Ginn (Hon) FBCI. Although Ron was in his 80s, he lived a vigorous life and never lost his passion for the profession he helped create. For a fitting tribute to Ron’s memory, I have compiled thoughts and remembrances from several of Ron’s friends and colleagues, including myself.

As one of the few ‘foreigners’ in the early days of the business continuity profession in the UK and Europe, I became involved in an organization many of you will remember, called Survive! This was instrumental in the growth of the profession in Europe and North America and also in the founding of the Business Continuity Institute. During my many trips to the UK I had the pleasure of meeting Ron Ginn on several occasions. Ron was one of my early mentors and inspirations for my continued involvement in the profession. His enthusiasm was infectious; he really understood the direction that the profession needed to go and was a constant source of encouragement and challenge for all of us who were there in the ‘early days’. I last spoke to Ron during the 2012 BCI World Conference in London, and even in his 80s, Ron was still challenging me to do more in the profession. He was a true inspiration to me, and will be greatly missed.



Teon Rosandic, VP EMEA, xMatters, gives a vendor’s view of the developments which are improving the capabilities of emergency notification systems and why traditional one-way mass notification is on the way out.

Many of the mass notification systems that businesses utilise today haven’t changed or evolved since they were originally designed many years ago. It’s the same old thing – put your message in the message box and broadcast it out to everyone in your database. This type of archaic communication system just doesn’t cut it today, with more and more incidents and crises that require immediate attention and the need for two-way communication at every step of the way.

However, there is new technology available, and there are things that the business continuity and risk manager should consider when looking for a mass notification approach.

This article delves into the ins and outs of what effective mass communication technology can deliver and what the old systems lack.
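The contrast is easy to sketch generically (this is our illustration, not xMatters’ API): one-way broadcast fires and forgets, while a two-way flow tracks acknowledgements and surfaces the people who still need escalation.

```python
# Generic one-way vs two-way notification sketch; all names are invented.
def broadcast(message: str, recipients: list) -> None:
    # One-way: send and hope; there is no record of who acted on it.
    for person in recipients:
        print(f"-> {person}: {message}")

def notify_two_way(message: str, recipients: list, acknowledged: set) -> list:
    # Two-way: ask each recipient to confirm, then report non-responders.
    for person in recipients:
        print(f"-> {person}: {message} (please ACK)")
    return [p for p in recipients if p not in acknowledged]

pending = notify_two_way("Flood at HQ - work remotely", ["ana", "bo", "cy"], {"ana"})
print("escalate to:", pending)  # escalate to: ['bo', 'cy']
```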



US statesman Benjamin Franklin was famous for many things and for one in particular: his proclamation that “in this world nothing can be said to be certain, except death and taxes”. Well, Benjamin, it seems like modern technology and inflation have conspired to add a couple more items: server crashes and data security breaches. In other words, it’s not a matter of if these events will occur. It’s a matter of when. It’s true that robust quality IT products can push out the when so far that it seems to disappear in the distant future. However, smart organisations make the assumption that both things will happen and take appropriate precautions.



When Spiceworks surveyed IT professionals recently about their attitudes toward certifications, one of the most interesting data points was that about half of the respondents will be paying for their continuing IT education themselves this year. Only 56 percent said that their employers would pay for training in 2014. But half of the IT pros said they think that certifications are very valuable or extremely valuable to their careers. And 80 percent of them said they plan to complete some training or certification this year. Since having to pay for continuing education yourself often really means you’ll need to find some free or lower-cost training, let’s take a look at a range of vendor-specific, higher education, online and free online training resources. We’ll begin with one of the hottest IT skill sets for 2014: Big Data, aka analytics and/or business intelligence.



Sunday, 30 March 2014 16:03

Mudslide Was Forewarned, Experts Assert

Even as rescue teams search for more bodies in the aftermath of the March 22 mud slide in Washington, records show that while the area is prone to these disasters, homes were allowed to be built there anyway.

The slide, triggered by excessive rain, has claimed 24 lives so far and 176 are still unaccounted for, the Associated Press reports.

Snohomish County Emergency Management Director John Pennington said during a news conference on March 24 that the slide was “completely unforeseen” and that it “came out of nowhere.”

In a 1999 report filed with the U.S. Army Corps of Engineers, however, geomorphologist Daniel J. Miller and his wife, Lynne Rodgers Miller, warned of “the potential for a large catastrophic failure” in the area, according to the Seattle Times.



Wednesday, 26 March 2014 19:44

The Enterprise of Things

Computerworld — The mobile market is moving on. Traditional smartphones and tablets are maturing. The next phase is coming, and it consists of the Internet of Things, a descriptive phrase that includes all manner of smart (and barely smart) devices, often connected wirelessly.

While smartwatches, fitness bands and connected appliances are important, the current focus on consumer products diminishes the fact that the greatest impact this category may have will be on the enterprise. Consumer experimentation will lead the market, but enterprise adaptation will not be far behind. For this reason, I use the term "Enterprise of Things" (or EoT) to describe this next wave that enterprises will need to deal with, even as most still try to adequately cope with the more mature mobile devices already impacting their users, networks and applications.



Computerworld — Business groups in a growing number of companies appear to be plowing ahead on data analytics projects with little input or help from their own IT organizations.

Rather than leveraging in-house IT skills and technology, many business groups are using their own data and department-level analysts to cobble together analytics strategies, according to a survey by IDC.

Business managers and IT managers appear to have different assessments of the value enterprise IT organizations bring to big data and data analytics projects. While IT groups see themselves as enablers, business leaders tend to view IT as a stumbling block.

For the study, IDC surveyed 578 line-of-business managers, IT managers, data analysts and business executives.



IDG News Service (Boston Bureau) — SAP is continuing to merge its HANA in-memory database platform with its Business Warehouse data warehousing software, with the latest update adding support for HANA's real-time data loading services.

Companies with large data warehouses often load information sets at off-peak times, such as in overnight batch jobs. But with the general availability of Business Warehouse 7.4, HANA's "smart data access" services can tap any source within or outside a company as it's needed. SAP is calling the approach an "in-memory data fabric."

The services don't actually move data physically into Business Warehouse; rather, the target sources are viewed as virtual tables. These services provide broader access to data sets, as well as the ability to keep frequently accessed information sets inside the core data warehouse while reaching out on demand to those needed only occasionally.
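
To make the virtual-table idea concrete, here is a minimal Python sketch of the difference between batch-copying remote data and proxying queries to the source at the moment they run. This is illustrative only, not SAP's actual interface; the class and function names are hypothetical.

```python
# Sketch of the "virtual table" idea behind data federation: the warehouse
# holds a reference that pulls rows from the source only when queried,
# instead of relying on last night's batch load.

class VirtualTable:
    """A table that proxies queries to a remote source on demand."""
    def __init__(self, name, fetch_fn):
        self.name = name
        self._fetch = fetch_fn  # callable that queries the remote system

    def select(self, predicate=lambda row: True):
        # Rows are fetched at query time, so results reflect the source
        # "as it's needed" rather than a stale overnight copy.
        return [row for row in self._fetch() if predicate(row)]

# A hypothetical remote source (in reality: another database, a feed, etc.)
def fetch_orders():
    return [{"id": 1, "amount": 250.0}, {"id": 2, "amount": 75.5}]

warehouse = {"orders": VirtualTable("orders", fetch_orders)}
large = warehouse["orders"].select(lambda r: r["amount"] > 100)
print(large)  # [{'id': 1, 'amount': 250.0}]
```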



James Leavesley, outlines why risk managers need to be up to speed with the social media revolution.

Social media is no longer just the latest buzzword or an experiment for creative marketing teams. Organizations are fast recognising the importance of social media from a customer, employee and business partnership perspective. Companies are using blogs, videos, Facebook and Twitter to connect with ‘communities’. However, it only takes one disgruntled customer to take to Twitter, YouTube or Facebook and the results can be costly. Even worse damage can be done by a rogue employee with access to corporate social media accounts and a determination to discredit the company.

So here are five reasons why risk managers should get up to speed with social media and how to control it:



In its latest Bulletin, APEC (Asia-Pacific Economic Cooperation) has provided details of what it is doing to assist regional SMEs to develop business continuity plans.

The Bulletin focuses on a multi-year project launched in 2011 by APEC to enhance the capacity of SMEs to prepare for disasters and to ensure “minimal and tolerable disruption to business operations and supply chains”.

“The main goal of the APEC project is to promote SMEs to establish business continuity plans for sustainable global supply chains,” Johnny Yeh, executive director of the APEC SME Crisis Management Center in Chinese Taipei, told the APEC Bulletin. Mr. Yeh is overseeing the APEC project.

“This is accomplished by training related government, non-profit and private sector organizations in APEC member economies, so they, in turn, can train SMEs in their respective economies,” Mr. Yeh continued.

As part of the project, experts have developed a simple step-by-step APEC Business Continuity Planning Guidebook for SMEs.

Read the full Bulletin.

Network World — Cisco this week is unveiling two new configurations of its recently launched Nexus 9000 switches, as well as a new 40G Nexus switch. In addition, Cisco is celebrating the fifth anniversary of its UCS server.

Cisco also announced certification programs for its new Application Centric Infrastructure (ACI) programmable networking product line, which includes the Nexus 9000 switches. ACI is Cisco's non-SDN response to the software-defined networking trend sweeping the industry.

The 16-slot Nexus 9516 and four-slot Nexus 9504 had been expected, and they join the existing eight-slot Nexus 9508. The Nexus 9516 is positioned as an aggregation layer switch for service provider or high-demand deployments, offering 576 wire-speed 40Gbps Ethernet ports and 60Tbps of throughput. It takes up 21 RUs, supports 2,304 10G ports, consumes 11 watts per 40G port, and uses two to four Cisco and/or Broadcom ASICs per line card.



CIO — Few deny that the healthcare industry in the U.S. faces tremendous pressure to change. Few deny the role that technology will play in stimulating this change, either.

Uncertainty creeps in, though, when healthcare organizations try to address their technology needs. This is especially true of healthcare providers — the hospitals, medical offices, clinics and myriad long-term care facilities that account for roughly 70 percent of healthcare spending and that have spent much of the 21st century rushing to catch up to other vertical industries.

Most providers, says Skip Snow, a senior analyst with Forrester, are "very new to the idea that they have all this structured data in clinical systems." That's largely because, until recently, the mission of the healthcare CIO was ancillary to a provider's core mission. IT often fell under the CFO's domain, Snow says, since it focused so much on business systems.



It was recently revealed that the personal details of 10,000 asylum-seekers housed in Australia were accidentally leaked via the Department of Immigration and Border Protection’s website. This has damaged asylum-seekers’ trust in the Australian government and, according to Greens Senator Sarah Hanson-Young, potentially put lives at risk. Such incidents represent significant breaches of local regulations and can result in heavy penalties.

Recent amendments to existing privacy laws in Australia and Hong Kong allow each country’s privacy commissioner to enforce significant penalties for repeated or serious data breaches. Countries like Japan and Taiwan, where new privacy laws have been passed and/or existing ones are being enforced more strictly, also assess penalties for noncompliance.



It’s funny how some myths continue to be believed, even by hard-nosed business people. The notion that virtualisation will save a company’s data is such a myth. Although it can be valuable in optimising an organisation’s use of IT resources and reacting quickly to changing IT needs, virtual environments are not inherently safer than independent physical servers. Yet data recovery provider Kroll Ontrack found that 80 percent of companies believe that storing data virtually is less risky, or at least no riskier, than storing it on physical servers. Beliefs are one thing, statistics another: 40 percent of companies using this virtual mode of storage were hit with data loss in 2012-2013. What’s going on?



Computerworld — Driven by a very strong belief in the future of software-defined data center technology, Bank of America is steering its IT to almost total virtualization, from the data center to desktop.

The technology does for the entirety of a data center what virtualization did for servers: It decouples hardware from the computing resources. Its goal is to enable users to create, expand and contract computing capability virtually, quickly and efficiently.

The software-defined data center is not yet a reality. But there are enough parts of the technology in place to convince David Reilly, Bank of America's global infrastructure executive, that it is the future.

"The software-defined data center is going to dramatically change how we provide services to our organizations," said Reilly. "It provides an opportunity for, in effect, the hardware to disappear.

"We think it's irresistible, this trend," said Reilly.



Dell yet again signaled its intentions to compete more aggressively in the analytics space with the acquisition today of StatSoft.

With 1,500 customers, StatSoft is the second major analytics acquisition that Dell has made since acquiring Quest Software. In 2012, just prior to being acquired by Dell, Quest Software acquired Kitenga, a provider of high-end analytics software that usually gets applied to Big Data problems.

In contrast, John Whittaker, director of product marketing for Dell Information Management, says StatSoft represents a more mainstream play into the realm of predictive analytics. As there is definitely a blurring of the line these days between analytics applications, Whittaker says customers should expect to see Dell Software being significantly more aggressive in terms of delivering analytics capabilities into the midmarket.



Tuesday, 25 March 2014 19:50

Improving Cyberattack Response

About a month ago, I reported on a study from Ponemon Institute and AccessData that revealed that most companies are doing a poor job when it comes to detecting and effectively responding to a cyberattack. As Dr. Larry Ponemon, chairman and founder of the Ponemon Institute, said in a statement when the report was released:

“When a cyber-attack happens, immediate reaction is needed in the minutes that follow, not hours or days. It’s readily clear from the survey that IR processes need to incorporate powerful, intuitive technology that helps teams act quickly, effectively and with key evidence so their companies’ and clients’ time, resources and money are not lost in the immediate aftermath of the event.”

AccessData’s Chief Cybersecurity Strategist, Craig Carpenter, has been looking at this problem in some depth. We aren’t totally clueless on why these attacks are able to cause tremendous amounts of damage, both financial and reputational, to companies. For example, as information about the Target breach continues to trickle out, we have a pretty good idea of how and why the incident occurred. Our concern now, Carpenter said in a blog post, is fixing these problems. The key, he said, is prioritization and improved integration. In an email to me, Carpenter provided a few steps every company should take to prevent a “Target-like” breach in the future:



Tuesday, 25 March 2014 19:47

Cassandra Lowers the Barriers to Big Data

InfoWorld — Apache Cassandra is a free, open source NoSQL database designed to manage very large data sets (think petabytes) across large clusters of commodity servers. Among many distinguishing features, Cassandra excels at scaling writes as well as reads, and its "master-less" architecture makes creating and expanding clusters relatively straightforward. For organizations seeking a data store that can support rapid and massive growth, Cassandra should be high on the list of options to consider.

Cassandra comes from an auspicious lineage. It was influenced not only by Google's Bigtable, from which it inherits its data architecture, but also Amazon's Dynamo, from which it borrows its distribution mechanisms. Like Dynamo, nodes in a Cassandra cluster are completely symmetrical, all having identical responsibilities. Cassandra also employs Dynamo-style consistent hashing to partition and replicate data. (Dynamo is Amazon's highly available key-value storage system, on which DynamoDB is based.)
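
For readers unfamiliar with consistent hashing, the Python sketch below shows the core of the idea. It is illustrative only: real Cassandra layers virtual nodes, replication strategies and tunable consistency on top of this basic ring.

```python
# Minimal sketch of Dynamo-style consistent hashing, the scheme the
# article says Cassandra borrows to partition and replicate data.
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        # Place each node at a point on a circular hash space.
        self._points = sorted((_hash(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        # A key belongs to the first node clockwise from its hash.
        h = _hash(key)
        hashes = [p[0] for p in self._points]
        i = bisect.bisect(hashes, h) % len(self._points)
        return self._points[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))  # deterministic node assignment

# Adding a node only remaps the keys between it and its predecessor on the
# ring, which is why clusters can expand without reshuffling all data.
```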



“If you’re not paranoid, you’re not paying attention.” It’s an old joke, but one that rings true as I finish my presentation for this Wednesday’s online webinar with The Disaster Recovery Journal. Here are just three of the danger signals from the 2014 Annual Report on the State of Disaster Recovery Preparedness that I’ll describe during the webinar.

DANGER SIGNAL 1: 3 out of 4 companies worldwide are failing in terms of disaster readiness. Having lots of company will be no consolation for organizations that have failed to respond to the alarming rise in intentional and accidental threats to IT systems.

DANGER SIGNAL 2: More than half of companies worldwide report having lost critical applications or most/all datacenter functionality for hours or even days. Once again, more evidence that businesses are at risk of crippling losses.

DANGER SIGNAL 3: Human error is the #2 cause of outages and data loss, reported by 43.5 percent of companies surveyed. How does your disaster recovery plan address this key vulnerability?

The good news? There are specific actions you can take right now to be better prepared to recover your systems in the event of an outage.



The Terrorism Risk Insurance Program, a public/private risk-sharing partnership which is set to expire at the end of 2014, is absolutely critical to maintaining the health of the American economy, according to an updated white paper just released by the Insurance Information Institute (I.I.I.).

The I.I.I.’s Terrorism Risk: A Constant Threat, Impacts for Property/Casualty Insurers explains that should the federal Terrorism Risk Insurance Program Reauthorization Act (TRIPRA) be allowed to expire at year-end 2014, this would have a detrimental impact on the availability and affordability of terrorism insurance for businesses.



Monday, 24 March 2014 15:56

Risk Assessment – By the Book

Nothing is more important to developing and maintaining an effective C&E program than risk assessment, and effective risk assessment is, as a general matter, perhaps the most daunting task a C&E officer is likely to face.  The challenges are both conceptual (a surprising lack of consensus on what the point of a risk assessment is) and practical (getting business people and others to be candid and thoughtful about what they may view as unpleasant and unnecessary topics).

But C&E risk assessment has been an expectation of the U.S. government since the 2004 amendments to the Federal Sentencing Guidelines for Organizations, and anti-corruption compliance standards of other countries are turning these expectations into something of a global mandate. Beyond this, many companies’ C&E programs are in desperate need of some sort of refreshment – and, as much as any program function, a risk assessment can provide a powerful foundation for this.



CIO — IT security is a tricky issue: Too much security -- or too little -- could bankrupt your company. The key is to strike the right balance. These three IT executives share their advice.

Determine Your Investment Best Bets

Martin Gomberg, global director of security, governance and business protection, A&E Networks: Security is a slide switch. Slide it all the way to the right, and nothing will get in, nothing will get out  -- and nothing will get done. Slide it all the way to the left, and we will all have a party, it will be a great day -- but we'll only have one of them. My approach is to find the setting where risk is not too high, nor is risk mitigation an impediment to innovation.

In our industry, the threats are increasing and becoming more targeted, and our ability to protect ourselves is diminishing. Meanwhile, the technologies required for protection are getting more complicated and expensive, capable security staff are more difficult to find, and new laws and regulations are more likely to impose severe penalties for breaches.



Computerworld — A few weeks ago, I was happy to hear that Target CIO Beth Jacob had resigned. This wasn't only because falling on her sword was the right thing to do after her company's massive data breach. The fact that just days earlier I had realized that I was caught up in this mess had something to do with it.

My credit card was used several dozen times at a Mumbai shopping site, and I am convinced that it was compromised in the Target breach. But why didn't my credit card issuer's security algorithms pick up this obvious anomaly? Because I am a frequent traveler, I was told, the charges didn't seem out of the ordinary.

Really? Forty purchases from the same online shopping site didn't seem just a bit suspicious -- even though, in all my travels, I've never been to India?



Think Target and the hit it took when hackers stole the private information of millions, requiring many to update credit cards and the like. It’s a disaster that most executives believe will happen to them: not if, but when. That makes it all the more surprising that, according to a study published in the Economist, two-thirds of CEOs think a good response to such an attack will enhance their reputation.

PRNewser from mediabistro, reporting on the Economist story, notes that while 66% think they will come out of such an event smelling like a rose, only 17% of those surveyed say they are “fully prepared.”

Hootsuite, perhaps the best social media management and monitoring tool that I know of, today experienced a denial-of-service attack. One client emailed me Ryan Holmes’ response. The CEO of Hootsuite was fast, empathetic, transparent and almost completely on target. (The only thing missing, to my mind, was an apology, but perhaps he felt there was nothing to apologize for, and he may be right.)



I’ve seen some hefty price tags associated with poor data quality, but I have to say, last year’s figure from the Ministry of Defence may take the prize. The UK agency was told last year that it was at risk of squandering £1 billion in IT investments because of dire data quality, according to Martin Doyle, the Data Quality Improvement Evangelist for DQ Global.

This year, another UK agency, the National Health Service (NHS), is under scrutiny for sharing data without consent. Names and addresses may have been taken from the database and sold for studies, which meant it was uploaded to third-party cloud storage services, according to Doyle.

As if that weren’t bad enough, the NHS is also working on a project called Care.data, which is a centralized hub for patient care records. The NHS has “problems recalling exactly who has all of this patient information already, suggesting it has bigger problems to solve,” he writes. This issue has triggered a backlog in patient care.



The cloud is the latest juggernaut to sweep the enterprise IT industry, and if you ask most experts, the expectation is that the entire data universe will one day reside on distributed virtual architecture.

At the moment, however, the vision has not been completely sold to the people who build and maintain enterprise corporate environments.

According to new data from 451 Research at the behest of Microsoft, more than 45 percent of IT executives consider their organizations to be beyond the pilot phase of cloud computing, with at least half of that group saying they are “heavy” cloud users. However, only 6 percent have labeled the cloud as the default platform for new applications, while only 18 percent turn to the cloud regularly for new projects. All of this suggests that while the enterprise has embraced the cloud with open arms, the vast majority are using it for low-value or non-critical functions – hardly the new data paradigm that has been touted so far.



In a white paper entitled ‘Are public agencies better prepared to deal with crises in 2014?’ Noggin IT has released findings from a survey of US organizations.

The survey, conducted in late 2013, reveals an increasingly complex environment for those in crisis management due to greater regulatory compliance, Internet-connected stakeholders, more unpredictable weather events and political and financial volatility, where technology is key to improving organizational resilience and business continuity.

James Boddam-Whetham, managing director Noggin IT says “We are seeing a situation where public agencies are being required to do more with less. Some of the interesting pain points that came out of this survey were that actual crisis management team activation was still a struggle for many organizations; as was the broader issue of employee communications during a crisis. Both point to a perhaps overlooked consideration for a crisis management software solution: can it actually assist you manage your internal people affairs during a crisis. Much of the emphasis for crisis management systems has been on informing the public, or alerts and notifications, rather than necessarily getting the internal ship in order. An ability to organise internal stakeholders would therefore seem to be a logical consideration for any crisis management solution.”



Monday, 24 March 2014 15:49

BCI launches new senior membership grade

The Business Continuity Institute has announced the creation of a new ‘Associate Fellow’ (AFBCI) senior membership grade for those people who have reached a senior level in the business continuity profession but have concentrated more on developing their practical working experience rather than specifically contributing to the development of the Institute or the discipline.

The AFBCI grade sits between MBCI and FBCI. Applicants must meet one of the following criteria:

  • A current MBCI held for a minimum of 3 years;
  • A current MBCP credential held for at least 3 years with the DRII.

The applicant must also:

  • Be currently working in business continuity management;
  • Have a minimum of seven years working experience within the discipline and knowledge across all six BCI Professional Practices;
  • Have three years of CPD completed using the BCI’s CPD system, or CPEs through the DRII system if using MBCP to apply (these must be the three years immediately preceding the year of application);
  • Complete a full scored assessment application process.

If you would like more information or would like to request an application form, please contact membership@thebci.org.

PC World — Each time there's a high-profile data breach, security experts exhort the same best practices: Create unique logins for every service you use, use complex passwords, vigilantly comb your credit card statements for anomalies. The advice is sound. Unfortunately, it obscures the fact that the safety of your personal information is ultimately in the hands of companies you share it with.
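
The unique-login, complex-password advice above is at least easy to automate. Here is a minimal sketch using only Python's standard library; a password manager does the same job while adding storage and syncing.

```python
# Generate a strong, random password per service using the secrets module,
# which is designed for cryptographic use (unlike the random module).
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct credential per service limits the blast radius of any single
# breach, which is exactly the point the article goes on to make.
print(generate_password())
```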

Identity theft is changing. Customer databases are a treasure trove of personal information and much more efficient for hackers to target than individuals. In this new landscape, the guidelines security experts--and journalists like me--espouse are really just damage-control measures that minimize the impact of a successful attack after the fact, but do absolutely nothing to protect your personal data or financial information from the attack itself.

Look back on some of the major data breach incidents of 2013. Adobe was hacked, and attackers gained access to customer account information for nearly 150 million users, as well as credit-card information from nearly three million customers. Target was hacked, and the credit- or debit-card details for 40 million customers were exposed. In those cases, there was little any individual consumer could have done to prevent being affected by those data breaches.



Computerworld UK — Big data analytics tools will be crucial to enterprise security as criminals deploy faster and more sophisticated methods to steal valuable data, according to security firm RSA.

"We are really at the beginning of intelligence-driven security: it is just the tip of the iceberg. Looking forward we are going to have to be smarter [to deal with threats], and we are going to be looking at better data science," said RSA's head of knowledge delivery and business development, Daniel Cohen.

"It's not 'if' we are going to be breached, but 'when' we are going to be breached, so there is a need to focus more on detection. We saw with the Target breach it was the human factor that slipped there, so we have to be able to bring in more automation."

The number of successful attacks against high-profile businesses has clearly increased in recent years, with the compromise of Target's point-of-sale systems just one example of the variety of methods that cyber criminals are using to steal data on a large scale.



IBM moved today to take a bigger bite out of fraud by combining various pieces of software and services into a common framework that is simpler to deploy.

Rick Hoene, worldwide fraud solutions leader for IBM Global Services, says that while IBM has been delivering technologies to fight fraud for over 20 years, the scope of criminal fraud activity now requires a more integrated approach. To that end, IBM is launching a Smarter Counter Fraud initiative, which is based on IBM Counter Fraud Management Software and existing assets. This combination creates a single offering that is simpler to both acquire and install.

Based on IBM’s Big Data analytics technologies, the IBM software is designed to aggregate data from external and internal sources and apply analytics in ways to prevent, identify and investigate suspicious activity. It includes analytics that identify non-obvious relationships between entities, visualization technology that identifies patterns of fraud, and machine-learning software to help prevent future occurrences of fraud based on previous discoveries.
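
As a toy illustration of what "non-obvious relationships between entities" can mean in practice, the Python sketch below links records that share an identifying attribute even when the names differ. The data and field names are invented for the example; production systems add scoring, graph analytics and machine learning on top.

```python
# Link entities that share identifying attributes (phone, address, device),
# a basic building block of fraud-relationship detection.
from collections import defaultdict
from itertools import combinations

claims = [
    {"id": "C1", "name": "A. Smith", "phone": "555-0100"},
    {"id": "C2", "name": "B. Jones", "phone": "555-0100"},  # shared phone
    {"id": "C3", "name": "C. Brown", "phone": "555-0199"},
]

by_phone = defaultdict(list)
for c in claims:
    by_phone[c["phone"]].append(c["id"])

# Any attribute shared by multiple entities is a candidate hidden link
# worth investigating, even though the claimant names differ.
links = [pair for ids in by_phone.values() if len(ids) > 1
         for pair in combinations(ids, 2)]
print(links)  # [('C1', 'C2')]
```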



While Hadoop may make Big Data more accessible, the setting up of a Hadoop cluster on commodity servers is not particularly simple.

To help IT organizations automate that process, Continuuity today announces it is contributing Loom, cluster management software that automates the process of provisioning a Hadoop cluster, to the open source community.

Continuuity CEO Jonathan Gray says it is a byproduct of the company’s effort to provide an application development environment for Hadoop that can be deployed on a private or public cloud. As customers began to build applications on the Continuuity platform-as-a-service (PaaS) environment, it became apparent they needed help with the DevOps elements of Hadoop.
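
The pattern tools like Loom automate looks roughly like the hedged sketch below: declare the cluster you want, then derive concrete nodes and service roles from the declaration. The spec format and functions here are hypothetical, not Loom's actual API.

```python
# Declarative cluster provisioning in miniature: a spec describes the
# desired cluster, and the provisioner expands it into concrete nodes.
cluster_spec = {
    "name": "analytics-hadoop",
    "nodes": 5,
    "services": ["hdfs-datanode", "yarn-nodemanager"],
    "master_services": ["hdfs-namenode", "yarn-resourcemanager"],
}

def provision(spec):
    """Expand the spec into per-node role assignments (node 0 as master)."""
    nodes = []
    for i in range(spec["nodes"]):
        roles = list(spec["master_services"] if i == 0 else spec["services"])
        nodes.append({"host": f"{spec['name']}-{i}", "roles": roles})
    return nodes

for node in provision(cluster_spec):
    print(node["host"], "->", ", ".join(node["roles"]))
```

The appeal of the declarative form is that the same spec can be replayed against a private or public cloud, which is the DevOps pain point the article says Continuuity's customers hit.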



Monday, 24 March 2014 15:44

Big Data Funding Spree Continues

Network World — Venture capital firms continue to funnel big sums of money to big data startups.

Most recently, Cloudera raised $160 million in new financing from investors including T. Rowe Price and Google Ventures. The latest round for Cloudera (which offers its own distribution of Hadoop plus integrated tools) brings its total funding to $300 million.

On the same day Cloudera announced its venture capital windfall, analytics startup Platfora announced funding of its own. Platfora, based in San Mateo, closed a $38 million round from investors including Tenaya Capital, Citi Ventures, Cisco and Allegis Capital. The latest round brings Platfora's total financing to $65 million.

Platfora's analytics and visualization software is designed to run on top of Hadoop; existing customers include DirecTV, Disney, and The Washington Post.



CIO — HR professionals and recruiters continue to rely on big data to refine the application and hiring process. They are tapping data analytics to predict ROI, performance and likely behavior. However, with so much valuable data available, it's easy to gloss over one of the most important parts of the recruiting process: the human element.

Focusing on "small data" can not only improve the speed and efficiency of your hiring process and pinpoint obstacles in your organization, it can make it easier to find passive talent candidates.

"It's been so exciting over the last few years to see the number of data collection and analysis tools growing, and I have no problem with using those tools," says Jason Berkowitz, vice president of client services at Seven Step Recruiting Process Outsourcing (RPO.)



Companies invest in enterprise risk management to identify, analyze, respond to and monitor risks and opportunities in their internal and external environments. These investments maximize opportunities, help avoid nasty surprises and provide reasonable assurance on the achievement of the organization’s objectives.

Established risk assessment processes can suffer from stale thinking in identifying and evaluating risks, especially risks that are ever-changing. Here are five ways to refresh your process, push thinking beyond the “known knowns” and improve the quality of thinking in your risk assessment process.



In the 1980s, the average annual cost of natural disasters worldwide was $50 billion. In 2012, Superstorm Sandy met that mark in two days. As it tore through New York and New Jersey on its journey up the east coast, Sandy became the second-most expensive hurricane in American history, causing in a few hours what just a generation ago would have been a year’s worth of disaster damage.

Sandy’s huge price tag fit a trend: Natural disasters are costing more and more money. See the graph below, which shows the global tally of disaster expenses for the past 24 years. It’s courtesy of Munich Re, one of the world’s largest reinsurance companies, which maintains a widely used global loss data set. (All costs are adjusted for inflation.)



Business Continuity specialist Vocal has announced that it has been shortlisted for a BCI North America Award in the ‘Business Continuity Innovation of the Year’ category, after nominating its product ‘Command’ for the accolade.

“Command has spent many months in development, and years in its realisation, so this recognition is very important to us,” says Vocal’s Trevor Wheatley-Perry. “It’s a truly unique and highly revolutionary solution, because it means that for the first time, any organisation, of any size, anywhere in the world can precisely replicate and manage its operations, systems, and processes during an incident.”

Built on Vocal’s award-winning iModus platform, Command is an easy-to-use tool which offers Vocal’s clients a comprehensive overview of their business continuity plans and fixed assets, a multi-faceted communications system to relay processes to people instantly, and a full audit of decisions made and actions taken during an incident. With hopes to implement the solution across a diverse range of new industries in 2014, the shortlist for the prestigious BCI Award has come at precisely the right time.

“The BCI North America Awards recognise the outstanding contributions of organisations and individual professionals working in the USA and Canada’s business continuity industry,” Trevor continues.

“It’s fantastic to have been shortlisted for such a prestigious award, and as all winners are automatically entered into the BCI Global Awards, being considered for this accolade might turn into some even more exciting opportunities in the future.”

The BCI North America Awards event will be held as part of the Disaster Recovery Journal Spring World Show 2014, a four-day industry gathering which begins on 30 March in Orlando, USA.

About Vocal:

Vocal is recognised throughout the world as a trusted innovator of multi award-winning and proven business continuity and communication solutions. In 2007, Vocal launched iModus: the first fully integrated business continuity suite encompassing Notification, Planning, Mapping, Alerting, Staff Safety and Incident Management modules. A multi award-winning solution, iModus was selected as the emergency messaging system for incident management during the London 2012 Olympic and Paralympic Games.

On March 12th, 2014, Everbridge, the leader in Unified Critical Communications, acquired Vocal. The strategic acquisition further elevates Everbridge’s status as the world’s largest provider of emergency notification and critical communication solutions. With the addition of Vocal, Everbridge now offers the broadest product family in the industry, delivered through 12 distributed datacenters, and supported by employees in seven offices in North America, Europe, and Asia. The combined entity will serve more than 2,500 global clients who use the solution to communicate with over 50 million unique end-users every year.

About Command:

COMMAND is all about actions: the ability to plan ahead and see what will actually happen, in any scenario, at any time. Built on the iModus platform, COMMAND has communications at its core, relaying processes according to the people who need to know, and manipulating data according to exacting operations and processes. We believe that COMMAND is the most exciting innovation the industry has seen in recent years, a new language that will transform the way incidents are managed across the world.

While it is highly innovative, COMMAND is beautiful in its apparent simplicity. It replicates a flow chart of any organisation’s process mapping systems; actions and priorities for varying scenarios are pre-set, one action at a time, so that even in the worst set of circumstances, an untrained member of staff could use the system to confidently and effectively respond to a business interruption.

COMMAND will eliminate the compromises involved in paper processes. It will allow any organisation to build a resilient framework, step-by-step and layer-by-layer according to its processes. The added incentive here is being able to test, measure and evaluate as part of the process, leaving no stone unturned.  

Key Features of Command:

  • Automatic Workflow at the touch of a button
  • Bespoke Software
  • Precision Management
  • Double Edged: Joint role of director and recorder

CIO — The best practices and technologies involved with data loss prevention (DLP) on mobile devices aim to protect data that leaves the security of the corporate network. Data can be compromised or leaked for a variety of reasons: Device theft, accidental sharing by an authorized user or outright pilferage via malware or malicious apps.

The problems associated with mobile data loss have been compounded by the uptick in employees bringing their own devices to work, whether they have permission from IT or not. In a BYOD situation, the user owns the device, not the organization, which makes security somewhat trickier for IT to establish and maintain.

At a minimum, any mobile device that accesses or stores business information should be configured for user identification and strong authentication, should run current anti-malware software and must use virtual private networking (VPN) links to access the corporate network.
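
That baseline translates naturally into a policy check. Here is a minimal sketch of gating corporate access on those controls; the policy fields are hypothetical, though real MDM/EMM products expose equivalents.

```python
# Gate corporate access on the minimum controls the article lists:
# strong authentication, current anti-malware, and VPN-only access.
REQUIRED = {"strong_auth": True, "antimalware_current": True, "vpn_only": True}

def compliant(device: dict) -> bool:
    """Return True only if the device meets every baseline control."""
    return all(device.get(k) == v for k, v in REQUIRED.items())

device = {"strong_auth": True, "antimalware_current": False, "vpn_only": True}
if not compliant(device):
    print("Block corporate access until the device is remediated.")
```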



IDG News Service (Bangalore Bureau) — Brazil's lawmakers have agreed to withdraw a provision in a proposed Internet law, which would have required foreign Internet companies to host data of Brazilians in the country.

The provision was backed by the government in the wake of reports last year of spying by the U.S. National Security Agency, including on communications by the country's President Dilma Rousseff.

The legislation, known as the "Marco Civil da Internet," will be modified to remove the requirement for foreign companies to hold data in data centers in Brazil, according to a report on a website of the Brazilian parliament.



Businesses can’t function if they don’t have customers. When customers find other solutions and move away, it’s therefore a threat to business continuity. Conventional banks may be at risk if a new development in online-only banking takes off. Startup ‘Simple’ (that’s the company’s name), for instance, is giving clients an innovative alternative. Its solution is to eliminate fees, move all banking activity to the Internet and offer online apps to help track budgets and finances. It makes its money from interest charges and internetwork payments, but can work with lower margins than conventional bricks-and-mortar banks that must pay for the operation of high street branches. Is this the end of the old-style banks?



Rudyard Kipling once said, “If history were told in the forms of stories, it would never be forgotten.” Could the same be true for your data?

Mike Cavaretta argues that it is true. Cavaretta is a veteran data scientist, as well as a manager at the Ford Motor Company in Dearborn, Michigan. In a recent GigaOm column, he says telling a good story is key to helping others understand your data.

“Many analytics presentations crash and burn because no one answered the question, ‘So what?’” he writes. “Almost as bad are the presentations with dense formulas and a single R² value. Take your audience on a data journey.”



Despite 77 percent of companies suffering an incident in the past two years, over a third of firms (38 percent) still have no incident response plan in place should an incident occur.

Arbor Networks, Inc., has published the results of an Economist Intelligence Unit survey on the issue of incident response preparedness that it sponsored. The Economist Intelligence Unit surveyed 360 senior business leaders, the majority of whom (73 percent) were C-level management or board members from across the world, with 31 percent based in North America, 36 percent in Europe and 29 percent in Asia-Pacific.

The report, entitled ‘Cyber incident response: Are business leaders ready?’, also found that only 17 percent of businesses globally are fully prepared for an online security incident.



The global energy sector is increasingly vulnerable to cyber-attacks and hacking, due to the widespread adoption of Internet-based, or ‘open’, industrial control systems (ICS) to reduce costs, improve efficiency and streamline operations in next-generation infrastructure developments.

According to the Marsh Risk Management Research paper, ‘Advanced Cyber Attacks on Global Energy Facilities’, energy firms are being disproportionately targeted by increasingly sophisticated hacker networks that are motivated by commercial and political gain.

Releasing the paper at Marsh’s bi-annual National Oil Companies (NOC) conference being held in Dubai, Andrew George, Chairman of Marsh’s Global Energy Practice, commented:



By Vali Hawkins Mitchell


I live in Seattle. I listen to KOMO news daily. The helicopter traffic guys keep us moving every day. Their copter crashed this morning. 2 dead. More injured. And as I watch the news on KOMO TV…the news employees are reporting on the event with their emotions tucked back as deep as possible to do their jobs. This was their company, their co-workers, they saw the crash out their window. My heart goes out to them. And I am again humbled by the topic of my book about people in their companies having big emotions, disasters, and hoping they have a plan in place. Wishing I could run down there to help. Hoping they have a protocol in place for the immediate situation, knowing counselors will be volunteering, HR and EAP providers will be called in, and the day will progress to the next news story. The staff will be expected to just move forward. The costs will be numerated into small boxes in accounting books. There will be funerals. There will be memorials. I wonder if the person driving the car to work who was missed by the ball of flame by only a few feet will have a good day at work or go home and get drunk. I wonder. I care. Emotional preparations for unexpected incidents cannot be ignored. I am weary of trying to “sell a damn book” in order to get people to consider the long term ramifications of emotional impact on companies. But I know the costs, because I have done the math and provided companies the format to do so. I know the costs and emotionally charged long term influence of the IMPACT of this helicopter next to the Seattle Space Needle will involve KOMO employees, locals, tourists who can’t go on the monorail today, first responders, and much more.

With planes disappearing, nations unhinged, and helicopters falling out of the sky, I admit I wish this morning that I didn’t know that each event was not only an emotional strain, but a fiscal devastation to all parties. Once again, I encourage you and your company to plan for the expected and the unexpected. Sigh.

"The Cost of Emotions in the Workplace"  www.improvizion.com

Computerworld UK — Most companies are spurning the chance to improve their anti-fraud and anti-bribery efforts by not taking full advantage of big data analysis, according to research from business consulting firm EY.

EY found that 63 percent of senior executives surveyed at leading companies around the world agreed that they need to do more to improve their anti-fraud and anti-bribery procedures, including the use of forensic data analytics (FDA).

The survey polled more than 450 executives in 11 countries, including finance professionals, heads of internal auditing and executives in compliance and legal areas. They were asked about their use of FDA in anti-fraud and anti-bribery compliance programs.



CIO — Project managers are in short supply, and that will leave many organizations woefully disadvantaged as the economy rebounds, according to a recent study by project management training company ESI International.

The ESI 2013 Project Manager Salary and Development Survey, based on data from 1,800 project managers in 12 different industries across the U.S., reports that as projects continue to increase in complexity and size, many organizations find themselves both understaffed and with underdeveloped project management professionals. And that's putting them at a competitive disadvantage.

"Budget constraints, an aging base of professionals and a looming talent war all contribute to a talent crisis that should be addressed from the highest levels of the organization," says Mark Bashrum, vice president of corporate marketing and open enrollment at ESI International.



Big data frightens me sometimes. Seeing this headline from Information Week, “IBM: We'll Stand Up To NSA,” gave me heart palpitations.

It sounds noble and all, but really, when a large corporation is willing to stand up to the NSA over data about you or your company … is there any chance you’re winning? It reminds me of Tolkien’s “The Hobbit,” when the trolls are arguing over how to cook the dwarves.  Roasting, boiling or jelly — it’s all the same to the dwarves in the end, right?

David J. Walton, a litigator who specializes in technology issues, took a look at how companies are really using Big Data. He’s an attorney, so this isn’t about business cases, ROI or any of that stuff — it’s about law, and when viewed through that lens, this is Brave New World stuff.



Wednesday, 19 March 2014 14:57

VoIP Technology Improvements on the Horizon

It has been a decade since VoIP became a standard telecommunications tool. Its age has not slowed the development of the technology, however. For instance, Twilio this week announced a VoIP advancement that it says could improve ease of use of enterprise-based systems.

According to GigaOm, Twilio will use an approach called Global Low Latency (GLL), which repurposes the approach used by the public switched telephone networks (PSTN) that VoIP is displacing.

A call on the PSTN offers great quality because a circuit is guaranteed. VoIP, to this point, has cut costs by sending packets via the best available path. Though cheaper, this approach introduces imperfections. Twilio’s idea is to limit the extremes of the traditional VoIP approach:
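
The excerpt stops there, but the general idea behind latency-aware call routing can be sketched as follows: probe several geographically distributed media relays and anchor the call at the nearest one, rather than letting packets take any best-effort path. The hosts below are invented and the probe is simulated; this is not Twilio's API.

```python
# Pick the media relay with the lowest measured round-trip time.
import random
import time

RELAYS = ["us-east.example.net", "eu-west.example.net", "ap-south.example.net"]

def probe_rtt(host: str) -> float:
    """Simulate an RTT probe; a real client would time a ping or STUN round trip."""
    random.seed(host + str(int(time.time()) // 300))  # stable for ~5 minutes
    return random.uniform(10, 250)  # milliseconds

best = min(RELAYS, key=probe_rtt)
print(f"Anchor media at {best} ({probe_rtt(best):.0f} ms RTT)")
```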



CSO — "Data Lake" is a proprietary term. "We have built a series of big data platforms that enable clients to inject any type of data and to secure access to individual elements of data inside the platform. We call that architecture the data lake," says Peter Guerra, Principal, Booze, Allen, Hamilton. Yet, these methods are not exclusive to Booze, Allen, Hamilton.

"I have read what's available about it," says Dr. Stefan Deutscher, Principal, IT Practice, Boston Consulting Group, speaking of the data lake; "I don't see what's new. To me, it seems like re-vetting available security concepts with a name that is more appealing." Still, the approach is gaining exposure under that name.

In fact, enterprises are showing enough interest that vendors are slapping the moniker on competing solutions. Such is the case with the Capgemini / Pivotal collaboration on the "business data lake" where the vendors are using the name to highlight the differences between the offerings.



Wednesday, 19 March 2014 14:56

Pulling IT Back from the Shadows

Shadow IT is a fact of life for nearly every IT department across the board. But does that mean it’s time to throw in the towel? Not exactly, but it does mean that things will have to change, both for users and managers of data infrastructure.

First, some numbers. According to CA Technologies, more than a third of IT spending is now heading to outside IT resources, and this is expected to climb to nearly half within three years. The figures are shocking, but keep two things in mind: First, they come from CA, which makes its living building systems that help organizations keep track of their data infrastructure, and second, they represent all outsourcing activity, not just what is termed “shadow IT.”



Wednesday, 19 March 2014 14:55

How good are your business continuity plans?

In this article Charlie Maclean-Bristol, a highly experienced business continuity consultant, lists ten areas where many business continuity plans can be improved. How does your plan stack up?

Charlie’s list is as follows:

1. Scope. On many of the business continuity plans that I see it is not clear what the scope of the plan is. The name of the department may be on the front of the plan but it is not always obvious whether this is the whole of the department, which may cover many sites, or just the department based in one location. It should also be clear within strategic and tactical plans what part of the organization the plan covers. Where large organizations have several entities and subsidiaries it should be clear whether the tactical and strategic plans cover these.

2. Invocation criteria. I believe it should be clear what sort of incidents should cause the business continuity plan to be invoked. I also believe that these invocation criteria should be ‘SMART’ (specific, measurable, attainable, realistic and timely), so as not to be open to misinterpretation. The criteria should be easy to understand, so that if you get a call at 3 a.m. to inform you of an incident it is obvious whether to invoke or not. Focus should be on the loss of an asset such as a building or an IT system, not on the cause of the loss. There needs to be a ‘catch-all’ in the invocation criteria which says 'and anything else which could have a major impact on our operations’, so that the criteria are not too rigid if you need to invoke for an incident you have not yet thought of.



Costs and benefits of BCM: let us ask the right questions, not answer the wrong ones

By Matthias Rosenberg

The costs and benefits of BCM: I have dealt with this issue for almost 20 years now and it always goes back to one question: Why would a company invest in something that does not provide a contribution to revenue and that is meant to protect the company against something that hopefully never happens? This question is quite understandable from a business perspective and therefore justified as a basic question. Those who cannot give a plausible answer to this question will fall at the first hurdle. This issue is fundamental to our profession and at the same time underrepresented in the BCM literature. Even the Good Practice Guide (GPG) 2013 does not see the task of selling BCM as a central task of a BC manager; but in reality the sale and presentation of the business continuity topic are critical for our success.

Soft skills are as important in BCM as in any other management discipline.

Let me give you some examples: BCM professionals need strong presentation skills and they need strong training skills (e.g. to train BCM coordinators). These are specific skills that can be described. Analytical skills (e.g. to prepare BIA results for top management) and communication skills are equally important. In the end it is not enough to read another BCM standard, to take part in a training course or to buy BCM software and hope to run a BCM programme successfully. A BCM professional needs experience and one of the most important skills to implement a BCM programme successfully: patience.



By Jayne Howe

The costs associated with developing and implementing a business continuity program in your organization can vary greatly. Most of the cost variables are going to be dependent on two factors: what you already have in place and what components still need to be addressed; and whether your organization has internal business continuity expertise.

It’s likely that any organization successfully operating in this century will have at least a few basic components in place. They may be components that are necessary to be eligible for insurance coverage; to meet the criteria of regulatory bodies that your organization’s industry needs to be part of; or to comply with basic building fire codes. But even if you don’t have internal BC expertise, you don’t need to start with a blank piece of paper to try to configure the other components that are necessary for a complete and robust business continuity program.

Using a business continuity standard as a base guideline for your own internal development can assist in identifying those modules that are necessary to develop an all-inclusive and comprehensive BC program. This can be extremely helpful in preventing you from travelling down an incorrect or incomplete path, and therefore saving wasted resource time and costs.



Wednesday, 19 March 2014 14:53

Less risk, more reward

Managing vulnerabilities in a business context.

By Paul Clark

Network security can be both an organization’s saviour, and its nemesis: how often does security slow down the business? But security is something you can’t run away from. Today’s cyber-attacks have a direct impact on the bottom line, yet many organizations lack the visibility to manage risk from the perspective of the business.

Traditionally, network security revolves around scanning servers for vulnerabilities, reviewing them and the risk they pose by drilling down through the reporting to assess how the vulnerabilities could be exploited, and then looking at how those risks can be remediated. Looking at vulnerabilities in this technical context leaves a lot to be desired in terms of actual impact on the business.

These risks can be put into two groups. There is the security risk, which is about compromise. How can the network be compromised and what would happen if the vulnerability was exploited? What damage would be done, and what information could be lost? Assessing these types of risk is usually the domain of the information security team.
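
One way to act on that distinction is to rank vulnerabilities by business impact as well as technical severity. Below is a minimal Python sketch; the scales and example data are assumed for illustration.

```python
# Rank findings by exploitability combined with the business value of the
# affected asset, rather than by raw technical severity alone.
vulns = [
    {"id": "V-101", "cvss": 9.8, "asset_impact": 2},  # severe bug, low-value test box
    {"id": "V-202", "cvss": 6.5, "asset_impact": 9},  # moderate bug, payment server
]

def business_risk(v):
    # Normalize the CVSS score to 0-1 and weight it by what the asset
    # means to the business (assumed 1-10 scale).
    return (v["cvss"] / 10.0) * v["asset_impact"]

for v in sorted(vulns, key=business_risk, reverse=True):
    print(v["id"], round(business_risk(v), 2))
# V-202 outranks V-101 despite the lower technical score.
```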



On March 13th BATS Global Markets (BATS), a leading operator of securities markets in the US and Europe, successfully conducted a full-scale business continuity test of its US equities exchanges BZX and BYX, and BATS Options. These operations were switched to BATS’ disaster recovery site and the company’s global headquarters was disconnected from all outside network access for the entire day.

All of BATS’ Kansas City-area employees reported to the disaster recovery site and conducted their daily routines from the secure and remote location. The BATS offices in New York City, Jersey City, and London continued normal operations.



Wednesday, 19 March 2014 14:51

Flood Safety Awareness Week 2014

[Image: flooded road, with barrier and "high water" and "road closed" signs]

Turn Around Don’t Drown

Turn Around Don’t Drown, or TADD for short, is a NOAA National Weather Service campaign used to educate people about the hazards of driving a vehicle or walking through flood waters.

This year is the 10th anniversary of the TADD program. Hundreds of signs depicting the message have been erected at low water crossings during the past decade. The phrase “Turn Around Don’t Drown” has become a catchphrase in the media, classroom, and even at home. It’s one thing to see or hear the phrase, and another to put it into practice.

Flooding is the second-leading cause of weather-related fatalities in the U.S. (behind heat). On average, flooding claims the lives of 89 people each year. Most of these deaths occur in motor vehicles when people attempt to drive through flooded roadways. Many other lives are lost when people walk into flood waters. This happens because people underestimate the force and power of water, especially when it is moving. The good news is that most flooding deaths are preventable with the right knowledge.

Just six inches of fast-moving water can knock over an adult. Only eighteen inches of flowing water can carry away most vehicles, including large SUVs. It is impossible to tell the exact depth of water covering a roadway or the condition of the road below the water. This is especially true at night when your vision is more limited. It is never safe to drive or walk through flood waters. Any time you come to a flooded road, walkway, or path, follow this simple rule: Turn Around Don’t Drown.
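
The eighteen-inch figure is less surprising after a rough buoyancy estimate. The Python sketch below compares the weight of water displaced by a sealed vehicle body with the vehicle's own weight; the footprint and mass are assumed for illustration.

```python
# Rough check on why eighteen inches of water can carry away a large SUV.
RHO_WATER = 1000.0   # density of fresh water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

depth = 18 * 0.0254            # eighteen inches of water, in metres
footprint = 4.8 * 1.9          # assumed footprint of a large SUV, m^2
displaced = depth * footprint  # volume a sealed lower body displaces, m^3

buoyant_force = RHO_WATER * G * displaced  # roughly 41 kN
vehicle_weight = 2000 * G                  # a 2,000 kg vehicle, roughly 20 kN

print(f"buoyancy {buoyant_force/1000:.0f} kN vs weight {vehicle_weight/1000:.0f} kN")
# Buoyancy is roughly double the vehicle's weight before the water even
# starts moving; the current then supplies the push that carries it away.
```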

For more information on the TADD program, visit http://tadd.weather.gov

For flood safety tips, visit the newly redesigned website at www.floodsafety.noaa.gov or http://emergency.cdc.gov/disasters/floods/index.asp

Essentially the Non-Executive Director's role is to provide a creative contribution to the board by providing objective criticism. So I recommend that all Non-Executive Directors consider challenging the board to count the costs involved in deploying business continuity management and to balance these costs against quantifiable benefits gained from its Business Continuity Management System and Programme.

The Good Practice Guidelines suggest that embedding BCM is hard to measure, but secretly I believe that Executive Directors deep down in their hearts and minds know full well if they are merely trying to be compliant.

In the busy world of the Executive, maybe they only have time to ask if the business is adequately covered from a risk and business continuity perspective. Is it the difference between plausible deniability and culpable liability? To paraphrase a well-known political interviewer: “Did you know there was a problem, in which case you are culpable or did you genuinely not know in which case you were incompetent, which is it?”



Wednesday, 19 March 2014 14:49

Counting the cost of consultancy support

Before I start I feel I should make two important points:

1) If you’re expecting a serious, academic blog containing a reasoned argument backed up by empirical evidence, you’ve come to the wrong place;

2) I was asked to write 500 words, which I understand is what proper bloggers do. I’ve exceeded that ever so slightly so if you have a short attention span, you might want to leave now.

Assuming you’re still with me…



Well, it’s Business Continuity Awareness week and with the aim of raising the profile and understanding of the whole thing, from multiple perspectives, it is a fantastic idea. I wonder how many BC ‘professionals’ will be participating and become more aware. Really aware.

Being an educator is a privilege, and developing knowledge, capability and understanding, along with confidence and perhaps earning power, is a fantastic motivator. From the learning perspective, education should develop enthusiasm, knowledge and understanding in learners. To understand learners you need to understand what potentially limits their own capability, particularly when they are at the start of the higher education journey.

Here’s the thing, from my perspective: there are a significant number of practitioners and consultants in the BC education system (not training – education!), but not enough. Of course, I would say that, wouldn’t I? But the point is this: of the thousands of practitioners, highly experienced perhaps, with professional memberships and in good positions in their businesses, not nearly enough make the time, effort or commitment to become educated in their profession.



CIO — Imagine you're working for a big financial services company and you stupidly left your BYOD smartphone on the seat in a commuter train, yet you're not really sure where you've misplaced it. So you search high and low, in your home and car, at restaurants and coffee shops.

In the back of your mind, you know that the company requires you to contact those robotic IT folks within 24 hours of losing your phone so that they can remotely wipe it, but you don't want that to happen. They'll delete precious notes that you need for a client, maybe even personal photos that you forgot to back up. Besides, you haven't searched everywhere yet.

You miss the 24-hour window, and the company promptly fires you.

Fiction? Hardly.



Techworld — Most large organisations now make advance plans to bring in external security consultancies should they suffer data and security breaches, a new survey for Arbor Networks has found.

The Economist Intelligence Unit (EIU) study (registration required) of 360 global senior business executives, backed up by interviews with a dozen security executives, found that around two thirds of firms had formal incident response plans in place for serious security incidents, with the same number complementing this with a dedicated in-house response team.

Despite this apparent readiness, 80 percent of larger organisations had made advance arrangements with external experts, mainly in computer forensics, to supplement the initial response by an internal IT team.



The data snooping debate has quietened down a little recently, even if Edward Snowden’s name still crops up here and there. Whether or not the revelations about intelligence activities have changed much in terms of governmental attitude and behaviour remains to be seen. Pressure can still be applied to Internet, cloud and telecommunications service providers to provide data about users, and the only safe data encryption may be the one you do yourself. Indeed, increasingly large quantities of information are generated every day and are available for analysis by government agencies. But who decides what to do with all the data?



Wednesday, 19 March 2014 14:45

Outsourcing Doesn’t Eliminate Risk

For decades, businesses have used ‘outsourcing’ (obtaining goods or services through a 3rd party, rather than from an internal source) as a means of reducing expenses, eliminating overhead and reducing risks.

As a Business Continuity professional, I’ve always been leery of the risk reduction angle.  While outsourcing may shift the burden of risk onto the outsourced party, it doesn’t eliminate the consequences of the risk, should it occur.  It’s easy to dismiss the potential impact of a disruption that occurs to an outsourced process, function or service.  But – like every other risk – the internal ‘ripple effect’ can still be felt, even though the actual disruption happens to that 3rd party.

Most outsourcing contracts require that the 3rd party have a Business Continuity and/or IT Disaster Recovery Plan in place.  Too often, that Plan’s existence is never verified.  You should know how often it is updated and tested.  You should get a copy and read it (even if you have to visit the 3rd party to view it).  Perform your own audit: is the plan adequate when compared to your own BCM standards?  If not, make suggestions for improvements, and follow-up to assure those improvements occur.



CIO — We're all familiar with the Target payment card breach late last year. Up to 110 million customer records, including some 40 million payment card numbers, were stolen through a huge hole in the company's network, right down to the security of the PIN pads. The breach cost Target CIO Beth Jacob her job; it was, and still is, a serious matter.

Target is obviously a public company, so this situation garnered a lot of attention. As a CIO or member of the executive technical staff, though, there are some observations about the situation that can apply to your company.

Here are four key lessons from Target's very public example of a data breach.



Network World — When Microsoft stops supporting Windows XP next month, businesses that have to comply with payment card industry (PCI) data security standards, as well as healthcare and financial standards, may find themselves out of compliance unless they call in some creative fixes, experts say.

Strictly interpreted, the PCI Security Standards Council requires that all software have the latest vendor-supplied security patches installed, so when Microsoft stops issuing security patches April 8, businesses processing credit cards on machines using XP should fall out of PCI compliance, says Dan Collins, president of 360advanced, which performs security audits for businesses.

But that black-and-white interpretation is tempered by provisions that allow for compensating controls: supplementary procedures and technology that help make up for whatever vulnerabilities an unsupported operating system introduces, he says.



Celebrating this St. Patrick’s Day, I’m reminded that luck has very little to do with being prepared to recover your systems in the event of an outage.  In fact, one of the most important lessons from the 2014 Annual Report on the State of Disaster Recovery Preparedness from the Disaster Recovery Preparedness Council involves a commitment to taking action—and not accepting the status quo.  Based on hundreds of responses from organizations worldwide, the Annual Report offers a few key suggestions for implementing DR best practices so that companies can be much better prepared to recover from outages or disasters.

You can download the report for free at http://drbenchmark.org/

Here are three of the Annual Report’s major recommendations:

  1. Build a DR plan for everything you need to recover, including applications, networks and document repositories, business services such as the entire order processing system, or even your entire site in the event of an outage or disaster.  It’s an important exercise that will force you to prioritize your DR planning efforts.
  2. Define Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) for critical applications. Without these important metrics, you cannot set proper expectations with management, employees, and customers about your DR capabilities and how to improve them.  You need to set the ground rules before a disaster or outage happens.  There is a free tool, for example, that can help you test your own Recovery Time Actuals (RTAs) in VMware environments.
  3. Test critical applications as frequently as possible to validate that they will recover within their RTOs/RPOs; a minimal example of such a check follows this list. For DR preparedness to improve, companies around the world must begin to automate these processes and get beyond the high cost in time and money of verifying and testing their DR plans.  If you don’t test, you simply can’t know what will happen.
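As a minimal illustration of the check described in recommendation 3, the sketch below compares measured recovery results (Recovery Time/Point Actuals) against declared targets. The application names and numbers are hypothetical; this is not taken from the Annual Report or from any particular tool.

    # Minimal sketch: compare measured recovery results against declared
    # RTO/RPO targets. All names and numbers here are hypothetical.
    apps = {
        # name: (rto_min, rpo_min, measured_rta_min, measured_rpa_min)
        "order-processing": (60, 15, 45, 10),
        "email":            (240, 60, 300, 30),
    }

    for name, (rto, rpo, rta, rpa) in apps.items():
        status = "PASS" if (rta <= rto and rpa <= rpo) else "FAIL"
        print(f"{name}: RTA {rta}m vs RTO {rto}m, "
              f"RPA {rpa}m vs RPO {rpo}m -> {status}")

Even a simple pass/fail report like this, run after every test, turns DR preparedness from a once-a-year audit item into a routine measurement.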

As both intentional and accidental threats to IT systems continue to grow and accelerate, we at the DR Preparedness Council have dedicated our efforts to increasing awareness of the need for DR preparedness. At the same time, we will continue to identify and share best practices as they evolve so that we can help organizations worldwide feel more secure and confident about their own ability to recover systems when outages and disasters strike.

To get you started, you can do a few things right now to improve your own DR preparedness:

Visit the www.DRbenchmark.org website for your free copy of the 2014 Annual Report

Take time out to fill out the online benchmark survey to see how you are doing compared to others

Get a free trial of PHD Virtual ReliableDR and see how you can affordably test your recovery capabilities every week, every day or every hour if you want—a breakthrough in DR planning and preparedness.

To learn more, tune into an online webinar on Wednesday, March 26 with The Disaster Recovery Journal.  At this webinar, I’ll be giving a sneak-preview of my presentation at Disaster Recovery Journal’s Spring World 2014, the industry’s largest business continuity conference and exhibition, taking place March 30 – April 2, 2014 in Orlando, Florida. By attending  this webinar you will learn:

- The Findings of the 2013 DR Preparedness Survey
- What Downtime Costs in Real Dollar Terms
- Top Causes of Outages
- Best Practices from the Best Prepared Organizations
- How to Increase Your DR Preparedness


Emerging risks that risk managers expect to have the greatest impact on business in the coming years could be on the cusp of a changing of the guard, according to an annual survey released by the Society of Actuaries.

It found that cyber attacks and rapidly changing regulations are of growing concern to risk managers around the world, and may slowly be replacing oil price shocks and other economic risks that were of major concern just six years ago.

Some 47 percent of risk managers saw cyber security as a significant emerging risk in 2013, up seven points from 40 percent in 2012.



Another Business Continuity Awareness Week has arrived.

In this time zone that tends to mean late nights if you want to catch the webinar program live rather than on the replay. Replays are fine if you only want to passively consume the material, but if you also plan to ask questions and engage with the presenter, then the live broadcast is the only option.



Monday, 17 March 2014 16:24

8 Ways to Improve Wired Network Security

Network World — We sometimes focus more on the wireless side of the network when it comes to security because Wi-Fi has no physical fences. After all, a war-driver can detect your SSID and launch an attack while sitting out in the parking lot.

But in a world of insider threats, targeted attacks from outside, as well as hackers who use social engineering to gain physical access to corporate networks, the security of the wired portion of the network should also be top of mind.

So, here are some basic security precautions you can take for the wired side of the network, whether you're a small business or a large enterprise.



During Flood Safety Awareness Week, March 16 to 22, the National Oceanic and Atmospheric Administration (NOAA) and the Federal Emergency Management Agency (FEMA) are calling on individuals across the country to Be a Force of Nature: Take the Next Step by preparing for floods and encouraging others to do the same.

Floods are the most common — and costliest — natural disaster in the nation, affecting every state and territory. A flood occurs somewhere in the United States or its territories nearly every day of the year. Flood Safety Awareness Week is an opportunity to learn about flood risk and take action to prepare your home and family.

"Many people needlessly pass away each year because they underestimate the risk of driving through a flooded roadway,” said Louis Uccellini, Ph.D., director of NOAA's National Weather Service. "Survive the storm: Turn Around Don't Drown at flooded roadways."

“Floods can happen anytime and anywhere,” said FEMA Administrator Craig Fugate.  “Take steps now to make sure your family is prepared, including financial protection for your home or business through flood insurance. Find out how your community can take action in America’s PrepareAthon! with drills, group discussions and community exercises at www.ready.gov/prepare.”

Our flood safety awareness message is simple: know your risk, take action, and be an example. The best way to stay safe during a flood and recover quickly once the water recedes is to prepare for a variety of situations long before the water starts to rise.

Know Your Risk:  The first step to becoming weather-ready is to understand that flooding can happen anywhere, affect where you live and work, and impact you and your family. Sign up for weather alerts and check the weather forecast regularly at weather.gov. Now is the time to prepare by ensuring you have real-time access to flood warnings via mobile devices, weather radio and local media, and by avoiding areas that are under flood warnings. Visit ready.gov/alerts to learn about public safety alerts and visit floodsmart.gov to learn about your flood risk and the flood insurance available.

Take Action: Make sure you and your family members are prepared for floods.  You may not be together when severe weather strikes, so plan how you will contact one another by developing a family communication plan. Flood insurance is also an important consideration: just a few inches of water inside a home can cause tens of thousands of dollars in damage that typically will not be covered by a standard homeowner’s insurance policy.  Visit Ready.gov/prepare and NOAA to learn about more actions you can take to be better prepared and to find important safety and weather information.

Be an Example: Once you have taken action, tell family, friends, and co-workers to do the same. Technology today makes it easier than ever to be a good example and to share the steps you took to become weather-ready.

NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter and our other social media channels.

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. http://www.ready.gov/

CIO — WASHINGTON — CEOs at some of the nation's leading tech companies see boundless potential for big data and smarter, integrated systems to address major social challenges in areas ranging from medicine to education to transportation — but at the same time, they worry that policymakers at home and abroad could stand in the way of that vision.

Top executives at firms such as Dell, IBM and Xerox gathered in the nation's capital this week under the auspices of the Technology CEO Council, bringing with them a message that the data economy is imperiled by concerns about security and privacy and protectionist policies that could limit the growth of cloud computing and balkanize the Internet.

"The biggest barriers I think that we see are not around the engineering. It's around regulation. It's around protectionism. It's around trust, or lack thereof. It's around policies and procedures," says Xerox Chairman and CEO Ursula Burns, who also chairs the CEO council.



CIO — CIOs who toiled in obscurity while keeping back-office operations running are about to be thrust into the spotlight and will be staring eyeball to eyeball with the all-important customer. At least, this is the main finding in an IBM survey of more than 1,600 CIOs.

"CIOs are increasingly being called upon to help their companies build new products and services and transform front-office capabilities," says Linda Ban, global C-suite study director at IBM's institute for business value, adding, "We're starting to see CIOs move outside of IT, maybe doing a stint in marketing."

ERP Gives Way to ROI

Times have certainly changed from the ERP days, when a CIO's career hung on awesome integration skills and deep knowledge of complex software. Today's CIO needs to be well-versed in newfangled technology areas that drive sales, such as mobile, which has become a touch point for reaching emerging digital customers.





By Jacquelyn Lickness

When a hospital in South Carolina spotted bats flying through its facility, officials sprang into action, launching an investigation to prevent a possible rabies outbreak. Because bats are commonly infected with the virus, any contact with the flying mammals is taken very seriously. The hospital quickly involved state public health officials, who then reached out to CDC to help investigate any possible exposure to the rabies virus.


Rabies is a disease typically acquired through the bite of a rabid animal, and can be deadly if the exposure (e.g., bite) is not recognized early enough. Across the globe there are more than 55,000 human deaths from rabies each year. However, in the U.S. human cases are extremely rare, with approximately two human deaths annually. Most exposures to the rabies virus in the U.S. occur through contact with animals that are commonly infected with the virus, including bats, raccoons, skunks, and foxes.

Participation in the response effort

The response effort in South Carolina is ongoing and has involved collaboration among hospital staff, state public health officials, and CDC rabies experts and volunteers. Because hundreds of patients and hospital staff might have come in contact with bats, it was important to assess each individual’s risk of exposure.

In this event, it was critical to understand any interaction with a bat. It is possible that bat bites can go unnoticed if the person is sleeping or sedated, thus placing a person at risk for rabies. As a result, the investigation team asked about certain activities such as bat handling and touching, heavy sleeping or sedation, and other medical history that may indicate exposure.

Rabies expert and CDC Epidemic Intelligence Service (EIS) Officer Dr. Neil Vora orchestrated a response that included the administration of hundreds of phone-based surveys to hospital patients and staff. This large-scale investigation was managed through the CDC Emergency Operations Center. EIS officers, veterinary and medical students, and public health students from nearby Emory University eagerly offered their support for the data-gathering activities. The Student Outbreak and Response Team (SORT), a public health organization from Emory University that assists in outbreak responses, organized a contingent of nearly 20 students to assist the efforts. In the span of four days, a total of 55 volunteers made 817 calls.


The investigation wasn’t just limited to patient questionnaires. Other activities included the distribution of letters and flyers to patients and visitors to warn of bat exposure, mapping and creation of a timeline of bat sightings, and testing of bats for rabies. A quick response was made possible through collaboration between the hospital, South Carolina public health officials, a local pest control company, and all participants at CDC.

Determining the extent of exposure

In total, 53 bats have been sighted in the hospital; 12 were tested, and all available results were negative. That said, other bats in the colony that have not been tested could still have had rabies. After the removal of the bats and other interventions to prevent their re-entry, bat sightings have decreased. As a result of the collaborative effort among CDC, the state public health department, and the affected hospital during this response, partnerships were strengthened and new public health tools and practices were developed. Most importantly, all involved continue taking measures to understand best practices in rabies prevention and treatment to ensure the safety of the public’s health.

DENVER – Flooding is the most common natural disaster in the United States.  Recent years have seen more frequent severe weather events, like Hurricane Sandy, which ravaged the East Coast.  The Federal Emergency Management Agency (FEMA) manages the National Flood Insurance Program (NFIP), whose flood insurance policies provide millions of Americans their first line of defense against flooding.  But those policies are only one component of the program and just part of the protection the NFIP provides to individuals and the American public at large.

For anyone to be able to purchase an NFIP policy, the only requirement is that they live in a participating community.  A participating community can be a town or city, or a larger jurisdiction like a township or county that includes unincorporated areas.  It is up to the community to opt into the NFIP for the benefit of its citizens.  When joining the program, the community agrees to assess flood risks and to establish floodplain management ordinances.  In return for taking these actions, residents are able to purchase federally backed flood insurance policies.

One of the cornerstones of the NFIP is the flood mapping program.  FEMA works with states and local communities to conduct studies on flood risks and develop maps that show the level of risk for that area, called a Flood Insurance Rate Map (FIRM).  The FIRM provides useful information that can assist communities in planning development.  The area that has the highest risk of flooding is the Special Flood Hazard Area (SFHA), commonly called the floodplain.  The SFHA has a one percent chance of being flooded in any given year.  Because of the greater risk, premiums for flood insurance policies for properties in the SFHA are greater than those for properties outside of it.
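To put that one percent annual chance in perspective, the risk compounds over time: assuming independent years, the probability of at least one flood in n years is 1 − (0.99)^n. Over the life of a 30-year mortgage that works out to 1 − (0.99)^30 ≈ 0.26, roughly a one-in-four chance of flooding at least once.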

Equally important to knowing the risks of flooding is having a game plan to address those risks.  This is the role of floodplain management.  Local communities must comply with minimum national standards established by FEMA, but are free to develop stricter codes and ordinances should they choose to do so.  Key elements of floodplain management include building codes for construction in the floodplain and limitations on development in high risk areas.  Floodplain management is an ongoing process, with communities continually reassessing their needs as new data becomes available and the flood risk for an area changes.

The NFIP brings all levels of government together with insurers and private citizens to protect against the threat of flooding.  Federally sponsored flood maps and locally developed floodplain regulations give property owners the picture of their risk and ensure building practices are in place to minimize that risk.  As a property owner, purchasing a flood insurance policy is a measure you can take to further protect yourself.  To find out more about your individual risk contact your local floodplain administrator. For more information on flood insurance policies or to find an agent, visit www.floodsmart.gov or call 1-800-427-2419.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

DENVER – There’s a hidden threat that strikes countless unprepared Americans each year – flooding.  Unlike fire, wind, hail or most other perils, flood damage is not covered by a homeowners’ policy.  An uninsured flood loss can undo a lifetime’s worth of effort and create a mountain of bills.  Fortunately, a National Flood Insurance Program (NFIP) policy provides the defense against such losses and can ensure that a flood doesn’t bring financial ruin.

Flooding is an ever present threat; it can happen at any time and in virtually any location.  While certain areas may be more prone to flooding – especially those in coastal areas or riverine environments – history has shown that almost no place is immune to flooding.  Flooding can have many causes: a quick heavy rainfall or rapid snowmelt can cause flash flooding, a blocked culvert or storm sewer drain can create flooding in a city neighborhood, or prolonged wet weather can swell streams and rivers.  Even dry conditions can pose a threat, as minimal rainfall in wildfire burn areas or drought stricken regions can create flash flooding when soils are unable to absorb even slight precipitation.

Flood insurance is easy to get: the only requirement is that you live in a participating community (which might be a county or other jurisdiction for those living in unincorporated areas).  That’s right: you don’t need to live in a floodplain to purchase a policy.  In fact, if you live outside a floodplain you may be eligible for a preferred risk policy that has a much lower premium than a policy in a higher flood risk area.  And in most cases you can purchase an NFIP policy through the insurance agent you already deal with for other insurance needs.  When that isn’t possible, the NFIP can put you in touch with another agent who can get you a flood insurance policy.

One key difference between an NFIP policy and other insurance policies is the 30-day waiting period before the policy goes into effect.  But that doesn’t mean anyone should view a policy like a lottery ticket, something purchased only if flooding appears imminent.  A policy should be viewed as protection against a continuing threat rather than a hedge against a singular event, such as anticipated spring flooding or flooding after a wildfire.

The average flood insurance premium nationwide is about $700 a year – less than $2 a day for financial protection from what could be devastating effects of a flood to one’s home or business. By purchasing a policy now, or keeping your existing policy, you have peace of mind.  As with any insurance, be sure to talk with your agent about the specifics of your policy – how much coverage you need, coverage of contents as well as structure and any other questions you might have.

Find out more about your risk and flood insurance at www.floodsmart.gov. To purchase flood insurance or find an agent, call 1-800-427-2419.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

Friday, 14 March 2014 14:58

What Could Be Worse?

If you wear the CIO hat of a very large retail company, what could be worse than to have your site broken into and tens of millions of customers’ information records stolen and … right at the peak of the holiday season? Well, I suppose it could be worse if your organization had recently spent millions to buy the latest in security equipment and software and set up a large, 24×7 monitoring center halfway around the world to monitor the critical alerts from security software … and then when someone 12 time zones away did notice that the organization’s networks had been breached and sent a notice to their overlords in the US, nothing much happened for nearly three weeks while the bad guys were stealing millions of customers’ credit card information and passwords.

Of course, that could be a really big problem. In fact, it might get a CIO, along with a number of underlings, fired after having to testify on nationwide TV before Congress, and after launching a huge internal review to see what really happened and placing blame somewhere other than at the top. And all this might cause any company to lose hundreds of millions in sales and frighten away millions of loyal customers … and three months later it might be on the front cover of one of the US’s leading business journals (see “Missed Alarms and 40 Million Stolen Credit Card Numbers: How Target Blew It“).



Tomorrow, March 15, is enshrined as one of the most famous days of all time, the “Ides of March”. On this day in 44 BC, the “Dictator for Life” Julius Caesar was assassinated by a group of Roman noblemen who did not want Caesar alone to hold power in Rome. It was, however, this event that sealed the doom of the Roman Republic, as his adopted son Octavian first defeated the Republic’s supporters and then his rival Marc Antony, and became the first Emperor of the new Roman Empire, taking the name Augustus.

One of the more interesting questions in any anti-corruption compliance regime is to what extent your policies and procedures might apply in your dealings with customers. Clearly, customers are third parties in the sales chain, but most compliance programs do not focus their efforts on customers. However, some businesses only want to engage with reputable and ethical counterparties, so some companies do put such an analysis into their compliance decision calculus.

However, companies in the US, UK and other countries who do not consider the corruption risk with a customer may need to rethink their position after the recent announcements made by Citigroup Inc. regarding its Mexico operations.



Friday, 14 March 2014 14:56

11 Tips to Prepare for SDN

Network World — Making the leap to SDN? Don't jump in blind. It helps to know what software-defined networking is, first off, and then what it can do for you.

Then it's smart to know all the inner workings of an SDN controller, the differences between products offered by established vendors and start-ups, and whether open source and bare metal switching might be an option. Lastly, learn your own network -- will it even support SDN or require a wholesale rip-and-replace? -- and then learn from your peers about their experiences. Here's an 11-tip guide on how to prep for SDNs:



You would think Big Data would be important to any financial services firm, but it turns out, data integration and management are more pressing problems, particularly for buy-side companies, according to a recent FierceFinanceIT article.

Buy-side companies typically sell investment services such as private life insurance, hedge funds, equity funds, pension funds and mutual funds. Sell-side companies are registered members of the stock exchange and handle direct investments, often for the buy-side companies.

The article quotes executives from DataArt, which builds custom software solutions for financial services and other industries. DataArt executives say only a few buy-side companies are dabbling in Big Data as a way to learn more from social media data. Instead, the real focus for asset managers and midsize firms is preparing data for compliance reports.



CSO — It was 2010, and the featured speaker at the monthly ISSA meeting was Maj. Gen. Dale Meyerrose, at the time VP of information assurance at Harris. Dale asked whether we should teach being a responsible cyber citizen in our schools. Back then I had just started working in a large public school district that had never before had an information security analyst. I had lots to share about information security, and lots more to learn about educating users in the business of education!

I think “responsible cyber citizen” is a very appropriate term. So how long have you been one? Where did you learn to become one? We all learned how to drive a car, and hopefully we are responsible drivers; at least there is training and a test for drivers of automobiles. What about being a responsible cyber citizen? There is no official curriculum in our schools for it. Can you actually cause your country and yourself significant monetary losses, or worse, just by not being aware of the dangers that lurk on the Internet? The point is that malware has become quite sophisticated over time; what started as a prank in the 1980s is now a multi-billion dollar cyber-crime industry.



In a new study, the Workplace Bullying Institute found that 27% of Americans have suffered abusive conduct at work and another 21% have witnessed it. Overall, 72% are aware that workplace bullying happens. Bullying was defined as either repeated mistreatment or “abusive conduct.” Only 4% of workers responded that they did not believe workplace bullying occurred.

The study found that 69% of the bullies were men and they targeted women 57% of the time. The 31% of bullies who are female, however, overwhelmingly bullied other women—68% compared to 32% who mistreated men in the workplace. Identifying the perpetrators also shed light on how corporate power dynamics play a role in abusive workplace behavior. The majority of bullying came from the top (56%), while only a third came from other coworkers.



The recent flooding episode has highlighted shortcomings in the UK government’s approach to risk events says Chairman of the Institute of Risk Management, Richard Anderson.

“The terrible flooding in Somerset and the Thames has brought into sharp focus the ‘fingers crossed’ and ‘touching wood’ approach to risk management strategy that is so often adopted by government. It is regrettable that this seems to be the default mechanism to approaching all manner of risks. It is an appalling state of affairs because we understand how to manage risk better now than we ever have in the past. Since the flooding we have seen lots of frenetic activity from government officials which is unproductive and the government would be better served by seeking the advice of the increasing cadre of expert risk professionals who are largely being ignored at the moment.

“Routine risk thinking tends to be handled at a very junior level in government. Much of it is no more than painting by numbers as committees consider whether a risk should be red, amber or green. Most risks are considered in isolation of other risks materialising. That is not what happens in real life: in real life as one thing hits, another does straight after, and another and another. The interdependence of multiple impact risks needs to be managed far more professionally.



The US National Institute of Standards and Technology (NIST) will host the first of six workshops devoted to developing a comprehensive, community-based disaster resilience framework, a national initiative carried out under the President's Climate Action Plan. The workshop will be held at the NIST laboratories in Gaithersburg, Md., on Monday, April 7, 2014.

Focusing on buildings and critical infrastructure, the planned framework will aid communities in efforts to protect people and property and to recover more rapidly from natural and man-made disasters. Hurricanes Katrina and Sandy, and other recent disasters, have highlighted the interconnected nature of buildings and infrastructure systems and their vulnerabilities.

The six workshops will focus on the roles that buildings and infrastructure systems play in ensuring community resilience. NIST will use workshop inputs as it drafts the disaster resilience framework. To be released for public comment in April 2015, the framework will establish overall performance goals; assess existing standards, codes, and practices; and identify gaps that must be addressed to bolster community resilience.

NIST seeks input from a broad array of stakeholders, including planners, designers, facility owners and users, government officials, utility owners, regulators, standards and model code developers, insurers, trade and professional associations, disaster response and recovery groups, and researchers.

All workshops will focus on resilience needs, which, in part, will reflect hazard risks common to geographic regions.

The NIST-hosted event will begin at 8 a.m. and is open to all interested parties. The registration fee for the inaugural workshop is $55. Space is limited. To learn more and to register, go to: www.nist.gov/el/building_materials/resilience/disreswksp.cfm.

Registration closes on March 31, 2014.

More information on the disaster resilience framework can be found at www.nist.gov/el/building_materials/resilience/framework.cfm.

The UN Office for Disaster Risk Reduction (UNISDR) is working with IBM and AECOM to measure cities’ resilience to disasters.

The first output of the partnership is a Disaster Resilience Scorecard created for use by members of UNISDR’s ‘Making Cities Resilient’ campaign, which has been running for almost four years.

The scorecard is based on the Campaign’s Ten Essentials, UNISDR’s list of top priorities for building urban resilience to disasters, and has been developed by IBM and AECOM. A list of potential cities is being developed to test the scorecard and to support their disaster resilience planning.

The Disaster Resilience Scorecard reviews policy and planning, engineering, informational, organizational, financial, social and environmental aspects of disaster resilience. Each of the criteria has a measurement scale of 0 to 5, whereby 5 is regarded as ‘good practice.’
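As a rough sketch of how a 0-to-5 scorecard of this kind could be aggregated (the criteria and scores below are placeholders, not the actual UNISDR instrument):

    # Hypothetical criteria scored on the 0-5 scale described above,
    # where 5 is regarded as 'good practice'.
    scores = {
        "policy and planning": 4,
        "engineering": 3,
        "financial": 2,
        "social and environmental": 5,
    }

    average = sum(scores.values()) / len(scores)
    gaps = [criterion for criterion, s in scores.items() if s < 3]
    print(f"average score: {average:.1f}; priority gaps: {gaps}")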

The scorecard will be available at no cost through UNISDR, AECOM or IBM.

Both IBM and AECOM are part of UNISDR’s Private Sector Advisory Group and the Making Cities Resilient Steering Committee.

View the Disaster Resilience Scorecard for Cities as a PDF.

CSO — Healthcare organizations see an expanding landscape of uncertainty that has raised concerns among security pros and points to the need for more thorough threat analyses, a study showed.

Risks posed by health insurance and information exchanges, employee negligence, cloud services and mobile device usage have dampened confidence in protecting patient data, the Fourth Annual Benchmark Study on Patient Privacy & Data Security found. The study, released Wednesday, was conducted by the Ponemon Institute and sponsored by data breach prevention company ID Experts.

Despite the concerns, the study showed progress on the security front. The average cost of data breaches for organizations represented in the study fell to $2 million over a two-year period, compared to $2.4 million in last year's report.



Data deduplication – the elimination of repeated data to save storage space and speed transmission over the network – sounds good, right? ‘Data deduping’ is currently in the spotlight as a technique to help organisations boost efficiency and save money, although it’s not new. PC utilities like WinZip have been compressing files for some time. The new angle is doing this systematically across vast swathes of data. By reducing the storage volume required, enterprises may be able to keep more data on disk or even in flash memory, rather than in tape archives. Vendor estimates indicate customers might store up to 30 terabytes of digital data in a physical space of just one terabyte.
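As a rough illustration of the principle (a generic sketch, not any vendor’s implementation), deduplication stores each unique chunk of data once and replaces repeats with references to the stored copy:

    import hashlib

    def dedupe(chunks):
        """Store each unique chunk once; repeats become references."""
        store = {}     # hash -> chunk bytes, kept only once
        layout = []    # ordered hashes that reconstruct the original stream
        for chunk in chunks:
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:
                store[digest] = chunk
            layout.append(digest)
        return store, layout

    # Three logical chunks, two identical: only two are physically stored.
    store, layout = dedupe([b"block A", b"block B", b"block A"])
    print(len(store), "stored chunks for", len(layout), "logical chunks")

The achievable ratio depends entirely on how repetitive the data is, which is why vendor figures such as 30:1 are best treated as best-case estimates.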



In my previous post, I shared the ongoing debate about the most effective way to approach Big Data so that it will yield meaningful, useful and, hopefully, profitable findings.

The top two options are approaching data as an explorer versus Tom Davenport’s contention that you need to use a hypothesis, which I translate as using a more scientific-method based approach.

Explorer advocates say Big Data is too big for the typical reports-driven approach, and what’s worked for early adopters has been tinkering with the data to see what it reveals. Davenport and others contend that is a great way to waste time, spend money and create unhappy business leaders.



Thursday, 13 March 2014 15:45

SMBs Need Proper Tools to Refine Big Data

According to a recent Entrepreneur article, small businesses should find effective ways to analyze data in order to give their customers what they need without pushing too hard to gather more data from those same customers. Sounds simple, yet complex.

And when you also consider that data is increasing exponentially, and the way Big Data has been multiplying, it’s no wonder small to midsize businesses (SMBs) have become quite overwhelmed about how to collect, sort, and use Big Data in any effective manner.  

But what SMBs need to realize is that the key to using data is “refinement.” In his Entrepreneur article, Suhail Doshi explains:



Thursday, 13 March 2014 15:44

Easing Up on Archival Cost and Complexity

Archiving has always been one of those functions that pulls the enterprise in two different directions. Increased data volumes, of course, require more storage capacity, but as data sits in the archives for longer periods of time, it loses its value. So in the end, the enterprise must devote more resources to constantly diminishing assets.

Of course, this is the lifeblood of the archival management industry as numerous companies work up sophisticated algorithms and other tools to analyze data and then shift it from one set of resources to another based on its intrinsic value. The real purpose behind Big Data management, after all, is not to accommodate increasing volumes but to mine existing stores for gold and then store the rest at the lowest possible cost—or discard it altogether.

Naturally, part of this process requires the development of low-cost media, such as tape, which offers the benefit of stable, long-term storage for data that is accessed infrequently. Disk-based archiving is also gaining in popularity, although this is primarily in tiered solutions, considering the disk’s relatively weak long-term reliability.



Thursday, 13 March 2014 15:43

One of Those Days

Ever have one of those days? Blargggg. The good news is that it is normal and human and okay to be “off” a bit from time to time…which is a lot different than having an emotional spin event…in that context, since I’m having one of those blarggh days….and don’t feel very creative I thought I’d just share a few Emotional Continuity Management definitions today…

Emotional: All human feelings, those defined as positive and negative.

Spinning: Normal emotions that, for some reason, escalate and continue to develop an additional energy beyond the emotions of the original event. Emotional spinning occurs when a person, or several people, join forces with someone else to form a mutual or collective energy spin. The increasing collective emotional dynamic created by rampant, unmanaged, or poorly managed feelings.



Big Data is a bit of a problem for businesses. The fact is that data is growing enormously, both in its volume and importance. Also, we’ll soon see a big push on usable open data and its value. So, many organizations must move on Big Data.

Yet, I haven’t found a use case that will deliver for any and every company.  McKinsey recently asked eight executives from companies with leading data analytics programs about their experiences. According to the McKinsey report, “[t]he reality of where and how data analytics can improve performance varies dramatically by company and industry.”

One problem may be that Big Data requires a paradigm shift in how businesses approach data. Typically, business is goal-oriented with data: You run a report because you need a specific set of data on a specific topic.



CIO — On the day of Facebook's IPO, a concurrency bug that lay hidden in the code used by Nasdaq suddenly reared its ugly head. A race condition prevented the delivery of order confirmations, so those orders were re-submitted repeatedly.

UBS, which backed the Facebook IPO, reportedly lost $350 million. The bug cost Nasdaq $10 million in SEC fines and more than $40 million in compensation claims — not to mention immeasurable reputational damage.

So why was this bug not discovered during testing? In fact, how did it never manifest itself at all before that fateful day in 2012?
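For readers who have never watched one strike, here is a tiny sketch of the general class of bug, a check-then-act race. It is illustrative only and has nothing to do with Nasdaq’s actual code:

    import threading, time

    confirmed = set()   # order IDs whose confirmation was already sent
    sent_log = []       # every confirmation actually sent

    def send_confirmation(order_id):
        # Check-then-act race: both threads can pass the check before
        # either one records the order as confirmed.
        if order_id not in confirmed:          # check
            time.sleep(0.001)                  # window for the other thread
            confirmed.add(order_id)            # act
            sent_log.append(order_id)

    threads = [threading.Thread(target=send_confirmation, args=("order-42",))
               for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()

    # Typically prints 2: the same order was confirmed twice. With a lock
    # around the check-and-act, or a luckier thread schedule, it prints 1,
    # which is exactly why such bugs can hide through years of testing.
    print(len(sent_log))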



Wednesday, 12 March 2014 15:20

Europe Approves New Data Protection Law

IDG News Service (Brussels Bureau) — European politicians voted overwhelmingly on Wednesday in favor of new laws safeguarding citizens' data.

The new Data Protection Regulation was approved with 621 votes for, 10 against and 22 abstentions.

"The message the European Parliament is sending is unequivocal: This reform is a necessity, and now it is irreversible," said Justice Commissioner Viviane Reding, who first proposed the law.

"Strong data protection rules must be Europe's trade mark. Following the U.S. data spying scandals, data protection is more than ever a competitive advantage," she said in a statement.



Wednesday, 12 March 2014 15:19

Flight 370 on ground?

[Updated Wednesday, March 12, 2014; new copy added at bottom.]

Rescue operations have been launched from a number of countries, combing the seas for Malaysia Airlines Flight 370.

All the talking heads are claiming the Boeing 777-200 went into the water.


But maybe not.

According to the TV talking heads, Flight 370 set off from Kuala Lumpur headed north-northeast toward its Beijing destination. But it diverted from its flight plan and turned westward, crossing over Malaysia or southern Thailand on a mostly westerly course where it dropped off the radar; contact was lost.



This question is as old as Business Continuity Best Practices.  But there is a logical answer that many organizations (and most BCM Auditors) fail to recognize.

That simple answer: No.

But this would be a very short blog if some explanation didn’t accompany that short answer.  So let’s see if I can make the logic clear…

The chief purpose of a BIA is to gain an understanding of what’s important to the enterprise.  An enterprise-wide BIA enables an organization to rank its Business Processes and IT Applications in order of criticality to the delivery of the organization’s Products and Services.  That ranking enables the organization to prioritize which Processes and Applications – if impacted by a disruption – should be restored first (or which Recovery Plans should be activated first).
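As a minimal sketch of that ranking idea (the process names and scores are hypothetical, and this is not a formal BIA methodology):

    # Rank Business Processes by criticality: shorter tolerable downtime
    # and higher business impact come first. All values are hypothetical.
    processes = [
        {"name": "Order processing", "max_downtime_h": 4,  "impact": 9},
        {"name": "Payroll",          "max_downtime_h": 48, "impact": 5},
        {"name": "Internal wiki",    "max_downtime_h": 96, "impact": 1},
    ]

    ranked = sorted(processes,
                    key=lambda p: (p["max_downtime_h"], -p["impact"]))

    for priority, p in enumerate(ranked, start=1):
        print(priority, p["name"])

The output is exactly the artifact the BIA exists to produce: an ordered list telling you which Recovery Plans to activate first.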



CSO — Given last year's revelations about the National Security Agency's (NSA) massive surveillance and data analytics conducted on Americans, along with continuing stories about local police scanning thousands of license plates per day, it might sound absurd to say that government lags behind the private sector in the use of Big Data analytics.

But those examples tend to be outliers among the nation's sprawling bureaucracies, especially at the state and local levels. In general, the private sector is well ahead of the public sector in the use of Big Data analytics, according to a recent report titled "Realizing the promise of Big Data," sponsored by the IBM Center for the Business of Government.

While the report's author, Kevin Desouza, an associate dean for research at Arizona State University, cited multiple examples of Big Data analytics being used in government, he found that its overall promise is largely unrealized so far in the public sector. He called it "a new frontier" for government at all levels.



(CNN) -- New York police and fire officials were responding to reports of a massive explosion in Manhattan's East Harlem, authorities said Wednesday.

There were at least 11 minor injuries as clouds of dark smoke rose over the residential neighborhood of red-brick tenements, fire officials said.

Metro North commuter rail service, which runs along the site of the blast on Park Avenue, was suspended, officials said.

"Two buildings have collapsed. I hope there is no one in there. It's just rubble," a worker at a nearby flea market said.



Computerworld — Marketing executives salivate at the thought of being able to track shoppers via their mobile devices. The only problem: How to get consumers to sign on to that? MasterCard might have the answer. By spinning it as a global payment convenience, MasterCard has put a happy face on a major potential information grab.

Here's the deal. MasterCard and its partner Syniverse, a global mobile telecom firm, want you to opt in to let them track your mobile geolocation data. MasterCard says that cardholders who opt in and then travel to other countries will have fewer transactions denied. You see, cardholders are supposed to call their issuer before leaving the country so that their itineraries can be fed to the issuer's antifraud systems. When the cardholders don't do that, they are more likely to have their purchases denied.

So, says MasterCard, let's make this easier for everyone. Just register your phone with us, and then when a transaction request for you comes in from, say, Greece, our system will be able to check to see if your phone is in Greece too. If it is, the transaction is more likely to go through.
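A hypothetical sketch of the check being described (not MasterCard’s or Syniverse’s actual system): the issuer compares the transaction’s country with the opted-in cardholder’s last known phone location and treats a match as one positive signal among its antifraud checks.

    # Hypothetical fraud-screening signal: does the phone's last known
    # country match the country where the card is being used?
    def location_signal(txn_country, phone_country, opted_in):
        if not opted_in or phone_country is None:
            return None   # no signal; rely on the other antifraud checks
        return txn_country == phone_country

    # A cardholder who opted in and whose phone is also in Greece:
    if location_signal("GR", "GR", opted_in=True):
        print("Location match: transaction more likely to be approved")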



Techworld — The attack that planted malware on Target's point of sale (POS) terminals in November's huge data breach used inside knowledge of the network rather than a vulnerability in its retail software, McAfee has said in its latest quarterly analysis.

Snippets of information on the attack's engineering have been trickling out steadily since Target made the incident public in January, but this one suggests if not complexity then at least a degree of planning.

As has been widely discussed, the Target attack deployed the off-the-shelf BlackPOS, a generic but hugely popular toolkit used by criminals to capture data on retail computers connected to the card readers used by customers.



Hilary Estall takes a look at how organizations are faring with their BCMS audits and what, if any, trends are appearing.


ISO 22301 has been in circulation for approaching two years, but the uptake of third party certification remains at a steady crawl. Why is this? As with many other management system standards, there will be some organizations keen to be amongst the first to obtain certification and maximise the associated benefits, but for most, there will need to be an external factor to influence the decision whether to seek formal certification. ISO 22301 is no different.

That said, a number of organizations have taken the initiative and now benefit from a business continuity management system (BCMS) which not only stands up to the scrutiny of an independent auditor (which, let’s face it, can vary in its worth) but, more importantly, offers assurance that, should the worst happen, the business (or the part covered by business continuity arrangements) is well placed to ride out the storm.

So, what can we learn from those who have already dipped their corporate toes into the water, otherwise known as ISO 22301? This article draws on my personal experience both as an auditor (one of the tough ones!) and a BCMS consultant; and tries to get underneath what might be holding your BCMS back.



SWIFT has launched a new business continuity solution to support global payment systems. Developed by SWIFT, the Market Infrastructure Resiliency Service (MIRS) is a backup service for Real Time Gross Settlement (RTGS) systems: electronic platforms used for the continuous settlement of high-value and multi-currency cash payments between banks.

Central banks and financial market infrastructures operate RTGS systems to ensure effective settlement of high value payment transactions. As a backup platform, MIRS provides a third line of support to RTGS operators experiencing problems with first and second line backup systems. Once active, MIRS provides the essential functions required to achieve final settlement in real-time on a transaction by transaction basis. Once MIRS is deployed, RTGS operators remain in full control of the service while SWIFT manages the technical operations.

Juliette Kennel, head of market infrastructures, SWIFT, says: "Given the prominent role that RTGS systems play in the world economy, it is vital to safeguard effectively against operational disruptions and manage related risks. MIRS provides market infrastructures with the necessary tools to maintain business as usual operations even in the very unlikely but high impact event that their first and second lines of defence were to fail."

Since July 2011, SWIFT has been working with a group of central banks, including the Bank of England, to identify the necessary requirements to enable RTGS functions to operate normally in the case of disruptions at their existing sites. At the end of 2013, the Bank of England completed a pilot and successfully tested MIRS with the Clearing House Automated Payment System (CHAPS) community. CHAPS is a UK payments scheme that processes and settles both systemically important and time-dependent payments in sterling. On 24 February 2014, the Bank of England went live with MIRS, further increasing the resiliency of the Bank's RTGS service, the UK's High Value Payments System.

Toby Davies, head of market services at the Bank of England, says: "With two live operational sites, our current RTGS systems are highly resilient. However, we wanted to establish an additional contingency solution that was both robust and cost effective. MIRS will allow us to continue operating at full business volumes in the unlikely event of an outage affecting both our existing sites simultaneously."

MIRS is available to all HVPS market infrastructures, including those not currently on SWIFT.


When it comes time to build a new data center or modernize an old one, the movement of applications from one set of systems to another can be a painful process that usually takes weeks to accomplish.

To address that particular challenge, Delphix launched the Delphix Modernization Engine, which automatically creates, manages and archives virtual copies of applications, databases and files.

The Delphix Modernization Engine is based on data virtualization technology that Delphix has been using to allow IT organizations to copy databases. Rick Caccia, Delphix’ vice president of strategy and marketing, says the company is now extending the reach of that technology to include applications and files. This helps reduce a process that once took weeks to complete down to a couple of days.



Tuesday, 11 March 2014 17:29

The Change Management Challenge

Organizations tend to develop far-reaching plans to describe their strategic ambitions, tactics, goals, milestones, and budgets. However, these plans in and of themselves do not create value. Instead, they merely describe the path and the prize. Value can be realized only through the unremitting, collective actions of the hundreds or thousands of employees who are ultimately responsible for designing, executing, and living with the changed environment.

Unless an organization successfully aligns its culture, values, people, and behaviors to encourage the desired results, failure is highly predictable.

This challenge becomes even more acute when considering transformation efforts that are enabled through the introduction of enterprise resource planning (ERP) or other technology-enabled solutions. As is frequently the case in these deployments, companies pay a lot of attention to new processes and technologies. However, they give only limited attention to the essential resource — people — and to how people must work and behave in the “future state.” Though deployment success demands that employees adopt new business processes, ways of working, new behaviors, communication channels, software tools, and so on, many initiatives focus the dominant portion of a change budget on how to operate the new tool and, as a consequence, underachieve or fail.



CIO — Today's businesses generate more data than ever before. Not coincidentally, IT has never been more critical to the success of a small business. Luckily, the per-gigabyte cost of hard disk drives and associated storage technologies has never been lower, while the advent of technology such as cloud storage offers even greater opportunities to do more with less.

For many small businesses, though, their backup and storage strategy hasn't caught up with their more pervasive use of computers. This could be due to confusion about the various storage options, or a failure to understand that the old paradigm of the occasional batch backup is no longer adequate.

A storage vendor representative will have you believe that his or her company offers the perfect backup hardware for your business. However, backup is more than hardware, since storage needs invariably differ from one organization to the next. This means a one-size-fits-all mentality is doomed to deliver a mediocre fit in terms of either budget or functionality.



Think you need advanced computer skills to set up a phoney bank website and fool people into giving you their money? Think again. DIY phishing is now on offer in kit form. Someone who knows how to set up a personal website or even a Facebook page probably has the level of knowhow required to get started in fraud and identity theft. For business continuity, the threats are multiplied. Instead of having to deal (only) with specialised cybercriminals, organisations and their employees must now be wary of almost anyone and everyone. But is that such a bad thing?



IDG News Service (Boston Bureau) — Oracle is planning to make significant investments in its ERP software for higher education institutions, with an eye on keeping the installed base happy and fending off challenges from the likes of Workday.

A new Oracle Student Cloud service will be configurable to manage "a variety of traditional and non-traditional educational offerings," Oracle said. The first incarnation of the product will be released sometime in 2015 and will support student enrollment, payment and assessment.

In addition, Oracle will release new features for higher education in its HCM (human capital management) and ERP (enterprise resource planning) cloud services during this year and next, according to Monday's announcement. The capabilities will target areas such as union contracts and grant management, and will be tied into Oracle Student Cloud.



Everybody wants to explain technology in terms that business leaders can understand. Generally, that’s a good thing, but it can have a downside.

When you oversimplify the technology, it can help sell in the short term, but in the long run, it leads to unpleasant surprises, scope creep and skeptical business leaders.

That’s what seems to be happening with Big Data analytics, according to eight executives from companies heavily vested in data and analytics.