Industry Hot News


Innovation has become accepted as central to competitiveness in today’s world, both in new product development and in enhancement of internal processes. Companies struggle with innovation, and there have been numerous attempts to regularize and program it. But the development of truly breakthrough ideas is difficult, and recognizing them when they do arrive can be harder still. We have processes available for vetting ideas and passing them through a series of increasingly selective gateways until they reach the point of usefulness or are discarded altogether. But we do not have good processes for stitching together new ideas and reaching that eureka moment that says a critical new idea has been found.

Some of the ways that ideas are sourced include crowdsourcing, internal suggestions, brainstorming, and the like. There are idea factories employing innovative individuals who apply diverse experience to create an “out of the box” concept. And there are programs such as TRIZ, an innovation methodology developed in Russia in 1946, that seek to apply a systematic process to ideation itself, based on principles extracted from the patent literature and subjected to contradiction, synthesis, and new arrangement. But the creation of ideas is forever hampered by the fact that we don’t really understand the creative process and may, in fact, be generalizing a multitude of processes in a way that makes them impossible to replicate.



Tuesday, 24 February 2015 00:00

Preventing Burst Water Pipes

Unrelenting frigid weather often means frozen water pipes – one of the biggest risks of property damage. In fact, a burst pipe can cause more than $5,000 in water damage, according to IBHS research.

Structures built on slab foundations, common in southern states, frequently have water pipes running through the attic, an especially vulnerable location. By contrast, in northern states, builders recognize freezing as a threat and usually do not place water pipes in unheated portions of a building or outside of insulated areas.

While freezing temperatures themselves cannot be prevented, frozen pipes often can. Installing weather stripping and seals offers two major benefits: keeping severe winter weather out of a structure, and increasing energy efficiency by limiting drafts and reducing the amount of cold air entering.



Deloitte Analytics Senior Advisor Tom Davenport warned last year that data scientists waste too much time prepping data. After interviewing data scientists, Davenport concluded that they needed better tools for data integration and curation.

Now, a Ventana Research column shows that data scientists aren’t the only ones wasting enormous amounts of time on data preparation at the expense of actual analysis.

Ventana CEO Mark Smith shares research from several reports, all of which demonstrate how much of a time suck data preparation can be without the right tools.



The widespread popularity of social media and associated mobile apps, especially among young people, has potential in public safety, a new study finds.

Use of such sites as Facebook and Twitter has become so significant that universities should strongly consider utilizing them to spread information during campus emergencies, according to a study from the University at Buffalo School of Management entitled “Factors impacting the adoption of social network sites for emergency notification purposes in universities.”

Social media not only enables campus authorities to instantly reach a large percentage of students to provide timely and accurate information during crisis situations, the study states, but sending messages through social networking channels also means students are more likely to comply with emergency notifications received.
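As a rough sketch of the fan-out such a system implies, the snippet below posts one alert to several notification channels at once. The webhook URLs and channel names are hypothetical placeholders; a real deployment would use each platform's official API and credentials.

```python
import requests

# Hypothetical webhook endpoints, one per notification channel.
CHANNELS = {
    "facebook": "https://hooks.example.edu/facebook-page",
    "twitter": "https://hooks.example.edu/twitter-account",
    "sms": "https://hooks.example.edu/sms-broadcast",
}

def broadcast_alert(message: str) -> dict:
    """Send one emergency notification to every configured channel."""
    results = {}
    for name, url in CHANNELS.items():
        try:
            resp = requests.post(url, json={"text": message}, timeout=5)
            results[name] = resp.status_code
        except requests.RequestException as exc:
            # One failed channel must not block the others in an emergency.
            results[name] = f"failed: {exc}"
    return results

print(broadcast_alert("Campus alert: shelter in place until further notice."))
```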



In the wake of a natural disaster, about a quarter of businesses never reopen. Whether due to primary concerns like a flooded warehouse, secondary complications like supply chain disruption, or indirect consequences like a transportation shutdown that prevents employees from getting to work, a broad range of risks can severely impact any business after a catastrophe, and all of them must be planned for.

Planning and securing against natural disaster risks can be daunting and exceptionally expensive, but researchers have found that every dollar invested in preparedness can prevent $7 of disaster-related economic losses. Check out more of the questions to ask and ways to mitigate the risk of natural disasters for your organization with this infographic from Boston University’s Metropolitan College Graduate Programs in Management:



Recently, we had a client pick up a new contract with a company that was escaping a relationship with a bad IT provider. The transition was a nightmare for the business. Why? Because their previous IT company had constantly kept them in the dark about the state of their technology.

How transparent are YOU with your clients?

When you say, “Honesty is the best policy,” you'd better mean it. Be as open as possible with your clients without overloading them on the technical stuff. It’s all about building trust, and you can’t do that if they think you’re keeping secrets from them. Even if something goes wrong with a bad bug or a security breach, you need to keep them in the loop. Own up to everything you do, good and bad, and if it’s bad – make it right.



(Tribune News Service) -- A hiker lost in the mountains of New Mexico called 911 repeatedly, but was routed seven times to non-emergency lines.

A 911 call made by an elderly woman from her home in Texas was picked up by an emergency dispatcher in Tennessee, some 700 miles away.

And an emergency call made last month from a middle school in Delano, Calif., after a young student collapsed and later died there, was routed to a 911 dispatcher in, of all places, Ontario, Canada.

Hundreds of millions of Americans have moved rapidly from traditional land lines to relying on various forms of wireless phone services, making the 911 emergency system ever more complex, experts say, and therefore more subject to misrouted calls or misidentified locations.



As your customers decide whether or not to move their cloud-based file sharing to a hybrid cloud, they will have many questions along the way. Of course, some questions are more common than others – and as their managed service provider, you should be prepared to answer them.



Monday, 23 February 2015 00:00

Can IT Evolve Beyond the Cloud?

Everyone in IT is anxious to see how the cloud shakes out. When all is said and done, what will the enterprise look like when cloud computing becomes the established model for IT infrastructure?

And some are looking even farther into the future, wondering what, if anything, will come after the cloud?

To be sure, there is no shortage of predictions over how the cloud will evolve over time. IDC’s most recent assessment has hybrid infrastructure heading into 65 percent of enterprises within the year and predicts that by 2017, 20 percent of the industry will be using the public cloud as a strategic resource. As well, more than three quarters of IaaS offerings will be redesigned, rebranded or phased out over the next two years as providers concentrate on more lucrative services higher up the stack.

The utility of the cloud is beyond question at this point, so while most experts can debate the merits of the various architectures, it is hard to imagine IT in the future without a significant cloud presence. NetSuite CEO Zach Nelson told the Australian Financial Review last fall that he believes the cloud to be “the last computing architecture,” because there is no way to improve upon always-on data access from any device anywhere in the world. This may be true, but it was also true in the early 1970s that computer technology was simply too expensive and too complex for the average citizen.



In all the big news about the impact of mobile technology on small to midsize businesses (SMBs), one item that stands out is that SMBs that adopt mobile strategies outperform those that do not. This data comes from a recent study on the mobile revolution by the Boston Consulting Group and Qualcomm. Another report from Juniper Research found that in 2014, SMBs contributed $630 billion to the growing mobile industry, which is nearly triple the number from four years prior.

That kind of growth proves that SMBs are not only adopting mobile technologies; they are relying on them to fuel their business growth and change the way that business is done.



The debate about build versus buy has raged for years. But the total cost of owning your own data center outweighs the perceived benefits, and it looks like the argument in favor of “buy” may have gained the upper hand once and for all.

Let’s talk about it, though, from the point of view of people who are considering building their own and see how their claims stand up to the current state of backup.



Well, it’s time to work on the Business Continuity Management (BCM) / Disaster Recovery (DR) program based on the maintenance schedule. You’ve got your plan all laid out, and people know it’s coming and are ready to participate…sometimes begrudgingly. Yet, for some reason, your well-thought-out plan isn’t going to plan at all.

Sometimes that’s because what one believes they have, they don’t really have. For example, having executive buy-in on the need for the BCM/DR program doesn’t always translate into having executive support. An executive may buy in to the idea that a specific initiative is needed and give the go-ahead, but no one follows along as expected because the executive doesn’t offer or provide support to the BCM/DR practitioner. When others see this, they quickly conclude that BCM/DR is just a make-work effort and isn’t something the company’s executives really – and I mean really – support.

The executive may see it as a checkbox on an audit report and want it to go away quickly; to have the golden checkmark appear in that tick box on a report so that BCM/DR goes away. Again, they see the need to do something but don’t provide the means to get it done: communication channels, resources (both physical and financial), or moral support.



As the number of platforms where enterprise IT organizations can store data proliferates, getting data in and out of those platforms quickly has become a major IT challenge.

To address that issue, Syncsort has released an update to its suite of data integration offerings that adds an “Intelligent Execution Layer” enabling users to visually design a data transformation once and then run it anywhere—across Hadoop, Linux, Windows, or Unix—on premises or in the cloud.

Tendü Yoğurtçu, general manager for Big Data at Syncsort, says version 8.0 of the company’s DMX Software is designed to provide not only a consistent approach for collecting, transforming and distributing data across multiple platforms, but also one that embeds algorithms that automatically select the optimal execution path based on the type of platform, the attributes of the data and the condition of the cluster.

The goal, says Yoğurtçu, is to allow business users and data scientists to take advantage of a run-time environment that allows them to transform data in flight in a single step.
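The excerpt does not describe Syncsort's internals, but the design-once, run-anywhere idea can be illustrated with a transformation defined independently of pluggable execution engines. Everything below (class names, the selection rule) is an invented sketch, not Syncsort's API.

```python
from abc import ABC, abstractmethod

class ExecutionEngine(ABC):
    """One runtime target, e.g. a Hadoop cluster or a single Linux host."""
    @abstractmethod
    def run(self, transform, records): ...

class LocalEngine(ExecutionEngine):
    def run(self, transform, records):
        return [transform(r) for r in records]

class ClusterEngine(ExecutionEngine):
    def run(self, transform, records):
        # Stand-in for submitting the same transform to a cluster framework.
        return [transform(r) for r in records]

def choose_engine(record_count: int) -> ExecutionEngine:
    # Toy "intelligent execution layer": pick the target from data
    # attributes instead of hard-coding it into the transformation.
    return ClusterEngine() if record_count > 1_000_000 else LocalEngine()

# The transformation is designed once, independent of where it runs.
normalize = lambda rec: {k.lower(): v for k, v in rec.items()}

data = [{"Name": "Ada"}, {"NAME": "Grace"}]
print(choose_engine(len(data)).run(normalize, data))
```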



A new survey from identity and access management (IAM) solutions provider SailPoint has revealed there is a "clear disconnect" between cloud usage and IT controls in many businesses.

SailPoint's "2014 Market Pulse Survey" of at least 3,000 employees worldwide showed that one out of every four workers admitted they would take copies of corporate data with them when they leave a company.

Survey researchers also pointed out that one in five employees is "going rogue" with corporate data and has uploaded this information to a cloud application such as Dropbox or Google Docs with the intent to share it outside the company.

"The challenge with cloud applications is that IT organizations must now manage applications that are deployed – and accessed – completely outside the firewall," SailPoint President Kevin Cunningham wrote in a blog post. "Adding to the complexity, employees are starting to use consumer-oriented applications for work-related activities, creating a significant blind spot when it comes to risk."



I have recently detailed the COSO 2013 Framework in the context of a best practices compliance regime. However, there is one additional step you will need to take after you design and implement your internal controls: assessing those controls to determine whether they are working.

In its Illustrative Guide, entitled “Internal Controls – Integrated Framework, Illustrative Tools for Assessing Effectiveness of a System of Internal Controls” (herein ‘the Illustrative Guide’), the Committee of Sponsoring Organizations of the Treadway Commission (COSO) laid out its views on “how to assess the effectiveness of its internal controls”. It went on to note, “An effective system of internal controls provides reasonable assurance of achievement of the entity’s objectives, relating to operations, reporting and compliance.” Moreover, there are two over-arching requirements that can only be met through such a structured process. First, each of the five components must be present and functioning. Second, the five components must be “operating together in an integrated approach”. Over the next couple of posts I will lay out what COSO itself says about assessing the effectiveness of your internal controls and tie it to your compliance-related internal controls.

As the COSO Framework is designed to apply to a wider variety of corporate entities, your audit should be designed to test your internal controls. This means that if you have a multi-country or business unit organization, you need to determine how your compliance internal controls are inter-related up and down the organization. The Illustrative Guide also realizes that smaller companies may have less formal structures in place throughout the organization. Your auditing can and should reflect this business reality. Finally, if your company relies heavily on technology for your compliance function, you can leverage that technology to “support the ongoing assessment and evaluation” program going forward.
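As a purely illustrative aid (not COSO tooling, and with invented assessment values), the two over-arching requirements can be expressed as a simple checklist evaluation:

```python
# The five COSO components; the ratings below are hypothetical examples.
components = {
    "Control Environment": {"present": True, "functioning": True},
    "Risk Assessment": {"present": True, "functioning": True},
    "Control Activities": {"present": True, "functioning": False},
    "Information & Communication": {"present": True, "functioning": True},
    "Monitoring Activities": {"present": True, "functioning": True},
}

def assess(components: dict, operating_together: bool) -> bool:
    """Both requirements must hold: every component is present and
    functioning, and the five operate together in an integrated way
    (the latter passed in as a judgment from integration-level testing)."""
    each_ok = all(c["present"] and c["functioning"] for c in components.values())
    return each_ok and operating_together

print("Effective system of internal controls:",
      assess(components, operating_together=True))
```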



The harsh winter of 2015 shows no sign of letting up. It’s too late for enterprises to do much to protect themselves this year. The good news is that, though it doesn’t seem so now, the temperatures will moderate and snow will melt relatively soon.

But, with the uncertainty introduced by global warming, it is irresponsible to assume next year won’t be as bad – or even worse. Therefore, it is important to take special note of what can be done to prepare for next winter.

This prudence seems to be lacking, however. A poll commissioned by property insurer FM Global revealed the problem. It found that 32 percent of workers give their employers grades of “F,” “D” or “C” for winter storm preparedness. Fifty-two percent of full-time workers expressed dissatisfaction with their companies’ winter storm preparations.



It’s a terrifying but plausible scenario. You’re in an enclosed crowded place—perhaps a subway or a mall—and a terrorist organization releases lethal quantities of a nerve agent such as sarin into the air. The gas sends your nervous system into overdrive. You begin having convulsions. EMTs rush to the scene while you go into respiratory failure. If they have nerve agent antidotes with them, you may have a greater chance of living. If they don’t, you may be more likely to die. Will you survive?

Thanks to CDC’s Strategic National Stockpile CHEMPACK program, the answer is more likely to be yes.

First responders prepare for CHEMPACK training.

CHEMPACKs are deployable containers of nerve agent antidotes that work on a variety of nerve agents and can be used even if the actual agent is unknown. Traditional stockpiling and delivery would take too long because these antidotes need to be administered quickly. CDC’s CHEMPACK team solves this problem by maintaining 1,960 CHEMPACKs strategically placed in more than 1,340 locations in all states, territories, island jurisdictions, and the District of Columbia. Most are located in hospitals or fire stations selected by local authorities to support a rapid hazmat response. More than 90% of the U.S. population is within one hour of a CHEMPACK location, and if hospitals or first responders need them, they can be accessed quickly. Delivery times range from a few minutes to less than two hours.

The medications in CHEMPACKs work by treating the symptoms of nerve agent exposure. According to Michael Adams, CHEMPACK fielding and logistics management specialist, “the CHEMPACK formulary consists of three types of drugs: one that treats the excess secretions caused by nerve agents, such as excess saliva, tears, urine, vomiting, and diarrhea; a second one that treats symptoms such as high blood pressure, rapid heart rate, weakness, muscle tremors and paralysis; and a third that treats and can prevent seizures.”

Maintaining CHEMPACKs throughout the nation is challenging, but it is an essential part of the nation’s defenses against terrorism. The CHEMPACK team must coordinate with limited manufacturers to keep the antidote supply chain functioning. CHEMPACK antidotes are regularly tested for potency and are replaced when needed. They must be maintained in ideal locations for quick use by hospitals and first responders. But, having them available is only the first step. Personnel who may use them need to know where they are and must be trained. CDC supports state and local partners as they identify CHEMPACK placement locations and conduct trainings for their responders.

2008 map of fielded CHEMPACK cache locations across the U.S.

Terrorist nerve agent attacks are not hypothetical. The Aum Shinrikyo group in Japan used sarin gas to attack subway passengers twice: an attack in 1994 killed eight people and a second attack in 1995 killed 12. Experts agree that these attacks were amateurish and a better timed and executed attack could have killed many more people.

CDC’s CHEMPACK team is part of the rarely seen network that protects the people of the United States from unusual threats. You might not have heard much about them, but if you are ever attacked by nerve agents, they may be the reason you survive.


I recently had a conversation with someone about BYOD and security. He told me that he thought the enterprise was having BYOD fatigue and there was a growing attitude that its security problems were overblown. This person wasn’t alone in his feelings. I had read some articles and heard others repeat similar complaints about BYOD. Perhaps mobile devices weren’t as bad a security issue as once thought?

Or maybe the threats are even worse than we realized. Some recent studies show just how much of a security risk mobile devices have become within the workplace, and this carries over into BYOD security risks as well.

First, a study conducted by Alcatel-Lucent's Motive Security Labs found that mobile malware has increased by 25 percent in 2014, and 16 million devices – mostly Androids but not exclusively – are infected. For the first time, we’re seeing infection rates of mobile devices that rival those on Windows computers. Out of the top 20 threats, six of them involved spyware meant to track location and monitor the user’s communications. The reason for all this malware, according to an eSecurity Planet article, comes down to the device owner:



Leveraging Big Data for operational analytics is generating more interest these days, despite integration concerns. Companies are always looking for ways to reduce operational expenses, and Big Data promises to help.

A recent SCM World report, “The Digital Factory: Game-Changing Technologies That Will Transform Manufacturing Industry,” asked 200 manufacturers around the globe about Big Data and other new technologies. The report is available to clients only, but Forbes recently shared some key findings.

The survey revealed that 49 percent see advanced analytics as a way to “reduce operational costs and utilize assets efficiently,” Forbes notes. It’s telling, too, that only 4 percent said they saw no use case for Big Data analytics in their future.



For many people, stepping into the office can feel like stepping back in time. In an age where so many people carry around mobile computers in their pockets, employees have become frustrated at being forced to use cumbersome technologies such as VPN and FTP to remotely access files stored on an on-premises file server. As a result, many of these employees have resorted to storing more of their data in free, non-secure cloud services like Dropbox.

How do MSPs reconcile the virtues of the file server with the benefits of cloud file sync? One way is to cloud-enable the file server. Here are three ways cloud-enabling the file server keeps the file server sexy and makes your clients happy:



The harshness and repeated ferocity of the winter of 2015 (especially in the New England states) sent many businesses scrambling to update their Business Continuity Plans. The earlier Ebola crisis in West Africa set off the same kind of frenzy. As a wise Business Continuity Management (BCM) guru once said, “no good crisis should go unexploited”. What he meant was that public crises can be leveraged to stimulate interest (and funding) for BCM.

The result of the blizzard and Ebola phenomena isn’t stimulated interest, though; it borders on panic – for all the wrong reasons. An earlier blog addressed the wisdom of planning for impacts, not for events. These recent snows and epidemics have served to reinforce that advice.

There are so many things that could happen to disrupt your organization.  Many of them are as yet unknown (those “black swans”). But are “Scenario Plans” worth the effort? Consider that the 30-day snowfall record for Boston set in January-February 2015 (90 inches) broke the previous record (59 inches) set 37 years earlier (1978).  Does it make sense to create a ‘Blizzard Plan’ – if it occurs every 30 years?  Likewise, is an ‘Ebola Plan’ really necessary when that specific virus is unlikely to spread in significant numbers beyond West Africa?



Security and compliance skills were named as the top IT skills that hiring managers will be seeking in 2015, according to a survey of 405 senior-level technology professionals conducted by Cybrary.IT from late 2014 to early 2015. And that’s good news for the fledgling cybersecurity training site, which began offering its roster of free security courses a few weeks ago.

While the majority of companies represented in the survey plan to spend the same amount on IT training in 2015 that they spent in 2014, 11 percent said they have no money for IT training at all, and fewer than 25 percent spend even 10 to 20 percent of the total IT budget on training.

Billing itself as the first and only tuition-free massive open online course (MOOC) for IT and cybersecurity training, Cybrary.IT, whose founders came out of the paid IT training space, targets “unserved and underserved” individuals and aims to transform cybersecurity training as a whole, as co-founder Ryan Corey told me upon launch. The price of training is a major issue for individuals and companies, as both attempt to keep up with rapidly changing cyber threats and the growing need for specialized security skills.



No, there is no typo in the title. In today’s C-level world, CRO can stand for Chief Risk Officer, but can also mean Chief Reputation Officer. By definition, the Chief Risk Officer looks after the governance of significant risks (both menaces and opportunities). The Chief Reputation Officer supervises the management of an organisation’s reputation, brand and communications. Looking after risks and reputation are both vital functions for organisations. The question is whether specific job functions are to be created for one or both of them. The definitive answer will depend on different factors.



In light of recent news that $1bn (£648m) has been stolen since 2013 in cyber-attacks on up to 100 banks and financial institutions worldwide, Konrads Smelkovs of KPMG’s cyber security team says that it is time for financial institutions to be more proactive when it comes to information security.

Smelkovs comments:

“These attacks were unique in terms of the organization it took to execute them. However, the tools used by these cyber-crime gangs weren’t particularly sophisticated. It was the persistence and cautious approach of the criminals that netted them the prize. The banks targeted - primarily in Russia and Ukraine - suggest a selective operation in areas where tracking transactions is more complex.

“Financial institutions need to take more of a pre-emptive approach to such attacks. Playing ‘war games’, in which attacks are simulated, is one effective way of highlighting potential weak spots. Each organization should also look to have someone committed to defending their network, rather than someone who merely adheres to prescribed standards. The continued investment in anti-malware technology and internal network monitoring tools remains crucial to staying a step ahead of cyber criminals.”


The UAE’s National Emergency Crisis and Disasters Management Authority has published an updated version of the country’s business continuity standard.

The new UAE Business Continuity Management Standard builds upon the first version, published in 2012, and aligns the standard with international best practices and guidelines. It contains three parts:

  • Specifications: sets out all the key parts and elements of the business continuity program.
  • Guidelines: interprets how the elements mentioned in the Specifications work in practice.
  • Toolkit: includes framework templates for developing a business continuity management system.

The Specifications document is available as a free PDF here. For details of how to obtain the other parts of the standard, contact the NCEMA.

CHICAGO – Dangerously low temperatures and bitterly cold wind chills continue to be in the forecast for much of the Midwest this week. The U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) wants individuals and families to be safe when faced with the hazards of cold temperatures.

“Whether traveling or at home, subfreezing temperatures and wind chills can be dangerous and even life-threatening for people who don't take the proper precautions,” said Andrew Velasquez III, FEMA Regional Administrator. “FEMA continues to urge people throughout the Midwest to monitor their local weather reports and take steps now to stay safe.”

During cold weather, you should take the following precautions:

• Stay indoors as much as possible and limit your exposure to the cold;
• Dress in layers and keep dry;
• Check on family, friends, and neighbors who are at risk and may need additional assistance;
• Know the symptoms of cold-related health issues such as frostbite and hypothermia, and seek medical attention if health conditions are severe;
• Bring your pets indoors or ensure they have a warm shelter area with unfrozen water;
• Make sure your vehicle has an emergency kit that includes an ice scraper, blanket and flashlight, and keep the fuel tank above half full;
• If you are told to stay off the roads, stay home. If you must drive, don’t travel alone; keep others informed of your schedule and stay on main roads.

You can find more information and tips on being ready for winter weather and extreme cold temperatures at http://www.ready.gov/winter-weather.

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

Wednesday, 18 February 2015 00:00

What Does Bad Data Cost?

For Valentine’s Day, Talend published a fun infographic, “Use Big Data to Secure the Love of Your Customers.” It lists data quality as the second leading challenge with Big Data, but perhaps more striking is the $13.3 million annual financial impact caused by data quality problems.

I’m not entirely sure from the graphic which research group provided that stat, but a 2013 Gartner research paper put the cost higher, at $14.2 million a year.

Actually, there’s no shortage of scary statistics and numbers on the high cost of bad data. For instance, this infographic by Lemonly.com and Software AG notes that bad data:



Recently, there was an online discussion that raised the question of whether both Business Continuity Planning (BCP) and Disaster Recovery (DR) service and implementation can be quantified in terms of real dollar savings. I believe that to be a great question—one that anyone in those fields should be asking. And to be clear, I think the answer is a resounding “yes.”

In recent years, it would be very easy to say that dollars have become “scarce” from the standpoint of business planning and operations. Many of our clients have recently shifted their focus toward an improved cost/benefit ratio and greater overall savings in BCP and DR. This eye toward savings extends into both the tactical and—more importantly—strategic areas.



Many businesses across the US score poorly on being prepared for severe winter weather, according to a new poll of America's workforce, commissioned by FM Global.

Nearly one third of full-time American workers (32 percent) assign their employers a grade of C, D or F when it comes to preparedness for a major winter storm, the research finds. Furthermore, more than half of US workers (52 percent) employed full time indicated they are dissatisfied with their employers' preparedness, wanting their company to be better prepared for a winter storm.

"America's feedback speaks to the need for businesses to be more proactive, and overall more resilient, when it comes to winter weather," said Brion Callori, senior vice president, engineering and research, FM Global. "Insurance won't bring back lost customers, market share or fix a damaged corporate reputation for unprepared businesses. A business continuity plan which has been well-tested and communicated to employees can address such risk and help companies avoid costly physical and financial losses."

FM Global recommends the following best practices for businesses to help prevent damage in severe winter weather conditions:



Wednesday, 18 February 2015 00:00

West Coast Ports Dispute and Supply Chain Risk

A protracted labor dispute that continues to disrupt operations at U.S. West Coast ports underscores the supply chain risk facing global businesses.

Disruptions have steadily worsened since October, culminating in a partial shutdown of all 29 West Coast ports over the holiday weekend.

The Wall Street Journal reports that operations to load and unload cargo vessels resumed Tuesday as Labor Secretary Tom Perez met with both sides in the labor dispute in an attempt to broker a settlement amid growing concerns over the impact on the economy.

More than 40 percent of all cargo shipped into the U.S. comes through these ports, so the dispute has potential knock-on effects for many businesses.



Oil is hovering around $50 per barrel. For most of the US economy this drop in oil price has provided a much-needed economic boost. One piece on the NPR website, entitled “Oil Price Dip, Global Slowdown Create Crosscurrents For U.S.”, said “economists have suggested the big drop in oil prices is a gift to consumers that will propel the economy.” Liz Ann Sonders, who is the chief investment strategist at Charles Schwab, was quoted as saying “The U.S. economy is 68 percent consumer spending, so right there you know that falling oil prices is a benefit.” Another economist said the positive effects could be “worth $400 billion” for the US economy as a whole.

But in the energy space, particularly in the city of Houston, Texas, this plunge has been devastating. It is so bad that this past week’s issue of the Houston Business Journal (HBJ) provided a ‘Box Score’ of energy company lay-offs. And that was before Halliburton announced a 10%-15% reduction and Hercules Offshore announced that it had laid off some 30% of its work force since last October. Nationally, for the energy industry, it will be just as bad. In the NPR piece, David R. Kotok, of Cumberland Advisors, said, “cuts in production and energy company payrolls will cost the U.S. economy up to $150 billion.” The Houston Chronicle headlined it as a “Bloodbath”.

I thought about what this plunge in the price of oil could mean for the compliance function in energy and energy-related companies going forward. Many Chief Compliance Officers (CCOs) and compliance practitioners struggle with metrics to demonstrate revenue generation. Most of the time, such functions are simply viewed as non-revenue-generating cost drags on the business. This may lead to compliance functions being severely reduced in this downturn. However, I believe such cuts would be short-sighted; they would actually cost energy companies far more in both the short and long term.



The more IT pervades businesses, the more IT-based tools hackers have to exploit vulnerabilities. If you want your company to stay safe, you may need to ‘attack’ yourself to find out where the weak points are and fix them to prevent others from breaking in. The following list of hacker tools and techniques will give you an idea of the range of resources readily available over the Internet. Remember also that hackers may be plying their trade every day of the week. By comparison, some organisations may not have the time to run checks more than once or twice a month. If you’re strapped for internal resources, consider other options like third-party services to check or boost security.
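In the spirit of ‘attacking’ yourself, one minimal self-check is to scan your own hosts for unexpectedly open ports. The standard-library sketch below is no substitute for a dedicated assessment tool, and it should only ever be pointed at systems you are authorized to test.

```python
import socket

def open_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.append(port)
    return found

# Scan only hosts you own or are authorized to test.
print(open_ports("127.0.0.1", range(1, 1025)))
```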



Tuesday, 17 February 2015 00:00

BCI publishes its annual Horizon Scan report

The Business Continuity Institute has published its fourth annual Horizon Scan report. This year’s report has been published in association with BSI.

The BCI Horizon Scan assessed the business preparedness of 760 organizations worldwide and shows that the top three threats that business continuity managers are concerned about are:

  • Cyber-attack (82 percent are concerned about this threat);
  • Unplanned IT outages (81 percent);
  • Data breaches similar to that suffered by Sony in 2014 (75 percent).

Supply chain disruption is seen as the fastest rising threat, climbing to fifth place in this year’s report, up from 16th in 2014. Almost half of those polled (49 percent) identified increasing supply chain complexity as a trend, leaving their organization vulnerable to disruption from conflict or natural disasters.

Despite growing fears over the resilience of their firms, the report records a shock fall in the use of trend analysis by business continuity practitioners, with a fifth of firms (21 percent) failing to invest in this protective discipline. A similar proportion (22 percent) report not employing trend analysis at all, making it a blind spot for organizations. Globally, business preparedness varies, with 8 out of 10 (82 percent) organizations in the Netherlands utilising trend analysis, while just 6 in 10 firms in the Middle East and Africa do so (63 percent).

Adoption of ISO 22301, the business continuity standard, appears to have reached a tipping point with more than half (53 percent) of organizations now relying upon this, up from 43 percent last year. Almost three quarters of firms (71 percent) intend to better align their activities with ISO 22301 over the next 24 months.

You can read the full Horizon Scan report after registration here.

Luke Bird reflects on career progression opportunities in business continuity and how the profession could improve in this area.

As a kid growing up, all I ever wanted to be was a sailor in the navy, and once I got to the right age no one was going to tell me otherwise. So off I went, hell-bent on passing through basic training and finally getting to wear that shiny uniform. Well done me, I thought to myself…

However, it wasn’t until the Monday morning after my big passing-out parade, following a weekend of celebrations with my family and friends, that it finally hit me. I had absolutely no idea what I wanted to do with my career beyond that point.

It’s really only now at this stage of my career in business continuity and over 10 years later that I can draw some interesting parallels. Much like my experience during basic training in the Navy, my career as a junior professional in business continuity has often involved those long 18-hour days, those difficult superiors (occasionally) and that regular feeling of being a deer in the headlights. However, the greatest parallel I can draw from this collective experience is the way I’m feeling right now: trying to decide on my future.



Tuesday, 17 February 2015 00:00

How to define your recovery time objectives

By Charlie Maclean-Bristol FBCI FEPS

Defining the recovery time objectives (RTO) for your activities is one of the most critical things the business continuity manager will carry out. Get them wrong and the whole basis for your recovery strategy is flawed. Often, rather than being an objective assessment, the RTO is driven by internal politics and by managers wanting their part of the organization (and hence themselves) to be seen as important.

For a long while I have wondered if there was any scientific way, or even a rule of thumb, for defining your RTOs, but I have never come across one. A while ago I reached out to the BCMIX LinkedIn Group to ask how members went about defining their RTOs. I got lots of explanations of the process for defining them but no set rule. Most people said that defining RTOs was a combination of common sense, knowledge of the organization, and experience. These are all very good, but how is a beginner going to get that experience?
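For what it's worth, here is one hedged sketch of a starting point, not a rule from the article or the BCMIX discussion: set a candidate RTO where the cumulative impact of downtime reaches what the business says it can tolerate. The loss rates and tolerances below are invented, and real impact curves are rarely this linear.

```python
def candidate_rto(hourly_loss: float, tolerable_loss: float) -> float:
    """Hours of outage at which cumulative impact reaches the stated
    tolerance. A crude linear model, for illustration only."""
    return tolerable_loss / hourly_loss

# Hypothetical activities: (loss per hour of outage, tolerated total loss).
activities = {
    "order processing": (5000.0, 20000.0),
    "payroll": (500.0, 24000.0),
}
for name, (rate, tolerance) in activities.items():
    print(f"{name}: candidate RTO ~ {candidate_rto(rate, tolerance):.0f} hours")
```

Even then, the output is only an opening bid for the political negotiation described above.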

In the absence of any set method of defining RTOs, here are my thoughts on the subject:




Cyber-attack is the top threat perceived by businesses, according to the fourth annual Horizon Scan report published today by the Business Continuity Institute (BCI), in association with BSI. Supply chain disruption is reported as the fastest rising threat, up 11 places since last year.

The annual BCI Horizon Scan assessed the business preparedness of 760 organizations worldwide and shows that 82% of business continuity managers fear the possibility of a cyber-attack, with 81% worried about the possibility of unplanned IT outages and 75% about data breaches similar to that suffered by Sony in 2014. A recent industry report(i) highlights that the annualized cost of cyber-crime per global company now stands at $7.6 million, a 10.4 per cent year-over-year increase.

Concerns over supply chain disruption were the fastest rising threat, climbing to fifth place in this year’s report, up from 16th in 2014. Almost half of those polled (49%) identified increasing supply chain complexity as a trend, leaving their organization vulnerable to disruption from conflict or natural disasters.

This year’s global top ten threats to business continuity are:

  1. Cyber-attack – up 1
  2. Unplanned IT and telecoms outages – down 1
  3. Data breach – static
  4. Interruption to utility supply – up 1
  5. Supply chain disruption – up 11
  6. Security incidents – up 1
  7. Adverse weather – down 3
  8. Human illness – up 3
  9. Fire – down 3
  10. Acts of terrorism – down 1

Despite growing fears over the resilience of their firms, the report records a shock fall in the use of trend analysis by business continuity practitioners, with a fifth of firms (21%) failing to invest in this protective discipline. A similar proportion (22%) report not employing trend analysis at all, making it a blind spot for organizations. Globally, business preparedness varies, with 8 out of 10 (82%) organizations in the Netherlands utilising trend analysis, while just 6 in 10 firms in the Middle East and Africa do so (63%). Small businesses, evaluated for the first time in this year’s report, are seen to lag behind industry best practice, with just half currently applying international standards for business continuity management.

Howard Kerr, Chief Executive at BSI, commented: “Globalization has brought the world’s conflicts, epidemics, natural disasters and crime closer to home. It is of real concern that this year’s report shows that businesses are not fully utilising information to identify and remedy blind spots in their organizational resilience strategies. Tracking near and long-term threats provides organizations of all sizes with an objective assessment of risks and how to mitigate them. Failing to apply best practice leaves organizations and their employees, business partners and customers at risk.”

The report provides the strong recommendation that the rising costs of business continuity demand greater attention from top management. Encouragingly, adoption of ISO 22301, the business continuity standard, appears to have reached a tipping point with more than half (53%) of organizations now relying upon this, up from 43% last year. Almost three quarters of firms (71%) intend to better align their activities with ISO 22301 over the next 24 months.

Lyndon Bird FBCI, Technical Director at the BCI, commented: “The world faces diverse problems from cybercrime and political unrest to supply chain vulnerabilities and health hazards. This report shows the vital importance of business continuity professionals understanding such trends. No longer can those working in the field believe they can resolve all their problems themselves. As an industry we must work together with our fellow practitioners to deal with the complexity of these threats.”

Click here to download your free copy of the Horizon Scan. If you would like to know more about the report, or perhaps ask some questions, Patrick Alcantara (BCI) and Lorraine Orr (BSI) will be hosting a webinar on Tuesday 24th February at 2pm (GMT) where they will be discussing some of the findings. Click here to register for the webinar.

The derailments this week of two trains carrying crude oil have raised new questions about the adequacy of federal efforts to improve the safety of moving oil on tank cars from new North American wells to distant refineries.

A 100-car, southbound CSX train derailed Monday in a West Virginia river valley, destroying a home and possibly contaminating the water supply for downriver residents. A thundering fireball rose hundreds of feet above the community amid an intense winter storm.

On Sunday, an eastbound oil train derailed in Ontario, Canada, near the city of Timmins, engulfing seven cars in an intense fire and disrupting passenger service between Toronto and Winnipeg.

The most recent accidents follow a long string of crashes that have occurred amid an exponential increase in the amount of crude being transported by rail, as energy production booms across the U.S. and Canada.



(TNS) — When Summer Fowler goes to sleep, the Cranberry mother of three knows computer hackers around the world are working through the night to undo the defenses she spends her days building.

Fowler, 37, is deputy technical director for cybersecurity solutions at CERT, the nation's first computer emergency response team, at Carnegie Mellon University's Software Engineering Institute. She works with Pentagon soldiers, intelligence directors and corporate titans to help them identify key electronic assets, secure them from cyberattacks and plan for what happens if someone steals them.

But at the end of the day, once her children are tucked into bed, Fowler wonders what the impact would be from a real cyber 9/11 attack on the United States.



Tuesday, 17 February 2015 00:00

Everyone Wins in a Diverse Storage Environment

For a while, it looked like enterprise storage was on a pretty stable development path: convert tape to disk, convert disk to solid state, and ultimately transition the storage array to modular infrastructure featuring server-side and in-memory solutions.

That plan is starting to crumble, however, as developments across multiple storage media are increasing the flexibility of previously staid solutions and even causing some to question storage’s actual role in the emerging virtual data ecosystem.

IBM's James Kobielus, for one, is backing off earlier predictions that 2015 would be a tipping point for SSDs in the enterprise. He still sees SSD dominance as inevitable, but continued investment in hard disk development is doing wonders for storage density and cost-per-bit. So while Flash solutions will likely dominate emerging applications like data mobility and the Internet of Things, tried and true magnetic media still has a lot to offer the old-line functions that many enterprises will continue to rely upon even in a cloud-dominated universe.



Agile methods allow developers to create dependable applications with repeatable results. The same type of practice can also be applied to database development to promote proper data management, which in turn reflects in successful application creation. Efficient data governance is one key toward achieving well developed software more quickly.

However, it seems that for many enterprises, there has always been tension between the development groups and those who manage the data. Developers often lament that issues with data management prohibit quick, adaptive software creation. On the other hand, data management staff feels that the tenets of Agile methodologies don’t consider the needs of data asset management. The clash isn’t new, but today’s business cycles demand software that’s created even more quickly and effectively than ever. This is why Agile development has become so important.

To help your organization achieve a tighter relationship between development and data management, author Larry Burns offers his book, “Building the Agile Database.” In his book, Burns explains the business case behind efficient data management via Agile methods. He also takes time to identify the usual stakeholders involved in application development and database development. Burns gives a detailed view of the financial stakes behind the software development process and ties that to the importance of good data management.



As an IT professional, what would you say are the top three concerns that keep you awake at night? According to the results of a recent survey, your peers listed security, downtime (disaster recovery), and talent management, in that order.

The survey was commissioned by Sungard Availability Services, a cloud computing, disaster recovery, and managed hosting services provider based in Wayne, Pa. I had the opportunity to discuss the findings with Ric Jones, CIO at LifeShare Blood Centers, a blood donation services provider in Shreveport, La., that’s a Sungard AS customer. Jones ranked disaster recovery ahead of security on his own list of concerns, but he indicated that the two are inextricably linked.

“Disaster recovery is extremely important to the success of LifeShare Blood Centers. If the primary datacenter in Shreveport experiences downtime for even a few hours, it disrupts the nonprofit’s ability to collect the data needed to gather and distribute critical, life-saving blood supply,” Jones explained. “Security couples up with disaster recovery, as data breaches are occasionally the cause for a disaster or unplanned downtime. This not only impacts an organization’s reputation, but also their ability to do business efficiently. LifeShare Blood Centers houses private information from donors, and it’s vital to our nonprofit we keep their information protected and out of hackers’ hands.”



Thursday, 12 February 2015 00:00

Shingled Drives for Re-roofing Your Storage?

In the last several years, there has been an increasing number of storage options. Initially we had just magnetic hard drives with a single rotational speed. Then they started to come in several varieties. Now we have a range of drive speeds starting at 15,000 rpm at the top end, followed by 10,000 rpm drives, then the ubiquitous 7,200 rpm drives, and slower drives with speeds such as 5,900, 5,400 and 4,500 rpm, and even variable-speed drives.

The rotational speed of a disk drive is a strong indicator of performance, price, capacity and power usage. Typically, the higher the speed, the more expensive the drive. And usually a high-speed drive has a smaller capacity, better performance and higher power consumption. As the drive speed comes down, the price decreases, the capacity increases, the performance decreases, and the power usage decreases.

There are other sources of drive variation, for example cache size and physical drive size (2.5" and 3.5"). There is also the drive communication protocol, such as SATA, SAS or Fibre Channel, and there are protocol speed differences such as 6 Gigabits per second (Gbps), 3 Gbps and slower (although these are older drives).



The analytics capabilities exist for Internet of Things (IoT) data — it’s the integration of systems and lack of interoperability that will challenge organizations, warns Deloitte Consulting.

Deloitte predicts that the “Analytics of Things” will be one of the top analytics trends in 2015, but also predicts that organizations may have trouble leveraging the data due to proprietary solutions and APIs.

“There needs to be more interoperability, more interconnectivity, more integration of all these devices, otherwise we’re just going to have these competing standards, competing formats and I think you’ll have disappointed customers in the end,” John Lucker, Deloitte Consulting principal and global advanced analytics and modeling market leader, said in a recent interview with IT Business Edge.



Thursday, 12 February 2015 00:00

When Automated Business Continuity Breaks Down

Computers are typically robust and reliable. When it comes to doing the same thing over and over again at scheduled times, they leave human beings far behind. That makes IT automation an attractive proposition for many business continuity routines or processes. Where people might forget or botch a data entry because of the monotony of a task, computers remain unaffected. They will check the status of all your branch servers every hour on the hour without fail. They will monitor manufacturing stocks and supply chains and send alerts when any out-of-bounds situation occurs. What could ever go wrong? At least two things that human beings still have to help computers sort out.
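A minimal sketch of the hourly branch-server check the passage describes might look like the following; the hostnames, port and alert action are placeholders, not a recommended monitoring design.

```python
import socket
import time

BRANCH_SERVERS = ["branch1.example.com", "branch2.example.com"]  # placeholders

def is_up(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """True if the host accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_once() -> None:
    for host in BRANCH_SERVERS:
        if not is_up(host):
            # Placeholder alert; a real system would page on-call staff.
            print(f"ALERT: {host} is unreachable")

while True:  # every hour, on the hour-ish, without fail
    check_once()
    time.sleep(3600)
```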



Thursday, 12 February 2015 00:00

Dealing with the loss of data


Whether you've forgotten to press save, a file has become corrupted, or something more malicious has occurred, I'm sure we've all suffered the frustration of losing data at one time or another. A new study from Kroll Ontrack has now shown just how common this is by revealing that over a 12-month period from 2013 to 2014, one in four (25%) UK workers interviewed as part of its research lost work data due to malfunction or corruption of technology. This is up from 19% just over two years ago. The report also highlights that only 68% of this data was recovered, meaning that almost a third of all work-related data lost was irrecoverable.

Paul Le Messurier, Programme and Operations Manager at Kroll Ontrack commented: “The business environment is now, more than ever, data driven and digital first. It is therefore extremely alarming that data loss is on the up. If we see this trend continue to build, there is a risk that we will continue to see large scale data disasters as well as negative impacts on the provision of service level agreements to customers. Organisations must prepare for potential data disasters by developing a robust business continuity plan that includes a back-up plan, education for employees and a data disaster strategy if all else fails.”

Additional findings by Kroll Ontrack highlight that one in three UK employees (33%) used personal devices or cloud services to store work-related data in the last 12 months. Recovery rates of lost work-related data among these devices are low. One in five users successfully recovered from home desktops (19%), just 8% from personal mobile devices and 17% from laptops and tablets.

Le Messurier continued: “With the rise of BYOD the lines between personal and work-related data are being blurred. As such, organisations have to take extra considerations when devising a disaster recovery plan. This includes a full audit of what devices are holding work-related data and ensuring that these devices are being used responsibly. It is also important that businesses understand what data is critical on the device and what is not to ensure that only work related data is backed up to company servers – ignoring personal apps and music.”
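As a toy illustration of that last point, backing up only work-related data from a mixed-use device, the sketch below selects files by extension. The extension list and paths are invented; real BYOD programs would more likely rely on managed containers or MDM tooling than on extension matching.

```python
import shutil
from pathlib import Path

# Hypothetical policy: treat these extensions as work-related.
WORK_EXTENSIONS = {".docx", ".xlsx", ".pptx", ".pdf"}

def backup_work_files(device_root: Path, backup_root: Path) -> int:
    """Copy only work-related files to the backup area, skipping
    personal media, apps and anything else outside the policy."""
    copied = 0
    for path in device_root.rglob("*"):
        if path.is_file() and path.suffix.lower() in WORK_EXTENSIONS:
            target = backup_root / path.relative_to(device_root)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)
            copied += 1
    return copied

# Example with hypothetical paths:
# backup_work_files(Path("/home/user"), Path("/mnt/company_backup/user"))
```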


NEW ORLEANS—While it may seem counterintuitive at an event that also has an expo, one speaker at the International Disaster Conference today argues that a lot of the “preparedness” products on the market are not worth the price tag—and may even work against public safety.

According to the graduate research of disaster management expert and firefighter paramedic Jay Shaw, dikes and levees reduced people’s preparedness levels by 25% for all hazards, including flooding. About three quarters of respondents in his research had experience with a major flood, and 75% felt prepared for a flood. Yet 65% felt unprepared for any other disaster, and 46% did not have any emergency kit, plan or supplies. The dikes in their town, Shaw found, led to a sense of security against flooding risk and left many unaware of other risks and how best to prepare for them.

Nationally, a 2009 FEMA study found that 57% of people claim to be prepared for a disaster for 72 hours. Under further review, however, 70% of these individuals did not know the basic components of an emergency go-bag or emergency plan.



By Jenny Gottstein

Last August, I embarked on a cross-country train trip to explore how games might be used for disaster preparedness.

In each city I met with first responders, Red Cross chapters, disaster management agencies, and community leaders. The goal was to identify ways to increase resilience through interactive games. The trip was fascinating, and exposed some core truths about our country’s relationship with disasters.

Here is what I learned:

1)  The coastal cities generally feel vulnerable and unprepared.  By contrast, the states in the middle of the country feel much more confident and capable. For example, everyone I spoke to in Montana was certified in some sort of disaster training, had survived 20 different avalanches or snow storms, and had impressive stockpiles of food and supplies. In other words, Montana is ready.

2)  Different regions are facing different challenges in the effort to become more resilient. In Seattle, disaster preparedness professionals need help communicating safety messages to high school and college students. In Milwaukee, the main fear is extreme weather and water contamination. In New York, preparedness resources have to be translated to a population that speaks over 800 different languages. My job was to determine how game mechanics might be applied to overcome these hurdles.

3)  Socio-economic factors play a huge role in the severity and impact of disasters. Therefore we can’t take a “one size fits all” approach to preparedness. Building a resilient community doesn’t start and end with emergency kits. We have to tackle larger issues of transportation, housing, and resources way before disasters happen.


4)  Despite major disparities across the country, two things remain true for every individual: Confidence and kindness are essential qualities during a crisis. We might be thrown into unprecedented scenarios, but the first step is having confidence in our ability to respond, and the second step is, quite simply, to be kind to others. Kindness can go a long way in de-escalating a crisis. Which presents an interesting challenge: how do we teach this concept through gaming?

5)  I’ve heard many people blame our country’s lack of preparedness on apathy. How else would you explain the fact that people still don’t have Go Bags or basic emergency plans for their family? But I don’t think “apathy” is the issue. I believe disasters are so enormous and terrifying, that people simply block them out. It is too big, it is too inaccessible. Therefore the problem isn’t apathy, it is paralysis.

6)  The act of “getting prepared” can be isolating and boring. Would I rather go to the hardware store and pick out flashlights for a crisis that is too scary to think about, or spend time with my family and friends? The latter, obviously.

7)  Finally, there is one thing that was true in every place I visited on my trip, one thing that united everyone in these incredibly diverse regions: people are more interested in and responsive to emergency preparedness messages that are fun and engaging than to messages that try to motivate through fear.

So by creating interactive games, we can offer people a different entry point – an opportunity to tackle disaster preparedness in a way that is social, memorable, and fun. We can take something that is boring and isolating and turn it into something engaging and social. We can turn something that is paralyzing into something that is accessible. We can design games that are entertaining and thought-provoking, without trivializing the disaster experience.

Over the next few years I’ll be exploring these nuances, and designing games as tools for resilience. If you find this interesting, please join me!


Jenny Gottstein

Jenny Gottstein is the Director of Games and a senior event producer for Go Game. Jenny has led interactive game projects, creativity trainings and design workshops around the world. Click here to read more about Jenny’s trip.


Data breaches can be terrifying; they can cost a business millions of dollars and cause long-lasting damage to a company's reputation, too.

And it often seems like no matter what companies do, data breaches are unstoppable. But is this really the case?

Let's find out...



(TNS) — When an ice storm hit Augusta, Ga., on Feb. 11, 2014, and lasted into the next morning, the city lacked disaster assessment teams to survey storm damage and had no unified effort to coordinate volunteer help. Nearly half the 57 locations approved for emergency shelter use by the American Red Cross were without backup generators or an alternate power supply.

The city’s debris removal plan was an “incomplete draft” that listed Traffic Engineering and Solid Waste as the departments in charge.

A year later, Fire Chief Chris James says Augusta’s Emergency Management Agency has overhauled its operations to address the problems it encountered.



NEW ORLEANS — Edward Gabriel, principal deputy assistant secretary for preparedness and response for the U.S. Department of Health and Human Services, told a gathering of emergency managers that every incident they respond to is in some way related to health and medicine, and he revealed a couple of secrets.

Gabriel delivered a keynote address at the International Disaster Conference and Expo in New Orleans on Feb. 10, and talked about some of the work his office is doing to develop resiliency to catastrophic events.

“There are things that we know that you should be aware,” Gabriel told the crowd. He was hinting at some of the dangers that could affect the U.S. regarding biological and nuclear attacks. Those threats are treated as possibilities in the offing by the Biomedical Advanced Research and Development Authority (BARDA) under his watch.



Anthem recently said hackers were able to illegally access the health insurance company's IT system, along with personal information from up to 80 million current and former members. And as a result, Anthem landed at the top of this week's list of IT security newsmakers to watch, followed by TurboTax, Trend Micro (TYO) and Avast.

What can managed service providers (MSPs) and their customers learn from these newsmakers? Check out this week's list of the biggest IT security stories to find out:



Wednesday, 11 February 2015 00:00

Big Data and the Mirror of Erised

“This mirror will give us neither knowledge or truth.”

So says Dumbledore in J.K. Rowling’s book, Harry Potter and the Sorcerer’s Stone, commenting on a mirror that shows us what our most desperate desires want us to see.

This is an apt analogy when describing the analytics available in big data solutions. When you suddenly have all the data you could want and can quickly analyze it any way you like, unencumbered by the extraneous effort we have historically had to endure, what happens? Being human beings with a tendency to confirm what we so want to have happen or to relive what felt so good in the past, managers often drift into self-sealing and circular analysis that at first doesn’t seem so wrong. Big data has to poke through the subtle and instinctual responses of data denial.



NEW ORLEANS—On the first day of the International Disaster Conference and Expo (IDCE), one of the primary areas of concern for attendees and speakers alike was the risk of pandemics and infectious diseases. In a plenary session titled “Contagious Epidemic Responses: Lessons Learned,” Dr. Clinton Lacy, director of the Institute for Emergency Preparedness and Homeland Security at Rutgers, focused on the recent and ongoing Ebola outbreak.

While only four people in the United States were diagnosed with Ebola, three of whom survived what was previously considered a death sentence, government and health officials cannot afford to ignore the crisis, Lacy warned.

“This outbreak is not just a cautionary tale, it is a warning,” Lacy said. “Ebola is our public health wakeup call.”

A slow start by the Centers for Disease Control, inadequate protective gear in healthcare facilities, and inadequate planning for screening, quarantine and waste management were some of the key failings in national preparedness for Ebola. And all were clearly preventable. A significant amount has been done to improve preparedness, Lacy said, but there is still a significant amount yet to do as well.



(TNS) — Commissioners and emergency officials in Pennsylvania are calling for reform for what they say is an outdated emergency telephone services law.

The law, enacted in 1990, doesn’t sufficiently address cellphones and other wireless devices and is adversely affecting funding for 911 systems, they say.

“This is the top priority for the (County Commissioners Association of Pennsylvania) this year,” Somerset County Commissioner Pam Tokar-Ickes said.

Tokar-Ickes also serves on the board of directors of the statewide organization.

“Since 1990, there have been significant changes because of technology — a lot more people using wireless devices — and the legislation is a piecemeal collection.”



(TNS) — When Paul Allen picks a cause, he usually takes his time.

The Microsoft co-founder likes to convene brainstorming sessions, consult experts and recruit advisers before making major philanthropic gifts.

But when Ebola flared in West Africa last summer, Allen was among the first private donors to step up. As the toll from the disease soared, he quickly raised his commitment to $100 million — the largest from any individual and double the amount contributed by the Bill & Melinda Gates Foundation.

Now that the epidemic seems to be slowing, Allen is still moving fast.



(TNS) — What would you do with a few seconds or minutes of warning before an earthquake strikes?

When late-night comedian Conan O’Brien considered the question recently, the result was a laugh-out-loud segment with people stampeding into walls, snapping risqué selfies or cranking up the boom box for one last dance.

A more sober — and useful — range of options will be on the table next week, when a small group of businesses and agencies embark on the Northwest’s first public test of a prototype earthquake early warning system.

“Up until now, we’ve been running it and watching the results in-house only,” said John Vidale, director of the Pacific Northwest Seismic Network at the University of Washington.



Enterprise apps are a hot item. I wrote a recent feature that cited research from appFigures, Kinvey and Frost & Sullivan that, in a variety of ways, pointed to the growth in interest on the part of both developers and their clients.

QuinStreet Enterprise, which publishes IT Business Edge, has released survey research that reveals an important finding: The user interface (UI) and related ease-of-use features rank very high on (if not at the top of) the list of important elements in the success of an enterprise app. The survey, “2015 Enterprise Applications Outlook: To SaaS or Not to SaaS” (free download with registration), said that the key features for enterprise users are easy implementation, smooth integration with existing technology and good security.



No matter what your stance on the cloud and its role in supporting critical vs. non-critical workloads, it should be clear by now that any data infrastructure that remains in the enterprise will be dramatically different from the sprawling, silo-based facilities of today.

Retaining key workloads in-house will likely be a priority for some, but that does not mean the data center isn’t ripe for an upgrade that improves data-handling while lowering capital and operational costs. And the strategy of choice at the moment is convergence.



(TNS) — As earthquakes continue to rattle Oklahomans after a record-setting year, state officials are trying to coordinate their responses and soothe fears.

Secretary of Energy and Environment Michael Teague said Friday his office will develop a website to help keep the public informed of various agency actions on earthquakes. He said it will be modeled after the Oklahoma Water Resources Board’s drought page, drought.ok.gov.

The state had 585 earthquakes greater than a 3.0 magnitude in 2014, up from 109 in 2013. Some studies have linked wastewater injection wells from oil and gas development to increased seismic activity.

“We recognize we have a problem,” said Teague, who heads the Governor’s Coordinating Council on Seismic Activity. “There’s something going on. But the science is not completely settled.”



(TNS) — New Mexico hasn’t had its first zombie infection yet, but if that happens, Nick Generous and others on a Los Alamos National Laboratory team will probably map it on their new Biosurveillance Gateway website.

All epidemics — whether Ebola, measles or zombie apocalypses — begin with patient zero.

“In the earliest stages of outbreak, there’s this critical period of time that officials can enact certain interventions to minimize and prevent the spread,” said Generous, a molecular biologist who helped develop the Biosurveillance Gateway. “So, how do you decide what to do?”

Quarantine, vaccinate or, in the case of that nasty zombie, just shoot its head off?



Telecommunications networks are huge users of energy. The cable industry, for instance, relies upon millions of servers, amplifiers and other network devices throughout vast networks. These all need to be powered. In homes, set-top boxes, gateways and other gadgets need juice, as well.

Cable and telcos, and the companies that support them, are taking steps to control this usage, at least in the home. In 2012, companies connected with the pay television industry entered a voluntary agreement to cut energy use in set-top boxes (STBs). Late last summer, D&R International, Ltd., on behalf of the group, published a report on the impact of the initiative on usage during 2013.

The report, according to Switchboard, the Natural Resources Defense Council staff blog, suggests very strongly that the agreement is having the desired effect. Energy use decreased 5 percent during the year and saved about $168 million. Energy usage by new STBs was 14 percent less than that of devices installed in 2012. The story points out that the next wave of voluntary requirements will increase savings to $1 billion annually when they are implemented in 2017.



Tuesday, 10 February 2015 00:00

Expect Shadow IT to Be a Long-Term Problem

Last week, CipherCloud revealed the results of a survey regarding the use of shadow IT. The study found that of the 1,100 cloud applications used in an enterprise setting, 86 percent are being used without the authorization of the IT department.

Fellow IT Business Edge blogger Arthur Cole believes that, despite the high use of shadow IT within the workspace, the practice’s decline is inevitable. He wrote:

Now that the cloud has taken a firm hold in the enterprise, shadow IT will diminish naturally as internal resources gain the flexibility and availability that knowledge workers require. In fact, you could argue that shadow IT is a net positive for the enterprise because it creates the impetus to shed aging, silo-based infrastructure in favor of a more flexible, dynamic environment. And ultimately, this will allow many organizations to abolish their IT cost centers entirely in order to focus resources on more profitable endeavors.



Here’s the quick version. Hackers operating on the same cloud server hardware as you can steal your encryption keys and run off with your data/bank codes/customers/company (strike out items that do not apply – if any). Yes, behind that mouthful of a title is a scary prospect indeed. Until recently, this kind of cloud-side hacking possibility had been discussed but not observed. Now a team of computer scientists has managed to recover a private key used by one virtual machine by spying on it from another virtual machine. Therefore a hacker could conceivably do the same to your VM from another VM running on the same server. How worried should you be?



Friday, 06 February 2015 00:00

Federal Flood Risk Management Standard

WASHINGTON – On January 30, the President issued Executive Order 13690, “Establishing a Federal Flood Risk Management Standard and a Process for Further Soliciting and Considering Stakeholder Input.” Prior to implementation of the Federal Flood Risk Management Standard, additional input from stakeholders is being solicited and considered on how federal agencies will implement the new Standard. To carry out this process, a draft version of the Implementing Guidelines is open for comment until April 6, 2015.

Floods, the most common natural disaster, damage public health and safety, as well as economic prosperity. They can also threaten national security. Between 1980 and 2013, the United States suffered more than $260 billion in flood-related damages. With climate change and other threats, flooding risks are expected to increase over time. Sea level rise, storm surge, and heavy downpours, along with extensive development in coastal areas, increase the risk of damage due to flooding. That damage can be particularly severe for infrastructure, including buildings, roads, ports, industrial facilities and even coastal military installations.

The new Executive Order amends the existing Executive Order 11988 on Floodplain Management and adopts a higher flood standard for future federal investments in and affecting floodplains, which will be required to meet the level of resilience established in the Federal Flood Risk Management Standard. This includes projects where federal funds are used to build new structures and facilities or to rebuild those that have been damaged. These requirements help make sure that buildings are constructed to withstand the impacts of flooding, improve the resilience of communities, and protect federal investments.

This Standard requires agencies to consider the best available, actionable science of both current and future risk when taxpayer dollars are used to build or rebuild in floodplains. On average, more people die annually from flooding than from any other natural hazard. Further, the costs borne by the federal government are greater than for any other hazard. Water-related disasters account for approximately 85% of all disaster declarations.

The Standard establishes the flood level to which new and rebuilt federally funded structures or facilities must be resilient. In implementing the Standard, agencies will be given the flexibility to select one of three approaches for establishing the flood elevation and hazard area they use in siting, design, and construction:

  • Utilizing best available, actionable data and methods that integrate current and future changes in flooding based on climate science;
  • Two or three feet of elevation, depending on the criticality of the building, above the 100-year, or 1%-annual-chance, flood elevation; or
  • 500-year, or 0.2%-annual-chance, flood elevation.
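
To make the arithmetic of these three approaches concrete, here is a minimal illustrative sketch. The function and its inputs are hypothetical; actual determinations rely on FEMA flood studies and agency-specific guidance, not ad hoc numbers.

    # Illustrative sketch of the three elevation approaches listed above.
    # All names and inputs are hypothetical; real determinations use FEMA
    # flood studies and agency guidance.

    def ffrms_elevation(elev_100yr_ft, elev_500yr_ft, climate_informed_ft=None,
                        critical_action=False):
        """Return the candidate design flood elevation (feet) under each approach."""
        options = {}
        if climate_informed_ft is not None:
            # Approach 1: climate-informed science approach
            options["climate_informed"] = climate_informed_ft
        # Approach 2: freeboard approach - 100-year elevation plus 2 ft,
        # or 3 ft for critical buildings
        options["freeboard"] = elev_100yr_ft + (3.0 if critical_action else 2.0)
        # Approach 3: 500-year (0.2%-annual-chance) flood elevation
        options["500_year"] = elev_500yr_ft
        return options

    # Example: a non-critical facility where the 100-year flood elevation is
    # 12.0 ft and the 500-year elevation is 14.5 ft.
    print(ffrms_elevation(12.0, 14.5))  # {'freeboard': 14.0, '500_year': 14.5}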

Prior to implementation of the Federal Flood Risk Management Standard, additional input from stakeholders is being solicited and considered. To carry out this process, FEMA, on behalf of the Mitigation Framework Leadership Group (MitFLG), published a draft version of Implementing Guidelines that is open for comment. A Federal Register Notice has been published to seek written comments, which should be submitted at www.regulations.gov under docket ID FEMA-2015-0006 for 60 days.  Questions may be submitted to FEMA-FFRMS@fema.dhs.gov.

FEMA will also be holding public meetings to further solicit stakeholder input and will also host a virtual listening session in the coming months. Notice of these meetings will be published in the Federal Register.  At the conclusion of the public comment period, the MitFLG will revise the draft Implementing Guidelines, based on input received, and provide recommendations to the Water Resources Council.

The Water Resources Council will, after considering the recommendations of the MitFLG, issue amended guidelines to provide guidance to federal agencies on the implementation of the Standard. Agencies will not issue or amend existing regulations or program procedures until the Water Resources Council issues amended guidelines that are informed by stakeholder input.

FEMA looks forward to participation and input in the process as part of the work towards reducing flood risk, increasing resilience, cutting future economic losses, and potentially saving lives.


FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.

The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

I was watching one of my favorite news shows late last night when the host came back from commercials with a breaking news story: Health-insurance company Anthem had been breached. The show’s host provided a couple of details of what the breach entailed; he said that it was personal information of customers and employees, their addresses, birthdates, Social Security numbers (emphasis was the host’s).

After that, I knew exactly what I was going to be waking up to this morning: an inbox filled with commentary on this latest high-profile breach and a topic right at hand for today’s blog post.

Much of that commentary applauded Anthem for its quick response to the breach, like this comment from Lee Weiner, SVP of products and engineering with Rapid7:



The Internet of Things is among the trends driving companies to invest in data virtualization, according to Suresh Chandrasekaran, senior VP for data virtualization vendor Denodo.

Data virtualization isn’t normally something you hear in Big Data discussions. I asked Chandrasekaran what problem data virtualization solved for IoT and other Big Data projects. Sensor data is generally pooled in a data repository or data lake, he explained, but it isn’t very useful without context.

Data virtualization allows you to leverage sensor and other Big Data and add context using other data sources. For instance, if you’re using sensors to monitor vehicles, you might want to combine that with maintenance records to predict when parts need to be changed.
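
As a toy illustration of that vehicle example, the sketch below joins raw sensor readings with maintenance records to decide which vehicles need service. The field names and thresholds are invented; a real data virtualization layer would federate these sources in place rather than copying them into application code.

    # Minimal sketch of adding context to sensor data, per the vehicle
    # example above. Field names and thresholds are invented.

    sensor_readings = [
        {"vehicle_id": "V1", "engine_hours": 1520, "vibration_g": 0.9},
        {"vehicle_id": "V2", "engine_hours": 310, "vibration_g": 0.2},
    ]

    maintenance_records = {
        "V1": {"last_service_hours": 1000, "service_interval_hours": 500},
        "V2": {"last_service_hours": 250, "service_interval_hours": 500},
    }

    def needs_service(reading, records):
        """Join a sensor reading with its maintenance context."""
        rec = records[reading["vehicle_id"]]
        hours_since_service = reading["engine_hours"] - rec["last_service_hours"]
        return hours_since_service >= rec["service_interval_hours"]

    for r in sensor_readings:
        print(r["vehicle_id"], "service due:", needs_service(r, maintenance_records))
    # V1 service due: True
    # V2 service due: False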



Friday, 06 February 2015 00:00

Is Your CEO Next? The Data Ticking Time Bomb

This morning, I read the news that Anthem Insurance had a massive data breach and that Amy Pascal, who led Sony Pictures as co-chairman, was stepping down as a result of Sony’s breach.

I’d just been sent a Varonis study, written by the Ponemon Institute. “Corporate Data: A Protected Asset or a Ticking Time Bomb?” couldn’t be more timely. The danger in not taking data security seriously is growing.

Let’s talk about this report against those events this week.



Thursday, 05 February 2015 00:00

Building the Agile Database

Is fast development the enemy of good development? Not necessarily. Agile development requires that databases are designed and built quickly enough to meet fast-paced delivery schedules - but in a way that also delivers maximum business value and reuse. How can both requirements be satisfied? This book, suitable for practitioners at all levels, will explain how to design and build enterprise-quality, high-value databases within the constraints of an agile project.

Starting with an overview of the business case for good data management practices, the book defines the various stakeholder groups involved in the software development process, explains the economics of software development (including "time to market" vs. "time to money"), and describes an approach to agile database development based on the five PRISM principles.



Thursday, 05 February 2015 00:00

12-Step Program for Emergency Managers

There are 12-step programs for many personal issues, so I figured there should be a 12-Step Program for Emergency Managers. I’ve written about our addiction to Department of Homeland Security grants that are administered by FEMA. Therefore it is only natural that we look for ways to escape our addiction and gain control over our individual programs. Getting out of addictive behavior can be difficult. 

Generally the concept of 12-step programs is to acknowledge a higher power and give everything over to its control. The only “higher power” that emergency managers have is FEMA, so we are in a bit of a Catch-22 in that we are trying to escape its grant clutches while at the same time giving our lives over to its control. We should at least try this 12-step program that I’ve adapted from Alcoholics Anonymous.



Thursday, 05 February 2015 00:00

The Inevitable Decline of Shadow IT

Sometimes it seems as if the enterprise is so caught up in preparing for the future that it fails to notice what is happening in the present.

The cloud is a prime example, with most top data executives enamored by visions of limitless, federated infrastructure able to do anyone’s bidding at the touch of a few mouse clicks. In the meantime, however, few are overly concerned by the unorganized spread of data across external cloud platforms, the so-called shadow IT, despite the significant loss of control it represents.

According to CipherCloud, about 86 percent of enterprise applications are now tied to shadow IT, especially those involved in publishing, social networking and career-based functions. This should be of particular concern to the enterprise considering the increasing sophistication of mobile malware and the ongoing spate of massive data breaches. However, many organizations are not even aware of the scope of the problem: One major enterprise in the survey claimed to have only 15 file-sharing apps in use when in reality it had nearly 70.



Thursday, 05 February 2015 00:00

What’s ‘Good Enough’ Data Quality?

When you dig into data quality—and more of you are—you’ll hear a lot about “good enough” data quality. But what the heck does that mean? And how do you know if you’ve achieved it?

Data folks have long understood that data quality is a continuum. Data quality comes with an associated cost and, at some point, that cost is not worth paying to further “perfect” the data; hence, the concept of “good enough” data quality.

That may have made sense in a relational database world, but now … it’s complicated. The data isn’t just being used for reporting, but is also being leveraged in BI and analytics systems. Data has left IT and is being used to drive decisions across the organization. What’s more, data looks different—it’s now social data, sensor data, external data, Big Data.



Thursday, 05 February 2015 00:00

Selfie-Sticks and Risk Assessments

Greetings from Venice and a big thanks to Joe Oringel at Visual Risk IQ for allowing me to post his five tips on working with data analytics while I was on holiday in this most beautiful, haunting and romantic of cities. While my wife and I have come here several times, we somehow managed to arrive on the first weekend of Carnivale, without knowing when it began. On this first weekend, the crowds were not too bad and it was more of a locals’ scene than the full-on tourist scene.

As usual, Venice provides several insights for the anti-corruption compliance practitioner, whether you labor under the Foreign Corrupt Practices Act (FCPA), UK Bribery Act, both, or some other such law. One of the first things I noticed in Venice was the large number of selfie-sticks and their use by (obviously) tourists. But the thing that struck me was that the street vendors who previously sold all manner of knock-off and counterfeit purses, wallets and otherwise fake leather goods had now moved exclusively to marketing these selfie-sticks. Clearly these street vendors were responding to a market need and had moved quickly to fill this niche.



Thursday, 05 February 2015 00:00

Cloud adoption and risks

With faster time to market, massive economies of scale, and unparalleled agility, the cloud is entering enterprises at an unprecedented rate. As a result, hundreds of high-risk cloud applications are commonly used across North American and European organizations, says a CipherCloud report. The report details the results of a comprehensive study of cloud usage and risks, compiled from enterprise users in North America and Europe.

‘Cloud Adoption & Risk Report in North America & Europe – 2014 Trends’ includes anonymised data of cloud user activity collected for the full 2014 calendar year, spanning thousands of cloud applications.



Thursday, 05 February 2015 00:00

Another Mega Data Breach

In what is being described as potentially the largest breach of a health care company to date, health insurer Anthem has confirmed that it has been targeted in a very sophisticated external cyber attack.

The New York Times reports that hackers were able to breach a company database that contained as many as 80 million records of current and former Anthem customers, as well as employees, including its chief executive officer.

Early reports suggest the attack compromised personal information such as names, birthdays, medical IDs/social security numbers, street addresses, email addresses and employment information, including income data.



(TNS) — While many coastal communities in the Tampa Bay area have been spared a catastrophic spike in flood insurance rates for now, local city leaders say they’re preparing for the worst over the long haul.

In Belleair Bluffs on Tuesday, the Florida League of Cities hosted the first in a series of meetings throughout the state to encourage city governments to invest more in flood mitigation programs that can reduce the risk of storm damage and lower federal flood premiums for local residents by an average of 20 percent.

Cities can increase those savings for nearly all residents who carry flood coverage by improving storm-water drainage, enhancing building codes, moving homes out of potentially hazardous areas and effectively communicating about storm danger and evacuation routes.



(TNS) — Colorado Springs is making a pitch to host a new state-funded center for fire research, a technology hub that could help propel Colorado to the forefront of revolutionizing how wildfires are fought.

The Colorado Springs Regional Business Alliance plans to submit a report this week detailing why El Paso County, twice victim of catastrophic wildfire, should be the new home for the fire research center.

While the public eye may have been trained on the Colorado Firefighting Air Corps created last year, a lesser-known aspect of the Centennial-based fleet — the Center for Excellence for Advanced Technology Aerial Firefighting — has been on the wish list for some Colorado Springs leaders for months.



Despite increasing attention to cybersecurity and a seemingly constant stream of high-profile data breaches, the primary security method used in businesses worldwide remains the simple password. According to a recent study, the average person now has 19 passwords to remember, so it is not surprising that the vast majority of passwords are, from a security perspective, irrefutably bad, including sequential numbers, dictionary words or a pet’s name.

A new report by software firm Software Advice found that 44% of employees are not confident about the strength of their passwords. While many felt their practices were either extremely or very secure, the group reported: “our findings suggest that users either remain unaware of the rules despite the hype, do not believe them to be good advice or simply find them too burdensome, and thus opt for less secure passwords.”
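
The weak patterns named above (sequential numbers, dictionary words, a pet’s name) are straightforward to screen for. Below is a minimal, hypothetical audit sketch; the word lists are tiny stand-ins for the far larger ones a real audit would use.

    # Minimal sketch of screening for the weak password patterns mentioned
    # above: sequential digits, dictionary words and pet names.

    COMMON_WORDS = {"password", "welcome", "dragon", "monkey"}  # stand-in dictionary
    PET_NAMES = {"rex", "bella", "max", "luna"}                 # stand-in pet names

    def weak_password_reasons(password):
        """Return a list of reasons a password is weak (empty if none found)."""
        reasons = []
        lowered = password.lower()
        digits = "".join(ch for ch in lowered if ch.isdigit())
        # Ascending runs such as "1234" appear as substrings of "0123456789"
        if len(digits) >= 4 and digits in "0123456789":
            reasons.append("sequential numbers")
        if any(word in lowered for word in COMMON_WORDS):
            reasons.append("dictionary word")
        if any(name in lowered for name in PET_NAMES):
            reasons.append("pet name")
        if len(password) < 8:
            reasons.append("too short")
        return reasons

    print(weak_password_reasons("bella1234"))  # ['sequential numbers', 'pet name']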

Among the biggest password sins employees commit:



Data security has become an even bigger topic in the last year following several high-profile data breaches at consumer companies. And much of the focus has been on protecting against the breaches themselves. But are there other ways to protect data? MSPmentor recently took a deeper look at a technology called data masking. Here's what we found.

Many banks, government agencies, hospitals, insurance companies and other organizations that manage highly sensitive information are using a technique to hide their data from cybercriminals – data masking. The technique camouflages the real data that you want to protect by interspersing other characters and/or data with it. So the data hides in plain sight, but it cannot be seen or discovered.
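
As a rough illustration of the general idea (not any specific vendor’s technique), the sketch below performs one common variation, format-preserving substitution: each character is replaced with a random one of the same class, so the masked record keeps the shape of the original while hiding the real values.

    import random
    import string

    # Minimal sketch of format-preserving masking: real characters are
    # replaced with random ones of the same class, so masked records keep
    # their shape but reveal nothing.

    def mask(value):
        out = []
        for ch in value:
            if ch.isdigit():
                out.append(random.choice(string.digits))
            elif ch.isalpha():
                out.append(random.choice(string.ascii_letters))
            else:
                out.append(ch)  # keep separators such as '-' and ' '
        return "".join(out)

    print(mask("123-45-6789"))    # e.g. '504-91-2260' (same shape, new data)
    print(mask("Jane Q. Public")) # e.g. 'xRvq B. mYtWcd'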



Enterprises are scrambling to come up with ways to scale their infrastructure to meet the demands of Big Data and other high-volume initiatives. Many are turning to the cloud for support, which ultimately puts cloud providers under the gun to enable the hyperscale infrastructure that will be needed by multiple Big Data clients.

Increasingly, organizations are turning to in-memory solutions as a means to provide both the scale and flexibility of emerging database platforms like Hadoop. Heavy data loads have already seen a significant performance boost with the introduction of Flash in the storage farm and in the server itself, and the ability to harness non-volatile RAM and other forms of memory into scalable fabrics is quickly moving off the drawing board, according to Evaluator Group’s John Webster. In essence, the same cost/benefit ratio that solid state is bringing to the storage farm is working its way into the broader data infrastructure. And with platforms like SAP HANA hitting the channel, it is becoming quite a simple matter to host entire databases within memory in order to gain real-time performance and other benefits while still maintaining persistent states within traditional storage.



By Leon Adato

In the corporate environment, end users and, more worryingly, the occasional IT pro, are the first to point the finger of blame at the network when an application is sluggish, data transfer is too slow or a crucial Voice over IP (VoIP) call drops, all of which can have a wider impact on the bottom line.

Issues arise when the IT department looks to blame the network as a whole, rather than work to identify problems that are caused by an individual application running on the network. Poor design, large content and memory leaks can all cause an application to fail, yet IT departments can be slow to realise this.

Many companies are reliant on applications to drive business-critical processes. At the same time, applications are becoming increasingly complex and difficult to support, which puts additional pressure on the network. So, the question remains, when there’s an issue with application performance, is it the network or is it the application? How do you short-circuit the ‘blame game’ and determine the root-cause of an issue so it can be solved quickly and efficiently?
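
One practical way to start answering that question is to time the network path and the application separately. The sketch below is a minimal illustration with a placeholder host: it measures TCP connect time (dominated by the network) and time to first response byte (dominated by server-side processing).

    import socket
    import time

    # Minimal sketch separating network latency from application latency.
    # TCP connect time reflects the network path; the additional wait for
    # the first response byte reflects server-side processing.
    # "example.com" is a placeholder host.

    HOST, PORT = "example.com", 80

    t0 = time.perf_counter()
    sock = socket.create_connection((HOST, PORT), timeout=5)
    t_connect = time.perf_counter() - t0  # network round trip + handshake

    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    t1 = time.perf_counter()
    sock.sendall(request.encode())
    sock.recv(1)                             # block until the first byte arrives
    t_first_byte = time.perf_counter() - t1  # dominated by application time
    sock.close()

    print(f"network (connect):        {t_connect * 1000:.1f} ms")
    print(f"application (first byte): {t_first_byte * 1000:.1f} ms")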




In the past we have often heard that people got involved with business continuity through another career, perhaps drifting into it from facilities management or IT security. Now we are finding that more and more people are starting off in a business continuity role; the industry has developed into a career opportunity in its own right and people are joining it straight from school, college or university. In order to develop the industry further and take it forward, we need to inspire and encourage the right people to become business continuity professionals, and where better to do this than in schools?

To meet this aim, the Business Continuity Institute has formed a new partnership with Inspiring the Future, a free service where volunteers pledge one hour a year to go into state schools and colleges and talk about their job, career, and the education route they took. Already to date, over 7,500 teachers from 4,400 schools and colleges and over 18,500 volunteers have signed up.

Everyone from Apprentices to CEOs can volunteer for Inspiring the Future. Recent graduates, school leavers, apprentices, and people in the early stages of their career can be inspirational to teenagers - being close in age they are easy to relate to; while senior staff have a wealth of knowledge and experience to share. Your insights will help to inspire and equip students for the next steps they need to take.

Inspiring the Future is currently running a campaign called Inspiring Women with the aim to get 15,000 inspirational women from Apprentices to CEOs signed up to Inspiring the Future, to go into state schools and colleges to talk to girls about the range of jobs available, and break down any barriers or stereotypes.  For further information click here

Why volunteer in a local school or college?

  • Going into state schools and colleges can help dispel myths about jobs and professions, and importantly, ensure that young people have a realistic view of the world of work and the routes into it.
  • Getting young people interested in your job, profession or sector can help develop the talent pool and ensure a skilled workforce in the future.

To sign up to Inspiring the Future as a BCI member, simply click here and follow the steps. In the ‘My Personal Details’ section, under the heading ‘My memberships of Professional Association …’ please write Business Continuity Institute and it will appear for you to select.

By signing up, you make it easy for local schools and colleges to get in touch to see if you can help them help their pupils make better decisions about the future. You might be asked to take part in a careers fair, in career networking (speed dating about jobs), or to give a lunchtime talk to sixth formers about your job and how you got it.

Volunteering for Inspiring the Future is free, easy, effective and fun. Volunteers and education providers are connected securely online, and volunteering can take place near home or work as employees specify the geographic locations that suit them. Criminal Records Bureau checks are not needed for career insights talks, as a teacher is always present.

Inspiring the Future is a UK initiative but if you know of a similar scheme in another country then get in touch and let us know. Our aim is to inspire people to become business continuity professionals all across the world.




When he speaks of that Thursday, Nov. 6, 2014, Dan Hoffman’s memory is a blur. Details come back in hazy pieces. His first recollections flash back to a headache, a throbbing pain that drove him into an afternoon nap. Next he recalls the sensations of heat, waking to a baking swelter. Next the glow of flames, a black canopy of smoke above, coughs shaking his lungs, the fire alarm shrieking, attempting to stand, to breathe, to reach for his cellphone and dial 911.

“My instinct was to get out,” Hoffman said.

He stumbled from the bedroom, to the bathroom, to the living room of his family’s home in Traverse City, Mich. The voice of a dispatcher must have spoken to him through his cellphone. He doesn’t recall it though. He only remembers listening to his own voice. He said the word “help” twice. It was the last thing he heard before collapsing, falling unconscious as his house continued to burn.



Tuesday, 03 February 2015 00:00

Measles and the Risk of Infectious Diseases

If you’re reading about the rising number of measles cases in California, you may also be thinking about pandemic risk.

First, let’s look at the status of measles cases and outbreaks in the United States.

The CDC notes that from January 1 to January 28, 2015, 84 people from 14 states were reported to have measles. Most of these cases are part of a large, ongoing outbreak linked to Disneyland in California.

On Friday (January 30, 2015), the California Department of Public Health released figures showing there are now 91 confirmed cases in the state. Of those, 58 infections have been linked to visits to Disneyland or contact with a sick person who went there.

At least six other U.S. states – Utah, Washington, Colorado, Oregon, Nebraska and Arizona – as well as Mexico have also recorded measles cases connected to Disneyland, according to this AP report.

What about last year?



Don’t think you are vulnerable to an insider threat? You might want to have a conversation with your IT department, then. According to Vormetric's 2015 Insider Threat Report, 93 percent of IT personnel think their company is at risk from an insider threat. Also, 59 percent of respondents worry about privileged users (employees who have high-level access to very sensitive data), who are considered to be the company’s greatest threat.

Thanks in part to the recent Sony hack, insider threats and the dangers they pose are getting a lot more attention than they have in the past. But as Eric Guerrino, executive vice president of the Financial Services Information Sharing and Analysis Center, noted in eSecurity Planet, insider threats have long been a problem and a top focus area for security concerns. It’s just that now those beyond IT and security staff are beginning to grasp the severity of the issue.



A survey of New South Wales Shires and Councils has looked at risk management, business continuity, and internal audit practices and identified a number of gaps in some critical areas. Over 50 percent of NSW councils participated in the survey, which was conducted by InConsult.

“The high number of responses has provided data that we believe to be valid and paints a good picture of the current state of risk management in NSW councils” says InConsult Director Tony Harb.

“Overall, we have seen improvements across the board in risk management practices, such as developing formal risk management policies and strategies, formal risk appetite statements and maintaining comprehensive risk registers. More Councils now class their risk management in the ‘proficient’ category of risk management maturity.



Many CEOs tend to see business continuity management purely within the context of complying with governance codes. But, says Leigh-Anne van As, business development manager at ContinuitySA, CEOs also need to see how business continuity management can help them answer three key strategic questions.

Van As argues that CEOs need to be able to answer ‘yes' to three key questions:

  • Do you know which products and services offered by your company are vital to ensuring its strategic objectives can be met?
  • Is your organizational structure aligned to the company's strategic objectives?
  • Do you know exactly which resources (including human resources) are required for the company to achieve its strategic objectives?

"Companies typically offer a multiplicity of products and services, but CEOs and their immediate teams need to understand which ones are absolutely vital to the company's ability to meet its strategic targets. They also need to understand exactly which resources are essential to delivering those products and services," she explains. "Once they have the answers, CEOs and their teams can allocate investment and attention appropriately, and optimise the company's operations."



Tuesday, 03 February 2015 00:00

DDoS attacks proving costly for businesses


According to a study conducted by Kaspersky Lab and B2B International, a Distributed Denial of Service (DDoS) attack on a company’s online resources might cause considerable losses – with average figures ranging from $52,000 to $444,000 depending on the size of the company. For many organizations these expenses have a serious impact on the balance sheet as well as harming the company’s reputation due to loss of access to online resources for partners and customers.

According to the study, 61% of DDoS victims temporarily lost access to critical business information; 38% of companies were unable to carry out their core business; 33% of respondents reported the loss of business opportunities and contracts. In addition, in 29% of DDoS incidents a successful attack had a negative impact on the company’s credit rating while in 26% of cases it prompted an increase in insurance premiums.

DDoS attacks are not just costly, they are also becoming more frequent and more complex. A different study, carried out by Arbor Networks, revealed that 38% of survey respondents experienced more than 21 attacks per month, compared to just over 25% in 2013. It was also noted that we are now experiencing much larger attacks, sometimes over 100Gbps and even up to 400Gbps. Ten years ago the largest attack was 8Gbps.

With this as a backdrop, it is perhaps no surprise that cyber attacks have consistently been one of the top three threats for business continuity professionals according to the Business Continuity Institute’s annual Horizon Scan report.

“A successful DDoS attack can damage business-critical services, leading to serious consequences for the company. For example, the recent attacks on Scandinavian banks (in particular, on the Finnish OP Pohjola Group) caused a few days of disruption to online services and also interrupted the processing of bank card transactions, a frequent problem in cases like this. That’s why companies today must consider DDoS protection as an integral part of their overall IT security policy. It’s just as important as protecting against malware, targeted attacks, data leak and the like,” said Eugene Vigovsky, Head of Kaspersky DDoS Protection, Kaspersky Lab.



Most actuaries know about projections that go awry, so we have quite a bit of sympathy for the weather forecasters who missed the mark early this week, says I.I.I.’s Jim Lynch:

Weather forecasts have improved dramatically in the past generation, but this storm was odd. Usually a blizzard is huge. On a weather map, it looks like a big bear lurching toward a city.

This storm was relatively small but intense where it struck. On a map, it looked like a balloon, and the forecasters’ job was to figure out where the balloon would pop. They were 75 miles off. It turned out they over-relied on a model – the European model, which had served them well forecasting superstorm Sandy, according to this NorthJersey.com post mortem.



If you’ve ever wondered whether your data governance committee is covering the right issues, then you’ll want to read Joey Jablonski’s recent column, “12 Step Guide for Data Governance in a Cloud-First World.”

Despite the title, five of the steps are actually a great strategic discussion list for any data governance group. Jablonski says organizations should cover each of the following:



Monday, 02 February 2015 00:00

A Strange Diagram

I found this – and have never seen it before:


It’s a strange thing as it appears to begin at the top of the cycle with ‘Corporate responsibility’. While I understand the definition (here’s one: Corporations have a responsibility to those groups and individuals that they can affect, i.e., their stakeholders, and to society at large. Stakeholders are usually defined as customers, suppliers, employees, communities and shareholders or other financiers. (Financial Times Lexicon)) – is it something that should be at the core of the diagram rather than part of a security management cycle? I’m not splitting hairs here – it is about separation of process from strategy, I think. Further – shouldn’t ‘Understand the Organization’ come first? It does for me – unless we understand the organization, how can we meet our responsibilities – corporate, security or otherwise?



A growing hazard has emerged in the cloud security space that is threatening organizations from inside of their own physical and virtual walls. As employees across multiple industries continue to adopt ‘shadow cloud’ services in the workplace, organizations and managed service providers (MSPs) need to carefully monitor its effects on security and cloud-based file sharing.

The Cloud Security Alliance’s (CSA) official definition of “shadow cloud” services is “cloud applications and services adopted by individual employees, teams, and business units with no formal involvement from the organization’s IT department.” The threat of this unsanctioned cloud usage is a potential security risk to individuals and enterprises alike, as the services are less protected and secured.



Robin Murphy is a leader in the field of disaster robotics, having started working on the topic in 1995 and researching how the mobile technologies have been used in 46 emergency responses worldwide. She has developed robots that have helped during responses to numerous emergencies, including 9/11 and Hurricane Katrina. As director of the Center for Robot-Assisted Search and Rescue at Texas A&M University, Murphy works to advance the technology while also traveling to disasters when called upon to help agencies determine how robots can aid the response. The center’s first deployment was in response to 9/11, which also was the first reported use of a robot during emergency response.

Emergency Management: Since 9/11, how have you seen the use of robots in disasters change?

Robin Murphy: We started out in 2001 and up until 2005 you didn’t see the use of anything but ground robots. Everything was very ground-centric, and I think that reflected the state of the technology. For years we had bomb squad robots, which were being made smaller and smaller for military tactical operations so that gave them a tool that was pretty easy to use. Starting in 2005, we saw the first use of small unmanned aerial vehicles that were being developed primarily for the military market and those were very useful. Those have really come up and, in fact, since 2011, I’ve only found one disaster that didn’t use an unmanned aerial vehicle and that was the South Korea ferry where they used an underwater vehicle. So we went from ground robots dominating to about 2005 and then we started shifting toward unmanned aerial vehicles. In about 2007, it became much more commonplace to see underwater vehicles being used. Then starting in about 2011, I think if you have a disaster and you’re an agency and you haven’t figured out a way to use a small unmanned aerial system, it’s kind of surprising.



Friday, 30 January 2015 00:00

10 Expert Tips for Better Data Storage

Better data storage means different things to different people. For some it is all about speed; for others, cost is the primary factor. For many it is about coping with soaring data volumes, while for still others, simplicity and ease of install/use are the top-of-mind elements.

Whatever your opinion of what better data storage is, here are a few tips on how to improve storage in the coming year.



What worries chief information officers (CIOs) and IT professionals the most? According to a recent survey commissioned by Sungard Availability Services, information security, downtime and talent acquisition weigh heaviest on their minds.

Information security
Due to the increasing frequency and complexity of cyber-attacks, security ranks highest among IT concerns in the workplace for CIOs; as a result more than half of survey respondents (51 percent) believe security planning should be the last item to receive budget cuts in 2015.

While external security threats are top of mind for IT professionals, internal threats are often the root cause of security disasters. Nearly two-thirds of the survey respondents cited leaving mobile phones or laptops in vulnerable places as their chief security concern (62 percent), followed by password sharing (59 percent). These internal security challenges created by employees lead 60 percent of respondents to note that in 2015 they would enforce stricter security policies for employees.



Friday, 30 January 2015 00:00

Public apathy in the path of preparedness

Responses to winter storm Juno seem to show that you cannot please the public when it comes to preparedness. In this article Geary Sikich asks whether business continuity and emergency planners are missing something when it comes to communicating preparedness with the public.

I was supposed to be in Boston presenting at ‘The Disaster Conferences’ on 28 January 2015. Well, the weather just put us out to 19 March 2015 for the now rescheduled Boston conference. I guess that they are still feeling the effects of this week’s blizzard, now named ‘Juno’, which left Boston with over 24 inches of snow. According to the Weather Channel, Winter Storm Juno pounded locations from Long Island to New England with heavy snow, high winds and coastal flooding late Monday into Tuesday. The storm is now winding down. The National Weather Service has dropped all winter storm and blizzard warnings for Juno.

Snow amounts in New York have ranged from 9.8 inches at Central Park in New York City to 30 inches on Long Island. The snippets from the Weather Channel and from other news sources barrage us with the details of this latest storm:

  • In Massachusetts, up to 36 inches of snow has been measured in Lunenburg, while Boston has seen 24.4 inches. Juno was a record snowstorm for Worcester, Massachusetts (34.5 inches). Incredibly, 31.9 inches fell in Worcester on Jan. 27 alone!
  • Thundersnow was reported in coastal portions of Rhode Island and Massachusetts late Monday night and early Tuesday.



Thursday, 29 January 2015 00:00

REAL ID Act Catches Up with States

(TNS) — Lawmakers are scrambling to fix a problem that could result in Idaho driver's license holders being denied entry to federal facilities nationwide by the end of the year.

The issue arose last week, when the Idaho National Laboratory began enforcing the REAL ID Act.

The act, adopted in 2005, was a response to the Sept. 11, 2001, terrorist attacks. It tries to limit the availability of false driver's licenses and identification cards by imposing detailed security requirements on states for issuing such cards.

Idaho is one of nine "non-compliant" states, meaning the U.S. Department of Homeland Security isn't satisfied with its efforts to implement the act.

Consequently, Idaho licenses and ID cards can no longer be used to gain entry to nuclear power plants, to restricted portions of the Homeland Security headquarters building or - as of Jan. 19 - to INL and certain other federal facilities.



(TNS) — When disaster strikes in Palm Beach County, Fla., a team of volunteers trained by county emergency managers can be deployed as the first line of defense, helping their communities with everything from search and rescue to basic first aid to putting out small fires.

They can also be called upon to distribute or install smoke alarms, hand out disaster education materials or replace smoke alarm batteries in the homes of the elderly, according to a brochure about the program.

But there's no requirement that they be subject to any kind of criminal background check.

That could change after a concerned Boynton Beach resident complained to the Florida Division of Emergency Management's Inspector General. In a report released last week, the inspector recommended that background checks be a condition of the grants doled out for the program.



Business continuity and cloud file sync services provider eFolder has announced the release of the production version of Cloudfinder for Box, a dedicated cloud-to-cloud backup, search and restore service for Box. The company rolled out the production version of the offering following Box’s (BOX) long-anticipated initial public offering last week.

The production version of Cloudfinder for Box builds on a Freemium version that was available last year.  eFolder completed its acquisition of Cloudfinder in Q3 of 2014.



The Business Continuity Institute is pleased to announce the launch of its new Careers Centre, providing those working in the industry with the support they need to further their career by highlighting the job opportunities available. The BCI Careers Centre will also allow recruiters to find the perfect candidate for them by offering a CV search facility.

If you’re looking for a new job in business continuity or resilience then look no further than the BCI Careers Centre. Powered by JobTarget, the Careers Centre pulls in advertised vacancies from global recruitment sites, as well as those advertised directly with the BCI, and allows users to search by position or location. The system also allows users to set up a job alert so they can be the first to see new vacancies.

If you’re a recruiter then post your job within the Careers Centre to make sure it can be seen by a wide selection of desired candidates. If you’d rather seek people directly then search through the CVs uploaded by business continuity professionals to find the one who is suitable for you, or perhaps a selection that you would like to shortlist. The BCI Careers Centre is an open site with business continuity and resilience specialists from around the world encouraged to register for vacancies.

As the Careers Centre is specifically designed to focus on roles in the business continuity and resilience industry, it might be helpful to know what industry memberships or credentials a potential employee has. If you're a member of the BCI or hold a BCI credential then this will be clearly identified on your profile. It will also be clearly identified if you are on the BCI's CPD scheme.


Big Data will bring new challenges to data governance. Succeeding will require organizations to simplify, prioritize and above all adapt as Big Data use matures.

Yesterday, I shared four Big Data governance challenges:

  • Changing data roles
  • Broader business involvement
  • Business buy-in
  • Technical challenges

Let’s look at how those success principles can be applied to the first two Big Data governance challenges.




Is there anything that can’t be connected to the Internet? For example, where I once wore a $10 pedometer clipped to the waistband of yoga pants, I now wear a $130 fitness tracker on my wrist. In the past, I just glanced at the numbers on the pedometer to see how many steps I’d taken; now I need to log onto an app on my smartphone to see how far I’ve walked, how many calories I’ve burned and even how well I’ve slept. Or, if I wanted to, I could turn on any light in the house from the comfort of my couch rather than get up and do so manually. And that barely scratches the surface of the phenomenon known as the Internet of Things (IoT).

However, if we know that virtually everything can now be connected to the Internet, we have to recognize the corollary: everything that can be connected to the Internet can be hacked. That fitness tracker I’ve come to depend on? Much of the information it transmits isn’t sent securely, and its companion apps have been found to contain vulnerabilities. According to Symantec, this could make my movements easy to track and my login details easy to steal. Those smart light bulbs, according to Slate, have insecure transmitters that could share too much information. And what about the home security system you have … you know, the one you turn on and off with your smartphone?



Wednesday, 28 January 2015 00:00

Big Data: Four New Governance Challenges

Before you move forward with Big Data, you’ll need to evolve your approach to data governance, experts say.

By now, most organizations are familiar with the basics of data governance: Identify the data owner, appoint a data steward, and so on. While those concepts are still essential to data governance, Big Data introduces new challenges that will require new adaptations.

“The arrival of Big Data should compel enterprises to re-think their approach to conventional data governance,” writes Dan O’Brien for Inside Analysis. “Everything about Big Data – its context, provenance, speed, scale and ‘cleanliness’ – extends data governance far beyond traditional, rigid databases, where it’s already an issue.”

Here’s a look at the new challenges Big Data introduces:



There has been innovation in every aspect of how individuals prepare for major snow storms – everything from funky new snow removal devices to new ways of pre-treating road surfaces for anti-icing before the onset of a major storm. Now, the real promise is in taking some of Silicon Valley’s hottest technologies — the Internet of Things, artificial intelligence, crowdsourcing, renewable energy and autonomous vehicles — and using them to improve the way cities respond to blockbuster snow events such as the Blizzard of 2015:



It was an unprecedented step for what became, in New York City, a common storm: For the first time in its 110-year history, the subway system was shut down because of snow.

Transit workers, caught off guard by the shutdown that Gov. Andrew M. Cuomo announced on Monday, scrambled to grind the network to a halt within hours.

Residents moved quickly to find places to stay, if they were expected at work the next day, or hustle home before service was curtailed and roads were closed.

And Mayor Bill de Blasio, whose residents rely upon the transit system by the millions, heard the news at roughly the time the public did.

“We found out,” Mr. de Blasio said on Tuesday, “just as it was being announced.”



Marshall Goldsmith, an executive coach to the corporate elite, is the author of the very popular book What Got You Here Won’t Get You There. And while the title may be true as it relates to your individual career path, I have news for C-suite executives everywhere: it is not true when it comes to adopting new technology. In fact, what got you here – to your current state of success – is precisely what will get you to the next level. The problem is, as Chief Information Officers (CIOs) and IT professionals, we sometimes allow ourselves to be pressured into acting contrary to what we know is the right thing to do.

Here’s what happens. A CEO approaches a CIO and says (in a nutshell), “What’s our cloud strategy? We have to get everything into the cloud.” The CEO has read the analysts, seen the marketing materials, been to the trade shows, and talked to peers. Is it any wonder that he or she comes to the CIO with an urgent “let’s-move-it-all-before-we-get-left-behind” deliverable? The cloud is the newest, latest, greatest, sexiest thing out there. It has benefits galore. Let’s get in on this. Now.



Were most of the data breaches that occurred in the first half of last year preventable? According to the Online Trust Alliance (OTA), a nonprofit organization that provides businesses with online security best practices, 90 percent of these incidents "could have easily been prevented."

And thanks in part to its recent findings, the OTA sits atop this week's list of IT security newsmakers to watch, followed by Adobe (ADBE) Flash Player, Kaspersky Lab founder Eugene Kaspersky and St. Peter's Health Partners.



Tuesday, 27 January 2015 00:00

The inevitability of a cyber attack


Research published by ISACA has shown that close to half (46%) of respondents to a global survey of IT professionals expect their organization to face a cyber attack in 2015 and 83% believe cyber attacks are one of the top three threats facing organizations today. Despite this, 86% say there is a global shortage of skilled cyber security professionals and only 38% feel prepared to fend off a sophisticated attack.

It is not just IT professionals who are worried about cyber attacks: the Business Continuity Institute’s own Horizon Scan report showed that cyber attacks and data breaches are two of the greatest threats to organizations. It is therefore vital that organizations have systems and people in place to combat these threats or, should an attack succeed, as all too often happens, have processes in place to manage the aftermath.

Data breaches at a series of high profile retailers in 2014 made the issue of data security particularly visible to consumers and demonstrated the struggles that companies face in keeping data safe. Finding and retaining skilled cyber security employees is one of those challenges. In fact, 92% of ISACA’s survey respondents whose organizations will be hiring cyber security professionals in 2015 say it will be difficult to find skilled candidates.

“ISACA supports increased discussion and activity to address escalating high profile cyber attacks on organizations worldwide,” said Robert E Stroud, International President of ISACA. “Cyber security is everyone’s business, and creating a workforce trained to prevent and respond to today’s sophisticated attacks is a critical priority.”



On the surface (pardon the pun), NASA’s recent move to the cloud would not seem to have much to do with MSPs who offer cloud-based file sharing. But a closer look into the high-profile project – as recently highlighted on GigaOm – proves otherwise. 

Indeed, there are some things that all cloud transitions have in common, whether it’s the nation’s space program or a 10-person SMB. To illustrate our point, we wanted to examine this story through the lens of a managed service provider and their clients. Here we go…



(TNS) — From intuitive improvements — such as better statewide communication and pre-storm protocols — to more sensible plow blades and smarter technology for plow truck drivers, the crews at the Pennsylvania Department of Transportation’s (PennDOT) District 9 are becoming more equipped each year to handle Pennsylvania weather as efficiently as possible.

“The key words are ‘situational awareness,’” said Walter Tomassetti, assistant district executive for PennDOT’s District 9, which includes Cambria and Somerset counties. “The focus is on being ahead of the storm.”

Now, when PennDOT officials see major weather coming, such as double-digit snow, representatives from each district statewide have a pre-storm meeting to cover what resources will be needed most — and where. Depending on what’s expected, they also may set up a command center in each district.



(TNS) — There's something that has appeared on the Diamond School District campus that is so anticipated that it's drawing youngsters away from their recess to watch it in action.

It's a bulldozer, and it's turning ground outside the elementary school in preparation for a new safe room — Diamond, Mo.'s first official community storm shelter.

"If we could do something about it, then let's do it," Superintendent Mike Mabe said of his school district's proposal to try to maximize safety in case of severe weather. "It's just the right thing to do."



The recent collapse of an Interstate 75 overpass in Cincinnati, killing a worker and injuring a truck driver, is yet another reminder of the plight of America’s infrastructure, which is estimated to require billions of dollars to bring up to 2015 standards.

The bridge that collapsed had been replaced and was being torn down as part of an extended project to increase capacity on a congested, accident-prone section of the interstate, according to the Associated Press.

President Obama, speaking today in Saint Paul, Minnesota, outlined several proposals, including launching a competition for $600 million in competitive transportation funding and investing in America’s infrastructure with a $302 billion, four-year surface transportation reauthorization proposal, according to a press release from the White House. Obama also plans to “put more Americans back to work repairing and modernizing our roads, bridges, railways, and transit systems, and will also work with Congress to act to ensure critical transportation programs continue to be funded and do not expire later this year.”



Federal leaders want to like the cloud. They really do.

Then again, they have to — they’re under a cloud-first mandate. And yet, they’re still not gung-ho when it comes to actually pursuing adoption, a recent survey shows.

Every year, MeriTalk surveys federal managers about cloud adoption. In the latest survey of 150 federal executives, nearly one in five say one-quarter of their IT services are fully or partially delivered via the cloud.

For the most part, they’re shifting email (50 percent), web hosting (45 percent) and servers/storage (43 percent). They’re not moving traditional business applications, custom business apps, disaster recovery, ERP or middleware.

And it seems they’re pretty happy with that so far. This year, 75 percent said they want to migrate more services to the cloud — except they’re worried about retaining control of their data.



Tuesday, 27 January 2015 00:00

Winter Storms and Power Outages

As the blizzard of 2015 starts to hit hard across the Northeast, with several feet of snow, intense cold and high winds expected, utility companies are warning of widespread and potentially lengthy power outages across the region.

In New Jersey, utility companies say it’s the high winds, with gusts of up to 65 mph, rather than the accumulation of snow, that are likely to bring down trees or tree limbs and cause outages.

Consolidated Edison Inc., which supplies electricity to more than 3 million customers in New York City and Westchester County, told the WSJ that the light and fluffy snow expected in this blizzard should limit the number of power outages, but elevated power lines could come down if hit by trees.



The answer to this question depends on how fast you want your data back and how much time and effort you are prepared to spend. If your data is both mission and time critical, then full, frequent backups possibly with mirrored systems for immediate restore or failover may be the only solution. Financial trading organisations, large volume e-commerce sites and hospital emergency wards are examples. Other users who do not want to or cannot go down this route will be faced with more basic options.



Advice from James Leavesley, CEO, CrowdControlHQ.

Social media is no longer the exclusive preserve of the ‘Facebook Generation’ eager to connect with each other or simply a channel for consumer advertisers. It is fast becoming a valuable multi-faceted communications tool with many industries actively using social media networking sites to promote their products and services and drive commercial success.

Mirroring the trend, the finance industry is also waking up to the power of engaging with customers through social media at a time when its clients are increasingly turning to online resources for information and advice. Last year, consultancy giant Capgemini forecast that social media was on its way to becoming a “bona fide channel for executing transactions,” and a previous study by Accenture found that half of US financial advisers had successfully used social media to convert enquiries into clients. So far, so good. So what’s the catch?



Information security has become a fixture in the daily headlines, ranging from the latest high-profile data breach; to exotic hacks of USB drives, ICS devices and IoT systems; and new zero-day exploits and attack techniques. While these stories are interesting and help us understand the vulnerabilities and risks that make up the threat landscape, they reflect a frequent bias in the industry towards focusing on the ‘cool’ exploit and detection side of cyber-defense, rather than the more operational response and mitigation side. One of the results of this focus, as reported in a recent SANS study, is that for over 90 percent of incidents, the time from incident discovery to remediation was one hour or longer.

This appears to be changing, however, as new reports shine a spotlight on incident response as both welcome and essential, and courts are now reinforcing that sentiment. This article by Proofpoint considers the other side of the equation and looks at incident response. A comprehensive view of threat management includes people, processes and tools, in a process outlined below.



By Sal DiFranco

Misrepresentation isn’t reserved for entry-level interviewees. Chief Information Officer (CIO) candidates can exaggerate their accomplishments with the best of them. Let’s say you and your fellow C-suite executives need to hire a CIO. You know what you want – that picture-perfect ideal CIO candidate. Someone who is current on technology while being business savvy. Someone who takes smart risks when it comes to new technology, but who has insight on when to maintain the systems already in place. Someone who can talk to any segment of the business in their own terms, rather than resorting to technical jargon.

Of course, when interviewing CIO candidates, they will all try to make you believe they are that ideal CIO. It is up to you to identify any bull that gets tossed around during the interview process, which is why I’ve come up with five specific points to watch out for.



(TNS) — Gov. Jerry Brown’s office is urging state emergency and law enforcement agencies to take advantage of a system that uses cellphone towers to pinpoint and send alerts.

Established in 2012 through a collaboration between the Federal Emergency Management Agency, the Federal Communications Commission and the wireless industry, the Wireless Emergency Alerts system is meant to complement existing alert systems.

“The Wireless Emergency Alerts are just one addition,” said Lilly Wyatt, an Office of Emergency Services spokesperson. “It’s an additional tool that local agencies can use for public messages.”

Of the 58 counties in California, only 24 have signed up to send alerts through the system.



Tuesday, 27 January 2015 00:00

The New Reality of Weather Risk

What do you do when you are responsible for the safety of town, county or state residents and forecasts call for drastic weather conditions? Risk professionals can come under criticism if they are overly cautious, yet under-reacting can mean lives are at stake.

Take the current situation here in New York, New Jersey and Connecticut. Predictions called for one to three feet of snow and blizzard conditions over a wide swath of the tri-state area, and states of emergency were declared. Governor Andrew Cuomo of New York yesterday called for a full travel ban in 13 counties, beginning at 11:00 p.m. Those breaking the ban were subject to fines of up to $300, he said.

“With forecasts showing a potentially historic blizzard for Long Island, New York City, and parts of the Hudson Valley, we are preparing for the worst and I urge all New Yorkers to do the same – take this storm seriously and put safety first,” Gov. Cuomo said.



Monday, 26 January 2015 00:00


“I always knew I was going to be somebody. But now I wish I had been more specific.” – Lily Tomlin

In April 2014 at a conference on “Redefining Roles: Embracing the Patient as Partner,” one of the speakers, a Ph.D. and President of a division of UnitedHealthcare Corporation, began by taking a step back in time to recount the historical evolution of risk management practiced by the leading doctors of the past.

During the early settlement of the United States, the principal medical treatment consisted of “bloodletting.” In the 1700s, during the Yellow Fever epidemic, Benjamin Rush, a physician signatory of the Declaration of Independence, bled 100 to 125 people per day. Other treatments included “purging,” “sweat boxes,” “mercury ointments” and “medicinal hanging.” The treatments sound worse than the illnesses.

Before anesthesia, medicine was a horror show, with surgery often resulting in death from shock. Successful amputations depended on the speed and strength of the surgeon, often at the expense of the fingers of surgical assistants.



Monday, 26 January 2015 00:00

How Chicago Solved Its Open Data Dilemma

In New York City, obtaining a public data set required an open records request, plus a researcher toting in a hard drive to collect it.

So grab a notepad, Big Apple, and let the Windy City show you how to do open data.

A recent GCN article describes how Chicago simplified the release and updating of open data by building an OpenData ETL Utility Kit.

Before the kit, the process was onerous. Open data sets required manual updates made mostly with custom-written Java code.

That data updating process is now automated with the OpenData ETL Utility Kit. Pentaho’s Data Integration ETL tool is embedded into the kit, along with pre-built and custom components that can process Big Data sets, GCN reports.
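The kit is built around Pentaho’s embedded ETL engine, but the extract-transform-load pattern behind it is easy to illustrate. The following is a minimal Python sketch of an automated open-data refresh job, not Chicago’s actual code; the API endpoint, field names and output file are hypothetical.

# Minimal extract-transform-load sketch for automated open-data updates.
# Endpoint, field names and output path are hypothetical examples.
import csv
import json
import urllib.request

SOURCE_URL = "https://example.gov/api/permits.json"  # hypothetical endpoint
OUTPUT_CSV = "open_data_permits.csv"

def extract(url):
    # Pull raw records from the source system's API.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def transform(records):
    # Keep only the published fields and normalize their values.
    for rec in records:
        yield {
            "permit_id": rec.get("id"),
            "issued_date": (rec.get("issued") or "")[:10],  # YYYY-MM-DD
            "status": (rec.get("status") or "unknown").lower(),
        }

def load(rows, path):
    # Write the cleaned rows to the public CSV file.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["permit_id", "issued_date", "status"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    load(transform(extract(SOURCE_URL)), OUTPUT_CSV)

Run on a schedule, a script along these lines replaces the manual, custom-Java updates the city started with.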



GENEVA — The number of people falling victim to the Ebola virus in West Africa has dropped to the lowest level in months, the World Health Organization said on Friday, but dwindling funds and a looming rainy season threaten to hamper efforts to control the disease.

More than 8,668 people have died in the Ebola epidemic in West Africa, which first surfaced in Guinea more than a year ago. But the three worst-affected countries — Guinea, Liberia and Sierra Leone — have now recorded falling numbers of new cases for four successive weeks, Dr. Bruce Aylward, the health organization’s assistant director general, told reporters in Geneva.

Liberia, which was struggling with more than 300 new cases a week in August and September, recorded only eight new cases in the week to Jan. 18, the organization reported. In Sierra Leone, where the infection rate is now highest, there were 118 new cases reported in that week, compared with 184 in the previous week and 248 in the week before that.



ISO 22318 is an ISO guidance document addressing Supply Chain Continuity Management (SCCM). It has been created to complement ISO 22301, the specification for Business Continuity Management Systems, and its associated guidance, ISO 22313.

Before Standards are finalised there is a process of review and comment that helps ensure the quality and consistency of their content.

ISO 22318, despite being called a technical specification, is a guidance document that aims to help those managing BCMS programmes better address the challenge of supply chain continuity.



Monday, 26 January 2015 00:00

Make resilience your 2015 resolution

As one of the goals for the New Year, companies should take stock of how resilient they are, and take steps to improve their ability to prevent disasters, and to recover should one occur.

“As part of their business continuity management, companies assess the risks they face, prioritise them and then put mitigation plans in place. That’s prudent and best practice, and something every board should insist is being done on an ongoing basis,” says Michael Davies, CEO of ContinuitySA. “In addition, we all understand that the risk climate is becoming increasingly complex and the chances of a totally unexpected ‘Black Swan’ event more likely, which is why we think companies also need to see business continuity as a way to build a business that’s resilient by nature, intrinsically prepared to bounce back from anything. Companies should also become more proactive in avoiding disruptions associated with disasters rather than reacting to them when they occur.”
In fact, Davies argues, this type of approach can help executives and their boards enhance their oversight of the company, and discharge their obligation to ensure the company’s long-term sustainability.

The formal business continuity plan and management processes should provide the starting point for setting about building a more resilient organization, says Davies.

“Once you have done your best to pinpoint all the risks and put mitigation plans in place, then it’s time to put measures in place to help ensure you are prepared for the unexpected,” he notes. “Based on ContinuitySA’s own assessment of the risk environment and our experience with clients, we think the following seven initiatives will enhance organizational resilience.”



HOB has published the results of a new survey which set out to quantify employee knowledge and understanding of their organization’s emergency procedures in the event of a natural disaster or an epidemic.

‘An Inside Look at Disaster Recovery Planning’ surveyed 916 employed people in five cities across the United States: Houston, Los Angeles, Miami, New York and San Francisco.

When asked if their place of employment has emergency procedures in place to ensure the security of company information and data, 40 percent of respondents stated their company either does not have systems in place to protect data in an emergency, or they are not aware of the existence of these procedures.



Vision Solutions Inc. has published its Seventh Annual State of Resilience Report. Entitled ‘The Future of IT: Migrations, Protection & Recovery Insights,’ the report looks at trends, opportunities and challenges.

Highlights of the report include:

  • Nearly 75 percent of respondents have not calculated the hourly cost of downtime for their business;
  • For those who experienced a storage failure, nearly 50 percent lost data in the process due to insufficient disaster recovery methods or practices;
  • Nearly two thirds of those surveyed said they delayed an important data migration for fear of downtime or lack of resources;
  • Hosted private cloud is still the most prevalent cloud environment at 57 percent usage; hybrid cloud adoption lags at 32 percent with room to grow;
  • Despite the growing popularity of cloud, nearly two thirds state they do not have high availability or disaster recovery protection in place for their data once it is in the cloud.

The report combines findings from five industry-wide surveys of more than 3,000 IT professionals.

Obtain the report after registration
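The first finding above stands out because an hourly downtime figure takes only back-of-envelope arithmetic once a few inputs are agreed. Here is a hedged Python sketch; the formula is a common simplification and every number in the example is invented.

# Back-of-envelope hourly downtime cost. Illustrative figures only.
def hourly_downtime_cost(revenue_per_hour, employees_affected,
                         loaded_hourly_wage, productivity_loss=0.75,
                         recovery_cost_per_hour=0.0):
    # Lost revenue plus idled labor plus any per-hour recovery spend.
    lost_labor = employees_affected * loaded_hourly_wage * productivity_loss
    return revenue_per_hour + lost_labor + recovery_cost_per_hour

# Example: $20,000/hour in revenue, 200 staff idled at a $60/hour
# loaded rate -> $20,000 + 200 * 60 * 0.75 = $29,000 per hour.
print(f"${hourly_downtime_cost(20_000, 200, 60):,.0f} per hour")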

Businesses face new challenges from a rise of disruptive scenarios in an increasingly interconnected corporate environment, according to the fourth Allianz Risk Barometer 2015. In addition, traditional industrial risks such as business interruption and supply chain risk (46 percent of responses), natural catastrophes (30 percent), and fire and explosion (27 percent) continue to concern risk experts, heading this year’s rankings. Cyber (17 percent) and political risks (11 percent) are the most significant movers. The survey was conducted among more than 500 risk managers and corporate insurance experts from both Allianz and global businesses in 47 countries.

“The growing interdependency of many industries and processes means businesses are now exposed to an increasing number of disruptive scenarios. Negative effects can quickly multiply. One risk can lead to several others. Natural catastrophes or cyber attacks can cause business interruption not only for one company, but to whole sectors or critical infrastructure,” says Chris Fischer Hirs, CEO of Allianz Global Corporate & Specialty SE (AGCS), the dedicated insurer for corporate and special risks of Allianz SE. “Risk management must reflect this new reality. Identifying the impact of any interconnectivity early can mitigate or help prevent losses occurring. It is also essential to foster cross-functional collaboration within companies to tackle modern risks.”



Monday, 26 January 2015 00:00

How To Move Shadow IT Into The Light

Whether you realize it or not, many companies have workstations running software that was never approved by the information technology (IT) department; instead, it has been adopted and installed by individuals or even, in some cases, entire departments. We call this use of unapproved applications or third-party cloud services ‘Shadow IT’ because of its clandestine or covert status.

More often than not, these activities are not malicious in nature: they are merely a means of maintaining productivity when IT response times to support requests are sadly lacking. One key – and often overlooked – aspect of shadow IT is found in development environments where some users/developers are using public clouds to do development work, or running their own open source software in a virtual machine (VM) on someone else’s cloud.



There’s no doubt that managing databases and associated middleware has become more complicated over the years. Given the fact that the number of people with the skills needed to manage that class of IT infrastructure has not risen appreciably, there’s naturally going to be a requirement for increased reliance on automation.

With the unveiling of Oracle Enterprise Manager Cloud Control 12c Release 4, Dan Koloski, senior director of product management and business development at Oracle, says that the company has added a raft of new data governance capabilities designed to make it easier to manage large “data estates.”

The new capabilities include the ability to detect differences across databases to eliminate configuration drift, the capacity to patch fleets of databases at the same time, and tools that optimize the placement of databases based on current workloads and other IT infrastructure constraints and requirements.



Monday, 26 January 2015 00:00

NIMS / ICS Forms – Automation

If you use ICS (Incident Command System) forms – and you’re like most users – you hate them.  While simple in design, the forms can be cumbersome to manage.  Your organization (federal, state, municipal government, gas & oil exploration and transport, public utility, etc.) may be mandated to use ICS to respond to accidents, disasters and even disruptions of normal business operations.  And like many others, you may struggle to manage use of the common ICS forms.


The forms themselves are easy to complete. The stumbling block is collaboration. To share an ‘in progress’ ICS form you need to print it or share it visually (on a projection or computer screen). Both can be difficult when your operational personnel are not all in the same room. You may resort to updating ‘in progress’ ICS forms manually (from multiple copies of a printed form) and then have someone compile them in MS Word later. While Word forms are helpful, they lack true automation. That makes collaborative management of ICS forms cumbersome and inefficient, and can lead to errors and omissions of vital information.

If you created your own ICS form ‘wish list’ it would probably include improvements in both efficiency and collaboration:



Friday, 23 January 2015 00:00

Cyber Value-At-Risk

Measures and methods widely used in the financial services industry to value and quantify risk could be used by organizations to better quantify cyber risks, according to a new framework and report unveiled at the World Economic Forum annual meeting.

The framework, called “cyber value-at-risk,” requires companies to understand key cyber risks and the dependencies between them. It will also help them establish how much of their value they could protect if they were victims of a data breach, and for how long they can ensure their cyber protection.

The purpose of the cyber value-at-risk approach is to help organizations make better decisions about investments in cyber security, develop comprehensive risk management strategies and help stimulate the development of global risk transfer markets.
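Value-at-risk has a standard probabilistic meaning: the loss level that annual losses should exceed only rarely at a chosen confidence level. As a rough illustration of how that idea carries over to cyber, here is a toy Monte Carlo sketch in Python; the breach-frequency and loss-severity parameters are invented for illustration and are not part of the WEF framework.

# Toy Monte Carlo estimate of cyber value-at-risk.
# Breach probability and loss-severity parameters are assumptions only.
import math
import random

def simulate_annual_loss(breach_prob=0.3, mean_loss=2_000_000, sigma=1.0):
    # One simulated year: a breach may occur, with lognormal severity.
    if random.random() > breach_prob:
        return 0.0
    mu = math.log(mean_loss) - sigma ** 2 / 2  # so the mean is mean_loss
    return random.lognormvariate(mu, sigma)

def cyber_value_at_risk(confidence=0.95, trials=100_000):
    # The loss level exceeded in only (1 - confidence) of simulated years.
    losses = sorted(simulate_annual_loss() for _ in range(trials))
    return losses[int(confidence * trials)]

print(f"95% cyber value-at-risk: ${cyber_value_at_risk():,.0f}")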



(TNS) — Despite high-profile computer attacks on Target, Sony and other major corporations, Idaho's director of homeland security said cyberthreats remain the "most important and least understood risk" to government and the private sector.

In a presentation Tuesday to the Senate State Affairs Committee, Brig. Gen. Brad Richy said the potential threats range from defaced or misleading websites to data theft and disruption of public services.

"The vulnerabilities are extreme," Richy said. "A breakdown in IT [information technology] services could take it from that sector into our industrial sector, to our water supply or electrical supply."

Cyberattacks are "a trend that's been going in the wrong direction for quite some time," said J.R. Tietsort, who heads up Micron Technology's global security efforts.



The September arrests/detentions in Australia of suspected Islamic State of Iraq and Syria (ISIS) supporters, who had allegedly been planning to kidnap random people, decapitate them, drape their bodies in the group’s flag and post the entire horrific event live to the Internet, brought to the forefront one of the most serious yet least discussed scenarios in counterterrorism. We term it “Main Street terrorism,” by which we mean terror attacks not on a grand scale, but multiple small attacks carried out by individuals or very small groups in environments where we have traditionally felt safe.

The December hostage situation in Australia is another example. It was an attack on a soft target, a target that would not fit the “traditional” profile of being highly visible or connected to government or military operations, carried out by an individual espousing extremist beliefs but acting essentially alone.

Who remembers the pipe bombs placed in mailboxes throughout the American Midwest during spring 2002? A total of 18 bombs were placed with six of those exploding (injuring four U.S. Postal Service mail carriers and two residents) and 12 others discovered without exploding. Until the suspect was apprehended, how many of us changed our routine for something as mundane as getting the mail because, suddenly, that everyday activity had become potentially deadly?



Cosentry has expanded its disaster recovery-as-a-service (DRaaS) offering to help customers improve their data recovery times.

The data center services provider said its expanded DR service is designed to meet a full range of business recovery point objectives (RPO) and recovery time objectives (RTO), with targets ranging from less than 15 minutes to several days based on application importance and budget.

"We anticipate that our customers will be able to implement a disaster recovery solution that meets their own specific requirements as it pertains to availability and the potential for data loss at a price that meets their budget," Craig Hurley, Cosentry's vice president of product management, told MSPmentor. "Our service expansion also looks to address the reality that many of our customers are looking to protect both virtual and physical servers."



Friday, 23 January 2015 00:00

Sharing the plan with employees


A new study titled ‘An inside look at disaster recovery planning’ has revealed just how little employees know about their organization’s planned response to a crisis. In a survey by HOB, 40% of respondents stated that their company either does not have systems in place to protect data in an emergency, or they are not aware of the existence of these procedures.

The report also revealed that, even if a plan does exist, 52% of employees are unaware of the details. This study shows just how important it is for the details of any plan that involves employees to be shared with them. The worst time to find out what to do in a crisis is once the crisis has occurred.

Over the last decade we have seen a tendency towards more flexible working environments and a greater trend towards working remotely. However, 45% of respondents noted that they either do not have the ability to access the company information that would enable them to do so, or they just don’t know if they have access.

If working remotely is one of your possible responses to a crisis, does your organization have the capability to do this? If your office is out of action and Plan B is for employees to work from home, you might be in for a surprise if 45% of your employees suddenly find out they can’t.

“For most businesses, access to and the sharing of information is critical to ongoing successful operations,” said Klaus Brandstätter, CEO of HOB. “The survey revealed that most companies are unprepared to withstand the negative consequences of disrupted operations, as many employees won’t have access to the resources and information needed to remain functional in emergency situations. In today’s world with so many unforeseen pending disasters, it is clearly paramount that companies implement comprehensive disaster recovery plans as part of their overall business continuity strategy.”


The hybrid cloud is now the new normal in cloud computing. The whole point of a hybrid cloud is to design and customize cloud capabilities that address your customer’s unique needs. But today, MSPs typically offer a one-size-fits-all service level agreement. Customers will demand a service provider that is willing and able to customize the service level agreement to meet the unique needs of their organization so that they can take advantage of the flexibility, scalability, cost reductions and resiliency that cloud computing offers. 2015 will be the year that customers demand customized SLAs.

Service level agreements (SLAs) serve as a roadmap and a warranty for cloud services offerings. All cloud providers offer some type of standard, one-size-fits-all SLA that may or may not include the following, depending on your requirements:



Thursday, 22 January 2015 00:00

Ohio Helps Pay for Tornado-Proof Safe Rooms

(TNS) — Mary Kirstein and her partner hunkered down under a dining-room table, with their cat corralled in a laundry basket between them, as the tornado roared toward their home.

And this didn’t happen just once during Kirstein’s nine years in Houston, where tornadoes seem as common as wide-brimmed Stetsons. It happened time and again. Thankfully, she said, the big one never hit, but a person doesn’t easily forget that fear.

“Tornadoes freak me out,” said Kirstein, a purchaser at Battelle who now calls Hilliard home.

In 2012, while researching tornado safety as part of her role on a committee at work, she discovered that the state of Ohio had a new program to help pay for safe rooms that can withstand even the 250 mph winds that accompany the most-destructive EF5 storms. She filled out an application for the Ohio Safe Room Rebate Program, run by the Ohio Emergency Management Agency.



Big Data is quickly moving from concept to reality in many enterprises, and with that comes the realization that organizations need to build and provision the infrastructure to deal with extremely large volumes, and fast.

So it is no wonder that the cloud is emerging as the go-to solution for Big Data, both as a means to support the data itself and the advanced database and analytics platforms that will hopefully make sense of it all.

A recent survey from Unisphere Research found that more than half of all enterprises are already using cloud-based services, while the number of Big Data projects is set to triple over the next year or so. This leads to the basic conundrum that the business world faces with Big Data: the need to ramp up infrastructure and services quickly and at minimal cost in order to maintain a competitive edge in the rapidly expanding data economy. The convergence between Big Data and the cloud, therefore, is a classic example of technology enabling a new way to conduct business, which in turn fuels demand for the technology and the means to optimize it.



Thursday, 22 January 2015 00:00

Putting the Cloud inside Your Company Firewall

Some enterprises are attracted by the potential advantages of the cloud for disaster recovery and business continuity. However, they fear the possibility of information being spied on, stolen or hacked after it leaves their own physical premises. A little lateral thinking suggests another possible solution. Instead of moving outside a company firewall to use cloud possibilities, how about implementing cloud functionality inside the firewall? A number of vendors now offer private cloud solutions and they have some customers whose identity may surprise you.



Component distributor partners with DigitasLBi Commerce and hybris to scale its commerce capabilities in global markets


LONDON – DigitasLBi Commerce, the global connected commerce specialist and hybris software, an SAP company and the world’s fastest growing commerce platform provider, have been selected by RS Components (RS), a trading brand of Electrocomponents plc, the global distributor for engineers, to implement a new connected commerce platform. This will enable it to enhance and rapidly scale its B2B eCommerce offerings to an expanding customer base and deliver a highly personalised experience to individual customers in markets around the globe.


Under the agreement, DigitasLBi Commerce will implement the hybris Commerce Suite, a powerful and scalable single-stack commerce platform capable of delivering highly sophisticated B2B features to a global user base. The solution enables RS to further enhance its online B2B functionality while seamless integration with the company’s enterprise architecture, which includes a SAP business intelligence system, will support streamlined business operations and make the faster initiation of go-to-market strategies and new business models possible.


Guy Magrath, Global Head of eCommerce at RS, commented: “eCommerce is a major driver of growth for our business and the entry point for our customers to a long term multi-channel relationship with us. By partnering with DigitasLBi Commerce and hybris we’ll gain the ability to respond faster to new market needs and further exploit the potential of our eCommerce offer to a diverse B2B customer base.”


With operations across 32 countries and a global network of 16 distribution centres worldwide, RS is the world’s largest distributor of electronics and maintenance products, shipping over 44,000 parcels daily. With around 500,000 products available for same day dispatch and serving more than one million customers worldwide, the company is dedicated to helping customers find the right product at the right price.


As a next phase, DigitasLBi Commerce will undertake the global deployment and rollout of a new connected multi-language, multi-currency, multi-site commerce platform that can be adapted fast to changing market conditions. DigitasLBi Commerce’s robust agile implementation approach will enable RS to incrementally advance its eCommerce capabilities.


With 58 percent of global revenues generated online, RS’s ambition is to build a £1 billion plus connected commerce business, and DigitasLBi Commerce will support the brand in extending its ‘eCommerce with a human touch’ vision to further improve the online customer experience with innovative B2B functionality that makes it even easier for customers to transact.


“With connected commerce at the heart of the company’s operation, RS has to make the online customer experience the best and most relevant in each and every market they do business in,” said Jim Herbert, Managing Partner at DigitasLBi Commerce. “As a leading exponent of global hybris implementations we’re delighted to have been chosen to support RS in extending how it connects to its global audience to reach customers locally, at the point of need.”


The new multi-device optimised commerce platform will power 29 highly localised websites, and finely tune procedures that address specific market requirements. Under the agreement, DigitasLBi Commerce will enable the brand’s global connected commerce team of 100 staff, who oversee online trading, merchandising and behavioural repurchasing (email/offline event triggers across all channels and digital devices), to become fully self-supporting in their utilisation of the hybris Commerce Suite.


“In today’s market where B2B customers expect and are demanding a B2C-like experience, companies - especially industry giants such as RS - require a new breed of solutions that consider the customer interaction across touch points and channels, including that pivotal moment in the journey where a purchase is made,” explained Rob Shaw, Vice President New Business EMEA and MEE, hybris software. “hybris makes it possible to integrate web, customer service, print, mobile and social commerce that will give RS’s customers a more seamless multi-channel shopping experience.”

Now that the dust has settled on the infamous hack of Sony Pictures Entertainment, it would be prudent to take a look back at how the attack was carried out, consider what lessons IT security professionals can learn from it, and formulate a plan to counter a similar attack.

To that end, I recently conducted an email interview with Gary Miliefsky, an information security specialist and founder and president of SnoopWall, a cybersecurity firm in Nashua, N.H. To kick it off, I asked him what the likelihood is that a Sony insider assisted with the attack, and whether it could have even been carried out without the help of an insider. Miliefsky dismissed the insider theory:

While many speculate that the attack on Sony Pictures Entertainment was done by a malicious insider, I believe that the DPRK carried out the attack themselves, originally initiated from IP addresses they lease from the Chinese government. I believe they initially eavesdropped on emails to learn a pattern of behavior for socially engineering a Remote Access Trojan to be installed via email of an unsuspecting employee, inside the network.



In a Jan. 13 presentation to the federal Health IT Policy Committee, Annie Fine, M.D., a medical epidemiologist in the New York City Department of Health and Mental Hygiene, described both the sophisticated software used to track disease outbreaks such as Ebola, as well as how better integration with clinicians’ electronic health records (EHRs) would improve her department’s capabilities.

“In New York City, every day we are on the lookout for unusual clusters of illness. And we receive more than 1,000 reports a day just in my program,” Fine said. Epidemiologists run a weekly analysis to detect clusters in space and time, and use analytics and geocoding to compare current four-week periods with baselines of earlier four-week periods.

“We get a large number of suspect cases reported, and they may be way out of proportion to the number of actual cases,” Fine said. Epidemiological investigations require hundreds of phone calls to providers and labs. “That could be made much less burdensome and efficient if we could have improved integration with EHR data.”
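The space-and-time cluster detection Fine describes, comparing a current four-week window of case counts against earlier baseline windows, can be sketched very simply. The Python below is a toy illustration with invented counts and a crude z-score threshold, not the department’s actual analytics.

# Toy four-week cluster check: flag a disease when the latest 4-week
# case count sits well above the historical 4-week baseline.
from statistics import mean, stdev

def flag_cluster(weekly_counts, z_threshold=2.0):
    # weekly_counts: oldest-to-newest counts; length a multiple of 4.
    windows = [sum(weekly_counts[i:i + 4])
               for i in range(0, len(weekly_counts), 4)]
    baseline, current = windows[:-1], windows[-1]
    return current > mean(baseline) + z_threshold * stdev(baseline)

# Twenty weeks of counts; the final four weeks spike.
history = [3, 2, 4, 3, 2, 3, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3, 9, 11, 8, 10]
print(flag_cluster(history))  # True: the latest window far exceeds baseline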



Wednesday, 21 January 2015 00:00

Can You Make Disaster Information Go Viral?

What role could social media play in effectively communicating information about breaking news such as natural disasters and disease outbreaks? It’s not a new question, but one that lacks an easy answer. Researchers and emergency response personnel in San Diego plan to spend the next four years exploring the topic, and what they find may eventually serve as a model for other communities looking to better leverage social media for disaster response.

San Diego County and San Diego State University (SDSU) recently formed a partnership to research and develop a new social media-based platform for disseminating emergency warnings to citizens. The project aims to allow San Diego County’s Office of Emergency Services (OES) to spread disaster messages and distress calls quickly and to targeted geographic locations, even when traditional channels such as phone systems and radio stations are overwhelmed.



Is your business prepared for IT outages? Disaster preparedness is vital for businesses of all sizes, especially for those that want to avoid prolonged service interruptions, and companies that prioritize disaster preparedness can find ways to protect their critical data during IT outages as well.

Managed service providers (MSPs) can offer data backup and disaster recovery (BDR) solutions to help companies safeguard their sensitive data during IT outages. These service providers also can teach businesses about the different types of IT outages, and ultimately, help them prevent data loss.



Whether you are planning a traditional data center build-out or all-new cloud infrastructure, the appeal of white box hardware is difficult to resist.

Provided you need enough of a particular device to benefit from economies of scale, and you have a plan to layer all the functionality you need via software, white box infrastructure can do wonders to reduce the capital costs of any project. Plus, you always have the option to rework the software should data requirements change.

But it isn’t all wine and roses in the white box universe. As IT consultant Keith Townsend noted to Tech Republic recently, white box support costs often emerge as a fly in the ointment. Large organizations like Facebook and Google have the in-house knowledge to deploy, configure and optimize legions of white boxes, but the typical data center does not. It takes a specialized set of skills to implement software-defined server, storage and networking environments, and white box providers as a rule do not offer much support other than to replace entire units, even if only a single component has gone bad. There is also the added cost of implementing highly granular management and monitoring tools to provide the level of visibility needed to gauge a device’s operational status to begin with.



Wednesday, 21 January 2015 00:00

High Performance Data Storage Tips

Talk to many data storage experts about high-performance storage and a good portion will bring up Lustre, which was the subject of a recent Lustre Buying Guide. Some of the tips here, therefore, concern Lustre, but not all.

Use Parallel File Systems

Parallel file systems enable more data transfer in a shorter time period than their alternatives.

Lustre is an open source parallel file system used heavily in big data workflows in High Performance Computing (HPC). Over half of the largest systems in the world use Lustre, said Laura Shepard, Director of HPC & Life Sciences Marketing at DataDirect Networks (DDN). This includes U.S. government labs, such as Oak Ridge National Laboratory’s Titan, as well as British Petroleum’s system in Houston.
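The throughput advantage comes from clients moving stripes of a file to and from many storage servers at once instead of funneling through a single server. The miniature Python sketch below mimics that idea with concurrent byte-range reads of one file; it is a conceptual illustration, not Lustre-specific code, and the file path is hypothetical.

# Conceptual sketch of parallel chunked I/O: workers read distinct byte
# ranges of one file concurrently, much as parallel file system clients
# read stripes from separate storage servers.
import os
from concurrent.futures import ThreadPoolExecutor

def read_chunk(path, offset, size):
    # Each worker opens its own handle and reads only its byte range.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

def parallel_read(path, workers=4):
    size = os.path.getsize(path)
    chunk = (size + workers - 1) // workers  # ceil-divide into ranges
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda off: read_chunk(path, off, chunk),
                         range(0, size, chunk))
    return b"".join(parts)

data = parallel_read("/tmp/sample.dat")  # hypothetical input file
print(len(data), "bytes read in parallel chunks")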



To small business owners, the buzzwords from the Big Data world (e.g., petabytes, zettabytes, feeds, analytics) seem very foreign indeed. According to research from the SMB Group, only 18 percent of small businesses currently make use of Big Data analytics and business intelligence solutions. On the other hand, midsize businesses have shown greater adoption, with 57 percent of those surveyed reporting that they use BI and analytics to gain actionable information.

However, many Big Data vendors have begun creating a better story for smaller businesses, focusing more on how they can use their tools to achieve deeper insight into business data to help them make more informed decisions. And the ones that listen to this retooled message will receive a decent payoff for their efforts.



You’ve taken the time to implement a disaster recovery (DR) plan for your company – you’re prepared for anything. You’ve covered all the milestones, including:

  • Performing a Business Impact Analysis (BIA) to determine the recovery times you’ll need for your applications.
  • Tiering your applications and documenting their interdependencies so you know which order your servers should be restored in.
  • Putting your recovery infrastructure in a geographically-diverse data center.
  • Creating a comprehensive recovery playbook and testing each and every step.

Bring on the storms … the floods … the power outages … you’re ready. But are you really?
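The tiering-and-interdependencies milestone is where playbooks most often go stale, because restore order has to follow the dependency graph. A hedged Python sketch of deriving that order is below; the server names and dependencies are hypothetical examples.

# Derive a server restore order from documented interdependencies.
# Server names and dependencies are hypothetical examples.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each server maps to the servers it depends on (restore those first).
dependencies = {
    "erp-app": {"erp-db", "auth"},
    "erp-db": {"storage"},
    "auth": {"storage", "dns"},
    "storage": set(),
    "dns": set(),
}

restore_order = list(TopologicalSorter(dependencies).static_order())
print(restore_order)
# One valid order: ['storage', 'dns', 'erp-db', 'auth', 'erp-app']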



Wednesday, 21 January 2015 00:00

10 steps to cyber security


The United Kingdom’s GCHQ, in association with the Centre for the Protection of National Infrastructure, Cabinet Office and Department for Business Innovation and Skills, has re-issued their ’10 Steps to Cyber Security’ publication, offering updated guidance on the practical steps that organizations can take to improve the security of their networks and the information carried on them.

Originally launched in 2012, the guidance has made a tangible difference in helping organizations large and small understand the key activities they should evaluate for cyber security risk management purposes. The 2014 Cyber Governance Health Check of FTSE 350 Boards showed that 58% of companies have assessed themselves against the 10 Steps guidance since it was first launched, compared to 40% in 2013.

‘10 Steps to Cyber Security’ has been updated to ensure its continuing relevance in the climate of an ever growing cyber threat. It now highlights the new cyber security schemes and services that have been set up more recently under the National Cyber Security Programme.

The Business Continuity Institute’s Horizon Scan report has consistently shown that cyber attacks and data breaches are two of the biggest concerns for business continuity professionals with the latest report highlighting that 73% of respondents to a survey expressed either concern or extreme concern at the prospect of one of these threats materialising.

Robert Hannigan, Director of GCHQ, said: “GCHQ continues to see real threats to the UK on a daily basis, and the scale and rate of these attacks shows little sign of abating. However despite the increase in sophistication, it remains as true today as it did two years ago that there is much you can do yourself to protect your organisation by adopting the basic Cyber Security procedures in this guidance.”


With more enterprise IT organizations relying on software-as-a-service (SaaS) applications than ever, securing the data that flows in and out of those applications has become a major challenge and concern.

To give IT organizations more control over that data, Protegrity today unveiled the Protegrity Cloud Gateway, a virtual appliance that, once deployed on a server, enables organizations to apply policies to the flow of data moving in and out of multiple SaaS applications.

Protegrity CEO Suni Munshani says it applies a mix of encryption and vaultless tokenization to make sure data residing in a SaaS application can only be viewed by users that have been given explicit rights to see that data. Those rights are assigned using a “configuration-over-programming” (CoP) methodology that allows administrators to configure the gateway without having programming skills.

Support for SaaS applications is provided by accessing the public application programming interfaces (APIs) those applications expose, with each additional SaaS application taking a few days to a few weeks to add, depending on the complexity of the project.
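Protegrity’s algorithms are proprietary, but the general idea of vaultless tokenization can be illustrated: derive each token deterministically from the sensitive value and a secret key, so there is no token vault to store or synchronize. The Python below is a simplified, one-way sketch of that derivation property; real products use reversible, format-preserving transforms, and this is not Protegrity’s method.

# Simplified vaultless-token derivation: the token is computed from a
# secret key rather than stored in a lookup vault. One-way sketch only;
# production tokenization is reversible and format-preserving.
import hashlib
import hmac

SECRET_KEY = b"demo-key-rotate-me"  # hypothetical key material

def tokenize(value: str) -> str:
    # Deterministic: the same input and key always yield the same token.
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return "tok_" + digest[:16]

print(tokenize("4111-1111-1111-1111"))  # stable token, no vault required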



A new survey of more than 3,000 IT decision-makers worldwide revealed the majority of businesses are "behind the curve" when it comes to their data protection strategies. The survey showed that most businesses are "not very confident" that they can fully recover their critical data after an IT service disruption, yet they also considered data protection "to be totally critical to their success."



Tuesday, 20 January 2015 00:00

The Modular Approach to a Scalable Cloud

Following up on my previous post regarding hyperscale infrastructure, I feel I should point out that once the decision to go hyperscale has been made, it will most likely take place in a greenfield hardware environment.

Unless you are already working with a state-of-the-art data facility, any attempt to convert complex, multiformat legacy environments will almost certainly lead to a morass of integration issues. The key benefit to hyperscale is that it is both large and flexible, allowing data executives to craft multiple disparate data architectures completely in software. This is why current hyperscale plants at Google and Facebook rely on bulk commodity hardware.

But as I mentioned last fall, the average enterprise does not have the clout to purchase tens of thousands of stripped down servers and switches at a time, and besides, all those components still need to be deployed, provisioned and integrated into the cluster, which takes time, effort and of course, money.



(TNS) — Until now, North Texas has been one of the least likely places in the country to have an earthquake.

But after the Dallas area suffered a series of more than 120 quakes since 2008, the U.S. Geological Survey is re-evaluating the metroplex’s “seismic hazard” — or the risk of experiencing earthquakes.

This year, for the first time, the USGS will include quakes believed to have been caused by human activity in its National Seismic Hazard Map, which engineers use to write and revise building codes, and which insurers use to set rates.

The map predicts where future earthquakes will occur, how often they will occur and how strongly they will shake the ground.



(TNS) — "A rising tide lifts all boats," John F. Kennedy said, in defense of the government taking on big public works projects for the greater good.

About 10 of Iowa's river towns will share a $600 million pot of state money, based on the belief that sales tax revenue will rise and commercial and residential development will flourish along riverfronts if they are protected from flooding by sophisticated green and hard infrastructure.

Flooding in Iowa is occurring more often, making the "100-year" or "500-year" flood nomenclature meaningless. The city of Burlington had 500-year floods in 1993 and 2008, a 15-year interval.

Cedar Rapids, which sustained $6 billion of the state's $10 billion flood damage in 2008, led the way in convincing the Legislature to establish a flood mitigation fund.



Tuesday, 20 January 2015 00:00

Defining the Five Lines of Defense

As the Board of Directors focuses its attention on risk oversight, there are many questions to consider. One topic the Board should consider is how the organization safeguards itself against breakdowns in risk management (e.g., when a unit leader runs his or her unit as an opaque fiefdom with little regard for the enterprise’s risk management policies, a chief executive ignores the warning signs posted by the risk management function or management does not involve the Board with strategic issues and important policy matters in a timely manner). As illustrated during the financial crisis, the result of these breakdowns can be the rapid loss of enterprise value that took decades to build.

An effectively designed and implemented lines-of-defense framework can provide strong safeguards against such breakdowns. From the vantage point of shareholders and other external constituencies (an external stakeholders’ view), we see five lines of defense supporting the execution of the organization’s risk management capabilities. They are outlined below.



Cyberattacks are clearly on the minds of President Barack Obama, Islamic State jihadists, Sony Pictures execs and the CBS producers who are launching a new show this spring called CSI: Cyber. On Jan. 13, Obama announced plans to reboot and strengthen U.S. cybersecurity laws in the wake of the Sony Pictures hack and the one on the Pentagon's Central Command Twitter account from sympathizers of the Islamic State. Whether a real attack or depicted in television and films like Blackhat, this flood of cyberattacks means that hackers are relentless and more sophisticated than ever before.

I’m not a fear monger by trade, but I want to sound the alarm about another looming cyber-risk that warrants the attention of our emergency management community and government: electronic health records. The American Recovery and Reinvestment Act of 2009 authorized the Centers for Medicare and Medicaid Services to award billions in incentive payments to health professionals (hospitals, long-term care agencies, primary care, etc.) that demonstrate the meaningful use of a certified electronic health record (EHR) system.

The intent of EHR systems is to improve patient care by providing continuity of care from provider to provider through health information exchanges (HIEs) that allow “health-care professionals and patients to appropriately access and securely share a patient’s vital medical information electronically,” says HealthIT.gov. In addition, financial penalties are scheduled to take effect in 2015 for Medicare and Medicaid providers who do not transition to electronic health records.



As the enterprise tries to make the data center more efficient in the face of rising operating costs, one problem keeps recurring: disparate infrastructure makes it very difficult to determine what systems and solutions are in place and how they interact with each other.

The data center, after all, is a collection of assets, so it only makes sense to have a good idea of what those assets are and how they operate in order to either improve their efficiency or swap them out for new, better assets.

The idea of asset management (AM) in the data center is not new – in fact, it is a bustling business. MarketsandMarkets puts the total value of the AM industry at $565.4 million, with annual growth rates averaging 34 percent between now and 2019 to top out at more than $2 billion. The report segments the market by region, components, services, support and other factors, concluding that efficiency, management, planning and expansion of data footprints are key drivers, while limiting factors include tight budgets, poor awareness of available solutions, and a lack of perceived benefits. And as with most technology solutions these days, established markets in Europe and North America provide the bulk of activity, while emerging markets represent the fastest growth.



For most organizations, employees, or human resources, account for the largest percentage of total costs. Dr. Mark Huselid, Distinguished Professor of Workforce Analytics at Northeastern University’s D’Amore-McKim School of Business and Director of the Center for Workforce Analytics, says the workforce often represents fully 60 to 70 percent of all expenses. Quite clearly, refining workforce management and attempting to “connect human capital and performance with management strategy and business goals” is a keen point of interest for both HR and upper management.

The fact that a Professor of Workforce Analytics position exists is intriguing, and the sort of academic research that the Center for Workforce Analytics conducts may well result in some rather unexpected outcomes for some industries. Consider this idea, for example: “Most organizations tend to invest in talent hierarchically, where senior-level talent gets the most pay, best development opportunities and other professional perks. However, organizations should be managing vertically in who and what really matters – and in measuring and managing the outcomes associated with these processes.”

In the tech world, the idea of investing a higher percentage of pay and perks in less senior and less experienced employees is not foreign. Raising pay rates and bonuses for, say, highly in-demand developers and designers can often be easily justified in shortened time-to-market or other deliverables. In other areas, though, HR and the business would have a hard time with the concept without some solid predictive numbers.



Organizations that reap high return rates on Big Data projects do so by changing operational systems for everybody, rather than “enlightening a few with pretty historical graphs,” according to IT research and consulting firm Wikibon.

How do you do that? You stop using Big Data to drive “using the rear-view mirror.” Instead, you couple Big Data insights with in-memory technologies so you’re “driving with real-time input through the front windshield,” writes David Floyer, a former IDC analyst and co-founder and CTO of Wikibon.org.
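To make that windshield metaphor concrete, here is a minimal sketch (the data, names and three-sigma rule are illustrative, not from Floyer's piece): a batch job summarizes history once, and the summary is held in memory so each arriving event can be scored instantly.

    from statistics import mean, stdev

    # Batch step ("rear-view mirror"): summarize historical data once.
    historical = [102.0, 98.5, 110.2, 95.0, 101.3, 99.8, 104.1]
    baseline, spread = mean(historical), stdev(historical)

    # The summary lives in memory, so scoring a new event needs no
    # trip back to the data store ("the front windshield").
    def score_event(amount):
        """Flag events that deviate sharply from the historical baseline."""
        if abs(amount - baseline) > 3 * spread:
            return "alert"
        return "ok"

    for event in [101.0, 250.0, 97.4]:
        print(event, score_event(event))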

Floyer’s lengthy piece on Big Data ROI goes into the technical details on how you piece this together. His technologist background really shows, though, so here are a few terms you’ll need to know to follow it:




More than a third (34%) of IT professionals claim that their organization has suffered a major incident that has required them to implement disaster recovery procedures. In the event of such a disaster or other incident occurring, 51% believe they are only ‘somewhat prepared’ to recover their IT and related assets, and of those who had experienced a major incident, more than half lost data and 11% experienced permanent IT losses.

These were some of the findings in a report published by Evolve IP which also showed that the leading causes of such incidents are hardware outages (47%), environmental disasters (34%), power outages (27.5%) and human error (20%). Perhaps surprisingly, a significant number of organizations continue to use legacy methods for disaster recovery. 45% of those surveyed continue to use backup tapes and 41.5% use additional servers at their primary site as a principal method for disaster recovery.

“For many organizations the question isn’t ‘if’ they will suffer a disaster, it’s ‘when,’” says Tim Allen, Chief Sales Officer of Evolve IP. “As we saw in the survey, when disaster hits, it hits hard typically taking over a day to recover and causing financial as well as data losses.”

The results of this survey demonstrate just why IT-related threats are the biggest concern for business continuity professionals, as shown in the Business Continuity Institute’s annual Horizon Scan report. The latest report revealed that 77% of respondents expressed either concern or extreme concern at the prospect of an IT or telecoms outage.


(TNS) — In the wake of the March 11, 2011, Great East Japan Earthquake, local and prefectural governments around the country rushed to assist the Tohoku region, sending material aid and personnel, while private firms and individuals arrived to volunteer their services wherever they were needed.

Few were as quick to respond as Hyogo Prefecture and the city of Kobe, which had experienced their own earthquake in January 1995, and had worked in the intervening years to become Japan’s premier center for disaster response-related knowledge, and an example that towns, cities and prefectures in Tohoku could use as they attempted to rebuild.

At a recent symposium, held ahead of the 20th anniversary this Saturday of the Great Hanshin-Awaji Earthquake and attended by officials and representatives of nonprofit organizations from Iwate and Hyogo prefectures, Hyogo Gov. Toshizo Ido and Iwate Gov. Takuya Tasso spoke on the administrative and planning challenges governments face when dealing with a large-scale natural disaster.



How to balance the risks and rewards of emerging technologies is a key underlying theme of the just-released World Economic Forum (WEF) 2015 Global Risks Report.

The rapid pace of innovation in emerging technologies, from synthetic biology to artificial intelligence, has far-reaching societal, economic and ethical implications, the report says.

Developing regulatory environments that can adapt, safeguarding the rapid development of these technologies and allowing their benefits to be reaped while preventing their misuse and any unforeseen negative consequences, is a critical challenge for leaders.



A new survey from Chicago-based managed security service provider (MSSP) Trustwave revealed that organizations with 1,000 Internet users or fewer spent more than twice as much on IT security, on a per-user basis, as larger organizations (those with more than 1,000 Internet users).

The survey of 172 IT professionals showed that IT security cost $157 per Internet user in smaller organizations versus $73 per user in larger ones.

Also, Trustwave found that 28 percent of all respondents said they believed they were not getting full value out of their security-related software investments.



Integration isn’t an excuse to avoid trying SaaS enterprise applications, argues principal cloud architect Mike Kavis.

“Sometimes enterprise IT executives think their requirements are so different than those of other companies that they cannot be met by a SaaS provider. This thought process is often nothing more than a poor excuse …” Kavis writes.

Kavis is now also a vice president at Cloud Technology Partners, but I’ve followed his writings for years; he is an industry veteran with extensive experience as an architect and IT analyst.



Forecasting what the IT security landscape will look like in the year ahead has become an annual technology tradition, and following 2014 as the Year of the Data Breach, I think anyone could make a fairly accurate guess as to what the major trend of the New Year will be: more data breaches.

Forty-three percent of organizations reported a data breach in the past year, a figure that Forrester predicts will rise to 60% in 2015. And it’s not just the frequency of breaches that will escalate in the year ahead; malware will also become increasingly difficult to dismantle. P2P, darknet and Tor communications will become more prevalent, and forums selling malware and stolen data will retreat further into hidden corners of the Internet in an attempt to avoid infiltration.

By now, it is no longer a matter of if your business is going to be breached, but when. The last thing any organization needs as we enter another year of risk is a blind side. The good news, though, is that there are ways to prevent one if we act immediately.



With the arrival of Ebola in the U.S. came public fear, widespread misinformation, and the ever-present danger of contamination and contagion. While the cases have been isolated, the threat of the virus required state and local leaders to assume unprecedented leadership and extreme diplomacy in dealing with the public, the medical community, and even medical suppliers and contractors, who balked at handling blood samples, soiled linens and hospital waste out of fear of the virus.

But when a virus like Ebola hits a jurisdiction, there is a hefty fiscal price as well. In Texas, Dallas County was the first U.S. locality to deal with the sudden challenge of an outbreak. The impact on the budget was not inconsequential. It cost the county a quarter of a million dollars to gut and decontaminate the one small apartment of the nation’s first Ebola victim, Thomas Eric Duncan -- part of the approximately $1 million the county expended in the first weeks of the crisis.

Unlike with some contagions, the unknowns with Ebola could constitute the gravest challenge. There are surprising gaps in scientists’ knowledge about the virus, including the time it can survive in different environments outside the body. That is information vital to EMTs, solid waste departments, hospitals and clinics, and public and private water and wastewater systems -- as well as public transportation agencies.



Thursday, 15 January 2015 00:00

Mapping for Ebola: A Collaborative Effort

One of the difficulties faced by teams responding to the current Ebola outbreak in West Africa is identifying individuals and communities residing in remote areas. Maps of these regions either do not exist or are inadequate or outdated. This means that basic data like the location of houses, buildings, villages, and roads are not easily accessible, and case finding and contact tracing can be extremely difficult.

To help aid the outbreak response effort, volunteers from around the world are using an open-source online mapping platform called OpenStreetMap (OSM) to create detailed maps and map data of  Guinea, Sierra Leone, Liberia, and parts of Mali.

Commonly referred to as “Wikipedia for maps,” OSM is working toward the goal of making a map of the world that is freely available to anyone who wants to use it. The Humanitarian OpenStreetMap Team (HOT) is a U.S.-based non-profit organization that represents a subset of the OSM community. HOT’s mission is to use OSM data and tools to help prepare and respond to humanitarian disasters. Because OSM data is available for free download anywhere in the world, volunteer mappers generate data that are useful not only to CDC but also to other agencies involved in the Ebola response, such as Doctors Without Borders (MSF), International Red Cross (IRC), and World Health Organization.

Mappers frequently use satellite images to identify villages, houses, paths, and other details that were previously unmapped. The U.S. State Department’s Humanitarian Information Unit (HIU) is supporting HOT and OSM by creating the MapGive.org website, which provides easy-to-follow instructions on how to begin mapping very quickly. Personnel in CDC’s Division of Global Migration and Quarantine (DGMQ) are coordinating with HIU and HOT to support and promote volunteer mapping in affected West African areas where CDC teams are currently working.

Members of Emory’s Student Outbreak and Response Team (SORT) are some of these volunteer mappers. SORT is a graduate student organization that collaborates with CDC and provides hands-on training in outbreak response and emergency preparedness. Ryan Lash, a mapping scientist in DGMQ’s Travelers’ Health Branch, initially contacted SORT for help in August as the number of Ebola cases in West Africa continued to rise. He has since provided two workshops for SORT members, taught a small number of CDC staff, and trained students at the University of Georgia.

In the 8 months that HOT has been mapping countries with Ebola outbreaks, more than 2,500 volunteers have mapped more than 750,000 buildings and hundreds of kilometers of roads, resulting in detailed maps of affected West African communities. Not only do these maps help first responders and other organizations around the world, they also contribute to the national information infrastructure essential to the recovery and rebuilding of affected regions. The value of OSM was highlighted especially well during the 2010 Haiti earthquake, after which the U.S. State Department decided to promote volunteer mapping as a way for the general public to get involved in humanitarian emergencies.

Volunteer mapping in OSM for HOT can be done by anyone. All you need is a computer, an internet connection, and the time and willingness to learn. Find out more about how you can help here: Learn to Map.
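For readers curious about what working with OSM data involves, here is a minimal sketch of fetching volunteer-traced building footprints through the public Overpass API, a community-run query service for OSM data. The bounding-box coordinates, roughly central Monrovia, are purely illustrative:

    import requests

    # Overpass QL query: every way tagged as a building inside a small
    # bounding box given as (south, west, north, east) -- illustrative.
    query = """
    [out:json][timeout:60];
    way["building"](6.28,-10.81,6.32,-10.77);
    out;
    """

    resp = requests.post("https://overpass-api.de/api/interpreter",
                         data={"data": query}, timeout=90)
    resp.raise_for_status()
    elements = resp.json().get("elements", [])

    # Each element is one volunteer-traced building footprint.
    print(f"Buildings mapped in this area: {len(elements)}")

Running the same query before and after a HOT mapping drive makes the volunteers' contribution directly measurable.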

It’s a near-daily occurrence for most enterprises: a laptop or server becomes obsolete or unusable. But the most important step is often forgotten before the replacement is brought in. How do you ensure that the old device is cleansed of all usable traces of important data before it is disposed of?

Many organizations have internal procedures for disposing of technology, and those steps include wiping hard drives of data or restoring a device to its original status before reuse. But does this alone ensure that no discernible traces of private data are left on the media? Are there ways to be absolutely sure that the organization’s confidential information has been completely removed? Or is there a level of data removal that may not be complete, but is acceptable?
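As a toy illustration of what file-level wiping involves, the sketch below overwrites a file's contents several times before deleting it. This shows the concept only; on solid-state drives, wear-leveling can leave the original cells untouched, which is precisely why the questions above resist a simple yes or no.

    import os

    def overwrite_and_delete(path, passes=3):
        """Overwrite a file in place several times, then delete it.

        Concept only: on SSDs and journaling filesystems the original
        blocks may survive, so this is not a guarantee of destruction.
        """
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))   # fresh random data each pass
                f.flush()
                os.fsync(f.fileno())        # force the pass out to disk
        os.remove(path)

    # Example: create a throwaway file, then wipe and delete it.
    with open("secret.tmp", "wb") as f:
        f.write(b"confidential payroll data")
    overwrite_and_delete("secret.tmp")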



All business in a competitive market is risk-based, whether or not enterprises admit it. Positive risk indicates opportunities. Negative risk points to the need to take measures to avoid, transfer or mitigate that risk. Banks are a case in point, with risk analysis at the heart of their daily activities as they continually calculate the probabilities of profitability in investments and loans. For enterprises in other sectors, risk may be less in the spotlight, but no less important. All companies need good disaster recovery and business continuity management for instance. Both depend on properly assessing risks and their impact. So how can you tell if senior management is taking risk management seriously?



Thursday, 15 January 2015 00:00

Global Risks 2015

The World Economic Forum has published its annual look ahead at the risks that are likely to dominate in the coming years.

The biggest threat to the stability of the world in the next 10 years comes from the risk of international conflict, according to the 10th edition of the World Economic Forum Global Risks report.

The report, which every year features an assessment by experts on the top global risks in terms of likelihood and potential impact over the coming 10 years, finds interstate conflict with regional consequences as the number one global risk in terms of likelihood, and the fourth most serious risk in terms of impact.




Less than twelve months ago the UK suffered severe flooding in many parts of the country, and this is not an infrequent occurrence. During the last five years over half (51%) of businesses have experienced some form of damage through floods, wind and thunderstorms alone, and this can often prove costly. The situation could be further exacerbated within smaller organizations, as a new study has shown that 46% of small to medium sized businesses (SMBs) haven’t considered a business continuity plan to carry on trading or mitigate losses.

There are nearly five million SMBs in the UK and each one risks suffering on average £38,311 worth of damage because of the elements. As a result, the potential cost to the economy could be as high as £86 billion. Weather chaos means small businesses could also lose over three working days (26 hours) of staff time.

Weather is a threat to many organizations, so much so that in the Business Continuity Institute’s Horizon Scan report, adverse weather came fourth in a list of potential threats with 57% of respondents to a survey expressing either concern or extreme concern at the possibility of their organization suffering a disruption as a result.

The findings come from a survey of 1,000 SMBs conducted by Towergate Insurance and designed to ascertain the impact of bad and unexpected weather on the UK’s mass of smaller firms. The nationwide survey also reveals 43% of UK SMBs either simply do not have cover or do not know whether they are covered in the event of serious bad weather.

Commenting on the findings, James Tugendhat of Towergate Direct, said: “Small businesses are a vital part of the UK economy and can’t afford to lose money due to the unpredictable British weather. Whilst the good old British weather has become a joke, losing large sums of money or business days due to damage is no laughing matter. Making sure businesses are aware of the risks bad weather poses and how to mitigate against it means SMBs can be guaranteed peace of mind and get back to the business of business.”


By Sue Poremba

Hybrid cloud. BYOD. Big Data. Internet of Things. These terms have become part of the daily lexicon, not only within the information technology (IT) and cyber security world but also in the mainstream. Jargon is integral to IT: it makes complicated concepts more accessible to the non-technical person, even if they aren’t any easier to understand.

Buzzwords are commonplace in IT security, as well, but are they truly understood? As Frank Ohlhorst writes in Tech Republic, “it seems that IT security managers are giving too much power to terms and buzzwords, letting them dictate security best practices.” Ohlhorst goes on to point out that while BYOD is just an acronym that means, simply, Bring Your Own Device (such as when a company allows its employees to use their personally-owned phones, laptops, and other devices to access the network for work purposes), security professionals see it as Bring Your Own Disaster and the beginning of a security nightmare.



It would be interesting to see what would happen if there was another Ebola scare in the U.S. The answer might depend on when it happened and perhaps where the person became infected. But chances are the health infrastructure would handle it, and perhaps respond to another infectious disease outbreak much better, having had the experience that the recent Ebola episodes provided.

That experience included hiccups and communication errors that resulted not in panic but in disagreement among some in the health community and alarm among the public. One target of criticism is the Centers for Disease Control and Prevention (CDC), which from the beginning expressed confidence that hospitals throughout the U.S. were ready to handle Ebola cases, and which messaged to the public about the difficulty of transmitting the infection. The CDC chose not to participate in this discussion.

When Thomas Eric Duncan, who eventually died, was first found to have Ebola, the CDC sought to calm fears and educate the public about the likelihood of the disease spreading through normal contact with an infected individual, and about what should be done if someone was thought to have symptoms. It also expressed confidence in the ability of the health infrastructure to deal with an outbreak.



Wednesday, 14 January 2015 00:00

Data Storage Benchmarking Guide

Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.

Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).

But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more.
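Before looking at the formal benchmarks, it helps to see what any of them measure at heart: timed I/O. The toy sketch below times a naive sequential write and reports throughput. Real benchmarks such as SNIA's Emerald control for caching, queue depth, block size and power draw, none of which this sketch attempts.

    import os
    import time

    def sequential_write_mb_per_s(path, total_mb=256, block_kb=1024):
        """Time a naive sequential write and return throughput in MB/s."""
        block = os.urandom(block_kb * 1024)
        blocks = total_mb * 1024 // block_kb
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())   # count the time to reach the device
        elapsed = time.perf_counter() - start
        os.remove(path)
        return total_mb / elapsed

    print(f"{sequential_write_mb_per_s('bench.tmp'):.1f} MB/s")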



Post-apocalyptic movies such as “The Road Warrior,” “I Am Legend” and “The Matrix” have long been a Hollywood staple. You will need something more than a backup service to keep your business going in the event of nuclear war or an alien invasion, but for customers of MSPs, disasters are not an all-or-nothing proposition. Instead, they encompass a whole range of large and small incidents that can result in data and service losses. A properly designed disaster recovery system will protect against:

  • Ransomware
  • Accidental deletion
  • Hardware failure
  • Software corruption
  • Power surges, brownouts or outages
  • Lost smartphones, laptops and tablets
  • Fires and fire protection system damage
  • Vandalism
  • Theft
  • And whatever floods, earthquakes, tornados, tsunamis, lightning strikes, hurricanes or blizzards our dear, sweet Mother Nature decides to give us

Here are five critical factors MSPs should keep in mind when setting up their own and their customers’ systems for easy data recovery after a disaster.




When was the last time you conducted a business continuity exercise? Were your colleagues enthusiastic participants? It’s not always easy to get buy-in, either from top management who don’t want to fund it, or from your non-BC colleagues who don’t have time to take part.

This is why 'testing times' was chosen as the theme for Business Continuity Awareness Week: we want to support you in explaining to your colleagues just how important testing and exercising is to the whole business continuity process. To put it simply, a plan that has not been exercised is not a plan! We also want to support you in organizing your exercises, or, more to the point, we want BC professionals to support each other.

To begin with, the Business Continuity Institute has produced a series of posters that are free to download and can be placed in a prominent location in your workplace to highlight the importance of exercising your plans. Each poster asks the question:

When do you want to find out your business continuity plan doesn’t work?
A) During an exercise?
B) When an incident occurs?

These posters can be found on the BCAW website.


We also plan to post a series of case studies, white papers and other material that would support your case for an exercise or help you in planning one, but for this we need your help. We need the help of those people who do this work on a daily basis. Have you recently run an exercise? Then why not submit a case study? It doesn’t have to be lengthy; just say a little about what you did. Have you recently conducted some research? Then perhaps you’d like to submit a white paper. It could provide some great publicity for you and your organization.

As with previous years, we are putting together an extensive webinar programme where business continuity experts will discuss the relevant issues relating to the theme and offer the viewer the opportunity to ask questions.

If you would like to become involved with BCAW, either by submitting material or hosting a webinar, or if you would just like further information, please do get in touch by emailing Andrew Scott.

Tuesday, 13 January 2015 00:00

CDC: Flu Season a Bad One

(TNS) — The Centers for Disease Control and Prevention said this year’s flu season is shaping up to be a bad one.

Already there have been 26 confirmed pediatric deaths and flu is widespread in almost “the entire country,” CDC director Tom Frieden said on a conference call with reporters Friday morning.

The number of hospitalizations among adults aged 65 and older is also up sharply, rising from a rate of 52 per 100,000 last week to 92 per 100,000 this week, Frieden said.

And more hospitalizations and deaths could still be to come. The nation is about seven weeks into this year’s flu season, and seasons typically last about 13 weeks, Frieden said.

“But flu season is unpredictable,” he said, adding it could last longer than 13 weeks.



Not too long ago, organizations fell into one of two camps when it came to personal mobile devices in the workplace – these devices were either connected to their networks or they weren’t.

But times have changed. Mobile devices have become so ubiquitous that every business has to acknowledge that employees will connect their personal devices to the corporate network, whether there’s a bring-your-own-device (BYOD) policy in place or not. So really, those two camps we mentioned earlier have evolved – the devices are a given, and now, it’s just a question of whether or not you choose to regulate them.

This decision has significant implications for network security. If you aren’t regulating the use of these devices, you could be putting the integrity of your entire network at risk. As data protection specialist Vinod Banerjee told CNBC, “You have employees doing more on a mobile device and doing it ad hoc here and there and perhaps therefore not thinking about some of the risks that are apparent.” What’s worse, this has the potential to happen on a wide scale – Gartner predicted that, by 2018, more than half of all mobile users will turn first to their phone or tablet to complete online tasks. The potential for substantial remote access vulnerabilities is high.



Tuesday, 13 January 2015 00:00

Tackling the Unstructured Data in Big Data

There’s a lot of talk about Big Data as if it is one entity. We hear: How do you manage Big Data? How do you govern Big Data? What’s the ROI for Big Data? The problem with this is that it puts too much focus on the technology, while obscuring one of the major challenges in Big Data sets: the unstructured data. 

I suspect CIOs haven’t forgotten that component since about 80 percent of data in organizations today is unstructured data, according to Gartner. That’s a lot of value currently hiding in social media, customer call transcripts, emails and other text-based or image-based files.

That’s a problem, because that also happens to be where you may find the real value in Big Data. These disparate data sets were previously unanalyzed or sitting in application silos. Obviously, Hadoop will let you migrate that into one location, but what then? How do you turn that into valuable information?

This recent Datamation column by Salil Godika goes a long way toward answering these questions. Godika is the chief strategy & marketing officer and Industry Group head at Happiest Minds. I admit this gave me pause, because pieces by chief marketing officers can be too self-serving.
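Whatever strategy a column like Godika's recommends, the first practical step is usually mundane: extracting a structured signal from free text. As a small stand-in for what a Hadoop job would do across terabytes, the sketch below counts complaint-related terms in a folder of call transcripts (the folder name and keyword list are hypothetical):

    import re
    from collections import Counter
    from pathlib import Path

    # Hypothetical signal: complaint-related terms in call transcripts.
    KEYWORDS = {"refund", "cancel", "broken", "late", "complaint"}

    counts = Counter()
    for transcript in Path("transcripts").glob("*.txt"):
        words = re.findall(r"[a-z']+", transcript.read_text().lower())
        counts.update(w for w in words if w in KEYWORDS)

    # A first structured view of previously unstructured data.
    for term, n in counts.most_common():
        print(f"{term}: {n}")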



There are times when you wish you could undo what you just did. Sometimes, you can’t. Financial investments, office reorganisations and even that too-hasty email you sent often cannot simply be reversed. With IT on the other hand, it’s a different story. From individual PCs to corporate data centres, the ‘Undo’ function has become a standard feature of many computing systems for making errors and problems disappear. As little as one mouse click may be enough to turn back the hands of time and begin again as though a mistake had never been made. But is this disaster recovery capability the magical solution it is often made out to be?
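Before answering that, it helps to see what the 'Undo' actually is: a snapshot. Capture state before a change, restore it if the change goes wrong. The toy sketch below shows the idea in miniature; real systems implement it with copy-on-write storage rather than full copies.

    import copy

    class Snapshottable:
        """Toy snapshot/rollback: the essence of an IT 'Undo' button."""

        def __init__(self, state):
            self.state = state
            self._snapshots = []

        def snapshot(self):
            # Real systems use copy-on-write; a deep copy shows the idea.
            self._snapshots.append(copy.deepcopy(self.state))

        def rollback(self):
            self.state = self._snapshots.pop()

    config = Snapshottable({"dns": "10.0.0.2", "gateway": "10.0.0.1"})
    config.snapshot()
    config.state["dns"] = "10.0.9.9"   # a change we come to regret
    config.rollback()
    print(config.state)                # back to the pre-change state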



The concept of digital transformation is not a new one, as technology has been used to augment business functions since the dawn of the computer age. These days, however, digital transformation means different things to different companies, requiring each company to tailor its integration of technology in a way that increases productivity and improves communication with internal and external parties.

Personally, I like the Altimeter Group’s definition of digital transformation, since it is the most appropriate for modern market-focused usage: “The realignment of, or new investment in, technology and business models to more effectively engage digital customers at every touch-point in the customer experience lifecycle.” In most cases, the goals of digital transformation include better engagement with digital customers, greater collaboration with internal resources, and improved efficiency.




It may not come as a surprise that cyber security incidents are on the rise. Open any newspaper today and you will no doubt come across yet another article highlighting some organization that has become the latest victim of a breach in online security.

Quite how much these incidents are on the rise is perhaps a little more concerning, however. A recent report produced by PwC on the Global State of Information Security has shown that the number of information security incidents reported by survey respondents increased from 28.9 million in 2013 to 42.8 million in 2014 – a 48% increase. The report also cites additional research suggesting that 71% of compromises go undetected, meaning that 42.8 million could be just the tip of the iceberg.

The Business Continuity Institute’s Horizon Scan report has consistently shown that cyber attacks and data breaches are a major concern for business continuity professionals, with the latest survey highlighting that 73% of respondents expressed either concern or extreme concern at the prospect of one of these threats materialising.

The cost of security incidents can be high with the report noting that a recent study by the Center for Strategic and International Studies estimated that the annual cost of cybercrime to the global economy could be somewhere between $375 billion and $575 billion. The report further notes that this doesn’t cover the cost of IP theft which could range from $749 billion to as much as $2.2 trillion.

You might think that with this significant increase in the number of security incidents, and the financial impact these incidents can have, budgets would also be increasing in order to protect against them. However, the opposite appears to be the case: the report reveals that the average security budget among respondents decreased by 4% from the previous year.

The Global State of Information Security Survey 2015 was a worldwide study by PwC, CIO, and CSO conducted online during the first half of 2014. Readers of CIO, CSO and clients of PwC from around the globe were invited to take the survey and the results discussed in the report are based on the responses of more than 9,700 CEOs, CFOs, CIOs, CISOs, CSOs, VPs and directors of IT and security practices across more than 154 countries.

As companies increasingly turn to the public cloud to house various components of their IT infrastructures, it will probably always be the case that other components will remain on-premise. So the question of how to best manage that hybrid environment becomes one that an increasing number of IT pros will have to be able to answer.

I discussed that question in a recent email interview with Lynn LeBlanc, co-founder and CEO of HotLink, a hybrid IT management software provider in Santa Clara. I started by asking LeBlanc what she finds companies tend to keep on-premise, and why they’re going that route. She said the reasons for hybrid cloud deployments vary from organization to organization, but it’s generally more a question of what they want to put in the cloud:



Never before have there been so many options for alerting the public. In the last few months alone, new alerting channels have emerged to complement an already growing array. Names like Google, Twitter, Facebook and the Weather Channel have entered the alerting field. Legacy vendors have enhanced their offerings. The federal government now has impressive alerting success stories to tout. An industry and practice area that once seemed sleepy is wide awake. At the same time, new complexities and challenges have shown themselves.

As part of the move toward ubiquitous alerting, one organization is working to turn online advertisements into emergency alerts. Members of the Federation for Internet Alerts (FIA) are replacing “interest-based advertising” with targeted alerts. Interest-based ads are the ones you see online that know what you’ve been looking for, using Web cookies or mobile service identifiers left behind when you conduct a search or visit a site. Under the FIA plan, interest-based ads would be replaced with emergency alerts for a specific geographic area. The FIA’s Jason Bier, chief privacy officer at the company Conversant, said that through a pilot, Amber Alert messages have been exposed via 500 million “impressions” to more than 100 million devices.
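Mechanically, the swap amounts to a targeting decision: if the location attached to an ad request falls inside an active alert area, serve the alert creative instead of the ad. A minimal sketch of that decision, with an entirely hypothetical alert area and creatives:

    # Hypothetical alert area as a (south, west, north, east) bounding
    # box, roughly covering the Dallas-Fort Worth region.
    ALERT_BBOX = (32.6, -97.5, 33.1, -96.5)
    ALERT_CREATIVE = "AMBER Alert: check your local alert page"

    def choose_creative(lat, lon, default_ad):
        """Serve an emergency alert instead of an ad inside the alert area."""
        south, west, north, east = ALERT_BBOX
        if south <= lat <= north and west <= lon <= east:
            return ALERT_CREATIVE
        return default_ad

    print(choose_creative(32.78, -96.80, "interest-based ad"))  # alert served
    print(choose_creative(40.71, -74.00, "interest-based ad"))  # normal ad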



Pricing data backup and disaster recovery (BDR) and business continuity services can be challenging, especially for managed service providers (MSPs) that offer cloud-based storage of customer data.

A time-based cloud retention (TBCR) fixed-pricing model, however, ensures the monthly cost for cloud-based storage of customer data does not vary based on volume.

Also, service providers can use TBCR to offer customers secure, rapidly recoverable off-site backup for a fixed monthly price that is based on how long they need to retain their data.
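In other words, under TBCR the pricing input is the retention window rather than the stored volume. A minimal sketch, with entirely hypothetical price points:

    # Hypothetical time-based cloud retention (TBCR) price list: the fee
    # depends only on how long data must be retained, never on volume.
    TBCR_MONTHLY_PRICE = {
        "30_days": 99.00,
        "1_year": 199.00,
        "7_years": 399.00,
    }

    def monthly_cost(retention, stored_gb):
        """Volume is accepted but deliberately ignored: the price is fixed."""
        _ = stored_gb
        return TBCR_MONTHLY_PRICE[retention]

    # A customer who triples their data keeps the same predictable bill.
    print(monthly_cost("1_year", 500))    # 199.0
    print(monthly_cost("1_year", 1500))   # 199.0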



Integration isn’t exactly a fast-moving part of IT, so it isn’t usually listed on New Year technology prediction lists. This year, I spied two integration trends among these lists that could potentially shake up IT and the business.

First, CIO.com lists deeper ERP integration as a top trend for enterprise software in 2015. This could be huge for business users, who could then leverage that rich ERP data for other applications — especially CRM. Jeremy Roche, CEO of cloud ERP provider FinancialForce, explained it thusly:



Friday, 09 January 2015 00:00

Abiding by the rules of business continuity


There are many 'rules' that govern what we do as business continuity professionals – some are sector specific, some are based on geography. But which of them apply to your organization? When you start to look into it, it's not difficult to become confused as to which you are supposed to abide by.

The Business Continuity Institute now aims to simplify this by publishing what we believe to be the most comprehensive list of legislation, regulations, standards and guidelines in the field of business continuity management. This list was put together based upon information provided by the members of the Institute from all across the world. Some of the items may only be indirectly related to BCM, and should not be interpreted as specifically designed for the industry, but rather they contain sections that could be useful to a BCM practitioner.

The ‘BCM Legislations, Regulations, Standards and Good Practice’ document breaks the list down by country and for each entry provides a brief summary of what the regulation entails, which industries it applies to, what the legal status of it is, who has authority for it and, finally, it provides a link to the full document itself.

The BCI has done its best to check the validity of these details but takes no responsibility for their accuracy and currency at any particular time or in any particular circumstances. To download a copy of the document, click here.

Friday, 09 January 2015 00:00

IBM Stays the Storage Course

The overall storage market has recently faced a number of challenges in achieving desired goals, such as the number of petabytes vendors actually sell. That has led a few prognosticators to apply a “sky-is-falling” analysis to the situation (as that attracts attention). But that approach is fundamentally wrong.

Now, in any dynamic and rapidly changing market such as storage, where trends such as software-defined solutions and Flash technologies are transforming vendor and customer expectations, and where global IT trends like cloud, big data and mobile also have an immense impact, there are likely to be challenges. That is especially the case where both established vendors and newer players duke it out.

The key is not to panic. And that is why it is so important to IBM’s storage customers that the company is staying the course. This does not mean standing still, but rather progressing in a measured manner. IBM’s recent fourth-quarter storage announcements do not contain any blockbusters. For that we can be grateful, as blockbusters absorb all the attention and force us to expend a lot of thought, time and energy trying to understand what impact they will have.



Friday, 09 January 2015 00:00

BYOD: Follow the Money

The topic of money – who pays for what and how to get the best plans when business and consumer activities are mixed – has been a vexing one since Bring Your Own Device (BYOD) emerged. It has taken something of a back seat while companies figured out how to keep data secure and separate in the two spheres.

Those primary tasks are well on their way to being solved. Now attention is turning, as it eventually always does, back to the money. The industry is getting serious about the issue, at least at the rudimentary level of splitting work and consumer bills.

Mobile Enterprise reports that the AT&T Work Platform will enable organizations to separate work and consumer expenditures. The story says that it is an important task from several points of view. Of course, there is the simple point of figuring out who pays for what. Beyond that are the legal, human resources and tax regulations. AT&T is cooperating with big-name vendors MobileIron, AirWatch by VMware and Good Technology on the platform.



Friday, 09 January 2015 00:00

Why CIOs Will Want Data Lakes

Edd Dumbill may have just won the argument over whether data lakes are a practical, achievable idea.

Data lakes are a simple enough idea: You dump a wide range of data into a Hadoop cluster and then leverage that across the enterprise.

The problem is what Gartner calls the “Data Lake Fallacy,” which is the challenge of managing data lakes in a governable and secure way.

Dumbill acknowledges the barriers to data lake adoption in a recent O’Reilly Radar Podcast. Ultimately, though, the VP of strategy at Silicon Valley Data Science says data lakes will happen for one reason: Data lakes free data from enterprise silos.

“One of the hardest things for organizations to get their head around is getting data in the first place,” Dumbill told O’Reilly’s Mac Slocum. “A lot of CIOs will be, ‘Great, I want to do data science but I’ve got this database over here and this one over here and these all need to speak to each other and they’re in different formats and so on.’ In many ways, having data in a data lake provides you with a foundation (with) which you can start to integrate data with and then make it accessible as a building block in an organization.”
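A minimal illustration of Dumbill's point, assuming two hypothetical silo exports in different formats: once both land in one pool with a common schema, they can be analyzed together instead of through two separate applications.

    import csv
    import json

    records = []

    # Silo 1: a CRM export as CSV (hypothetical file and fields).
    with open("crm.csv", newline="") as f:
        for row in csv.DictReader(f):
            records.append({"customer": row["name"], "source": "crm"})

    # Silo 2: a web analytics export as JSON lines (hypothetical).
    with open("web.jsonl") as f:
        for line in f:
            event = json.loads(line)
            records.append({"customer": event["user"], "source": "web"})

    # One pool, one schema: the "building block" Dumbill describes.
    customers = {r["customer"] for r in records}
    print(f"{len(records)} records covering {len(customers)} customers")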



“Pandemic” and “panic” sound a lot alike. Certainly, the first can trigger the second in next to no time, as the recent outbreak of Ebola has demonstrated. But as a leader in your company, you can avoid both by encouraging your cross-functional teams to take the following six steps.



(TNS) — There are chilling similarities between the deadly Charlie Hebdo attack in Paris and the Boston Marathon bombings, with lessons to be drawn for law enforcement, terrorism experts say.

Both attacks have been blamed on homegrown terrorist brothers — in each case with a brother who had drawn law enforcement attention for Islamic radical ties before. In both cases, both police and citizens were targeted with equal cold-blooded vigor.

“I think what you’re going to see is governments going through their watch lists to see how many names appear identical. They should have added worry when you have two or three members of the same family giving prior warning, governments should be taking a second and third look at them,” said Victor Davis Hanson of the Hoover Institution. “When you are dealing with familial relations, it means there are fewer people who have privileged information about the ongoing plotting and the secret is reinforced by family ties ... it’s going to be much harder for Western intelligence to break into them.”



Thursday, 08 January 2015 00:00

43 States Have 'Widespread' Flu Problems

(TNS) -- Influenza viruses have infiltrated most parts of the United States, with 43 states experiencing "widespread" flu activity and six others reporting "regional" flu activity, according to the Centers for Disease Control and Prevention.

Hawaii was the only state where flu cases were merely "sporadic" during the week that ended Dec. 27, the CDC said in its latest FluView report. One week earlier, California also had been in the "sporadic" category, and Alaska and Oregon reported "local" flu outbreaks. Now all three states have been upgraded to "regional" flu activity, along with Arizona, Maine and Nevada.

The rest of the states are dealing with "widespread" outbreaks, according to the CDC.



Thursday, 08 January 2015 00:00

SMBs Should Consider These Tech Trends in 2015

Of course, the end of 2014 and the beginning of 2015 bring all sorts of articles predicting what will be hot in the coming year. For small to midsize businesses (SMBs), quite a few outlets are reporting their lists of technology trends to watch.

Entrepreneur gave three “promising trends” for 2015, which include creating and leveraging well-designed technology, adopting software as a service (SaaS) and developing “data-driven insights.”

Taking advantage of data to make better informed decisions is also a top trend for SMBs to watch from the Huffington Post. According to writer Joyce Maroney, “Smaller businesses, swimming in lots of data of their own, will likewise be taking more advantage of that data to bring science as well as art to their decision making.” That likely means delving further into more data sources than just Google Analytics, says Entrepreneur writer Himanshu Sareen, CEO of Icreon Tech.



The presence or absence of catastrophes is a defining factor in the financial state of the U.S. property/casualty insurance industry.

At the 2014 Natural Catastrophe Year in Review webinar hosted by Munich Re and the Insurance Information Institute (I.I.I.), we can see just how defining the influence of catastrophes can be.

U.S. property/casualty insurers had their second best year in 2014 since the financial crisis – 2013 was the best – according to estimates presented by I.I.I. president Dr. Robert Hartwig.

P/C industry net income after taxes (profits) is estimated at around $50 billion in 2014, down from 2013, when net income rose by 82 percent to $63.8 billion on lower catastrophe losses and capital gains.



Thursday, 08 January 2015 00:00

Survey: business continuity in 2015

Continuity Central’s annual survey asking business continuity professionals about their expectations for the year ahead is now live.

Please take part at https://www.surveymonkey.com/r/businesscontinuityin2015

The survey looks at the trends and changes the profession can expect to see in the year ahead.

Read the results from previous years:

Thursday, 08 January 2015 00:00

Scoping Out Your Program/Risk Assessment

At the PLI Advanced Compliance & Ethics Workshop in NYC in October, Scott Killingsworth of the Bryan Cave law firm noted that each risk assessment should be unique.  I agree, and I believe that the case for uniqueness is even more powerful for the combined program and risk assessments companies sometime undertake.  Given the diversity of possibilities, where should you start in scoping out such an engagement?  Another way of asking this question is “How should you conduct a needs assessment for a program/risk assessment?”

To begin, it may be worth thinking in terms of the following six fields of information which can comprise the subjects of an assessment:



The future of IT infrastructure is changing. My friend, BJ Farmer over at CITOC, is fond of reminding me that Change is the Only Constant (see what CITOC stands for?).

It’s true for most everything in life, and especially true for our industry. You can either embrace the changes that come along, evolving how you present services to your clients, or you can slowly lose relevance and fade out of the big picture. The choice is yours.

Right now, change comes from The Cloud.

Yes, there is definitely a lot of hype about the cloud, and it’s easy to grumble about fads and look at the big cloud migration as a bandwagon everyone’s too eager to jump on. But the plain fact is that the cloud is providing affordable, smart alternatives to the kind of infrastructure that used to be the bread and butter of an MSP, and it’s not going anywhere. So you can either keep railing against the cloud, running your Exchange servers and piecing together various services from different partners, or you can start thinking about how to offer innovative solutions for your clients by STRATEGICALLY leveraging the cloud.



Thursday, 08 January 2015 00:00

Human Error Caused 93% of Data Breaches

Despite greatly increased attention, the number of reported cyberbreach incidents rapidly escalated in 2014. According to Information Commissioner’s Office data collected by Egress Software Technologies, U.K. businesses saw substantially more breaches last year, with industry-wide increases of 101% in healthcare, 200% in insurance, 44% among financial advisers, 200% among lenders, 56% in education and 143% in general business. As a result, these industries also saw notable increases in fines for data protection violations.

The role of employees was equally alarming. “Only 7% of breaches for the period occurred as a result of technical failings,” Egress reported. “The remaining 93% were down to human error, poor processes and systems in place, and lack of care when handling data.”

Check out more of the findings from Egress’ review in the infographic below:



The recent Ebola outbreak unearthed an interesting phenomenon. A “mystery hemorrhagic fever” was identified by HealthMap — software that mines government websites, social networks and local news reports to map potential disease outbreaks — a full nine days before the World Health Organization declared the Ebola epidemic. This raised the question: What potential do the vast amounts of data shared through social media hold in identifying outbreaks and controlling disease?

Ming-Hsiang Tsou, a professor at San Diego State University and an author of a recent study titled The Complex Relationship of Realspace Events and Messages in Cyberspace: Case Study of Influenza and Pertussis Using Tweets, believes algorithms that map social media posts and mobile phone data hold enormous potential for helping researchers track epidemics.

“Traditional methods of collecting patient data, reporting to health officials and compiling reports are costly and time consuming,” Tsou said. “In recent years, syndromic surveillance tools have expanded and researchers are able to exploit the vast amount of data available in real time on the Internet at minimal cost.”
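The underlying technique can be sketched simply: scan timestamped posts for symptom keywords and flag any day whose count spikes well above the others. Real systems like HealthMap add many sources, languages and geographic resolution, but the skeleton looks something like this (the posts and keywords below are invented):

    import re
    from collections import Counter
    from statistics import mean

    # Invented sample of timestamped posts: (date, text).
    posts = [
        ("2014-03-01", "terrible fever and headache today"),
        ("2014-03-01", "lovely weather out"),
        ("2014-03-02", "fever again, half the office is sick"),
        ("2014-03-02", "high fever, vomiting, going to the clinic"),
        ("2014-03-02", "this fever won't break"),
    ]
    SYMPTOMS = {"fever", "vomiting", "hemorrhagic"}

    # Count symptom-mentioning posts per day.
    daily = Counter()
    for date, text in posts:
        if SYMPTOMS & set(re.findall(r"[a-z]+", text.lower())):
            daily[date] += 1

    # Flag any day far above the average of the remaining days.
    for date, count in sorted(daily.items()):
        others = [c for d, c in daily.items() if d != date]
        spike = others and count > 2 * mean(others)
        print(date, count, "<-- possible outbreak signal" if spike else "")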



(TNS) — After a series of 13 small earthquakes rattled North Texas from Jan. 1 to Wednesday, a team of scientists is adding 22 seismographs to the Irving area in an effort to learn more.

The team of seismologists from Southern Methodist University, which has studied other quakes in the area since 2008, deployed 15 of the earthquake monitors Wednesday. SMU studies of quakes in the DFW Airport and Cleburne areas have concluded wastewater injection wells created by the natural gas industry after fracking are a plausible reason for the temblors in those areas.

But Craig Pearson, seismologist for the state Railroad Commission, said that is not the case with the Irving quakes.

“There are no oil and gas disposal wells in Dallas County,” said Railroad Commission of Texas seismologist Dr. Craig Pearson in a Wednesday email.



Wednesday, 07 January 2015 00:00

Frigid Weather Heightens Ice Hazards

Freezing weather now sweeping across much of the U.S. brings a greater risk of ice storms and underlines the need for careful planning and heightened safety measures.

In fact, it does not take much ice to create disaster conditions. Even a thin coat of ice can create dangerous conditions on roads. Add strong winds and you have a recipe for downed trees and power lines, bringing outages that can last for days.



Wednesday, 07 January 2015 00:00

How We Get Work Done: Good Old Email

While attention is focused this week on the CES 2015 show in Las Vegas and all the new technology, gadgets and apps that may change the way we work in the near future, Pew Research has a reminder of the technology that we truly consider indispensable at work: Email and the Internet.

After a survey of 1,066 adult Internet users, Pew Research analyzed results from those who have full- or part-time jobs. When it comes to the digital work lives of these respondents, the findings indicate, the tools designated as “very important” are nothing new. Sixty-one percent named email, 54 percent “the Internet,” and 35 percent a landline phone. Cell phones and smartphones trailed at 24 percent, and social networking sites grabbed a measly 4 percent.

Pew notes that email is still king despite increasing awareness of drawbacks, including “phishing, hacking and spam, and dire warnings about lost productivity and email overuse.” In fact, 46 percent of respondents said they think they are more productive with their use of email and other digital tools; 7 percent say they are less productive. Being more productive, these workers report, includes communicating with more contacts outside the company, more flexible work hours, and more hours worked.



By David Honour

As we enter a new year it’s always a good exercise to look ahead at potential changes in the coming 12 months and what these might mean for existing business continuity plans and systems. Will the strategies you had in place in 2014 remain fit for purpose, or will some reworking be necessary? What emerging threats need to be considered to ensure that new exposures are not developing? In this article I highlight three areas which are likely to be the biggest generic business continuity challenges in 2015.

The rise and rise of information security threats

2014 was the year that information security related incidents took many of the business continuity headlines, with attacks increasing in sophistication, magnitude and impact. This situation is only going to get worse during 2015.

The greatest risk is that of a full-on cyber war breaking out, which would inevitably result in collateral damage to businesses. The first salvoes have been seen in a potential United States versus North Korea cyber war; but other state actors are also well geared up for cyber battle, including Israel, Russia, China and India. The cyber-warfare skills of terrorist groups such as ISIS should also not be under-estimated.



On January 1, 2015, version 3.0 of the PCI (Payment Card Industry) Data Security Standards replaced version 2.0 as the standard. In other words, what some financial institutions, merchants, and other credit card payments industry members already saw as an onerous process—complying with PCI standards and possibly being audited—is about to get even harder. While I can’t take the blood, sweat and tears out of PCI compliance, as an experienced Qualified Security Assessor (QSA) I can give you some context for why PCI is issuing a new version of its standards, and why 3.0 is a good thing for your business in the end.