
Industry Hot News


Many organizations are hesitant to adopt cloud services for cloud storage and cloud-based file sharing. Although many customers don’t understand cloud security (and don’t want to), slow adopters present a special challenge for managed service providers (MSPs).

One way to sell organizations on the importance of their own security practices is to point out how far cloud services have come in terms of safety and reliability.  How can you do this?  Here are some ways that MSPs can convince slow-to-adopt organizations to take responsibility for their data security:



Andrew MacLeod argues that insights into, and more importantly understanding of, an organization’s culture help to ascertain the risk appetite of an organization and can therefore be used to enhance organizational resilience. For an organization to truly enhance its resilience it needs to embed a culture of resilience at every level.

By Andrew MacLeod BA (Hons) MBCI

“The concept of organizational culture must be recognised as one of vital importance to the understanding of organization and all activities and processes operating within and in connection with organization.” (Brooks, 2003)

As Brooks states, the concept of culture and therefore insights into its operation within an organization are fundamental. However, to fully understand how culture can enhance organizational resilience, one must be clear by what is meant by both organizational resilience and organizational culture. This paper will define organizational resilience in the contemporary context and explore what is meant by culture. It will be demonstrated that culture is a complex field of study and that every organization has its own unique culture which is interwoven with concepts of individual and national culture. This paper will argue that insights into, and more importantly understanding of, an organization’s culture help to ascertain the risk appetite of an organization and these insights can be used to enhance organizational resilience. It will be shown that for an organization to truly enhance its resilience it needs to embed a culture of resilience at every level.



Businesses often struggle on with legacy server rooms due to budget constraints and fear of upgrade risks. In this article Mark Allingham challenges BC managers to face up to this problem.

One of the basic rules of business continuity management is to ensure that everyday information technology systems are protected and fit for purpose, but often businesses struggle on with legacy server rooms. Mark Allingham challenges BC managers to face up to this problem.

The server room is the beating heart of any but the smallest business. You rely on your servers for vital files, essential information and the day to day running of the organization, so any risk of failure is a considerable threat to business continuity. Legacy server rooms with outdated equipment and limited capacity are liable to power outages, downtime and worse. So any business continuity manager should consider carefully whether their existing server room is fit for purpose.



It seems like once a week, we see yet another story about a security failure involving passwords. In May alone, for instance, the news came that an unpatched vulnerability in Oracle’s PeopleSoft could open a hole for thieves to steal passwords; Google revealed that those security questions that help you retrieve a lost password are anything but secure; and Starbucks blamed passwords for its own recent hack attack.

It’s no wonder, then, that passwords (and usernames) were a popular topic at the RSA Conference this year. One of those speaking about the problem of passwords, Phillip Dunkelberger, president and CEO at Nok Nok Labs, said a number of significant problems with passwords make them a poor single method of authentication.

“First, passwords are a symmetric secret – we enter a password on our PC or smartphone that is matched up on a server, this means that organizations are holding hundreds of millions of passwords in large databases. Despite using techniques such as salting and hashing of password databases, security professionals have found it practically impossible to secure this infrastructure, so passwords are very vulnerable to massive, scalable hacks,” he said.
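The salting and hashing Dunkelberger refers to can be sketched in a few lines. This is a minimal illustration of the general technique, not any particular vendor’s implementation; the iteration count and salt length are illustrative choices.

```python
# Sketch of salted password hashing: each password is stored as a
# (salt, hash) pair, so two users with the same password do not
# produce identical database entries. Parameters are illustrative.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Even with this done correctly, Dunkelberger’s point stands: the server still holds a database of secrets, which is exactly the infrastructure attackers target at scale.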



SURAT, India — “I don’t have to go to the gym,” says Urmil Kumar Vyas with an impish smile. “Don’t you think climbing 400 steps is enough exercise for a day?”

Vyas and I are wending our way toward a high-rise building in one of the wealthier zones of Surat, a city of 5 million in western India about five hours north of Mumbai. Vyas is a primary health worker in the Surat Municipal Corporation’s Vector Borne Diseases Control Department. He has spent 21 years on the job, and has seen his share of sickness and death. But his energy and sense of humor remain intact.

Vyas joined the city workforce in 1994, the year Surat exploded onto the front pages of newspapers worldwide in the aftermath of a virulent plague. More than 50 people died. Hundreds of thousands more, including migrant workers, fled the city out of fear; businesses across the city shut down.



Enterprises will account for 46 percent of Internet of Things (IoT) device shipments this year, BI Intelligence predicts. That’s not surprising when you consider the incredible predictions around IoT savings (billions, according to Business Insider) and IoT revenues ($14.4 trillion by 2022, according to this Forbes column).

But first, there will be raw data — terabytes of it, warns Elle Wood in a recent post for analytics vendor AppDynamics’ blog.

“With a sensor on absolutely everything – from cars and houses to your family members – it goes without saying there will be some challenges with these massive amounts of data,” Wood writes. “After all, IoT isn’t just about connecting things to the Internet; it’s about generating meaningful data.”



Wednesday, 10 June 2015 00:00

Quantifying supply chain risk

Today, more businesses around the world depend on efficient and resilient global supply chains to drive performance and achieve ongoing success. By quantifying where and how value is generated along the supply chain and overlaying the array of risks that might cause the most significant disruptions, risk managers will help their businesses determine how to deploy mitigation resources in ways that deliver the most return in strengthening the resiliency of their supply chains. At the same time, they will gain needed insights to make critical decisions on risk transfer and insurance solutions to protect their companies against the financial consequences of potential disruptions.

As businesses evaluate their supply chain risk and develop strategies for managing it, they might consider using a quantification framework, which can be adapted to any traditional or emerging risk.
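One simple form such a quantification framework can take is an expected-loss ranking: for each supply chain node, multiply the value it generates by the probability and severity of a disruption. The suppliers, dollar values, probabilities, and downtime fractions below are entirely hypothetical, included only to show the mechanics.

```python
# Illustrative supply chain risk quantification: expected annual loss
# per node = annual value generated x disruption probability x expected
# downtime fraction. All figures below are hypothetical.

suppliers = [
    # (name, annual value generated ($M), disruption probability, downtime fraction)
    ("Tier-1 electronics", 120.0, 0.05, 0.30),
    ("Single-source resin", 45.0, 0.15, 0.60),
    ("Regional logistics hub", 80.0, 0.08, 0.25),
]

def expected_loss(value, prob, downtime):
    return value * prob * downtime

# Rank nodes by expected loss to see where mitigation spend pays off most.
ranked = sorted(
    ((name, expected_loss(v, p, d)) for name, v, p, d in suppliers),
    key=lambda item: item[1],
    reverse=True,
)
for name, loss in ranked:
    print(f"{name}: expected annual loss ${loss:.2f}M")
```

In this invented example the lower-value but single-sourced supplier ranks highest, which is the kind of non-obvious result that makes quantification worth the effort.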



Helping your clients remain compliant with the laws and standards set forth by the governing bodies presiding over their industries is an essential component of the role of managed service providers (MSPs). When it comes to protecting sensitive data being stored in the cloud or transmitted via cloud-based file sharing, MSPs often need to protect their clients from themselves.

Among the industries that appear to be fighting this battle against their own personnel, perhaps none is more scrutinized than the healthcare industry. While there are many strict stipulations in place for handling sensitive health data, there are also many employees that have access to the data from a host of endpoints.

The healthcare industry’s HIPAA regulations go a long way towards ensuring that the private, sensitive, personal information of patients is handled very carefully. What the regulations don’t stipulate well enough, however, is the management of an organization’s own administrative, physical, and technical safeguards.  According to HealthIT Security, “If a recent survey is any indication, health and pharmaceutical companies, along with other industries, might be falling behind when it comes to protecting sensitive data.”



Does resilience in your enterprise spring from its senior management as a source of inspiration to all? Or is it perhaps embedded in your organisational culture, lovingly nurtured and developed over the years? Either possibility would be gratifying. However, some recent information suggests that neither is the primary source of resilience. Researchers Sarah Bond and Gillian Shapiro surveyed 835 employees from a cross-section of firms in Britain and found that 90% of those employees considered their resilience to come from within themselves, while only 10% thought their organisation provided them with resilience. If this is true more generally, there are some important consequences for any enterprise to consider.



Not surprisingly, I’ve heard from a lot of people regarding the announcement of the Office of Personnel Management (OPM) breach, but what Andy Hayter, security evangelist for G DATA, told me in an email jumped out at me – in part because of the imagery but also because it was eerily similar to a thought that I had. Hayter said:

I have to think that it must appear to threat actors all over the globe that the U.S. government's IT systems are full of holes, like Swiss cheese, and the response from the U.S. is to play whack-a-mole every time, in a valiant attempt to close each hole. With all of these attacks, it’s likely that each one is arming cyber criminals with exactly what they need and want to execute another one, and the vicious cycle continues. Unfortunately every time there's another breach on a Federal agency, it spells out our vulnerabilities loud and clear to our adversaries, letting them know there are many more opportunities for them to hack our systems and networks over and over again.

Whack-a-mole security. It really is easy to think that way. The OPM breach is just the latest – and perhaps most damaging because of the vast amount of data that could be compromised – incident within the federal government, and now we are at a point where we’re going to wait for the next incident to pop up.



There really isn’t anything new under the sun. More than a century ago, Nikola Tesla made great strides in his dream of the wireless transmission of electricity. Tesla came up short, but his dream increasingly is coming true more than a century later.

Popular Science and InformationWeek report on research from the University of Washington that could pave the way for devices to be charged by Wi-Fi. The InformationWeek story says that the approach, which of course is called power over Wi-Fi (PoWiFi), could work at up to 28 feet. Prototypes (temperature and camera sensors) are operational to 20 feet.

Popular Science has more detail, saying that about 1 watt of power is transmitted as a normal part of Wi-Fi operations. The technology is aimed at capturing and putting that energy to work. The 1 watt of power isn’t enough to charge phones or perform other higher-level jobs. However, many tasks associated with the Internet of Things (IoT) can be satisfied. Wrote Dave Gershgorn:

This technology isn’t new. Companies like Energous have already brought products to market that send power over similar Wi-Fi signals, and they claim to be able to charge cell phones. Yet the novel feature of PoWiFi is the ability to harness power with pre-existing hardware, and the University of Washington team says their routers transmit both power and data in the same signal.
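A back-of-envelope calculation shows why that ~1 watt matters for IoT but not for phones. The capture fraction and conversion efficiency below are assumptions for illustration, not measured PoWiFi figures.

```python
# Rough arithmetic: how much of the ~1 W a Wi-Fi router transmits might
# a small IoT sensor actually harvest? Capture fraction and rectifier
# efficiency are assumed values, not figures from the research.

transmit_power_w = 1.0      # power reported in a normal Wi-Fi signal
capture_fraction = 0.001    # tiny antenna, several feet from the router (assumed)
rectifier_efficiency = 0.5  # RF-to-DC conversion efficiency (assumed)

harvested_w = transmit_power_w * capture_fraction * rectifier_efficiency
seconds_per_day = 24 * 3600
joules_per_day = harvested_w * seconds_per_day

print(f"Harvested power: {harvested_w * 1e6:.0f} microwatts")
print(f"Energy per day:  {joules_per_day:.1f} joules")
```

Under these assumptions the sensor harvests on the order of hundreds of microwatts, tens of joules per day: far too little to charge a phone, but plausibly enough for a low-duty-cycle temperature or camera sensor of the kind the prototypes use.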



Tuesday, 09 June 2015 00:00

Getting a Handle on This Dev/Ops Thing

Is Dev/Ops for real, or is it simply the latest marketing tool to get you to buy more stuff for your data center? Or is it a little of both, a potentially revolutionary change to enterprise infrastructure management provided you can see through all the Dev/Ops-washing that is going on?

As with most technology initiatives, the concept behind Dev/Ops is solid – it offers a more flexible approach to the data-resource allocation challenges present in hyperscale and Big Data environments. But by the same token, success or failure is usually determined by the execution, not the initial design. So the real challenge with Dev/Ops is not in selecting the right platform but in taking the designs and concepts currently in the channel and making them your own.



Who was responsible for the recent U.S. Office of Personnel Management (OPM) data breach? Congressman Michael McCaul told CBS News that Chinese hackers could be the culprits in the incident that resulted in the theft of personal information from more than 4 million current and former federal employees.

And as a result, the OPM tops this week's list of IT security newsmakers to watch, followed by U.S. HealthWorks, the Dyre malware and CTERA Networks.

What can managed service providers (MSPs) and their customers learn from these IT security news makers? Check out this week's list of IT security stories to watch to find out:



(TNS) — Drone photography could soon take off for Victoria, Texas’ emergency responders.

Compared to the time, cost and challenges associated with using helicopters for search and rescue, drones could be a game-changer for the future of emergency response, said Emergency Management Coordinator Rick McBrayer.

Emergency responders used a drone at no cost to taxpayers to track in real time the Guadalupe River flood through Victoria. Now, officials are exploring the legality and permitting process to use drones again.



Tuesday, 09 June 2015 00:00

Attivio Updates Big Data Indexing Engine

For all the excitement that Big Data often generates within an organization, one of the fundamental challenges most organizations face comes down to data management plumbing. There’s no shortage of data, but organizing all of it in a way that makes it consumable by a Big Data analytics application is problematic.

To enable IT organizations to manage that process better, Attivio today launched an update to its namesake indexing engine for data within an enterprise that adds a range of self-service capabilities for business analysts and data scientists to identify and unify self-selected data tables from the universal index.

Attivio CEO Stephen Baker says Attivio is squarely focused on applying search and indexing technologies to better manage data assets within an enterprise. All too often, IT organizations have hundreds of enterprise applications, but no one is quite sure what data resides inside each. As a result, these same organizations wind up investing in hiring a data scientist, only to watch the person spend months trying to organize all the data inside the organization. Attivio, says Baker, provides a mechanism to reduce the manual effort associated with integrating all that data by as much as 80 percent.



What cloud services should managed service providers (MSPs) sell to customers, and how profitable can those services really be? These are questions that MSPs are grappling with every day right now. Service Leadership CEO Paul Dippell presided over several sessions at LabTech Automation Nation 2015 last week that provided perspective on these questions.

Here are some of the takeaways from a couple of Dippell's sessions, including an overview of the cloud market from his company and some real-world perspective from a panel of MSPs. Let’s start with an overview of the cloud market today.



(TNS) — In a narrow parking lot, Brett Kennedy and Sisir Karumanchi stand around what looks like a suitcase. But then four limbs extend from its sides, bending and clicking into position. Two spread out like legs and two rise up like arms as the robot goes through several poses, looking for all the world like a Transformer doing yoga.

This is RoboSimian, a prototype rescue robot whose builders at NASA's Jet Propulsion Laboratory hope can win the $2-million prize at the DARPA Robotics Challenge. The goal: to foster a new generation of rescue robots that could help save lives when the next disaster hits.

Twenty-four teams from around the U.S. and the globe have sent their best and brightest bots to compete in a grueling obstacle course — a robot Olympics, if you will.



(TNS) — On May 23, the extended Taylor family had just sat down for dinner at their River Road house when the phone rang. It was a pre-recorded call from Hays County emergency officials warning residents with homes along the Blanco River that the water was rising quickly and flooding was likely.

It was the first of several such calls his father-in-law took during the course of the meal, recalled Scott Sura. “But he sort of brushed it off. He’s been through several floods, and he wasn’t worried. In fact, he later went to bed.”

Across the river and downstream, on Flite Acres Road, Frances Tise said she and her husband Charles also fielded the emergency calls that evening. “But I had seen the river rise before, and it just came up to our backyard,” she said. “We just didn’t realize how fast it was coming up.”



AUSTIN, Texas – State and federal recovery officials urge Texans affected by the ongoing severe storms and floods to watch for and report any suspicious activity or potential fraud.

Even as government agencies and charitable groups continue to provide disaster assistance, scam artists, identity thieves and other criminals may attempt to prey on vulnerable survivors. The most common post-disaster fraud practices include phony housing inspectors, fraudulent building contractors, bogus pleas for disaster donations and fake offers of state or federal aid.

“Scam attempts can be made over the phone, by mail or email, or in person,” said Federal Coordinating Officer Kevin Hannes of Federal Emergency Management Agency (FEMA). “Con artists are creative and resourceful, so we urge Texans to remain alert, ask questions and require identification when someone claims to represent a government agency.”      

Survivors should also keep in mind that state and federal workers never ask for or accept money, and always carry identification badges with a photograph. There is no fee required to apply for or to get disaster assistance from FEMA, the U.S. Small Business Administration (SBA) or the state. Additionally, no state or federal government disaster assistance agency will call to ask for your financial account information; unless you place a call to the agency yourself, you should not provide personal information over the phone – it can lead to identity theft.

Those who suspect fraud can call the FEMA Disaster Fraud Hotline at 866-720-5721 (toll free). Complaints may also be made to local law enforcement agencies.

Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.  Follow us on Twitter at https://twitter.com/femaregion6.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955 or visiting SBA’s website at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call (800) 877-8339.


In a small, rural town in Southern Indiana, a public health crisis emerges.  In a community that normally sees fewer than five new HIV diagnoses a year, more than a hundred new cases are diagnosed and almost all are coinfected with hepatitis C virus (HCV).

How was this outbreak discovered, and what caused this widespread transmission? Indiana state and local public health officials – supported by CDC – set out to answer these questions and help stop the spread of HIV and HCV in this community.

The Outbreak

In January 2015, Indiana disease intervention specialists noticed that 11 new HIV diagnoses were all linked to the same rural community. This spike in HIV diagnoses, in an area never before considered high-risk for the spread of HIV, launched a larger investigation into the cause and impact of these related cases.

The investigation began with the 11 newly diagnosed cases. This process involved talking to the newly diagnosed individuals about their health and sexual behaviors, as well as past drug use. In the United States, HIV is spread mainly by having sex or sharing injection drug equipment, such as needles, with someone who has HIV.

Scanning electron micrograph of HIV-1 virions budding from a cultured lymphocyte.


In the case of the 11 related diagnoses in Indiana, almost all were linked to injection drug use. Investigators discovered that syringe-sharing was a common practice in this community, often used to inject the prescription drug Opana (oxymorphone, a powerful oral semi-synthetic opioid pain medicine). HIV can be spread through injection drug use when injection drug equipment, such as syringes, cookers (bottle caps, spoons, or other containers), or cottons (pieces of cotton or cigarette filters used to filter out particles that could block the needle), is contaminated with HIV-infected blood. The most common cause of HIV transmission from injection drug use is syringe-sharing. Persons who inject drugs (PWID) are also at risk for HCV infection. Co-infection with HCV is common among HIV-infected PWID: between 50% and 90% of HIV-infected persons who inject drugs are also infected with HCV.

The Investigation

“Contact tracing” is the process of identifying all individuals who may have potentially been exposed to an ill person, in this case a person infected with HIV.  Contact tracing involves interviewing the newly diagnosed patients to identify their syringe-sharing and sex partners.  These “contacts” are then tested for HIV and HCV infection, and if found infected are likewise interviewed to identify their syringe-sharing and sex partners. This cycle continues until no more new contacts are located.
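The contact-tracing cycle described above is, in effect, a breadth-first traversal of a contact graph: start from the diagnosed cases, test their contacts, then interview any newly positive contacts in turn until no new names surface. The sketch below uses invented names and links purely to illustrate the process.

```python
# Contact tracing as a breadth-first traversal: interview each case,
# test that case's contacts, and queue any who test positive for
# interviews of their own. All names and links here are invented.
from collections import deque

contacts = {  # who shared syringes with or was a sex partner of whom
    "case1": ["a", "b"],
    "a": ["c"],
    "b": [],
    "c": ["d"],
    "d": [],
}
positive = {"case1", "a", "c"}  # hypothetical test results

def trace(initial_cases):
    interviewed, to_interview = set(), deque(initial_cases)
    while to_interview:
        person = to_interview.popleft()
        if person in interviewed:
            continue
        interviewed.add(person)
        for contact in contacts.get(person, []):
            # every contact is tested; only positives are interviewed in turn
            if contact in positive:
                to_interview.append(contact)
    return interviewed

print(trace(["case1"]))  # interviews ripple outward until no new contacts appear
```

The `interviewed` set plays the role of the "cycle continues until no more new contacts are located" stopping condition: once everyone reachable through positive contacts has been interviewed, the traversal ends.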

As of May 18, contact tracing and increased HIV testing efforts throughout the community had identified 155 adult and adolescent HIV infections. The investigation has revealed that injection drug use in this community is a multi-generational activity, with as many as three generations of a family and multiple community members injecting together, and that, due to the short half-life of the drug, persons who inject drugs may have injected multiple times per day (up to 10 times in one case).

Early HIV treatment not only helps people live longer but it also dramatically reduces the chance of transmitting the virus to others.  People who do not have HIV and who are at high risk for HIV can also benefit more directly from the drugs used to treat HIV to prevent them from acquiring HIV.  This is known as pre-exposure prophylaxis (PrEP). Post-exposure prophylaxis, or PEP, is an option for those who do not have HIV but could have been potentially exposed in a single event.

The Response


So what is the next step in addressing this staggering outbreak? First, public health officials must work to get every person exposed to HIV tested. All persons diagnosed with HIV need to be linked to healthcare and treated with antiretroviral medication. Persons not infected with HIV are counseled on effective prevention and risk reduction methods, including condom use, PrEP, PEP, harm reduction, and substance abuse treatment. Getting messages about the benefits of HIV treatment to newly diagnosed individuals, and prevention information to at-risk members of the community, are key components of controlling this outbreak.

The underlying factors of the Indiana outbreak are not completely unique. Across the United States, many communities are dealing with increases in injection drug use and HCV infections; these communities are vulnerable to experiencing similar HIV outbreaks. CDC asked state health departments to monitor data from a variety of sources to identify jurisdictions that, like this county in Indiana, may be at risk of an IDU-related HIV outbreak.  These data include drug arrest records, overdose deaths, opioid sales and prescriptions, availability of insurance, emergency medical services, and social and demographic data. Although CDC has not seen evidence of another similar HIV outbreak, the agency issued a health alert to state, local, and territorial health departments urging them to examine their HIV and HCV surveillance data and to ensure prevention and care services are available for people living with HIV and/or HCV.
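The monitoring CDC asked states to do can be pictured as a composite vulnerability score: normalize each indicator across jurisdictions, weight it, and sum. The counties, indicator values, and weights below are invented; the point is only to show how multiple data sources combine into a single ranking.

```python
# Sketch of combining the kinds of indicators CDC asked states to monitor
# (drug arrests, overdose deaths, opioid prescribing) into one vulnerability
# score per county. Counties, values, and weights are all hypothetical.

indicators = {
    # county: (drug arrests per 10k, overdose deaths per 100k, opioid rx per 100)
    "County A": (12, 8, 60),
    "County B": (35, 25, 110),
    "County C": (20, 14, 85),
}
weights = (0.3, 0.4, 0.3)  # assumed relative importance of each indicator

def normalize(values):
    # Rescale one indicator to [0, 1] across all counties.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

counties = list(indicators)
columns = list(zip(*indicators.values()))   # one column per indicator
scaled = [normalize(col) for col in columns]

scores = {
    county: sum(w * scaled[i][j] for i, w in enumerate(weights))
    for j, county in enumerate(counties)
}
for county, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{county}: vulnerability score {score:.2f}")
```

A real model would need far more care (rates versus counts, missing data, insurance and demographic variables), but even this simple weighted sum captures the idea of flagging jurisdictions that look like the Indiana county before an outbreak occurs.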

The work that has been done thus far, as well as the continued efforts being made to address this response, highlights the importance of partnerships between federal, state, and local health agencies. The work done by the Indiana State Department of Health’s disease intervention specialists to link the initial HIV cases to this rural community, and the work of local health officials to respond quickly and thoroughly, investigate all possible exposures, and spread important prevention information, demonstrate the critical importance of strong public health surveillance and response.

The Division of HIV/AIDS Prevention commends the efforts of all the individuals involved in controlling the HIV outbreak in Indiana. The response illustrates that together we are committed to improving the health of our communities across the nation.

In addition to announcing that it is making its core engine available as an open source Project Apex technology, DataTorrent has released an update to its Big Data analytics software for Hadoop that eliminates the dependencies organizations now have on developers to create these applications.

John Fanelli, vice president of marketing for DataTorrent, says the latest version of DataTorrent enables individuals to assemble Big Data analytics applications without having to write code. In addition, end users can make use of a library of visualizations to create dashboards in a matter of minutes.

Finally, DataTorrent 3.0 comes with pre-built connectors for integrating with both enterprise applications and custom Java applications in addition to graphical tools that make it simpler to ingest data into a Big Data application.



Are you embarking on an IT career? Are you maybe a few years in and looking to make a big move in your career if you can find the right opportunity?

What are your expectations for your next IT job? Perhaps you expect the following:

  • To be treated by management with respect.
  • To have invigorating, exciting work and to feel your work is appreciated.
  • To have co-workers you admire and who admire you.
  • To be compensated well—because you’re worth it!

What you might want to do right now is write these expectations down.

Then, go out in the backyard and LIGHT THEM ON FIRE.

Congratulations! You have just liberated yourself from job disillusionment and career self-sabotage.



In the aftermath of the 2008 global financial crisis, postmortems were convened in countries around the world to identify what went wrong. A unanimous conclusion was that Boards of Directors of public companies in general, and financial institutions in particular, need to do more to oversee “management’s risk appetite and tolerance” if future crises are to be avoided.

This finding represents a significant paradigm shift in role expectations, while introducing a new concept the Financial Stability Board (FSB) has coined: effective “Risk Appetite Frameworks” (RAFs). Regulators around the world are now moving at varying speeds to implement these conclusions by enacting new laws and regulations. What regulators appear to be seriously underestimating is the amount of change necessary to make this laudable goal a reality.



I was leafing through a pile of old BCI documents when I stumbled across a paper detailing a presentation entitled “Resilience isn’t the future of business continuity”, given by Charlotte Newnham at the BCM World Conference and Exhibition in November 2012.

The presentation provided a number of facts and figures which explain a great deal about “actual” resilience capabilities. Among them: approximately 50% of existing resilience departments were in the public sector, and 76% of organisations extend the resilience remit to incident/emergency management. While these figures seem positive for the resilience function, only 30% oversaw security or risk management and just 7% had any involvement in IT continuity.



AUSTIN, Texas – Recovery specialists have some sound advice for Texans whose homes and property took on floodwaters: Protect your family’s health and your own by treating or discarding mold- and mildew-infected items.

Health experts urge those who find mold to act fast. Cleaning mold quickly and properly is essential for a healthy home, especially for people who suffer from allergies and asthma, said the Federal Emergency Management Agency (FEMA).

Mold and mildew can start growing within 24 hours after a flood, and can lurk throughout a home, from the attic to the basement and crawl spaces. The best defense is to clean, dry or, as a last resort, discard moldy items.

Although it can be hard to get rid of a favorite chair, a child’s doll or any other precious treasure to safeguard the well-being of your loved ones, a top-to-bottom home cleanup is your best defense, according to the experts.

Many materials are prone to developing mold if they remain damp or wet for too long. Start a post-flood cleanup by sorting all items exposed to floodwaters:

  • Wood and upholstered furniture, and other porous materials can trap mold and may need to be discarded.
  • Carpeting presents a problem because drying it does not remove mold spores. Carpets with mold and mildew should be removed.
  • However, glass, plastic and metal objects and other items made of hardened or nonporous materials can often be cleaned, disinfected and reused.

All flood-dampened surfaces should be cleaned, disinfected and dried as soon as possible. Follow these tips to ensure a safe and effective cleanup:

  • Open windows for ventilation and wear rubber gloves and eye protection when cleaning. Consider using a mask rated N-95 or higher if heavy concentrations of mold are present.
  • Use a non-ammonia soap or detergent to clean all areas and washable items that came in contact with floodwaters.
  • Mix 1-1/2 cups of household bleach in one gallon of water and thoroughly rinse and disinfect the area. Never mix bleach with ammonia as the fumes are toxic.
  • Cleaned areas can take several days to dry thoroughly. The use of heat, fans and dehumidifiers can speed up the drying process.
  • Check out all odors. It’s possible for mold to hide in the walls or behind wall coverings. Find all mold sources and clean them properly.
  • Remove and discard all materials that can’t be cleaned, such as wallboard, fiberglass and cellulose areas. Then clean the wall studs where wallboard has been removed, and allow the area to dry thoroughly before replacing the wallboard.

For other tips about post-flood cleanup, visit www.fema.gov, www.epa.gov, or www.cdc.gov.


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.  Follow us on Twitter at https://twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955 or visiting SBA’s website at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call (800) 877-8339.

In communicating with the business and the board about the consequences of data breaches, IT is always going to be asked to put dollar figures on them, which can be difficult to do, even with increasing access to predictive analytics and historical data from any previous breaches in the organization. One of the most extensive benchmark studies that IT can use to help with this is the Ponemon Institute’s annual “Cost of Data Breach Study: Global Analysis.” In its 10th year, and sponsored by IBM, the recently released 2015 edition covers 11 countries, 350 companies, and detailed data about direct and indirect costs of data breaches.

Three major factors are contributing to a rapid increase in the average cost of a data breach and the average cost per breached record – the latter varying by industry – according to Ponemon Institute Chairman and Founder Dr. Larry Ponemon:

“First, cyber attacks are increasing both in frequency and the cost it requires to resolve these security incidents. Second, the financial consequences of losing customers in the aftermath of a breach are having a greater impact on the cost. Third, more companies are incurring higher costs in their forensic and investigative activities, assessments and crisis team management."



The adoption rates have been slower than that of other industries, but financial institutions are finally starting to leverage the cloud in greater numbers. But the real story isn’t that they’re adopting it—it’s what they are adopting it for. As we discussed in a recent post, financial firms are more concerned about the security risks of cloud-based file sharing than most MSPs would like to hear.

CRM, application development, email and back-end services: these are the functions that most financial firms are prioritizing. Why is file sharing noticeably absent? In an interview with eWeek, Luciano Santos, vice president of research and member services at the Cloud Security Alliance, alluded to the reason:

"Primarily the top security concerns were more focused around data protection. Data confidentiality, data governance and data breach were the top-ranked security concerns identified by the financial institutions that participated."



(TNS) — Florida has more homes at risk from the devastating damage of hurricane-powered storm surges than any other state, according to a new study by CoreLogic, a California-based real estate information firm.

While the designation will come as no surprise to anyone living smack in the path of hurricane alley, the numbers reported by CoreLogic are sobering. More than 2.5 million homes in the state are at risk for some kind of damage from storm surge, according to the study. Rebuilding costs statewide from an extreme worst-case surge could amount to $491 billion — more than the gross domestic products of Austria, Chile, Venezuela or a dozen other countries.

In the tri-county area between Miami and West Palm Beach, CoreLogic found more than a half million homes are at risk. The company estimated rebuilding costs for a worst-case flooding from storm surge at $105 billion.



(TNS) — The Hawaii National Guard is holding the largest disaster preparedness exercise in its history with more than 2,200 participants from multiple states responding to a simulated hurricane and other events across Oahu, Hawaii island, Maui and Kauai.

Some Chinook and Black Hawk helicopter activity will be seen, Waimanalo will request assistance — possibly for debris clearance — a mass-casualty exercise will take place at the Queen’s Medical Center-West Oahu, and harbor chemical spills will be dealt with in Honolulu and on Hawaii island, officials said.

“It combines the civilian government and military organizations, and that’s important because we need to get the organizations working together — understanding each other’s capabilities — before we get to a natural disaster, a real natural disaster event,” said Brig. Gen. Bruce Oliveira, the head of the Hawaii Army National Guard.



Celebrating Europe's finest in the business continuity industry

At an awards ceremony at La Maison du Cygne, a prestigious 17th century building on the Grand Place in Brussels, Belgium, and once home to the city's butchers' guild, the Business Continuity Institute recognised the talent that exists in the business continuity industry across the continent as it held its annual European Awards.

The BCI Awards consist of nine categories – eight of which are decided by a panel of judges with the winner of the final category (Industry Personality of the Year) being voted upon by BCI members from across the region.

The winners were:

Continuity and Resilience Consultant of the Year 2015
Chris Needham-Bennett MBCI of Needhams 1834

Continuity and Resilience Professional of the Year 2015 (Private Sector)
Michael Crooymans CBCI of SOGETI

Continuity and Resilience Newcomer of the Year 2015
Jacqueline Howard CBCI of Marks and Spencer

Continuity and Resilience Team of the Year 2015
Ulster Bank Business Resilience Team

Continuity and Resilience Provider (Service/Product) of the Year 2015
Sungard Availability Services

Continuity and Resilience Innovation of the Year 2015
PinBellCom Ltd

Most Effective Recovery of the Year 2015

Industry Personality of the Year 2015
David Window MBCI of Continuity Shop

The BCI European Awards are one of seven regional awards held by the BCI, culminating in the annual Global Awards held in November during the Institute’s annual conference in London, England. All winners in the BCI European Awards are automatically entered into the Global Awards.

CloudEndure has released the results of a recent survey into public cloud usage, downtime, availability and disaster recovery.

The 2015 Public Cloud Disaster Recovery Survey looks at disaster recovery challenges and best practices. It also benchmarks the best practices of companies that host web applications in the public cloud. The survey received responses from 109 IT professionals from North America and Europe.

Key findings include:

  • The number one risk to system availability is human error, followed by network failures and cloud provider downtime.
  • While the vast majority of the organizations surveyed (83 percent) have a service availability goal of 99.9 percent or better, almost half of the companies (44 percent) had at least one outage in the past three months, and over a quarter (27 percent) had an outage in the past month.
  • The cost of a day of downtime in 37 percent of the organizations is more than $10,000.
  • When it comes to service availability, there is a clear gap between how organizations perceive their track record and the reality of their capabilities. While almost all respondents claim they meet their availability goals consistently (37 percent) or most of the time (50 percent), 28 percent of the organizations surveyed don’t measure service availability at all. It is hard to tell how these organizations claim to meet their goals when they are not able to measure them.
  • The top challenges in meeting availability goals are budget limitations, insufficient IT resources, and lack of in-house expertise.
  • There is a strong correlation between the cost of downtime and the average hours per week invested in backup and disaster recovery.
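To make an availability goal like the 99.9 percent cited above concrete, it helps to translate it into allowed downtime per year. A minimal sketch of that conversion:

```python
# Convert an availability percentage into the downtime it permits
# over a given period (default: one 8,760-hour year).
def allowed_downtime_hours(availability_pct: float,
                           period_hours: float = 365 * 24) -> float:
    """Hours of downtime allowed while still meeting the goal."""
    return period_hours * (1 - availability_pct / 100)

# "Three nines" permits roughly 8.76 hours of downtime per year;
# each extra nine cuts that by a factor of ten.
print(round(allowed_downtime_hours(99.9), 2))    # 8.76
print(round(allowed_downtime_hours(99.99), 3))   # 0.876
```

Seen this way, an organization with a 99.9 percent goal that suffered even one multi-hour outage in a quarter has likely already spent its annual budget.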

Read the survey report (registration required).

Friday, 05 June 2015 00:00

The Real Cost of IT Complexity

IT complexity is one of the enterprise’s biggest challenges, affecting every facet of the organization, from employees to customers.

But how do you define IT complexity, and what is the impact? Lucky for us, Oracle commissioned IDC to look at organizations that simplified their IT environment and to develop an index to quantify IT complexity’s impact.

According to IDC, IT complexity can be defined “as the state of an IT Infrastructure that leads to wasted effort, time, and expense.” Conditions contributing to this include:

  • Heterogeneous environments
  • Using outdated technologies
  • Server, application or data sprawl
  • Lack of sufficient management tools and automation
  • Siloed IT




Global temperature trends.

(Credit: NOAA)

A new study published online today in the journal Science finds that the rate of global warming during the last 15 years has been as fast as or faster than that seen during the latter half of the 20th Century. The study refutes the notion that there has been a slowdown or "hiatus" in the rate of global warming in recent years.


The study is the work of a team of scientists from the National Oceanic and Atmospheric Administration's (NOAA) National Centers for Environmental Information* (NCEI) using the latest global surface temperature data.

"Adding in the last two years of global surface temperature data and other improvements in the quality of the observed record provide evidence that contradict the notion of a hiatus in recent global warming trends," said Thomas R. Karl, L.H.D., Director, NOAA's National Centers for Environmental Information. "Our new analysis suggests that the apparent hiatus may have been largely the result of limitations in past datasets, and that the rate of warming over the first 15 years of this century has, in fact, been as fast or faster than that seen over the last half of the 20th century." 

The apparent observed slowing or decrease in the upward rate of global surface temperature warming has been nicknamed the "hiatus." The Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report, released in stages between September 2013 and November 2014, concluded that the upward global surface temperature trend from 1998-2012 was markedly lower than the trend from 1951-2012.


Since the release of the IPCC report, NOAA scientists have made significant improvements in the calculation of trends and now use a global surface temperature record that includes the most recent two years of data, 2013 and 2014, the hottest year on record. The calculations also use improved versions of both sea surface temperature and land surface air temperature datasets. One of the most substantial improvements is a correction that accounts for the difference in data collected from buoys and ship-based data.

No slowdown in global warming.

(Credit: NOAA)

Prior to the mid-1970s, ships were the predominant way to measure sea surface temperatures, and since then buoys have been used in increasing numbers. Compared to ships, buoys provide measurements of significantly greater accuracy. "In regards to sea surface temperature, scientists have shown that across the board, data collected from buoys are cooler than ship-based data," said Dr. Thomas C. Peterson, principal scientist at NOAA's National Centers for Environmental Information and one of the study's authors. "In order to accurately compare ship measurements and buoy measurements over the long-term, they need to be compatible. Scientists have developed a method to correct the difference between ship and buoy measurements, and we are using this in our trend analysis." 

In addition, more detailed information has been obtained regarding each ship's observation method. This information was also used to provide improved corrections for changes in the mix of observing methods.   

New analyses with these data demonstrate that incomplete spatial coverage also led to underestimates of the true global temperature change previously reported in the 2013 IPCC report. The integration of dozens of data sets has improved spatial coverage over many areas, including the Arctic, where temperatures have been rapidly increasing in recent decades. For example, the release of the International Surface Temperature Initiative databank, integrated with NOAA's Global Historical Climatology Network-Daily dataset and forty additional historical data sources, has more than doubled the number of weather stations available for analysis.

Lastly, the incorporation of additional years of data, 2013 and 2014, with 2014 being the warmest year on record, has had a notable impact on the temperature assessment. As stated by the IPCC, the "hiatus" period 1998-2012 is short and began with an unusually warm El Niño year. However, over the full period of record, from 1880 to present, the newly calculated warming trend is not substantially different from that reported previously (0.68°C/century (new) vs 0.65°C/century (old)), reinforcing that the new corrections mainly have an impact in recent decades.
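Trend figures like the 0.68°C/century and 0.65°C/century quoted above come from fitting a straight line to the temperature record. A minimal least-squares sketch, using made-up anomaly values rather than NOAA's actual data:

```python
# Estimate a linear warming trend (degrees per century) from a series
# of annual temperature anomalies via ordinary least squares.
def trend_per_century(years, anomalies):
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
             / sum((x - mean_x) ** 2 for x in years))
    return slope * 100  # degrees per year -> degrees per century

# Hypothetical anomalies rising 0.05 degrees every 5 years.
years = [2000, 2005, 2010, 2015]
anomalies = [0.40, 0.45, 0.50, 0.55]
print(round(trend_per_century(years, anomalies), 2))  # 1.0
```

The study's point about short windows follows directly: over a span as short as 1998-2012, a single unusually warm starting year shifts the fitted slope far more than it does over the full 1880-to-present record.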

On the Web

* Note: NOAA's National Centers for Environmental Information (NCEI) is the merger of the National Climatic Data Center, National Geophysical Data Center, and National Oceanographic Data Center as approved in the Consolidated and Further Continuing Appropriations Act, 2015, Public Law 113-235. From the depths of the ocean to the surface of the sun and from million-year-old sediment records to near real-time satellite images, NCEI is the nation's leading authority for environmental information and data. For more information go to: http://www.ncdc.noaa.gov/news/coming-soon-national-centers-environmental-information 


NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.

Friday, 05 June 2015 00:00

What to Do About Reputation Risk

Of executives surveyed, 87% rate reputation risk as either more important or much more important than any other strategic risks their companies face, according to a new study from Forbes Insights and Deloitte Touche Tohmatsu Limited. Further, 88% say their companies are explicitly focusing on managing reputation risk.

Yet a bevy of factors contribute to reputation risk, making monitoring and mitigating the dangers seem particularly unwieldy. These include business decisions and performance in the following areas:



Friday, 05 June 2015 00:00

Storm Surge: The Trillion Dollar Risk

More than 6.6 million homes on the Atlantic and Gulf coasts are at risk of hurricane-driven storm surge with a total reconstruction cost value (RCV) of nearly $1.5 trillion.

The latest annual analysis from CoreLogic finds that the Atlantic Coast has more than 3.8 million homes at risk of storm surge in 2015 with a total projected reconstruction cost value of $939 billion, while the Gulf Coast has just under 2.8 million homes at risk and nearly $549 billion in potential exposure.

Which states have the highest total number of properties at risk?

Six states—Florida, Louisiana, New York, New Jersey, Texas and Virginia—account for more than three-quarters of all at-risk homes across the United States. Florida has the highest total number of properties at various risk levels (2.5 million), followed by Louisiana (769,272), New York (464,534), New Jersey (446,148), Texas (441,304) and Virginia (420,052).



Now that management science has taught us how to quantify so many other things, crisis management is a good candidate for being awarded its own scale of seriousness too. The detail you put into such a scale will depend on how much crises afflict your enterprise. If you are battling a continual stream of problems, your scale may be finer (say, 1 to 10), in order to sort out the life-and-death situations from the nuisances. Otherwise, a high-medium-low system of ranking may be sufficient, as long as there are clear definitions for crises to be categorised correctly. So, how does this work in practice?



Thursday, 04 June 2015 00:00

Implications of the All-Flash Data Center

From a performance perspective, the all-Flash data center certainly makes a lot of sense. In an age when the movement of data from place to place is more important than the amount of data that can be stored or processed in any given location, high I/O in the storage array should be a top priority.

But while no one disputes the efficacy of Flash over disk and tape when it comes to speed, the question remains: Does the all-Flash data center still make sense for the enterprise? And if so, what impact will this have on other systems and architectures up and down the stack?

HP recently pushed the envelope on the all-Flash data center a little further with a new line-up of arrays and services for the 3PAR StoreServ portfolio. The set-up is said to improve performance, lower the physical footprint of storage and reduce cost to about $1.50 per usable GB, which is about 25 percent less than current equivalent solutions. The company is already reporting workload performance of 3.2 million IOPS with sub-millisecond latency among its Flash drives, and the 3PAR family’s Thin Express ASIC provides a high degree of data resiliency between the StoreServ array and the ProLiant server to reduce transmission errors.
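The per-gigabyte figure quoted for the 3PAR line translates into concrete budget numbers. A quick sketch of the arithmetic, using only the article's figures (not vendor list prices):

```python
# Work out the implied costs behind the quoted $1.50 per usable GB,
# said to be about 25 percent less than current equivalent solutions.
cost_per_gb = 1.50                       # USD per usable GB (quoted)
prior_cost_per_gb = cost_per_gb / 0.75   # implied cost of equivalents

usable_tb = 100
print(cost_per_gb * usable_tb * 1000)    # 150000.0 -> $150k per 100 usable TB
print(round(prior_cost_per_gb, 2))       # 2.0 -> implied ~$2.00/GB previously
```

At those numbers, the claimed 25 percent reduction saves roughly $50,000 per 100 usable TB, which is the kind of figure that makes all-flash arrays start to compete with disk on cost rather than only on I/O.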



(TNS) — Rice University civil engineering professor Philip Bedient is an expert on flooding and how communities can protect themselves from disaster. He directs the Severe Storm Prediction, Education and Evacuation from Disasters Center at Rice University.

On Memorial Day evening, Houston suffered massive flooding after getting nearly 11 inches in 12 hours. Bedient designed the Flood Alert System — now in its third version — which uses radar, rain gauges, cameras and modeling to indicate whether Houston's Brays Bayou is at risk of overflowing and flooding the Texas Medical Center. In an interview with Ryan Holeywell, editor of the Kinder Institute's "Urban Edge" blog, Bedient said more places need this kind of warning system.



Wednesday, 03 June 2015 00:00

Datameer Applies Data Governance to Hadoop

One of the biggest inhibitors to applying Hadoop in any production environment is the general lack of governance tools for IT organizations to use to manage access permissions for the data that resides there.

To address that issue, Datameer today announced it has embedded a raft of data governance tools inside its analytics software that runs natively on Hadoop.

Matt Schumpert, director of product management at Datameer, says that because its software runs in memory as a Hadoop application, responsibility for data governance within Hadoop naturally falls to Datameer.



Financial firms are tasked with a lot of different responsibilities, not the least of which is the responsibility to protect sensitive data and information.  When it comes to the resistance on the part of financial firms choosing to adopt cloud services for data storage and cloud-based file sharing, managed service providers (MSPs) need to preach security as everyone’s top priority.

According to How Cloud is Being Used in the Financial Sector, a recent study from the Cloud Security Alliance (CSA), a large number of security concerns are keeping financial firms on the sidelines of cloud computing. Chief among those concerns is data security apprehension.



Wednesday, 03 June 2015 00:00

Nepal: Risk from the Theoretical to Reality

The Nepal earthquake, which triggered massive destruction from the Himalayan Mountains to India, is more than a tragic story of bad luck. It’s an example of how little we really understand risk and the consequences of our inability to fully absorb events such as earthquakes in Nepal and Haiti and other natural disasters that are so devastating.

Even our perception of these events is skewed. According to the USGS website, the U.S. government’s official site for monitoring earthquakes, approximately one major earthquake of magnitude 8.0 or greater has occurred each year over the last 24 years. We tend to discount major disasters in our own lives while believing there is a higher probability that others may suffer calamity. That may explain why so many say we “never saw that coming” when disaster strikes. Many of these quakes caused little damage and few or no deaths, but we remember the ones with a high death toll and quickly dismiss the others.

Given our inability to look into the future, the question is: has our world become more or less risky?  Well, it depends!  For many of us, the perception of risk depends on our own circumstances.  Let’s take two people of similar age but from remarkably different backgrounds.



According to a new study conducted by PwC and commissioned by the UK Government to raise awareness of the growing cyber threat, the average cost of the single worst online security breach suffered by big businesses is between £1.46m and £3.14m, up from £600k – £1.15m in 2014. The Information Security Breaches Survey 2015 highlights the rising costs of malicious software attacks and staff related breaches, and illustrates the need for companies to take action. And it is all companies, not just big business, as the research also shows that the equivalent costs for small business is £75k – £311k, up from £65k – £115k a year ago.

It is not just costs that are high, but occurrence too, as the survey also revealed that 90% of large organisations reported they had suffered an information security breach, while 74% of small and medium sized businesses reported the same. The median number of breaches for large organisations was 14 (down from 16 in 2014) while for small businesses it was four (down from six last year). The problem is unlikely to go away as 59% of respondents to the survey expect there will be more security incidents in the coming year.

These figures may not come as a surprise to business continuity professionals who have consistently expressed concern about data breaches, the disruption they can cause and the cost as a consequence. The latest Horizon Scan report published by the Business Continuity Institute revealed that 74% of respondents to a survey expressed concern or extreme concern at the prospect of a data breach occurring and, along with cyber attacks, it has been a top three threat since the survey began.

Attacks from outsiders have become a greater threat for both small and large businesses with 69% of large organisations and 38% of small organisations being attacked by an unauthorised outsider in the last year, although Denial of Service (DoS) attacks have actually decreased with only 30% of large organisations and 16% of small organisations being attacked in such a way. The outsider threat may be high, but when asked about the single worst breach, 50% of organisations stated that it was due to inadvertent human error.

Digital Economy Minister Ed Vaizey said: "The UK’s digital economy is strong and growing, which is why British businesses remain an attractive target for cyber-attack and the cost is rising dramatically. Businesses that take this threat seriously are not only protecting themselves and their customers’ data but securing a competitive advantage."

Andrew Miller, Cyber Security Director at PwC, said: "With 9 out of 10 respondents reporting a cyber breach in the past year, every organisation needs to be considering how they defend and deal with the cyber threats they face. Breaches are becoming increasingly sophisticated, often involving internal staff to amplify their effect, and the impacts we are seeing are increasingly long-lasting and costly to deal with."

Companies are learning the hard way that there’s a downside to data democratization: more data silos.

“On the heels of the consumerization of enterprise software and the growing ubiquity of easy-to-use analytics tools, silos appear to be coming back in all their former collaboration-stifling glory as individual teams and departments pick and choose different tools for different purposes and data sets without enterprise-level oversight,” writes Katherine Noyes in a recent Computerworld article exploring this growing problem.

It’s hard to hear in this age of Big Data and data lakes, but in hindsight, it really isn’t surprising. SaaS made it possible for the lines of business to choose their own applications with nothing more than a credit card. Then Apple tipped the balance on personal devices. Finally, Amazon and others democratized storage and Big Data processing power. It only makes sense that analytics — and more data — would leave the centralizing influence of IT and segregate into silos.



As a Public Information Officer, Mike was used to communicating health information to the people of his state. When word came that a major hurricane was approaching, he knew people would be facing fear and uncertainty. How could he make sure that the right information got to the right people? How should he react to the public’s negative emotions and false information? Most importantly, how could he help to protect health and lives? Mike knew exactly where to begin: with the principles of CDC’s Crisis and Emergency Risk Communication training.

CDC’s Crisis and Emergency Risk Communication (CERC) program teaches you how to craft messages that tell the public what the situation means for them and their loved ones, and what they can do to stay safe.

CERC provides a set of principles that teach effective communication before, during, and after an emergency. The six principles of CERC are:

  1. Be First
  2. Be Right
  3. Be Credible
  4. Express Empathy
  5. Promote Action
  6. Show Respect

The CDC CERC program has resources, training, and shared learning where you can participate in online training and receive continuing education credits. CERC also has CERC in Action stories from other public health professionals who have successfully applied CERC to an emergency response.

Communicating during an emergency is challenging, but you’re not alone! CERC can help you figure out how to get the right information to the right people at the right time whether you’re dealing with a family emergency or a hurricane.

CERC in Action

Frozen power line.

PHPR: Health Security in Action

This post is part of a series designed to profile programs from CDC’s Office of Public Health Preparedness and Response.

CERC and CERC training are a service provided by CDC’s Office of Public Health Preparedness and Response’s (OPHPR) Division of Emergency Operations.


Wednesday, 03 June 2015 00:00

Five Myths About the Commoditization of IT

“Commodity” is a bad word among technologists. It implies standardized, unchanging, noninnovative, boring, and cheap. Commodities are misunderstood. This post seeks to dispel some of the myths around the commoditization of IT services (i.e., the cloud).



(TNS) — When a powerful earthquake in March 2011 triggered a tsunami that devastated Japan’s Fukushima-Daiichi nuclear plant and raised radiation to alarming levels, authorities contemplated sending in robots first to inspect the facility, assess the damage and fix problems where possible. But the robots could not live up to the task and eventually, humans had to complete most of the hazardous work.

Ever since, Defense Advanced Research Projects Agency (DARPA), an agency under the U.S. Department of Defense, has been working to improve the quality of robots. It is now conducting a global competition to design robots that can perform dangerous rescue work after nuclear accidents, earthquakes and tsunamis.

The robots are tested for their ability to open doors, turn valves, connect hoses, use hand tools to cut panels, drive vehicles, clear debris and climb a stair ladder — all tasks that are relatively simple for humans, but very difficult for robots.



AUSTIN, Texas – Texans who sustained property damage as a result of the ongoing severe storms and flooding are urged to register with the Federal Emergency Management Agency (FEMA), as they may be eligible for federal and state disaster assistance.

The presidential disaster declaration of May 29 makes disaster aid available to eligible families, individuals and business owners in Hays, Harris and Van Zandt counties.  

“FEMA wants to help Texans begin their recovery as soon as possible, but we need to hear from them in order to do so,” said FEMA’s Federal Coordinating Officer (FCO) Kevin Hannes. “I urge all survivors to contact us to begin the recovery process.”

People who had storm damage in Harris, Hays, and Van Zandt counties can register for FEMA assistance online at www.DisasterAssistance.gov or via smartphone or web-enabled device at m.fema.gov. Applicants may also call 800-621-3362 or (TTY) 1-800-462-7585 from 6 a.m. to 9 p.m. daily. Flood survivors statewide can call and report their damage to give the state and FEMA a better idea of the assistance that is needed in undesignated counties.

Assistance for eligible survivors can include grants for temporary housing and home repairs, and for other serious disaster-related needs, such as medical and dental expenses or funeral and burial costs. Long-term, low-interest disaster loans from the U.S. Small Business Administration (SBA) also may be available to cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations.

Eligible survivors should register with FEMA even if they have insurance. FEMA cannot duplicate insurance payments, but under-insured applicants may receive help after their insurance claims have been settled.

Registering with FEMA is required for federal aid, even if the person has registered with another disaster-relief organization such as the American Red Cross, or local community or church organization. FEMA registrants must use the name that appears on their Social Security card. Applicants will be asked to provide:

  • Social Security number
  • Address of the damaged home or apartment
  • Description of the damage
  • Information about insurance coverage
  • A current contact telephone number
  • An address where they can get mail
  • Proof of residency, such as a utility bill, rent receipts or mortgage payment record
  • Bank account and routing numbers if they want direct deposit of any financial assistance.

(TNS) — Thousands of Pinellas County, Fla., beach residents and business owners could hit an unexpected road block trying to return to the barrier islands after a storm evacuation.

Pinellas County Sheriff Bob Gualtieri said Monday his office, working with beach city governments, has developed a hang-tag identification system to allow drivers quick access to the islands after an evacuation.

However, since the program rolled out in February, only 17,000 hang tags have been handed out, while Gualtieri estimates about 88,000 people will need them.

“That gives me a lot of concern,” he said, urging people to get the tags as soon as possible.



What does the phrase “needle in a haystack” mean to you? For many, it implies the impossible or something that can’t be done. As an MSP, don’t you strive to do the seemingly impossible for your customers? It sure will endear them to you.

One feature that can help you triumph over “needle in a haystack” scenarios is granular recovery. Think back to a customer that got hit with CryptoLocker or perhaps had a rogue employee who deleted important files. No doubt your customers had that empty feeling that their valuable data was unrecoverable. With granular recovery, recovering that data is not only possible, but easy. You can search documents, emails and attachments by keyword and restore exactly what you need. Now, won’t that impress your customers?
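In practical terms, granular recovery amounts to keyword search over an indexed backup catalog followed by a targeted restore. A minimal illustrative sketch, assuming a simple in-memory catalog; the entries, paths, and structure here are hypothetical and not any vendor's actual API:

```python
# Hypothetical backup catalog: each entry pairs a backed-up file's path
# with the indexed text extracted from it at backup time.
backup_catalog = [
    {"path": "mail/q2-forecast.eml", "text": "Q2 revenue forecast attached"},
    {"path": "docs/payroll-2015.xlsx", "text": "payroll summary May 2015"},
]

def find(keyword):
    """Return paths of catalog entries whose indexed text mentions the keyword."""
    kw = keyword.lower()
    return [e["path"] for e in backup_catalog if kw in e["text"].lower()]

print(find("payroll"))  # ['docs/payroll-2015.xlsx']
```

Once the matching items are identified, only those files are pulled from the backup set, rather than restoring an entire image.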



Downtime to the broadband connection is now one of the major threats facing today’s organizations, so why are many businesses not considering resilience when purchasing broadband or looking at how broadband failure fits into the disaster recovery plan? Mike van Bunnens, managing director, Comms365, explores the issue.

What is the most important consideration for a business buying a new broadband connection? From the way many businesses are making the investment decision, the answer appears to be cost, with most expecting the same rock-bottom prices on offer in the domestic market. But with more and more businesses running VoIP and cloud-based applications, their choice of broadband connection is critical. Any glitch in service will have a massive knock-on effect on productivity and customer relationships. So why are businesses not considering resilience or how broadband failure fits into the disaster recovery plan? Why are many not even ascertaining the speed and quality of the broadband options before moving to new office premises?

A high-quality, resilient broadband connection is now one of the most critical aspects of any business’s setup. So why are business owners still applying domestic thinking to business-critical communications?



BSI is seeking feedback on the draft BS 12999 standard. ‘BS 12999 Damage Management - Stabilization, mitigation, and restoration of properties, contents, facilities and assets following incident damage’ is intended to provide recommendations to individuals and organizations involved in carrying out damage management. It will be applicable to domestic, commercial and public buildings and includes the following main contents:

  • Introduction
  • Scope
  • Terms, definitions and abbreviations
  • Damage incident instructions, intake and response planning
  • On-site damage assessment
  • Stabilization
  • Damage scoping
  • Damage recovery and restoration
  • Completion sign-off and handover.

The deadline for comments is June 30th 2015.

Click here to read the draft standard and take part in the consultation.

‘Agile’ is still a buzzword. That’s quite a feat in today’s high-speed business and technological environments, where concepts date so rapidly. The original ‘Manifesto for Agile Software Development’ appeared in 2001, some 14 years ago. Since then, the word and the concept it labels have been applied to different business areas, including marketing and supply chain operations. Recently, it has also cropped up in the phrase ‘agile recovery’. But is this taking the ‘agile concept’ too far?



Last week, we learned that cybercriminals undermined the identity verification of the IRS’ Get Transcript app and gained access to the tax returns of 104,000 US citizens, so it’s only fitting that in this analyst spotlight we interview one of the team’s leading analysts for identity and access management (IAM), VP and Principal Analyst, Andras Cser. Andras consistently produces some of the most widely read research, not just for our team but across all of Forrester. And clients seek his insight across a number of coverage areas beyond IAM, including cloud security, enterprise fraud management, and secure payments. As the tallest member of our S&R team at 6’5”, Andras also provides guidance to clients on the emerging fields of height intel and altitude management.




Before joining Forrester, Andras worked as a security architect at Netegrity and then CA Technical Services. He also worked in a number of technical and sales capacities at Sun Microsystems prior to joining Netegrity. In his roles on the vendor-side, he architected and implemented IAM and provisioning solutions at Fortune 500 companies.


Listen to this month’s podcast below to hear Andras talk about his most common client questions, counterintuitive insights, and vendors to watch. And as you can tell from our analyst interview, Andras prides himself on being clear and concise.



WASHINGTON – Today, the Federal Emergency Management Agency (FEMA) urges residents across the nation to prepare for the 2015 Atlantic Hurricane season, which begins today and runs through November 30. 

Hurricanes and tropical systems can cause serious damage to both coastal and inland areas. Their hazards can come in many forms, including storm surge, heavy rainfall, inland flooding, high winds, and tornadoes. To prepare for these powerful storms, FEMA is encouraging families, businesses, and individuals to be aware of their risks; know their sources of reliable information; prepare their homes and workplaces; and be familiar with evacuation routes.

“One hurricane hitting where you live is enough to significantly disrupt your life and make for a very bad hurricane season,” said FEMA Administrator Craig Fugate. “Every person has a role to play in being prepared – you should know if you live or work in an evacuation zone and take time now to learn that route so you’re prepared to protect yourself and your family from disaster.”

This year, FEMA is placing an emphasis on preparing communities to understand the importance of evacuations, which are more common than many people realize. When community evacuations become necessary, local officials provide information to the public through the media. In some circumstances, other warning methods, such as text alerts, emails, or telephone calls, are used. Information on evacuation routes and places to stay is available at www.ready.gov/evacuating-yourself-and-your-family.

Additionally, knowing and practicing what to do in an emergency, in advance of the event, can make a difference in the ability to take immediate and informed action, and enable you to recover more quickly. To help communities prepare and enhance preparedness efforts nationwide, FEMA is offering two new products.

  • FEMA launched a new feature to its App, available for free in the App Store for Apple devices and Google Play for Android devices. The new feature enables users to receive weather alerts from the National Weather Service for up to five locations anywhere in the United States, including U.S. territories, even if the mobile device is not located in the weather alert area. The app also provides information on what to do before, during, and after a disaster in both English and Spanish.
  • The Ready campaign and America’s PrepareAthon! developed a social media toolkit that you can download and share with others at www.ready.gov/ready2015. The kit contains information on actions communities can take to practice getting ready for disasters.

While much attention is often given to the Atlantic Hurricane Season, tropical systems can affect other U.S. interests as well. The Eastern Pacific Hurricane Season runs from May 15 through November 30, and the Central Pacific Hurricane Season from June 1 through November 30. To learn more about each hurricane season and the geographical areas they may affect, visit www.noaa.gov.

Additional tips and resources:

  • Learn how to prepare for hurricane season at www.ready.gov/hurricanes
  • Talk with your family today about how you will communicate with each other during a significant weather event when you may not be together or during an evacuation order. Download the family communication plan at www.ready.gov/family-communications.
  • For information on how to create an emergency supply kit, visit www.ready.gov/build-a-kit
  • Consider how you will care for pets during an evacuation by visiting www.ready.gov/caring-animals
  • Use the Emergency Financial First Aid Kit (EFFAK) to identify your important documents, medical records, and household contracts. When completing the kit, be sure to include pictures or a video of your home and your belongings and keep all of your documents in a safe space. The EFFAK is a joint publication from Operation Hope and FEMA. Download a copy at www.ready.gov/financial-preparedness.
  • If you own or manage a business, visit www.ready.gov/business for specific resources on response and continuity planning.
  • The National Weather Service proactively sends free Wireless Emergency Alerts, or WEAs, to most cell phones for hurricanes, tornadoes, flash flooding and other weather-related warnings. State and local public safety officials may also send WEAs for severe or extreme emergency conditions. If you receive a Wireless Emergency Alert on your cell phone, follow the instructions, take protective action and seek additional information from local media. To determine if your wireless device can receive WEA alerts contact your wireless carrier for more information or visit www.ctia.org/WEA.


FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.

The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

(TNS) — While the global fracking boom has stabilized North America’s energy prices, Chicago — America’s third largest city and the busiest crossroads of the nation’s railroad network — has become ground zero for the debate over heavy crude moved by oil trains.

With the Windy City experiencing a 4,000 percent increase in oil-train traffic since 2008, Chicago and its many densely populated suburbs have become a focal point as Congress considers a number of safety reforms this year.

Many oil trains are 100 or more cars long, carrying hydraulically fracked crude and its highly explosive, associated vapors from the Bakken region of Montana, North Dakota, Saskatchewan, and Manitoba.



Hackers illegally accessed the personal information of 104,000 taxpayers this spring, according to the U.S. Internal Revenue Service (IRS).

And as a result, the IRS tops this week's list of IT security newsmakers to watch, followed by Woolworths, Google (GOOG) and Kaspersky Lab.

What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:



(TNS) — It was the year they ran out of names.

The hurricane season that began 10 years ago Monday generated so many storms — 27 in all — that, for the first time since officials started using names in 1953, they went through a list of 21 names and had to start on the Greek alphabet: from Arlene on June 9, just nine days in, to Zeta, which finally fizzled on Jan. 6, 2006, a month after that manic 6-month season officially ended.

Right in the middle was Katrina, which raised serious issues that had little to do with meteorology. And for South Florida, so late in the season that its cleanup competed with Halloween preparations, was Wilma. It brought billions in damage, much of that to Palm Beach County, still recovering from two hurricanes three weeks apart in the previous year's "mean season."



(TNS) — Staring at an image of your home and neighborhood inundated with 2, 6 or maybe 9 feet of rushing water from a hurricane storm surge can be horrifying.

At least that’s what Pinellas County Emergency Management Director Sally Bishop is hoping.

As the 2015 hurricane season dawns on Monday, Bishop is unveiling her department’s newest tool for storm preparation: a Storm Surge Protector computer application that gives people a realistic view of what can happen when a hurricane comes ashore.



Last week the Ponemon Institute rolled out the results of yet another Global Cost of Data Breach report and, surprising very few people in the security world, the stats show costs rising again. Sponsored by IBM, the report benchmarked 350 companies across 11 countries. It found that the consolidated total cost of a breach has now risen to $3.8 million, about 23 percent higher than the figure back in 2013. They're compelling statistics for anyone in the managed services world trying to offer customers justification for improved security coverage.

According to the report, there are three big factors that are contributing to the rising costs of breaches.



The data breach at the IRS that left the personal information of 104,000 taxpayers in the hands of thieves is the latest wrinkle in a mammoth problem faced by tax authorities: Identity theft and its crippling consequences.

An unprecedented surge in online tax scams by increasingly sophisticated criminals has challenged the IRS to respond quickly to get ahead of the fraudsters, especially during this year’s tax season after hackers targeted TurboTax, the country’s largest online filing service.

The vulnerability of taxpayers’ personal data was identified last fall by the IRS’s independent watchdog as the agency’s number one problem. Tax officials estimate that the government has lost billions of dollars in recent years to fraudulent refunds filed by hackers who steal personal information on tax returns, then use it to claim a refund in a taxpayer’s name before he or she files.



FEMA Officials Encourage Those With Concerns about Hurricane Sandy Flood Insurance Claims to Call 866-337-4262

WASHINGTON – The Federal Emergency Management Agency’s (FEMA) National Flood Insurance Program (NFIP) announced the start of Hurricane Sandy flood insurance claims review. The review is part of a broad process to reform NFIP claims and appeals procedures.       

FEMA opened the Hurricane Sandy claims review process and began mailing letters to approximately 142,000 NFIP policyholders, offering them an opportunity to have their claims from Hurricane Sandy reviewed. In the review, policyholders who have not pursued litigation or already received the maximum amount under their policy will have an opportunity to have their files reviewed. FEMA will contact policyholders and explain how to request this review.

“Flood insurance issues arising from Hurricane Sandy are of great concern to FEMA,” said Deputy Associate Administrator for Federal Insurance Brad Kieserman. “We are committed to administering a program that is survivor-centric and helps policyholders recover from flooding in a fair, transparent, and expeditious way. I encourage anyone who suspects they may have been treated unfairly to call 866-337-4262.”

Flooding is the most common natural disaster in the United States. Between 1980 and 2013, the United States suffered more than $260 billion in flood-related damages. Flood insurance is a vital service that protects communities from the most common and costly disaster we face, and those who purchase insurance must be able to count on it being there when it is needed to help rebuild their lives.

Policyholders who incurred losses from Hurricane Sandy from Oct. 27, 2012, through Nov. 6, 2012, and want their claim reviewed may contact FEMA by:

  • Calling toll-free at 866-337-4262.
  • Email by downloading an application online and submitting it to the email address provided there.
  • Fax by downloading an application online and submitting it to 202-646-7970.
  • For individuals who are deaf, hard of hearing or have a speech disability using 711 or VRS, please call 1-866-337-4262.  For individuals using a TTY, please call 800-462-7585.

As FEMA reviews Hurricane Sandy claim files, the agency will also begin overhauling the claims and appeal process and improving the customer experience. FEMA’s goals are excellent customer experience, responsiveness, transparency, low risk of waste, fraud and abuse, and continuous improvement. While settling these legal matters, FEMA is instituting additional oversight of Write Your Own insurance companies to hold them accountable.

FEMA will continue to work closely with Congress and federal, state, local, tribal, and community officials to ensure policyholders are paid every dollar to which they are entitled and to improve the flood insurance program going forward.



Monday, 01 June 2015 00:00

Real Tools to Manage Shadow IT

It is the rare enterprise these days that does not have some form of shadow IT in its midst. If you think otherwise, maybe it’s time to do a little digging into what your business groups have been up to.

But while the consensus is that the enterprise should embrace shadow IT rather than fight it, there has not been a whole lot of guidance as to how this should be done, other than vague recommendations about becoming more proactive and transitioning IT to cloud brokerage.

Lately, however, the industry has started to see a trickle of actual solutions that enhance the enterprise’s ability to get a handle on shadow IT – not to combat it, mind you, but to help integrate it into a broader computing architecture.



Monday, 01 June 2015 00:00

2015 Hurricane Season Opener

By now you’ll have read the latest forecasts calling for a below-average Atlantic hurricane season.

NOAA, Colorado State University’s Tropical Meteorology Project, North Carolina State University, WSI and London-based consortium Tropical Storm Risk all seem to concur in their respective outlooks that the 2015 hurricane season, which officially begins June 1, will be well below normal.

TSR, for example, predicts Atlantic hurricane activity in 2015 will be about 65 percent below the long-term average. Should this forecast verify, TSR noted that it would imply that the active phase for Atlantic hurricane activity which began in 1995 has likely ended.

Still it’s important to note that the forecasts come with the caveat that all predictions are just that, and the likelihood of issuing a precise forecast in late May is at best moderate. In other words, uncertainties remain.



According to the 2015 Makovsky Wall Street Reputation Study, released Thursday, 42% of U.S. consumers believe that failure to protect personal and financial information is the biggest threat to the reputation of the financial firms they use. What’s more, three-quarters of respondents said that the unauthorized access of their personal and financial information would likely lead them to take their business elsewhere. In fact, customers rate the security of personal and financial information as much more important than a financial services firm’s ethical responsibility to customers and the community (23%).

Executives from financial services firms seem to know this already: 83% agree that the ability to combat cyber threats and protect personal data will be one of the biggest issues in building reputation in the next year.

The study found that this trend is already having a very real impact: 44% of financial services companies report losing 20% or more of their business in the past year due to reputation and customer satisfaction issues. When asked to rank the issues that negatively affected their company’s reputation over the last 12 months, the top three “strongly agree” responses in 2015 from communications, marketing and investor relations executives at financial services firms were:



Houston, the fourth-largest city in the United States, has been struggling through extreme storms and some of the worst flooding in years over the past few days. Roadways were blocked, drivers were left stranded, and homes were completely destroyed due to the flash flooding.

More than 1,000 residents have been displaced and area businesses have come to a screeching halt. Once the storms and flash flooding started, I reached out to some of my clients in the area to make sure they were okay and find out what they were doing to help affected individuals and businesses.



In the enterprise world, keeping a business afloat is not enough. The true mark of success comes when a company brings innovation and evolution to the forefront. And the most successful businesses find ways to constantly grow and change with the flow of the market.

But how does an enterprise go about identifying, fostering and delivering the right innovations? How is it possible to have this level of coordinated effort flow through departments and provide the outcome that the business needs? In a term, the answer is program management.

First, program management is not project management. Program management involves goals that generally affect the company as a whole—often the bottom line. And managing programs requires commitment. It involves long-term strategy and in many cases an ongoing dedication to improvement of processes, products and people.

Of course, program management isn’t something that can be started on a whim to bring about competitive change in a company. To help the enterprise begin a successful program management process, Satish P. Subramanian wrote the book, “Transforming Business with Program Management: Integrating Strategy, People, Process, Technology, Structure and Measurement.”



Engagement is critical to the success of a firm. Years ago, I did a competitive study between Sony and Dell. Sony had better looking, more reliable hardware; Dell’s stuff wasn’t as attractive and it broke a lot in comparison. Sony sucked at engaging with customers; Dell led the segment. The end result was that Sony failed and Dell succeeded.

Dell’s Annual Analyst Conference (DAAC) is this week and HP Discover is next week. Many of the analysts here at DAAC have decided not to attend Discover because they don’t feel HP is really relevant anymore and the others who are going have indicated that they are going to confirm that the firm is effectively dead. At this same time, I’m getting notes from folks who have left HP for Oracle and are sharing how much better Oracle is than HP, in their opinion.

I can’t believe the experienced executives at HP realize they are sending a strong message that they are effectively managing their company out of business, or that, if that comes to pass, this is likely their last job, because the failure will inevitably stain their resumes. The reason they don’t see this is that they don’t engage, and this behavior starts at the top.

I spent some time with Michael Dell this trip. I follow Meg Whitman and have met with her in person as well and the distinct difference between the two people is like night and day.



All organizations with a Business Continuity Management (BCM) or Disaster Recovery (DR) program strive to keep their Business Continuity Plans (BCP) / Disaster Recovery Plans (DRP) in a usable state: one they believe will cover them in any and all situations. They want their plans to cover at least the basic minimum so that they can respond to any situation. But if an organization takes its program – and related plans – seriously, then these plans are never fully complete.

For a plan to be truly viable and robust, it must address as many situations as possible and be flexible enough to adapt to any unforeseen ones.

This includes incorporating lessons learned from news headlines and then adding the new activities or considerations that may not be in the current BCM / DRP plans. These plans aren’t quick fixes or static responses to disasters; they are living, breathing documents that need new information to grow and become robust. This is why they should never be considered complete; as the organization grows and changes – and the circumstances surrounding it change – so too must the BCM and DRP plans.



(TNS) — Despite more predictions Wednesday from experts that it will likely be a quieter than normal hurricane season, the information comes with two caveats — they don't know where the storms will go and below average doesn't mean zero.

NOAA predicts 6-11 named storms (winds of 39 mph or higher), of which 3-6 could become hurricanes (winds of 74 mph or higher), including 0-2 major hurricanes (winds of 111 mph or higher) for the 2015 hurricane season. They also project a 70 percent likelihood that it will be below average.

In a similar report released last month, Colorado State forecasters William Gray and Phil Klotzbach also projected a season that won't make the average of 12 named storms, six hurricanes and two major hurricanes.



(TNS) — The National Bio and Agro-Defense Facility is more than just a big project for Kansas and Kansas State University – it will be the front line in protecting the nation’s food supply.

That was the consensus of federal and state leaders who gathered Wednesday to celebrate the start of construction on the $1.25 billion national laboratory complex that will be built across the street from Kansas State University’s football stadium.

“The NBAF laboratory will provide the nation with cutting-edge, state-of-the-art lab capabilities to help protect our food supply and the nation’s public health,” said U.S. Secretary of Homeland Security Jeh Johnson. “The NBAF addresses a serious vulnerability: biological or agricultural threats, deliberate or natural.

“We will now be able to ensure availability of vaccines and other rapid-response capabilities to curb any outbreak.”



Climate change is taking a toll on Texas, and the devastating floods that have killed at least 15 people and left 12 others missing across the state are some of the best evidence yet of that phenomenon, state climatologist John Nielsen-Gammon said in an interview Wednesday. 

"We have observed an increase of heavy rain events, at least in the South-Central United States, including Texas," said Nielsen-Gammon, who was appointed by former Gov. George W. Bush in 2000. "And it's consistent with what we would expect from climate change." 

But the state's Republican leaders are deeply skeptical of the scientific consensus that human activity is changing the climate, with top environmental regulators in Texas questioning whether the planet is warming at all. And attempts by Democratic lawmakers during the 2015 legislative session to discuss the issue have come up short.



A business model focused on cutting costs has obvious limitations. That’s why managed services have to be about much more than lowering the cost of IT. And as much as customers love a bargain, most understand this intuitively, often citing other objectives for adopting managed services – improved uptime, access to technology advances and better security among them.

A new poll of MSPs by the MSPAlliance found customers hire MSPs primarily because they want to pay more attention to their core business. “Fifty percent of MSPs point to ‘focusing on core competencies’ as one of the leading reasons customers buy their managed services,” said MSPAlliance CEO Charles Weaver.



Recognising business continuity talent in India

Business continuity may be a developing industry in India, but there is still a wealth of talent across the country, and those at the top of the profession were recognised at an awards ceremony at the India Business and IT Resilience Summit in Mumbai, where the Business Continuity Institute presented its annual India Awards.

The BCI Awards consist of seven categories – six of which are decided by a panel of judges with the winner of the final category (Industry Personality of the Year) being voted upon by BCI members from across the region.

The winners were:

Continuity and Resilience Consultant of the Year 2015
Kaustubh Vazalwar MBCI of Hewlett Packard

Continuity and Resilience Professional of the Year 2015 (Private Sector)
Kapil Punwani CBCI of Reliance Life Insurance Company Limited

Continuity and Resilience Team of the Year 2015
JP Morgan Chase, CIB Resilience Team

Continuity and Resilience Provider of the Year 2015 (Service/Product)
Sungard Availability Services

Continuity and Resilience Innovation of the Year 2015
Sungard Availability Services

Most Effective Recovery of the Year 2015
JP Morgan Chase, CIB Resilience Team

Industry Personality of the Year 2015

Ramachandran Vaidhyanathan MBCI of Cognizant Technology Solutions

The BCI India Awards are one of seven regional award ceremonies held by the BCI, culminating in the annual Global Awards held in November during the Institute’s annual conference in London, England. All winners of the BCI India Awards are automatically entered into the Global Awards.

In investing, diversification is often seen as a good thing, unless you ask Warren Buffett, who is famously quoted as saying, “Wide diversification is only required when investors do not understand what they are doing.” Cloud services are the same way. So let’s change that quote up a bit: “Diversification of cloud-based file sharing is only required when managed service providers (MSPs) do not understand what they are doing.”

The average organization uses 721 cloud services, according to a recent study by Skyhigh Networks. How do companies end up with so many cloud solutions? Is it a good or bad thing for their organization? MSPs must understand how companies get themselves into this situation and the downsides they face to be able to take the first steps to getting them out of it and helping them unify their cloud infrastructure.



Thursday, 28 May 2015 00:00

Backing Up Large Data Sets

Some MSPs may be understandably worried about taking on the responsibility of backing up large data sets. After all, just about any analyst you talk to is projecting data growth of 30% or more per year. So is it a wise move to add to existing service responsibilities by taking on an additional service such as cloud backup? The answer is a resounding yes.

Here are the facts. First, a major potential headache when it comes to cloud backup is the initial backup of a large data set from a new customer. That’s the one that could take a while. But it doesn’t take that long when you use an enterprise-class cloud backup solution.

A recent independent test by Mediatronics revealed Zetta.net could back up half a TB of data in less than 3 hours with a 1Gbit connection. After that, an incremental backup--using a 5% change rate for a worst-case scenario--took only an hour. In reality, 5% is actually an aggressive change rate. Surveys show that the rate of change in any organization is typically only about 2% of the entire data set. This opens the door for a larger total dataset in the cloud.
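The arithmetic behind those backup-window figures is easy to sanity-check. The sketch below assumes an effective throughput of half the raw 1 Gbit line rate; the function and the efficiency factor are illustrative assumptions, not Zetta.net’s published methodology or the Mediatronics test setup:

```python
def transfer_hours(data_gb, link_gbps, efficiency=0.5):
    """Estimate hours to move data_gb over a link, assuming only the
    stated fraction of raw line rate is achieved in practice."""
    gigabits = data_gb * 8                       # bytes to bits
    seconds = gigabits / (link_gbps * efficiency)
    return seconds / 3600

initial = transfer_hours(500, 1.0)               # initial 500 GB backup, 1 Gbit link
incremental = transfer_hours(500 * 0.05, 1.0)    # 5% worst-case daily change rate
print(f"initial: {initial:.1f} h, incremental: {incremental:.1f} h")
# → initial: 2.2 h, incremental: 0.1 h
```

Even with that conservative efficiency assumption, the initial seed lands under three hours and the incremental well under one, which is consistent with the test results cited above.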



Thursday, 28 May 2015 00:00

Why insider threats are succeeding

Many companies still lack the means or motivation to protect themselves from malicious insiders, but the effects of insider threats are simply too big to ignore. According to a report by the market research company Forrester, 46 percent of nearly 200 technology decision-makers reported internal incidents as the most common cause of the breaches they experienced in the past year. Of those respondents, almost half said the breach stemmed from a malicious insider.

In this article TK Keanini looks at the practical steps that organizations can take to protect data and systems from insider threats.



The Ponemon Institute has released its annual Cost of Data Breach Study: Global Analysis, sponsored by IBM. According to the benchmark study of 350 companies spanning 11 countries, the average consolidated total cost of a data breach is $3.8 million, representing a 23 percent increase since 2013.

The study also found that the average cost incurred for each lost or stolen record containing sensitive and confidential information increased six percent from a consolidated average of $145 to $154. Healthcare emerged as the industry with the highest cost per stolen record with the average cost for organizations reaching as high as $363. Additionally, retailers have seen their average cost per stolen record jump dramatically from $105 last year to $165 in this year's study.

"Based on our field research, we identified three major reasons why the cost keeps climbing," said Dr. Larry Ponemon, chairman and founder, Ponemon Institute. "First, cyber attacks are increasing both in frequency and the cost it requires to resolve these security incidents. Second, the financial consequences of losing customers in the aftermath of a breach are having a greater impact on the cost. Third, more companies are incurring higher costs in their forensic and investigative activities, assessments and crisis team management."

The first Cost of Data Breach study was conducted 10 years ago in the United States. Since then, the research has expanded to 11 countries. Ponemon Institute's Cost of Data Breach research is based on actual data of hundreds of indirect and direct cost categories collected at the company level using field-based research methods and an activity-based costing framework. This approach has been validated from the analysis of more than 1,600 companies that experienced a material data breach over the past 10 years in 11 countries.



I continue my exploration of actions you can take to improve your compliance program during an economic downturn with a review of what my colleague Jan Farley, the Chief Compliance Officer (CCO) at Dresser-Rand, called the ‘Desktop Risk Assessment’. Both the Department of Justice (DOJ) and Securities and Exchange Commission (SEC) make clear the need for a risk assessment to inform your compliance program. I believe that most, if not all, CCOs and compliance practitioners understand this well-articulated need. The FCPA Guidance could not have been clearer when it stated, “Assessment of risk is fundamental to developing a strong compliance program, and is another factor DOJ and SEC evaluate when assessing a company’s compliance program.” While many compliance practitioners have difficulty getting their arms around what is required for a risk assessment, and then how precisely to use it, the FCPA Guidance makes clear there is no ‘one size fits all’ for anything in an effective compliance program.

One type of risk assessment can consist of a full-blown, worldwide exercise, where teams of lawyers and fiscal consultants travel around the globe, interviewing and auditing. Of course, this can be a notoriously expensive exercise, and if you are in Houston, the energy industry or any sector in the economic doldrums right now, it may not be something you can even seek funding for at this time. Moreover, you may also be constrained by reduced compliance personnel, so that you cannot even perform a full-blown risk assessment with internal resources.



By conventional standards, business continuity cannot exceed one hundred percent. Business continuity of less than 100% is obviously possible, although measurements of just how much less may only be approximate. However, if everything is working properly, full business continuity has been achieved. Does it then make sense to talk about ‘fuller than full’, or a business continuity index of more than 100%?



Most of the commentary regarding the cloud these days (mine included) focuses on the myriad ways in which abstract, distributed architectures can remake the enterprise as we know it.

We talk of software-defined data environments, hyperscale infrastructure and advanced Big Data and mobile application environments that will allow organizations to shed their rusty legacy environments in favor of a brave new world of computing.

The trouble is, most organizations don’t want that – at least, not right away.

The simple fact of the matter is that radical change is frightening to most people, and the typical CIO or data management executive is driven not by a desire to deploy the latest and greatest technology but to implement solutions that contribute to the bottom line.



(TNS) — The recent rioting and unrest in Baltimore will cost the city an estimated $20 million, officials said Tuesday.

The expenses — which go before the city’s spending board for approval Wednesday — include overtime for police and firefighters, damage to city-owned property and repaying other jurisdictions for police and other assistance.

Henry J. Raymond, Baltimore’s finance director, said the city can temporarily cover the costs from its rainy-day fund while seeking reimbursement for up to 75 percent from the Federal Emergency Management Agency.

“The city remains on strong financial footing,” Raymond said. “Hopefully, with the FEMA reimbursement, it will reduce the financial stress that we’re under. In terms of the city’s overall revenue structure, we’re on firm footing and we’ll move forward.”



Thursday, 28 May 2015 00:00

BCI: The Cost of Data Breaches

According to a new study by the Ponemon Institute, sponsored by IBM, the average consolidated total cost of a data breach is $3.8 million, representing a 23% increase since 2013. The annual 'Cost of Data Breach Study' also found that the average cost incurred for each lost or stolen record containing sensitive and confidential information increased 6% from a consolidated average of $145 to $154.

"Based on our field research, we identified three major reasons why the cost keeps climbing," said Dr Larry Ponemon, chairman and founder, Ponemon Institute. "First, cyber attacks are increasing both in frequency and the cost it requires to resolve these security incidents. Second, the financial consequences of losing customers in the aftermath of a breach are having a greater impact on the cost. Third, more companies are incurring higher costs in their forensic and investigative activities, assessments and crisis team management."

Data breaches are a significant threat to organizations, as highlighted in the Business Continuity Institute's latest Horizon Scan report, which revealed that 82% of respondents to a survey were either concerned or extremely concerned about a cyber attack materialising, while 74% expressed the same level of concern about a data breach, making them the first and third greatest threats respectively.

Some of the highlights from the Ponemon Institute’s research include:

  • Board level involvement and the purchase of insurance can reduce the cost of a data breach. The study looked at the positive consequences that can result when boards of directors take a more active role when an organization had a data breach. Board involvement reduces the cost by $5.50 per record. Insurance protection reduces the cost by $4.40 per record.
  • Business continuity management plays an important role in reducing the cost of data breach. The research reveals that having business continuity management involved in the remediation of the breach can reduce the cost by an average of $7.10 per compromised record.
  • The most costly breaches continue to occur in the US and Germany at $217 and $211 per compromised record respectively. India and Brazil still have the least expensive breaches at $56 and $78 respectively.
  • The cost of data breach varies by industry. The average global cost of data breach per lost or stolen record is $154. However, if a healthcare organization has a breach, the average cost could be as high as $363, and in education the average cost could be as high as $300. The lowest cost per lost or stolen record is in transportation ($121) and public sector ($68).
  • Hackers and criminal insiders cause the most data breaches. 47% of all breaches in this year's study were caused by malicious or criminal attacks. The average cost per record to resolve such an attack is $170. In contrast, system glitches cost $142 per record and human error or negligence is $137 per record. The US and Germany spend the most to resolve a malicious or criminal attack ($230 and $224 per record, respectively).
  • Notification costs remain low, but costs associated with lost business steadily increase. Lost business costs are abnormal turnover of customers, increased customer acquisition activities, reputation losses and diminished good will. The average cost has increased from $1.23 million in 2013 to $1.57 million in 2015. Notification costs decreased from $190,000 to $170,000 since last year.
  • Time to identify and contain a data breach affects the cost. The study shows the relationship between how quickly an organization can identify and contain data breach incidents and financial consequences. Malicious attacks can take an average of 256 days to identify while data breaches caused by human error take an average of 158 days to identify. As discussed earlier, malicious or criminal attacks are the most costly data breaches.

I. Bill Gates is an optimist.

Ask him, and he'll tell you himself. "I'm very optimistic," he says. See?

And why shouldn't Bill Gates be an optimist? He's one of the richest men in the world. He basically invented the form of personal computing that dominated for decades. He runs a foundation immersed in the world's worst problems — child mortality, malaria, polio — but he can see them getting better. Hell, he can measure them getting better. Child mortality has fallen by half since 1990. To him, optimism is simply realism.

But lately, Gates has been obsessing over a dark question: what's likeliest to kill more than 10 million human beings in the next 20 years? He ticks off the disaster movie stuff — "big volcanic explosion, gigantic earthquake, asteroid" — but says the more he learns about them, the more he realizes the probability is "very low."



Wednesday, 27 May 2015 00:00

Reinventing the Data Center Stack

Now that the cloud is becoming a common fixture in the enterprise, the IT industry is starting to look at how a cloud-facing, mobile-driven environment will affect that full data stack.

Naturally, this is mostly conjecture at this point because many leading experts still do not know how the technology, user requirements, business models and even entire industries will be affected by this transformation. From an historical perspective, the current decade is very similar to about 100 years ago as utility-based electrical grids were first powering up: People are in awe of an amazing new technology, even though its full ramifications cannot be discerned.

Still, there are those who are willing to give it a try, particularly when it comes to the all-software IT deployment capabilities that abstract architectures represent. MapR Technologies’ Jack Norris recently explored the potentialities of “re-platforming” the enterprise toward a more data-centric footing.  This will naturally require a new view of physical infrastructure, such as the current separation of compute and storage, but it also has implications higher up the stack, as in the need to maintain separate production and analytics architectures. This new stack will also require global resource management, linear scalability and real-time processing and systems configuration.



It’s been about eight months since IT services giant and top-ranked MSPmentor 501 2015 company Dimension Data announced it would deploy globally standardized managed services for data centers.

The service, built on the organization’s managed services automation platform, manages server, storage and networks for on-premise, cloud and hybrid data centers, the company said in a statement in September. Those services can be in the client’s data centers, colocation facilities, in the public cloud, in a private cloud, or in Dimension Data’s cloud.



Wednesday, 27 May 2015 00:00

Another Strand of the Resilience Web

One of the problems that is related to our ability to understand how resilient we can possibly be in the future is that we expect the future to be based on our normalities.  We expect (and would probably like) a degree of stability based upon what we know and understand to be our current terms of reference. Unfortunately, things change; and alongside the political and international tectonic shifts that appear to be accelerating at the moment, we should also consider those structures and capabilities upon which we have long relied and the fact that we may be losing control of them.

The structures of our societies, the underpinning elements of the way that we live, can also have a profound influence on our ability to live in the same way in the future. An interesting combination of debt and demographics is influencing the potential longevity of our economic structures, according to the European chief executive of Goldman Sachs Asset Management.



We have become not only acculturated to interruptions, but addicted to them. We have the mistaken belief that interruptions are a perfectly normal way of life, despite knowing deep down that “time is a precious commodity that we cannot afford to waste.”

Therein lies the essential message of Edward Brown, founder and president of Cohen Brown Management Group, a culture change and time management consulting and training firm in Los Angeles. But at least he’s trying to do something about it. He’s the author of “The Time Bandit Solution: Recovering Stolen Time You Never Knew You Had,” and he feels strongly enough about the issue to take time out for an in-depth email interview on the topic.

I learned a lot from that interview about the extent to which we allow ourselves to be interrupted, and the price we pay as a result. To set the stage for the discussion, Brown pointed out that there are two key types of interruptions that we tolerate: those coming from other people, and those coming from our devices. He said other people are inveterate time bandits, and the fact that their intent is innocent doesn’t matter:



(TNS) — After a major accident or disaster, rescue operations have always focused on the nuts and bolts — saving the survivors, searching for those who didn’t make it, securing the evidence.

Now an added dimension — the consumer perspective — has expanded how disaster planners think. Philadelphia emergency management officials say it guided their response to the Amtrak derailment that killed eight people and injured more than 200 on May 12.

Passengers are going through “the most traumatic time of their lives,” said Everett A. Gillison, Mayor Michael Nutter’s chief of staff and deputy mayor for public safety. “Seeing the world through their eyes really kind of forces us to always question: ‘Are we providing what we really need to provide to them?’ “

That includes understanding what frantic families are going through. “If you haven’t heard from somebody, you kind of have to assume the worst,” he said.



(TNS) — Climate change may be triggering an evolution in hurricanes, with some researchers predicting the violent storms could move farther north, out of the Caribbean Sea and the Gulf of Mexico, where they have threatened coastlines for centuries.

Hurricane season in the Atlantic Ocean began Monday, and forecasters are predicting a relatively quiet season. They say three hurricanes are expected over the next six months, and only one will turn into a major hurricane.

Florida hasn’t been hit by a hurricane in a decade, and researchers are increasingly pointing to climate change as a potential factor.



(TNS) --On a day that brought a new round of fierce thunderstorms and torrential rains, authorities continued a grim search Monday for 12 people still missing after being swept from riverfront homes, and property owners returned to dramatic scenes of destruction.

San Marcos and Hays County officials revised upward the property damage wrought by the historic flood, saying 72 homes had been washed away. Texas Gov. Greg Abbott, who toured the scene, said the storms brought a punch that "you cannot candy coat" and declared a disaster area in 24 counties, including Bastrop and Hays.

Abbott said the flood in the Wimberley valley is "the highest flood we've ever recorded in the history of the state of Texas."

"It's a powerful message to anyone in harm's way of the relentless, tsunami-type power this wave of water can pose to people," he said.



Wednesday, 27 May 2015 00:00

The Importance of Risk Culture

When objective parties, armed with the benefit of 20/20 hindsight, can easily see warning signs that something was either wrong or wasn’t working and that executive management either missed or chose to ignore these same warning signs, it is fair to assert that management was encumbered with a blind spot. A culture that is conducive to effective risk management encourages open and upward communication, sharing of knowledge and best practices, continuous process improvement and a strong commitment to ethical and responsible business behavior.

Effective risk management doesn’t function in a vacuum and rarely survives a leadership failure. The risk management function can review, inform, advise, monitor, measure and even resign. It cannot control, decide or abort; that’s management’s job. Without an effective internal environment in place to ensure that adequate attention is given to protecting enterprise value, entrepreneurial behavior can run amok, completely unbridled and without boundaries or constraints. By “internal environment,” we mean the total package – the control environment, management’s operating style, the incentive compensation structure, a commitment to ethical and responsible business behavior, open and transparent reporting, clear accountability for results and other aspects of the organization’s culture.

Our premise is that ensuring an effective risk culture is an important task for executive management and the Board. Unfortunately, despite its importance, risk culture is often either given lip service or simply ignored.



You’re ready for advancement, you want to learn and you’re looking for an educational programme to encompass the needs of your current or planned role in the protection and preservation of your organisation’s functionality, viability and profitability. The MSc Organisational Resilience at Buckinghamshire New University will be good for you – here’s why:

You will become confident, capable and thorough in your knowledge and understanding of organisational resilience

You will understand how resilience needs to match the context of a changing global operating and threat landscape

You will develop the important skill of not just being able to talk about resilience, but also to take an analytical approach that allows you to offer balanced and evaluated solutions to real problems and issues



To ensure the availability of high performance, mission critical IT services, IT departments need both solid monitoring capabilities and dedicated IT resources to resolve issues as they occur. But even with the right tools in place, when an abundance of alerts and alarms start streaming in, it can quickly become overwhelming, particularly when IT staff have been asked to focus time and attention on activities that both support the organization’s end users and add to the company’s bottom line.

Logicalis US suggests that organizations need to ask the following five key questions to help ensure that enterprise IT monitoring is fit for purpose:

1. Is your monitoring tool configured properly? Most organizations have off-the-shelf monitoring tools that gather information from all of the devices on their network. The information coming from these tools can be overwhelming, and while it may be helpful to have access to all of that data, weeding through it in crunch-time can be cumbersome. To limit alerts to those that are most important takes training, knowledge and expertise, which leads many organizations that want to manage IT monitoring in house to employ full-time experts just to configure and manage their monitoring tools.

2. Do you update regularly? Since rules are continually being added to monitoring tools, monitoring isn’t an ‘implement and forget it’ situation, which means IT departments spend a considerable amount of time making sure the tools they depend on for alerts are as current and up-to-date as possible.

3. Can your tool provide event correlation? A single network error can have a ripple effect impacting applications that would otherwise be completely unrelated. As a result, it’s critical that an IT monitoring tool provide event correlation to speed diagnosis and remediation in all affected areas.

4. Does your monitoring tool offer historical trending data? When managing an enterprise environment, IT pros need to analyze historical trend data to identify recurring issues as well as to do capacity planning which, in many cases, can help prevent issues before they arise. Some of today’s popular monitoring tools, however, either operate in real time or store historical data for 30 days or less. Knowing what your tool offers is important information since being able to intelligently analyze and manage an organization’s IT environment can depend on having access to this historical data long term.

5. Do you have the right expertise in house? In an enterprise IT environment, it’s important to consider internal staffing needs and the expertise required to manage the monitoring tools and process in house. Keeping an enterprise environment up and running is no longer IT’s value-add; it’s an expectation. Today, most organizations want their IT staff delivering business results, which is why it may make sense to consider outsourcing monitoring to a third party skilled in assessing and limiting incident reports to only the handful that a busy internal staff actually needs to address.
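To illustrate question 3 above, the sketch below shows the idea behind event correlation: collapsing a flood of related alerts into a single incident keyed on a probable root cause, so one network fault does not surface as dozens of apparently unrelated application alarms. The alert fields and device names are hypothetical, not taken from any particular monitoring product:

```python
# Illustrative only: group alerts that share a probable root cause
# (here, the upstream network device they depend on) into one incident.
# Field and device names are hypothetical.
from collections import defaultdict

alerts = [
    {"source": "app-billing", "upstream": "core-sw-1", "msg": "DB timeout"},
    {"source": "app-crm",     "upstream": "core-sw-1", "msg": "API unreachable"},
    {"source": "core-sw-1",   "upstream": "core-sw-1", "msg": "Link down"},
    {"source": "app-payroll", "upstream": "edge-fw-2", "msg": "Slow response"},
]

def correlate(alerts):
    """Group alerts by the upstream device they depend on."""
    incidents = defaultdict(list)
    for alert in alerts:
        incidents[alert["upstream"]].append(alert)
    return incidents

for root, group in correlate(alerts).items():
    print(f"Incident at {root}: {len(group)} related alert(s)")
```

In this toy run the three alarms hanging off core-sw-1 collapse into a single incident, which is exactly the diagnosis-speeding behaviour the article describes.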


On 12th December 2014 NATS, the UK's leading provider of air traffic control services, experienced a failure in its Swanwick flight data system. The outage resulted in widespread flight delays and cancellations. A report has now been published which details the events behind the outage and subsequent business continuity response.

Written by an enquiry panel led by Sir Robert Walmsley the report finds that:

  • Failure occurred on the 12th December because of a latent software fault that was present from the 1990s. The fault lay in the software’s performance of a check on the maximum permitted number of Controller and Supervisor roles.
  • The system error was caused because of a number of new Controller roles that had been added to the system the day before.
  • The standard practice in NATS is that engineering recovery is coordinated through a group of designated engineers, known as the Engineering Technical Incident Cell (ETIC) and drawn from those available in the Systems Control Centre adjacent to the Operations Room. While some recovery actions are automated, ETIC manually control all key recovery actions, e.g. the restoration of data, to ensure that decisions are made with due and careful deliberation; this is important, as the wrong decisions could have further downgraded performance.
  • Identifying a software fault in such a large system (the total application exceeds 2 million lines of code), within only a few hours, is a surprising and impressive achievement. This was made possible because system logs contain details of the interactions at the workstations.
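The report does not reproduce the faulty code, but the general failure mode it describes, a hard-coded capacity check that lies dormant for years until configuration growth finally crosses it, can be sketched as follows. The limit value, role names and error handling here are purely illustrative, not drawn from the NATS system:

```python
# Hypothetical illustration (not NATS code): a limit check written
# against an old design assumption sits unexercised for years, then
# fails the first time enough new roles are configured.
MAX_ROLES = 151  # illustrative hard-coded ceiling from an earlier design

def add_role(active_roles: list, role: str) -> None:
    if len(active_roles) + 1 > MAX_ROLES:
        # Latent path: never executed until configuration growth
        # crosses the ceiling, e.g. new Controller roles added.
        raise RuntimeError("role limit exceeded - system enters fallback")
    active_roles.append(role)

roles = [f"controller-{i}" for i in range(MAX_ROLES)]  # at the limit
try:
    add_role(roles, "controller-extra")  # the one role too many
except RuntimeError as err:
    print("Latent fault triggered:", err)
```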

The detailed 93-page report is available here as a PDF and should be of interest to business continuity managers whatever their sector. It shows how legacy systems can have unexpected and unanticipated impacts, as well as giving useful details about the business continuity plans and strategies that were in place at the time of the incident.

The report makes clear that although this was a high profile incident which caused difficulties for NATS' direct customers and the supply chain, it was undoubtedly a business continuity success. Without a strong recovery team response and the pre-planned procedures that were in place the incident and disruption would have been much worse.

According to a new market research report published by MarketsandMarkets the mass notification market is estimated to grow from $3.81 billion in 2015 to $8.57 billion in 2020. This represents a compound annual growth rate (CAGR) of 17.6 percent from 2015 to 2020.
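The quoted growth rate can be verified from the endpoint figures alone, using the standard CAGR formula:

```python
# Sanity-check the quoted rate: CAGR = (end/start)**(1/years) - 1.
start, end, years = 3.81, 8.57, 5   # $bn in 2015 and 2020
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~17.6%
```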

The major forces driving this market are the growing need for public safety, increasing awareness for emergency communication solutions, the requirement for mass notification for business continuity, and the trend towards mobility.

The report says that business continuity and disaster recovery and public safety compliance standards are boosting the sales of mass notification solutions.

Mass notification solutions providers are expected to collaborate and provide better competitive services to take advantage of the emerging mass notification market and to meet the need for complete crisis communication solutions.

Obtain the ‘Mass Notification Market by Solution (In-Building, Wide-Area, Distributed Recipient), by Application (Interoperable Emergency Communications, Business Continuity & Disaster Recovery, Integrated Public Alert & Warning, Business Operations), by Deployment, by Vertical & by Region - Global Forecast to 2020’ report from here.

Most people are visually oriented when it comes to taking in information. They also prefer analogue displays to digital ones. In other words, when it comes to understanding risk as part of business continuity, they like colours and graphics rather than numbers in a spreadsheet. That makes the risk heat map a popular choice for presenting summary risk information to non-risk experts or senior management. Typically, areas in red on the heat map indicate the biggest risks and areas in green the smallest/most acceptable risks. But is this approach in fact too limited?
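As a minimal sketch of what sits behind a typical heat map, the function below scores each risk as likelihood times impact on 1–5 scales and assigns a traffic-light colour. The band thresholds are illustrative assumptions, not a standard:

```python
# Minimal sketch of the traffic-light banding behind a risk heat map.
# Score = likelihood x impact on 1-5 scales; thresholds are
# illustrative assumptions, not an industry standard.
def heat(likelihood: int, impact: int) -> str:
    score = likelihood * impact  # ranges 1..25
    if score >= 15:
        return "red"    # biggest risks
    if score >= 8:
        return "amber"
    return "green"      # smallest / most acceptable risks

print(heat(5, 4))  # red
print(heat(3, 3))  # amber
print(heat(1, 2))  # green
```

The limitation hinted at in the question above is visible even in this sketch: two very different risks (likelihood 5/impact 3 versus likelihood 3/impact 5) collapse into the same cell colour.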



Tuesday, 26 May 2015 00:00

New Approaches to IT Efficiency

Virtually everyone is in favor of an energy-efficient data center. But if that is the case, why has the industry struggled so mightily to reduce power consumption?

Even with the remarkable gains in virtualization and other advanced architectures, the data center remains one of the primary energy consumers on the planet, and even worse, a top cost-center for the business.

But the options for driving greater efficiency in the data center are multiplying by the day – from low-power, scale-out hardware to advanced infrastructure and facilities management software to new forms of power generation and storage. As well, there is the option to offload infrastructure completely to the cloud and refocus IT around service and application delivery, in which case things like power consumption and efficiency become someone else’s problem.



Editor’s Note: This is part of a series on the factors changing data analytics and integration. The first post covered cloud infrastructure; the second discussed new data types, and the third focused on data services.

Data keeps expanding, but only recently have organizations been able to store the data in useful ways. Now, organizations can theoretically keep data at the ready, whether it’s in the cloud, a data lake or in-memory appliance.

Hopefully, it will soon be archaic to hear my doctor say, “Oh, we sent that x-ray to tape. We could get it — but it’s a huge hassle.”

The ability to store mass data is one of the five data evolutions that David Linthicum cited in his thesis on “The Death of Traditional Data Integration.” The ability to pool Big Data sets would not be disruptive, though, if it weren’t coupled with the ability to access it easily and as needed for analytics. As Informatica CEO Sohaib Abbasi points out, this “richness of big data is disrupting the analytics infrastructure.”



One of the often overlooked aspects of Big Data and the Internet of Things is the ability to model and simulate advanced data architectures. This is likely to become a crucial element in the emerging data-driven economy because it allows business leaders to further optimize their digital footprints in support of business goals without disrupting current operations.

As expected, there is a plethora of new simulation platforms hitting the channel that utilize both cloud and on-premises resources to, ironically, model cloud and on-premises infrastructure in support of advanced development and productivity applications.



ATLANTA – As the 2015 hurricane season begins, FEMA has launched a new feature to its mobile app to help you be prepared and stay informed about severe weather. The free feature allows you to receive weather alerts from five locations you select anywhere in the country, even if the phone is not located in the area. This tool makes it easy to follow severe weather that may be threatening your family and friends in other areas.

“Whether this year’s hurricane season is mild or wild, it’s important to be prepared,” said Regional Administrator Gracia Szczech. “Despite forecasters’ predictions for a below-normal number of storms, fewer storms do not necessarily mean a less destructive season. FEMA is reinforcing preparedness basics and resources to help people be ready whether they live along the coast or farther inland.” Visit FEMA’s www.ready.gov/hurricanes for step-by-step information and resources for what to do before, during and after a hurricane.

Cellphones and mobile devices are a major part of our lives and an essential part of how emergency responders and survivors get information during disasters. According to a recent survey by Pew Research, 40 percent of Americans have used their smartphone to look up government services or information. Additionally, a majority of smartphone owners use their devices to keep up to date with breaking news, and to be informed about what is happening in their community.

The new weather alert feature adds to existing features in the app to help Americans through emergencies. In addition to this upgrade, the app also provides a customizable checklist of emergency supplies, maps of open shelters and Disaster Recovery Centers, and tips on how to survive natural and manmade disasters. The FEMA app also offers a “Disaster Reporter” feature, where users can upload and share photos of disaster damage. The app defaults to Spanish language content for smartphones that have Spanish set as their default language.

The latest version of the FEMA app is available for free in the App Store for Apple devices and Google Play for Android devices. Users who already have the app downloaded on their smartphones should download the latest update for the new alerts feature to take effect. To learn more about the FEMA app, visit: The FEMA App: Helping Your Family Weather the Storm.


eFax Corporate recently hosted a webinar to inform covered entities in healthcare of the dangers that today’s sophisticated cyber hackers pose to their electronic protected health information (ePHI) and other intellectual property.

We chose healthcare because it is a favored target among hackers and other “malicious actors,” as the FBI calls them. This is largely because the personal data that health providers hold includes information valuable to criminals--names, birth dates, Social Security numbers. According to the Department of Health and Human Services’ Office of Civil Rights, data breaches of health providers in 2014 affected as many as 10 million people. And breaches like these were up an astonishing 1,800% from 2008 to 2013!

But the common pitfalls and best practices we identified in this webinar relate not only to healthcare-related businesses; they can also apply to organizations in all industries. So here’s a brief overview of the key points we discussed in the webinar--details you might want to share with your corporate clients.



One of the things that IT security folks don’t appreciate about the proliferation of mobile computing devices everywhere is how trusting those devices are. Every mobile computing device just naturally assumes that a radio signal within its reach is a trusted source of Internet access.

It turns out, however, that digital criminals are starting to abuse that trust by setting up fake wireless networks to hijack those radio signals, a process commonly referred to as “commjacking.” Once a fairly expensive ruse, commjacking can now be carried out with open source kits costing as little as $29, which let criminals set up a wireless network that, for all intents and purposes, looks like any other open wireless network. Once a mobile device connects to that network, the digital criminals who run it simply steal all the data they can, from credit card numbers to any unencrypted emails.



Fighting corruption has reached new heights on the global agenda, driven by the recognition that corruption fuels inequality, poverty, conflict, terrorism and failures of development.  Governments in India, Brazil, the UK, Canada, China and some other countries have followed enforcement of the U.S. Foreign Corrupt Practices Act by promulgating national anti-corruption laws that focus on the bribery of public officials by companies, generally with sweeping extraterritorial authority. The appropriate corporate response, we are told, is to build anti-corruption compliance programs; regulators even offer the private sector detailed guidance about best practices. All this has spawned a lucrative consulting industry dominated by investigation companies and accounting and law firms – what the Economist refers to as “FCPA Inc.” With little excuse for ignorance, it would seem that enterprises need only adhere to guidance from regulators and roll out the mandated programs.

It’s not working. Compliance officers tell of delayed rollouts, inadequate budgets, company-wide coordination problems and their own lack of organizational influence. Even when companies get past operational issues, the evidence suggests that a “tick-the-box” approach to compliance is inadequate. Many of the companies currently under investigation by the U.S. Department of Justice and the Securities and Exchange Commission already had hugely expensive, state-of-the-art compliance programs. A recent OECD review of successful corruption prosecutions cites involvement by senior management or Chief Executive Officers in more than 50 percent of global anti-corruption cases to date — revealing deliberately unethical decision making by executives who decisively outrank Chief Compliance Officers. This narrative of systemic degradation is at odds with the dominant “rogue employee under the radar” explanation of wrongdoing. It exposes a legal system that has mistakenly, or perhaps willfully, chosen to focus on a misleading proxy indicator of performance: individual accountability.



It was only a matter of time before there was a serious security flaw affecting the Internet of Things (IoT). It comes by way of a vulnerability in NetUSB, which lets devices that are connected over USB to a computer be shared with other machines on a local network. The vulnerability, which could lead to remote code execution or denial of service if exploited, may affect some of the most popular routers in our homes and workplaces.

Details of the vulnerability were released by SEC Consult. According to Forbes, the weakness is somewhat rare, but it works this way:

When a PC or other client connects to NetUSB, it provides a name so it can be recognised as an authorised device. Whilst the authentication process is ‘useless’ as the encryption keys used are easy to extract … it’s also possible for an attacker who has acquired access to the network to force a buffer overflow by providing a name longer than 64 characters.
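The flaw described above is a classic fixed-length buffer problem. A minimal C sketch (illustrative only — this is not NetUSB’s actual code, and the function name is hypothetical) shows why an unchecked copy of a client-supplied name into a 64-byte buffer is dangerous, and what the missing bounds check should look like:

```c
#include <stdio.h>
#include <string.h>

#define NAME_MAX 64   /* the protocol's 64-character name limit */

/* Unsafe pattern at the root of this class of vulnerability:
 *
 *   char buf[NAME_MAX];
 *   strcpy(buf, name);    // no length check -- a name longer than
 *                         // 64 characters overruns the buffer
 */

/* Safe pattern: validate the length before copying.
 * Returns 0 on success, -1 if the name is too long. */
int register_client(const char *name) {
    char buf[NAME_MAX];
    if (strlen(name) >= NAME_MAX)
        return -1;                      /* refuse over-long names */
    memcpy(buf, name, strlen(name) + 1);
    printf("registered: %s\n", buf);
    return 0;
}
```

An attacker who can supply the name, as in the scenario quoted above, turns the missing check into a remotely triggered overflow.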



A period of upheaval is on the near-horizon for MSPs, and it’s going to be especially hard on providers overly focused on technology. They must adapt by shifting their focus to delivering business solutions, and seek opportunities in cloud and virtual desktop services.

“I think there’s going to be a lot of casualties over the next three to five years in the MSP space, and primarily it’s because many MSPs today have been started by technologists,” Tommy Wald, president of TW Tech Ventures in Austin, Texas, said in a recent interview with MSPmentor.



(TNS) — Colorado will spend $1.2 million over the next two years on a "revolutionary" fire prediction system that uses atmospheric weather data to predict the behavior of wildfires up to 18 hours in advance.

Gov. John Hickenlooper signed House Bill 1129 on Wednesday afternoon at a fire station in Arvada, implementing one of several bills lawmakers drafted in response to wildfires in El Paso County and elsewhere.

"This bill will predict the intensity and the direction of fires 12 to 18 hours ahead of time. That is really important so we know where to direct our planes, the aircraft we had a bill for last year, and our firefighters," said Rep. Tracy Kraft-Tharp, D-Arvada, who introduced the bill. "This is really revolutionary."



(TNS) — Congressman Tom Cole (OK-04) introduced legislation this week that would help families rebuilding their homes after disasters. Currently, the Small Business Administration provides homeowners, renters and personal-property owners with low-interest loans to help recover from a disaster.

The Tornado Family Safety Act of 2015, introduced by Cole, clarifies that SBA disaster loans can be used by homeowners for construction of safe room shelters within rebuilt homes.

“Oklahomans are no strangers to severe weather and the terrible destruction that can result from it,” said Cole. “Considering the yearly risk and unpredictability of tornadoes that exists, it is not a matter of ‘if’ but ‘when’ it will occur.

This legislation underscores the types of projects that are eligible for these SBA disaster loans, including loans for the construction of safe rooms. Under current law, SBA can increase the size of a home disaster loan by up to 20 percent of the total damage to lessen the risk of property damage from future disasters of the same kind.



The typical organization loses 5% of revenue each year to fraud – a potential projected global fraud loss of $3.7 trillion annually, according to the ACFE 2014 Report to the Nations on Occupational Fraud and Abuse.
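The arithmetic behind that headline number is straightforward: the ACFE applies its 5 percent median loss estimate to Gross World Product (roughly $74 trillion for 2013 — the exact base figure is an assumption here). A one-function sketch:

```c
/* Projected global fraud loss: the median loss rate applied to
 * Gross World Product. Both inputs are in the same currency units. */
double projected_fraud_loss(double gross_world_product, double loss_rate) {
    return gross_world_product * loss_rate;
}
```

With a GWP of about $74 trillion and a 5 percent rate, this yields roughly the $3.7 trillion figure cited.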

In its new Embezzlement Watchlist, Hiscox examines employee theft cases that were active in United States federal courts in 2014, with a specific focus on businesses with fewer than 500 employees to get a better sense of the range of employee theft risks these businesses face. While sizes and types of thefts vary across industries, smaller organizations saw higher incidences of embezzlement overall.

According to the report, “When we looked at the totality of federal actions involving employee theft over the calendar year, nearly 72% involved organizations with fewer than 500 employees. Within that data set, we found that four of every five victim organizations had fewer than 100 employees; more than half had fewer than 25 employees.”



The task of staying on top of all of the alerts and alarms that security monitoring tools send out constantly is becoming an unsustainable burden for some IT departments. Something has to give when balancing the setup and manning of these alerts -- sometimes millions of them -- against providing other mission-critical services to grow the business. The problem has even been blamed for the massive 2013 Target breach, in which relevant alarms were not noticed in a timely manner.

Security monitoring tools are all but useless without human IT resources to follow up on them, and quickly. Monitoring has become a specialized service area for some enterprises, which want to outsource it to experts who do nothing else and who know the ins and outs of setting thresholds and balancing monitoring across multiple systems.

Managed service provider Logicalis US has compiled five questions for CIOs considering bringing on a monitoring service provider to support IT’s security responsibilities.



The SMB Group recently released information on its State of SMB Adoption of Mobile Apps and Management Solutions. It was a relief to see that SMBs are finally recognizing the importance of mobile solutions to their businesses, with 55 percent of small and 65 percent of midsize businesses strongly agreeing that these are critical. However, Kaspersky Lab’s own report on BYOD shows that a surprising number of SMB owners “don’t see a danger” in their employees using personal devices at work.

The Kaspersky report provides data that shows that BYOD could be the real security issue for SMBs, according to CBR Online. In the report, 92 percent of those surveyed said they “keep sensitive corporate information on smartphones and tablets, which they use for both work and personal activities.” That is a dangerously high number of businesses that put a lot of trust in their mobile security efforts, despite the fact that they also think that “basic security tools provided within free solutions” are enough to protect that data. Most also say they don’t see a reason to budget more money toward better security.



Wednesday, 20 May 2015 00:00

BMC’s Remedy for IT Obsolescence

Of the companies I follow, one stands out with the singular mission of assuring that IT doesn’t again become obsolete in the face of ever more powerful direct-to-line-management offerings like Amazon Web Services. Most firms tend to treat Amazon’s offering as a competitor or potential customer and miss that it is actually a very different beast. It isn’t really going after IT as a customer; for the most part, it is rendering IT obsolete by going after IT’s customers directly. If we were talking about this in terms of sales channels, this would be like what Amazon did to retail: it made the retail store obsolete in order to sell directly to the store’s customers. In effect, Amazon changed the game. BMC is the only enterprise vendor that has figured out that the proper defense isn’t to fight Amazon or to sell to Amazon -- it is to protect IT.

The MyIT effort validates this strategy and the new Remedy 9 platform is the latest in the company’s quiver of arrows designed to help IT defend against obsolescence.

In short, BMC’s goal is to make IT a better choice for employees than any cloud service, partially by embracing them, but mostly by driving IT to focus on making IT’s own customers more satisfied.



Wednesday, 20 May 2015 00:00

Managing the Hybrid Application Stack

The best part about moving data operations to the cloud is that you no longer have to worry about provisioning and managing infrastructure. The drawback, of course, is that you have to shift to a service/application-centric approach to management and then somehow integrate that with all of your legacy management systems.

Fortunately, hybrid data management is gaining a fair bit of traction in the development community as vendors seek to get the jump on what is likely to be the dominant enterprise data architecture going forward. According to BlueStripe’s Vic Nyman, the hybrid data center is likely to contain a broad mix of virtualized infrastructure, operating systems and container platforms, as well as a variety of database formats, third-party web services and distributed applications. To manage such diversity, the enterprise will need to deploy key functions such as dynamic application mapping and updating, seamless multi-platform visibility, real-time response time measurement and reporting – and this is before we can even think about expanding to microservices and application component aggregation.



(TNS) — When a bridge falls, when a water main fails or when a train crashes, news crews and commentators report on the sorry state of our nation’s infrastructure. Policymakers on both sides of the aisle say we need to do something to fix our roads and rails, our ports and pipes. This flurry of activity lasts for a few days, but then little to nothing happens.

Why isn’t there more action?

Despite infrastructure’s fundamental role in the health and safety of the American people and the economy, the United States has underinvested for decades. Today, infrastructure spending as a share of gross domestic product is about 2.5 percent, much lower than the 3.9 percent in peer countries such as Canada, Australia and South Korea. The figure for Europe as a whole is closer to 5 percent and between 9 and 12 percent for China.

The McKinsey Global Institute estimates that the United States should spend at least an additional $150 billion a year on infrastructure through 2020 to meet its needs. This investment is expected to add about 1.5 percent to annual GDP and create at least 1.8 million jobs.



Applications accepted for ocean, fisheries programs through July
Resilience means bouncing back. (Credit: NOAA)


Two new NOAA grant programs will help coastal communities and their managers create on-the-ground projects to make them more resilient to the effects of extreme weather events, climate hazards, and changing ocean conditions.

This builds on NOAA’s commitment to provide information, tools, and services to help coastal communities reduce risk and plan for future severe events.

NOAA’s National Ocean Service is supporting the effort with $5 million in competitive grant awards through the 2015 Regional Coastal Resilience Grant Program and NOAA Fisheries is administering the companion $4 million Coastal Ecosystem Resiliency Grants Program.

“Coastal communities around the country are becoming more vulnerable to natural disasters and long-term environmental changes,” said Holly Bamford, Ph.D., assistant NOAA administrator for NOAA's National Ocean Service performing the duties of the assistant secretary of commerce for conservation and management. “These new grant opportunities will help support local efforts to build resilience of U.S. coastal ecosystems and communities, while finding new and innovative ways to mitigate the threats of severe weather, climate change and changing ocean conditions.”

The National Ocean Service 2015 Regional Coastal Resilience Grant Program will help coastal communities and organizations prepare for and recover from adverse events while adapting to changing environmental, economic, and social conditions. The grants will be awarded to organizations to plan and implement resilience strategies regionally to reduce current and potential future risks. Proposals are due by July 24.

The NOAA Fisheries’ Coastal Ecosystem Resiliency Grants Program will focus on developing healthy and sustainable coastal ecosystems through habitat restoration and conservation. The winning proposals will demonstrate socioeconomic benefits associated with restoration of healthy and resilient coastal ecosystems, support healthy fish populations, and demonstrate collaboration among multiple stakeholders. Proposals are due by July 2.

Each grant proposal may request between $500,000 and $1 million in federal funds for the Regional Coastal Resilience Grant Program, and between $200,000 and $2 million for the Coastal Ecosystem Resiliency Grants Program. Eligible funding applicants include nonprofit organizations, institutions of higher education, regional organizations, private (for-profit) entities, and local, state, and tribal governments.

Details on the grant programs can be found at the NOAA Fisheries Coastal Ecosystem Resiliency Grants webpage (http://www.habitat.noaa.gov/funding/coastalresiliency.html) and the NOAA Ocean Service Regional Coastal Resilience Grant Program webpage (http://www.coast.noaa.gov/resilience-grant/). To apply visit http://www.grants.gov/

NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.

There’s been a lot in the news recently about the vulnerability of the electric power grid in the United States. Last month’s incident in which a severed transmission line in Maryland cut power to much of Washington came on the heels of a March USA Today report about “bracing for a big power grid attack.” That report spotlighted a coordinated attack in April 2013 on Pacific Gas & Electric's Metcalf substation in California, which resulted in $15 million in damage to its fiber-optic lines and transformers.

“The country’s aging power grid leaves millions vulnerable and could have devastating consequences for not only everyday Americans, but some of the nation’s largest enterprises,” said Robert DiLossi, director of crisis management at Sungard Availability Services, a cloud computing, disaster recovery, and managed hosting services provider in Wayne, Pa. In a recent email interview, DiLossi shared some enlightening tips for CIOs and other IT leaders on how to prepare for an attack on the power grid.

“Increasingly, chief information officers and security leaders at enterprises are turning to resiliency plans to mitigate the impact of any attempt or success at hacking into their IT systems,” DiLossi said. “They are considering or employing several defenses in the event an attack strikes the nation’s power grid.”



Fraud is an increasingly serious threat for businesses around the world, eroding data integrity and security, consumer confidence and brand integrity. Based on the latest ACFE (Association of Certified Fraud Examiners) study, organizations lose 5 percent of revenue each year to insider fraud.

According to the study, the majority of insider fraud losses — as high as 80 percent — are caused by collusion of two or more employees, even though only 45 percent of the incidents are attributed to collusion. One reason why the losses are higher is that when more people are involved, there are more opportunities to commit fraud and it becomes easier to circumvent anti-fraud controls and conceal the fraud for longer.

Companies invest in implementing controls such as requiring that transactions above certain thresholds be authorized by a second employee and preventing the same person from re-activating an account and transferring funds. But just by coordinating their efforts, employees can work together to circumvent these measures.
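A control like the threshold rule mentioned above can be sketched in a few lines. This is a minimal illustration of the four-eyes principle — the threshold value and the function name are illustrative assumptions, not any particular vendor’s implementation:

```c
#include <stdbool.h>
#include <string.h>

#define DUAL_AUTH_THRESHOLD 10000.00   /* illustrative limit */

/* Four-eyes check: a transaction above the threshold is allowed only
 * when a second, different employee has approved it. */
bool transaction_allowed(double amount,
                         const char *initiator,
                         const char *approver) {
    if (amount <= DUAL_AUTH_THRESHOLD)
        return true;                   /* below limit: single sign-off */
    if (approver == NULL)
        return false;                  /* high value, no second approver */
    return strcmp(initiator, approver) != 0;  /* approver must differ */
}
```

As the article notes, colluding employees defeat exactly this kind of check by supplying each other’s approvals, which is one reason losses from collusion run so much higher.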



University of Pittsburgh Medical Center (UPMC) recently informed patients that some of their personal information may have been compromised.

And as a result, UPMC topped this week's list of IT security newsmakers, followed by BakerHostetler, Juniper Research and The MetroHealth System.

What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:



No enterprise is immune to bad ideas. Some of them can be spectacularly bad, like deserting loyal customers in order to chase new markets that never materialise, or betting the company on a technology that never actually works. A company can have everything going for it and still get it wrong. The case of Webvan with its e-tailing advantages of lower costs and better services targeting the wrong customer group is just one example. However, this kind of failure is not caused by one bad idea alone, but by one bad idea being accepted and pursued by the organisation overall. In other words, it’s groupthink, a frequent enemy of business continuity.



It’s been clear for some time that the traditional storage area network (SAN) has been under siege in the data center. With server infrastructure becoming increasingly distributed, both at home and in the cloud, a centralized array supported by advanced storage-optimized networking is increasingly seen as a hindrance to data productivity.

But if storage is to be distributed along with processing, how do you overcome the obvious difficulties of aggregating resources and establishing effective tiering capabilities? And how can you effectively scale storage independently from increasingly virtualized server and networking infrastructure in order to satisfy diverse requirements of emerging data loads?

One solution is the server SAN, says TechRepublic’s Keith Townsend. By leveraging server and storage convergence, systems like EMC’s ScaleIO and Nutanix can run traditional workloads on virtualized cloud architectures while still providing the SAN functionality that the enterprise has come to rely on.  Indeed, performance of more than 1 million IOPS is already being reported across several dozen to several hundred nodes, and free or community-based distributions are reducing start-up costs to near zero.



Once a month I use my blog to highlight some of S&R’s most recent and trending research. When I first became research director of the S&R team more than five years ago, I was amazed to discover that 30% to 35% of the thousands of client questions the team fielded each year were related to IAM. And it’s still true today. Even though no individual technology within IAM has reached the dizzying heights of other buzz-inducing trends (e.g., DLP circa 2010 and actionable threat intelligence circa 2014), IAM has remained a consistent problem and opportunity within security. Why? I think it’s because:



(TNS) — The more scientists learn, the more they are fine-tuning who is ordered to leave when a hurricane threatens and where and when officials open evacuation shelters.

And the result very likely will be that fewer, not more, people can expect to leave their homes, and still fewer will feel the need to use a hurricane shelter, officials said at last week's Florida Governor's Hurricane Conference.

The American Red Cross is doing a full review of its shelter guidelines, set to be finished in 2017. That's the same year the National Hurricane Center will start issuing a public watch and warning format that combines the traditional wind threats with storm surge. The timing is no coincidence.



(TNS) — When Mount St. Helens erupted 35 years ago Monday, killing 57 people and blanketing much of Central Washington in ash, officials were ill-prepared for the magnitude of the emergency.

“When the mountain blew, everyone was kind of out there on their own,” said Charles Erwin, emergency management specialist for the city of Yakima. “That’s what got the county started on doing disaster planning and coordinating with all the local jurisdictions.”

The explosion caused two different disasters on either side of the mountains. While the west side was dealing with mud and debris flows taking out bridges and roads, the prevailing winds pushed an estimated 520 million tons of ash eastward, turning Sunday morning in Yakima into midnight.



(TNS) — Under a new state law signed by Gov. Jay Inslee on Thursday, May 14, large railroads will be required to plan with the state for “worst-case spills” from crude oil unit trains, but exactly what that worst-case scenario looks like is not yet clear.

The law requires railroads to plan for the “largest foreseeable spill in adverse weather conditions,” but doesn’t define “largest foreseeable spill.”

In April, BNSF railway employees told Washington emergency responders that the company currently considers 150,000 gallons of crude oil – enough to fill five rail tank cars – its worst-case scenario when planning for spills into waterways. Crude oil trains usually carry about 100 rail tank cars.



Saturday, 16 May 2015 00:00

Risking It

The challenge of planning is significant; anyone who has ever been required to plan anything in detail will know that even thinking about planning is difficult and that the process can quite easily spin beyond the controllable. Plans can be effective or useless for various reasons, and the translation of thoughts into realities can be fraught with issues.

In attempting to make informed judgements, the perceived effectiveness of response and protective measures has traditionally been based on a combination of anticipation, information and intelligence assessment, and a suitable selection of mitigation measures. However, there is perhaps also an element of chance and luck in detecting and deterring any type of malicious activity, and this adds to the range of variables that can complicate an attempt to manage risks. Even the most thorough risk analysis will not be able to address every variable, and this will hamper the effectiveness of managerial processes in making an adequate contribution to pre-emptively managed protective efforts.



Editor’s Note: This is part of a series on the factors changing data analytics and integration. The first post covered cloud infrastructure.

It’s a truism that technology changes quickly and ages fast — and yet, despite massive network and computer evolutions, not much changed for data until Big Data came along.

For all practical purposes, Big Data was first seen as a natural extension of the relational database, but with larger amounts of data and faster processing speed. Almost immediately, though, vendors like IBM and research firms like Gartner pushed the definition of Big Data to include other data types -- semi-structured and unstructured data, delivered at high speeds, which can mean real time, near-time and streaming or, as I privately call it, all-time data.



CHICAGO – May is Building Safety Month, a public awareness campaign to help individuals, families and businesses understand what it takes to create safe and sustainable structures, and how building codes and code officials improve and protect the places where we live, learn, work, worship and play.

“We’re all at some level of disaster risk,” said Andrew Velasquez III, FEMA Region V administrator.  “It is important that we prepare now for the impacts that disasters can have on our homes, our businesses and in our communities.”

The power of natural disasters can be overwhelming. While you can't stop natural disasters from happening, there are steps you can take to increase your home's chance of survival, even in the face of the worst Mother Nature can dish out.

1. Reinforce your Residence. Consider retrofitting options, or steps to improve your home’s protection from natural disasters, including high wind events. One of the most common types of wind damage to a structure is called “uplift,” which occurs when a roof lifts and collapses back down on the house, causing costly damage. Fortunately, you can minimize the chances of this happening by installing straps connecting the structural members of your roof to the wall studs or columns.

Other risk reduction ideas include:
a. Use shingles rated for 90+ mph wind and use a minimum of four nails per shingle.
b. Make sure windows and doors are properly shimmed and nailed into the framed opening, tying the window and door frames into the adjacent studs, and 
c. Install a garage door that is designed for higher wind speeds.

FEMA recommends consulting with a certified home inspector to determine if these are viable options for your home. For even more home strengthening options, click here.

2. Fortify Your Home’s Floors. Homeowners can secure their structure to the foundation by using anchors or metal straps. Your builder should ensure there are properly installed anchor bolt connections between the plate and the foundation at least every four feet for maximum fastening to the foundation.

Consult with your local building code official as well as a certified home inspector to determine the best options for you. For more information on wind-resistant home construction techniques, click here.

3. Trim & Tighten. High velocity winds from thunderstorms and tornadoes can turn patio furniture, grills and tree branches into destructive missiles. In addition, if the area immediately surrounding your house contains trees, outbuildings, trash cans, yard debris, or other materials that can be moved by the wind, your house will more likely be damaged during a tornado or windstorm.

All storage sheds and other outbuildings should be securely anchored, either to a permanent foundation or with straps and ground anchors. The straps and ground anchors used for manufactured homes can be used as anchoring systems for outbuildings, such as garden sheds, which are not placed on a permanent foundation. Outdoor furniture and barbecue grills can be secured by bolting them to decks or patios or by attaching them to ground anchors with cables or chains. Trees should also be trimmed so they’re at a safe distance away from your home.

4. Elevation is a Smart Renovation. Flooding is a real risk, and elevating your home and its critical utilities can significantly reduce the risk of water damage. Elevating your home may even reduce your flood insurance premiums. Contact your local floodplain manager to learn the flood risk and elevation requirements for your residence. For more information on elevation techniques to protect your home from flood damage, click here.

5. Assure You’re Fully Insured. Take the time to review your insurance coverage. Are you adequately insured for the risks your community faces? Are you covered for wind, flood and sewer backup? Has your policy been updated to reflect the value of your home? For a list of questions to ask your insurance agent, click here. Many homeowners find out too late that their insurance coverage has not increased with the value of their home. Contact your insurance agent to get these questions answered and ensure your home is financially protected.

To learn more about Building Safety Month and how you can protect your home, business and valuables, visit www.iccsafe.org.  For even more readiness information follow FEMA Region V at twitter.com/femaregion5 and facebook.com/fema. Individuals can always find valuable preparedness information at www.Ready.gov or download the free FEMA app, available for Android, Apple or Blackberry devices.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

The prevailing wisdom holds that cloud architectures will float comfortably on a layer of virtualization that itself will rest on commodity hardware. As long as underlying bulk resources are available in sufficient amounts, all of the fine-tuning and optimization for higher-level applications and services can be done on abstract, software-defined planes.

This isn’t necessarily wrong, but it isn’t the whole truth either – at least according to those who are developing next-generation, cloud-optimized hardware.

For the current crop of hardware vendors to survive much longer, it is hard to see how they can avoid devising cloud-facing product lines. According to IDC, about 30 percent of the IT hardware spend is in support of cloud infrastructure, up more than 14 percent from a year ago. The private cloud alone accounts for some $10 billion in revenue, generating annual growth of about 20 percent, while public infrastructure spending tops $16.5 billion and is growing at 17.5 percent per year.



(TNS) — The ER was already busy, close to full — gunshots, car wrecks, strokes — when the “get ready” call came in at 9:45 p.m.

By 10:30, they began arriving by police car, ambulance, anything.

By midnight, 54 had made it to Temple University Hospital, which treated more passengers from Amtrak’s Tuesday night disaster than any other emergency room.

The most critical patients were rushed into one of the three trauma bays just inside the ER door. Teams of doctors and nurses were assigned to each bay, responsible for stabilizing patients and moving them through with skill and speed, making room for the next.



Most IT organizations provide services to the business in several forms. According to author Terry Critchley, a service comprises three things:

  • Products
  • Processes
  • People

Each of these elements comes together to ensure that required business functions are available. But every service has the potential for failure and outages, even though today’s world demands that uptime be as close to 100 percent as possible. In this scenario, IT must use all of its technologies to provide this availability, including virtualization, cloud computing, disaster recovery, business continuity and strong security. Still, human factors can prevent services from being available, too.
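The availability consequence of that three-part dependency can be sketched with simple serial-availability arithmetic (a back-of-the-envelope model, not taken from Critchley's book): if a service needs its products, processes and people all working at once, overall availability is the product of the parts, so each added dependency erodes uptime.

```python
# Minimal sketch: a service that depends on products, processes and people
# is up only when all three are up (a serial-dependency assumption).
def service_availability(*components: float) -> float:
    """Multiply component availabilities to get overall availability."""
    result = 1.0
    for a in components:
        result *= a
    return result

# Three components at 99.99% each still fall short of 99.99% overall.
overall = service_availability(0.9999, 0.9999, 0.9999)
print(f"{overall:.6f}")  # 0.999700
```

The point of the toy model is only that availability targets must be set per component, not just for the hardware and software in isolation.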



When drive-by drills, known as lockdown in most of the country, were widely used in response to school shootings with little or no adaptation of tactics, we began down a path that ultimately led to the tragic shooting at Sandy Hook that took 26 innocent lives. There were stops along the way in places called Columbine, Virginia Tech, Aurora and many others. These were all opportunities to learn that our model for response was at great risk from those who would seek to use our plan (or lack of plan) against us.

Plans continued to emphasize single-option lockdown, dependent on classrooms as the location for a response. Vague and largely unworkable mentions of reverse evacuations or reverse fire drills back to classrooms for active threats or terrorism inside the building, in preference to facility evacuation, continued to be widely used. The single-option hiding concept became common practice in buildings, even when every room was occupied. Shoving people into bathrooms, closets, under desks and into corners became the recommended practice, despite the tragic effects of limiting movement. Being mobile in a crisis increases survivability.



(TNS) — Disaster recovery just from extreme weather and wildfires cost American taxpayers $300 billion in the past decade, the White House's former "resilience" specialist told the general session of the 29th annual Florida Governor's Hurricane Conference.

"That is just what Uncle Sam spent," Josh Sawislak told the conference. He said the figure doesn't count billions in insured and uninsured losses by individuals, businesses and local governments. Nearly half of that was just from 2011 to 2013.

"So when someone tells me, 'We can't afford to pay for resilience,'" Sawislak said, "I immediately ask, 'How can we afford not to?'"



(TNS) — Tuesday night's fatal derailment was the worst Philadelphia train disaster in decades. The timing seemed chillingly prophetic: Just one day before the crash, the city's Office of Emergency Management had held a "mass casualty workshop" with police, fire and health personnel.

Moments after Train 188 careened off the tracks, emergency calls went out across the city and scores of first responders rushed to the scene to find the mangled bodies of those killed and more than 200 injured and bloodied passengers.

Here's a look at how the city's response unfolded throughout Tuesday night and into Wednesday:



What do Edward Snowden, the U.S. PRISM scandal and the corporate data hack on Sony Corp. have in common? All involved breaches in data security and sovereignty. While the cloud offers many benefits--such as cost savings, scalability and flexibility--there are also added risks. Data security always tops that list of risks.

To combat these risks, it’s crucial for service providers to have a fundamental understanding of data security and data sovereignty. Use these 10 facts as your foundation to ensure you’re offering customers the best security, reliability and performance in the market.



When a disruptive incident impacts critical national or regional infrastructure, or when public safety is at stake, multiple emergency agencies are often involved in the response.

Those responders could be from federal or state agencies as well as local teams of EMTs, police, firefighters and other volunteers. Emergency response organizations specialize in a certain aspect of response based on their skill sets. From coast guards, firefighters and bomb-disposal squads to EMTs, animal control, hazmat clean-up crews and cyber experts, those teams’ skills and actions are generally unique, well defined and perfected through regular practice.

In the event of a multi-disciplinary emergency response, command, control and communication (between the responders) are critical for an effective – and efficient – response. Protocols for collaboration among responders are defined by NIMS (the National Incident Management System), of which the Incident Command System (ICS) is a critical component.



Taking the whole concept of data security to its most logical conclusion, Secure Islands has come up with a method that automates the application of security to any piece of data, depending on how it’s classified, as that data is being generated.

Secure Islands CEO Aki Eldar says version 5.0 of the IQProtector Suite (IQP) adds what the company describes as a Data Immunization process. IQProtector automatically assigns security controls to data at the point that data is actually created, regardless of location. Those controls then attach themselves to that data wherever it is consumed.

Based on rights management technology developed by Microsoft, Secure Islands has different renditions of IQProtector for endpoints, servers, clouds and applications to make sure that wherever data is created, a security policy gets enforced.
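The classify-at-creation idea can be illustrated with a short sketch. This is purely illustrative Python under assumed names (`POLICY`, `ProtectedDocument`, the keyword classifier), not Secure Islands' actual IQProtector API: content is classified the moment the object is created, and the resulting controls are attached so they travel with the data.

```python
from dataclasses import dataclass, field

# Illustrative policy table (an assumption, not a real product schema):
# controls are looked up by classification and attached at creation time.
POLICY = {
    "confidential": {"encrypt": True, "allow_external_share": False},
    "public": {"encrypt": False, "allow_external_share": True},
}

@dataclass
class ProtectedDocument:
    content: str
    classification: str = field(init=False)
    controls: dict = field(init=False)

    def __post_init__(self):
        # Naive keyword check standing in for real content inspection.
        sensitive = any(
            word in self.content.lower() for word in ("salary", "ssn", "contract")
        )
        self.classification = "confidential" if sensitive else "public"
        # Controls are bound here, at creation, and stay with the object.
        self.controls = POLICY[self.classification]

doc = ProtectedDocument("Q3 salary review for the finance team")
print(doc.classification, doc.controls["encrypt"])  # confidential True
```

The design point being sketched is that enforcement metadata is assigned where the data originates, so downstream consumers never see an unclassified copy.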



There’s been plenty of attention paid over the past few years to what appears to be a growing IT skills gap. Managed service providers (MSPs) can help alleviate the pain of this gap for customers by providing services that customers would normally handle in-house. For instance, they can offer and manage cloud-based file sharing and other IT services.

In her recent article for FierceCIO.com, Sarah Lahav weighs in on the IT talent shortage.

“Whether the IT talent shortage is myth or reality, I believe IT leaders can agree on at least one thing: some roles are harder to fill than others,” says Lahav.  “The needs of IT and the business have shifted faster than educators and professionals adapt.”



Thursday, 14 May 2015 00:00

High Availability IT Services

Reliability and Availability

This book starts with the basic premise that a service comprises the 3Ps—products, processes, and people. Moreover, these entities and their sub-entities interlink to support the services that end users require to run and support a business. This widens the scope of any availability design far beyond hardware and software. It also increases the potential for service failure for reasons beyond just hardware and software: the concept of logical outages.



With so many of today's businesses dependent on SAP as the core technology platform for some of their most critical business functions, it would follow that IT organizations would dedicate significant effort in securing SAP systems. But the truth is that SAP and other enterprise resource planning (ERP) software remain largely forgotten by even the most security-conscious organizations today. And the attackers have found this gap.

For years now, security researchers have warned of hefty security vulnerabilities in SAP that make it possible to create ghost accounts, change records in some of the most sensitive financial tracking applications and use the platform to break into other connected systems. And while security researchers and consultants confirm that attackers are already exploiting these vulnerabilities for malicious purposes, these attacks have largely gone unreported to the public. That all changed this week.



Thursday, 14 May 2015 00:00

How Would You Hire an Emergency Manager?

Let’s suppose you want to fill a position in your organisation by hiring an emergency manager. The role of this person is to coordinate the actions of different services responding to a sizable disaster, to translate strategy into tactics, and to keep senior officials or management informed of the situation and progress towards resolution. So far, so good – except this kind of person, or experience, doesn’t grow on trees. However, it is a role that is needed in many public sector areas, including utilities, health, education, airports and port authorities. You could place an ad asking for candidates, but what do you then need to know to evaluate applications?



DENTON, Texas – People who live in Texas are urged to get ready now for the possibility of flooding, following days of rain and with more potential rain in the forecast.

The Federal Emergency Management Agency’s (FEMA) Region 6 office continues to monitor the flooding threat across parts of the state and stands ready to support state and local partners as needed and requested in any affected areas.

Know Your Risk Before a Flood:

•    Do your homework. Be aware of the potential flooding risks for the particular area where you live.
•    Familiarize yourself with the terms used to identify a flooding hazard. Some of the more common terms used are:

  • A Flash Flood Watch: Flash flooding is possible. Be prepared to move to higher ground; monitor NOAA Weather Radio, commercial radio, or television for information.
  • A Flash Flood Warning: A flash flood is occurring; seek higher ground on foot immediately.

Take Action Before and During a Flood:

•    Build an emergency kit and make a family communications plan.
•    Listen to local officials and monitor your local radio or television for information.
•    Do not drive into flooded areas. Turn Around; Don’t Drown. Two feet of rushing water can carry away most vehicles.
•    Do not walk through flowing water.  Six inches of swiftly moving water can knock you off your feet.
•    Wireless Emergency Alerts (WEAs) are now being sent directly to many cell phones on participating wireless carriers' networks.  WEAs sent by public safety officials such as the National Weather Service are designed to get your attention and to provide brief, critical instructions to warn about imminent threats like severe weather.  Take the alert seriously and follow instructions.  More information is available on WEA at www.fema.gov/wireless-emergency-alerts.

Visit www.ready.gov or www.nws.noaa.gov for more information on preparing for floods or other disasters.


Follow FEMA Region 6 on Twitter at http://twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov

It’s clear that our relationship to data is changing, both in terms of how we work with data collectively and how we relate to it as individuals. That, in turn, is triggering changes in the underlying technologies.

Integration technology in particular is in the spotlight these days. After all, you can use the data only as fast as you can integrate, wrangle or blend the data. That’s leading to a lot of talk from vendors about “modern integration” that’s less concerned with on-premise, batch integration and more concerned with real-time access for business users.

At Informatica’s recent customer conference, CEO Sohaib Abbasi identified four disruptive technology trends changing data. His opinion is more significant than most because he heads one of the industry’s leading integration vendors, and despite a thriving data integration market, that company was recently acquired.



The data center has been the foundation of enterprise IT operations since the dawn of the computer age, so it is understandable that there is a lot of uncertainty now that it is undergoing the most monumental change in its history.

Indeed, many executives are still trying to wrap their heads around the idea of having no data center at all, or at best a rack or two of modular boxes devoted to maintaining access to external applications and resources.

But those who have been to the mountaintop say that the other side is indeed a lush, green valley in which advanced services and capabilities can be had at low cost and with little effort, and that the flexibility that comes from shedding fixed hardware assets more than makes up for the loss of direct control over infrastructure. The key, though, is to first realize that the new data environment does not serve the same purpose as the old, and then to learn how to leverage that app-centric, service-based environment for your business model.



(TNS) — Investigators rushed to the scene of a derailed Amtrak train in Philadelphia Wednesday morning as the death toll climbed to six after the deadly accident in one of the nation’s busiest transportation corridors.

Dawn showed the extent of the devastation from the Tuesday disaster with all seven cars of the Amtrak train askew, off the rails in a chaotic wreck. One car was seemingly collapsed like an accordion and three cars were overturned. The other three were a twisted mess.

Hundreds of rescue workers using heavy equipment were at the scene, searching for survivors.

“It is an absolute disastrous mess,” Philadelphia Mayor Michael A. Nutter told reporters. “I have never seen anything like this in my life.”

The train was carrying 238 passengers and five crew when it left Washington for New York Tuesday.



(TNS) — East Naples might not be the place most people think of when they think of rising sea levels, but that's what Jerry Kurtz sees.

On the north side of U.S. 41, not far from the Walmart, a weir that controls water flows into Haldeman Creek and eventually Naples Bay is one of four aging weirs that sit on the county's front line against climate change.

With the National Oceanic and Atmospheric Administration predicting sea levels to rise as much as 2 feet by 2050 and by as much as 6.6 feet by 2100, the new weirs being planned need to be built to handle any extra water that might slosh their way, Kurtz said.



Wednesday, 13 May 2015 00:00

Cyber Losses vs. Property Losses

The financial impact of cyber exposures is close to exceeding those of traditional property, yet companies are reluctant to purchase cyber insurance coverage.

These are the striking findings of a new Ponemon Institute survey sponsored by Aon.

Companies surveyed estimate that the value of the largest loss (probable maximum loss) that could result from theft or destruction of information assets is approximately $617 million, compared to an average loss of $648 million that could result from damage or total destruction of property, plant and equipment (PP&E).

Yet on average, only 12 percent of information assets are covered by insurance. By comparison, about 51 percent of PP&E assets are covered by insurance.




Nearly nine in 10 financial services firms plan to increase their investment in risk management capabilities in the next two years in response to the emerging risks of cyber security and fraud, according to a new report from Accenture.

The Accenture 2015 Global Risk Management Study – based on a survey of more than 450 senior risk management executives in the banking, capital markets and insurance industries – found that 86 percent of respondents said their organizations plan to increase their investment in risk management capabilities in the next two years, with one in four (26 percent) planning to increase it by more than 20 percent. In addition, three in 10 respondents (29 percent) said their companies plan to increase by more than 20 percent their investment in cloud / software-as-a-service (SaaS) and big data and analytics.

The report found clear evidence of the increasing impact that cyber security and fraud is having on financial services firms’ business and the risk management function in particular.  For example:

  • More than one-third (34 percent) of respondents said that understanding cyber risk will be the most-needed capability in their risk function.
  • Nearly two-thirds (65 percent) of respondents said that cyber/IT risk will have an increased impact on their business in the next two years, with 26 percent saying that the increase would be significant.
  • More than eight in 10 respondents (82 percent) said that emerging risks, such as cyber and social media, account for more of the chief risk officer’s (CRO) time than ever before.

“The combination of market forces, advances in technology and customer demands are pushing financial institutions to become more digital and requiring a broader range of skills from today’s risk management professionals,” said Steve Culp, senior global managing director for Accenture Finance and Risk Services. “Financial services firms are struggling to keep pace with the demand for people with highly specialized skills, such as cyber risk experts, business analysts, security specialists and fraud experts. To fill these gaps, most firms will have to look outside of their organizations — and the competition for the right people is increasingly intense.”

The report indicates that the surging demand for talent by financial services institutions in recent years shows no signs of abating. While firms are focusing on enhancing their specialized skills, fewer than half (41 percent) claim to have extensive skills in understanding digital technologies. Only 10 percent said that their risk function has the resources needed in specialized areas like emerging risks.  Many respondents said that in the past two years, their recruiting has targeted cyber risk experts (cited by 48 percent of respondents) and fraud experts (36 percent), and 36 percent of firms said they have hired former hackers.

Rising impact of digital

In response to today’s low-growth, low-return environment, financial institutions are focusing on new paths to profitability. As a result, risk appetites are increasing, although in a targeted fashion.  More than four in 10 financial services firms (43 percent) said they have a higher risk appetite for developing new products than they had two years ago, and more than one-third (36 percent) have a greater appetite for taking on major digital initiatives. 

“At a time when the regulatory focus has never been keener, financial services firms are taking a hard look at their existing strategies and starting to identify where they want to extend their business to achieve growth,” Culp said. “The willingness to accept greater business risks will also expose financial services firms to emerging risks – including cyber, data privacy, reputational, social media and new conduct risks – requiring risk professionals to play an enhanced role.”

Nearly three-quarters (73 percent) of respondents said that managing emerging digital risks and the increased velocity, variety and volume of data challenge their ability to be effective. Fewer than one in 10 (9 percent) said that consistent and updated data is regularly available to decision makers across the organization.

Increased role of the risk function 

Increasingly, CROs seek to play a more strategic role in their companies. Only 36 percent of capital markets respondents and 29 percent of banks said that, when delivering regulatory change programs, their senior managers go beyond basic regulatory compliance, such as by integrating with ongoing change initiatives.  For firms that go beyond basic compliance, there is much greater coordination on regulatory issues between the risk function and the rest of the business.

At the same time, the majority of financial services firms have some distance to travel before risk management becomes fully aligned with broader strategic planning.  While more than eight in 10 respondents (83 percent) said they believe that risk management has contributed to enabling long-term profitable growth for their company, nearly three-quarters (73 percent) said that gaining the trust of the business is a top challenge to their effectiveness. Fewer than one in five respondents (17 percent) said that their companies have a framework that supports major strategic decision-making with input from risk management.

“CROs can help their institutions become digital leaders by capitalizing on the insights generated from the wealth of data they hold,” Culp said. “While many have said the increase in data has posed a challenge, risk teams can free up time by automating data collection and analysis in order to focus on more strategic management activities. Better data is required by regulation, but it will also help CROs advise their stakeholders on meeting key goals around risk-adjusted profitability and performance.”


A great start to the Australasian BCI Summit in Sydney today. If you are reading this at the Summit please come and find me to say hello. If you are not able to attend the event you can still interact with attendees and the ideas being presented via the Twitter tag

The theme for the conference, “Looking to the future, learning from the past”, is intriguing, and it will be interesting to see whether the event realises that potential. A good start thus far!



As so many IT security experts and analysts have preached through the years, small to midsize businesses (SMBs) should be just as concerned with cybersecurity as large enterprises. It seems the warnings are finally sinking in. A recent survey by the Endurance International Group shows that 81 percent of SMBs are currently concerned about cybersecurity and 91 percent think about it “often.”

In a release, Hari Ravichandran, CEO of Endurance International Group, says it’s time for small businesses to take cybersecurity to heart, but perhaps more should be done:



As a business continuity or disaster recovery professional you’ve probably put in a lot of effort setting up your emergency mass notification system. You’ve likely already:

  • Determined the different user types your system will support, as well as what security/permissions each user type will have.
  • Confirmed how to get your user/stakeholder information into the system...either via upload, integration with another software platform, or via self-registration of your users.
  • Set up user groups and uploaded important crisis communication-related documents.
  • Linked your ENS with the appropriate social media accounts.
  • Integrated your ENS with various external notification devices such as digital displays, sirens, and desktop disruption.
  • Developed notification templates.
  • Tested the system, and more.
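One of the checklist items above, pre-built notification templates, can be sketched in a few lines. The template text, field names and severity value here are all hypothetical, chosen only to show how a stored template with placeholders gets filled at send time:

```python
from string import Template

# Hypothetical pre-built ENS template; placeholders are filled in
# at the moment a notification is actually sent.
SEVERE_WEATHER = Template(
    "ALERT ($severity): $event at $location. $instructions "
    "Reply YES to confirm receipt."
)

# Filling the template during an (assumed) flash-flood incident.
message = SEVERE_WEATHER.substitute(
    severity="HIGH",
    event="Flash flood warning",
    location="Main Campus",
    instructions="Move to higher ground immediately.",
)
print(message)
```

Preparing templates like this in advance is what lets operators send accurate, consistent messages under pressure instead of composing them from scratch mid-incident.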



If disaster, such as a flood or power outage, struck right now, would you be prepared to recover your vital data and applications to continue business operations? Do you have a business continuity plan in place to make sure you’re never left in the dark – unable to get work done?

Learn more about the great work Keith and his team at Procyon Solutions are doing to help prepare businesses in Little Rock for any upcoming disaster.

Keith Jetton from Procyon Solutions knows the importance of having a business continuity plan in place.

“We take a different approach to business continuity than most other IT companies. We’re seeing lots of technology move to the cloud – starting with email, file sharing, and phones, all hosted in the cloud,” he said.



I wrote a post last week about a study commissioned by Dell and Intel that provided some enlightening information about employees’ technology adoption and expectations.

Beyond what was covered in that post, Steve Lalla, a Dell vice president and general manager who contributed the commentary, was also able to address how this type of research helps guide Dell’s strategy, and what’s changed since Dell last conducted this survey in 2011.

The 2014 “Global Evolving Workforce Study” was commissioned by Dell and Intel, and conducted by TNS, a global market research firm. As for how this type of research aids Dell in its strategic planning, Lalla said that Dell launched the initiative to fully understand exactly how technology is shaping the workforce of the future and in turn, help its customers respond to the challenges and opportunities of the evolving workforce. He listed three “actionable insights” for Dell and its customers that came out of the study:



Tuesday, 12 May 2015 00:00

Where the Enterprise Is Cloud-Wise

How does the typical enterprise view the cloud, and will a consensus ever emerge as to how clouds are to be architected and utilized?

Believe it or not, we are still very early in the cloud transition, and the truth of the matter is, we could be a good two to three years away from seeing the cloud firmly established as the dominant form of IT infrastructure. In that time, expect to see a myriad of platforms, architectures, service configurations and other advancements, many of which will fail to gain traction or emerge as hot prospects only to fade over time.

But if you could take a snapshot right now, what would be the dominant themes within the cloud computing movement, and do they have the stuff to stand the test of time?



Can military principles and processes really be applied to corporate crisis management? Jonathan Hemus thinks they can…

By Jonathan Hemus, managing director, Insignia

Managing crises is, hopefully, a rare experience for most organizations. For the armed forces it’s part of their daily lives. Crisis management terms that are bandied around in corporate circles (tactics, strategy, exercising, war-gaming) are well rehearsed techniques and practices deployed by the armed forces to manage life and death situations. But can military principles and processes really be applied to corporate crisis management? With the 'command and control' approach to management in disrepute, it would be easy to assume that what works in the armed forces would backfire in the commercial world.

Look more closely though and the parallels are clear: scenario planning (a military discipline) is a critical part of preparing to manage a crisis. Giving your 'troops' the training they need to work quickly, efficiently and effectively under intense pressure is a prerequisite for crisis management success. Rehearsing your team and plan with crisis simulation exercises in 'peacetime' is the only way of finding out whether it will work for real.



Tuesday, 12 May 2015 00:00

Ten crisis management tips

An unexpected crisis can ruin a hard-won reputation, decimate your bottom line, and put the future of your company in jeopardy. Having a strategic plan in place in case the worst happens is vital insurance for any company, according to Jane Kroese, PR director at KISS PR.

“Some managers are reluctant to undertake crisis planning: crisis is by its nature unpredictable, making it difficult to know where to start. Acknowledging that you could face an emergency is uncomfortable, and it’s not always clear where crisis planning should fit amongst your day to day tasks,” explains Jane.

“There are far too many companies that are not adequately prepared for a crisis. But crossing your fingers and hoping it won’t happen to you isn’t good enough. Even if you’re committed to the highest standards and always implement best practice, a crisis could come from an unexpected place: the actions of a member of staff, a sector-wide emergency or a problem with a supplier or distributor could impact your business too.”

“A crisis can be an opportunity. When we produce crisis strategies, we aim for the company’s reputation to be equal to the status it had before the crisis - if not better. With a strong plan you can not only avoid damage, but come out ahead. No one can control a crisis, but they are most open to positive influence through strong communications in their earliest stages. Having a good plan in place allows you to react quickly and appropriately.”

Here are KISS PR’s ten tips for crisis management:

1. Have a strong communications plan.  This will help maintain good relationships with all your stakeholder groups. These relationships are tested in a crisis, and these are the people you may need to call on for their support. Remember your stakeholders aren’t just your customers: they include your staff, neighbours and journalists.

2. Scan for potential risks and issues. If you have good communication with your stakeholders you can spot an issue when it emerges, and intervene before you have a crisis on your hands. Good issues scanning depends on monitoring developments in your sector, legislative changes, media attitudes and the behaviour of your competitors, and being responsive to your customers’ needs.

3. Identify your key spokespeople.  Ensure that all key spokespeople have been trained in handling crises and dealing with the media. Your spokespeople should be members of senior management who can keep calm under pressure and will be comfortable speaking to journalists at short notice.

4. Have a well co-ordinated crisis team. During a crisis all communications should be co-ordinated by the crisis team: advise your staff to direct external enquiries to them, and not to speak to the media on their own initiative! Appoint alternates for your team, in case someone is off that day or you have a long crisis and need to rotate your personnel. Remember it’s too late to learn the skills you need during the crisis – don’t wait until you have an emergency on your hands.

5. Have your crisis communications plan ready. Each crisis is different, but you can have your media lists, fact sheets and even holding statements prepared in advance. This will give journalists something to work from while you investigate the crisis and ascertain the facts. You want to be in as much control as possible from the start, and a pre-prepared media pack will help. Don’t forget to store copies of all your crisis materials off site in case there’s an emergency at your premises.

6. Regularly update your stakeholders and media throughout the crisis. Be proactive in approaching your media contacts and providing them with information: you want to be seen as the authoritative source of information on the crisis, and you don’t want the public getting their information from other - potentially prejudiced - sources.

7. It’s OK to admit you don’t have all the answers yet. Tell people what you’re doing to investigate the crisis and when you expect to have the information they need. Don’t say anything you’re not certain of, or make promises you won’t be able to keep.

8. Act quickly to address any information you know to be wrong. Swift and direct clarification rectifies the situation. It’s important to keep on top of what’s being said about you during a crisis.

9. Online speculation means your crisis activity now needs to be 24/7. The internet is the first place your stakeholders will go when they’re looking for information on the crisis, and they will expect to be able to contact you directly on your social media channels. Resource will need to be directed to responding quickly, accurately and reassuringly to points made and questions asked across all your streams.

10. You need to give thought to how you will rebuild your reputation after a crisis. What would a crisis ‘win’ look like for your company? After the crisis has passed and your investigation has concluded it might become clear your company wasn’t at fault, and it’s to your advantage to communicate this effectively. Ask what you can learn from the crisis to re-position your company.


Building a lean, mean supply chain machine is the dream of many organisations. On the face of it, lean sounds like a good idea. By streamlining and simplifying processes, and by cutting out flab and wastage, enterprises can boost productivity and profitability, and of course end-customer satisfaction. Just the muscle without the adipose layers is the goal. Companies aim for ever fewer suppliers, fewer product touch points and faster operations. Yet there comes a point where a supply chain starts to look more like a skeleton than a living, evolving business organism. It is at this point that the slightest shock to the system can break it. In other words, the fragility of your supply chain becomes a major risk for your business continuity.



Business users aren’t just technology savvy these days. They’re also increasingly data savvy, and that’s led to a major shift in what business users expect when it comes to accessing and using data, according to data integration veteran Sachin Chawla.

“These guys don’t even exactly know the questions,” Chawla said during an interview with IT Business Edge. “They want to start playing with the data and then the questions will emerge as they do that, and the value will emerge as they do that. So it’s more about exploration than ‘Oh, tell me how much product we’ve sold in this region every month.’”

This represents a significant shift from the traditional approach, in which business users request reports that may take IT months to produce, Chawla said.



Cloud security has always been a sensitive topic. For many years, security was listed as the number-one reason why companies shied away from adopting cloud technologies. Cloud security has improved considerably over the years, but a survey conducted by Perspecsys shows just how far we have to go, especially when it comes to understanding where and how data is protected.

While at RSA, the folks from Perspecsys surveyed more than 125 attendees about data control in the cloud and more than half (57 percent) said they don’t have a complete picture of where their sensitive data is stored. Perhaps more alarming, 48 percent of the respondents said they don’t have a lot of faith in their cloud providers to protect their data. And because of this lack of trust, cloud adoption is slowed.

Maybe we haven’t come that far in cloud security, or at least in the perception of cloud security, after all. Although, I have to say, the findings in the Perspecsys survey are a lot more encouraging than the results of a Ponemon Institute study from a year ago, which found, according to eSecurity Planet:



(TNS) — Lawmakers and federal officials trying to overhaul the National Flood Insurance Program are considering dismantling a sprawling system that relies on more than 80 separate companies to sell policies, collect premiums and calculate damages after disasters.

The move, in response to allegations that claims were underpaid after superstorm Sandy, would dramatically reshape a government initiative that insures 90,000 homes and businesses on Long Island and 5.2 million nationwide.

Though the federal government underwrites flood insurance, it has long hired private companies including Allstate, Travelers and others to sell and manage policies. Those partnerships have allowed Washington to provide coverage without the staff and infrastructure of an entire insurance company.



Sally Beauty Holdings (SBH) has begun investigating a data breach that may have affected 25,000 customer records.

And as a result, the professional beauty supplies company topped this week's list of IT security newsmakers, followed by Consumer Reports, Tiversa and Ponemon Institute.

What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:



(TNS) — In 2008, a magnitude 7.9 earthquake left a path of destruction in the Chinese province of Sichuan, leveling whole communities and leaving as many as 88,000 dead.

The chaos and confusion were made worse because the temblor disabled more than 2,000 cellphone towers, leaving huge communication gaps that lasted weeks.

On Friday, Los Angeles became the first city in the nation to approve seismic standards for new cellphone towers, part of an effort to strengthen communications infrastructure in preparation for the next big quake.



Transportation departments have spent more than $1 billion since last October plowing highways, salting roads and coping with winter weather, according to a new survey.

The tally of 23 states, conducted by the American Association of State Highway and Transportation Officials (AASHTO), put the total cost at more than $1.13 billion. The full costs are higher, as several snowy states did not provide figures for the survey. 

This is the first year that AASHTO conducted the survey. The most recent winter was milder in much of the United States than the one before, but the impact varied by region.

Pennsylvania spent the most of any state in the survey, with expenses of $272 million. The state transportation department estimates it took 2.5 million man hours to respond to the storms.



For SAP, the rise of the Internet of Things (IoT) is not so much about connecting things to the Internet as it is automating business processes.

At the recent Sapphire Now conference, SAP outlined how it will make use of a lightweight implementation of the SAP HANA in-memory computing platform to push both transaction processing and analytics as far out to the edge as possible via a cloud-enabled IoT service running on top of SAP HANA.

But Michael Lynch, global co-lead for IoT at SAP, says that’s really only the first step. The second step is to then begin moving from the realm of predictive analytics to a world where prescriptive analytics enable business processes to be dynamically adjusted in real time. For example, the appearance of a tropical depression off the coast of North America would change flight schedules, which would then trigger the sending of an alert to passengers, and also dispatch a car service to pick up passengers to bring them to the airport at the new time.
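Lynch's example can be sketched as a simple event-driven rule. The following is an illustrative sketch only, not SAP code; every name (the event fields, flight records, and action tuples) is a hypothetical stand-in for what a prescriptive-analytics pipeline might emit.

```python
# Hypothetical prescriptive rule: a weather event triggers downstream
# business-process adjustments (reschedule, notify, dispatch).

def handle_weather_event(event, flights):
    """Delay affected flights and emit follow-up actions."""
    actions = []
    for flight in flights:
        if flight["route"] == event["affected_route"]:
            # Push the departure back by the expected delay.
            flight["departure"] += event["expected_delay_hours"]
            actions.append(("notify_passengers", flight["id"], flight["departure"]))
            # Send the car three hours before the new departure time.
            actions.append(("dispatch_car_service", flight["id"], flight["departure"] - 3))
    return actions

storm = {"affected_route": "JFK-MIA", "expected_delay_hours": 4}
flights = [{"id": "AA101", "route": "JFK-MIA", "departure": 9},
           {"id": "AA202", "route": "LAX-SEA", "departure": 11}]

for action in handle_weather_event(storm, flights):
    print(action)
```

The point of the sketch is the prescriptive step: the analytics output does not just predict the delay, it directly drives the rebooking and passenger-logistics actions.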



(TNS) — The Department of Defense raised the security level Friday at military bases across the United States in response to growing concern that they could be targeted for attack.

Under Force Protection Condition Bravo — the third of five security levels — more guards may be deployed at base entrances, and people and goods entering bases are likely to be subjected to closer scrutiny.

A spokesman for U.S. Northern Command said it's the first time the security level has been raised nationwide since Sept. 11, 2011, the 10th anniversary of the attacks on New York and Washington.



E-commerce business models have many advantages over brick-and-mortar retailers, including lower overhead, more flexibility in product and price testing, and more opportunities to manage inventory at optimal levels based on shopper behavior and current web analytics. However, an e-commerce business can’t escape all the realities of merchants with physical storefronts—including shoplifters.

Here are six tips for preventing virtual shoplifters:



Monday, 11 May 2015 00:00

Hail Claims Add Up During April

We’re reading about the economic and insurance impact of severe thunderstorms in the United States in April 2015, as reported by Aon Benfield’s latest Global Catastrophe Recap report.

Five separate thunderstorm events in central and eastern parts of the U.S. caused expected insured losses of $2 billion, including more than $750 million from one event alone.

What was the $750 million event?

A widespread multi-day severe weather outbreak that hit central and eastern parts of the U.S. from April 7-10, leaving at least 3 dead and dozens injured.



Tech career news this week included taking a fresh look at roles in cybersecurity, imagining what a day without data would be like, new hiring problems in Silicon Valley and more.

Cybersecurity Hiring Hot – and Cool

Hiring in security-related IT positions has been strong for a while now, and Ben Johnson, chief security strategist with Bit9 + Carbon Black, says demand will continue to be high for several reasons, not the least of which is that mainstream culture is making the job look cool. In “Latest Cybersecurity Crisis: Where’s the Talent?” Johnson shares advice for those who want to break into the area, and those responsible for doing the hiring, including how to leverage existing skill sets and how to redefine roles and team needs.



The data industry is naturally buzzing about the new Tesla Powerwall battery. As a relatively low-cost means to capture and store energy, it makes not only an effective back-up solution but also a means to utilize solar, wind and other renewable sources during long periods of inactivity.

But as with any solution, there are always a few trees in the forest, and with batteries we run the very real risk of simply trading one set of problems for another.

Tesla, of course, is not the first company to develop a high-capacity battery solution, nor is it the first to utilize lithium-ion (Li-ion) as the primary power source. But if initial claims are true, the company has come up with a reliable, easily deployable solution capable of hitting a very reasonable price point of about $350 per kWh, which should make many facilities managers jump for joy. These costs, however, do not include installation, maintenance and other factors, so organizations will need to do some number crunching before signing on the dotted line.
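The number crunching mentioned above can start very simply from the $350-per-kWh figure. In this sketch the installation overhead is a purely illustrative assumption (50 percent), not a figure from the article or from Tesla.

```python
# Back-of-envelope battery cost check using the ~$350/kWh figure cited above.
# The install_overhead multiplier is an illustrative assumption.

def battery_cost(capacity_kwh, price_per_kwh=350, install_overhead=0.5):
    hardware = capacity_kwh * price_per_kwh
    installed = hardware * (1 + install_overhead)  # assumed 50% install/BoS overhead
    return hardware, installed

hw, total = battery_cost(10)  # a 10 kWh unit
print(hw, total)  # 3500 5250.0
```

Even with a modest overhead assumption, the installed cost lands well above the headline hardware price, which is exactly why the article advises crunching the numbers before signing.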



Global insurers’ level of satisfaction with their enterprise risk management (ERM) performance grew by 10 percentage points over the last two years (63% compared to 53%). This was highlighted by a 16-percentage-point increase in Asia Pacific (51% compared to 35%) and less pronounced in North America and Europe (with a seven-point increase), according to Towers Watson’s Eighth Biennial Global Enterprise Risk Management Survey.

According to the survey, 74% of global insurers said their executives and board members view the risk management function of their enterprise as an important strategic partner that adds value to the business. Notably, carriers that share this view are almost twice as likely to say they’re satisfied (73% compared to 38%) with their company’s ERM performance compared to those that believe ERM is merely a provider of risk assurance (18%) or for regulatory compliance (8%).

Insurers’ opinions of their ERM program were determined by factors such as clear links to business goals. In fact, carriers with ERM functions that are well integrated into their business planning noted higher rates of satisfaction (82%) than those without an integrated strategic plan (53%). Similarly, those with a risk appetite framework linked to specific risk limits expressed higher rates of satisfaction (76%) than their peers with no framework in place (50%).



Friday, 08 May 2015 00:00

Where The Cloud Is Heading for MSPs

In a continuously evolving IT environment, it’s important to always remain on the cutting edge. Where possible, it’s even more beneficial to remain a step ahead. In order for managed service providers (MSPs) and their clients to do so, they must be able to accurately forecast where the future of cloud storage and cloud-based file sharing is heading.

In the RightScale 2015 State of The Cloud Report, the enterprise cloud management company found that enterprises are increasingly implementing hybrid cloud strategies that encompass both public and private clouds. However, as discussed in a recent report from ZDNet, does RightScale’s cloud survey actually suggest that hybrid and public clouds are growing at the expense of private clouds?




In 2022, Qatar will host one of the biggest sporting events in the world - the FIFA World Cup. In doing so it will become the first Arab country to host such a prestigious tournament, and perhaps the smallest country ever to do so.

So how does a small desert country with a population of less than 2 million manage such an event? How does Qatar ensure that the immense investment required delivers a sustainable return once the final has been played? That is one of the roles of the Supreme Committee for Delivery and Legacy. The SCDL was set up with the aim of ensuring the "successful delivery of all infrastructure required for Qatar to host an amazing and historic FIFA World Cup that is in line with national development plans and leaves a lasting legacy for Qatar, the Middle East and the world."

National Resilience Capability also stands to benefit from the staging of this major international event. All organizations – private and public – will be inspired to work together, building a stronger and more resilient Qatar as a result. Dorothy Crossan is the Head of the National Resilience Capability Programme within the SCDL and she will be discussing 'Business Continuity and Resilience – A National Perspective' at the BCI Middle East conference in May.

During her presentation, Dorothy will highlight the potential role of the private sector in supporting national resilience which is a key building block in delivering a safe and secure event. Organizations exist within a national framework and are affected by potential risks beyond their control, however they are in a position to help mitigate the effects of these risks on their staff and customers, their organization, and consequently the wider community. A shared understanding of risks, built on clear authoritative advice and the promotion of good practice within and between sectors promotes consistency in planning focussed on assessed threats. In this way every organization can contribute to strengthening national resilience, strengthening their own in turn.

Prior to her current position as Head of the National Resilience Capability Programme for Qatar’s Supreme Committee for Delivery and Legacy, Dorothy spent 25 years in the Metropolitan Police in the UK where she gained extensive experience in strategic planning on security matters, working at the National level. In 2011, she developed the London Security Resilience Framework to improve information-sharing, coordination and planning for protective security across the UK capital. She was closely involved in the development of London’s Cross-sector Safety and Security Communications (CSSC) programme, an innovative private sector engagement initiative developed for the London 2012 Olympics, still flourishing in legacy. She is a particular champion for the inclusion of private sector representatives in security exercises.

To learn more about what Dorothy has to say about national resilience, come along to the BCI Middle East Conference. There is a packed programme of activities throughout the two days of the conference so to find out more, or to book your place, click here.

KANSAS CITY, Mo. – With the potential for severe weather across the plains and several Midwestern states the remainder of this week and into the weekend, staff at the U.S. Department of Homeland Security’s Federal Emergency Management Agency’s (FEMA) Region VII office are coordinating with state and local officials in Iowa, Kansas, Missouri, and Nebraska and urge the public to prepare to stay safe.

 “With the threat of severe weather developing, we urge residents to listen to NOAA Weather Radio and local newscasts, monitor digital media feeds for updates and follow the instructions provided by local emergency officials,” said FEMA Region VII Administrator Beth Freeman. “As folks make their weekend plans, this severe weather threat is a reminder everyone needs to remain vigilant as we can’t always anticipate when or where a disaster might strike.”

Make A Plan!
Your family may not be together when a disaster strikes so it is important to plan in advance. For more information on creating your family’s emergency plan, visit http://www.ready.gov/make-a-plan.

Have an Emergency Supply Kit!
To prepare for power outages and the disruption of essential services, FEMA urges families to prepare an emergency supply kit for their homes and cars. For more information, visit http://www.ready.gov/build-a-kit.  When preparing a kit, remember water, medications, and items needed for the well-being of your pets.

Stay Informed!
Pay attention to and follow instructions from local emergency officials.

FEMA App Has Weather Alerts (NEW!)
Download the FEMA app (available in English and Spanish, for Apple, Blackberry and Android) to get severe weather alerts from the National Weather Service, https://www.fema.gov/mobile-app.

Social Media—A great monitoring tool!
Most local emergency managers, state and government agencies, including the National Weather Service, have an active social media presence and use it to provide fast, current and critical information before, during and after emergencies. Consider following the Facebook, Twitter or Instagram handles of your local emergency management office, as well as hospitals, schools and voluntary organizations serving your community.

If you don’t already have one, consider using a social media list to monitor the severe weather threat; how local officials are responding; and what they may ask of you and your family.  @FEMARegion7 on Twitter has created social media lists for Iowa, Kansas, Missouri and Nebraska. Subscribe to your state’s list, www.twitter.com/femaregion7/lists, or use it as a template to create your own. Learn and chat about creating Twitter and Facebook lists using #PrepList.

Tips for Severe Weather Safety!

If you have severe weather in your area, keep these safety tips in mind:

  • Become familiar with the terms used to identify a severe weather hazard and talk to your family about what you will do if a watch or warning is issued. Here are the terms you need to know:

WATCH: Meteorologists are monitoring an area or region for the formation of a specific type of threat (e.g. flooding, severe thunderstorms, or tornadoes).

WARNING: Specific life and property threatening conditions are occurring and imminent. Take appropriate safety precautions.

  • If there’s a tornado warning, you’ll need to know what to do no matter where you are. Learn more before the storms arrive, http://www.ready.gov/tornadoes.
  • DISTANCE TO SAFE ROOM MATTERS: While community safe rooms offer significant reassurance and protection during a severe weather event, always make the safe and certain choice about where to seek shelter – particularly if there is little time to travel to the location of the community safe room. It is always best to seek shelter in your basement or in the lowest possible structure in your residence if time and warning are limited when severe weather hits.
  • LOCATION MATTERS: Know your surroundings and your structures if you’re planning to attend an event, take vacation, visit family, or if you are staying in a location other than your home like a hotel, campground or cabin. Be sure to familiarize yourself with the facility’s emergency plans including: sirens and warnings, how to shelter in place, and steps to be taken in the event of an evacuation.
  • MOBILE HOMES: Mobile homes, even if tied down, offer little protection from tornadoes and should be abandoned. A mobile home can overturn very easily even if precautions have been taken to tie down the unit. Residents of mobile homes must plan in advance and identify safe shelter in a nearby building.
  • FLOODING: Be aware that flash flooding can occur within minutes and with little notice.  If there is any possibility of a flash flood, move immediately to higher ground.  Do not wait for instructions to move. Do not drive through flood water. When you see flood waters ahead: Turn Around, Don't Drown!
  • SAFETY AFTER THE STORM: Injury may occur when people walk amid disaster debris and enter damaged buildings. Wear sturdy shoes or boots, long sleeves and gloves when handling or walking on or near debris.

    Be aware of possible structural, electrical or gas-leak hazards in or around your home. Contact your local city or county building inspectors for information on structural safety codes and standards and before going back to a property with downed power lines, or the possibility of a gas leak. Do not touch downed power lines or objects in contact with downed lines. Report downed power lines and electrical hazards to the police and the utility company.  They may also offer suggestions on finding a qualified contractor to do work for you. 


Follow FEMA online at www.twitter.com/fema, www.facebook.com/fema, and www.youtube.com/fema.  Find regional updates from FEMA Region VII at www.twitter.com/femaregion7. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.  The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.



(TNS) — People living in California and on the West Coast still face the highest earthquake risk. But a new study says they are not alone.

That report found that close to half of all Americans — nearly 150 million people — are threatened by shaking from earthquakes strong enough to cause damage.

That figure is a sharp jump from the figure in 1994, when the Federal Emergency Management Agency estimated that just 75 million Americans were at risk from earthquakes.

One reason for the sharp increase in exposure to quake damage is population increases in areas prone to earthquakes, especially California, said William Leith, a co-author and USGS senior science advisor for earthquake and geologic hazards.



There is no stopping the evolution of technology, which seemingly occurs at warp speed. One technologically advanced industry that typically is not thought of as such, except when it is critically needed, is the life safety and emergency services industry.

Like others, the industry is pressured to do more with less because of shrinking tax revenues and limited grant program availability over the last decade. Yet public safety agencies are expanding their service offerings and providing better and faster emergency response because the mission matters. In many instances it is advanced technologies that are enabling emergency response entities to meet this challenge.

This is particularly true in the thousands of public safety answering points (PSAPs), which handle the nation’s 911 emergency calls. A PSAP is staffed by telecommunicators, or call-takers, who have been trained to field calls from the public and gather information related to an emergency situation. Telecommunicators also dispatch first responders to the emergency, including law enforcement, fire and emergency medical services (EMS). Dispatch operations entail taking the information received from the 911 call regarding the emergency situation and appropriately coordinating activity among the various first responders. Sometimes PSAPs are organized to segregate the dispatching of emergency services into dedicated groups corresponding to law enforcement, fire and EMS.



The steady increase in cloud-based file sharing shows that cloud services continue to grow. Indeed, a recent report says enterprise cloud adoption increased by 43 percent in 2014. This is good news for managed service providers (MSPs) looking to onboard new clients and expand their offerings.

Skyhigh Networks, a global cloud security and enablement company, recently released their quarterly Cloud Adoption and Risk Report.  The report presents the state of the cloud industry, based on analysis of actual cloud usage from over 15 million enterprise employees and 350 enterprises. HeraldOnline.com chronicled the report, writing that, “with a full year of usage statistics, this latest edition of the report is the industry’s most comprehensive to date.”

Many of the usage statistics published in the report paint a terrific outlook for the cloud—and for MSPs.



FRANKFORT, KY – Residents and business owners who applied for federal assistance resulting from the severe storms and flooding in April will hear soon from damage inspectors.

People who suffered losses in Bath, Bourbon, Carter, Elliott, Franklin, Jefferson, Lawrence, Madison, Rowan, and Scott counties may be eligible for assistance by registering with the Federal Emergency Management Agency (FEMA).

Following registration, FEMA usually schedules inspections within seven to 10 days. An inspector first examines structural damage to a house or business, then assesses damage to appliances, such as the washer, dryer, refrigerator, and stove. The inspector also gathers information about serious needs, such as lost or damaged clothing. Homeowners should identify all known damages and tell the inspector if they have a septic system or a well.

Property owners need to show proof of ownership and occupancy. Renters need to show proof of occupancy. If insurance papers are available, residents should show them to the inspector.

Inspectors will ask applicants to show identification. At the same time, applicants should ask for identification from everyone identifying themselves as damage inspectors. All inspectors carry official photo identification.

“If an inspector is not wearing an identification card or badge, please make sure you ask to see it,” said Joe M. Girot, FEMA’s Federal Coordinating Officer for Kentucky.

Girot said it is also important to keep in mind that official inspectors do not charge for this service.

Those who have suffered losses as a result of the April storms, but have not yet applied for assistance are encouraged to do so as soon as possible.

The fastest and easiest way to register for assistance is online at www.DisasterAssistance.gov or by calling 1-800-621-3362 (FEMA) or by web-enabled mobile device at m.fema.gov. Disaster assistance applicants who have a speech disability or hearing loss and use TTY should call 1-800-462-7585 directly; those who use 711 or Video Relay Service may call 1-800-621-3362. The toll-free telephone numbers will operate from 7 a.m. to 10 p.m. eastern, seven days a week until further notice.


FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status. If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

FEMA’s temporary housing assistance and grants for public transportation expenses, medical and dental expenses, and funeral and burial expenses do not require individuals to apply for an SBA loan. However, applicants who receive SBA loan applications must submit them to SBA loan officers to be eligible for assistance that covers personal property, vehicle repair or replacement, and moving and storage expenses.

For more information on Kentucky’s disaster recovery, visit www.fema.gov or http://kyem.ky.gov. On Facebook, go to http://www.facebook.com/KYEmergencyManagement. To receive Twitter updates: http://twitter.com/kyempio or www.twitter.com/femaregion4.

Thursday, 07 May 2015 00:00

The benefits of agentless backup

By Gabriel Gambill, senior systems engineer for EMEA, Quorum

Agentless backup is one of the latest buzzwords in disaster recovery and business continuity, but how much do we really know about it or what it means for organizations using it?

Most people probably know that agents are the small applications installed on a server to perform a particular function. For backup, the agent is installed onto the host server that the system administrator wants to back up. Agentless backup is, as its name suggests, backup without the use of such an agent.

In an effort to distinguish themselves from their rivals, several backup and recovery vendors claim to provide agentless backup. In many instances, however, these vendors inject an agent at the beginning of the process and remove it before the backup finishes in order to achieve application consistency. Strictly speaking, they aren’t providing agentless backup because they are still using an agent in parts of the process.
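The distinction Gambill draws can be illustrated with a toy simulation. This is not vendor code; the host dictionary, the injected "temp-agent", and the hypervisor-snapshot callback are all hypothetical stand-ins for the two approaches.

```python
# Illustrative contrast: "agentless" products that inject a temporary agent
# versus true agentless backup, where consistency comes from outside the guest.

def pseudo_agentless_backup(host):
    host["installed"].append("temp-agent")   # inject an agent at the start...
    snapshot = dict(host["data"])            # ...use it to take a consistent copy...
    host["installed"].remove("temp-agent")   # ...and remove it before finishing
    return snapshot

def true_agentless_backup(host, hypervisor_snapshot):
    # Nothing is ever installed on the host; a hypervisor-level
    # snapshot provides the consistent copy instead.
    return hypervisor_snapshot(host["data"])

host = {"data": {"db": "v1"}, "installed": []}
copy1 = pseudo_agentless_backup(host)
copy2 = true_agentless_backup(host, lambda d: dict(d))
print(copy1 == copy2, host["installed"])  # True []
```

Both paths end with an identical copy and a clean host, which is why the inject-and-remove approach can market itself as agentless; strictly speaking, only the second path never touches the guest.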



Phoenix has published the results of a national survey of UK employees on their use of and attitudes towards workplace IT. One of the survey’s key findings highlights UK workers’ widespread use of their own electronic devices for work, posing a potential major threat to business security.

The survey, conducted with workers aged 18 and over who use IT and electronic devices as part of their day-to-day business, across a wide range of industry sectors, revealed that, while over half (51 percent) primarily use their own devices, an incredible 59 percent of those workers have not used their company IT support to set up their devices. This indicates a significant number of devices being used in the UK economy that may not comply with corporate IT policies or have sufficient security measures in place.

Alistair Blaxill, managing director of Phoenix’s Partner Business, said: “Mobility is one of the most significant driving forces for the IT sector and an increasing number of people want to be fully connected to work all of the time. However, the emergence of BYOD in the workplace is creating a real challenge for IT departments, with workers using their own unmanaged devices to access corporate networks and sensitive data. The findings of our survey underline this trend in the UK and it reinforces the need for businesses to stay on top of how employees access IT and ensure that they are appropriately protected.

“We think the best way to achieve this shift is to look at the ways in which IT departments are interacting with workers. Employees’ attitudes to IT support are changing and they want instant, real-time solutions to their device issues. Our survey tells us that just 23 percent of workers received their IT support primarily face-to-face, and 32 percent through a mix of face-to-face and remote support. Savvy employers are now looking to provide workers with an IT support service that mirrors the personal experience they receive outside of work when resolving issues with their own personal devices.”


Cyber insurance should become as common a purchase for UK businesses as property insurance within the next 10 years, according to the Association of British Insurers (ABI).

Speaking at the ABI’s conference on cyber insurance, Huw Evans, director general at the ABI, said:

"Cyber risk is growing rapidly. At the moment, despite more than 80 percent of large businesses suffering a cyber security breach in a 12 month period, only around 10 per cent have any form of cyber insurance."



Boards are failing to navigate the changing risk landscape effectively, resulting in significant loss of value, according to research from leading players in the business community. As a result, corporate risk leadership needs rethinking and boards should consider appointing an executive voice of risk.

The above is one of the key points made in a new report, ‘Tomorrow’s Risk Leadership: delivering risk resilience and business performance’ which has been written by global business think tank Tomorrow’s Company and launched in collaboration with the Good Governance Forum members, Airmic, CIMA, IHG, Korn Ferry, PwC and Zurich.

The report challenges businesses and business leaders to consider whether the risk leadership in their organizations is sufficient to meet the demands of an increasingly fast-paced and interconnected world. While companies are usually strong at managing their core risks, all too often, the management of risk remains a siloed operation, detached from strategy.

The report’s key recommendation is that organizations consider establishing an executive voice of risk who leads the risk agenda, helps deliver the business model and drives business performance. The risk leader would be at or close to board level and should help boards to be more forward looking, enhance their decision-making capabilities and provide a corporate-wide view of risk.

The risk leader should have a strategic skillset and broad business knowledge to spot early-warning indicators of the genesis of an atypical crisis event and enable a more rounded approach to risk. Only then, according to the report, can a business truly drive resilience within the organization.

The report also says that setting the right risk culture is vital. It recommends taking an integrated approach to risk, defining the appropriate risk appetite for the organization, and creating the supporting culture and behaviours required.

Read the report.

Try this simple test, made possible thanks to the ubiquity of the smartphone and its on-board camera. First, imagine a crisis that would put your organisation in a difficult posture with the public. A generally applicable example is breach of your confidential business data, including your customer records. Now take your smartphone and record a selfie video of you making a supposedly public statement about the incident. Stop the recording and play it back. Give yourself a score for each of the following aspects: clarity of speech, clarity of statements made, credibility, and level of positive appeal to an angry public looking to lynch a suspect. Scores rather lower than you’d like? You’re on the way to discovering the crucial role of the spokesperson in a crisis.



Many computing operations throw off lots of copies: prime offenders include backup, analytics, snapshots, cloning, and test/dev. Not only do you have many copies made by many processes, but each of these copies is proprietary to its generating application. That data cannot be re-used across processes, leaving your storage landscape littered with duplicate data that cannot be leveraged. Not even cloud users get away scot-free; they are still paying for that storage space and bandwidth, and those copies will be exclusive to the process that created them.

For decades this siloed, crazy quilt environment has been business-as-usual because there was nothing much that people could do about it. Data protection, analytics, and testing systems all generated their own copies of data because they had to: it was the only way any of the processes could work.

This challenging state of affairs spurred Actifio to launch copy data management in 2009. The question they asked was: what if a single product could eliminate duplicate data across multiple processes by providing a single golden copy of that data for all of them? What if a single product could capture data copies from multiple applications, store a single copy of that data, and then virtualize it wherever it was needed by data protection and business applications?
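
The golden-copy idea is easy to picture with a toy sketch (entirely illustrative — the class and method names here are hypothetical, not Actifio's actual product API): one physical copy is stored per unique dataset, and each consuming process mounts a lightweight virtual view instead of writing its own duplicate.

```python
import hashlib

class GoldenCopyStore:
    """Toy model of copy data management: keep one physical ('golden')
    copy of each unique dataset and serve virtual views to consumers."""

    def __init__(self):
        self._store = {}   # content hash -> bytes: the single golden copy
        self._views = []   # (consumer, hash) pairs: virtual copies, no extra bytes

    def ingest(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        # setdefault: however many processes ingest it, it is stored once
        self._store.setdefault(key, data)
        return key

    def mount(self, consumer: str, key: str) -> bytes:
        # A "virtual copy": the consumer reads the golden copy directly
        self._views.append((consumer, key))
        return self._store[key]

store = GoldenCopyStore()
key = store.ingest(b"customer-database-dump")
store.ingest(b"customer-database-dump")   # backup ingests the same data again
for process in ("backup", "analytics", "test/dev"):
    store.mount(process, key)

print(len(store._store))   # -> 1 physical copy on disk
print(len(store._views))   # -> 3 processes served from it
```

Real products work at block level with snapshots and change tracking rather than whole-object hashing, but the storage arithmetic is the point: three consumers, one copy.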



Even though 65 percent of small to midsize businesses (SMBs) have set up data backups on premise as part of a business continuity (BC) strategy, the time has come to consider more up-to-date options. Carbonite and IDC recently shared the results of their joint 2015 Business Continuity Study, which reveals some remarkable data on the subject.

It seems that SMBs have realized how important the cloud will be to current and future company business. Of the 700 SMBs surveyed, 81 percent are currently considering updating their BC strategies. Within the next year to two years, 72 percent of these businesses expect to boost their investments in BC technologies—which makes sense when you consider that more than 80 percent of these SMBs have had downtime in the past that cost “from $82,200 to $256,000 for a single event,” according to the report.

Mohamad Ali, Carbonite’s CEO, recently told the website Talkin’ Cloud more about what SMBs need in a BC solution:



British Gas revealed this week that it will form a data science team, according to the UK site V3. The reason may serve as a strategic case for establishing data science teams going forward: the company says the team will help more business users delve into and use its Hadoop data lake.

The announcement reflects a subtle shift in focus, from hiring a team to make Big Data feasible to using a team approach to democratize Big Data.

"We're setting up a data science team to assist our business users so they can fish in the lake themselves," Phil Crannage, head of applications development at British Gas, told V3.



The security implications of the Internet of Things (IoT) are mind boggling. In many visions, the IoT is deeply enmeshed in the lives of users—even those who are doing their best to steer clear of it. So, the potential for mischief and malevolent behavior is great.

Bruce Schneier is one of the best known electronic security experts and in a Network World interview with Tim Greene, Schneier didn’t pull any punches on where the industry is on IoT security. In response to a question on the practical steps that can be taken, Schneier did the equivalent of throwing up his hands:

There’s nothing you can do. This is very much like the computer field in the ‘90s. No one’s paying any attention to security, no one’s doing updates, no one knows anything - it’s all really, really bad and it’s going to come crashing down.



To paraphrase the great humorist Mark Twain, rumors of the death of passwords have been greatly exaggerated. While people lament the challenges and problems posed by passwords, they remain a core authentication and security technology.

My colleague Andras Cser and I have been fielding so many client inquiries around passwords that we are undertaking a quantitative, anonymous survey of end user organizations to gauge their current password policies and usage. This online survey asks about your organization’s current password policies and challenges, as well as the future role of passwords in your organization. We are also using the survey to gain perspectives on the future of passwords and how other technologies might replace passwords completely.

The survey is completely confidential, but participants who provide contact details will receive a complimentary copy of the report when it’s published later this year.

You can access the survey here:


Recent 2015 audit surveys report some interesting findings about the current role of audit committees. They highlight not only how complex the world of risk management and oversight has become in the corporate world, but also the enormous breadth of responsibilities that the audit committee is expected to bear.

The requirements of internal audit will only continue to expand because, as PwC’s recent “2015 State of the Internal Audit Profession Study” shows, 60 percent of CAEs believe that within the next five years their internal audit function will need to provide not only value-added services but also proactive advice for the business.

Additionally, in KPMG’s recent “2015 Global Audit Committee Survey,” 74 percent of audit committee respondents said that more time is required to perform their role. Key areas of the internal auditor’s role that will require more time include:



What would you do if the files you rely on every day were unavailable?


Most of us have become accustomed to storing much of the data we use – spreadsheets, forms, slide packs, photos and other documents – on ‘shared files’. Whether it’s on a corporate “S: drive” or a SharePoint site, information stored on shared facilities is a productive and relatively inexpensive means of saving, retrieving and archiving the documents we create, maintain and use. Shared facilities are an alternative to saving files on our device’s “C: drive” (a Business Continuity no-no!), or on a USB device – both of which create access and security problems.

It is a common assumption that – following a data center disruption – our SharePoint application or ‘S: drive’ will be restored concurrently with, or shortly after, other mission-critical IT systems and applications. That might be true; then again, it might be days or weeks before the shared files are restored.



According to a new study by Aon Risk Solutions, damage to brand and reputation was cited as the top overall concern facing organizations globally. The Aon Global Risk Management Survey also revealed that, for the first time ever, cyber risk had entered the top ten at number nine.

Aon’s global clients strongly felt that damage to brand and reputation ranked as a top concern across almost all regions and industries. This can be attributed to the growing challenges businesses are facing amongst the other risks found in the top ten, such as cyber risk, but also including business interruption, property damage and failure to innovate.

The eventual inclusion of cyber risk in the top ten is perhaps no surprise, as both cyber attack and data breach have routinely featured as top three threats in the Business Continuity Institute’s annual Horizon Scan report. Damage to reputation at number one and the entry of cyber risk into the top ten further underscore the increasing importance of cyber risk, which has been regularly linked to brand and reputation issues in the wake of recent data breaches.

Stephen Cross, Chief Innovation Officer, Aon Risk Solutions said “The insights provided by this survey help us understand how risks are changing as the global environment evolves. It’s little surprise to see cyber risk enter the top ten at the same time we are seeing increasing concern about corporate reputation as the two issues are a great example of the interconnectivity of risk.”

Rory Moloney, Chief Executive Officer, Aon Global Risk Consulting, said “While new risks such as cyber have moved to centre stage, established risks like damage to reputation or brand are taking on new dimensions and complexities. The interconnected nature of these risks reinforces the importance of strategic risk management in every organisation.”

Failure to innovate/meet customer needs remained in sixth spot; respondents in the technology industry indicated that this is the most significant risk to their business. Property damage also re-entered the top 10 global risk list for the first time since 2007, up from 17th in 2013. This risk was ranked highest by hotels and hospitality, non-aviation transportation and real estate. Unprecedented weather events in recent years have bundled this risk with the cause and effect of business interruption, which took the seventh spot on the 2015 list, with reported losses down more than 10% from the 2013 survey.

The top 10 risks are:

  1. Damage to reputation/brand
  2. Economic slowdown/slow recovery
  3. Regulatory/legislative changes
  4. Increasing competition
  5. Failure to attract or retain top talent
  6. Failure to innovate/meet customer needs
  7. Business interruption
  8. Third party liability
  9. Cyber risk (computer crime/hacking/viruses/malicious codes)
  10. Property damage

(TNS) — In the collapsed village of Sankhu, 12 miles east of Kathmandu, most residents sleep in tents, but ignore police warnings and enter caved-in brick buildings that lean precariously over mounds of rubble. As rescue teams wielded shovels last week to remove the last of 64 dead bodies, nearby residents salvaged bricks, stone blocks and timber to reuse for the eventual and inevitable rebuilding.

“We have to rebuild. As soon as possible,” said Gunkeshari Dangol, 45, standing in the alley next to the three-story brick house constructed by her grandfather. Her 10-year-old grandson lies entombed there until police can safely remove the child’s body.

In Sankhu and throughout Nepal, people are still counting losses. Death tolls may head to 10,000 or more. Six of Kathmandu Valley’s seven UNESCO World Heritage sites, more than 57 other temples and palaces, and hundreds of thousands of houses have been reduced to rubble or have suffered deep wounds. The government has asked international rescue teams to return to their countries, as hope for miracles has faded.



(TNS) -- Within a few hours of the devastating 7.8-magnitude earthquake hitting Nepal last Saturday, Facebook stepped in to help.

Users around the world with Facebook friends in the affected region started getting notifications that their friend was “marked safe.”

Later that afternoon, Facebook CEO Mark Zuckerberg explained why in a post on his timeline.

“When disasters happen, people need to know their loved ones are safe,” he wrote. “It’s moments like this that being able to connect really matters.”

The feature is called “Safety Check,” and it locates Facebook users in the region of a disaster site either through the city listed on a user’s profile or from the location where they last used the Internet.



Although all London councils have disaster recovery procedures in place for electoral data, 40 percent have not tested them in the last 12 months, according to freedom of information requests made by disaster recovery specialists Databarracks.

The freedom of information requests were sent to all London Boroughs, the majority of which obliged with details on their business continuity practices, specifically in relation to electoral data.

Managing director of Databarracks, Peter Groucutt, says that 40 percent is an alarmingly high number to have failed to test, especially with the UK General Election taking place on 7th May. “It’s worrying that with the general election just a day away, many local councils have not tested that their procedures actually work in the event of a disaster. As expected, all councils that responded to our request had thorough backup and disaster recovery plans in place – which is excellent – but without testing, they could prove useless at their time of need,” said Mr Groucutt. “We always recommend performing a DR test at least once a year. At any time in the year, councils are under scrutiny to keep sensitive data secure and systems running smoothly. So in the run-up to a General Election, when the electoral roll is most important, it is vital to ensure your procedures are water-tight.”

Another concerning finding from the freedom of information requests is that the current RTOs (recovery time objectives) and RPOs (recovery point objectives) of many of the boroughs were relatively long.

Groucutt comments: “Most of the councils that did respond to us told us that their recovery time objective for electoral data was 24 hours, with some even as long as 7 days or in one case up to 2 weeks. It was also interesting to see that different councils have very different classifications for how critical the electoral register is. For some it is a ‘Priority 1’ system and requires the fastest recovery possible but for others there is no prioritisation, and for some the register is not included on their continuity list or would only be recovered on a ‘best-effort basis’. We put a lot of faith in IT infrastructure to just work. Imagine if a council thought its RPO was 30 minutes but when it came down to it, it was actually 48 hours? If they haven’t tested their DR capabilities, they really have no idea of how they’d cope should disaster strike at the very time that would cause most damage.”
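
The gap Groucutt describes — an assumed RPO versus the RPO actually achieved — is simple to check if backup completion times are recorded. A minimal, hypothetical sketch (illustrative only, not Databarracks’ methodology):

```python
from datetime import datetime, timedelta

def achieved_rpo(last_good_backup: datetime, failure_time: datetime) -> timedelta:
    """The recovery point actually achievable: everything written
    after the last good backup is lost."""
    return failure_time - last_good_backup

assumed_rpo = timedelta(minutes=30)
last_good_backup = datetime(2015, 5, 5, 9, 0)   # backups silently failing since here
failure_time = datetime(2015, 5, 7, 9, 0)

actual = achieved_rpo(last_good_backup, failure_time)
print(actual)                  # 2 days, 0:00:00 -- a 48-hour RPO, not 30 minutes
print(actual <= assumed_rpo)   # False: only a test would have surfaced this gap
```

The calculation is trivial; the discipline of actually running it against real backup logs, rather than assuming the stated RPO holds, is the point of a DR test.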


Wednesday, 06 May 2015 00:00

Balancing IT Risk and Opportunity

For business managers, moving portions of our company’s most valued information assets into the public cloud, while compelling economically, raises a thicket of difficult risk and compliance questions.

  • From a business perspective, considering reputational and other risks, do the economic advantages outweigh the risks?

  • Can anybody in my company really answer: if we move these processes and data into the cloud, will we still be fully compliant with all of the necessary “legs and regs” we must comply with? How do we really prove that?

  • Frankly, our IT partners are hardly impartial in the decision; we’re allocating our IT shop’s funds to buy cloud services. Are their security concerns perhaps a little overblown?



All too often, I run into BCM and DR practitioners who talk about their ‘Awareness’ programs and what they do to get their message of BCM/DR awareness across to the rest of the organization. Let’s face it, we all have an Awareness component to our programs, but it’s how the Awareness component is executed that will make the difference.

We tend to build our other components such as BIAs, Crisis Plans, Crisis Teams, Continuity Plans, Technology Recovery Plans and others, before we turn to the Awareness component. We tend to wait until we get to a specific point before we begin to focus on getting the BCM/DR message across. I think differently.

The BCM/DR awareness message starts the moment the practitioner begins their role. It’s up to them to educate and work with others in their organization to get the message out there when they start, not when they get near the end or when it seems there’s enough information to communicate. You can communicate awareness right away; there is no reason to wait in getting the message out there.



Once a month, my co-research director and partner in crime, Chris McClean, and I will use our blog to highlight one of the 26 people who collaborate to deliver our team’s research and services and always make Chris and me look really, really good. Each “Analyst Spotlight” includes an informational podcast and an offbeat interview with the analyst. This month’s Analyst Spotlight features our newest analyst, Martin Whitworth. Based in London and bringing experience as a CISO and Head of Security across several industries, Martin will cover the most pressing issues keeping CISOs reaching for another bourbon on the rocks, including security strategy, maturity, skills and staffing, business alignment, and everyone’s favorite pastime, reporting to the board.



An old sports tenet says that you can’t tell the players without a scorecard. It is equally true that you can’t play the game without a playbook. Yet most emergency operations centers are doing just that.

EOCs all share one basic currency — information. At its core, an EOC is an information processing and dissemination mechanism that supports and coordinates operations in the field. So how information is analyzed, processed and acted upon often means the difference between life and death. But there is a systemic problem.

All too often, emergency operations plans and EOC standard operating procedures state that the operations center will establish and maintain situational awareness and disseminate a common operating picture. Unfortunately no one ever tells you how to do that. Why does that matter? Because every single decision EOC responders make depends on accurate, complete and current situational awareness and a common operating picture, otherwise known as SA/COP. But several issues complicate the problem.



Tripwire, Inc., has announced the results of a study conducted by Dimensional Research on improving the cybersecurity literacy of Fortune 500 boards and executives. The study examined corporate executives’ view of cybersecurity risks, as well as measured their confidence and preparedness in the event of a security breach. Study respondents included 200 business executives and 200 IT security professionals at US companies with annual revenues of more than $5 billion.

Key findings include:

  • C-level executives are less confident (68 percent) than non C-level executives (80 percent) that cybersecurity briefings presented to the board accurately represented the urgency and intensity of the cyberthreats targeting their organizations.
  • C-level executives (65 percent) were less confident than non C-level executives and IT executives (87 percent and 78 percent respectively) in the accuracy of the tools their organization uses to present cybersecurity risks to the board.
  • 100 percent of C-level executives and 84 percent of non C-level executives consider themselves ‘cybersecurity literate,’ despite ongoing cyberattacks and high profile breaches.

“The lower level of confidence on the part of C-level executives reflects a sea change in the way that executives handle cybersecurity risks,” said Dwayne Melancon, chief technology officer for Tripwire. “The reality is that an extremely secure business may not operate as well as an extremely innovative business. This means executives and boards have to collaborate on an acceptable risk threshold that may need adjustment as the business grows and changes. The good news is that this study signals that conversations are beginning to happen at all levels of the organization. This is a critical step in changing the culture of business to better manage the ongoing and rapid changes in cybersecurity risks.”

While the results of the Tripwire study indicate an increased preparedness on the part of IT professionals, they expose the uncertainty at the C-level and point toward the need to increase literacy in cybersecurity and its attendant risks in the near-term. Competitive pressures to deploy cost-effective business technologies may affect resource investment calculations for security; these competing business pressures mean that conscientious and comprehensive oversight of cybersecurity risk at the board level is essential.

"I'm not surprised that C-level executives are less confident than their boards or IT executive staff,” said Melancon. “That lack of confidence comes, in large part, from the networking and informal benchmarking that takes place among C-level executives at the peer level. There is a lot of 'comparing notes' that happens between C-level peers. When this happens, you are able to get a more informed view of where you are in your overall cyber risk preparedness. This is in direct contrast to IT professionals who generally have a more insulated view of their own cyber risk, which can lead to a false sense of security. That difference in perspective – internal inputs vs. external inputs — may very well explain the confidence gap this survey highlights.”

To download the whitepaper of this study, please click here.

Cloud deployments such as cloud-based file sharing and cloud storage have been growing at such a rapid rate that they are expected to become the largest share of IT budgets as early as 2016. The industry is keeping up with this rapid growth by creating standards and guidelines for how cloud service providers and MSPs should operate.

A proposed international standard released earlier this year focuses on data privacy in public clouds – specifically in relation to business-to-business cloud usage – and how customers should maintain control of their personally identifiable information.

The new international standard, designated ISO/IEC 27018, is described by ISO as “an important first step for protecting PII in the cloud. It is built on previous ISO guidance and will continue to evolve along with [cloud service providers] to provide more secure services upon which businesses can grow.”



It hasn’t even been a week since Nepal’s massive earthquake killed thousands and destroyed businesses, homes, roads and hospitals across the country. But already, the United Nations has called for $415 million in aid; more than $50 million has been pledged by 53 countries and foundations for immediate relief. Private donors, foundations and businesses will likely promise millions more.

Outsiders were similarly generous after the earthquake in Haiti, the Indonesian tsunami and Hurricanes Katrina and Sandy. This money is important — it enables emergency response teams like the ones I’ve been on to restore essential services and provide water, shelter and food.

But are these teams spending this money effectively? Are we doing the best we can to reach the most people as quickly as possible? Nobody knows.



WASHINGTON – Wildfires can occur anywhere in the country with the potential to destroy homes, businesses, infrastructure, natural resources, and agriculture. Last year, the United States experienced over 63,000 wildfires that burned more than three million acres. National Wildfire Community Preparedness Day is Saturday, May 2, and people across the nation will dedicate time to making their communities a safer place should a wildfire occur.

Wildfires can start in remote wilderness areas, national parks, or even your backyard.  They can start from natural causes, such as lightning, but most are caused by humans, either accidentally—from cigarettes, campfires, or outdoor burning—or intentionally. 

“When our citizens prepare and adopt the principles of fire-adapted communities, the loss of life and property from wildland fires is greatly reduced,” said United States Fire Administrator Ernest Mitchell.  

Protect your family and community from a wildfire by taking action before one happens.  On National Wildfire Community Preparedness Day, join your friends, family members, faith-based group or youth organization, and volunteer your time to improve your community’s ability to withstand and recover from a wildfire, which also may improve the safety of firefighters.

There are many ways to help protect homes, neighborhoods, businesses, and entire communities:

  • Reduce the amount of flammable materials and brush that can burn around your home or business;
  • Create a fire-free area within the first five feet of your home using non-flammable materials and high moisture-content plantings;
  • Maintain an area that is clear of flammable materials and debris for at least 30 feet on all sides from your home or business; and
  • Move wood piles and propane tanks to at least 30 feet from your home or business.

National Wildfire Community Preparedness Day is part of America’s PrepareAthon!, a grassroots campaign for action to get people better prepared for emergencies through group discussions, drills and exercises. You can take steps to reduce the devastating effects of any disaster by creating a family communication plan and practicing how you will evacuate and communicate with friends and family members in an emergency. Register your action at www.ready.gov/prepare.

Learn more about National Wildfire Community Preparedness Day. Visit ready.gov to learn how to prepare for a wildfire.


FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.

The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.


The enterprise has been working out its cloud transition strategies for well over two years now, but it seems that many decisions regarding deployment and usage models are still being made blindly.

While it’s true that the lack of real-world production experience makes it difficult to judge how the cloud will function, it nevertheless seems as if the enterprise is ready to trust the cloud with all forms of data even though there is still no clear understanding of the basic characteristics of the technology.

Cost is a prime example. The common perception is that the public cloud is significantly less expensive than private clouds and provides greater scale and flexibility to boot. But a recent analysis by 451 Research suggests that the differences may not be all that dramatic. According to the group’s findings, an OpenStack private cloud distribution will run about eight cents per virtual machine per hour, just slightly better than a commercial platform like VMware or Microsoft. But both come in far below the $1.70 per application hour that is common on the public cloud, or even the 80 cents per app hour available on Amazon’s Reserved Instances platform.
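
Plugging the quoted rates into a back-of-the-envelope monthly calculation makes the spread concrete (the rates are from the 451 Research figures above; the 730-hour month is an assumption of mine, and per-VM versus per-application hours are not strictly comparable units):

```python
HOURS_PER_MONTH = 730  # roughly the average number of hours in a month (assumption)

rates_per_hour = {     # $ per VM/app hour, as quoted above
    "OpenStack private cloud": 0.08,
    "Amazon Reserved Instances": 0.80,
    "Typical public cloud": 1.70,
}

# Project each hourly rate out to a full month of continuous running
for platform, rate in rates_per_hour.items():
    print(f"{platform}: ${rate * HOURS_PER_MONTH:,.2f} per VM/app per month")
```

At these rates, a single always-on workload runs roughly $58 a month on the private distribution versus about $1,241 on a typical public cloud, which is why the "public is always cheaper" perception deserves scrutiny.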



Eric Pickering is the deputy operations section chief for the New Orleans Office of Homeland Security and Emergency Preparedness. He has spent 12 years in emergency response, including serving as commander of the New Orleans CERT during Hurricane Katrina. He shared with Emergency Management some personal opinions about the responsibility and costs of mitigation and recovery.

Emergency Management: You said recently that emergency management is becoming federalized. How did this happen and what does it mean?

Eric Pickering: Actually it has been more nationalized and less federalized, meaning the states collectively. The world moves much faster than it ever did before, and most of us expect things instantly. That extends to disaster relief as well. We see people who want to help after a disaster and that’s a good thing.



A guest post from researcher Enza Iannopollo.

Upcoming changes to privacy regulation in the EU, as well as rising business awareness that effective data privacy means competitive differentiation in the market, make privacy a business priority today. And this is not only relevant for tech giants: protecting both customer and employee privacy is a business priority for companies of all sizes and across industries.

But where do you start? Many companies start by hiring a chief privacy officer. Some have built brand-new privacy teams that manage privacy for the whole firm, while others prefer a decentralized model where responsibilities are shared across teams. What are the pros and cons of each approach? Which organizational structure would better meet the needs of your firm?



A study by research firm IDC carried out on behalf of Carbonite has revealed that over 80% of small to medium sized businesses (SMBs) have experienced downtime in the past, and that the costs associated with this downtime conservatively range from $82,200 to $256,000 for a single event.

Small businesses are by no means exempt from disruption and the latest Horizon Scan report carried out by the Business Continuity Institute shows that business continuity professionals working for smaller organizations have concerns about the same threats that their counterparts in larger organizations have. What is potentially a greater danger for these SMBs however, is that they often have less capacity to absorb any disruption.

The survey does show that for many SMBs, the threats they face are not going unchallenged. The survey of 700 SMBs worldwide found that 81% of those currently using business continuity solutions are considering improvements to their strategies, while 72% plan to increase investments in business continuity over the next 12 to 24 months.

“Small businesses are facing operational challenges stemming from persistent data growth, budgetary constraints and the need to produce more with less, which is driving adoption of cloud computing, data analytics and mobility similar to their enterprise counterparts,” said Laura DuBois, Vice President of IDC’s storage practice. “To address these challenges, SMBs have signalled a need and intention to drive material spending on business continuity in the next 12 to 24 months.”

The main driver behind increased investment in business continuity is the threat of downtime which 76% of SMBs surveyed cited as the single biggest reason for purchasing business continuity solutions. The reason for this is clear as the study highlights that the average estimated cost for an hour of downtime for an SMB ranges from $8,220 to $25,600, and typically an unplanned event can last for as long as 24 hours – which could be devastating to a small business.
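
The study’s arithmetic is easy to reproduce. A trivial sketch (illustrative only — multiplying an hourly figure by duration assumes cost accrues linearly, which real outages rarely do):

```python
def downtime_cost(hourly_cost: float, outage_hours: float) -> float:
    # Simplistic linear model: exposure = hourly cost x outage duration
    return hourly_cost * outage_hours

low, high = 8_220, 25_600   # IDC's estimated hourly downtime cost range for an SMB
for hours in (1, 10, 24):
    low_cost = downtime_cost(low, hours)
    high_cost = downtime_cost(high, hours)
    print(f"{hours:>2}h outage: ${low_cost:,.0f} to ${high_cost:,.0f}")
```

Note that the per-event range quoted earlier ($82,200 to $256,000) corresponds to roughly a ten-hour outage at these hourly rates; at the full 24 hours the study says an unplanned event can last, the exposure would be considerably higher.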

“When it comes to disaster recovery, the stakes are higher for small businesses,” said Mohamad Ali, Carbonite’s President and CEO. “SMBs realize that a business continuity solution can mean the difference between staying in business or losing everything they’ve worked for, and the data shows they are investing accordingly.”