
Industry Hot News


Teaching prospects about the Health Insurance Portability and Accountability Act (HIPAA) could help managed service providers (MSPs) boost their revenues, according to RapidFire Tools.

The company behind the Network Detective application and reporting tool this week released a survey that revealed many MSPs are using HIPAA compliance assessments to increase business and better engage prospects.



The cost and consequence of a product recall

The number of food recalls and their cost to business are rising, according to a new publication by Swiss Re, which highlights that the annual number of recalls in the US has almost doubled since 2002. Food contamination costs US health authorities US$15.6 billion per year (nearly nine million Americans became sick from contaminated food in 2013 alone), and half of all food recalls cost the affected companies more than US$10 million.

Food manufacturers operate in a vast, globalised supply chain, making risk management for food recalls more difficult, yet one mislabelled product or contaminated ingredient can cause sickness, death, multi-million dollar losses and massive reputational damage for the affected companies. Swiss Re's Food Safety in a Globalised World examines how the increasing number of food recalls is impacting consumers, public health services, governments and companies globally.

Product quality and safety incidents may not have been identified as a major threat to organizations in the Business Continuity Institute’s latest Horizon Scan Report, but they still raise concerns among business continuity professionals: 26% of respondents to a survey expressed either concern or extreme concern about the prospect of a product quality incident disrupting their organization, and 19% expressed the same level of concern over a product safety incident.

The latest Supply Chain Resilience Report produced by the BCI revealed that 40% of respondents to a survey said their supply chain had been impacted by a product quality incident during the previous twelve months. Many described the impact as low, but even a low impact can be disruptive.

"In a more globalised economy, ensuring the highest level of food safety is becoming an ever greater challenge for firms," says Jayne Plunkett, Head of Casualty Reinsurance at Swiss Re. "Today ingredients and technologies are sourced worldwide. This leads to greater complexity for food manufacturers and consumer and regulatory demands on companies are continually increasing."

(TNS) - Few countries know how to deal with widespread disaster better than Japan, and on Thursday, Japanese firefighter Junichi Matsuo told his Yakima Valley counterparts what it was like to respond to the devastating 2011 earthquake and tsunami that killed more than 13,000 people.

“That was the first time I’d ever seen such a terrible situation,” said Matsuo, a veteran firefighter with decades of emergency response experience.

But the disaster also held lessons on the importance of community planning and community involvement in responding to a crisis, he said.

The magnitude 9.0 earthquake that struck March 11, 2011, was the most powerful earthquake ever recorded in Japan and the fourth strongest worldwide since modern record-keeping began in 1900.



“The Internet of Things is the biggest game changer for the future of security,” emphasizes David Bennett, vice president of Worldwide Consumer and SMB Sales at Webroot. “We have to figure out how to deal with smart TVs, printers, thermostats and household appliances, all with Internet connectivity, which all represent potential security exposures.”

Simply put, the days of waiting for an attack to happen, mitigating its impact and then cleaning up the mess afterward are gone. Nor is it practical to just lock the virtual door with a firewall and hope nothing gets in -- the stakes are too high. The goal instead must be to predict potential exposure, and that requires comprehensive efforts to gather threat intelligence. According to Bennett, such efforts should be:




Top Ten Tips for DR as a Service

In this article, we provide tips for what can be a particularly challenging task: deciding when and how to implement DRaaS in the enterprise.

Buy It, Don’t Hire It 

Some organizations already have an in-house team with the necessary expertise to establish and maintain a sophisticated DR plan. But plenty of others don’t even come close. In those cases, it is probably easier to buy the necessary DR technology and resources from the cloud than to try to hire it and build it in house.

“DRaaS is often a good fit for small to midsize businesses that lack the necessary expertise to develop, configure, test, and maintain an effective disaster recovery plan,” said Wayne Meriwether, an analyst for IT research firm Computer Economics.



WASHINGTON — In the month since a devastating computer systems breach at the Office of Personnel Management, digital SWAT teams have been racing to plug the most glaring security holes in government computer networks and prevent another embarrassing theft of personal information, financial data and national security secrets.

But senior cybersecurity officials, lawmakers and technology experts said in interviews that the 30-day “cybersprint” ordered by President Obama after the attacks is little more than digital triage on federal computer networks that are cobbled together with out-of-date equipment and defended with the software equivalent of Bubble Wrap.

In an effort to highlight its corrective actions, the White House will announce shortly that teams of federal employees and volunteer hackers have made progress over the last month. At some agencies, 100 percent of users are, for the first time, logging in with two-factor authentication, a basic security feature, officials said. Security holes that have lingered for years despite obvious fixes are being patched. And thousands of low-level employees and contractors with access to the nation’s most sensitive secrets have been cut off.
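For readers unfamiliar with the mechanics, two-factor authentication typically pairs a password with a one-time code. Below is a minimal sketch of a time-based one-time password (TOTP) check using the third-party pyotp library; it illustrates the general mechanism only, not the government's actual implementation.

```python
# Minimal TOTP sketch using the pyotp library; illustrative only,
# not the federal government's actual two-factor implementation.
import pyotp

# Enrollment: the user stores this shared secret in an authenticator app
# (normally delivered as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: in addition to the password, the server verifies the current
# 6-digit code from the user's app.
code = totp.now()                              # stand-in for the code the user types
print("second factor ok:", totp.verify(code))  # True within the ~30s window
```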



CVS (CVS) last week notified CVSphoto.com customers that the independent vendor managing online payments for the website may have suffered a credit card breach.

And as a result, CVS tops this week's list of IT security news makers, followed by the University of California, Los Angeles (UCLA) Health System; the University of Pittsburgh Medical Center (UPMC) Health Plan; and Symantec (SYMC).

What can managed service providers (MSPs) and their customers learn from these IT security news makers? Check out this week's list of IT security stories to watch to find out:



(TNS) - The mayors of four communities in south Mississippi weren't so eager at first to recall the events of 10 years ago, when they were new to the job and Hurricane Katrina had devastated their cities. But during a program Wednesday, they remembered the destruction, the people who came to help -- and the chickens.

Those in the audience of the Katrina +10 presentation at the Ohr-O'Keefe Museum of Art nodded as they remembered with the mayors how it was in the days after the storm and laughed at some of their stories.

Moderator Joe Spraggins had just retired as a brigadier general in the Air Force and said he asked the Lord, "I want to have a challenge in my next career."

His first day on the job as the head of emergency operations for Harrison County was Aug. 29, 2005 -- the day Katrina hit.



You may have read that the Justice Department is warning food manufacturers that they could face criminal and civil penalties if they poison their customers with contaminated food.

Recent high profile food recalls, such as the one at Texas-based Blue Bell Creameries and another at Ohio-based Jeni’s Splendid Ice Creams, have drawn attention to this issue once again.

Now a new report by Swiss Re finds that the number of food recalls per year in the United States has almost doubled since 2002, while the costs are also rising.

Half of all food recalls cost the affected companies more than $10 million each and losses of up to $100 million are possible, Swiss Re says. These figures exclude the reputational damage that may take years for a company to recover from.



Over the next few weeks the shortlisted articles and papers in Continuity Central’s Business Continuity Paper of the Year competition will be published, with the winner announced after that. This is the first shortlisted paper, written by Ken Simpson, FBCI.

Are you looking to build a high-performing team? Where each member understands their role, and how they fit with other team members’ roles? A team that can execute on the prepared game plan - while at the same time has the capability to improvise as the situation warrants?

That description might be something your business continuity, incident response and/or crisis management teams aspire to - or it may be just as appropriate a goal for your ‘business as usual (BAU)’ functional teams. In any case it applies to teams that seek to compete in elite level sports and perhaps we can learn something about how to prepare teams from the methods used in the sporting domain.

The nature of training and preparation changes as players and teams move from the participation and social levels of sport into elite competitions. Basic drills, sloppy execution and general fitness regimes are replaced with targeted training programs - building high-level skills, disciplined execution and embedding team concepts.



As cyber threats emerge and evolve each day, they pose challenges for organizations of all sizes, in all industries. Even though most industries are investing heavily in cybersecurity, many companies are still playing catch up, discovering breaches days, months, and even years after they occur. The 2015 Verizon DBIR shows that this “detection deficit” is still increasing: The time taken for attackers to compromise networks is significantly less than the time it takes for organizations to discover breaches.

The risk posed by third parties complicates the issue further. How can an organization allocate time and resources to trust their partners’ security when they are struggling to keep up with their own? Over the years, audits, questionnaires, and penetration tests have helped to assess third party risk. However, in today’s ever-changing cyber landscape, these tools alone do not offer an up-to-date, objective view. While continuous monitoring solutions can improve detection and remediation times for all organizations, the retail, healthcare, and utilities industries can especially benefit from greater adoption.



Don’t Fall into the Vendor Lock-In Trap of Hyper-convergence

About two years ago, I wrote a blog post (Storage Vendor Lock-in – Is the End Near?) that discussed how two emerging technologies, convergence and VM-aware storage, and more importantly the synergy between them, might provide relief from vendor lock-in. Two years later, these two technologies have matured quite a bit, and the synergy between them, widely referred to as hyper-convergence, is a pretty hot trend in IT.

For many customers, flexibility and avoiding vendor lock-in are primary concerns and a key reason for considering hyper-convergence. While all of us at Maxta have been busy improving our hyper-converged solutions and keeping them flexible and free of vendor lock-in, the same cannot be said for some of our competitors. Unfortunately, some vendors are not leveraging the inherent potential of hyper-convergence to reduce vendor lock-in, and others are making moves to increase vendor lock-in to their own offerings.




MSPs, Don't Ignore Cloud Opportunities

Just as the IT channel was getting comfortable a half-dozen years ago with managed services, another new service model was vying for recognition – the cloud. Many MSPs have since added cloud-based services, but some still struggle with how to go about it.

If you ask Michael Corey why, the founder and president of Dedham, Massachusetts-based MSP Ntirety will tell you one of the main obstacles is self-imposed: IT service providers fear cloud-based services will cannibalize parts of their businesses. They’ve made money delivering services in a certain way for so long that the idea of replacing it with a cloud model scares them.



(TNS) - Twenty years ago this week, Chicago was gripped by one of the city's worst natural disasters: a scorching heat wave that claimed more than 700 victims, mostly the poor, elderly and others on society's margins.

The temperature hit 106 degrees on July 13, 1995, and would hover between the high 90s and low triple digits for the next five days. Dozens of bodies filled the Cook County medical examiner's office. On a single day — July 15 — the number of heat-related deaths reached its highest daily tally of 215; refrigerated trucks were summoned to handle the overflow of corpses.

Two decades later, the collective failings that contributed to the death toll are now well-documented: a city caught off guard, social isolation, a power grid that couldn't meet demand and a lack of awareness on the perils of brutal heat.



‘Banana Skins’ poll reflects industry risk perception

A new survey charting the top risks in the global insurance sector shows that cyber risk and interest rates are now among the top risks for insurers. Both are new entries in the rankings of this fifth successive survey, indicating how high a concern they have become for the industry when viewed in conjunction with regulatory developments and the broader macro-economy.

The CSFI’s latest ‘Insurance Banana Skins 2015’ survey, conducted in association with PwC, polled over 800 insurance practitioners and industry observers in 54 countries, to find out where they saw the greatest risks over the next 2-3 years.

Regulatory risk emerged as the overall top risk for participants in the survey for the third successive time, underlining the deep impact regulatory change is having.



The virtual data center is one of those things that sounded like a great idea at first, only to lose much of its appeal upon reflection. But while few organizations are pursuing a fully abstracted, end-to-end data environment, it appears that many data processes will benefit tremendously by not having to rely on integrated hardware/software infrastructure.

The virtual data center has gotten a boost from a number of key software developments lately that remove much of the complexity in creating functional data stacks in either on-premises or third-party clouds. One is the Mesosphere Datacenter Operating System (DCOS), which recently saw the release of a software development kit that allows cluster-wide installation and operation of Java, Go and Python services using a simple web or command-line interface. The system features a range of schedulers for various application types, such as long-term micro services, batch processing and storage, allowing enterprises to custom-build data frameworks to support highly specialized functions.




BCI: Facing a skills shortage

A new study by the Confederation of British Industry and Pearson has shown that demand for higher-level skills in British industry is set to grow in the years ahead, with sectors central to future growth – manufacturing and construction – particularly hard-pressed.

The Education and Skills Survey highlighted that over two-thirds of businesses (68%) expect their need for staff with higher level skills to grow in the years ahead, but more than half of those surveyed (55%) fear that they will not be able to access enough workers with the required skills.

Availability of talent/key skills may not have been the greatest threat to organizations according to the Business Continuity Institute’s latest Horizon Scan Report, but it is still a threat: 43% of business continuity professionals surveyed expressed either concern or extreme concern about the prospect of their organization suffering from a lack of available skills.

Katja Hall, Deputy Director-General at the CBI, said: “The Government has set out its stall to create a high-skilled economy, but firms are facing a skills emergency now, threatening to starve economic growth. Worryingly, it’s those high-growth, high-value sectors with the most potential which are the ones under most pressure."

Rod Bristow, President of Pearson’s UK business, said: “Better skills are not only the lifeblood of the UK economy – as fundamental to British business as improving our infrastructure, technology and transport links – they are also critical to improving young people's life chances, of enabling them to be a success in life and work."

The Office of Personnel Management (OPM) breach is in the news again. As you may have heard, it is much worse than originally thought, with nearly 22 million records compromised. With this news, this breach is the second one in less than three months that has hit a little too close to home for me personally.

It’s also not surprising. Our government is ridiculously lax in its cybersecurity efforts, especially when you consider the amount of personally identifiable information held in government databases. Remember, the OPM breach didn’t just have Social Security numbers and birthdates. PII revealed also included things like fingerprints and findings from security clearance investigations. The stealing of this data has created a new level of identity theft problems for the individuals affected, according to the security experts at NuData, who provided the following commentary to me in an email:



Climate markers continue to show global warming trend

State of the Climate in 2014 report available online. (Credit: NOAA)

In 2014, the most essential indicators of Earth’s changing climate continued to reflect trends of a warming planet, with several markers, such as rising land and ocean temperature, sea levels and greenhouse gases, setting new records. These key findings and others can be found in the State of the Climate in 2014 report released online today by the American Meteorological Society (AMS).

The report, compiled by NOAA’s Center for Weather and Climate at the National Centers for Environmental Information, is based on contributions from 413 scientists from 58 countries around the world. It provides a detailed update on global climate indicators, notable weather events, and other data collected by environmental monitoring stations and instruments located on land, water, ice, and in space.

“This report represents data from around the globe, from hundreds of scientists and gives us a picture of what happened in 2014. The variety of indicators shows us how our climate is changing, not just in temperature but from the depths of the oceans to the outer atmosphere,” said Thomas R. Karl, L.H.D, Director, NOAA National Centers for Environmental Information.

For State of the Climate in 2014 maps, images and highlights, visit Climate.gov. (Credit: NOAA)

The report’s climate indicators show patterns, changes and trends of the global climate system. Examples of the indicators include various types of greenhouse gases; temperatures throughout the atmosphere, ocean, and land; cloud cover; sea level; ocean salinity; sea ice extent; and snow cover. The indicators often reflect many thousands of measurements from multiple independent datasets.

“This is the 25th report in this important annual series, as well as the 20th report that has been produced for publication in BAMS,” said Keith Seitter, AMS Executive Director. “Over the years we have seen clearly the value of careful and consistent monitoring of our climate which allows us to document real changes occurring in the Earth’s climate system.”

Key highlights from the report include:

  • Greenhouse gases continued to climb: Major greenhouse gas concentrations, including carbon dioxide, methane and nitrous oxide, continued to rise during 2014, once again reaching historic high values. Atmospheric CO2 concentrations increased by 1.9 ppm in 2014, reaching a global average of 397.2 ppm for the year. This compares with a global average of 354.0 ppm in 1990, when this report was first published just 25 years ago (see the quick arithmetic check after this list).
  • Record temperatures observed near the Earth’s surface: Four independent global datasets showed that 2014 was the warmest year on record. The warmth was widespread across land areas. Europe experienced its warmest year on record, with more than 20 countries exceeding their previous records. Africa had above-average temperatures across most of the continent throughout 2014, Australia saw its third warmest year on record, Mexico had its warmest year on record, and Argentina and Uruguay each had their second warmest year on record. Eastern North America was the only major region to experience below-average annual temperatures.
  • Tropical Pacific Ocean moves towards El Niño–Southern Oscillation conditions: The El Niño–Southern Oscillation was in a neutral state during 2014, although it was on the cool side of neutral at the beginning of the year and approached warm El Niño conditions by the end of the year. This pattern played a major role in several regional climate outcomes.  
  • Sea surface temperatures were record high: The globally averaged sea surface temperature was the highest on record. The warmth was particularly notable in the North Pacific Ocean, where temperatures are in part likely driven by a transition of the Pacific decadal oscillation – a recurring pattern of ocean-atmosphere climate variability centered in the region.
  • Global upper ocean heat content was record high: Globally, upper ocean heat content reached a record high for the year, reflecting the continuing accumulation of thermal energy in the upper layer of the oceans. Oceans absorb over 90 percent of Earth’s excess heat from greenhouse gas forcing.
  • Global sea level was record high: Global average sea level rose to a record high in 2014. This keeps pace with the 3.2 ± 0.4 mm per year trend in sea level growth observed over the past two decades.
  • The Arctic continued to warm; sea ice extent remained low: The Arctic experienced its fourth warmest year since records began in the early 20th century. Arctic snow melt occurred 20–30 days earlier than the 1998–2010 average. On the North Slope of Alaska, record high temperatures at 20-meter depth were measured at four of five permafrost observatories. The Arctic minimum sea ice extent reached 1.94 million square miles on September 17, the sixth lowest since satellite observations began in 1979. The eight lowest minimum sea ice extents during this period have occurred in the last eight years.
  • The Antarctic showed highly variable temperature patterns; sea ice extent reached record high: Temperature patterns across the Antarctic showed strong seasonal and regional patterns of warmer-than-normal and cooler-than-normal conditions, resulting in near-average conditions for the year for the continent as a whole. The Antarctic maximum sea ice extent reached a record high of 7.78 million square miles on September 20. This is 220,000 square miles more than the previous record of 7.56 million square miles that occurred in 2013. This was the third consecutive year of record maximum sea ice extent. 
  • Tropical cyclones above average overall: There were 91 tropical cyclones in 2014, well above the 1981–2010 average of 82 storms. The 22 named storms in the Eastern/Central Pacific were the most to occur in the basin since 1992. Similar to 2013, the North Atlantic season was quieter than most years of the last two decades with respect to the number of storms.
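A quick arithmetic check of the greenhouse gas bullet above, using only the figures quoted there, shows that 2014's 1.9 ppm rise sits right at the average pace implied by the 1990 and 2014 global averages:

```python
# Back-of-the-envelope check of the CO2 figures quoted in the first bullet.
co2_2014 = 397.2              # global average, ppm
co2_1990 = 354.0              # global average, ppm
years = 2014 - 1990

print(co2_2014 - co2_1990)             # 43.2 ppm total rise over 24 years
print((co2_2014 - co2_1990) / years)   # ~1.8 ppm/year average since 1990
```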

The State of the Climate in 2014 is the 25th edition in a peer-reviewed series published annually as a special supplement to the Bulletin of the American Meteorological Society. The journal makes the full report openly available online.

NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.

Given the complexity of managing IT environments these days, it’s now only a matter of time before machine learning is routinely applied to manage IT operations. One of the first companies to provide such a capability is SIOS Technology, which today announced the general availability of SIOS iQ software for VMware environments that applies analytics based on machine learning algorithms to both IT infrastructure and application software.

Available in both standard and free editions, the machine learning software first automatically discovers what should be defined as normal within any IT environment, says SIOS Technology COO Jerry Melnick, and then over time learns which deviations from normal will result in a particular performance threshold being broken or a potential vulnerability being created.

Melnick says SIOS Technology decided to focus initially on the VMware environment because of the size of the installed base, but the technology will soon be more broadly applied. At its core is an implementation of a Postgres database running machine learning software that IT organizations download onto a VMware virtual machine. Via a SIOS PERC Dashboard, SIOS iQ then recommends the best solution to any particular issue it discovers.
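SIOS has not published the algorithms behind iQ, but the learn-normal-then-flag-deviations approach Melnick describes can be illustrated with a generic rolling z-score. The sketch below is a toy stand-in for that general technique, not SIOS iQ's actual method:

```python
# Minimal sketch of baseline-then-deviation anomaly detection, the general
# technique the article describes; illustrative only, not SIOS iQ's algorithm.
from statistics import mean, stdev

def anomalies(samples, window=20, threshold=3.0):
    """Flag samples deviating more than `threshold` standard deviations
    from the trailing window's learned 'normal'."""
    flagged = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and abs(samples[i] - mu) / sigma > threshold:
            flagged.append((i, samples[i]))
    return flagged

# e.g. latency steady around 10 ms, then a spike
latency = [10.1, 9.8, 10.3, 10.0, 9.9] * 5 + [25.0]
print(anomalies(latency))   # -> [(25, 25.0)]
```

Real products layer far more on top (multi-metric correlation, seasonality, learned thresholds), but the baseline-and-deviation core is the same idea.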




The New Needs of Digital Business

Digital business requires change across a very wide range of areas. There is an increasing use of storage, vastly expanded networking requirements, and a rise in the virtualization of all equipment. Digital systems deployed on the network can be replicated, modeled, and situated anywhere, so we have seen virtual networks, virtual servers, virtual mobile solutions, and virtual workstations of all types. Virtualization creates a need for new management techniques that control, replicate, and abandon virtual components on an automatic basis and manage their various interactions. Information technology is moving outside the firm to the public cloud, either directly or connected through a hybrid cloud mechanism. All aspects of IT are becoming increasingly connected to all the artifacts and processes of the firm.

The frameworks used in enterprise architecture (EA) are also continuing to evolve and include elements such as big data, the cloud, mobile, and the other familiar elements of the changing environment. But what has not evolved so swiftly is the ability to rapidly change the models themselves and what they include as the cycles of technology change continue to accelerate. Continued development of digital business creates a space of massively interconnected data and processing, which must evolve into a more effectively governed system.



He declined to live tweet his upcoming wedding from the altar, but there is no doubt that Nick Hayes is the social media expert on Forrester’s S&R team. He has extensive knowledge of the security, privacy, archiving, and compliance challenges of social media, as well as the technical controls used to address them. He also specializes in the tools that monitor and analyze social data to improve oversight and mitigation tactics of myriad reputational, third-party, security, and operational risks. He is certainly aware of the reputational risk of staring at your cell phone when you’re supposed to say, “I do”, but if you follow him (@nickhayes10), you might get lucky with a pic or two -- and some good risk thoughts to boot.

Nick advises clients on a range of governance, risk, and compliance (GRC) topics, including corporate culture, training and awareness, and corporate social responsibility. He presents at leading industry and technology conferences, and he works with organizations of all sizes across all major industries.



It’s no surprise that in today’s world, data grows by leaps and bounds daily. In fact, IDC and EMC report that global data will increase “by 50 times by 2020.” With the use of mobile devices, social networks and cloud applications, all businesses, large and small, can benefit from capturing and analyzing consumer and business data. Several companies have come forward with BI solutions for such businesses in recent months.

Most recently, Quatrro Business Support Services has created a leading-edge new business intelligence (BI) and financial analytics tool to help small to midsize businesses (SMBs) gather unstructured data and use it to make informed business decisions.

The BI Tool features financial dashboards, reporting templates and alerts to assist SMBs in making sense of the mounds of unstructured data they collect. According to PCWorld, SMBs will benefit from the BI Tool’s analysis and planning features to set up benchmarking and unit comparisons when attempting to identify trends in a market. It can also help with budgeting, forecasting and predictive analysis, which can give SMBs the ability to grow and expand.
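The article doesn't describe how the BI Tool's forecasting works internally; as a rough illustration of the general idea behind such features, a least-squares trend fit over monthly figures is the simplest possible forecaster (the revenue numbers below are invented):

```python
# Toy illustration of trend-based forecasting, the general idea behind the
# budgeting/forecasting features described; not Quatrro's actual product.
import numpy as np

monthly_revenue = np.array([42.0, 44.5, 43.8, 46.1, 47.9, 49.2])  # $k, invented
months = np.arange(len(monthly_revenue))

# Least-squares linear fit: degree-1 polyfit returns (slope, intercept).
slope, intercept = np.polyfit(months, monthly_revenue, 1)
forecast = slope * len(monthly_revenue) + intercept   # next month's estimate

print(f"trend {slope:+.2f} $k/month, next-month forecast {forecast:.1f} $k")
```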



A new study from the Ponemon Institute confirms that most healthcare organizations have been the victims of cyber attacks, placing sensitive patient data such as Social Security numbers and insurance information in the hands of identity thieves and organized criminals. With more and more healthcare organizations turning to managed service providers (MSPs) and cloud-based file sharing to store and administer their substantial number of patient records, healthcare organizations’ third-party vendors are increasingly held responsible for complying with industry standards for data protection.

The Fifth Annual Benchmark Study on Privacy & Security of Healthcare Data investigated data breaches among 90 healthcare organizations and 88 of their business associates. Its findings show a shocking increase in cyberattacks and identity theft across the healthcare industry.




U.S. Winter Storm Losses Mount

As my kids head off for their snowy-themed day at camp, the statistic that jumps off the page in the 2015 Half-Year Natural Catastrophe Review jointly presented by Munich Re and the Insurance Information Institute (I.I.I.) is the record $2.9 billion (and counting) in aggregate insured losses caused by the second winter of brutal cold across the Northeastern United States.

As Munich Re illustrates in the following slide, a total of 11 winter storm and cold wave events resulted in 80 fatalities and caused an estimated $3.8 billion in overall economic losses in the period from January 2015 to the end of winter:



The strategic value of business continuity

What is the value of business continuity? That is a question those working in the profession often grapple with, certainly when attempting to justify its existence to top management. In the latest edition of the Business Continuity Institute's Working Paper Series, Dr Clifford Ferguson explores the issue of strategic value and offers a way forward by integrating business continuity into an organization’s strategic plan.

This is timely given the growing interest in resilience as a quality that allows organizations to increase their adaptive capacity to sudden shocks or long-term, incremental changes. With his work revisiting some of the models featured in existing literature, Dr Ferguson makes the case for articulating the strategic value of BC and its relationship to resilience.

This paper is also relevant given its focus on the public sector. The 2015 BCI Horizon Scan report revealed BC funding cuts in 30% of public sector organizations sampled worldwide. These budget cuts present clear pressures for BC practitioners in the public sector to demonstrate value for money while maintaining standards of delivery.

Dr Ferguson concludes that business continuity should be both a cost saver and a strategic risk reduction tool. It cannot be independent from the corporate strategy and it should be embedded into the organizational value system. A business continuity culture will have a direct influence on the services the state offers its citizens and will give rise to a reduction of reputational risk. The continuity culture may be the best driver of continuous service delivery improvement.

The paper, 'The strategic value of business continuity (BC): Integrating BC into an organization’s strategic plan', is available to download free of charge from the Business Continuity Institute.

Joe Young, CEO of GDS in Pembroke, Massachusetts, has started to focus on the lack of IT security in government organizations across New England. Recently, Joe and his team hosted an awesome webinar on this topic. You have to watch it.

There’s no question about it: cybercrime is on the rise. From links in bogus phishing emails to malware-ridden websites to data-stealing ransomware, attacks are becoming more sophisticated and complex every day. A few months ago, in Wayland, Massachusetts, police investigated and discovered that someone had accessed the town’s bank accounts.

As you can imagine, the hackers stole a significant amount of money, withdrawing over $4 million from the town’s bank account. Unfortunately, the town wasn’t prepared to handle the attack. That’s why it’s absolutely vital to ensure you’re truly protecting governments from hackers.



There is growing concern that corporate boards and senior executives are not prepared to govern their organization’s exposure to cyberrisk. While this is true to some degree, executive management can learn to identify and focus on the strategic and systemic sources of cyberrisk, without becoming distracted by complex technology-related symptoms, by understanding the organization’s ability to make well-informed decisions about cyberrisk and reliably execute those decisions.

Making well-informed cyberrisk decisions

To gain greater confidence regarding cyberrisk decision-making, executives should ensure that their organizations are functioning well in two areas: visibility into the cyber risk landscape, and risk analysis accuracy.



(TNS) - The World Health Organization must undergo fundamental changes if it is to fulfill its function of protecting global health, according to an independent panel of experts that reviewed the agency’s bungled response to the deadly Ebola outbreak.

“The panel considers that WHO does not currently possess the capacity or organizational culture to deliver a full emergency public health response,” it says in a scathing report released Tuesday.

The panel, headed by Barbara Stocking, a former head of the aid group Oxfam GB, urged the WHO to create a division to oversee preparations for the next major outbreak and coordinate the response.



Bloggers like me often comment on how organizations are dealing with crises. I often do so with a sense of dread, knowing that I really don’t know what is going on inside and may not be aware of critical issues that are affecting the response.

That danger was highlighted to me in the extreme when I read today’s comments by Deborah Watson on PR Daily’s blog. My biggest concern is that by getting the facts so wrong, the real lessons to be learned from BP’s reputation problems are missed, and those interested will likely take away the wrong things.

I’ll comment on each of the five points she raises as BP’s biggest blunders. (Her comments are italicized).



It is easy to look at challenges to the enterprise data environment and view them as infrastructure problems or architectural problems or system problems. In reality, it is all a data problem – as in too much data coming in too quickly and in too much of a disjointed fashion.

And things are only going to get worse as organizations attempt to deal with Big Data, the Internet of Things, data mobility, and a host of other initiatives coming down the pike. So while measures to improve and expand infrastructure and architecture are vital as data trends emerge, so are ways to capture and manage these ever-increasing volumes without breaking the IT budget but still preserving the value of data within the overall business model.

According to MarketsandMarkets, the enterprise data management space is expected to nearly double in size by the end of the decade from today’s $64 billion to more than $105 billion, a compound annual growth rate of 10.2 percent. Key drivers here include the need for business continuity in the event of data loss plus the need to reduce the total cost of ownership of data, both of which are exacerbated by the flood of data coming into the enterprise. To meet this challenge, data management platforms are incorporating a wide range of disciplines, including integration, migration, warehousing, governance and security.
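Those MarketsandMarkets figures check out arithmetically if you assume a 2015 base year and a 2020 "end of the decade" horizon (neither is stated explicitly in the article):

```python
# Sanity check of the quoted market forecast; the 2015 base year and
# five-year horizon are assumptions, not stated in the article.
base, cagr, years = 64.0, 0.102, 5
print(base * (1 + cagr) ** years)   # ~104 ($B), within ~1% of the >$105B cited
```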



Peter Cerrato is a principal consultant for Forrester's Business Technology consulting practice.  

A very strange and sudden thing happened 66 million years ago. A comet crashing into the Mexican Yucatan peninsula near Chicxulub put an end to the long reign of the dinosaurs. But not so fast. We now know that some of those dinosaurs survived the massive Cretaceous-Tertiary extinction event: the smaller, faster, feathered and headed-toward-warm-blooded early ancestors of our eagles and hawks.

What can we as security and risk professionals learn from those early ancestors of today’s great raptors (and other birds) to make the leap required to survive the massive extinction event the business world is undergoing: the age of the customer?



The fact of the matter is that it should be more convenient to share threat-related information than it is right now, but an evolved level of suspicion between government and the private sector seems to supersede an understanding at the executive level of the severity of the threats…. Without better information sharing, particularly in the cyber arena, the critical infrastructure of this country remains vulnerable.  (Searle, ASA News & Notes, July 2013)

I’ve spent a fair amount of time since 2009 calling for action in the area of information sharing between the public and private sector.  Any progress that might have been made in this area – the increasingly helpful role of the FBI in sharing information with the private sector, for example – has been wiped out by several recent breaches of highly confidential data at the Internal Revenue Service (IRS) and the Office of Personnel Management (OPM).

The four Basel-defined sources of financial loss that can spring from operational risk are unchanging:  people, process, systems and external events.  These days, we have no better illustration of those four sources than in the area of data breaches, where the numbers are staggering, but where the true costs may not yet have been factored.  On the U.S. private sector side, we continue to identify a cost of $201 per record with an average overall breach cost of $5.85M; and we know that 43% of U.S. firms experienced a 2014 breach (www.insidecounsel.com).  Further, a 2014 Ponemon study suggests that if your organization has more than 10,000 records, the probability of breach is 22%, whether or not your firm knows that it has been breached.
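Taken together, those figures imply some rough orders of magnitude; the arithmetic below naively combines numbers from different Ponemon analyses, so treat it as illustration only:

```python
# Rough implications of the breach-cost figures quoted above. These mix
# numbers from different studies, so the results are illustrative only.
cost_per_record = 201.0      # US$ per compromised record
avg_breach_cost = 5.85e6     # US$ average total breach cost
breach_probability = 0.22    # Ponemon: firms holding >10,000 records

print(avg_breach_cost / cost_per_record)      # ~29,100 records per average breach
print(breach_probability * avg_breach_cost)   # ~US$1.29M expected breach cost at 22%
```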



On January 28, 1986, nearly 30 years ago, the space shuttle Challenger broke apart 73 seconds into its flight, leading to the tragic deaths of its seven crew members. As the spacecraft disintegrated over the Atlantic Ocean, the paradigm of risk management shifted from reactive to proactive. Taxonomies, frameworks, methodologies and tools developed rapidly to meet this need to manage risk proactively. And while, nearly 30 years later, the evolution of risk management leaves us more confident in answering the reactive question, “Are we riskier today than we were yesterday?”, we face the stark realization that we are not truly able to answer an even more important question: “Will we be riskier tomorrow than we are today?”

Realizing a collective vision of informative, forward-looking dashboards that provide confidence in assessments of the risks that lie ahead is the work of the current generation. That makes today an exciting time for risk management. Great progress has been made, but as we reflect today, we know so much more can and must be done.

At this point, we thought we would take a pause and look back 30 years on how risk management has evolved and some of the lessons we can draw from the past.



Are you a risky partner? According to a recent Skyhigh Networks survey, nearly 8 percent of cloud partners are given access to company data that is considered high-risk. For MSPs, it’s vital that your clients see your cloud-based file sharing services as a safe move for their company.

In order to work effectively with clients, you must show yourself to be a low-risk partner, one that works hard to secure their cloud sharing for their other partners. The average company works with 1,500 business partners via the cloud. By first proving yourself a trusted partner, you can then start working to protect your clients against the other 1,499.



U.S. Office of Personnel Management (OPM) Director Katherine Archuleta resigned last week after OPM officials discovered a data breach in April.

And as a result, OPM tops this week's list of IT security news makers, followed by the Army National Guard, Service Systems Associates (SSA) and "Gunpoder" malware.

What can managed service providers (MSPs) and their customers learn from these IT security news makers? Check out this week's list of IT security stories to watch to find out:



The majority of IT decision makers in large and midsize U.S. companies want to outsource their public cloud management to managed service providers, with 70 percent preferring to deal with a single vendor to manage their entire IT infrastructure, according to a new report.

Digital Fortress, a managed cloud and colocation provider with data centers in Seattle, surveyed 100 IT decision makers online in June. The company found that 65 percent of companies plan to partially outsource management of public cloud to a third party.






More than 70% of women in insurance believe the industry is making progress toward gender equality and, for the second year in a row, over two-thirds think their company is working to promote gender diversity, according to a new survey from the Insurance Industry Charitable Foundation.

After the IICF Women in Insurance Global Conference, which brought together 650 insurance professionals, senior executive speakers, and CEOs to discuss how the industry can increase gender diversity in the workplace, the foundation polled attendees on the current reality of gender diversity and its evolution across the insurance industry.

Almost half of attendees agreed that their company is working to promote gender diversity, with another 19% strongly agreeing; 24.5% disagreed and 7.1% strongly disagreed. Biases in advancement (51%) and lack of opportunities for professional advancement (24.6%) remain the biggest barriers for women seeking leadership positions in their companies, respondents said. The industry may be making some progress on those issues, however, as the percentage of women who named “biases in advancement” and “lack of opportunities for professional advancement” as the chief barriers fell to 68% from 76% last year.



Thousands of controversial .sucks domains emerged from their sunrise period on Sunday 21st June and became available to the general public. But just 20 percent of the UK’s top brands have snapped them up, leaving the rest in danger from online trolls, according to domain name registrar 34SP.com. 80 percent of the leading 100 UK brands are yet to register the top level domains (TLDs) that pose a reputational threat.

Vodafone, Barclays, ASDA, and ASOS are some of the more cautious UK brands to have purchased the controversial domains released by Canadian domain registrar Vox Populi before they fell into the wrong hands. Vodafone, Barclays, Lloyds, and Nationwide have gone as far as to splash out on .sucks domains under a variety of versions of their brand terms or well-known phrases.

US brands were vocal when preregistering the domains whilst they were in their sunrise period and only available to trademark holders, with Taylor Swift, Kevin Spacey, and Microsoft all saying they’d bought them. And a similar response was anticipated by 34SP.com for UK brands once the TLDs were available to the general public.



Disaster management officials from Asia-Pacific Economic Cooperation (APEC) member economies have voiced support for the introduction of financial incentives to spur emergency preparedness among businesses in the Asia-Pacific, as the risk of shocks to trade and growth rises in the region of the world hardest hit by natural disasters.

An incentives-based approach was backed by officials over mandatory measures during a public-private sector meeting in Bangkok to promote business continuity planning. The focus is on lifting the low adoption rate by small and medium enterprises, which account for more than 97% of businesses, 60% of GDP and over half of employment in APEC economies, and are an emerging yet vulnerable driver of cross-border production and supply chains.

“Small businesses play a significant and growing role in the international production and trade of goods, particularly as suppliers of component parts and equipment for larger manufacturers, but their disaster risk exposure remains disproportionately high,” explained Dr Li Wei-sen, Co-Chair of the APEC Emergency Preparedness Working Group, which oversees member cooperation on related issues.

The knock-on effects of small business disruptions or shutdowns can be substantial given the increasingly globalised nature of production and trade, as earthquakes, floods and other natural disasters in the Asia-Pacific have shown. The adoption of business continuity plans by small and medium enterprises is critical to mitigating the disaster threat within the sector and to the global economy but their recognition of this need and action to address it is often lacking.

APEC economies are hit by more than 70% of the world’s natural disasters and suffered US$68 billion annually in related costs from 2003 to 2013. But just 13% of small and medium enterprises in the region have business continuity plans in place, which involve raising disaster risk awareness, identifying vulnerabilities and organizing teams to address them. This gap leaves the sector more susceptible to business disruptions, financial losses and bankruptcy.

“Possible financial incentives to encourage small and medium enterprises to adopt business continuity plans include tax cuts, reduced insurance costs and lower interest rates to help them overcome the initial investment of setting up their plans,” said Natori Kiyoshi, who is also Co-Chair of the APEC Emergency Preparedness Working Group. “There is no one-size-fits-all approach given variations in economic and financial conditions among the region’s economies.”

New Ponemon research has been published highlighting that UK businesses are unable to determine the risk to 58 percent of the confidential data they store in the cloud and 28 percent of the sensitive information they hold on premise.

The study, supported by Informatica Corporation, explored how UK organizations are approaching data security, and reveals that businesses are failing to identify sensitive or confidential information.

Less than half (45 percent) have a common process in place for discovering and classifying the sensitive or confidential data held on premise, and only a quarter have such a process for data in the cloud.

As information continues to proliferate, not knowing where sensitive or confidential data resides is one of the biggest concerns for 55 percent of IT and IT security practitioners.



Have you noticed that you almost never hear about “green computing” anymore? It was all the rage a few years ago, but now, it seems, the topic draws about as much attention as a Palm Pilot. I don’t pretend to know exactly why that is, but my hunch is that IT professionals have so much to deal with in their quest to improve the efficiency of their operations, issues with labels that conjure up touchy-feely images of tree huggers and “Save the Planet” stickers simply don’t rise to a level that makes it on to a lot of IT department radars.

The irony, of course, is that, when you think about it, “green computing” and efficient operations are inseparable. Whether or not you call it something that makes for a good bumper sticker, it’s all about efficient enterprise facilities management.

Enterprise facilities management, or EFM, was the topic of my recent email interview with Paul Morgan, vice president and general manager of the Global Workplace Solutions (GWS) unit of Johnson Controls in Milwaukee. GWS is a provider of outsourced EFM services, and I thought it would be helpful to start off by clarifying how GWS defines EFM. Morgan prefaced his definition by noting that the facilities management industry and the business needs of building owners and occupiers continue to evolve.



To many, the data center is still the heart of the enterprise, responsible for pushing vital digital nutrients to an increasingly diverse organism. To others, it is more like an anchor, weighing down what would otherwise be a nimble craft as it trawls the data sea in search of treasure.

Both camps recognize the dramatic changes taking place within and outside data center infrastructure, but they come to radically different conclusions about what those changes mean and what is the best way for the enterprise to engage the next-generation data environment.

According to 451 Research, 87 percent of those with owned-and-operated (O&O) data centers in North America and Europe plan on maintaining or even increasing their facilities in the coming year, with a quarter of those set to increase spending within the next three months. The spread covered medium-sized and large organizations, particularly in the healthcare and finance industries, which is a strong indication that if any group is liable to shed direct control of data infrastructure it is the SMB market, which has relatively little infrastructure to begin with.



WASHINGTON — The Foreign Intelligence Surveillance Court ruled late Monday that the National Security Agency may temporarily resume its once-secret program that systematically collects records of Americans’ domestic phone calls in bulk.

But the American Civil Liberties Union said Tuesday that it would ask the United States Court of Appeals for the Second Circuit, which had ruled that the surveillance program was illegal, to issue an injunction to halt the program, setting up a potential conflict between the two courts.

The program lapsed on June 1, when a law on which it was based, Section 215 of the USA Patriot Act, expired. Congress revived that provision on June 2 with a bill called the USA Freedom Act, which said the provision could not be used for bulk collection after six months.

The six-month period was intended to give intelligence agencies time to move to a new system in which the phone records — which include information like phone numbers and the duration of calls but not the contents of conversations — would stay in the hands of phone companies. Under those rules, the agency would still be able to gain access to the records to analyze links between callers and suspected terrorists.
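The link analysis described here is, at its core, traversal of a call graph built from metadata. A toy sketch of that general idea (the numbers and records are invented, and this bears no relation to the agency's actual tooling):

```python
# Toy sketch of link analysis over call-detail records (caller, callee).
# All numbers are invented; illustrative only.
from collections import defaultdict

calls = [("555-0100", "555-0101"),
         ("555-0101", "555-0102"),
         ("555-0103", "555-0104")]

graph = defaultdict(set)
for a, b in calls:          # metadata only: who called whom, not content
    graph[a].add(b)
    graph[b].add(a)

def within_two_hops(number):
    """Numbers reachable from `number` in one or two calls."""
    first = graph[number]
    second = set().union(*(graph[n] for n in first)) if first else set()
    return (first | second) - {number}

print(within_two_hops("555-0100"))   # {'555-0101', '555-0102'}
```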



John Ball, MBCI, describes how taking business continuity training in-house can pay dividends for public sector organizations.

I would like to take a few moments to consider how most organizations, particularly the public sector, approach the training of business continuity, and offer up a low cost, continuous improvement model to push that training further into the organization.

Generally speaking, many public sector organizations develop or employ one expert, who is trained to a recognised standard and responsible for business continuity across the organization. In some cases business continuity is combined with emergency planning and risk under the title of ‘resilience manager’. Personally I think that putting three jobs into one is not ideal; however, I understand that organizations have to ‘cut their cloth’ according to the pressures they face.

Whatever the setup, and depending on the budget, the business continuity programme will be delivered via a project team, a single manager, or a manager guiding a number of business continuity representatives (in addition to the day job) that receive training as they go along. These are all tried and tested processes, the result of which sees us where we are today. Many organizations aspire to align with ISO 22301 and, consequently, the business continuity programme is driven along those lines.

It is important that business continuity managers are trained to a high level of expertise. This is a necessary yet expensive process, but it brings a measurable return on investment in the form of continued service delivery.



OKLAHOMA CITY — As the waters recede and Oklahomans begin to assess the damage caused by the severe storms and flooding that washed across the state this spring, questions start to arise about how and when those with National Flood Insurance Program (NFIP) policies should file claims.

The first step is notification. Homeowners, renters and business owners with NFIP coverage should immediately report flood damage to their insurance company or agent. A claims adjuster will inspect your damages, estimate the repair costs, and send an estimate to the insurance company for review and payment approval.

As part of their claim, policyholders are required to submit a “Proof of Loss” statement, which includes an estimate of the damage to both your structure and its contents. Insurance companies usually provide this form and in most cases will help you fill it out. A “Proof of Loss” is not a release of claim, but a statement of loss facts and damages claimed.

Your claims package should be supported by photos of water in the structure and the resulting damage. You should also compile an itemized list of all flood damage and retain swatches of carpets or fabrics that were damaged. Be sure to make copies of the insurance claim, proof of loss and all other supporting documents for your own records.

An important point to keep in mind is that you do not have to accept the initial estimate of the damage prepared by the claims adjuster. All issues should be addressed with the adjuster and the company’s management. However, if you believe the claims adjuster did not address all of your flood damage in their estimate, you can file a supplemental claim for the additional damages. For example, there may have been hidden damage not detected by the claims adjuster during their property inspection. 

Be aware there are strict deadlines for filing flood insurance claims. Regardless of whether you agree with the claims adjuster’s estimate, your proof of loss statement must be submitted to the NFIP or the insurance company within 240 days of the loss. This extension of the 60-day policy wording is specific to the current Oklahoma flood.

If your claim is denied, the Federal Emergency Management Agency (FEMA) has established a formal appeals process. You can start this process as soon as the insurance company issues its final determination in the form of a written denial (in whole or in part) of your claim.

The written appeal must be filed within 60 days of the insurance company's final claim determination. FEMA will acknowledge receipt of your appeal in writing and advise if additional information or documents are required for full consideration of your appeal. Next, FEMA will review your documentation and conduct any additional investigation needed. Finally, the policyholder and their insurance company will be advised of FEMA's decision regarding the appeal.

Even if you file an appeal with FEMA, that does not relinquish or replace your right to file a lawsuit against the insurance company, nor does it expand or change the one-year statute of limitation to file suit against the insurer for the disallowed portion of your claim.

To avoid conflicting results and duplicated effort, a policyholder who files suit against an insurance company is prohibited from filing an appeal with FEMA under this process. As a result, homeowners are encouraged to file an appeal with FEMA first.

Oklahomans who don’t have NFIP insurance – and who sustained losses or damages in the May 5 through June 4 storms – may be eligible for state and federal assistance. You can apply online at DisasterAssistance.gov or via smartphone at m.fema.gov or by phone at 800-621-3362 (Voice or 7-1-1/ Relay) or TTY 800-462-7585. For information about U.S. Small Business Administration (SBA) programs, applicants should call 800-659-2955 (TTY 800-877-8339).

Even if you have an NFIP policy, you may also be entitled to FEMA Individual Assistance payments for housing allowance, contents losses, or moving and storage expenses.

For more information about flood insurance, go to www.FloodSmart.gov. For more information on Oklahoma disaster recovery, visit http://www.fema.gov/disaster/4222 or visit OEM at www.oem.ok.gov.


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

The Oklahoma Department of Emergency Management (OEM) prepares for, responds to, recovers from and mitigates against emergencies and disasters. The department delivers services to Oklahoma cities, towns and counties through a network of more than 350 local emergency managers.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow us on Twitter at www.twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners, and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955, emailing disastercustomerservice@sba.gov, or visiting SBA’s website at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call (800) 877-8339.

Liquid cooling is gaining, well, steam (sorry) in the data center as compute densities creep up and organizations look for ways to keep temperatures within tolerance without busting the budget on less-efficient air-handling infrastructure.

But there are a number of approaches to liquid cooling, ranging from running simple cold water in and around the data center to full immersion of chips and motherboards in non-conducting dielectric solutions.

According to Research and Markets, the data center cooling market as a whole is on pace to hit compound annual growth of 6.67 percent between now and 2019. The report summary available on the web does not break out the performance of specific cooling categories, but it does note that high adoption of liquid-immersion technologies is one of the key growth factors. As cloud computing and data analytics ramp up in the enterprise, data infrastructure across the board will have to provide greater performance within small, most likely modular, footprints, which means more heat and a more direct way to whisk it away from sensitive data equipment.
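As a quick sanity check of what a 6.67 percent compound annual growth rate implies, here is a back-of-the-envelope sketch; the base index of 100 is an arbitrary placeholder, not a figure from the report.

    # Back-of-the-envelope: what a 6.67% CAGR implies over roughly four years.
    # The base index of 100 is a hypothetical placeholder, not a report figure.
    base = 100.0   # market-size index today (arbitrary)
    cagr = 0.0667  # compound annual growth rate cited by Research and Markets
    years = 4      # roughly "now" (2015) through 2019

    projected = base * (1 + cagr) ** years
    print(f"Index after {years} years: {projected:.1f}")  # ~129.5, i.e. ~29% cumulative growth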



Recently, I checked out all the iOS apps available from my home state, Kentucky.  I wasn’t impressed.

The parks system has a nice app — the same one available for other states, thanks to a private company. In fact, all of the apps I found were actually produced by private companies, and even so, they were pretty unimpressive. Tourism, for example, has collaborated on an app that basically gives you a .pdf of its main publication.

If mobile apps are the Internet in small, Kentucky seems to be making the same mistakes I saw it make back in 2000, when it was building a web presence. There’s no clear strategy of prioritizing critical services first.



AUSTIN, Texas – State and federal dollars are flowing into Texas communities recovering from the May 4 through June 19 storms, straight-line winds, tornadoes and floods.

To date, more than $137 million in state and federal grants, U.S. Small Business Administration (SBA) low-interest disaster loans, and National Flood Insurance Program claims have been approved and/or paid to Texans.

The Texas Division of Emergency Management (TDEM) and the Federal Emergency Management Agency (FEMA), partners in the state’s recovery, provide the following summary of disaster assistance efforts as of June 30:

        NUMBER           ACTIVITY

  • $75.9 million      NFIP flood claims paid to Texans since May 4
  • $27.7 million      SBA low-interest disaster loans
  • $34.1 million      Housing and Other Needs grants
  • 22,158             Total FEMA registrations
  • 16,544             Home inspections completed
  • 8,380              National Flood Insurance Program claims to date
  • 1,846              Visits to disaster recovery centers
  • 800                Federal workers in Texas assisting with disaster recovery
  • 264                Billboards and outdoor electronic signs displaying FEMA information
  • 58                 Counties designated for Public Assistance
  • 31                 Counties designated for Individual Assistance
  • 25                 Fixed and mobile disaster recovery and mobile registration intake centers

The deadline to register with FEMA is July 28. To register for assistance, Texans can apply online at www.disasterassistance.gov or by calling 800-621-3362 (TTY 800-462-7585 for the speech- and hearing-impaired). Both numbers are available from 7 a.m. to 10 p.m. local time daily, until further notice. More information is available online at www.fema.gov or at www.txdps.state.tx.us/dem.


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY, call 800-462-7585.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.  Follow us on Twitter at https://twitter.com/femaregion6.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling 800-659-2955, emailing disastercustomerservice@sba.gov, or visiting SBA’s website at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call 800-877-8339.

FEMA’s temporary housing assistance and grants for childcare, medical, dental expenses and/or funeral expenses do not require individuals to apply for an SBA loan. However, those who receive SBA loan applications must submit them to SBA to be eligible for assistance that covers personal property, transportation, vehicle repair or replacement, and moving and storage expenses.

Visit www.fema.gov/texas-disaster-mitigation for publications and reference material on rebuilding and repairing safer and stronger.

TNS - While it's not the sort of threat we would immediately associate with the phrase "homeland security," New York's preparedness teams are hatching plans for the potential arrival of an avian flu that has already wiped out more than 40 million chickens in the Midwest.

Several weeks ago, officials announced that this year's State Fair and county celebrations wouldn't include poultry exhibits. In addition, there will be added inspection of poultry from out of state, and additional inspectors deployed to the handful of live bird markets that serve the burgeoning immigrant groups in New York City.

"We do not have an avian flu outbreak at this time, but we are planning for one," said Kelly Nilsson, an emergency preparedness and planning manager for the state Department of Agriculture and Markets.



Over the past few years, there has been skyrocketing growth in the use of social media to get the word out during emergency situations. From fires to disease outbreaks to police shootings, more and more people turn to Twitter, Facebook or other social media sites to get the latest updates on incidents from reliable sources and "friends."

Earlier this year, Emergency Management magazine ran a story titled: Can You Make Disaster Information Go Viral? In that piece, new efforts were highlighted to improve the reliability of emergency communications using social media during man-made and natural disasters.

I applaud these social media efforts, and this emergency management communications trend has been a very good thing up to this point. But dark clouds are on the horizon, and soon you may need to hold off on that retweet.



The US National Institute of Standards and Technology (NIST) has named experts in business continuity planning and the post-disaster recovery of telecommunication networks to serve as NIST Disaster Resilience Fellows.

George B. Huff Jr., founder and director of The Continuity Project, Alexandria, Va., and Steve Poupos, AT&T’s director of technology operations, will assist NIST as it finalizes its Community Resilience Planning Guide for Buildings and Infrastructure. They also will contribute to follow-on efforts to support US counties, cities and towns in implementing the guide.

Issued in April 2015 as a draft for public review, the planning guide lays out a flexible approach that communities can adapt and use to set priorities, allocate resources, and take actions that will help them to withstand and bounce back from the shocks and stresses of extreme weather and other hazards. NIST plans to issue the initial version in September 2015. The guide will be updated periodically.



Thursday, 02 July 2015 00:00

Defining the Future of DR Storage

More and more workloads are being shunted off to the cloud, and it appears that the days of keeping an arsenal of in-house hardware are over. Gone, too, will be expensive offsite mirror disaster recovery (DR) facilities – at least for all but the largest, richest and highest-end businesses. So what does this mean for the storage manager?

The future of DR appears to be moving steadily away from the primary site and recovery site concept. It is gradually being replaced with the ability to migrate or burst workloads seamlessly from site to site. As the cloud gains ground, the ownership of the sites involved is becoming less of an issue. Some may be customer owned, such as a data center, a private cloud, a hosted data center or a colocation facility; others may be completely in the hands of an outside party. The key is that data must be able to shift dynamically on demand between the various sites involved while maintaining always-on availability.
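As a toy illustration of that site-agnostic model (not any vendor's API; the site names and health probe below are invented), the placement logic amounts to something like:

    # Toy sketch of site-agnostic workload placement: run the workload at the
    # first healthy site, regardless of who owns it. All names and the health
    # probe are hypothetical placeholders.
    SITES = ["owned-data-center", "private-cloud", "colo-facility", "public-cloud"]

    def is_healthy(site: str) -> bool:
        # Stand-in for a real probe: ping, replication lag, capacity check, etc.
        return site != "owned-data-center"  # pretend the primary just failed

    def place_workload(name: str) -> str:
        for site in SITES:
            if is_healthy(site):
                print(f"Placing {name!r} at {site}")
                return site
        raise RuntimeError("no healthy site available")

    place_workload("order-processing")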

Sometimes companies will set things up this way purely for DR purposes. But this kind of more loosely coupled arrangement enables them to do other things.



TNS - Connecticut’s emergency dispatchers in the not-too-distant future will be fielding not only 911 calls and texts but perhaps also viewing photos and videos of crimes or accidents.

The state’s changeover to the Next Generation 911 system has started at 10 pilot sites across the state, including locally at the Mashantucket Pequot Public Safety Department and Valley Shore Emergency Communications in Westbrook.

All of the state’s 104 public safety answering points are scheduled for a changeover by next year.



Wednesday, 01 July 2015 00:00

The Data Lake as an Exploration Platform

The data lake is an attractive use case for enterprises seeking to capitalize on Hadoop’s big data processing capabilities. This is because it offers a platform for solving a major problem affecting most organizations: how to collect, store, and assimilate a range of data that exists in multiple, varying, and often incompatible formats strung out across the organization in different sources and file systems.

In the data lake scenario, Hadoop serves as a repository for managing multiple kinds of data: structured, unstructured, and semistructured. But what do you do with all this data once you get it into Hadoop? After all, unless it is used to gain some sort of business value, the data lake will end up becoming just another “data swamp” (sorry, couldn’t resist the metaphor). For this reason, some organizations are using the data lake as the foundation for their enterprise data exploration platform.
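To make the exploration idea concrete, here is a minimal sketch using Spark over Hadoop; the HDFS paths, table names and columns are hypothetical, though the PySpark calls themselves are standard.

    # Minimal data-lake exploration sketch: pull differently shaped datasets
    # out of the same lake and query them together with ad hoc SQL.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("lake-exploration").getOrCreate()

    orders = spark.read.parquet("hdfs:///lake/orders")    # structured
    events = spark.read.json("hdfs:///lake/clickstream")  # semistructured

    orders.createOrReplaceTempView("orders")
    events.createOrReplaceTempView("events")

    # An exploratory question asked directly of the lake, with no upfront
    # warehouse schema design.
    spark.sql("""
        SELECT o.region, COUNT(e.session_id) AS sessions
        FROM orders o JOIN events e ON o.customer_id = e.customer_id
        GROUP BY o.region
    """).show()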



The web-based system used for federal background investigations for employees and contractors has been suspended after “a vulnerability” was detected, the Office of Personnel Management (OPM) announced Monday.

OPM has been the subject of intense congressional probing following the cyber attack on the personnel records of at least 4.2 million current and former federal employees. The decision to suspend the agency’s “e-QIP” system, however, is not directly related to that hack, or to a previously announced breach of a security clearance database.

“The actions OPM has taken are not the direct result of malicious activity on this network, and there is no evidence that the vulnerability in question has been exploited,” an OPM statement said. “Rather, OPM is taking this step proactively, as a result of its comprehensive security assessment, to ensure the ongoing security of its network.”



Traffic video cameras were installed to keep the roads moving by letting transportation departments see trouble spots, dispatch assistance and arrange detours as quickly as possible. But this wealth of real-time video intelligence has proven to be an exceptional resource for emergency operations centers (EOCs) across the United States.

“Live traffic video substantially boosts our situational awareness,” said Michael Walter, public information officer with the Houston, Texas, Office of Emergency Management. “It makes a real difference to how we do our jobs.”



BI is about to take a big step forward, and a major driver of its new capabilities will be self-service data integration, according to Jamil Rashdi, a senior infrastructure development manager.

Rashdi, a veteran IT leader and cloud infrastructure architect, takes a look at this year’s business intelligence self-service trends. Of course, BI is by its nature self-serve, but as he points out, that’s primarily been limited to simpler data discovery functions such as search, dashboards and visualization tools.

New advancements are pushing well beyond these self-serve features, he writes. Advancements in both BI and analytics solutions “are significantly broadening the scope of self-service BI” to include data preparation and manipulation tools — including ETL and data wrangling, or lightweight tools for transforming, integrating and cleansing data.
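The kind of lightweight wrangling step being described looks, in spirit, like the sketch below; the file names and columns are invented purely for illustration.

    # Illustrative self-service ETL step: extract, cleanse, integrate, publish.
    # File names and columns are hypothetical.
    import pandas as pd

    sales = pd.read_csv("sales_export.csv")    # extract from one source
    regions = pd.read_json("regions.json")     # ...and another with a different shape

    sales = sales.drop_duplicates()                            # cleanse duplicates
    sales["region"] = sales["region"].str.strip().str.lower()  # normalise a messy key
    sales["amount"] = sales["amount"].fillna(0)                # fill gaps

    merged = sales.merge(regions, on="region", how="left")     # integrate
    summary = merged.groupby("region_name")["amount"].sum().reset_index()

    summary.to_csv("dashboard_feed.csv", index=False)          # hand off to the BI layer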



AUSTIN, Texas – Texans will have the opportunity to assist with the state’s disaster recovery from the severe storms, tornadoes, and flooding that occurred from May 4 to June 19. Dozens of qualified Texans will be offered temporary jobs as local hires of the Federal Emergency Management Agency (FEMA) in its Austin, Denton, and Houston offices.

FEMA has partnered in this venture with the Texas Workforce Commission. Those interested may go to http://www.workintexas.com and create an account. Once logged in, click on “Search All Jobs” and type “FEMA” into the search bar.

Currently, there are six job categories posted:

  • Administrative/Clerical
  • Customer service
  • Logistics
  • Report writing
  • Switchboard/Help desk
  • Technical/Architecture/Engineering

FEMA positions with detailed job descriptions will remain posted through July 24 or until the jobs are filled.

Candidates must be 18 years of age or older and must be U.S. citizens. Qualified applications will be forwarded to FEMA staff, who will select candidates for interviews. Selected candidates should have a valid government identification card, such as a driver’s license or military ID. Candidates will be required to complete a background investigation, which includes fingerprinting, and to provide additional ID, such as a Social Security card, birth certificate or passport. The hiring process may take up to 15 days from the date of application.

FEMA is committed to employing a highly qualified workforce that reflects the diversity of our nation. All applicants will receive consideration without regard to race, color, national origin, sex, age, political affiliation, non-disqualifying physical handicap, sexual orientation, and any other non-merit factor. The federal government is an Equal Opportunity Employer.

More positions may be posted on the TWC webpage as the disaster recovery continues.

All are encouraged to visit https://www.fema.gov/disaster/4223 for news and information about this disaster.


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY, call 800-462-7585.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.  Follow us on Twitter at https://twitter.com/femaregion6.

FEMA’s temporary housing assistance and grants for childcare, medical, dental expenses and/or funeral expenses do not require individuals to apply for an SBA loan. However, those who receive SBA loan applications must submit them to SBA to be eligible for assistance that covers personal property, transportation, vehicle repair or replacement, and moving and storage expenses.

Visit www.fema.gov/texas-disaster-mitigation for publications and reference material on rebuilding and repairing safer and stronger.

What does it take to get PC or server backups to work properly and bring computers back to operational status? Correctly stored data files are a critical component for most organisations. However, on their own they won’t let you get back to business. You’ll also need the applications that generated those data files, along with the associated configuration and profile information: user and account-specific settings and any purpose-built software modules that link your system to others in your enterprise. The smart solution is to back up all of this information within the same process.
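In its simplest form, that "same process" can just mean archiving the data, configuration and profile locations together in one pass; a minimal sketch follows, with all paths as hypothetical placeholders.

    # Minimal sketch: one backup pass that captures the data files plus the
    # configuration, profile and integration pieces needed to restore service.
    # All paths are hypothetical placeholders.
    import tarfile
    from datetime import datetime

    PATHS = [
        "/srv/app/data",         # the data files themselves
        "/srv/app/config",       # configuration and profile information
        "/srv/app/modules",      # purpose-built integration modules
        "/home/users/profiles",  # user and account-specific settings
    ]

    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    with tarfile.open(f"/backups/full-{stamp}.tar.gz", "w:gz") as archive:
        for path in PATHS:
            archive.add(path)  # a production job would log and skip missing paths
    print(f"Wrote /backups/full-{stamp}.tar.gz")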



Recovery is the least understood (and least studied) part of the emergency management cycle with little systematic information about tracking progress geographically and over an extended time. Unfortunately, once the disaster field offices close in local communities, recovery activity wanes. For hard-hit communities, recovery is a long-term process of rebuilding lives, livelihoods and the sense of place that once characterized the community. Recovery takes months to years in some places and decades for other communities.

Hurricanes Katrina and Sandy afforded an opportunity to conduct a natural experiment to compare recovery from two different storms and their effects on two different locales: coastal New Jersey in the case of Sandy and coastal Mississippi for Katrina. While the storms were different in magnitudes and timing, each resulted in significant storm surge impacts affecting a large section of the coastline. For New Jersey, storm surge flooding occurred from Upper New York Bay south to Delaware Bay, ranging between eight feet at Sandy Hook to four feet in Downe Township. The entire Mississippi coastline was affected with storm surges ranging from 28 feet nearest to Katrina’s track close to the border with Louisiana and Bay St. Louis to 17 feet farther to the east in Pascagoula.



Information overload. Big data. Social media. Mobile computing. Bring-your-own-device policies. Cloud computing. New technologies. Records and information management continues to struggle with fundamental and, to a degree, existential challenges. The challenges to records and information management created by today’s technology are unprecedented and ever changing. Executives responsible for ethics and compliance must now address growing complexities in the management of records and information within their organizations. They must identify and implement new tools and techniques to match the challenges of today and the future while creating a culture of compliance in the records and information management sphere that aligns with the needs of 21st century business.

The Definition of a Record Is Changing: Records Are Created and Stored Differently

The vast majority of today’s business is fueled by, and conducted using, technology. Business records are almost exclusively becoming electronic and are generated by a wide variety of ever-changing devices, systems and applications. Records managers who have historically employed retention schedules to detail appropriate retention periods and records disposition actions are faced with adjusting their thinking to accommodate new and different types of records. The volume of data and the proliferation of that data across many platforms, repositories and devices makes capturing, preserving, managing and eventually disposing of records exceedingly difficult.



(TNS) — Philadelphia’s security preparations for Pope Francis’ 48-hour visit have been going on for more than a year. For Ignazio Marino, mayor of Rome, papal security is an everyday issue.

“It’s pretty tough because the pope is a terrific person, he attracts millions of people, so traffic and security is a huge, huge issue — particularly in these days and time with possibility of terroristic attacks, we are always concerned,” Marino said Thursday outside his office in Rome.

The final day of the Philadelphia delegation’s trip to Rome focused largely on getting input from Roman and Vatican City authorities on security and infrastructure for large-scale events featuring the pope. A separate news conference discussed the programming for the World Meeting of Families.



Monday, 29 June 2015 00:00

Preparing for the Unexpected

Stuff happens. We may not like it, we may even consider it unfair, but it is a fact of life. In the business environment, the question is: Are management and the Board prepared to respond?

Two years ago, I had the opportunity to talk with the Chairman of the Board for a major institution. He observed he had talked with some of his peers about recurring situations across America that had caused a reputation hit. There was a train of thought in this discussion that there had to be a connection between an organization’s risk assessment and its crisis management. In other words, should the risk assessment process inform the organization’s crisis response team?

It’s a fair question. And it’s important. Even the proudest organizations and brands are not immune to being called out by the unexpected.



Sure, the average consumer is worried about storing their data in the cloud or sharing it through cloud-based file sharing, but how can managed service providers (MSPs) respond to an enterprise when even their own IT professionals are worried about the state of security in the public cloud?

In 2011, Symantec and the National Cyber Security Alliance released a study revealing that cyber attacks cost small and medium-sized businesses an average of $188,242. Perhaps even more alarming, research conducted by Gartner shows that nearly 90 percent of companies victimized by a major data loss went out of business within six months of the attack.

One-third of the 1,000 IT professionals responding to a Bitglass survey said they experienced more security breaches with the public cloud than with their internal information technology function.



The Online Trust Alliance (OTA) recently released its 2015 Online Trust Audit & Honor Roll. For the report, OTA analyzed approximately 1,000 websites in three categories: consumer protection, privacy and security. According to a release, the seventh annual audit now includes websites of the top 50 leading Internet of Things device makers, wearable technologies and connected home products.

It’s tough to make the honor roll; that’s what makes it special. But then, this is the type of honor roll you want companies to make, especially if it is a company you do business with (or if it is your website being evaluated). Unfortunately, nearly half of all the websites failed. More alarming still, the new IoT category had an even more dismal showing, with a 76 percent failure rate.

In an ITProPortal article, Craig Spiezle, executive director and president of OTA, stated:



TNS - Miss Piggy is flying again.

But even as the lumbering P-3 Orion aircraft takes part in its first mission since getting two new engines in a life-extending overhaul, the National Oceanic and Atmospheric Administration is looking for the next generation of hurricane hunting aircraft.

Miss Piggy and NOAA’s other Orion, named Kermit, are stationed at MacDill Air Force Base. Each plane was put into service during the mid-70s and has flown more than 10,000 hours into more than 80 hurricanes. They are long, grueling missions, often subjecting the crew to zero gravity as the aircraft lurch up and down in buffeting winds. With the pounding they’ve taken, the planes need the $42 million refurbishing to stay on the job during the June through November hurricane season and beyond.

But even with new engines, new wings and upgraded avionics and scientific instrumentation, they won’t fly forever. More like 15 years.



Hershey Entertainment and Resorts, the company that owns Hershey Park, is investigating a possible data breach.

And as a result, Hershey Park tops this week's list of IT security news makers, followed by Damballa, Malwarebytes and The Hartford.

What can managed service providers (MSPs) and their customers learn from these IT security news makers? Check out this week's list of IT security stories to watch to find out:



The lifecycle of any given technological innovation follows a fairly standard path: proposal, development, deployment and then either success or failure based on cost, efficacy, execution or a number of other factors.

With the cloud, however, we seem to be diverging from this pattern, or at the very least the process is being drawn out due to the radical and fundamental way it affects the entire data stack, and indeed the entire business model.

The private cloud in particular seems to be caught in a no-man’s land of doubt/certainty, confusion/clarity, and ongoing debate between those who support it to the nines and those who chalk it up to so much wishful thinking. On any given day, a web search of the terms “private cloud” can produce the following results:



While managed service providers (MSPs) are certainly well-versed in the areas of cloud-based file sharing and data storage, it pays to be just as familiar with some of the areas of interest of your clients. As MSPs see more healthcare companies migrating their services to the cloud – whether due to a relaxation of restrictions or a decision to evolve – the need for familiarity in this potentially lucrative market is as important as ever.

When the Health Insurance Portability and Accountability Act (HIPAA) was enacted in 1996, data security and privacy on the internet were not exactly the big concerns of the day. Then again, the MSP business model we know and love today didn’t even exist.

Fast forward about 20 years – and through a couple of generations of computing platforms – and HIPAA compliance has become a hot topic as health care organizations, at long last, begin to crawl out from under mountains of paper and into the digital world.



Security experts have a lot of concerns and added responsibilities as connected devices, large and small, burrow their way ever deeper into people’s lives. Nowhere is the increasing need for oversight greater than in health care.

This week, the Workgroup for Electronic Data Interchange (WEDI) released a primer on how a health care organization should protect itself in cyberspace. In its story on the primer, Health IT Security carries a statement from WEDI President and CEO Devin Jopp illustrating the acceleration of health care compromises. From 2010 to 2014, 37 million health care records were compromised in breaches. That sounds like a lot, until it is considered that there were 99 million compromises in just the first quarter of this year. The primer has sections on the lifecycle of cyberattacks and defense, the anatomy of an attack, and ways of “building a culture of prevention.”

Those attacks were aimed at gathering patients’ financial and related data. Another health care vulnerability – and one that is in many ways even more frightening – is attacking connected health care devices in order to hurt people. For some reason, there are people in this world who find it okay to interfere with a heart patient’s pacemaker.



Sites in northern and central California and Montana selected to showcase climate resilience approach


The Department of the Interior (DOI), Department of Agriculture (USDA), Environmental Protection Agency (EPA), National Oceanic and Atmospheric Administration (NOAA), and the U.S. Army Corps of Engineers (USACE) today recognized three new collaborative landscape partnerships across the country where Federal agencies will focus efforts with partners to conserve and restore important lands and waters and make them more resilient to a changing climate. These include the California Headwaters, California’s North-Central Coast and Russian River Watershed, and Crown of the Continent.

Building on existing collaborations, these Resilient Lands and Waters partnerships – located in California and Montana/British Columbia – will help build the resilience of valuable natural resources and the people, businesses and communities that depend on them in regions vulnerable to climate change and related challenges. They will also showcase the benefits of landscape-scale management approaches and help enhance the carbon storage capacity of these natural areas.

The selected lands and waters face a wide range of climate impacts and other ecological stressors related to climate change, including drought, wildfire, sea level rise, species migration and invasive species. At each location, Federal agencies will work closely with state, tribal, and local partners to prepare for and prevent these and other threats, and ensure that long-term conservation efforts take climate change into account.

The Russian River meanders through Mendocino and Sonoma counties in Northern California and meets the Pacific Ocean at Jenner, California. (Credit: NOAA)

These new Resilient Lands and Waters sites follow President Obama’s announcement of the first set of Resilient Landscape partnerships (PDF, 209K) (southwest Florida, Hawaii, Washington and the Great Lakes region) at the 2015 Earth Day event in the Everglades.

Efforts in all Resilient Lands and Waters regions are relying on an approach that addresses the needs of the entire landscape. Over the next 18 months, Federal, state, local, and tribal partners will work together in these landscapes to develop more explicit strategies and maps in their programs of work. Developing these strategies will benefit wildfire management, mitigation investments, restoration efforts, water and air quality, carbon storage, and the communities that depend upon natural systems for their own resilience. By tracking successes and sharing lessons learned, the initiative will encourage the development of similar resilience efforts in other areas across the country.

For example, in the California Headwaters, an area that contributes greatly to the state’s water supply, the partnership will build upon and unify existing collaborative efforts to identify areas for restoration that will help improve water quality and quantity, promote healthy forests, and reduce wildfire risk. In California’s North-Central Coast and Russian River Watershed, partners will explore methods to improve flood risk reduction and water supply reliability, restore habitats, and inform coastal and ocean resource management efforts. In Montana, extending into British Columbia, the Crown of the Continent partnership will focus on identifying critical areas for building habitat connectivity and ecosystem resilience to help ensure the long-term health and integrity of this landscape.

"From the Redwoods to the Rockies to the Great Lakes and the Everglades, climate change threatens many of our treasured landscapes, which impacts our natural and cultural heritage, public health and economic activity," said Secretary of the Interior Sally Jewell. “The key to making these areas more resilient is collaboration through sound science and partnerships that take a landscape-level approach to preparing for and adapting to climate change.

“As several years of historic drought continue to plague the West Coast, there is an enormous opportunity and responsibility across federal, state and private lands to protect and improve the landscapes that generate our most critical water supplies,” said Secretary of Agriculture Tom Vilsack. “Healthy forest and meadows play a key role in ensuring water quality, yield and reliability throughout the year. The partnerships announced today will help us add resiliency to natural resource systems to cope with changing climate patterns.”

“Landscape-scale conservation can help protect communities from climate impacts like floods, drought, and fire by keeping watersheds healthy and making natural resources more resilient,” said EPA Administrator Gina McCarthy. “EPA is proud to take part in the Resilient Lands and Waters Initiative.”

“Around the nation, our natural resources and the communities that depend on them are becoming more vulnerable to natural disasters and long-term environmental change," said Kathryn Sullivan, Ph.D., NOAA Administrator. “The lands and waters initiative will provide actionable information that resource managers and decision makers need to build more resilient landscapes, communities and economies."

"The Army Corps of Engineers is bringing our best scientific minds together to participate in this effort. We are working to ensure that critical watersheds are resilient to changing climate,” said Jo-Ellen Darcy, Assistant Secretary of the Army for Civil Works. “The Army Corps’ participation in this effort along with our local, state and federal partners demonstrates our commitment to implement President Obama's Climate Action Plan in all of our missions."

The Resilient Lands and Waters initiative is a key part of the Administration’s Climate and Natural Resources Priority Agenda (PDF, 8.9MB), a first-of-its-kind, comprehensive commitment across the Federal Government to support the resilience of America’s vital natural resources. It also directly addresses Goal 1 of the National Fish, Wildlife and Plants Climate Adaptation Strategy: to conserve habitat that supports healthy fish, wildlife, and plant populations and ecosystem functions in a changing climate.

When President Obama launched his Climate Action Plan (PDF, 319K) in 2013, he directed Federal agencies to identify and evaluate approaches to improve our natural defenses against extreme weather, protect biodiversity and conserve natural resources in the face of a changing climate. The Climate Action Plan also directs agencies to manage our public lands and natural systems to store more carbon.

Learn more about the three selected landscapes: California Headwaters, California’s North-Central Coast and Russian River Watershed, and Crown of the Continent.

NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.

Public sector becomes top target for malware attacks in the UK

Public sector organisations are the number one target for malware attacks in the UK. This is according to the 2015 Global Threat Intelligence Report (GTIR) – an analysis of over six billion security attacks in 2014 – announced by NTT Com Security, the global information security and risk management company.

While financial services continues to be the number one targeted sector globally, with 18% of all detected attacks, in the UK market nearly 40% of malware attacks were against public sector organisations. This was three times more than the next sector, insurance (13%), and nearly five times more than the media and finance sectors (both 9%).

However, according to the GTIR, attacks against business and professional services organisations saw a sharp rise this year from 9% to 15% globally, while this sector also accounted for 15% of malware observed. Typically, these businesses are seen as being much softer than other targets, but due to their connection and relationship with much larger organisations, are high value targets for attackers. In the UK, this sector represented 6% of all malware attacks.

It is perhaps interesting to note that the Business Continuity Institute's latest Horizon Scan report identified that business continuity professionals in the financial and insurance sector expressed greater concern at the prospect of a cyber attack occurring. 56% of respondents to a global survey who work in the financial and insurance sector expressed extreme concern compared to only 34% and 30% in the professional services sector and public administration sector respectively.

Stuart Reed, Senior Director, Global Product Marketing at NTT Com Security, comments: “The fact that public sector figures are so high compared to other sectors in the UK is due largely to the value of the data that many of these organisations have, which makes them attractive and highly prized targets for malware attacks. While the level of threat may vary from organisation to organisation, they all have information that would be of interest to cyber criminals."

“It’s also interesting that we have seen some campaigns specifically targeting business and professional services. It’s possible that companies in this sector may not have the equivalent security resources and skills in-house that many other larger companies do, yet they potentially yield high value for attackers as both an end target and a gateway target to strategic partners.”

The cybersecurity insurance industry is booming, with demand for this specialty coverage vastly outpacing any other emerging risk line, according to a new survey by London-based broker RKH Specialty. In fact, 70% of the insurance professionals surveyed listed cyber as the top casualty exposure.

The brokers, agents, insurers and risk managers RKH queried after April’s RIMS 2015 conference said their top casualty concerns after cyber are product recall and drones (11% each), with others, including e-cigarettes, autonomous vehicles and telematics, totaling only 8%.



Wednesday, 24 June 2015 00:00

Another Cloud Boom Coming Our Way?

As a managed service provider (MSP), you must know that cloud adoption is in full swing, right? Well, what if we were to tell you that another cloud computing boom is still to come? Whether you believe it or not, research suggests that a slew of new opportunities could be on the way for MSPs in the world of cloud data storage and cloud-based file sharing.

When a new technology is unleashed on the world, it often makes itself known in waves. First, there is the initial announcement and discussion of the technology. Upon release, there are the early adopters who look to take hold of it. Then, perhaps after some feedback and revisions, most technologies destined for longevity see a great boom in acceptance and adoption.



As drought grips California, floods overpower Texas and Eastern cities grapple with crumbling sewers that pump contaminated runoff into waterways, state and local governments are revisiting how they get, use and manage water. 

One method is to harness the rain. Some governments are doing this through massive systems that treat and pump stormwater back to residents, while others are looking to the installation of rain collection systems for homes and businesses. A few cities are introducing green infrastructure designed to put water back into the ground rather than letting it flow down the street.

Sally Brown, an associate professor at the University of Washington, said the last time governments spent significant amounts of money on water issues was after the Clean Water Act in the 1970s, when they had to change how they treated water and wastewater. Today, environmental factors coupled with water availability are forcing state and local officials to create new policies and invest financially to ensure future access to water.



Continuous monitoring on its own is great for the detection and remediation of security events that may lead to breaches. But when it comes to measuring and comparing the effectiveness of our security programs, monitoring alone falls short in many ways. Most significantly, it does not allow us to answer the question of whether or not we are more or less secure than we were yesterday, last week or last year.

This is a question that we all have grappled with in the security community, and more recently, in the board room. No matter how many new tools you install, settings you adjust, or events you remediate, there are few ways to objectively determine your security posture and that of your vendors and third parties. How do you know if the changes and decisions you have made have positively impacted your security posture if there is no way to measure your effectiveness over time?
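One illustrative answer, and only that (the inputs and weights below are invented, not an industry standard), is to reduce monitoring output to a fixed, weighted score so that different days become comparable:

    # Illustrative posture metric: fixed inputs and weights make the score
    # comparable over time. All numbers are invented for the sketch.
    WEIGHTS = {"unpatched_hosts": 3.0, "open_criticals": 5.0, "failed_logins": 0.5}

    def exposure_score(snapshot: dict) -> float:
        """Lower is better; comparable day-to-day because the formula is fixed."""
        return sum(WEIGHTS[k] * snapshot.get(k, 0) for k in WEIGHTS)

    last_week = {"unpatched_hosts": 40, "open_criticals": 6, "failed_logins": 200}
    today     = {"unpatched_hosts": 25, "open_criticals": 9, "failed_logins": 120}

    delta = exposure_score(today) - exposure_score(last_week)
    print(f"Change in exposure: {delta:+.1f}")  # negative means posture improved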



An unusual combination of big and small tech companies is working on ways to accelerate the development of cloud computing technologies.

On Tuesday, an organization called Docker announced that its commercial software, used to create and maintain other software applications easily for millions of computers and mobile phones, would become generally available.

The commercial product follows an initial open source release of Docker, and it includes among other things a way that companies can securely store and share their software. In an unusually broad partnership, the product would be available not just from Docker, but from Amazon’s cloud computing business, AWS; IBM; and Microsoft.



Wednesday, 24 June 2015 00:00

Shape Your Risk Culture

Today, institutions have become sophisticated in establishing an enterprise risk management infrastructure that includes risk management departments, appetite, framework, policies, limits, models, governance, key risk indicators, reporting and processes. Organizations are set up to manage risks of many kinds: strategic, business, market, credit, counterparty, earnings, capital, liquidity, concentration, legal, operational, model, reputational, funding, and even emerging risks. Effective risk management is not just about the infrastructure; it is also about the people. A major shortcoming that many institutions can improve on is putting boundaries on what is an acceptable “risk culture”.

Poor risk culture has been a cause of many disastrous financial failures, including the LIBOR rate manipulation, the collapse of Bear Stearns, and Madoff’s Ponzi scheme. Risk culture is a critical success factor in risk management and a source of seemingly “random” risk aversion. It has become a buzz term and is on the radar of many institutions, including hedge funds, banks, insurance companies, corporations, and regulators. For example, the Financial Conduct Authority (FCA) is a regulatory body created in April 2013 as one of the successors to the United Kingdom’s Financial Services Authority. It has the power to regulate conduct related to the marketing of financial products and to investigate organizations and individuals.

What is risk culture? See Figure 1. Ultimately it is behavior that is influenced by ethics, values, and beliefs of people in an organization that collectively supports the risk management of an organization. It is then easy to understand how well it supports risk management, which should be driven by five risk culture conditioning elements: leadership, risk knowledge, risk understanding, risk transparency, and reward system.



Here’s the conundrum: There is a shortage of IT professionals who have the skills that employers need, and at the same time, there is an abundance of bright, eager people who dream of obtaining those skills and building a career in IT, but who simply lack the wherewithal to obtain a four-year college degree to realize that dream. The solution to this problem has long seemed destined to elude us. But maybe there is an answer after all.

That’s the conclusion I drew after learning about the Creating IT Futures Foundation (CITFF), the philanthropic arm of CompTIA, the Downers Grove, Ill.-based IT trade association best known for its certification programs. Formerly called the CompTIA Educational Foundation, CITFF is headed by CEO Charles Eaton, who was brought on board in 2010 “to find a more impactful way to engage in our strategy.” That strategy, in Eaton’s words, is to “move the needle on getting people who need an opportunity into IT careers.”



Tuesday, 23 June 2015 00:00

Creating a Risk Intelligent Organization

Many organizations spend time and effort building and developing robust risk mitigation frameworks and strategies to handle business-specific risks. In spite of constant monitoring through dashboards and reports, many companies still face major and unexpected issues. One of the main reasons for shortfalls in risk management is the general attitude towards risk mitigation. Although companies are well-prepared with an infrastructure in place, they often struggle when cultivating a sense of risk awareness, responsibility and intelligence into and across the fabric of an organization, which results in gaps and deficiencies.

Every organization realizes the significance of risk intelligence, but many face issues in the initial stage of the transition. Developing a risk culture is frequently viewed as just a requirement to be fulfilled rather than something that adds value to an enterprise. Without a clear agenda, many companies find it impossible to cultivate risk-taking capabilities in their employee base.

Risk intelligence demands that every individual in an organization take responsibility for managing risks in the day-to-day operations. Senior management should assess the existing risk management strategy and gauge its effectiveness in alleviating risks as well as developing awareness throughout the organizational structure.



With great convenience comes great responsibility...

Once a month I use my blog to highlight some of S&R’s latest and greatest. The cloud is attractive for many reasons -- the possibility of working from home, the vast array of performance and analytical capabilities available, knowing that your backups are safe from that fateful coffee spill, etc. Although the cloud is not a new concept, the security essentials behind it unfortunately remain a mystery to practically all users. What’s worse, the security professionals tasked with protecting corporate data rarely have visibility into all the risk -- it’s simply too easy for users to make critical cloud decisions without process or oversight.   

Underestimating or neglecting the necessary security practices that a cloud requires can lead to hacks, breaches, and horrendous data leaks. We’ve seen our fair share of security embarrassments that range from Hollywood execs to the US government, and S&R pros know that these are far from done.



Tuesday, 23 June 2015 00:00

Three Problems that Prove You Need a CDO

A few signs show that organizations might be retreating from the idea of a chief data officer. Instead, some organizations are adding strategic data functions to the CIO’s job. But is that enough or does the growing demand require a dedicated data executive?

Here are three reasons why I think organizations may want to embrace chief data officers.

First, as I shared in my last piece, most CIOs don’t want the data officer task. Experian surveyed CIOs last November and found that an incredible 92 percent of CIOs “are calling out for a CDO role to release the data pressures they face and enable a corporate wide approach to data management.” Call me crazy, but to me, it’s pretty clear that the people who have thus far handled the job say it needs a separate role.



(TNS) — Private security guards working at Iowa malls, schools and corporations have no required training and no recurring background checks, despite increased threats at these facilities.

Lawmakers and the public are raising questions about licensing requirements for private security companies after an off-duty guard fatally shot a woman June 12 at Coral Ridge Mall in Coralville.

Alexander M. Kozak, 22, of North Liberty, is being held on first-degree murder charges that he targeted mall employee Andrea Farrington, 20, and gunned her down amid hundreds of shoppers.

“Most organizations want to give the appearance of security, but they don’t want the substance,” said Tom M. Conley, president and chief executive officer of the Conley Group, a private security company in Urbandale.



If the FirstNet national first responder network succeeds, it’ll be because federal officials who are planning and deploying the network forged strong partnerships with states and localities. That’s why comments from state CIOs at the NASCIO Midyear Conference in April are troubling.

Although state CIOs generally support the concept of a nationwide interoperable public safety network, they’re clearly frustrated with the lack of details coming from the federal First Responder Network Authority about how the new network will be built and paid for.

“FirstNet is a fantastic idea, but people like me are very skeptical of something where nobody can show me the plan and nobody can show me the cost,” said Alabama CIO Brunson White. “I’ll remain skeptical until somebody does that, and we’ve been asking for a while now.”



Tuesday, 23 June 2015 00:00

Tangents on Resilience

It seems that ‘organisational resilience’ has now officially become a buzz-phrase: impossible to define because there are so many differing perceptions of what it is. BS 65000:2014 says that it is the ‘ability of an organization to anticipate, prepare for, and respond and adapt to incremental change and sudden disruptions in order to survive and prosper’. So I’m going with that for the time being. I want to focus particularly on the last three words: ‘survive and prosper’. I think that there is too much emphasis on the ‘survive’ part when in fact the focus of most organisations is probably to prosper, unless there is an oncoming wave of water, disease or armed terrorists. The fact that there may well be a variable risk of such waves affecting many elements of our societies at some level or another is probably lost on – or at least ignored by – most business organisations. The truth is they have to focus on the bottom line, and scaremongering about the catastrophes that may (not will) befall them will cut no ice.



When it comes to singling out sectors that are in the forefront of disaster recovery, finance is often quoted as an example. With so much depending on the ability to recover systems and data rapidly after any incident, major banks were among the first to implement hot failover data centres for instance – as well as being among the only organisations that could afford them. At the other end of the scale, there are those that are particularly ill-equipped to deal with IT disasters. The education sector has been identified as one example, but another group falling short of the levels required could surprise you.



Why should data be erased?

Companies, no matter whether they are part of a large corporation or a smaller business, definitely need to use a professional data erasure method if they want to ensure that their data doesn’t fall into the wrong hands, as the Brighton and Sussex University Hospitals NHS Trust learned in 2008.

Generally speaking, due to legal and internal regulations, data should be erased at the end of its so-called lifecycle. There are a number of existing national rules, regulations and laws that already require companies to comply with data protection measures, and thus also with data erasure. The provisions concerning data erasure will also become significantly tougher with the introduction of the European data protection regulation. The central element of this regulation, which is expected to come into force early next year, is certainly Article 17, which gives force of law to the “right to deletion” or the “right to be forgotten”.

To cut a long story short: Article 17 requires that all saved personal information that is no longer needed for its original purpose, for which no consent was given for its processing, or whose agreed retention period has expired must be securely erased. This requirement applies to all data collected, structured, transmitted and distributed concerning EU citizens, irrespective of the country or the storage system where the data is saved. For all companies, regardless of their size, this means that they should prepare intensively as of now and adapt all their processes to the new rules.
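For a sense of what “securely erased” means in software terms, here is a minimal overwrite-before-delete sketch for a single file on an ordinary filesystem. Certified erasure products do far more (handling SSD wear levelling, verification, audit reporting), so treat this purely as illustration.

    # Illustration only: overwrite a file's contents before deleting it.
    # Compliance-grade erasure requires certified tooling and verification.
    import os

    def overwrite_and_delete(path: str, passes: int = 3) -> None:
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))  # replace contents with random bytes
                f.flush()
                os.fsync(f.fileno())       # force this pass to disk before the next
        os.remove(path)

    overwrite_and_delete("customer_record.dat")  # hypothetical file name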



The conventional wisdom these days seems to be that MSPs should ditch break-fix altogether. We’ve heard this advice from MSP partners like Guy Baroan and Vince Tinnirello. According to both of them, the full managed services model makes sense because it’s simple to invoice, easy to budget for, and both clients and the provider have service agreements that keep everything straightforward. Not to mention the fact that it’s a much more proactive method, where maintenance occurs constantly, not just when something goes wrong.

Little did we know, however, that there are plenty of MSPs that are happy to work as hybrids, and they have some good reasons for doing so:



Low river flow and nutrient loading reason for smaller predicted size


Scientists are expecting that this year’s Chesapeake Bay hypoxic low-oxygen zone, also called the “dead zone,” will be approximately 1.37 cubic miles – about the volume of 2.3 million Olympic-size swimming pools. While still large, this is 10 percent lower than the long-term average as measured since 1950.
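The swimming-pool comparison checks out, assuming the standard 50 m by 25 m by 2 m Olympic pool; a quick arithmetic check:

    # Sanity check of the conversion quoted in the text.
    MILE_M = 1609.344
    CUBIC_MILE_M3 = MILE_M ** 3  # about 4.17 billion cubic metres
    POOL_M3 = 50 * 25 * 2        # 2,500 cubic metres per Olympic-size pool

    pools = 1.37 * CUBIC_MILE_M3 / POOL_M3
    print(f"{pools / 1e6:.1f} million pools")  # ~2.3 million, as the article says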

The anoxic portion of the zone, which contains no oxygen at all, is predicted to be 0.27 cubic miles in early summer, growing to 0.28 cubic miles by late summer. Low river flow and low nutrient loading from the Susquehanna River this spring account for the smaller predicted size.

This chart shows, in the upper portion, the location of hypoxic (yellow, orange and red shading) bottom waters of Chesapeake Bay during the early July 2014 survey. The bottom portion shows a longitudinal “slice” of the Chesapeake Bay main stem showing the depth of the hypoxic waters through the central area of the Bay. These data are collected by Maryland and Virginia as part of the comprehensive Chesapeake Bay Monitoring Program. (Credit: Maryland Department of Natural Resources)

This is the ninth year for the Bay outlook which, because of the shallow nature of large areas of the estuary, focuses on water volume or cubic miles, instead of square mileage as used in the Gulf of Mexico dead zone forecast announced last week. The history of hypoxia in the Chesapeake Bay since 1985 can be found at EcoCheck, a website from the University of Maryland Center for Environmental Science.

The Bay’s hypoxic and anoxic zones are caused by excessive nutrient pollution, primarily from human activities such as agriculture and wastewater. The nutrients stimulate large algal blooms that deplete oxygen from the water as they decay. The low oxygen levels are insufficient to support most marine life and habitats in near-bottom waters and threaten the Bay’s production of crabs, oysters and other important fisheries.

The Chesapeake Bay Program coordinates a multi-year effort to restore the water and habitat quality to enhance its productivity. The forecast and oxygen measurements taken during summer monitoring cruises are used to test and improve our understanding of how nutrients, hydrology, and other factors affect the size of the hypoxic zone. They are key to developing effective hypoxia reduction strategies.

The predicted “dead zone” size is based on models that forecast three features of the zone to give a comprehensive view of expected conditions: midsummer volume of the low-oxygen hypoxic zone, early-summer oxygen-free anoxic zone, and late-summer oxygen-free anoxic zone. The models were developed by NOAA-sponsored researchers at the University of Maryland Center for Environmental Science and the University of Michigan. They rely on nutrient loading estimates from the U.S. Geological Survey.
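
The actual UMCES and Michigan models are far more sophisticated, but the underlying idea – relate hypoxic volume to spring nutrient load, then predict from this year’s load – can be sketched in a few lines. The history values below are invented for illustration; only the 58 million-pound January–May load comes from the article:

    import numpy as np

    # Invented history: January-May nitrogen load (million lbs) vs.
    # midsummer hypoxic volume (cubic miles). Not the USGS/NOAA record.
    load = np.array([82.0, 58.0, 95.0, 70.0, 110.0, 64.0])
    volume = np.array([1.8, 1.3, 2.0, 1.5, 2.3, 1.4])

    slope, intercept = np.polyfit(load, volume, 1)

    # The article cites a 58 million-lb load for January-May 2015.
    print(f"predicted hypoxic volume: {slope * 58 + intercept:.2f} cubic miles")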

“These ecological forecasts are good examples of the critical environmental intelligence products and tools that NOAA is providing to stakeholders and interagency management bodies such as the Chesapeake Bay Program,” said Kathryn D. Sullivan, Ph.D., under secretary of commerce for oceans and atmosphere and NOAA administrator. “With this information, we can work collectively on ways to reduce pollution and protect our marine environments for future generations.”

The hypoxia forecast is based on the relationship between nutrient loading and oxygen. Aspects of weather, including wind speed, wind direction, precipitation and temperature also impact the size of dead zones. For example, in 2014, sustained winds from Hurricane Arthur mixed Chesapeake Bay waters, delivering oxygen to the bottom and dramatically reducing the size of the hypoxic zone to 0.58 cubic miles.

“Tracking how nutrient levels are changing in streams, rivers, and groundwater and how the estuary is responding to these changes is critical information for evaluating overall progress in improving the health of the Bay,” said William Werkheiser, USGS associate director for water. “Local, state and regional partners rely on this tracking data to inform their adaptive management strategies in Bay watersheds.”

The USGS provides the nutrient runoff and river stream data that are used in the forecast models. USGS estimates that 58 million pounds of nitrogen were transported to the Chesapeake Bay from January to May 2015, which is 29 percent below average conditions. The Chesapeake data are funded through a cooperative agreement between USGS and the Maryland Department of Natural Resources. USGS operates more than 400 real-time stream gages and collects water quality data at numerous long-term stations throughout the Chesapeake Bay basin to track how nutrient loads are changing over time.

“Forecasting how a major coastal ecosystem, the Chesapeake Bay, responds to decreasing nutrient pollution is a challenge due to year-to-year variations and natural lags,” said Dr. Donald Boesch, president of the University of Maryland Center for Environmental Science, “But we are heading in the right direction.”

Later this year researchers will measure oxygen levels in the Chesapeake Bay. The final measurement in the Chesapeake will come in October following surveys by the Chesapeake Bay Program’s partners from the Maryland Department of Natural Resources (DNR) and the Virginia Department of Environmental Quality. Bimonthly monitoring cruise updates on Maryland Bay oxygen levels can be found on DNR’s Eyes on the Bay website at www.EyesontheBay.net

USGS provides science for a changing world. Visit USGS.gov, and follow us on Twitter @USGS and our other social media channels. Subscribe to our news releases via e-mail, RSS or Twitter.

NOAA’s mission is to understand and predict changes in the Earth’s environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.

Tuesday, 23 June 2015 00:00

6 Steps to Reduce Business Travel Risks

Serious medical emergencies, political unrest and devastating natural disasters – these are just a few of the dangers business travelers face as they travel the world on behalf of their companies.  Even seemingly smaller travel issues, such as a lost prescription, a stolen passport or even a cancelled flight can wreak havoc on one’s travel plans at the worst possible moment. All of these risks are abundant in business travel, and as employees circle the globe, it’s your responsibility to protect them from these risks with proactive crisis management.

A key component of any well-rounded Travel Risk Management (TRM) strategy, proactive crisis management can help organizations meet their Duty of Care objectives and prevent issues from becoming even more serious.  Companies must be ready to deal with crises as opposed to simply reacting to them – and this readiness can only come through experience. That experience is best gained by incorporating crisis response exercises into your company’s TRM strategy. Here’s how:



The term Internet of Things may have sprung to fame only recently, but its origin dates back several years: it was apparently first used in 1999 at a research facility at the Massachusetts Institute of Technology (MIT).

But what exactly is the Internet of Things? Conceptually, the IoT is simple: it describes a reality where things are capable of exchanging information. To fully understand the IoT’s potential, imagine that a growing number of objects – not PCs, smartphones and tablets, but common everyday objects – become capable of communicating with one another, exchanging data collected from sensors, accelerometers and GPS systems to provide us with services and information based on these readings.

This type of communication among objects is generally referred to by the acronym M2M, representing the Machine to Machine communication that allows wireless and wired devices to converse.
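
In practice, much M2M traffic rides on lightweight publish/subscribe protocols such as MQTT. As a rough sketch – the broker host and topic below are hypothetical – a sensor publishing one reading in Python might look like this:

    # pip install paho-mqtt
    import json
    import time
    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"        # hypothetical broker host
    TOPIC = "plant1/boiler/temperature"  # hypothetical topic

    client = mqtt.Client()
    client.connect(BROKER, 1883)

    # One sensor reading; a real device would publish on a schedule.
    reading = {"sensor_id": "t-42", "celsius": 71.3, "ts": time.time()}
    client.publish(TOPIC, json.dumps(reading), qos=1)
    client.disconnect()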

But what are the possible applications for the Internet of Things?



(TNS) — As the nation mobilizes to determine what motivated the gunman in the Charleston, S.C., massacre, the shootings highlight what a number of experts said Thursday is a chilling reality: The greatest danger from terrorism may be from our own ranks and within our own borders.

“Since 9/11, our country has been fixated on the threat of jihadi terrorism,” said Richard Cohen, president of the Southern Poverty Law Center. “But the horrific tragedy at the Emanuel AME reminds us that the threat of homegrown domestic terrorism is very real.”

Dylann Storm Roof, 21, was arrested Thursday in Shelby, N.C., ending a massive manhunt that began after the killing of nine people attending a Bible study at the Emanuel African Methodist Episcopal Church on Wednesday night.

Now comes the investigation into how and why it happened.



(TNS) — The Buckskin fire looks a little different on Matthew Krunglevich's computer screen, an adornment of yellow dots smeared across part of a southwestern Oregon map, with dashes of orange along the blaze's eastern and southern edges.

At a glance, this view from NASA's MODIS — Moderate Resolution Imaging Spectroradiometer — satellite doesn't look like much. But it actually tells Krunglevich, of the Oregon Department of Forestry, a lot. The splashes of color southwest of Cave Junction show where the fire is burning and where it's burning hottest: yellow equals warm, orange equals warmer. Predictably, the orange is shown where the fire is burning outward, where the flames are newer.

"It gives us an idea of — but a really rough approach — to how big a fire is, where there's heat activity on a fire on a broad scale," Krunglevich says.

It's one tool in a growing high-tech toolbox that can help crews prioritize resources as needed. Because in an area such as southwestern Oregon that's so consistently primed for summer wildfires, the more information, the better, fire officials say.
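
To make that concrete: active-fire products like MODIS detections are commonly distributed as tabular exports that crews can filter and rank. Here is a hedged sketch in Python, with the file name and column names assumed for illustration rather than taken from any official format:

    import csv

    # Hypothetical FIRMS-style CSV export of MODIS active-fire detections;
    # columns (latitude, longitude, brightness, confidence) are assumed.
    def hottest_detections(path, min_confidence=80):
        with open(path, newline="") as f:
            rows = [r for r in csv.DictReader(f)
                    if int(r["confidence"]) >= min_confidence]
        # Highest brightness temperature first - roughly the "orange" pixels.
        return sorted(rows, key=lambda r: float(r["brightness"]), reverse=True)

    # for det in hottest_detections("modis_detections.csv")[:10]:  # hypothetical file
    #     print(det["latitude"], det["longitude"], det["brightness"])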



It’s a little-known fact that flash-based storage can be too much for most systems. Designed back in the days when slow hard disk drives (HDDs) carried out the reading and writing of data, today’s channels for information transport often can’t cut it when loaded up with flash. The result is bottlenecked applications: the combined might of multicore processors, abundant RAM and flash packs far more processing punch than can be relayed by the associated storage protocols and bus architectures.

Enter Non-Volatile Memory Express (NVMe). It's a PCIe-based approach to resolving those bottlenecks. And it’s about to capture the imagination of the storage world.

“I’ve been at this for more than 20 years and NVMe is one of the most revolutionary, most anticipated and most exciting developments I’ve seen,” said Doug Rollins, Senior Technical Marketing Engineer, Enterprise Solid State Drives for the Storage Business Unit at Micron Technology.
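
A crude way to feel out whether storage, rather than compute, is the bottleneck is simply to time sequential reads; proper device benchmarking uses purpose-built tools such as fio. A rough Python sketch, with a hypothetical test file:

    import time

    def read_throughput_mb_s(path, block=1 << 20):
        """Crude sequential-read timer; use fio for real measurements,
        and beware the OS page cache inflating repeat runs."""
        total = 0
        start = time.perf_counter()
        with open(path, "rb", buffering=0) as f:
            while True:
                chunk = f.read(block)
                if not chunk:
                    break
                total += len(chunk)
        return total / (time.perf_counter() - start) / 1e6

    # print(f"{read_throughput_mb_s('testfile.bin'):.0f} MB/s")  # hypothetical file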



Monday, 22 June 2015 00:00

Look Forward with Your Hybrid Cloud

The cloud industry is starting to look a lot like the wine industry: Experts galore are ready to declare what is and is not a quality cloud, and hybrids and cross-breeds incorporate various components to produce a wide variety of options for consumers.

The debate over the efficacy of the various cloud approaches now on the market will likely continue for some time to come, as neither public nor private infrastructure appears to be going anywhere soon. But remember that all data infrastructure solutions are a means to an end, so it is important to keep your ultimate goals in mind when pursuing any one strategy.

This can be a tricky thing to do, says IBM’s John Easton, because most IT professionals tend to view cloud solutions from their own perspectives as managers of traditional data center infrastructure. In fact, he says he can guess a person’s particular job based on their rationales for migrating to the cloud, such as improved systems management or greater scalability. But this ultimately diminishes the return on any cloud investment because it focuses on how the cloud can solve current problems rather than how it can open new opportunities for the future. This is why most hybrid cloud deployments have proven to be of middling success at best – they are geared largely toward cost-saving and infrastructure efficiency rather than more forward-looking data portability and development agility opportunities.



If you work in an office, you might think that everyone’s favorite pastime is complaining about how inefficient IT is at solving technical issues in a timely manner. But surprise, surprise: according to a new study from Landesk, a majority of employees actually reported being very satisfied with their organization’s IT customer service.

Landesk announced the results of its 2015 Global State of IT Support Study, which surveyed 2,500 employees in the United States and Europe to determine how satisfied they are with their organization’s IT customer service. According to the survey, 80 percent of respondents said they would give their IT departments a grade of either “A” or “B” in terms of customer satisfaction, which seriously bucks the stereotype of inefficient IT workers.



The future of enterprise IT will be defined by the need to securely deliver a more consumer-like application experience that can be updated in a matter of minutes.

Speaking at the launch this week of a VMware Business Mobility initiative, VMware CEO Pat Gelsinger says this brave new world of enterprise IT will require not only fundamental changes to the way enterprise applications are built and delivered, but also the way IT infrastructure is provisioned and managed.

The VMware Business Mobility initiative unifies the delivery of identity management as a service provided by the AirWatch unit of VMware and the software-defined networking (SDN) technologies that VMware gained when it acquired Nicira in 2012.



The Federal Communications Commission (FCC) has proposed a $100 million fine against AT&T and is asking the carrier to explain why it apparently throttled subscribers whose services were sold as unlimited. The FCC says that the limitations kicked in after consumption of 5GB of data in a month.

This, Computerworld reports, has been happening since 2011. The company has 30 days to respond to the allegations, and the FCC will then make an official determination. Even if the $100 million hit stands, it may have been worth it for AT&T:

The FCC said it's aware that the fine, while large, is a fraction of the revenue AT&T made from offering its unlimited plan to consumers. It is also considering other redress, including requiring AT&T to individually inform customers that its disclosures were in violation of rules and to allow them out of applicable contracts with no penalty.



OKLAHOMA CITY – The recent severe storms, floods, straight-line winds and tornadoes occurring May 5 through June 4 damaged public and private roads and bridges.

The Federal Emergency Management Agency (FEMA) and the U.S. Small Business Administration (SBA) may be able to help when repairing privately owned access roads and bridges.

FEMA’s Individual Assistance program could cover the expenses of repairing privately owned access roads if the following criteria are met:

  • It is the applicant’s primary residence;
  • It is the only access to the property;
  • It is impossible to access the home with the damaged infrastructure; or
  • The safety of the occupants could be adversely affected.

SBA is FEMA’s federal partner in disaster recovery, and may also help. Private property owners, established homeowner associations and properties governed by covenant may apply for low-interest disaster loans directly through SBA. These funds can be used to repair or replace private roads and bridges. Homeowner associations that own private access roads may apply directly to the SBA.

Homeowners who jointly own access roads and bridges may also be eligible for repair grants or SBA loans under certain circumstances. In some cases, sharing the cost of repairs with funds obtained through a combination of FEMA, SBA loans and private funds may be another option. The affected homeowners should each register with FEMA individually.

Survivors can apply for state and federal assistance online at www.DisasterAssistance.gov or by calling 800-621-FEMA (3362) or (TTY) 800-462-7585. Those who use 711-Relay or Video Relay Services can call 800-621-3362 to register.

Each request for private road or bridge repair assistance is evaluated on a case-by-case basis.

Repair awards through Individual Assistance funding are for disaster-related damages and will not include improvements to the road’s pre-disaster condition, unless improvements are required by current local or state building codes or ordinances.

Register online at www.DisasterAssistance.gov, by phone at toll-free 800-621-3362 or (TTY) 1-800-462-7585, or via smartphone or tablet at m.fema.gov.

For more information on Oklahoma disaster recovery, visit http://www.fema.gov/disaster/4222 or the OEM site at www.oem.ok.gov


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

The Oklahoma Department of Emergency Management (OEM) prepares for, responds to, recovers from and mitigates against emergencies and disasters. The department delivers services to Oklahoma cities, towns and counties through a network of more than 350 local emergency managers.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow us on Twitter at www.twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners, and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955, emailing disastercustomerservice@sba.gov, or visiting SBA’s website at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call (800) 877-8339.

Let’s face it: you don’t know what’s happening until it’s happened, and it takes time to find out what has occurred. Was it major? Was it minor? Did IT get impacted? Was revenue (or other financial value) lost? Does the public know? Or worse, does the media know?

I’m all for plans and planning, but you just won’t know everything up front when some sort of operational interruption occurs, be it weather related, power related, or some other event that causes a major disruption for the organization. Confusion is going to be present, and it’s going to be present until you’ve got a handle on the situation. The window from the disaster or operational interruption to the moment you have a handle on what’s going on – and what needs to be done by way of a response – is where your plans and processes kick in.



(TNS) — Allstate said Wednesday that it is one step closer to using drones to assess damages after catastrophes.

The insurer, based in the Chicago suburb of Northbrook, said that a new ruling by the Federal Aviation Administration will allow the consortium it works with to research the benefits of flying drones to assess property claims.

The year-old Property Drone Consortium is led by EagleView Technology, whose services include aerial imagery and data analysis.

Allstate said that in a disaster, access to neighborhoods might be restricted by debris or local authorities and that drones could help claims professionals serve customers in spite of those restrictions.



This week, I ventured up to West Glocester, Rhode Island, home of the coolest place any insurance broker, insurance client, or risk management journalist can visit: the FM Global Research Campus.

Because FM Global is intently focused on prevention of loss as the chief means of minimizing claims, the company maintains a 1,600-acre campus dedicated to property loss prevention scientific research. The biggest center of its kind, the research center features some of the most advanced technology to conduct research on fire, natural hazards, electrical hazards, and hydraulics. Here, experts can recreate clients’ warehouse conditions to test whether existing suppression practices would be sufficient in the event of a massive fire, for example. Fabricated hail or seven-foot 2x4s are shot from a cannon-like instrument at plywood, windows, or roofing to test whether these materials can withstand debris that goes flying in hurricane-strength winds. Hydraulic, mechanical and environmental tests are conducted on components of fire protection systems, like sprinklers, to ensure effectiveness overall and under the specific conditions clients face. Indeed, these hydraulic tests have led the company’s scientists and engineers to design and patent their own, more effective sprinklers, the rights to which are released so anyone can manufacture these improved safety measures.



Small to midsize businesses (SMBs) may be finally realizing the extent to which cybercrimes can affect them, but do they realize just how intently hackers are targeting them? A report by Check Point Software says that SMBs have become “the cybercriminal’s ‘sweet spot’,” due to a lower level of IT security combined with a decent amount of valuable information that can be monetized.

The Check Point report says that approximately 63 percent of SMBs are worried about malware, and 38 percent are worried about possible phishing scams, yet 31 percent aren’t doing anything to protect against such threats. The report also cites statistics from the CyberSecurity Alliance showing that 36 percent of cyberattacks target small businesses, and that of those businesses attacked, 60 percent will be forced to close within six months – likely because the average cost of a data breach at an SMB is $36,000.



Increasing complexity means that business continuity professionals need to rethink some of the paradigms of the practice, says Geary Sikich.


Business continuity professionals need to rethink some of the paradigms of the practice. All too often we tend to fall back on what are considered the tried and true ways of doing things. This essentially leaves us in two camps: the first evolved out of information technology and disaster recovery; the second evolved out of emergency preparedness (tactical planning), financial risk management (operational) and strategic planning (strategic). Both camps leave something to be desired. The first, having renamed disaster recovery as business continuity, still retains a strong focus on systems continuity rather than true business continuity; but this is not a bad thing. The second has begun a forced merger of sorts, combining the varied practices at three levels (tactical, operational and strategic) and renaming the result enterprise risk management (ERM). This second group still retains strong perspectives on risk management; that is why I have divided it into the three sub-groups (tactical, operational and strategic).



There has been a lot of talk about the degree of enterprise readiness of the cloud. Some argue that it doesn’t have the performance capabilities of data center-based applications. Maybe the question we should be asking is whether the service is enterprise-ready. Many existing cloud services have a consumer heritage—fine for individual users and perhaps a very small business. And therein lies the problem. An enterprise-ready service should be designed from the ground up to operate in the cloud and provide enterprise-level performance, features and security.



OKLAHOMA CITY – Not all of the damage from flooding takes place while your home or business is under water. Long after the flood waters have receded, mold and mildew can present serious and ongoing health issues.

Oklahomans impacted by the severe storms and flooding that took place between May 5 and June 4 should take steps to protect the health of their family or employees by treating or discarding mold- and mildew-infected items as soon as possible.

Health experts urge those who find mold to act fast. Cleaning mold quickly and properly is essential for a healthy home or work place, especially for people who suffer from allergies or asthma.

Mold and mildew can start growing within 24 hours after a flood, and can lurk throughout a home or business, from the attic and basement to crawl spaces and store rooms. The best defense, according to the experts, is a top-to-bottom cleanup: clean, dry or discard moldy items.

Many materials are prone to developing mold if they remain damp or wet for too long. Start a post-flood cleanup by sorting all items exposed to floodwaters:

  • Wood and upholstered furniture and other porous materials can trap mold and may need to be discarded.
  • Carpeting presents a problem because drying it does not remove mold spores. Carpets with mold and mildew should be removed.
  • Glass, plastic and metal objects and other items made of hardened or nonporous materials can often be cleaned, disinfected and reused.

All flood-dampened surfaces should be cleaned, disinfected and dried as soon as possible. Follow these tips to ensure a safe and effective cleanup:

  • Open windows for ventilation and wear rubber gloves and eye protection when cleaning. Consider using a mask (rated N-95 or higher) if heavy concentrations of mold are present.
  • Use a non-ammonia soap or detergent to clean all areas and washable items that came in contact with floodwaters.
  • Mix 1.5 cups of household bleach in one gallon of water and thoroughly rinse and disinfect the area. Never mix bleach with ammonia, as the fumes are toxic.
  • Cleaned areas can take several days to dry thoroughly. The use of heat, fans and dehumidifiers can speed up the drying process.
  • Check for odors. Mold often hides in the walls or behind wall coverings. Find all mold sources and clean them properly.
  • Remove and discard all materials that can’t be cleaned, such as wallboard, fiberglass and other fibrous goods. Clean the wall studs where wallboard has been removed and allow the area to dry thoroughly before replacing the wallboard.

For other tips about post-flooding cleanup, visit www.fema.gov, www.oem.ok.gov, www.epa.gov, or www.cdc.gov.



WASHINGTON – Today, the U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) signed Memoranda of Understanding (MOU) with seven technology organizations to provide state, local, tribal and territorial governments with technology resources during a disaster to expedite response and recovery. Cisco Systems, Google, Humanity Road, Information Technology Disaster Resource Center, Intel, Joint Communications Task Force and Microsoft have joined FEMA’s new Tech Corps program – a nationwide network of skilled, trained technology volunteers who can address critical technology gaps during a disaster.

During major disasters or emergencies, trained technology volunteers can complement ongoing response and recovery efforts, including installing temporary networks; enabling internet connectivity, and telephone, and radio communications; and providing other support, such as geographic information system (GIS) capacity, coding, and data analytics.  In 2002, Senator Ron Wyden (D-OR) proposed a mechanism of leveraging private sector technology capabilities to innovate the way federal, state, local and tribal governments respond to disasters. Tech Corps is based on this model, which was developed beginning in 2013 to assemble the initial group of companies for the voluntary program.

“When disaster strikes, we all have a role to play in helping survivors recover, and that includes the private sector,” said FEMA Administrator Craig Fugate. “Tech Corps volunteers will bring a vital skill set to our emergency management team to help the survivors we serve recover more quickly after disasters. We’re grateful to Senator Wyden and the private sector for contributing to this effort and we look forward to partnering with them to make communities stronger and safer.” 

“Tech Corps harnesses a deep well of technical expertise and private-sector manpower to make sure every resource is available immediately when disaster strikes,” said Senator Wyden. “Information technology is often critical to saving lives, and this program ensures that red tape won’t stand in the way of volunteer experts who can stand up temporary cell networks and Wi-Fi solutions that are so important in disaster areas. I’m hopeful today’s partners are the first of many to sign up to work hand-in-hand with emergency responders to help craft more resilient and effective responses to future disasters.”

Already, Tech Corps partners have been active on their own during national and global technology disaster response efforts, including providing support during Hurricane Sandy and the earthquakes in Nepal and Haiti. This initiative signifies a greater level of coordination between volunteers and the emergency management community through FEMA. 

To learn more about Tech Corps, please visit: fema.gov/tech-corps.


FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.

The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

Data center infrastructure is supposed to be the rock upon which higher order applications and services are built. So what are we to think when someone comes along and says we can do all kinds of wonderful things by severing the application’s ties to this foundation?

In a way, what is happening to data architectures mirrors what we can see in the data center. The floor is concrete, but the racks are made of metal. The servers themselves are not welded to the rack but can slide in and out for easy replacement. At each delineation, the goal is to produce maximum flexibility while still rooting the system in the strength of its supporting infrastructure.

The latest iterations of virtual infrastructure are taking this idea to an entirely new level, however, because they purport to remove infrastructure concerns entirely from the business model. This can be seen in solutions like Nutanix’s Xtreme Computing Platform (XCP), which aims for full application independence from what the company is now calling “invisible infrastructure.” With the app now enjoying full mobility, native virtualization and even consumer-level search capabilities, it subsumes virtually all of the provisioning, orchestration and other functions it needs to support business processes at scale. In this way, organizations can finally rid themselves of costly infrastructure concerns and focus on what matters to them: making money through app-level innovation.



All countries need to be prepared for the unanticipated spread of serious infectious diseases, says WHO.

After a meeting on 17 June, the United Nations World Health Organization (WHO) declared that the Middle East Respiratory Syndrome (MERS) outbreak that spread from the Middle East to the Republic of Korea does not constitute a ‘public health emergency of international concern’ but is nonetheless a ‘wake-up call’ for all countries to be prepared for the unanticipated spread of serious infectious diseases.

The Emergency Committee, convened by the WHO Director-General under the International Health Regulations to consider the Middle East respiratory syndrome coronavirus (MERS-CoV) outbreak in the Republic of Korea, also recommended against the application of any travel or trade restrictions and considers screening at points of entry to be unnecessary at this time.

WHO did recommend “raising awareness about MERS and its symptoms among those travelling to and from affected areas” as “good public health practice.”

The Committee noted that there are still many gaps in knowledge regarding the transmission of this virus between people, including the potential role of environmental contamination, poor ventilation and other factors, and indicated that continued research in these areas was critical.

Meanwhile, in a JAMA Viewpoint article, Georgetown public health law professor Lawrence O. Gostin and infectious disease physician Daniel Lucey state that MERS-CoV requires constant vigilance and could spread to other countries including the United States. However, MERS can be brought under control with effective public health strategies.

In the Viewpoint, published online on June 17th, the authors outline strategies for managing the outbreak, focusing on transparency, trust and infection control in health care settings. The duo also outline weaknesses in the World Health Organization's framework designed to govern patents on certain viruses, which is likely to impact critical future research.

Key points Gostin and Lucey make about MERS-CoV infection control include:

  • Training health workers and conducting diagnostic testing of certain travelers;
  • Limiting quarantine use to well-documented exposures, using the least restrictive means possible;
  • Avoiding travel restrictions, which would be ineffective given the lack of evidence of MERS-CoV community transmission; and
  • Closing schools also should be avoided given the lack of community transmission of MERS-CoV.

In addition, Gostin and Lucey say the WHO's Pandemic Influenza Preparedness Framework fails to cover non-influenza pathogens like MERS-CoV noting, "...there remain substantial holes in international rules needed to facilitate critical research."

A recent Information Management article argues that chief data officers (CDOs) are making “gradual gains” this year. The piece backs this up with a list of recent appointments, as well as a stat from Experian that says roughly 60 percent of chief information officers hope to hire CDOs this year.

With all due respect, I disagree. In fact, there are several signs that CDOs as a concept may falter, and their functions may be absorbed by other existing roles.

First, the list actually includes only one CDO appointment. That was at Clinical Ink, a company that develops health care patient engagement technology. Obviously, that’s a step forward, but if I may be frank, I’m a bit surprised a company like that didn’t already have a chief data officer, since their work is patient engagement.



While every organization is at risk of employee theft – with the typical company losing 5% of revenue to fraud each year – smaller organizations with fewer than 500 employees were the most targeted, accounting for 72% of cases.

According to The 2015 Hiscox Embezzlement Watchlist: A Snapshot of Employee Theft in the U.S., of the smaller companies targeted, four out of five had fewer than 100 employees and more than half had fewer than 25 employees. Smaller organizations also suffered the largest losses, according to the survey. Financial services companies were most at risk (21%), followed by non-profits, labor unions and municipalities.

Hiscox noted steps organizations can take to minimize employee theft, adding that this is most important for small- to medium-sized businesses, which can be hit harder by theft. In fact, the survey found that 58% of affected organizations recovered none of their losses.



Improving server utilization is like walking on a frozen pond during a spring thaw. The more comfortable the air temperature gets, the greater the danger of falling through.

With utilization, the higher you go, the less overhead you have when the inevitable data spikes arrive. Sure, you could have cloud-based IaaS at the ready, but now you are simply leasing underutilized resources rather than buying them.

This is why the tendency to view recent reports of underutilized servers with consternation is wrong-headed. The latest is Anthesis Group’s finding that 30 percent of servers worldwide are “comatose,” says eWeek’s Jeffrey Burt, representing about $30 billion in “wasted” IT infrastructure. This may cause non-IT people to wring their hands, but anyone with even a modicum of experience in data infrastructure will know that a 70 percent utilization rate is actually quite good – in fact, it is historically high given that in the days before virtualization, a typical server could sit idle maybe 80 percent of the time.
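
For illustration, here is roughly how an audit might flag “comatose” hosts and compute fleet utilization from monitoring samples; the hostnames, figures and threshold below are all invented:

    # Invented 30-day average-CPU samples; a real audit would also check
    # network and storage I/O before declaring a server comatose.
    cpu_avg_30d = {
        "web-01": 41.0, "web-02": 36.5, "batch-07": 0.3,
        "db-01": 62.2, "legacy-09": 0.1,
    }

    COMATOSE_PCT = 1.0  # below this average CPU, treat the host as idle

    comatose = [h for h, c in cpu_avg_30d.items() if c < COMATOSE_PCT]
    fleet_avg = sum(cpu_avg_30d.values()) / len(cpu_avg_30d)
    print(f"comatose: {comatose}; fleet average CPU {fleet_avg:.1f}%")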



Thanks to a new report from Trustwave, it is easy to see why cybercrime has become so prevalent. It pays very well.

The 2015 Trustwave Global Security Report (free download with registration) looked at all sorts of issues on the cybersecurity front, from spam to passwords to where compromises are actually happening. The report presents a fascinating and all-encompassing look at the state of cybersecurity today; unfortunately, the picture isn’t pretty.

The bit of information that appears to have caught the most attention is how lucrative cybercrime is for hackers. The report stated that hackers receive an estimated 1,425 percent return on investment for exploit kit and ransomware schemes, or nearly $6,000 for a single ransomware campaign. That’s a stunning amount of money. TechWeek Europe explained why cybercrime is so lucrative:
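
As a rough sanity check on that percentage, the arithmetic works out as follows, with assumed figures consistent with the cited ROI and an outlay near the $6,000 the report mentions:

    # Assumed figures: roughly $5,900 spent on a campaign returning
    # about $84,100 in profit. Illustrative only, not the report's data.
    cost = 5_900
    profit = 84_100

    roi_pct = profit / cost * 100
    print(f"ROI: {roi_pct:.0f}%")  # -> 1425% with these illustrative numbers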



Tuesday, 16 June 2015 00:00

Mastering IT Risk Assessment

The foundation of your organization’s defense against cyber theft is a mastery of IT risk assessment. It is an essential part of any information security program, and in fact, is mandated by regulatory frameworks such as SSAE 16, SOC 2, PCI DSS, ISO 27001, HIPAA and FISMA.

Compliance with those frameworks means that your organization not only has to complete an IT risk assessment but it must also assess and address the risks by implementing security controls.

In the event of a breach, an effective IT risk management plan – one that details exactly what your IT department is going to do and how they’re going to do it – combined with implementation of the critical security controls has the potential to save your organization millions of dollars in direct response costs, legal fees, regulatory fines, and costs associated with rebuilding a damaged corporate reputation.
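
A first pass at such an assessment is often a simple likelihood-times-impact register. The sketch below is illustrative only, with invented entries and thresholds; frameworks such as ISO 27001 demand far more rigor:

    # A minimal likelihood x impact risk register - illustrative entries.
    risks = [
        # (description, likelihood 1-5, impact 1-5)
        ("Unpatched public web server", 4, 5),
        ("Lost unencrypted laptop", 3, 4),
        ("Shared admin passwords", 4, 4),
        ("Backup restore failure", 2, 5),
    ]

    for desc, likelihood, impact in sorted(
            risks, key=lambda r: r[1] * r[2], reverse=True):
        score = likelihood * impact
        tier = "HIGH" if score >= 15 else "MEDIUM" if score >= 8 else "LOW"
        print(f"{score:>2}  {tier:<6}  {desc}")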



Tuesday, 16 June 2015 00:00

Selecting the Right Kind of Cloud

Saying that the cloud is becoming more specialized is like saying the days are getting longer now that summer is here: It is such a natural phenomenon that it barely needs to be stated.

But I’m going to state it anyway, because this facet of cloud computing alone will probably do more to capture critical enterprise loads and break down the psychological barriers to cloud adoption than any mere technological development.

Across a number of fronts, organizations are gaining the ability to deploy not just the cloud, but a highly specialized data ecosystem tailored to specific functions, industry verticals and even individuals. In a way, this follows that same pattern of software development in general, except that now the application software is backed by a cloud component that caters to its every whim.



WASHINGTON – Today, the Federal Emergency Management Agency (FEMA) launched a National Flood Insurance Program (NFIP) call center pilot program to serve and support policyholders with the servicing of their claims.

Flood insurance claims can be complicated, and policyholders may have questions in the days and weeks following a disaster.

The NFIP call center is reachable at 1-800-621-3362, and will operate from 8 a.m. to 6 p.m. (CDT) Monday through Friday. Specialists will be available to assist policyholders with the servicing of their claims, provide general information regarding their policies, and/or offer technical assistance to aid in recovery.

For those who prefer to put their concerns in writing, a “Request for Support” form is posted at www.fema.gov/national-flood-insurance-program, which can be filled out and emailed to FEMA-NFIP-Support@fema.dhs.gov or faxed to 540-504-2360.

Call center staff will be able to answer questions, such as “How do I file a flood insurance claim? What type of documentation is needed? Can I still obtain disaster assistance even though I have a flood policy?” as well as more complicated insurance questions about the extent of coverage, policy ratings, and more.  The call center will also be open to disaster survivors who have general questions about the NFIP.

“Flood insurance provides residents with the ability to protect themselves financially against the most common disaster we see in America,” said Roy Wright, Deputy Associate Administrator for the Federal Insurance and Mitigation Administration. “We’re providing this new resource to ensure that the people we serve have another way to get information they may need to understand how flood insurance works and how to navigate the claims process.  This hotline also provides us with a direct connection to policyholders themselves should they have concerns to report about how their claims are being handled, enabling us to take prompt action to ensure that they receive every dollar they are owed under their policies.”

Flood insurance plays a critical role in assisting survivors on their road to recovery. Like other types of insurance, it does not cover all losses, but it is the first line of defense against a flood. While the policy payouts won’t make the insured whole, our top priority is to ensure policyholders get what they are due under their coverage. This initiative is part of FEMA’s ongoing commitment to effective, long-term improvements to the NFIP.



The Office of Personnel Management has some explaining to do.

Cyberthieves have pilfered the personal information of millions of federal employees – notably including the private data of those with security clearances – and the story seems to grow worse by the day.

While investigating a cyberattack on the information of about 4 million feds, officials discovered “a separate intrusion into OPM systems that may have compromised information related to the background investigations of current, former, and prospective Federal government employees, and other individuals for whom a federal background investigation was conducted,” Samuel Schumach, OPM’s press secretary, said Sunday.



For many people, IT security is about keeping the bad guys out of the data centre by using firewalls to control external access and anti-malware programs to prevent hackers from infecting servers. That is only half the picture, however. A growing threat also comes from people already within the security perimeter of the data centre: they have legitimate access to servers, but are misusing that access, either unintentionally or deliberately, to take data out. The challenge in resolving this kind of insider threat is that it is typically not a malware attack, but a personal ‘manual’ attack.



Using Twitter and Google search trend data in the wake of the very limited US Ebola outbreak of October 2014, a team of researchers from Arizona State University, Purdue University and Oregon State University have found that news media is extraordinarily effective in creating public panic.

Because only five people were ultimately infected yet Ebola dominated the US media in the weeks after the first imported case, the researchers set out to determine mass media's impact on people's behavior on social media.

"Social media data have been suggested as a way to track the spread of a disease in a population, but there is a problem that in an emerging outbreak people also use social media to express concern about the situation," explains study team leader Sherry Towers of ASU's Simon A. Levin Mathematical, Computational and Modeling Sciences Center. "It is hard to separate the two effects in a real outbreak situation."

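Separating those two effects starts with comparing the series. As an illustration only – the numbers below are invented and this is not the team’s methodology – a quick correlation check in Python:

    import numpy as np

    # Invented daily series: national news stories about the outbreak vs.
    # concerned tweets. Not the study's data or its actual methodology.
    news_stories = np.array([2, 5, 30, 80, 140, 90, 40, 15])
    concern_tweets = np.array([40, 90, 800, 2600, 4100, 2700, 1100, 300])

    r = np.corrcoef(news_stories, concern_tweets)[0, 1]
    print(f"news/tweet correlation: r = {r:.2f}")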


Often crisis management case studies focus on what went wrong in badly handled crises. In this article Charlie Maclean-Bristol FBCI takes five lessons from an incident that was well managed.

After commenting on so many organizations that get their crisis management wrong, it is refreshing to see an organization which has, in the main, got its response to a serious incident right! Merlin Entertainments’ handling of the response to a recent accident at its Alton Towers theme park has not been quite ‘text book’, but it has been close to it. On June 2nd two cars on the Smiler rollercoaster crashed into each other, resulting in four serious and twelve minor injuries to those on the ride. Subsequently one of the riders had to have part of her leg amputated. Often it takes a poor response and criticism for an organization to ‘put its house in order’ and improve its response. Here, they got it right first time.

So what are the five lessons learned from this incident?



(TNS) — Iowa Agriculture Secretary Bill Northey said Monday the bird flu outbreak ranks as Iowa’s worst animal health emergency and could cost federal and state agencies up to $300 million for the cleanup, disposal and disinfection process, on top of the sizable losses being incurred by producers.

“Animal-health wise, there is nothing that we’ve ever had like it,” said Northey, who held out hope the spread is “winding down,” since Iowa recently has reported fewer confirmed cases of the highly pathogenic flu that has led to the deaths and euthanizing of more than 32.7 million commercial layers and turkeys on 76 farms in 18 Iowa counties. All the infected birds in Iowa have been depopulated and humanely destroyed, he said.

Northey said hotter temperatures and decontamination efforts have slowed the outbreak, although state officials Monday said they were investigating a possible new case. He noted that Minnesota saw a resurgence in cases after a brief lull, and nearly 2,300 federal and state response personnel remained at work Monday in the field assessing Iowa’s situation and looking ahead to what might happen once fall weather returns along with migratory bird activity.



WASHINGTON – Today, the Federal Emergency Management Agency (FEMA) launched a new data visualization tool that enables users to see when and where disaster declarations have occurred across the country. As hurricane season kicks off, the tool helps provide important information about the history of hurricanes and other disasters in their communities and what residents can do to prepare.

The data visualization tool is accessible at fema.gov/data-visualization and allows users to view and interact with a wide array of FEMA data. Through an interactive platform, users can view the history of disaster declarations by hazard type or year and the financial support provided to states, tribes and territories, and access public datasets for further research and analysis. On the site, you can see compelling visual representations of federal grant data as it relates to fire, preparedness, mitigation, individual assistance and public assistance.

“We have a wealth of data that can be of great use to the public,” said FEMA’s Deputy Administrator of Protection and National Preparedness Tim Manning. “By providing this information in a way that is visual and easy to understand, people will be moved to action to prepare their families and communities.”

The data visualization tool builds on FEMA’s commitment to transparency by making it easy to convert historical data – already available via the OpenFEMA initiative – into a readable and interactive map. Users can see the types of disasters that have occurred in their community and FEMA’s support to build and sustain the capabilities needed to prevent, protect, mitigate against, respond to, and recover from those threats and hazards in the future. The tool also provides ways for users to take action to prepare for future disasters by supporting community preparedness planning, providing information on individual preparedness actions people can take, or joining a local Citizen Corps program.

FEMA encourages all individuals to interact with the tool, learn more about the emergency management process, and provide feedback. FEMA will continue to develop additional visualizations based on feedback and the availability of public data.
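
For those who prefer raw data to the interactive map, the underlying records can also be queried programmatically. A hedged sketch in Python, with the endpoint and field names assumed from the OpenFEMA documentation:

    import collections
    import requests

    # Endpoint and field names assumed from the OpenFEMA documentation.
    URL = "https://www.fema.gov/api/open/v1/DisasterDeclarationsSummaries"
    params = {"$filter": "state eq 'OK'", "$top": 1000}

    data = requests.get(URL, params=params).json()
    by_year = collections.Counter(
        d["declarationDate"][:4] for d in data["DisasterDeclarationsSummaries"])
    for year, count in sorted(by_year.items()):
        print(year, count)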



One of the most effective risk management philosophies is to work smarter, not harder, implementing holistic tools such as predictive analytics to ensure risk is minimized. More often than not, companies implement blanket management programs, applying the same strategies to all employees regardless of performance. With this approach, employers waste time and effort focusing on employees who are not at risk, leaving room for at-risk employees to go unnoticed. On an opposing front, many companies use the “squeaky wheel” approach, diverting all of their attention to employees who actively demonstrate troublesome behaviors. While this approach catches a greater number of at-risk employees, it still leaves room for some to go undetected.

Alternatively, a strategic employee-specific management program allows employers to identify at-risk employees regardless of how “squeaky” they are. The theory behind an employee-specific management program is simple – monitor your employees for changes that indicate risky behavior.
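
One simple way to operationalise that idea is to compare each new observation against the employee’s own baseline. A minimal sketch, with an invented metric and threshold:

    import statistics

    # Flag a metric (say, weekly after-hours logins) that drifts far from
    # an employee's own baseline. Metric and threshold are illustrative.
    def is_anomalous(history, latest, z_cut=3.0):
        mean = statistics.fmean(history)
        spread = statistics.pstdev(history) or 1e-9  # guard flat baselines
        return abs(latest - mean) / spread > z_cut

    baseline = [1, 0, 2, 1, 1, 0, 2, 1]  # hypothetical weekly counts
    print(is_anomalous(baseline, 9))      # True: a sharp behavioral change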



As emergency management evolves as a profession and grows in diversity, there’s a blending of personalities, viewpoints and different structures that come to the fore. People will come from different backgrounds, experiences and professions and have different styles and perspectives. They can blend to become a healthy whole, said Nim Kidd, Texas Division of Emergency Management chief, in a keynote address at the 2015 National Homeland Security Conference this week in San Antonio.

Kidd came from the fire service and acknowledged that his experience and style is different from others rising in the emergency management ranks from the military, law enforcement, health care and academia. None of those have the market cornered on the “right way” to do things, and there are advantages and disadvantages to how each communicates and approaches situations.

For instance, law enforcement isn’t known for being the best at communicating information, for good reason and sometimes not so good. The military and fire service bring invaluable experience to the emergency management field, and what health care and academia lack in experience, they make up for in knowledge and information.



I’ve seen the strides made in cloud security over the years, but a couple of new studies show that there is still a long way to go.

The study from Netskope found that sensitive data stored in the cloud has a one in five chance of being exposed. Okay, the flip side to that is a four out of five chance that your sensitive data won’t be exposed, but when you are dealing with health information, Social Security numbers, and other data that could result in identity theft for unsuspecting consumers, that number isn’t good enough – at least not for those who are still wary about migrating to the cloud.

The primary culprit of data loss is cloud storage apps, where 90 percent of all data loss prevention violations occurred. This result was a surprise, Sanjay Beri, Netskope's CEO and founder, told eSecurity Planet:
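
As a toy illustration of the kind of check a data loss prevention tool runs before files sync to a cloud storage app – the patterns below are far cruder than a commercial engine’s – consider:

    import re

    # Toy DLP patterns; commercial engines use far richer detection.
    SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def dlp_violations(text):
        hits = []
        if SSN.search(text):
            hits.append("possible Social Security number")
        if CARD.search(text):
            hits.append("possible payment card number")
        return hits

    print(dlp_violations("patient 123-45-6789, card 4111 1111 1111 1111"))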



OKLAHOMA CITY – Oklahoma residents whose properties were damaged in the recent storms and flooding are warned to be alert for, and urged to report, any potential fraud during recovery and rebuilding efforts, according to the Oklahoma Department of Emergency Management and the Federal Emergency Management Agency.

The aftermath of a disaster can attract opportunists and confidence artists. Homeowners, renters and businesses can follow some simple steps to avoid being swindled.

Be suspicious if a contractor:

  • Demands cash or full payment up front for repair work;
  • Has no physical address or identification;
  • Urges you to borrow to pay for repairs, then steers you to a specific lender or tries to act as an intermediary between you and a lender;
  • Asks you to sign something you have not had time to review; or
  • Wants your personal financial information to start the repair or lending process.

To avoid fraud:

  • Question strangers offering to do repair work and demand to see identification;
  • Do your own research before borrowing money for repairs. Compare quotes, repayment schedules and rates. If they differ significantly, ask why;
  • Never give any personal financial information to an unfamiliar person; and
  • Never sign any document without first reading it fully. Ask for an explanation of any terms or conditions you do not understand.

Disasters also attract people who claim to represent charities but do not. The Federal Trade Commission warns people to be careful and follow some simple rules:

  • Donate to charities you know and trust. Be alert for charities that seem to have sprung up overnight.
  • If you’re solicited for a donation, ask if the caller is a paid fundraiser, whom they work for, and the percentage of your donation that will go to the charity and to the fundraiser. If you don’t get a clear answer — or if you don’t like the answer you get — consider donating to a different organization.
  • Do not give out personal or financial information – including your credit card or bank account number – unless you know the charity is reputable.
  • Never send cash: you can’t be sure the organization will receive your donation.
  • Check out a charity before you donate. Contact the Better Business Bureau’s Wise Giving Alliance at www.give.org.

If you believe you are the victim of a contracting scam, price-gouging or bogus charity solicitations, contact local law enforcement and report it to the Oklahoma Office of the Attorney General. Find a complaint form online at www.ok.gov/oag. The Federal Trade Commission takes complaints at www.ftc.gov/complaint.

Many legitimate people — insurance agents, FEMA Disaster Survivor Assistance personnel, local inspectors and actual contractors — may have to visit your storm-damaged property. Survivors could, however, encounter people posing as inspectors, government officials or contractors in a bid to obtain personal information or collect payment for repair work. Your best strategy to protect yourself against fraud is to ask to see identification in all cases and to safeguard your personal financial information. Please keep in mind that local, state and federal employees do not solicit or accept money for their services to the citizens.

All FEMA employees and contractors will have a laminated photo ID. A FEMA shirt or jacket alone is not proof of identity. FEMA generally will request an applicant's Social Security or bank account numbers only during the initial registration process. However, FEMA inspectors might require verification of identity. FEMA and U.S. Small Business Administration staff never charge applicants for disaster assistance, inspections or help filling out applications. FEMA inspectors verify damages but do not recommend or hire specific contractors to fix homes.


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

The Oklahoma Department of Emergency Management (OEM) prepares for, responds to, recovers from and mitigates against emergencies and disasters. The department delivers service to Oklahoma cities, towns and counties through a network of more than 350 local emergency managers.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow us on Twitter at www.twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners, and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955, emailing disastercustomerservice@sba.gov, or visiting SBA’s website at www.sba.gov/disaster.

(TNS) — When Justin McQuillen died in 1994 after being hit by a pitched baseball, the technology for automated external defibrillators was not as sophisticated as it is today.

Today, the lightweight, portable devices can check a person’s heart rhythm, recognize when a shock is required and advise the rescuer when to administer it.

Some AEDs use voice prompts, lights and even text messaging to tell the user what steps to take. Most range in cost from $1,500 to $2,000, according to the American Heart Association, though less expensive models can be found.

McQuillen, 9, of Honey Brook, Pa., died in May 1994 after being struck in the chest with a baseball in a Twin Valley youth league game. An AED was not immediately available at the field.



Forrester research analyst Michael Gualtieri made a bold prediction at this week’s Hadoop Summit. Gualtieri told attendees that 100 percent of all large enterprises eventually would adopt some form of Hadoop, according to Information Week Editor-at-Large Charles Babcock.

Babcock points out that Hadoop has a way to go, since actual deployment is currently around 26 percent, with only 11 percent planning to invest in the next 12 months.

Still, I think Gualtieri’s prediction is reasonable. Enterprises tend to be more conservative than, say, Internet start-ups, so typically they try to hit the sweet spot between early disruption and arriving too late to the game. In fact, Capgemini’s research found that leading businesses are already using Big Data to disrupt markets and threaten their competitors.

“In our study, a surprising 64% of respondents said that big data is changing traditional business boundaries and enabling non-traditional providers to move into their industry,” the report, released earlier this year, notes. “Companies report a significant level of disruption from new competitors moving into their industry from adjacent industries (27%), and over half (53%) expect to face increased competition from start-ups enabled by data.”



Heat is a form of energy, and energy is a commodity. And commodities, of course, can be sold for a profit.

So it is something of a misnomer to say that data centers are constantly dealing with the problem of waste heat when what is really going on is that they are failing to capitalize on their heat-generating capabilities.

But a few are starting to realize the commercial possibilities of the heat coming off the server racks. Probably the most innovative is the Foundry Project in Cleveland, Ohio, which is pumping heat from an underground data center to a $4.5 million co-located fish farm devoted to raising Mediterranean sea bass. The data center itself will measure about 40,000 square feet and is linked by three 100Gbps fiber networks. Foundry executives say they already have a client lined up but have yet to reveal a name. Meanwhile, the fish farm is expected to produce about 500,000 pounds per year, and waste from the fish will be delivered to a nearby orchard as fertilizer.



Monday, 15 June 2015 00:00

What to Expect from a FEMA Inspection

After you register for assistance, an inspector from the Federal Emergency Management Agency (FEMA) will call you for an appointment to inspect your damaged property.

Q. Why is the inspector there?
A. Verifying disaster damage is part of the process to establish the amount and type of damage you suffered.  The inspectors have construction backgrounds and are fully qualified to do the job.

Q. How do I know the inspector is from FEMA?
A. You should ask to see the inspector's identification.  All FEMA housing inspectors will have a FEMA badge displayed. Also, each disaster survivor is provided a unique FEMA registration number when they register for assistance.  The inspector will know your FEMA registration number.

If you have concerns with the legitimacy of a FEMA housing inspector, you should contact your local law enforcement as they will be able to validate their identification. 

Q. What does the inspector look for?
A. The inspector determines whether the house is livable by checking the structure, including heating, plumbing, electrical, flooring, wallboard, and foundation.

Q. How about personal property?
A. Damage to major appliances - washer, dryer, refrigerator, stove - is assessed. Other serious needs such as clothing lost or damaged in the disaster are surveyed.

Q. Do I need to have any paperwork on hand?
A. Some evidence that the property is your usual residence or evidence that you own the property will be required.  It might be a recent utility bill, mortgage payment record, or rent receipts.

Q. Will I find out the results of the inspection?
A. If you are eligible for assistance, you will receive a check in the mail.  You will be notified by letter if you are not eligible.  You have 60 days to appeal the decision, and the appeal process is outlined in the letter.

Q. What other inspections should I expect?
A. Depending on the types of assistance for which you may be eligible, your losses may be verified by FEMA, the U.S. Small Business Administration (SBA), and your local building inspector's office.

In a world of constantly emerging threats, security is a tough job, but the concepts of best practice have been devised for a reason. The challenge for organizations is to strike a balance between unworkable change control practices and an anarchic environment that gives attackers ample opportunities to hide.

However strong the perimeter security, in the vast majority of organizations there are far too many opportunities for hackers or malware attacks to slide in undetected.

Forensic-level monitoring of system changes provides a means whereby subtle breach activity can be exposed, but just having the means to detect changes is only part of the solution.

In the same way that seemingly clear pond water is revealed to be teeming with life when placed under a microscope, the volume of noise created daily by critical upgrades, system patches and required updates is overwhelming once it becomes visible. And when it comes to breach detection, it is virtually impossible to distinguish between the expected file and registry changes these activities prompt and nefarious activity.
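
To make the idea concrete, here is a minimal sketch of the kind of forensic change detection the article describes: hash every file, compare against a baseline, and suppress changes already attributed to planned activity. The layout and the notion of an "expected changes" allowlist are illustrative assumptions, not any particular vendor's product.

    import hashlib
    import os

    def file_hash(path, chunk_size=65536):
        """Return the SHA-256 digest of one file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def snapshot(root):
        """Map every file under 'root' to its current hash (the baseline)."""
        hashes = {}
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                hashes[path] = file_hash(path)
        return hashes

    def diff(baseline, current, expected=frozenset()):
        """Report added, removed and modified files, suppressing paths
        already attributed to planned activity such as a patch window."""
        added = current.keys() - baseline.keys() - expected
        removed = baseline.keys() - current.keys() - expected
        modified = {p for p in baseline.keys() & current.keys()
                    if baseline[p] != current[p] and p not in expected}
        return added, removed, modified

The detection itself is trivial; as the article argues, the hard part is curating the "expected" set so that routine patch noise does not drown out the one change that matters.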



Business continuity and disaster recovery are two common reasons why organizations consider cloud migration, but sometimes the decision to migrate is put off due to fears that the process will be difficult. In this article, Lilac Schoenbeck offers some tips to help smooth the migration path.

Are you looking to utilise the business continuity and disaster recovery advantages that the cloud offers? Are you running out of data centre space? Do you need to reduce the time spent maintaining physical hardware? The reasons to transition to cloud continue to stack up, and stories about cloud benefits and successes are only becoming more prominent. Still, many organizations and IT teams continue to be wary of making the move because of the challenges associated with migrating their applications.

The good news? Cloud migration does not have to be as daunting as it once was. Others have helped pave the way, establishing best practices and systematic approaches to ease the process. Here are six tips to help make your migration a smooth one:



Remember the U.S. Office of Personnel Management (OPM) data breach that was reported earlier this month? OPM officials last week said the incident now appears to have affected millions of federal employees and contractors.

And as a result, the OPM once again tops this week's list of IT security news makers to watch, followed by Microsoft (MSFT), the "Punkey" malware and Blue Shield of California.

What can managed service providers (MSPs) and their customers learn from these IT security news makers? Check out this week's list of IT security stories to watch to find out:



As important as it is for managed service providers (MSPs) to protect their clients from external threats, it can be just as important to protect organizations from themselves. By managing security and access in cloud data storage and cloud-based file sharing, MSPs can help to prevent employee misuse within an organization.

Over the past couple of years, the news all around the world has been littered with narratives of major security breaches by outside hackers. As organizations (and MSPs) rush to patch up any openings in their security against external invaders, they had better be just as cognizant of the potential threats that can compromise their data from inside their own walls.



(TNS) — Energy firms in Wyoming are being urged to take precautions against potential cybersecurity attacks.

Michael Bobbitt, a supervisory special agent with the FBI, told attendees at the Wyoming Infrastructure Authority's energy conference on Friday that companies should be aware of the growing number of threats on both the national and international levels.

Bobbitt is the team leader for the FBI's criminal and national cybersecurity squad in the agency's Denver office.

He said any business that uses computers faces the risk of being hacked or exposed to a cyberattack.



(TNS) — The newly appointed director of the National Flood Insurance Program said the organization needs to focus more on the welfare of disaster victims and rethink gaps in coverage that bedeviled homeowners after superstorm Sandy.

Roy Wright, who takes over the federal program next week, said in an interview Tuesday that flood insurance policies have become laden with complex loopholes that nickel-and-dime homeowners and undermine their ability to rebuild after floods.

"The center of gravity needs to continue to shift in favor of the policyholder," Wright said.



More than 20% of consumers use passwords that are more than 10 years old, and 47% use passwords that have not been changed in five years, according to a recent report by account security company TeleSign. What’s more, respondents had an average of 24 online accounts, but only six unique passwords to protect them. A total of 73% of accounts use duplicate passwords.

Consumers recognize their own vulnerability. Four out of five consumers worry about online security, with 45% saying they are extremely or very concerned about their accounts being hacked – something 40% of respondents had experienced in the past year.



Thursday, 11 June 2015 00:00

Is It Time for the Data Center OS?

It doesn’t take a lot of imagination to see the digital ecosystem as a series of concentric circles. On the processor level, there are a number of cores all linked by internal logic. The PC contains multiple chips and related devices controlled by an operating system. The data center ties multiple PCs, servers, storage devices and the like into a working environment, and now the cloud is connecting multiple data centers across distributed architectures.

At each circle, then, there is a collection of parts overseen by a software management stack, and as circles are added to the perimeter, the need for tighter integration within the inner architectures increases in order to better serve the entire data ecosystem.

It is for this reason that many data architects are warming to the idea of the data center operating system. With the data center now just a piece of a larger computing environment, it makes no more sense to manage pieces like servers, storage and networking on an individual basis than to have multiple OSes on the PC: one for the processors, another for the disk drive, and so on. As tech investor Sudip Chakrabarti noted on InfoWorld recently, the advent of virtualization, microservices and scale-out infrastructure in general is fueling the need to manage the data center as a computer, so that the distributed architecture can assume the role of the data center.



Thursday, 11 June 2015 00:00

Look Who’s Doing Risk Management

If you’re wondering how much risk management should become part of your organisation’s rulebook, you may already be looking around to see who else is doing it. Insurers and bankers are obvious examples, because their businesses are centred on risk calculation, whether in terms of setting insurance premiums or defining credit interest rates. Many insurers are also ready to discuss risk management with potential customers in a variety of different industry sectors. These can range from agriculture and aviation to sports and transportation. However, there are other perhaps unexpected examples that show how far the concept of risk management has spread in general.



Thursday, 11 June 2015 00:00

Rising Concerns Over Next Global Pandemic

As South Korean authorities step up efforts to stop the outbreak of Middle East Respiratory Syndrome, or MERS, from spreading further, the president of the World Bank Jim Yong Kim has warned that the next global pandemic could be far deadlier than any experienced in recent years.

Speaking in Frankfurt earlier this week, Dr Kim said Ebola revealed the shortcomings of international and national systems to prevent, detect and respond to infectious disease outbreaks.

The next pandemic could move much more rapidly than Ebola, Dr Kim noted:

“The Spanish Flu of 1918 killed an estimated 25 million people in 25 weeks. Bill Gates asked researchers to model the effect of a Spanish Flu-like illness on the modern world, and they predicted a similar disease would kill 33 million people in 250 days.”



Wednesday, 10 June 2015 00:00

Quantifying supply chain risk

Today, more businesses around the world depend on efficient and resilient global supply chains to drive performance and achieve ongoing success. By quantifying where and how value is generated along the supply chain and overlaying the array of risks that might cause the most significant disruptions, risk managers will help their businesses determine how to deploy mitigation resources in ways that will deliver the most return in strengthening the resiliency of their supply chains. At the same time, they will gain the insights needed to make critical decisions on risk transfer and insurance solutions to protect their companies against the financial consequences of potential disruptions.

As businesses evaluate their supply chain risk and develop strategies for managing it, they might consider using a quantification framework, which can be adapted to any traditional or emerging risk.
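
As a purely illustrative sketch of such a framework (the supplier names, probabilities and dollar figures below are all invented), quantification can start as simply as an expected-annual-loss ranking: the likelihood of each node failing multiplied by the value at stake, which shows where mitigation spend buys the most resilience.

    # Illustrative only: suppliers, probabilities and impacts are invented.
    suppliers = [
        # (name, annual disruption probability, loss if disrupted, USD)
        ("Packaging Co",  0.10,  2_000_000),
        ("Chip Foundry",  0.02, 40_000_000),
        ("Logistics Hub", 0.25,  1_500_000),
    ]

    # Expected annual loss per node: probability x impact.
    exposure = {name: p * loss for name, p, loss in suppliers}

    # Rank nodes so mitigation resources go where they deliver most return.
    for name, eal in sorted(exposure.items(), key=lambda kv: -kv[1]):
        print(f"{name:13s} expected annual loss: ${eal:,.0f}")

The same probability-times-impact arithmetic also feeds the risk transfer decision: exposure that cannot be mitigated cheaply is a natural candidate for insurance.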



Enterprises will account for 46 percent of Internet of Things (IoT) device shipments this year, BI Intelligence predicts. That’s not surprising when you consider the incredible predictions around IoT savings (billions, according to Business Insider) and IoT revenues ($14.4 trillion by 2022, according to this Forbes column).

But first, there will be raw data — terabytes of it, warns Elle Wood in a recent post for analytics vendor AppDynamics’ blog.

“With a sensor on absolutely everything – from cars and houses to your family members – it goes without saying there will be some challenges with these massive amounts of data,” Wood writes. “After all, IoT isn’t just about connecting things to the Internet; it’s about generating meaningful data.”



SURAT, India — “I don’t have to go to the gym,” says Urmil Kumar Vyas with an impish smile. “Don’t you think climbing 400 steps is enough exercise for a day?”

Vyas and I are wending our way toward a high-rise building in one of the wealthier zones of Surat, a city of 5 million in western India about five hours north of Mumbai. Vyas is a primary health worker in the Surat Municipal Corporation’s Vector Borne Diseases Control Department. He has spent 21 years on the job, and has seen his share of sickness and death. But his energy and sense of humor remain intact.

Vyas joined the city workforce in 1994, the year Surat exploded onto the front pages of newspapers worldwide in the aftermath of a virulent plague. More than 50 people died. Hundreds of thousands more, including migrant workers, fled the city out of fear; businesses across the city shut down.



It seems like once a week, we see yet another story about a security failure involving passwords. In May alone, for instance, the news came that an unpatched vulnerability in Oracle’s PeopleSoft could open a hole for thieves to steal passwords; Google revealed that those security questions that help you retrieve a lost password are anything but secure; and Starbucks blamed passwords for its own recent hack attack.

It’s no wonder, then, that passwords (and usernames) were a popular topic at the RSA Conference this year. One of those speaking about the problem of passwords, Phillip Dunkelberger, president and CEO at Nok Nok Labs, said a number of significant problems with passwords make them a poor single method of authentication.

“First, passwords are a symmetric secret – we enter a password on our PC or smartphone that is matched up on a server, this means that organizations are holding hundreds of millions of passwords in large databases. Despite using techniques such as salting and hashing of password databases, security professionals have found it practically impossible to secure this infrastructure, so passwords are very vulnerable to massive, scalable hacks,” he said.
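
For readers who want to see what the "salting and hashing" Dunkelberger mentions looks like in practice, here is a minimal sketch using Python's standard library. It is a generic illustration, not any vendor's implementation, and it also illustrates his point: even done properly, the server still holds a database of verifiable secrets that must be defended.

    import hashlib
    import hmac
    import os

    def hash_password(password, iterations=200_000):
        """Derive a slow, salted hash; store salt and digest, never the password."""
        salt = os.urandom(16)  # unique per user; defeats precomputed tables
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, digest

    def verify_password(password, salt, stored, iterations=200_000):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, stored)  # constant-time compare

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("wrong guess", salt, digest)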



Businesses often struggle on with legacy server rooms due to budget constraints and fear of upgrade risks. In this article Mark Allingham challenges BC managers to face up to this problem.

One of the basic rules of business continuity management is to ensure that everyday information technology systems are protected and fit for purpose, but often businesses struggle on with legacy server rooms.

The server room is the beating heart of any but the smallest business. You rely on your servers for vital files, essential information and the day to day running of the organization, so any risk of failure is a considerable threat to business continuity. Legacy server rooms with outdated equipment and limited capacity are liable to power outages, downtime and worse. So any business continuity manager should consider carefully whether their existing server room is fit for purpose.



Andrew MacLeod argues that insights into, and more importantly understanding of, an organization’s culture help to ascertain the risk appetite of an organization and can therefore be used to enhance organizational resilience. For an organization to truly enhance its resilience it needs to embed a culture of resilience at every level.

By Andrew MacLeod BA (Hons) MBCI

“The concept of organizational culture must be recognised as one of vital importance to the understanding of organization and all activities and processes operating within and in connection with organization.” (Brooks, 2003)

As Brooks states, the concept of culture and therefore insights into its operation within an organization are fundamental. However, to fully understand how culture can enhance organizational resilience, one must be clear about what is meant by both organizational resilience and organizational culture. This paper will define organizational resilience in the contemporary context and explore what is meant by culture. It will be demonstrated that culture is a complex field of study and that every organization has its own unique culture which is interwoven with concepts of individual and national culture. This paper will argue that insights into, and more importantly understanding of, an organization’s culture help to ascertain the risk appetite of an organization and these insights can be used to enhance organizational resilience. It will be shown that for an organization to truly enhance its resilience it needs to embed a culture of resilience at every level.



Many organizations are hesitant to adopt cloud services for cloud storage and cloud-based file sharing.  Although there are many customers that don’t understand cloud security (and don’t want to), slow adopters present a special challenge for managed service providers (MSPs).

One way to sell organizations on the importance of their own security practices is to point out how far cloud services have come in terms of safety and reliability.  How can you do this?  Here are some ways that MSPs can convince slow-to-adopt organizations to take responsibility for their data security:



In the aftermath of the 2008 global financial crisis, postmortems were convened in countries around the world to identify what went wrong. A unanimous conclusion was that Boards of Directors of public companies in general, and financial institutions in particular, need to do more to oversee “management’s risk appetite and tolerance” if future crises are to be avoided.

This finding represents a significant paradigm shift in role expectations and introduces a new concept the Financial Stability Board (FSB) has coined: effective “Risk Appetite Frameworks” (RAFs). Regulators around the world are now moving at varying speeds to implement these conclusions by enacting new laws and regulations. What regulators appear to be seriously underestimating is the amount of change necessary to make this laudable goal a reality.



Are you embarking on an IT career? Are you maybe a few years in and looking to make a big move in your career if you can find the right opportunity?

What are your expectations for your next IT job? Perhaps you expect the following:

  • To be treated by management with respect.
  • To have invigorating, exciting work and to feel your work is appreciated.
  • To have co-workers you admire and who admire you.
  • To be compensated well—because you’re worth it!

What you might want to do right now is write these expectations down.

Then, go out in the backyard and LIGHT THEM ON FIRE.

Congratulations! You have just liberated yourself from job disillusionment and career self-sabotage.



In addition to announcing that it is making its core engine available as an open source Project Apex technology, DataTorrent has released an update to its Big Data analytics software for Hadoop that eliminates the dependencies organizations now have on developers to create these applications.

John Fanelli, vice president of marketing for DataTorrent, says the latest version of DataTorrent enables individuals to assemble Big Data analytics applications without having to write code. In addition, end users can make use of a library of visualizations to create dashboards in a matter of minutes.

Finally, DataTorrent 3.0 comes with pre-built connectors for integrating with both enterprise applications and custom Java applications in addition to graphical tools that make it simpler to ingest data into a Big Data application.



Drug abuse with people sharing the same syringe

In a small, rural town in Southern Indiana, a public health crisis emerges.  In a community that normally sees fewer than five new HIV diagnoses a year, more than a hundred new cases are diagnosed and almost all are coinfected with hepatitis C virus (HCV).

How was this outbreak discovered, and what caused this widespread transmission? Indiana state and local public health officials – supported by CDC – set out to answer these questions and help stop the spread of HIV and HCV in this community.

The Outbreak

In January 2015, Indiana disease intervention specialists noticed that 11 new HIV diagnoses were all linked to the same rural community. This spike in HIV diagnoses in an area never before considered high-risk for the spread of HIV launched a larger investigation into the cause and impact of these related cases.

The investigation began with the 11 newly diagnosed cases. This process involved talking to the newly diagnosed individuals about their health and sexual behaviors, as well as their past drug use. In the United States, HIV is spread mainly by having sex, or by sharing injection drug equipment such as needles, with someone who has HIV.

Scanning electron micrograph of HIV-1 virions budding from a cultured lymphocyte.

In the case of the 11 related diagnoses in Indiana, almost all were linked to injection drug use. Investigators discovered that syringe-sharing was a common practice in this community, often to inject the prescription opioid Opana (oxymorphone), a powerful oral semi-synthetic opioid medicine used for pain. HIV can be spread through injection drug use when injection drug equipment, such as syringes, cookers (bottle caps, spoons, or other containers), or cottons (pieces of cotton or cigarette filters used to filter out particles that could block the needle), is contaminated with HIV-infected blood. The most common cause of HIV transmission from injection drug use is syringe-sharing. Persons who inject drugs (PWID) are also at risk for HCV infection, and co-infection with HCV is common among HIV-infected PWID: between 50% and 90% of HIV-infected persons who inject drugs are also infected with HCV.

The Investigation

“Contact tracing” is the process of identifying all individuals who may have been exposed to an ill person, in this case a person infected with HIV. Contact tracing involves interviewing the newly diagnosed patients to identify their syringe-sharing and sex partners. These “contacts” are then tested for HIV and HCV infection and, if found infected, are likewise interviewed to identify their syringe-sharing and sex partners. This cycle continues until no more new contacts are located.
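
The cycle just described is, in effect, a breadth-first traversal of the contact network, which a short sketch makes explicit. The lookup and test functions below are hypothetical stand-ins for the interview and laboratory steps; real tracing also counsels contacts who test negative.

    from collections import deque

    def trace(index_cases, contacts_of, is_infected):
        """Breadth-first contact tracing.

        index_cases: the initially diagnosed individuals
        contacts_of: person -> iterable of syringe-sharing and sex partners
                     (hypothetical stand-in for the interview step)
        is_infected: person -> bool (hypothetical stand-in for lab testing)
        """
        infected = set(index_cases)
        queue = deque(index_cases)
        while queue:  # the cycle continues until no new contacts surface
            person = queue.popleft()
            for contact in contacts_of(person):
                if contact not in infected and is_infected(contact):
                    infected.add(contact)  # newly diagnosed: interview them too
                    queue.append(contact)
        return infected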

As of May 18, contact tracing and increased HIV testing efforts throughout the community had identified 155 adult and adolescent HIV infections. The investigation has revealed that injection drug use in this community is a multi-generational activity, with as many as three generations of a family and multiple community members injecting together, and that, due to the short half-life of the drug, persons who inject drugs may have injected multiple times per day (up to 10 times in one case).

Early HIV treatment not only helps people live longer but it also dramatically reduces the chance of transmitting the virus to others.  People who do not have HIV and who are at high risk for HIV can also benefit more directly from the drugs used to treat HIV to prevent them from acquiring HIV.  This is known as pre-exposure prophylaxis (PrEP). Post-exposure prophylaxis, or PEP, is an option for those who do not have HIV but could have been potentially exposed in a single event.

The Response


So what is the next step in addressing this staggering outbreak? First, public health officials must work to get every person exposed to HIV tested. All persons diagnosed with HIV need to be linked to healthcare and treated with antiretroviral medication. Persons not infected with HIV are counseled on effective prevention and risk reduction methods, including condom use, PrEP, PEP, harm reduction, and substance abuse treatment. Getting messages about the benefits of HIV treatment to newly diagnosed individuals, and prevention information to at-risk members of the community, are key components of controlling this outbreak.

The underlying factors of the Indiana outbreak are not unique. Across the United States, many communities are dealing with increases in injection drug use and HCV infections; these communities are vulnerable to similar HIV outbreaks. CDC asked state health departments to monitor data from a variety of sources to identify jurisdictions that, like this county in Indiana, may be at risk of an injection drug use (IDU)-related HIV outbreak. These data include drug arrest records, overdose deaths, opioid sales and prescriptions, availability of insurance, emergency medical services, and social and demographic data. Although CDC has not seen evidence of another similar HIV outbreak, the agency issued a health alert to state, local, and territorial health departments urging them to examine their HIV and HCV surveillance data and to ensure prevention and care services are available for people living with HIV and/or HCV.

The work that has been done thus far, as well as the continued efforts being made in this response, highlights the importance of partnerships between federal, state and local health agencies. The work done by the Indiana State Department of Health’s disease intervention specialists to link the initial HIV cases to this rural community, and the work of local health officials to respond quickly and thoroughly, investigate all possible exposures and spread important prevention information, demonstrate the critical importance of strong public health surveillance and response.

The Division of HIV/AIDS Prevention commends the efforts of all the individuals involved in controlling the HIV outbreak in Indiana. The response illustrates that together we are committed to improving the health of our communities across the nation.

AUSTIN, Texas – State and federal recovery officials urge Texans affected by the ongoing severe storms and floods to watch for and report any suspicious activity or potential fraud.

Even as government agencies and charitable groups continue to provide disaster assistance, scam artists, identity thieves and other criminals may attempt to prey on vulnerable survivors. The most common post-disaster fraud practices include phony housing inspectors, fraudulent building contractors, bogus pleas for disaster donations and fake offers of state or federal aid.

“Scam attempts can be made over the phone, by mail or email, or in person,” said Federal Coordinating Officer Kevin Hannes of Federal Emergency Management Agency (FEMA). “Con artists are creative and resourceful, so we urge Texans to remain alert, ask questions and require identification when someone claims to represent a government agency.”      

Survivors should also keep in mind that state and federal workers never ask for or accept money, and always carry identification badges with a photograph. There is no fee required to apply for or to get disaster assistance from FEMA, the U.S. Small Business Administration (SBA) or the state. Additionally, no state or federal government disaster assistance agency will call to ask for your financial account information; unless you place a call to the agency yourself, you should not provide personal information over the phone – it can lead to identity theft.

Those who suspect fraud can call the FEMA Disaster Fraud Hotline at 866-720-5721 (toll free). Complaints may also be made to local law enforcement agencies.

Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.  Follow us on Twitter at https://twitter.com/femaregion6.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955, emailing disastercustomerservice@sba.gov, or visiting SBA’s website at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call (800) 877-8339.

(TNS) — On May 23, the extended Taylor family had just sat down for dinner at their River Road house when the phone rang. It was a pre-recorded call from Hays County emergency officials warning residents with homes along the Blanco River that the water was rising quickly and flooding was likely.

It was the first of several such calls his father-in-law took during the course of the meal, recalled Scott Sura. “But he sort of brushed it off. He’s been through several floods, and he wasn’t worried. In fact, he later went to bed.”

Across the river and downstream, on Flite Acres Road, Frances Tise said she and her husband Charles also fielded the emergency calls that evening. “But I had seen the river rise before, and it just came up to our backyard,” she said. “We just didn’t realize how fast it was coming up.”



(TNS) — In a narrow parking lot, Brett Kennedy and Sisir Karumanchi stand around what looks like a suitcase. But then four limbs extend from its sides, bending and clicking into position. Two spread out like legs and two rise up like arms as the robot goes through several poses, looking for all the world like a Transformer doing yoga.

This is RoboSimian, a prototype rescue robot whose builders at NASA's Jet Propulsion Laboratory hope can win the $2-million prize at the DARPA Robotics Challenge. The goal: to foster a new generation of rescue robots that could help save lives when the next disaster hits.

Twenty-four teams from around the U.S. and the globe have sent their best and brightest bots to compete in a grueling obstacle course — a robot Olympics, if you will.



What cloud services should managed service providers (MSPs) sell to customers, and how profitable can those services really be? These are questions that MSPs are grappling with every day right now. Service Leadership CEO Paul Dippell presided over several sessions at LabTech Automation Nation 2015 last week that provided perspective on these questions.

Here are some of the takeaways from a couple of Dippell's sessions, including an overview of the cloud market from his company and some real-world perspective from a panel of MSPs. Let’s start with an overview of the cloud market today.



Tuesday, 09 June 2015 00:00

Attivio Updates Big Data Indexing Engine

For all the excitement that Big Data often generates with an organization, one of the fundamental challenges most of them face comes down to data management plumbing. There’s no shortage of data, but organizing all of it in a way that makes it consumable by a Big Data analytics application is problematic.

To enable IT organizations to manage that process better, Attivio today launched an update to its namesake indexing engine for data within an enterprise that adds a range of self-service capabilities for business analysts and data scientists to identify and unify self-selected data tables from the universal index.

Attivio CEO Stephen Baker says Attivio is squarely focused on applying search and indexing technologies to better manage data assets within an enterprise. All too often, IT organizations have hundreds of enterprise applications, but no one is quite sure what data resides inside each. As a result, these same organizations wind up investing in hiring a data scientist, only to watch the person spend months trying to organize all the data inside the organization. Attivio, says Baker, provides a mechanism to reduce the manual effort associated with integrating all that data by as much as 80 percent.
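
Attivio's engine is proprietary, but the core idea of a universal index over enterprise data assets can be illustrated with a toy inverted index. Everything below — the asset names, the descriptive text, the function — is invented for illustration and reflects nothing of Attivio's actual API.

    from collections import defaultdict

    def build_index(assets):
        """Toy inverted index: map each term to the data assets mentioning it.
        'assets' maps an asset name to its descriptive text or column names."""
        index = defaultdict(set)
        for name, text in assets.items():
            for term in text.lower().split():
                index[term].add(name)
        return index

    assets = {  # hypothetical enterprise data sources
        "crm.contacts": "customer name email region",
        "erp.orders":   "order id customer sku amount",
        "hr.employees": "employee name email department",
    }
    index = build_index(assets)
    print(sorted(index["customer"]))  # ['crm.contacts', 'erp.orders']
    print(sorted(index["email"]))     # ['crm.contacts', 'hr.employees']

Even this toy version shows why an index helps: an analyst can ask which systems hold customer or email data without opening each application, which is exactly the manual effort the product aims to cut.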



(TNS) — Drone photography could soon take off for Victoria, Texas’ emergency responders.

Compared to the time, cost and challenges associated with using helicopters for search and rescue, drones could be a game-changer for the future of emergency response, said Emergency Management Coordinator Rick McBrayer.

Emergency responders used a drone, at no cost to taxpayers, to track the Guadalupe River flood through Victoria in real time. Now, officials are exploring the legality and permitting process to use drones again.



Who was responsible for the recent U.S. Office of Personnel Management (OPM) data breach? Congressman Michael McCaul told CBS News that Chinese hackers could be the culprits in the incident that resulted in the theft of personal information from more than 4 million current and former federal employees.

And as a result, the OPM tops this week's list of IT security newsmakers to watch, followed by U.S. HealthWorks, the Dyre malware and CTERA Networks.

What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:



Tuesday, 09 June 2015 00:00

Getting a Handle on This Dev/Ops Thing

Is Dev/Ops for real, or is it simply the latest marketing tool to get you to buy more stuff for your data center? Or is it a little of both, a potentially revolutionary change to enterprise infrastructure management provided you can see through all the Dev/Ops-washing that is going on?

As with most technology initiatives, the concept behind Dev/Ops is solid – it offers a more flexible approach to the data-resource allocation challenges present in hyperscale and Big Data environments. But by the same token, success or failure is usually determined by the execution, not the initial design. So the real challenge with Dev/Ops is not in selecting the right platform but in taking the designs and concepts currently in the channel and making them your own.



There really isn’t anything new under the sun. More than a century ago, Nikola Tesla made great strides in his dream of the wireless transmission of electricity. Tesla came up short, but his dream increasingly is coming true more than a century later.

Popular Science and InformationWeek report on research from the University of Washington that could pave the way for devices to be charged by Wi-Fi. The InformationWeek story says that the approach, which of course is called power over Wi-Fi (PoWiFi), could work at up to 28 feet. Prototypes (temperature and camera sensors) are operational to 20 feet.

Popular Science has more detail, saying that about 1 watt of power is transmitted as a normal part of Wi-Fi operations. The technology is aimed at capturing and putting that energy to work. The 1 watt of power isn’t enough to charge phones or perform other higher-level jobs. However, many tasks associated with the Internet of Things (IoT) can be satisfied. Wrote Dave Gershgorn:

This technology isn’t new. Companies like Energous have already brought products to market that send power over similar Wi-Fi signals, and they claim to be able to charge cell phones. Yet the novel feature of PoWiFi is the ability to harness power with pre-existing hardware, and the University of Washington team says their routers transmit both power and data in the same signal.
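
A rough back-of-the-envelope calculation shows why roughly 1 watt of broadcast power suits IoT sensors but not phone charging. The harvested fraction and the device loads below are pure assumptions for illustration; the article gives no efficiency figures.

    # Back-of-the-envelope; harvested fraction and loads are assumptions.
    broadcast_power_w = 1.0      # roughly what a router emits, per the article
    harvested_fraction = 0.001   # hypothetical: about a milliwatt at the device
    harvested_w = broadcast_power_w * harvested_fraction

    sensor_draw_w = 0.0001       # hypothetical 100-microwatt IoT sensor
    phone_battery_wh = 10.0      # typical smartphone battery capacity

    print(f"Sensors this could run: {harvested_w / sensor_draw_w:.0f}")
    print(f"Hours to fill a phone battery: {phone_battery_wh / harvested_w:,.0f}")

At a milliwatt, ten low-power sensors could run indefinitely, while a phone battery would take on the order of ten thousand hours to fill — which matches the article's framing of the technology as an IoT play rather than a phone charger.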



Not surprisingly, I’ve heard from a lot of people regarding the announcement of the Office of Personnel Management (OPM) breach, but what Andy Hayter, security evangelist for G DATA, told me in an email jumped out at me – in part because of the imagery but also because it was eerily similar to a thought that I had. Hayter said:

I have to think that it must appear to threat actors all over the globe that the U.S. government's IT systems are full of holes, like Swiss cheese, and the response from the U.S. is to play whack-a-mole every time, in a valiant attempt to close each hole. With all of these attacks, it’s likely that each one is arming cyber criminals with exactly what they need and want to execute another one, and the vicious cycle continues. Unfortunately every time there's another breach on a Federal agency, it spells out our vulnerabilities loud and clear to our adversaries, letting them know there are many more opportunities for them to hack our systems and networks over and over again.

Whack-a-mole security. It really is easy to think that way. The OPM breach is just the latest – and perhaps most damaging because of the vast amount of data that could be compromised – incident within the federal government, and now we are at a point where we’re going to wait for the next incident to pop up.



Does resilience in your enterprise spring from its senior management as a source of inspiration to all? Or is it perhaps embedded in your organisational culture, lovingly nurtured and developed over the years? Either possibility would be gratifying. However, some recent information suggests that neither is the primary source of resilience. Researchers Sarah Bond and Gillian Shapiro surveyed 835 employees from a cross-section of firms in Britain and found that 90% of those employees considered their resilience to be inherently within themselves, while only 10% thought their organisation provided them with resilience. If this is true more generally, there are some important consequences for any enterprise to consider.



Helping your clients remain compliant with the laws and standards set forth by the governing bodies presiding over their industries is an essential component of the managed service provider’s (MSP’s) role. When it comes to protecting sensitive data stored in the cloud or transmitted via cloud-based file sharing, MSPs often need to protect their clients from themselves.

Among the industries that appear to be fighting this battle against their own personnel, perhaps none is more scrutinized than the healthcare industry. While there are many strict stipulations in place for handling sensitive health data, there are also many employees that have access to the data from a host of endpoints.

The healthcare industry’s HIPAA regulations go a long way towards ensuring that the private, sensitive, personal information of patients is handled very carefully. What the regulations don’t stipulate well enough, however, is the management of an organization’s own administrative, physical, and technical safeguards.  According to HealthIT Security, “If a recent survey is any indication, health and pharmaceutical companies, along with other industries, might be falling behind when it comes to protecting sensitive data.”



(TNS) — The Hawaii National Guard is holding the largest disaster preparedness exercise in its history with more than 2,200 participants from multiple states responding to a simulated hurricane and other events across Oahu, Hawaii island, Maui and Kauai.

Some Chinook and Black Hawk helicopter activity will be seen, Waimanalo will request assistance — possibly for debris clearance — a mass-casualty exercise will take place at the Queen’s Medical Center-West Oahu, and harbor chemical spills will be dealt with in Honolulu and on Hawaii island, officials said.

“It combines the civilian government and military organizations, and that’s important because we need to get the organizations working together — understanding each other’s capabilities — before we get to a natural disaster, a real natural disaster event,” said Brig. Gen. Bruce Oliveira, the head of the Hawaii Army National Guard.



(TNS) — Florida has more homes at risk from the devastating damage of hurricane-powered storm surges than any other state, according to a new study by CoreLogic, a California-based real estate information firm.

While the designation will come as no surprise to anyone living smack in the path of hurricane alley, the numbers reported by CoreLogic are sobering. More than 2.5 million homes in the state are at risk for some kind of damage from storm surge, according to the study. Rebuilding costs statewide from an extreme worst-case surge could amount to $491 billion — more than the gross domestic products of Austria, Chile, Venezuela or a dozen other countries.

In the tri-county area between Miami and West Palm Beach, CoreLogic found more than a half million homes are at risk. The company estimated rebuilding costs for a worst-case flooding from storm surge at $105 billion.



The adoption rates have been slower than those of other industries, but financial institutions are finally starting to leverage the cloud in greater numbers. But the real story isn’t that they’re adopting it—it’s what they are adopting it for. As we discussed in a recent post, financial firms are more concerned about the security risks of cloud-based file sharing than most MSPs would like to hear.

CRM, application development, email and back-end services—these are the functions that most financial firms are prioritizing. Why is file-sharing noticeably absent? In an interview with eWeek, Luciano Santos, vice president of research and member services at the Cloud Security Alliance alluded to the reason:  

"Primarily the top security concerns were more focused around data protection. Data confidentiality, data governance and data breach were the top-ranked security concerns identified by the financial institutions that participated."



In communicating with the business and the board about the consequences of data breaches, IT is always going to be asked to place dollar figures on those consequences, which can be difficult to do even with increasing access to predictive analytics and historical data from any previous breaches in the organization. One of the most extensive benchmark studies IT can use to help with this is the Ponemon Institute’s annual “Cost of Data Breach Study: Global Analysis.” Now in its 10th year and sponsored by IBM, the recently released 2015 edition covers 11 countries, 350 companies, and detailed data about the direct and indirect costs of data breaches.

Three major factors are contributing to a rapid increase in the average cost of a data breach and the average cost per breached record – the latter varying by industry – according to Chairman and Founder Dr. Larry Ponemon:

“First, cyber attacks are increasing both in frequency and the cost it requires to resolve these security incidents. Second, the financial consequences of losing customers in the aftermath of a breach are having a greater impact on the cost. Third, more companies are incurring higher costs in their forensic and investigative activities, assessments and crisis team management."
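
When the board does ask for a number, the benchmark data reduces to simple arithmetic. The figures below are placeholders to be swapped for the per-record cost reported for your industry and your own exposure estimates; none of them comes from the study itself.

    # Placeholder figures only; substitute industry numbers from the benchmark.
    cost_per_record = 154       # illustrative per-record cost in USD
    records_at_risk = 250_000   # hypothetical records in one system
    breach_likelihood = 0.22    # hypothetical probability of a breach this year

    cost_if_breached = records_at_risk * cost_per_record
    expected_annual_loss = breach_likelihood * cost_if_breached

    print(f"Cost if breached:     ${cost_if_breached:,.0f}")
    print(f"Expected annual loss: ${expected_annual_loss:,.0f}")

The expected-annual-loss figure is the one most useful for weighing security spend against the organization's other risks.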



AUSTIN, Texas – Recovery specialists have some sound advice for Texans whose homes and property took on floodwaters: Protect your family’s health and your own by treating or discarding mold- and mildew-infected items.

Health experts urge those who find mold to act fast. Cleaning mold quickly and properly is essential for a healthy home, especially for people who suffer from allergies and asthma, said the Federal Emergency Management Agency (FEMA).

Mold and mildew can start growing within 24 hours after a flood, and can lurk throughout a home, from the attic to the basement and crawl spaces. The best defense is to clean, dry or, as a last resort, discard moldy items.

Although it can be hard to get rid of a favorite chair, a child’s doll or any other precious treasure to safeguard the well-being of your loved ones, a top-to-bottom home cleanup is your best defense, according to the experts.

Many materials are prone to developing mold if they remain damp or wet for too long. Start a post-flood cleanup by sorting all items exposed to floodwaters:

  • Wood and upholstered furniture, and other porous materials can trap mold and may need to be discarded.
  • Carpeting presents a problem because drying it does not remove mold spores. Carpets with mold and mildew should be removed.
  • However, glass, plastic and metal objects and other items made of hardened or nonporous materials can often be cleaned, disinfected and reused.

All flood-dampened surfaces should be cleaned, disinfected and dried as soon as possible. Follow these tips to ensure a safe and effective cleanup:

  • Open windows for ventilation and wear rubber gloves and eye protection when cleaning. Consider using a mask rated N-95 or higher if heavy concentrations of mold are present.
  • Use a non-ammonia soap or detergent to clean all areas and washable items that came in contact with floodwaters.
  • Mix 1-1/2 cups of household bleach in one gallon of water and thoroughly rinse and disinfect the area. Never mix bleach with ammonia as the fumes are toxic.
  • Cleaned areas can take several days to dry thoroughly. The use of heat, fans and dehumidifiers can speed up the drying process.
  • Check out all odors. It’s possible for mold to hide in the walls or behind wall coverings. Find all mold sources and clean them properly.
  • Remove and discard all materials that can’t be cleaned, such as wallboard, fiberglass and cellulose insulation. Then clean the wall studs where wallboard has been removed, and allow the area to dry thoroughly before replacing the wallboard.

 For other tips about post-flooding cleanup, visit www.fema.gov, www.epa.gov, or www.cdc.gov.


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.  Follow us on Twitter at https://twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955, emailing disastercustomerservice@sba.gov, or visiting SBA’s website at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call (800) 877-8339.

I was leafing through a pile of old BCI documents when I stumbled across a paper detailing a presentation, entitled “Resilience isn’t the future of business continuity” given by Charlotte Newnham at the BCM World Conference and Exhibition in November 2012.

In the presentation a number of facts and figures were provided which explain a great deal about “actual” resilience capabilities. Approximately 50% of existing resilience departments were in the public sector, and 76% of organisations extend the resilience remit to incident/emergency management. Whilst these figures seem positive for the resilience function, only 30% oversaw security or risk management and just 7% had any involvement in IT continuity.



Friday, 05 June 2015 00:00

Storm Surge: The Trillion Dollar Risk

More than 6.6 million homes on the Atlantic and Gulf coasts are at risk of hurricane-driven storm surge with a total reconstruction cost value (RCV) of nearly $1.5 trillion.

The latest annual analysis from CoreLogic finds that the Atlantic Coast has more than 3.8 million homes at risk of storm surge in 2015 with a total projected reconstruction cost value of $939 billion, while the Gulf Coast has just under 2.8 million homes at risk and nearly $549 billion in potential exposure.

Which states have the highest total number of properties at risk?

Six states—Florida, Louisiana, New York, New Jersey, Texas and Virginia—account for more than three-quarters of all at-risk homes across the United States. Florida has the highest total number of properties at various risk levels (2.5 million), followed by Louisiana (769,272), New York (464,534), New Jersey (446,148), Texas (441,304) and Virginia (420,052).



Friday, 05 June 2015 00:00

What to Do About Reputation Risk

Of executives surveyed, 87% rate reputation risk as either more important or much more important than other strategic risks their companies face, according to a new study from Forbes Insights and Deloitte Touche Tohmatsu Limited. Further, 88% say their companies are explicitly focusing on managing reputation risk.

Yet a bevy of factors contribute to reputation risk, making monitoring and mitigating the dangers seem particularly unwieldy. These include business decisions and performance in the following areas:




Global temperature trends. (Credit: NOAA)

A new study published online today in the journal Science finds that the rate of global warming during the last 15 years has been as fast as or faster than that seen during the latter half of the 20th Century. The study refutes the notion that there has been a slowdown or "hiatus" in the rate of global warming in recent years.


The study is the work of a team of scientists from the National Oceanic and Atmospheric Administration's (NOAA) National Centers for Environmental Information* (NCEI) using the latest global surface temperature data.

"Adding in the last two years of global surface temperature data and other improvements in the quality of the observed record provide evidence that contradict the notion of a hiatus in recent global warming trends," said Thomas R. Karl, L.H.D., Director, NOAA's National Centers for Environmental Information. "Our new analysis suggests that the apparent hiatus may have been largely the result of limitations in past datasets, and that the rate of warming over the first 15 years of this century has, in fact, been as fast or faster than that seen over the last half of the 20th century." 

The apparent observed slowing or decrease in the upward rate of global surface temperature warming has been nicknamed the "hiatus." The Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report, released in stages between September 2013 and November 2014, concluded that the upward global surface temperature trend from 1998-2012 was markedly lower than the trend from 1951-2012.


Since the release of the IPCC report, NOAA scientists have made significant improvements in the calculation of trends and now use a global surface temperature record that includes the most recent two years of data, 2013 and 2014, with 2014 being the warmest year on record. The calculations also use improved versions of both sea surface temperature and land surface air temperature datasets. One of the most substantial improvements is a correction that accounts for the difference between data collected from buoys and data collected from ships.

[Figure: No slowdown in global warming. Credit: NOAA]

Prior to the mid-1970s, ships were the predominant way to measure sea surface temperatures, and since then buoys have been used in increasing numbers. Compared to ships, buoys provide measurements of significantly greater accuracy. "In regards to sea surface temperature, scientists have shown that across the board, data collected from buoys are cooler than ship-based data," said Dr. Thomas C. Peterson, principal scientist at NOAA's National Centers for Environmental Information and one of the study's authors. "In order to accurately compare ship measurements and buoy measurements over the long-term, they need to be compatible. Scientists have developed a method to correct the difference between ship and buoy measurements, and we are using this in our trend analysis." 

In addition, more detailed information has been obtained regarding each ship's observation method. This information was also used to provide improved corrections for changes in the mix of observing methods.   

New analyses with these data demonstrate that incomplete spatial coverage also led to underestimates of the true global temperature change previously reported in the 2013 IPCC report. The integration of dozens of data sets has improved spatial coverage over many areas, including the Arctic, where temperatures have been rapidly increasing in recent decades. For example, the release of the International Surface Temperature Initiative databank, integrated with NOAA's Global Historical Climatology Network-Daily dataset and forty additional historical data sources, has more than doubled the number of weather stations available for analysis.

Lastly, the incorporation of additional years of data, 2013 and 2014, with 2014 being the warmest year on record, has had a notable impact on the temperature assessment. As stated by the IPCC, the "hiatus" period 1998-2012 is short and began with an unusually warm El Niño year. However, over the full period of record, from 1880 to present, the newly calculated warming trend is not substantially different from that reported previously (0.68°C / century (new) vs 0.65°C / century (old)), reinforcing that the new corrections mainly have an impact in recent decades.
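
To make the trend arithmetic concrete, here is a minimal sketch of how a least-squares warming trend in °C per century can be computed from annual anomalies. The data below are synthetic stand-ins, not NOAA's record or methodology:

```python
# Illustrative only: synthetic anomalies, not NOAA's dataset or method.
import numpy as np

years = np.arange(1880, 2015)
rng = np.random.default_rng(0)
# Synthetic annual anomalies trending upward ~0.68 degC per century, plus noise
anomalies = 0.0068 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Least-squares linear fit; slope is in degC per year
slope_per_year, _intercept = np.polyfit(years, anomalies, 1)
print(f"Warming trend: {slope_per_year * 100:.2f} degC per century")
```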

On the Web

* Note: NOAA's National Centers for Environmental Information (NCEI) is the merger of the National Climatic Data Center, National Geophysical Data Center, and National Oceanographic Data Center as approved in the Consolidated and Further Continuing Appropriations Act, 2015, Public Law 113-235. From the depths of the ocean to the surface of the sun and from million-year-old sediment records to near real-time satellite images, NCEI is the nation's leading authority for environmental information and data. For more information go to: http://www.ncdc.noaa.gov/news/coming-soon-national-centers-environmental-information 


NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.

Friday, 05 June 2015 00:00

The Real Cost of IT Complexity

IT complexity is one of the enterprise’s biggest challenges, affecting every facet of the organization--from employees to customers.

But how do you define IT complexity, and what is the impact? Lucky for us, Oracle commissioned IDC to look at organizations that simplified their IT environment and to develop an index to quantify IT complexity’s impact.

According to IDC, IT complexity can be defined “as the state of an IT Infrastructure that leads to wasted effort, time, and expense.” Conditions contributing to this include:

  • Heterogeneous environments
  • Using outdated technologies
  • Server, application or data sprawl
  • Lack of sufficient management tools and automation
  • Siloed IT
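
IDC's index methodology is not spelled out in the article; purely as a hypothetical sketch, conditions like those above could be rolled up into a single weighted score along these lines (all weights and names invented for illustration):

```python
# Hypothetical sketch only -- not IDC's published index methodology.
# Each condition is scored 0 (absent) to 1 (pervasive) and weighted.
WEIGHTS = {
    "heterogeneous_environments": 0.25,
    "outdated_technologies": 0.20,
    "sprawl": 0.25,                 # server, application or data sprawl
    "weak_tooling_automation": 0.15,
    "siloed_it": 0.15,
}

def complexity_index(scores: dict) -> float:
    """Weighted 0-100 score; higher means more wasted effort, time, expense."""
    return 100 * sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

print(complexity_index({
    "heterogeneous_environments": 0.8,
    "outdated_technologies": 0.5,
    "sprawl": 0.7,
    "weak_tooling_automation": 0.4,
    "siloed_it": 0.9,
}))  # ~67 on this invented scale
```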



CloudEndure has released the results of a recent survey into public cloud usage, downtime, availability and disaster recovery.

The 2015 Public Cloud Disaster Recovery Survey looks at disaster recovery challenges and best practices. It also benchmarks the best practices of companies that host web applications in the public cloud. The survey received responses from 109 IT professionals from North America and Europe.

Key findings include:

  • The number one risk to system availability is human error, followed by network failures and cloud provider downtime.
  • While the vast majority of the organizations surveyed (83 percent) have a service availability goal of 99.9 percent or better, almost half of the companies (44 percent) had at least one outage in the past three months, and over a quarter (27 percent) had an outage in the past month.
  • The cost of a day of downtime in 37 percent of the organizations is more than $10,000.
  • When it comes to service availability, there is a clear gap between how organizations perceive their track record and the reality of their capabilities. While almost all respondents claim they meet their availability goals consistently (37 percent) or most of the time (50 percent), 28 percent of the organizations surveyed don’t measure service availability at all. It is hard to tell how these organizations claim to meet their goals when they are not able to measure them.
  • The top challenges in meeting availability goals are budget limitations, insufficient IT resources, and lack of in-house expertise.
  • There is a strong correlation between the cost of downtime and the average hours per week invested in backup and disaster recovery.

Read the survey report (registration required).
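
For context on the availability figures above, a 99.9 percent goal leaves less than nine hours of downtime per year. A quick back-of-envelope conversion using the standard formula (downtime budget = (1 - availability) x period), nothing vendor-specific:

```python
# Allowed downtime per year for a given availability goal.
HOURS_PER_YEAR = 365 * 24  # 8760

for availability in (0.999, 0.9999, 0.99999):
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.3%} availability -> "
          f"{downtime_hours:.2f} h/year ({downtime_hours * 60:.1f} min)")
```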

Celebrating Europe's finest in the business continuity industry

At an awards ceremony at La Maison du Cygne, a prestigious 17th-century building on the Grand Place in Brussels, Belgium, and once home to the city's butchers' guild, the Business Continuity Institute recognised the talent that exists in the business continuity industry across the continent as it held its annual European Awards.

The BCI Awards consist of nine categories – eight of which are decided by a panel of judges with the winner of the final category (Industry Personality of the Year) being voted upon by BCI members from across the region.

The winners were:

Continuity and Resilience Consultant of the Year 2015
Chris Needham-Bennett MBCI of Needhams 1834

Continuity and Resilience Professional of the Year 2015 (Private Sector)
Michael Crooymans CBCI of SOGETI

Continuity and Resilience Newcomer of the Year 2015
Jacqueline Howard CBCI of Marks and Spencer

Continuity and Resilience Team of the Year 2015
Ulster Bank Business Resilience Team

Continuity and Resilience Provider (Service/Product) of the Year 2015
Sungard Availability Services

Continuity and Resilience Innovation of the Year 2015
PinBellCom Ltd

Most Effective Recovery of the Year 2015

Industry Personality of the Year 2015
David Window MBCI of Continuity Shop

The BCI European Awards are one of seven regional awards programmes held by the BCI, culminating in the annual Global Awards held in November during the Institute’s annual conference in London, England. All winners in the BCI European Awards are automatically entered into the Global Awards.

(TNS) — Rice University civil engineering professor Philip Bedient is an expert on flooding and how communities can protect themselves from disaster. He directs the Severe Storm Prediction, Education and Evacuation from Disasters Center at Rice University.

On Memorial Day evening, Houston suffered massive flooding after getting nearly 11 inches in 12 hours. Bedient designed the Flood Alert System — now in its third version — which uses radar, rain gauges, cameras and modeling to indicate whether Houston's Brays Bayou is at risk of overflowing and flooding the Texas Medical Center. In an interview with Ryan Holeywell, editor of the Kinder Institute's "Urban Edge" blog, Bedient said more places need this kind of warning system.



Thursday, 04 June 2015 00:00

Implications of the All-Flash Data Center

From a performance perspective, the all-Flash data center certainly makes a lot of sense. In an age when the movement of data from place to place is more important than the amount of data that can be stored or processed in any given location, high I/O in the storage array should be a top priority.

But while no one disputes the efficacy of Flash over disk and tape when it comes to speed, the question remains: Does the all-Flash data center still make sense for the enterprise? And if so, what impact will this have on other systems and architectures up and down the stack?

HP recently pushed the envelope on the all-Flash data center a little further with a new line-up of arrays and services for the 3PAR StoreServ portfolio. The set-up is said to improve performance, lower the physical footprint of storage and reduce cost to about $1.50 per usable GB, which is about 25 percent less than current equivalent solutions. The company is already reporting workload performance of 3.2 million IOPS with sub-millisecond latency among its Flash drives, and the 3PAR family’s Thin Express ASIC provides a high degree of data resiliency between the StoreServ array and the ProLiant server to reduce transmission errors.
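
HP's pricing model is not detailed here; as a rough, hypothetical sketch, a per-usable-GB figure like $1.50 falls out of raw media cost divided by an effective data-reduction ratio. Both numbers below are invented for illustration:

```python
# Hypothetical illustration -- not HP's pricing model or real figures.
raw_cost_per_gb = 6.00       # assumed raw flash cost, $ per raw GB
data_reduction_ratio = 4.0   # assumed dedupe/compression/thin-provisioning gain

usable_cost = raw_cost_per_gb / data_reduction_ratio
print(f"${usable_cost:.2f} per usable GB")  # $1.50 with these assumptions
```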



Now that management science has taught us how to quantify so many other things, crisis management is a good candidate for being awarded its own scale of seriousness too. The detail you put into such a scale will depend on how much crises afflict your enterprise. If you are battling a continual stream of problems, your scale may be finer (say, 1 to 10), in order to sort out the life-and-death situations from the nuisances. Otherwise, a high-medium-low system of ranking may be sufficient, as long as there are clear definitions for crises to be categorised correctly. So, how does this work in practice?
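
As one hypothetical way to make a high-medium-low scheme concrete, a simple classifier might map impact criteria onto severity bands. The thresholds below are invented for illustration, not a prescribed standard:

```python
# Hypothetical high/medium/low crisis triage -- criteria are illustrative only.
def classify_crisis(life_safety_risk: bool, downtime_hours: float,
                    media_attention: bool) -> str:
    """Map simple impact criteria onto a high/medium/low severity scale."""
    if life_safety_risk or downtime_hours > 24:
        return "HIGH"    # life-and-death or prolonged disruption
    if downtime_hours > 4 or media_attention:
        return "MEDIUM"  # significant but containable
    return "LOW"         # nuisance-level incident

print(classify_crisis(False, 6.0, False))  # MEDIUM
```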



(TNS) — Thousands of Pinellas County, Fla., beach residents and business owners could hit an unexpected road block trying to return to the barrier islands after a storm evacuation.

Pinellas County Sheriff Bob Gualtieri said Monday his office, working with beach city governments, has developed a hang-tag identification system to allow drivers quick access to the islands after an evacuation.

However, since the program rolled out in February, only 17,000 hang tags have been handed out, while Gualtieri estimates about 88,000 people will need them.

“That gives me a lot of concern,” he said, urging people to get the tags as soon as possible.



AUSTIN, Texas – Texans who sustained property damage as a result of the ongoing severe storms and flooding are urged to register with the Federal Emergency Management Agency (FEMA), as they may be eligible for federal and state disaster assistance.

The presidential disaster declaration of May 29 makes disaster aid available to eligible families, individuals and business owners in Hays, Harris and Van Zandt counties.  

“FEMA wants to help Texans begin their recovery as soon as possible, but we need to hear from them in order to do so,” said FEMA’s Federal Coordinating Officer (FCO) Kevin Hannes. “I urge all survivors to contact us to begin the recovery process.”

People who had storm damage in Harris, Hays, and Van Zandt counties can register for FEMA assistance online at www.DisasterAssistance.gov or via smartphone or web-enabled device at m.fema.gov. Applicants may also call 800-621-3362 or (TTY) 1-800-462-7585 from 6 a.m. to 9 p.m. daily. Flood survivors statewide can call and report their damage to give the state and FEMA a better idea of the assistance that is needed in undesignated counties.

Assistance for eligible survivors can include grants for temporary housing and home repairs, and for other serious disaster-related needs, such as medical and dental expenses or funeral and burial costs. Long-term, low-interest disaster loans from the U.S. Small Business Administration (SBA) also may be available to cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations.

Eligible survivors should register with FEMA even if they have insurance. FEMA cannot duplicate insurance payments, but under-insured applicants may receive help after their insurance claims have been settled.

Registering with FEMA is required for federal aid, even if the person has registered with another disaster-relief organization such as the American Red Cross, or local community or church organization. FEMA registrants must use the name that appears on their Social Security card. Applicants will be asked to provide:

  • Social Security number
  • Address of the damaged home or apartment
  • Description of the damage
  • Information about insurance coverage
  • A current contact telephone number
  • An address where they can get mail
  • Proof of residency, such as a utility bill, rent receipts or mortgage payment record
  • Bank account and routing numbers if they want direct deposit of any financial assistance.

(TNS) — When a powerful earthquake in March 2011 triggered a tsunami that devastated Japan’s Fukushima-Daiichi nuclear plant and raised radiation to alarming levels, authorities contemplated sending in robots first to inspect the facility, assess the damage and fix problems where possible. But the robots were not up to the task and, eventually, humans had to complete most of the hazardous work.

Ever since, Defense Advanced Research Projects Agency (DARPA), an agency under the U.S. Department of Defense, has been working to improve the quality of robots. It is now conducting a global competition to design robots that can perform dangerous rescue work after nuclear accidents, earthquakes and tsunamis.

The robots are tested for their ability to open doors, turn valves, connect hoses, use hand tools to cut panels, drive vehicles, clear debris and climb a stair ladder — all tasks that are relatively simple for humans, but very difficult for robots.



Wednesday, 03 June 2015 00:00

Five Myths About the Commoditization of IT

“Commodity” is a bad word among technologists. It implies standardized, unchanging, noninnovative, boring, and cheap. Commodities are misunderstood. This post seeks to dispel some of the myths around the commoditization of IT services (i.e., the cloud).



As a Public Information Officer, Mike was used to communicating health information to the people of his state. When word came that a major hurricane was approaching, he knew people would be facing fear and uncertainty. How could he make sure that the right information got to the right people? How should he react to the public’s negative emotions and false information? Most importantly, how could he help to protect health and lives? Mike knew exactly where to begin: with the principles of CDC’s Crisis and Emergency Risk Communication training.

CDC’s Crisis and Emergency Risk Communication (CERC) program teaches you how to craft messages that tell the public what the situation means for them and their loved ones, and what they can do to stay safe.

CERC provides a set of principles that teach effective communication before, during, and after an emergency. The six principles of CERC are:

  1. Be First
  2. Be Right
  3. Be Credible
  4. Express Empathy
  5. Promote Action
  6. Show Respect

The CDC CERC program has resources, training, and shared learning where you can participate in online training and receive continuing education credits. CERC also has CERC in Action stories from other public health professionals who have successfully applied CERC to an emergency response.

Communicating during an emergency is challenging, but you’re not alone! CERC can help you figure out how to get the right information to the right people at the right time whether you’re dealing with a family emergency or a hurricane.

CERC in Action

[Image: Frozen power line]

PHPR: Health Security in Action

This post is part of a series designed to profile programs from CDC’s Office of Public Health Preparedness and Response.

CERC and CERC training are a service provided by CDC’s Office of Public Health Preparedness and Response’s (OPHPR) Division of Emergency Operations.


Companies are learning the hard way that there’s a downside to data democratization: more data silos.

“On the heels of the consumerization of enterprise software and the growing ubiquity of easy-to-use analytics tools, silos appear to be coming back in all their former collaboration-stifling glory as individual teams and departments pick and choose different tools for different purposes and data sets without enterprise-level oversight,” writes Katherine Noyes in a recent Computerworld article exploring this growing problem.

It’s hard to hear in this age of Big Data and data lakes, but in hindsight, it really isn’t surprising. SaaS made it possible for the lines of business to choose their own applications with nothing more than a credit card. Then Apple tipped the balance on personal devices. Finally, Amazon and others democratized storage and Big Data processing power. It only makes sense that analytics — and more data — would leave the centralizing influence of IT and segregate into silos.



According to a new study conducted by PwC and commissioned by the UK Government to raise awareness of the growing cyber threat, the average cost of the single worst online security breach suffered by big businesses is between £1.46m and £3.14m, up from £600k – £1.15m in 2014. The Information Security Breaches Survey 2015 highlights the rising costs of malicious software attacks and staff related breaches, and illustrates the need for companies to take action. And it is all companies, not just big business, as the research also shows that the equivalent costs for small business is £75k – £311k, up from £65k – £115k a year ago.

It is not just costs that are high, but occurrence too, as the survey also revealed that 90% of large organisations reported they had suffered an information security breach, while 74% of small and medium sized businesses reported the same. The median number of breaches for large organisations was 14 (down from 16 in 2014) while for small businesses it was four (down from six last year). The problem is unlikely to go away as 59% of respondents to the survey expect there will be more security incidents in the coming year.

These figures may not come as a surprise to business continuity professionals who have consistently expressed concern about data breaches, the disruption they can cause and the cost as a consequence. The latest Horizon Scan report published by the Business Continuity Institute revealed that 74% of respondents to a survey expressed concern or extreme concern at the prospect of a data breach occurring and, along with cyber attacks, it has been a top three threat since the survey began.

Attacks from outsiders have become a greater threat for both small and large businesses with 69% of large organisations and 38% of small organisations being attacked by an unauthorised outsider in the last year, although Denial of Service (DoS) attacks have actually decreased with only 30% of large organisations and 16% of small organisations being attacked in such a way. The outsider threat may be high, but when asked about the single worst breach, 50% of organisations stated that it was due to inadvertent human error.

Digital Economy Minister Ed Vaizey said: "The UK’s digital economy is strong and growing, which is why British businesses remain an attractive target for cyber-attack and the cost is rising dramatically. Businesses that take this threat seriously are not only protecting themselves and their customers’ data but securing a competitive advantage."

Andrew Miller, Cyber Security Director at PwC, said: "With 9 out of 10 respondents reporting a cyber breach in the past year, every organisation needs to be considering how they defend and deal with the cyber threats they face. Breaches are becoming increasingly sophisticated, often involving internal staff to amplify their effect, and the impacts we are seeing are increasingly long-lasting and costly to deal with."

Wednesday, 03 June 2015 00:00

Nepal: Risk from the Theoretical to Reality

The Nepal earthquake, which triggered massive destruction from the Himalayan Mountains to India, is more than a tragic story of bad luck. It’s an example of how little we really understand risk and the consequences of our inability to fully absorb events such as earthquakes in Nepal and Haiti and other natural disasters that are so devastating.

Even our perception of these events is skewed.  According to the USGS website, the U.S. government’s official site for monitoring earthquakes, approximately one major earthquake of magnitude 8.0 or greater has occurred each year over the last 24 years.  We tend to discount major disaster in our own lives while believing there is a higher probability that others may suffer calamity.  That may explain why so many say we “never saw that coming” when disaster strikes.  Many of these quakes have occurred with little damage or no deaths, but we remember the ones with a high death toll and quickly dismiss the others.

Given our inability to look into the future, the question is: has our world become more or less risky?  Well, it depends!  For many of us, the perception of risk depends on our own circumstances.  Let’s take two people of similar age but from remarkably different backgrounds.



Financial firms are tasked with a lot of different responsibilities, not the least of which is the responsibility to protect sensitive data and information.  When it comes to the resistance on the part of financial firms choosing to adopt cloud services for data storage and cloud-based file sharing, managed service providers (MSPs) need to preach security as everyone’s top priority.

According to How Cloud is Being Used in the Financial Sector, a recent study from the Cloud Security Alliance (CSA), a large number of security concerns are keeping financial firms on the sidelines looking in at cloud computing. Chief among those concerns is data security apprehension.



Wednesday, 03 June 2015 00:00

Datameer Applies Data Governance to Hadoop

One of the biggest inhibitors to applying Hadoop in any production environment is the general lack of governance tools for IT organizations to use to manage access permissions for the data that resides there.

To address that issue, Datameer today announced it has embedded a raft of data governance tools inside its analytics software that runs natively on Hadoop.

Matt Schumpert, director of product management at Datameer, says that because its software runs in memory as a Hadoop application, responsibility for data governance within Hadoop naturally falls to Datameer.
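
Datameer's actual governance API is not described in the article; generically, the kind of access-permission check such tools enforce looks something like this sketch (all names and paths hypothetical):

```python
# Generic, hypothetical ACL check -- not Datameer's actual API.
ACL = {
    "/data/finance/ledger": {"finance-analysts", "auditors"},
    "/data/hr/payroll": {"hr-admins"},
}

def can_read(user_groups: set, dataset_path: str) -> bool:
    """Allow access only if the user shares a group with the dataset's ACL."""
    allowed = ACL.get(dataset_path, set())
    return bool(user_groups & allowed)

print(can_read({"finance-analysts"}, "/data/finance/ledger"))  # True
print(can_read({"marketing"}, "/data/hr/payroll"))             # False
```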



Last week the Ponemon Institute rolled out the results of yet another Global Cost of Data Breach report and, surprising very few people in the security world, the stats show costs rising again. Sponsored by IBM, the report benchmarked 350 companies across 11 countries. It found that the consolidated total cost of a breach has now risen to $3.8 million, about 23 percent higher than the figure back in 2013. They're compelling statistics for anyone in the managed services world trying to offer customers justification for improved security coverage.

According to the report, there are three big factors that are contributing to the rising costs of breaches.



(TNS) — Staring at an image of your home and neighborhood inundated with 2, 6 or maybe 9 feet of rushing water from a hurricane storm surge can be horrifying.

At least that’s what Pinellas County Emergency Management Director Sally Bishop is hoping.

As the 2015 hurricane season dawns on Monday, Bishop is unveiling her department’s newest tool for storm preparation: a Storm Surge Protector computer application that gives people a realistic view of what can happen when a hurricane comes ashore.



(TNS) — It was the year they ran out of names.

The hurricane season that began 10 years ago Monday generated so many storms — 27 in all — that, for the first time since officials started using names in 1953, they went through a list of 21 names and had to start on the Greek alphabet: from Arlene on June 9, just nine days in, to Zeta, which finally fizzled on Jan. 6, 2006, a month after that manic 6-month season officially ended.

Right in the middle was Katrina, which raised serious issues that had little to do with meteorology. And for South Florida, so late in the season that its cleanup competed with Halloween preparations, was Wilma. It brought billions in damage, much of that to Palm Beach County, still recovering from two hurricanes three weeks apart in the previous year's "mean season."



Hackers illegally accessed the personal information of 104,000 taxpayers this spring, according to the U.S. Internal Revenue Service (IRS).

And as a result, the IRS tops this week's list of IT security newsmakers to watch, followed by Woolworths, Google (GOOG) and Kaspersky Lab.

What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:



(TNS) — While the global fracking boom has stabilized North America’s energy prices, Chicago — America’s third largest city and the busiest crossroads of the nation’s railroad network — has become ground zero for the debate over heavy crude moved by oil trains.

With the Windy City experiencing a 4,000 percent increase in oil-train traffic since 2008, Chicago and its many densely populated suburbs have become a focal point as Congress considers a number of safety reforms this year.

Many oil trains are 100 or more cars long, carrying hydraulically fracked crude and its highly explosive, associated vapors from the Bakken region of Montana, North Dakota, Saskatchewan, and Manitoba.



WASHINGTON – Today, the Federal Emergency Management Agency (FEMA) urges residents across the nation to prepare for the 2015 Atlantic Hurricane season, which begins today and runs through November 30. 

Hurricanes and tropical systems can cause serious damage in both coastal and inland areas. Their hazards can come in many forms, including storm surge, heavy rainfall, inland flooding, high winds, and tornadoes. To prepare for these powerful storms, FEMA is encouraging families, businesses, and individuals to be aware of their risks, know their sources of reliable information, prepare their homes and workplaces, and be familiar with evacuation routes.

“One hurricane hitting where you live is enough to significantly disrupt your life and make for a very bad hurricane season,” said FEMA Administrator Craig Fugate. “Every person has a role to play in being prepared – you should know if you live or work in an evacuation zone and take time now to learn that route so you’re prepared to protect yourself and your family from disaster.”

This year, FEMA is placing an emphasis on preparing communities to understand the importance of evacuations, which are more common than many people realize. When community evacuations become necessary, local officials provide information to the public through the media. In some circumstances, other warning methods, such as text alerts, emails, or telephone calls, are used. Information on evacuation routes and places to stay is available at www.ready.gov/evacuating-yourself-and-your-family.

Additionally, knowing and practicing what to do in an emergency, in advance of the event, can make a difference in the ability to take immediate and informed action, and enable you to recover more quickly. To help communities prepare and enhance preparedness efforts nationwide, FEMA is offering two new products.

  • FEMA launched a new feature to its App, available for free in the App Store for Apple devices and Google Play for Android devices. The new feature enables users to receive weather alerts from the National Weather Service for up to five locations anywhere in the United States, including U.S. territories, even if the mobile device is not located in the weather alert area. The app also provides information on what to do before, during, and after a disaster in both English and Spanish.
  • The Ready campaign and America’s PrepareAthon! developed a social media toolkit that you can download and share with others at www.ready.gov/ready2015. The kit contains information on actions communities can take to practice getting ready for disasters.

While much attention is often given to the Atlantic Hurricane Season, tropical systems can affect other U.S. interests as well. The Eastern Pacific Hurricane Season runs from May 15 through November 30. The Central Pacific Hurricane Season runs from June 1 to November 30. To learn more about each hurricane season and the geographical areas they may affect, visit www.noaa.gov.

Additional tips and resources:

  • Learn how to prepare for hurricane season at www.ready.gov/hurricanes
  • Talk with your family today about how you will communicate with each other during a significant weather event when you may not be together or during an evacuation order. Download the family communications plan at www.ready.gov/family-communications.
  • For information on how to create an emergency supply kit, visit www.ready.gov/build-a-kit
  • Consider how you will care for pets during an evacuation by visiting www.ready.gov/caring-animals
  • Use the Emergency Financial First Aid Kit (EFFAK) to identify your important documents, medical records, and household contracts. When completing the kit, be sure to include pictures or a video of your home and your belongings and keep all of your documents in a safe space. The EFFAK is a joint publication from Operation Hope and FEMA. Download a copy at www.ready.gov/financial-preparedness.
  • If you own or manage a business, visit www.ready.gov/business for specific resources on response and continuity planning.
  • The National Weather Service proactively sends free Wireless Emergency Alerts, or WEAs, to most cell phones for hurricanes, tornadoes, flash flooding and other weather-related warnings. State and local public safety officials may also send WEAs for severe or extreme emergency conditions. If you receive a Wireless Emergency Alert on your cell phone, follow the instructions, take protective action and seek additional information from local media. To determine if your wireless device can receive WEA alerts contact your wireless carrier for more information or visit www.ctia.org/WEA.


FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.

The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

Last week, we learned that cybercriminals undermined the identity verification of the IRS’ Get Transcript app and gained access to the tax returns of 104,000 US citizens, so it’s only fitting that in this analyst spotlight we interview one of the team’s leading analysts for identity and access management (IAM): VP and Principal Analyst Andras Cser. Andras consistently produces some of the most widely read research, not just for our team but across all of Forrester, and clients seek his insight across a number of coverage areas beyond IAM, including cloud security, enterprise fraud management, and secure payments. As the tallest member of our S&R team at 6’5”, Andras also provides guidance to clients on the emerging fields of height intel and altitude management.




Before joining Forrester, Andras worked as a security architect at Netegrity and then CA Technical Services. He also worked in a number of technical and sales capacities at Sun Microsystems prior to joining Netegrity. In his roles on the vendor-side, he architected and implemented IAM and provisioning solutions at Fortune 500 companies.


Listen to this month’s podcast below to hear Andras talk about his most common client questions, counterintuitive insights, and vendors to watch. And as you can tell from our analyst interview, Andras prides himself on being clear and concise.



‘Agile’ is still a buzzword. That’s quite a feat in today’s high-speed business and technological environments, where concepts date so rapidly. The original ‘Manifesto for Agile Software Development’ appeared in 2001, some 14 years ago. Since then, the word and the concept it labels have been applied to different business areas, including marketing and supply chain operations. Recently, it has also cropped up in the phrase ‘agile recovery’. But is this taking the ‘agile concept’ too far?



BSI is seeking feedback on the draft BS 12999 standard. ‘BS 12999 Damage Management - Stabilization, mitigation, and restoration of properties, contents, facilities and assets following incident damage’ is intended to provide recommendations to individuals and organizations involved in carrying out damage management. It will be applicable to domestic, commercial and public buildings and includes the following main contents:

  • Introduction
  • Scope
  • Terms, definitions and abbreviations
  • Damage incident instructions, intake and response planning
  • On-site damage assessment
  • Stabilization
  • Damage scoping
  • Damage recovery and restoration
  • Completion sign-off and handover.

The deadline for comments is June 30th 2015.

Click here to read the draft standard and take part in the consultation.

Downtime to the broadband connection is now one of the major threats facing today’s organizations, so why are many businesses not considering resilience when purchasing broadband or looking at how broadband failure fits into the disaster recovery plan? Mike van Bunnens, managing director, Comms365, explores the issue.

What is the most important consideration for a business buying a new broadband connection? From the way many businesses are making the investment decision, the answer appears to be cost, with most expecting to achieve the same rock-bottom prices on offer in the domestic market. But with more and more businesses running VoIP and cloud-based applications, the choice of broadband connection is critical: any glitch in service will have a massive knock-on effect on productivity and customer relationships. So why are businesses not considering resilience, or how broadband failure fits into the disaster recovery plan? Why are many not even ascertaining the speed and quality of the broadband options before moving to new office premises?

A high quality, resilient broadband connection is now one of the most critical aspects of any business’ set up. So why are business owners still applying domestic thinking to business critical communications?



What does the phrase “needle in a haystack” mean to you? For many, it implies the impossible or something that can’t be done. As an MSP, don’t you strive to do the seemingly impossible for your customers? It sure will endear them to you.

One feature that can help you triumph over “needle in a haystack” scenarios is granular recovery. Think back to a customer that got hit with CryptoLocker or perhaps had a rogue employee who deleted important files. No doubt your customers had that empty feeling that their valuable data was unrecoverable. With granular recovery, restoring it is not only possible but easy: you can search documents, emails and attachments by keyword and restore exactly what you need. Now, won’t that impress your customers?
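
As a toy illustration of the keyword-search idea behind granular recovery (a hypothetical catalog, not any particular vendor's implementation):

```python
# Toy sketch of keyword search over a backup catalog -- hypothetical, not a
# specific vendor's granular-recovery implementation.
backup_catalog = {
    "reports/q3-forecast.docx": "quarterly revenue forecast for the board",
    "mail/inbox/0142.eml": "re: vendor contract renewal terms",
    "sheets/payroll.xlsx": "payroll summary by department",
}

def find_items(keyword: str) -> list:
    """Return backed-up items whose indexed text contains the keyword."""
    kw = keyword.lower()
    return [path for path, text in backup_catalog.items() if kw in text.lower()]

print(find_items("contract"))  # ['mail/inbox/0142.eml'] -- restore just this
```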



Houston, the fourth-largest city in the United States, has been struggling through extreme storms and some of the worst flooding in years over the past few days. Roadways were blocked, drivers were left stranded, and homes were completely destroyed due to the flash flooding.

More than 1,000 residents have been displaced and area businesses have come to a screeching halt. Once the storms and flash flooding started, I reached out to some of my clients in the area to make sure they were okay and find out what they were doing to help affected individuals and businesses.



According to the 2015 Makovsky Wall Street Reputation Study, released Thursday, 42% of U.S. consumers believe that failure to protect personal and financial information is the biggest threat to the reputation of the financial firms they use. What’s more, three-quarters of respondents said that unauthorized access to their personal and financial information would likely lead them to take their business elsewhere. In fact, security of personal and financial information matters far more to customers than a financial services firm’s ethical responsibility to customers and the community (23%).

Executives from financial services firms seem to know this already: 83% agree that the ability to combat cyber threats and protect personal data will be one of the biggest issues in building reputation in the next year.

The study found that this trend is already having a very real impact: 44% of financial services companies report losing 20% or more of their business in the past year due to reputation and customer satisfaction issues. When asked to rank the issues that negatively affected their company’s reputation over the last 12 months, the top three “strongly agree” responses in 2015 from communications, marketing and investor relations executives at financial services firms were:



Monday, 01 June 2015 00:00

2015 Hurricane Season Opener

By now you’ll have read the latest forecasts calling for a below-average Atlantic hurricane season.

NOAA, Colorado State University’s Tropical Meteorology Project, North Carolina State University, WSI and London-based consortium Tropical Storm Risk all seem to concur in their respective outlooks that the 2015 hurricane season, which officially begins June 1, will be well below the norm.

TSR, for example, predicts Atlantic hurricane activity in 2015 will be about 65 percent below the long-term average. Should this forecast verify, TSR noted that it would imply that the active phase for Atlantic hurricane activity which began in 1995 has likely ended.

Still, it’s important to note that these forecasts come with the caveat that all predictions are just that, and the likelihood of issuing a precise forecast in late May is at best moderate. In other words, uncertainties remain.