
Industry Hot News


Recently, I checked out all the iOS apps available from my home state, Kentucky.  I wasn’t impressed.

The parks system has a nice app — the same one available for other states, thanks to a private company. In fact, all of the apps I found were actually produced by private companies, and even so, they were pretty unimpressive. Tourism, for example, has collaborated on an app that basically gives you a PDF of its main publication.

If mobile apps are the Internet in small, Kentucky seems to be making the same mistakes I saw it make back in 2000, when it was building a web presence. There’s no clear strategy of prioritizing critical services first.



Liquid cooling is gaining, well, steam (sorry) in the data center as compute densities creep up and organizations look for ways to keep temperatures within tolerance without busting the budget on less-efficient air-handling infrastructure.

But there are a number of approaches to liquid cooling, ranging from running simple cold water in and around the data center to full immersion of chips and motherboards in non-conducting dielectric solutions.

According to Research and Markets, the data center cooling market as a whole is on pace to hit compound annual growth of 6.67 percent between now and 2019. The report summary available on the web does not break out the performance of specific cooling categories, but it does note that high adoption of liquid-immersion technologies is one of the key growth factors. As cloud computing and data analytics ramp up in the enterprise, data infrastructure across the board will have to provide greater performance within small, most likely modular, footprints, which means more heat and a need for more direct ways to whisk it away from sensitive data equipment.



Wednesday, 01 July 2015 00:00

The Data Lake as an Exploration Platform

The data lake is an attractive use case for enterprises seeking to capitalize on Hadoop’s big data processing capabilities. This is because it offers a platform for solving a major problem affecting most organizations: how to collect, store, and assimilate a range of data that exists in multiple, varying, and often incompatible formats strung out across the organization in different sources and file systems.

In the data lake scenario, Hadoop serves as a repository for managing multiple kinds of data: structured, unstructured, and semistructured. But what do you do with all this data once you get it into Hadoop? After all, unless it is used to gain some sort of business value, the data lake will end up becoming just another “data swamp” (sorry, couldn’t resist the metaphor). For this reason, some organizations are using the data lake as the foundation for their enterprise data exploration platform.
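As a toy illustration of what "assimilating multiple, varying, and often incompatible formats" means in practice, the sketch below uses plain Python in place of Hadoop-scale tooling, with invented field names, to normalize records from a CSV export and a JSON feed into one common shape that an exploration tool could then query:

```python
import csv
import io
import json

def normalize(record, source):
    """Map a raw record from any source into one common schema."""
    return {
        "id": str(record.get("id") or record.get("customer_id")),
        "name": record.get("name") or record.get("full_name"),
        "source": source,
    }

# Structured data: a CSV export (simulated here with an in-memory string).
csv_data = io.StringIO("customer_id,full_name\n1,Ada Lovelace\n2,Alan Turing\n")
csv_records = [normalize(row, "crm_csv") for row in csv.DictReader(csv_data)]

# Semi-structured data: JSON events from another system.
json_data = '[{"id": 3, "name": "Grace Hopper"}]'
json_records = [normalize(rec, "events_json") for rec in json.loads(json_data)]

# The "lake": one uniform collection that exploration tools can work against.
lake = csv_records + json_records
print(len(lake))  # 3 records, now in one consistent shape
```

The point of the sketch is the normalization step: until incompatible sources are mapped into a shared schema, a data lake is just co-located storage.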



TNS - Connecticut’s emergency dispatchers in the not-too-distant future will be fielding not only 911 calls and texts, but perhaps even viewing photos and videos of crimes or accidents.

The state’s changeover to the Next Generation 911 system has started at 10 pilot sites across the state, including locally at the Mashantucket Pequot Public Safety Department and Valley Shore Emergency Communications in Westbrook.

All of the state’s 104 public safety answering points are scheduled for a changeover by next year.



Traffic video cameras were installed to keep the roads moving by letting transportation departments see trouble spots, dispatch assistance and arrange detours as quickly as possible. But this wealth of real-time video intelligence has proven to be an exceptional resource for emergency operations centers (EOCs) across the United States.

“Live traffic video substantially boosts our situational awareness,” said Michael Walter, public information officer with the Houston, Texas, Office of Emergency Management. “It makes a real difference to how we do our jobs.”



The web-based system used for federal background investigations for employees and contractors has been suspended after “a vulnerability” was detected, the Office of Personnel Management (OPM) announced Monday.

OPM has been the subject of intense congressional probing following the cyber attack on the personnel records of at least 4.2 million current and former federal employees. The decision to suspend the agency’s “e-QIP” system, however, is not directly related to that hack or to another previously announced breach of a security clearance database.

“The actions OPM has taken are not the direct result of malicious activity on this network, and there is no evidence that the vulnerability in question has been exploited,” an OPM statement said. “Rather, OPM is taking this step proactively, as a result of its comprehensive security assessment, to ensure the ongoing security of its network.”



What does it take to get PC or server backups to work properly and bring computers back to operational status? Correctly stored data files are a critical component for most organisations. However, on their own they won’t let you get back to business. You’ll also need the applications that generated those data files and you’ll need the associated configuration and profile information. That includes user and account-specific information and any purpose-built software modules to link your system to others in your enterprise. The smart solution would be to back up all of this information within the same process.
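As a minimal sketch of that "back it all up within the same process" idea, the hypothetical example below bundles data files and their associated configuration into a single archive in one pass. The directory layout and file names are invented for illustration; a real backup would draw these paths from your own environment:

```python
import json
import pathlib
import tarfile
import tempfile

# Invented stand-ins for a real machine's data and configuration.
root = pathlib.Path(tempfile.mkdtemp())
(root / "data").mkdir()
(root / "data" / "report.txt").write_text("quarterly figures")
(root / "config").mkdir()
(root / "config" / "app.json").write_text(json.dumps({"user": "alice"}))

# One archive capturing the data files AND the configuration beside them,
# so a restore brings back a working system, not just loose files.
archive = root / "backup.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    for section in ("data", "config"):
        tar.add(root / section, arcname=section)

with tarfile.open(archive) as tar:
    names = tar.getnames()
print(sorted(names))
```

The design choice the paragraph argues for is visible in the loop: data and configuration travel through the same backup process, so neither can silently fall out of date relative to the other.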



AUSTIN, Texas – Texans will have the opportunity to assist with the state’s disaster recovery from the severe storms, tornadoes, and flooding that occurred from May 4 to June 19. Dozens of qualified Texans will be offered temporary jobs as local hires of the Federal Emergency Management Agency (FEMA) in its Austin, Denton, and Houston offices.

FEMA has partnered in this venture with the Texas Workforce Commission. Those interested may go to http://www.workintexas.com and create an account. Once logged in, click on “Search All Jobs” and type “FEMA” into the search bar.

Currently, there are six job categories posted:

  • Administrative/Clerical
  • Customer service
  • Logistics
  • Report writing
  • Switchboard/Help desk
  • Technical/Architecture/Engineering

FEMA positions with detailed job descriptions will remain posted through July 24 or until the jobs are filled.

Candidates must be 18 years of age or older and must be U.S. citizens. Qualified applications will be forwarded to FEMA staff, who will select candidates for interviews. Selected candidates should have a valid government identification card, such as a driver’s license or military ID. Candidates will be required to complete a background investigation, which includes fingerprinting, and to provide additional ID, such as a Social Security card, birth certificate or passport. The hiring process may take up to 15 days from the date of application.

FEMA is committed to employing a highly qualified workforce that reflects the diversity of our nation. All applicants will receive consideration without regard to race, color, national origin, sex, age, political affiliation, non-disqualifying physical handicap, sexual orientation, and any other non-merit factor. The federal government is an Equal Opportunity Employer.

More positions may be posted on the TWC webpage as the disaster recovery continues.

All are encouraged to visit https://www.fema.gov/disaster/4223 for news and information about this disaster.


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY, call 800-462-7585.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.  Follow us on Twitter at https://twitter.com/femaregion6.

FEMA’s temporary housing assistance and grants for childcare, medical, dental expenses and/or funeral expenses do not require individuals to apply for an SBA loan. However, those who receive SBA loan applications must submit them to SBA to be eligible for assistance that covers personal property, transportation, vehicle repair or replacement, and moving and storage expenses.

Visit www.fema.gov/texas-disaster-mitigation for publications and reference material on rebuilding and repairing safer and stronger.

BI is about to take a big step forward, and a major driver for new capabilities will be self-service data integration capabilities, according to Jamil Rashdi, a senior infrastructure development manager.

Rashdi, a veteran IT leader and cloud infrastructure architect, takes a look at this year’s business intelligence self-service trends. Of course, BI is by its nature self-serve, but as he points out, that’s primarily been limited to simpler data discovery functions such as search, dashboards and visualization tools.

New advancements are pushing well beyond these self-serve features, he writes. Advancements in both BI and analytics solutions “are significantly broadening the scope of self-service BI” to include data preparation and manipulation tools — including ETL and data wrangling, or lightweight tools for transforming, integrating and cleansing data.
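A lightweight data-wrangling pass of the kind described, trimming, normalizing and aggregating messy records before analysis, might look like the toy Python sketch below. The field names and sample rows are invented for illustration; real self-service tools wrap this sort of logic in a visual interface:

```python
# Invented "dirty" extract: inconsistent casing, stray whitespace, a missing value.
sales = [
    {"customer": " Acme Corp ", "amount": "1200"},
    {"customer": "acme corp", "amount": "800"},
    {"customer": "Globex", "amount": None},  # dirty row: missing amount
]

def cleanse(row):
    """Trim and normalise the customer key; coerce the amount to a number."""
    return {
        "customer": row["customer"].strip().title(),
        "amount": float(row["amount"] or 0),
    }

# Integrate: aggregate the cleansed rows by customer.
totals = {}
for row in map(cleanse, sales):
    totals[row["customer"]] = totals.get(row["customer"], 0) + row["amount"]

print(totals)  # {'Acme Corp': 2000.0, 'Globex': 0.0}
```

Note how the two differently cased "Acme" rows only merge after cleansing; that normalization step is exactly what separates data wrangling from a plain sum.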



The lifecycle of any given technological innovation follows a fairly standard path: proposal, development, deployment and then either success or failure based on cost, efficacy, execution or a number of other factors.

With the cloud, however, we seem to be diverging from this pattern, or at the very least the process is being drawn out due to the radical and fundamental way it affects the entire data stack, and indeed the entire business model.

The private cloud in particular seems to be caught in a no-man’s land of doubt/certainty, confusion/clarity, and ongoing debate between those who support it to the nines and those who chalk it up to so much wishful thinking. On any given day, a web search of the terms “private cloud” can produce the following results:



Hershey Entertainment and Resorts, the company that owns Hershey Park, is investigating a possible data breach.

And as a result, Hershey Park tops this week's list of IT security news makers, followed by Damballa, Malwarebytes and The Hartford.

What can managed service providers (MSPs) and their customers learn from these IT security news makers? Check out this week's list of IT security stories to watch to find out:



TNS - Miss Piggy is flying again.

But even as the lumbering P-3 Orion aircraft takes part in its first mission since getting two new engines in a life-extending overhaul, the National Oceanic and Atmospheric Administration is looking for the next generation of hurricane hunting aircraft.

Miss Piggy and NOAA’s other Orion, named Kermit, are stationed at MacDill Air Force Base. Each plane was put into service during the mid-70s and has flown more than 10,000 hours, into more than 80 hurricanes. They are long, grueling missions, often subjecting the crew to zero gravity as the aircraft lurch up and down in buffeting winds. With the pounding they’ve taken, the planes need the $42 million refurbishing to stay on the job during the June through November hurricane season and beyond.

But even with new engines, new wings and upgraded avionics and scientific instrumentation, they won’t fly forever. More like 15 years.



The Online Trust Alliance (OTA) recently released its 2015 Online Trust Audit & Honor Roll. For the report, OTA analyzed approximately 1,000 websites in three categories: consumer protection, privacy and security. According to a release, the seventh annual audit now includes websites of the top 50 leading Internet of Things device makers, wearable technologies and connected home products.

It’s tough to make the honor roll; that’s what makes it special. But then, this is the type of honor roll you want companies to make, especially if it is a company you do business with (or if it is your website being evaluated). Unfortunately, nearly half of all of the websites failed. Even more alarming was that the new category of IoT had an even more dismal showing, with a 76 percent failure rate.

In an ITProPortal article, Craig Spiezle, executive director and president of OTA, stated:



Sure, the average consumer is worried about storing their data in the cloud or sharing it through cloud-based file sharing, but how can managed service providers (MSPs) respond to an enterprise when even their own IT professionals are worried about the state of security in the public cloud?

In 2011, Symantec and the National Cyber Security Alliance released a study revealing that cyber attacks cost small- and medium-sized businesses an average of $188,242. Perhaps even more alarming, research conducted by Gartner shows that nearly 90 percent of the companies that were victimized by a major data loss went out of business within six months of the attack.

One-third of the 1,000 IT professionals responding to a Bitglass survey said they had experienced more security breaches with the public cloud than with their internal information technology function.



Monday, 29 June 2015 00:00

Preparing for the Unexpected

Stuff happens. We may not like it, we may even consider it unfair, but it is a fact of life. In the business environment, the question is: Are management and the Board prepared to respond?

Two years ago, I had the opportunity to talk with the Chairman of the Board for a major institution. He observed he had talked with some of his peers about recurring situations across America that had caused a reputation hit. There was a train of thought in this discussion that there had to be a connection between an organization’s risk assessment and its crisis management. In other words, should the risk assessment process inform the organization’s crisis response team?

It’s a fair question. And it’s important. Even the proudest organizations and brands are not immune to being called out by the unexpected.



(TNS) — Philadelphia’s security preparations for Pope Francis’ 48-hour visit have been going on for more than a year. For Ignazio Marino, mayor of Rome, papal security is an everyday issue.

“It’s pretty tough because the pope is a terrific person, he attracts millions of people, so traffic and security is a huge, huge issue — particularly in these days and time with possibility of terroristic attacks, we are always concerned,” Marino said Thursday outside his office in Rome.

The final day of the Philadelphia delegation’s trip to Rome focused largely on getting input from Roman and Vatican City authorities on security and infrastructure for large-scale events featuring the pope. A separate news conference discussed the programming for the World Meeting of Families.



Information overload. Big data. Social media. Mobile computing. Bring-your-own-device policies. Cloud computing. New technologies. Records and information management continues to struggle with fundamental and, to a degree, existential challenges. The challenges to records and information management created by today’s technology are unprecedented and ever changing. Executives responsible for ethics and compliance must now address growing complexities in the management of records and information within their organizations. They must identify and implement new tools and techniques to match the challenges of today and the future while creating a culture of compliance in the records and information management sphere that aligns with the needs of 21st century business.

The Definition of a Record Is Changing: Records Are Created and Stored Differently

The vast majority of today’s business is fueled by, and conducted using, technology. Business records are almost exclusively becoming electronic and are generated by a wide variety of ever-changing devices, systems and applications. Records managers who have historically employed retention schedules to detail appropriate retention periods and records disposition actions are faced with adjusting their thinking to accommodate new and different types of records. The volume of data and the proliferation of that data across many platforms, repositories and devices makes capturing, preserving, managing and eventually disposing of records exceedingly difficult.



Recovery is the least understood (and least studied) part of the emergency management cycle with little systematic information about tracking progress geographically and over an extended time. Unfortunately, once the disaster field offices close in local communities, recovery activity wanes. For hard-hit communities, recovery is a long-term process of rebuilding lives, livelihoods and the sense of place that once characterized the community. Recovery takes months to years in some places and decades for other communities.

Hurricanes Katrina and Sandy afforded an opportunity to conduct a natural experiment to compare recovery from two different storms and their effects on two different locales: coastal New Jersey in the case of Sandy and coastal Mississippi for Katrina. While the storms were different in magnitudes and timing, each resulted in significant storm surge impacts affecting a large section of the coastline. For New Jersey, storm surge flooding occurred from Upper New York Bay south to Delaware Bay, ranging between eight feet at Sandy Hook to four feet in Downe Township. The entire Mississippi coastline was affected with storm surges ranging from 28 feet nearest to Katrina’s track close to the border with Louisiana and Bay St. Louis to 17 feet farther to the east in Pascagoula.



The cybersecurity insurance industry is booming, with demand for this specialty coverage vastly outpacing any other emerging risk line, according to a new survey by London-based broker RKH Specialty. In fact, 70% of the insurance professionals surveyed listed cyber as the top casualty exposure.

The brokers, agents, insurers and risk managers RKH queried after April’s RIMS 2015 conference said their top casualty concerns after cyber are product recall and drones (11% each), with the remaining concerns, including e-cigarettes, autonomous vehicles and telematics, totaling only 8 percent.



Public sector becomes top target for malware attacks in the UK

Public sector organisations are the number one target for malware attacks in the UK. This is according to the 2015 Global Threat Intelligence Report (GTIR) – an analysis of over six billion security attacks in 2014 – announced by NTT Com Security, the global information security and risk management company.

While financial services continues to represent the number one targeted sector globally with 18% of all detected attacks, in the UK market nearly 40% of malware attacks were against public sector organisations. This was three times more than the next sector, insurance (13%), and nearly five times more than the media and finance sectors (both 9%).

However, according to the GTIR, attacks against business and professional services organisations saw a sharp rise this year from 9% to 15% globally, while this sector also accounted for 15% of malware observed. Typically, these businesses are seen as being much softer than other targets, but due to their connection and relationship with much larger organisations, are high value targets for attackers. In the UK, this sector represented 6% of all malware attacks.

It is perhaps interesting to note that the Business Continuity Institute's latest Horizon Scan report identified that business continuity professionals in the financial and insurance sector expressed greater concern at the prospect of a cyber attack occurring. 56% of respondents to a global survey who work in the financial and insurance sector expressed extreme concern compared to only 34% and 30% in the professional services sector and public administration sector respectively.

Stuart Reed, Senior Director, Global Product Marketing at NTT Com Security, comments: “The fact that public sector figures are so high compared to other sectors in the UK is due largely to the value of the data that many of these organisations have, which makes them attractive and highly prized targets for malware attacks. While the level of threat may vary from organisation to organisation, they all have information that would be of interest to cyber criminals."

Reed added: “It’s also interesting that we have seen some campaigns specifically targeting business and professional services. It’s possible that companies in this sector may not have the equivalent security resources and skills in-house that many other larger companies do, yet they potentially yield high value for attackers as both an end target and a gateway target to strategic partners.”

Sites in northern and central California and Montana selected to showcase climate resilience approach


The Department of the Interior (DOI), Department of Agriculture (USDA), Environmental Protection Agency (EPA), National Oceanic and Atmospheric Administration (NOAA), and the U.S. Army Corps of Engineers (USACE) today recognized three new collaborative landscape partnerships across the country where Federal agencies will focus efforts with partners to conserve and restore important lands and waters and make them more resilient to a changing climate. These include the California Headwaters, California’s North-Central Coast and Russian River Watershed, and Crown of the Continent.

Building on existing collaborations, these Resilient Lands and Waters partnerships – located in California and Montana/British Columbia – will help build the resilience of valuable natural resources and the people, businesses and communities that depend on them in regions vulnerable to climate change and related challenges. They will also showcase the benefits of landscape-scale management approaches and help enhance the carbon storage capacity of these natural areas.

The selected lands and waters face a wide range of climate impacts and other ecological stressors related to climate change, including drought, wildfire, sea level rise, species migration and invasive species. At each location, Federal agencies will work closely with state, tribal, and local partners to prepare for and prevent these and other threats, and ensure that long-term conservation efforts take climate change into account.

The Russian River meanders through Mendocino and Sonoma counties in Northern California mountains and meets the Pacific Ocean at Jenner, California. Credit: NOAA


These new Resilient Lands and Waters sites follow President Obama’s announcement of the first set of Resilient Landscape partnerships (PDF, 209K) (southwest Florida, Hawaii, Washington and the Great Lakes region) at the 2015 Earth Day event in the Everglades.

Efforts in all Resilient Lands and Waters regions are relying on an approach that addresses the needs of the entire landscape. Over the next 18 months, Federal, state, local, and tribal partners will work together in these landscapes to develop more explicit strategies and maps in their programs of work. Developing these strategies will benefit wildfire management, mitigation investments, restoration efforts, water and air quality, carbon storage, and the communities that depend upon natural systems for their own resilience. By tracking successes and sharing lessons learned, the initiative will encourage the development of similar resilience efforts in other areas across the country.

For example, in the California Headwaters, an area that contributes greatly to the state’s water supply, the partnership will build upon and unify existing collaborative efforts to identify areas for restoration that will help improve water quality and quantity, promote healthy forests, and reduce wildfire risk. In California’s North-Central Coast and Russian River Watershed, partners will explore methods to improve flood risk reduction and water supply reliability, restore habitats, and inform coastal and ocean resource management efforts. In Montana, extending into British Columbia, the Crown of the Continent partnership will focus on identifying critical areas for building habitat connectivity and ecosystem resilience to help ensure the long-term health and integrity of this landscape.

"From the Redwoods to the Rockies to the Great Lakes and the Everglades, climate change threatens many of our treasured landscapes, which impacts our natural and cultural heritage, public health and economic activity," said Secretary of the Interior Sally Jewell. “The key to making these areas more resilient is collaboration through sound science and partnerships that take a landscape-level approach to preparing for and adapting to climate change.”

“As several years of historic drought continue to plague the West Coast, there is an enormous opportunity and responsibility across federal, state and private lands to protect and improve the landscapes that generate our most critical water supplies,” said Secretary of Agriculture Tom Vilsack. “Healthy forest and meadows play a key role in ensuring water quality, yield and reliability throughout the year. The partnerships announced today will help us add resiliency to natural resource systems to cope with changing climate patterns.”

“Landscape-scale conservation can help protect communities from climate impacts like floods, drought, and fire by keeping watersheds healthy and making natural resources more resilient,” said EPA Administrator Gina McCarthy. “EPA is proud to take part in the Resilient Lands and Waters Initiative.”

“Around the nation, our natural resources and the communities that depend on them are becoming more vulnerable to natural disasters and long-term environmental change," said Kathryn Sullivan, Ph.D., NOAA Administrator. “The lands and waters initiative will provide actionable information that resource managers and decision makers need to build more resilient landscapes, communities and economies."

"The Army Corps of Engineers is bringing our best scientific minds together to participate in this effort. We are working to ensure that critical watersheds are resilient to changing climate,” said Jo-Ellen Darcy, Assistant Secretary of the Army for Civil Works. “The Army Corps’ participation in this effort along with our local, state and federal partners demonstrates our commitment to implement President Obama's Climate Action Plan in all of our missions."

The Resilient Lands and Waters initiative is a key part of the Administration’s Climate and Natural Resources Priority Agenda (PDF, 8.9MB), a first-of-its-kind, comprehensive commitment across the Federal Government to support resilience of America’s vital natural resources. It also directly addresses Goal 1 of the National Fish, Wildlife and Plants Climate Adaptation Strategy to conserve habitat that supports healthy fish, wildlife, and plant populations and ecosystem functions in a changing climate.

When President Obama launched his Climate Action Plan (PDF, 319K) in 2013, he directed Federal agencies to identify and evaluate approaches to improve our natural defenses against extreme weather, protect biodiversity and conserve natural resources in the face of a changing climate. The Climate Action Plan also directs agencies to manage our public lands and natural systems to store more carbon.

Learn more about the three selected landscapes (California Headwaters, California’s North-Central Coast and Russian River Watershed, and Crown of the Continent).

NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.

Security experts have a lot of concerns and added responsibilities as connected devices, large and small, burrow their way ever deeper into people’s lives. Nowhere is the increasing need for oversight greater than in health care.

This week, the Workgroup for Electronic Data Interchange (WEDI) released a primer on how a health care organization should protect itself in cyberspace. In its story on the primer, Health IT Security carries a statement from WEDI President and CEO Devin Jopp illustrating the acceleration of health care compromises. From 2010 to 2014, 37 million health care records were compromised in breaches. That sounds like a lot, until it is considered that there were 99 million compromises in just the first quarter of this year. The primer has sections on the lifecycle of cyberattacks and defense, the anatomy of an attack, and ways of “building a culture of prevention.”

Those attacks were aimed at gathering patients’ financial and related data. Another health care vulnerability – and one that is in many ways even more frightening – is attacking connected health care devices in order to hurt people. For some reason, there are people in this world who find it okay to interfere with a heart patient’s pacemaker.



While managed service providers (MSPs) are certainly well-versed in the areas of cloud-based file sharing and data storage, it pays to be just as familiar with some of the areas of interest of your clients. As MSPs see more healthcare companies migrating their services to the cloud – whether due to a relaxation of restrictions or a decision to evolve – the need for familiarity in this potentially lucrative market is as important as ever.

When the Health Insurance Portability and Accountability Act (HIPAA) was enacted in 1996, data security and privacy on the internet were not exactly the big concerns of the day. Then again, the MSP business model we know and love today didn’t even exist.

Fast forward about 20 years – and through a couple of generations of computing platforms – and HIPAA compliance has become a hot topic as health care organizations, at long last, begin to crawl out from under mountains of paper and into the digital world.



Wednesday, 24 June 2015 00:00

Shape Your Risk Culture

Today, institutions have become sophisticated in establishing an enterprise risk management infrastructure that includes risk management departments, appetite, framework, policies, limits, models, governance, key risk indicators, reporting and processes. Organizations are set up to manage risks of many kinds: strategic, business, market, credit, counterparty, earnings, capital, liquidity, concentration, legal, operational, model, reputational, funding, and even emerging risks. Effective risk management, however, is not just about the infrastructure; it is also about the people. A major shortcoming that many institutions can improve on is putting boundaries on what is an acceptable “risk culture.” Risk culture has been a cause of many disastrous financial failures, including the LIBOR rate manipulation, the collapse of Bear Stearns, and Madoff’s Ponzi scheme. It is a critical success factor for risk management, and it has become a buzz term on the radar of many institutions, including hedge funds, banks, insurance companies, corporations, and regulators.

For example, the Financial Conduct Authority (FCA) is a regulatory body created in April 2013 as one of the successors to the United Kingdom’s Financial Services Authority. It has the power to regulate conduct related to the marketing of financial products and to investigate organizations and individuals.

What is risk culture? See Figure 1. Ultimately it is behavior that is influenced by ethics, values, and beliefs of people in an organization that collectively supports the risk management of an organization. It is then easy to understand how well it supports risk management, which should be driven by five risk culture conditioning elements: leadership, risk knowledge, risk understanding, risk transparency, and reward system.



An unusual combination of big and small tech companies is working on ways to accelerate the development of cloud computing technologies.

On Tuesday, an organization called Docker announced that its commercial software, used to create and maintain other software applications easily for millions of computers and mobile phones, would become generally available.

The commercial product follows an initial open source release of Docker, and it includes among other things a way that companies can securely store and share their software. In an unusually broad partnership, the product would be available not just from Docker, but from Amazon’s cloud computing business, AWS; IBM; and Microsoft.



Continuous monitoring on its own is great for detecting and remediating security events that may lead to breaches. But when it comes to measuring and comparing the effectiveness of our security programs, simple monitoring falls short in many ways. Most significantly, it does not allow us to answer the question of whether or not we are more or less secure than we were yesterday, last week or last year.

This is a question that we all have grappled with in the security community, and more recently, in the board room. No matter how many new tools you install, settings you adjust, or events you remediate, there are few ways to objectively determine your security posture and that of your vendors and third parties. How do you know if the changes and decisions you have made have positively impacted your security posture if there is no way to measure your effectiveness over time?
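One way to make that comparison possible is to track a small, fixed set of metrics on a schedule and roll them into a single score that can be plotted over time. The sketch below is a minimal illustration of that idea; the metric names, weights, and scoring scale are assumptions for the example, not an industry standard.

```python
# Minimal sketch: turn periodic security measurements into a comparable score.
# Metric names, weights, and the 0-100 scale are illustrative assumptions.

WEIGHTS = {
    "patched_hosts_pct": 0.4,       # fraction of hosts fully patched (0-100)
    "mfa_coverage_pct": 0.3,        # fraction of accounts with MFA (0-100)
    "mean_days_to_remediate": 0.3,  # lower is better, so it is inverted below
}

def posture_score(metrics):
    """Weighted 0-100 score; higher means a stronger posture."""
    score = 0.0
    score += WEIGHTS["patched_hosts_pct"] * metrics["patched_hosts_pct"]
    score += WEIGHTS["mfa_coverage_pct"] * metrics["mfa_coverage_pct"]
    # Invert remediation time: 0 days -> 100 points, 30+ days -> 0 points.
    remediation = max(0.0, 100.0 - (metrics["mean_days_to_remediate"] / 30.0) * 100.0)
    score += WEIGHTS["mean_days_to_remediate"] * remediation
    return round(score, 1)

last_year = {"patched_hosts_pct": 70, "mfa_coverage_pct": 40, "mean_days_to_remediate": 21}
today = {"patched_hosts_pct": 85, "mfa_coverage_pct": 75, "mean_days_to_remediate": 9}

# The later snapshot scores higher, giving an objective "are we more secure?" answer.
print(posture_score(last_year), posture_score(today))
```

The value of such a score lies less in its absolute number than in measuring the same things, the same way, at every interval.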



As drought grips California, floods overpower Texas and Eastern cities grapple with crumbling sewers that pump contaminated runoff into waterways, state and local governments are revisiting how they get, use and manage water. 

One method is to harness the rain. Some governments are doing this through massive systems that treat and pump stormwater back to residents, while others are looking to the installation of rain collection systems for homes and businesses. A few cities are introducing green infrastructure designed to put water back into the ground rather than letting it flow down the street.

Sally Brown, an associate professor at the University of Washington, said the last time governments spent significant amounts of money on water issues was after the Clean Water Act in the 1970s, when they had to change how they treated water and wastewater. Today, environmental factors coupled with water availability are forcing state and local officials to create new policies and invest financially to ensure future access to water.



Wednesday, 24 June 2015 00:00

Another Cloud Boom Coming Our Way?

As a managed service provider (MSP), you must know that cloud adoption is in full swing, right? Well, what if we were to tell you that another cloud computing boom is still to come? Whether you believe it or not, research suggests that a slew of new opportunities could be on the way for MSPs in the world of cloud data storage and cloud-based file sharing.

When a new technology is unleashed on the world, it often makes itself known in waves. First, there is the initial announcement and discussion of the technology. Upon release, there are the early adopters who look to take hold of it. Then, perhaps after some feedback and revisions, most technologies that are destined for longevity will see a great boom in acceptance and adoption.



Tuesday, 23 June 2015 00:00

6 Steps to Reduce Business Travel Risks

Serious medical emergencies, political unrest and devastating natural disasters – these are just a few of the dangers business travelers face as they travel the world on behalf of their companies.  Even seemingly smaller travel issues, such as a lost prescription, a stolen passport or even a cancelled flight can wreak havoc on one’s travel plans at the worst possible moment. All of these risks are abundant in business travel, and as employees circle the globe, it’s your responsibility to protect them from these risks with proactive crisis management.

A key component of any well-rounded Travel Risk Management (TRM) strategy, proactive crisis management can help organizations meet their Duty of Care objectives and prevent issues from becoming even more serious. Companies must be ready to deal with crises rather than simply react to them – and this knowledge can only come through experience. That experience is best built by incorporating crisis response exercises into your company’s TRM strategy. Here’s how:



Low river flow and low nutrient loading are the reasons for the smaller predicted size


Scientists are expecting that this year’s Chesapeake Bay hypoxic low-oxygen zone, also called the “dead zone,” will be approximately 1.37 cubic miles – about the volume of 2.3 million Olympic-size swimming pools. While still large, this is 10 percent lower than the long-term average as measured since 1950.

The anoxic portion of the zone, which contains no oxygen at all, is predicted to be 0.27 cubic miles in early summer, growing to 0.28 cubic miles by late summer. Low river flow and low nutrient loading from the Susquehanna River this spring account for the smaller predicted size.

This chart shows, in the upper portion, the location of hypoxic (yellow, orange and red shading) bottom waters of Chesapeake Bay during the early July 2014 survey. The bottom portion shows a longitudinal "slice" of the Chesapeake Bay main stem showing the depth of the hypoxic waters through the central area of the Bay. These data are collected by Maryland and Virginia as part of the comprehensive Chesapeake Bay Monitoring Program. (Credit: Maryland Department of Natural Resources)


This is the ninth year for the Bay outlook which, because of the shallow nature of large areas of the estuary, focuses on water volume or cubic miles, instead of square mileage as used in the Gulf of Mexico dead zone forecast announced last week. The history of hypoxia in the Chesapeake Bay since 1985 can be found at EcoCheck, a website from the University of Maryland Center for Environmental Science.

The Bay’s hypoxic and anoxic zones are caused by excessive nutrient pollution, primarily from human activities such as agriculture and wastewater. The nutrients stimulate large algal blooms that deplete oxygen from the water as they decay. The low oxygen levels are insufficient to support most marine life and habitats in near-bottom waters and threaten the Bay’s production of crabs, oysters and other important fisheries.

The Chesapeake Bay Program coordinates a multi-year effort to restore the water and habitat quality to enhance its productivity. The forecast and oxygen measurements taken during summer monitoring cruises are used to test and improve our understanding of how nutrients, hydrology, and other factors affect the size of the hypoxic zone. They are key to developing effective hypoxia reduction strategies.

The predicted “dead zone” size is based on models that forecast three features of the zone to give a comprehensive view of expected conditions: midsummer volume of the low-oxygen hypoxic zone, early-summer oxygen-free anoxic zone, and late-summer oxygen-free anoxic zone. The models were developed by NOAA-sponsored researchers at the University of Maryland Center for Environmental Science and the University of Michigan. They rely on nutrient loading estimates from the U.S. Geological Survey.

“These ecological forecasts are good examples of the critical environmental intelligence products and tools that NOAA is providing to stakeholders and interagency management bodies such as the Chesapeake Bay Program,” said Kathryn D. Sullivan, Ph.D., under secretary of commerce for oceans and atmosphere and NOAA administrator. “With this information, we can work collectively on ways to reduce pollution and protect our marine environments for future generations.”

The hypoxia forecast is based on the relationship between nutrient loading and oxygen. Aspects of weather, including wind speed, wind direction, precipitation and temperature also impact the size of dead zones. For example, in 2014, sustained winds from Hurricane Arthur mixed Chesapeake Bay waters, delivering oxygen to the bottom and dramatically reducing the size of the hypoxic zone to 0.58 cubic miles.
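The core relationship the forecast exploits – more spring nutrient loading, a larger hypoxic zone – can be illustrated with a toy linear fit. Everything below (the loading/volume pairs and the linear form) is invented for illustration; it is not the NOAA-sponsored model or its data, which must also account for the weather effects described above.

```python
# Toy illustration of the nutrient-loading -> hypoxic-volume relationship.
# The history pairs and linear model are made up; the real forecast models
# developed at UMCES and the University of Michigan are far richer.

# (nitrogen load in million lbs Jan-May, midsummer hypoxic volume in cubic miles)
history = [(60, 1.20), (75, 1.45), (90, 1.70), (105, 1.95)]

def fit_line(points):
    """Ordinary least squares for y = a + b*x."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

a, b = fit_line(history)
# Predict a volume for a below-average spring load of 58 million lbs
# (the USGS January-May estimate quoted below).
print(round(a + b * 58, 2))  # ~1.17 cubic miles with these toy numbers
```

A real forecast would add terms for wind mixing, temperature and river flow, which is why a single storm such as 2014’s Hurricane Arthur can shrink the zone dramatically.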

“Tracking how nutrient levels are changing in streams, rivers, and groundwater and how the estuary is responding to these changes is critical information for evaluating overall progress in improving the health of the Bay,” said William Werkheiser, USGS associate director for water. “Local, state and regional partners rely on this tracking data to inform their adaptive management strategies in Bay watersheds.”

The USGS provides the nutrient runoff and river stream data that are used in the forecast models. USGS estimates that 58 million pounds of nitrogen were transported to the Chesapeake Bay from January to May 2015, which is 29 percent below average conditions. The Chesapeake data are funded through a cooperative agreement between USGS and the Maryland Department of Natural Resources. USGS operates more than 400 real-time stream gages and collects water quality data at numerous long-term stations throughout the Chesapeake Bay basin to track how nutrient loads are changing over time.

“Forecasting how a major coastal ecosystem, the Chesapeake Bay, responds to decreasing nutrient pollution is a challenge due to year-to-year variations and natural lags,” said Dr. Donald Boesch, president of the University of Maryland Center for Environmental Science, “But we are heading in the right direction.”

Later this year researchers will measure oxygen levels in the Chesapeake Bay. The final measurement will come in October, following surveys by the Chesapeake Bay Program’s partners from the Maryland Department of Natural Resources (DNR) and the Virginia Department of Environmental Quality. Bimonthly monitoring cruise updates on Maryland Bay oxygen levels can be found on DNR’s Eyes on the Bay website at www.EyesontheBay.net.

USGS provides science for a changing world. Visit USGS.gov, and follow us on Twitter @USGS and our other social media channels. Subscribe to our news releases via e-mail, RSS or Twitter.

NOAA’s mission is to understand and predict changes in the Earth’s environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.

The conventional wisdom these days seems to be that MSPs should ditch break-fix altogether. We’ve heard this advice from MSP partners like Guy Baroan and Vince Tinnirello. According to both of them, the full managed services model makes sense because it’s simple to invoice, easy to budget for, and both clients and the provider have service agreements that make it all quite simple. Not to mention the fact that it’s a much more proactive method where maintenance occurs constantly, not just when something goes wrong.

Little did we know, however, that there are plenty of MSPs that are happy to work as hybrids, and they have some good reasons for doing so:



Why should data be erased?

Companies, whether part of a large corporation or a smaller business, need to use a professional data erasure method to ensure that their data doesn’t fall into the wrong hands, as the Brighton and Sussex University Hospitals NHS Trust learned in 2008.

Generally speaking, due to legal and internal regulations, data should be erased at the end of its so-called lifecycle. There are a number of existing national rules, regulations and laws that already require companies to comply with data protection measures, and thus also with data erasure. The provisions concerning data erasure will also become significantly tougher with the introduction of the European data protection regulation. The central element of this regulation, which is expected to come into force early next year, is certainly Article 17, which gives force of law to the “right to deletion” or the “right to be forgotten”.

To cut a long story short: Article 17 requires that all saved personal information be securely erased when it is no longer needed for its original purpose, when no consent was given for its processing, or when its agreed retention period has expired. This requirement applies to all data collected, structured, transmitted and distributed concerning EU citizens, irrespective of the country or the storage system where the data is saved. For all companies, regardless of their size, this means that they should start preparing intensively now and adapt all their processes to the new rules.
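At the file level, “securely erased” means more than removing a directory entry, which leaves the underlying bytes recoverable. One common simplified approach is to overwrite the contents before deleting. The sketch below is illustrative only, not a certified erasure tool; professional products must also handle SSD wear-leveling, filesystem journaling, backups and verification.

```python
import os

def overwrite_and_delete(path, passes=3):
    """Naive secure-delete sketch: overwrite file contents, then unlink.
    Illustrative only -- it does not defeat SSD wear-leveling, journaling
    filesystems, or backup copies, which certified tools must address."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # random data each pass
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to disk
    os.remove(path)

# Example: create a throwaway file holding "personal data", then erase it.
with open("customer_record.tmp", "wb") as f:
    f.write(b"name: Jane Doe, account: 12345")
overwrite_and_delete("customer_record.tmp")
print(os.path.exists("customer_record.tmp"))  # False
```

The point of the sketch is the process, not the code: erasure must be deliberate, repeatable and provable, which is exactly what the regulation will ask companies to demonstrate.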



When it comes to singling out sectors that are in the forefront of disaster recovery, finance is often quoted as an example. With so much depending on the ability to recover systems and data rapidly after any incident, major banks were among the first to implement hot failover data centres for instance – as well as being among the only organisations that could afford them. At the other end of the scale, there are those that are particularly ill-equipped to deal with IT disasters. The education sector has been identified as one example, but another group falling short of the levels required could surprise you.



Tuesday, 23 June 2015 00:00

Tangents on Resilience

It seems that ‘Organisational Resilience’ has now officially become a buzz-phrase: impossible to define because there are many differing perceptions of what it is. BS 65000:2014 defines it as the ‘ability of an organization to anticipate, prepare for, and respond and adapt to incremental change and sudden disruptions in order to survive and prosper’. So I’m going with that for the time being. I want to focus particularly on the last three words: ‘survive and prosper’. I think there is too much emphasis on the ‘survive’ part when in fact the focus of most organisations is to prosper, unless there is an oncoming wave of water, disease or armed terrorists. The fact that there may well be a variable risk of such waves affecting many elements of our societies at some level or another is probably lost on – or at least ignored by – most business organisations. The truth is they have to focus on the bottom line, and scaremongering about the catastrophes that may (not will) befall them will cut no ice.



If the FirstNet national first responder network succeeds, it’ll be because federal officials who are planning and deploying the network forged strong partnerships with states and localities. That’s why comments from state CIOs at the NASCIO Midyear Conference in April are troubling.

Although state CIOs generally support the concept of a nationwide interoperable public safety network, they’re clearly frustrated with the lack of details coming from the federal First Responder Network Authority about how the new network will be built and paid for.

“FirstNet is a fantastic idea, but people like me are very skeptical of something where nobody can show me the plan and nobody can show me the cost,” said Alabama CIO Brunson White. “I’ll remain skeptical until somebody does that, and we’ve been asking for a while now.”



(TNS) — Private security guards working at Iowa malls, schools and corporations have no required training and no recurring background checks, despite increased threats at these facilities.

Lawmakers and the public are raising questions about licensing requirements for private security companies after an off-duty guard fatally shot a woman June 12 at Coral Ridge Mall in Coralville.

Alexander M. Kozak, 22, of North Liberty, is being held on first-degree murder charges, accused of targeting mall employee Andrea Farrington, 20, and gunning her down amid hundreds of shoppers.

“Most organizations want to give the appearance of security, but they don’t want the substance,” said Tom M. Conley, president and chief executive officer of the Conley Group, a private security company in Urbandale.



Tuesday, 23 June 2015 00:00

Three Problems that Prove You Need a CDO

A few signs show that organizations might be retreating from the idea of a chief data officer. Instead, some organizations are adding strategic data functions to the CIO’s job. But is that enough or does the growing demand require a dedicated data executive?

Here are three reasons why I think organizations may want to embrace chief data officers.

First, as I shared in my last piece, most CIOs don’t want the data officer task. Experian surveyed CIOs last November and found that an incredible 92 percent of CIOs “are calling out for a CDO role to release the data pressures they face and enable a corporate wide approach to data management.” Call me crazy, but to me, it’s pretty clear that the people who have thus far handled the job say it needs a separate role.



With great convenience comes great responsibility...

Once a month I use my blog to highlight some of S&R’s latest and greatest. The cloud is attractive for many reasons -- the possibility of working from home, the vast array of performance and analytical capabilities available, knowing that your backups are safe from that fateful coffee spill, etc. Although the cloud is not a new concept, the security essentials behind it unfortunately remain a mystery to practically all users. What’s worse, the security professionals tasked with protecting corporate data rarely have visibility into all the risk -- it’s simply too easy for users to make critical cloud decisions without process or oversight.   

Underestimating or neglecting the necessary security practices that a cloud requires can lead to hacks, breaches, and horrendous data leaks. We’ve seen our fair share of security embarrassments that range from Hollywood execs to the US government, and S&R pros know that these are far from done.



Tuesday, 23 June 2015 00:00

Creating a Risk Intelligent Organization

Many organizations spend time and effort building and developing robust risk mitigation frameworks and strategies to handle business-specific risks. In spite of constant monitoring through dashboards and reports, many companies still face major and unexpected issues. One of the main reasons for shortfalls in risk management is the general attitude towards risk mitigation. Although companies are well-prepared with an infrastructure in place, they often struggle when cultivating a sense of risk awareness, responsibility and intelligence into and across the fabric of an organization, which results in gaps and deficiencies.

Every organization realizes the significance of risk intelligence, but many face issues in the initial stage of the transition. Developing a risk culture is frequently viewed as just a requirement to be fulfilled rather than something that adds value to an enterprise. Without a clear agenda, many companies find it impossible to cultivate risk-taking capabilities across their employee base.

Risk intelligence demands that every individual in an organization take responsibility for managing risks in the day-to-day operations. Senior management should assess the existing risk management strategy and gauge its effectiveness in alleviating risks as well as developing awareness throughout the organizational structure.



Here’s the conundrum: There is a shortage of IT professionals who have the skills that employers need, and at the same time, there is an abundance of bright, eager people who dream of obtaining those skills and building a career in IT, but who simply lack the wherewithal to obtain a four-year college degree to realize that dream. The solution to this problem has long seemed destined to elude us. But maybe there is an answer after all.

That’s the conclusion I drew after learning about the Creating IT Futures Foundation (CITFF), the philanthropic arm of CompTIA, the Downers Grove, Ill.-based IT trade association best known for its certification programs. Formerly called the CompTIA Educational Foundation, CITFF is headed by CEO Charles Eaton, who was brought on board in 2010 “to find a more impactful way to engage in our strategy.” That strategy, in Eaton’s words, is to “move the needle on getting people who need an opportunity into IT careers.”



Let’s face it, you don’t know what’s happening until it’s happened; it takes time to find out what has occurred. Was it major? Is it minor? Did IT get impacted? Was revenue (or another financial impact) lost? Does the public know? Or worse, does the media know?

I’m all for plans and planning, but you just won’t know everything up front when some sort of operational interruption occurs, be it weather related, power related, or some other interruption that causes a major disruption for the organization. Confusion is going to be present until you’ve got a handle on the situation. The time from the disaster or operational interruption to the moment you have a handle on what’s going on – and on what needs to be done by way of a response – is where your plans and processes kick in.



OKLAHOMA CITY – The recent severe storms, floods, straight-line winds and tornadoes occurring May 5 through June 4 damaged public and private roads and bridges.

The Federal Emergency Management Agency (FEMA) and the U.S. Small Business Administration (SBA) may be able to help when repairing privately owned access roads and bridges.

FEMA’s Individual Assistance program could cover the expenses of repairing privately owned access roads if the following criteria are met:

  • It is the applicant’s primary residence;
  • It is the only access to the property;
  • It is impossible to access the home with the damaged infrastructure; or
  • The safety of the occupants could be adversely affected.

SBA is FEMA’s federal partner in disaster recovery, and may also help. Private property owners, established homeowner associations and properties governed by covenant may apply for low-interest disaster loans directly through SBA. These funds can be used to repair or replace private roads and bridges. Homeowner associations that own access roads may apply directly to the SBA.

Homeowners who jointly own access roads and bridges may also be eligible for repair grants or SBA loans under certain circumstances. In some cases, sharing the cost of repairs with funds obtained through a combination of FEMA, SBA loans and private funds may be another option. The affected homeowners should each register with FEMA individually.

Survivors can apply for state and federal assistance online at www.DisasterAssistance.gov or by calling 800-621-FEMA (3362) or (TTY) 800-462-7585. Those who use 711-Relay or Video Relay Services can call 800-621-3362 to register.

Each request for private road or bridge repair assistance is evaluated on a case-by-case basis.

Repair awards through Individual Assistance funding are for disaster-related damage and will not include improvements beyond the road’s pre-disaster condition, unless improvements are required by current local or state building codes or ordinances.

Register online at www.DisasterAssistance.gov, by phone at toll-free 800-621-3362 or (TTY) 800-462-7585, or via smartphone or tablet at m.fema.gov.

For more information on Oklahoma disaster recovery, visit http://www.fema.gov/disaster/4222 or OEM at www.oem.ok.gov.


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

The Oklahoma Department of Emergency Management (OEM) prepares for, responds to, recovers from and mitigates against emergencies and disasters. The department delivers services to Oklahoma cities, towns and counties through a network of more than 350 local emergency managers.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow us on Twitter at www.twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners, and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955 or visiting SBA’s website at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call (800) 877-8339.

The Federal Communications Commission (FCC) is asking AT&T, under threat of a possible $100 million fine, to explain why it apparently throttled subscribers whose services it said were unlimited. The FCC says the limitations kicked in after consumption of 5GB of data in a month.

This, Computerworld reports, has been happening since 2011. The company has 30 days to respond to the allegations; the FCC will then make an official determination. Even if the $100 million hit stands, it may have been worth it for AT&T:

The FCC said it's aware that the fine, while large, is a fraction of the revenue AT&T made from offering its unlimited plan to consumers. It is also considering other redress, including requiring AT&T to individually inform customers that its disclosures were in violation of rules and to allow them out of applicable contracts with no penalty.
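Mechanically, the practice at issue is straightforward: once a subscriber's monthly usage crosses a threshold, the carrier reduces their speed for the rest of the billing cycle. The 5GB trigger below comes from the FCC's description; the specific speeds are illustrative guesses, not AT&T's actual figures.

```python
# Sketch of a monthly data-cap throttle. The 5GB threshold is from the
# FCC's description of the practice; the speeds here are assumed values.
GB = 1024 ** 3

def allowed_rate_mbps(bytes_used_this_month,
                      normal_mbps=30.0, throttled_mbps=0.5,
                      cap_bytes=5 * GB):
    """Return the speed a subscriber gets on an 'unlimited' plan:
    full speed up to the cap, a reduced rate once usage exceeds it."""
    if bytes_used_this_month > cap_bytes:
        return throttled_mbps
    return normal_mbps

print(allowed_rate_mbps(3 * GB))  # under the cap -> full speed
print(allowed_rate_mbps(6 * GB))  # over 5GB -> throttled
```

The FCC's complaint is not that such logic exists, but that it conflicts with marketing the plan as unlimited without adequate disclosure.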



The future of enterprise IT will be defined by the need to securely deliver a more consumer-like application experience that can be updated in a matter of minutes.

Speaking at the launch this week of a VMware Business Mobility initiative, VMware CEO Pat Gelsinger says this brave new world of enterprise IT will require not only fundamental changes to the way enterprise applications are built and delivered, but also the way IT infrastructure is provisioned and managed.

The VMware Business Mobility initiative unifies the delivery of identity management as a service provided by the AirWatch unit of VMware and the software-defined networking (SDN) technologies that VMware gained when it acquired Nicira in 2012.



If you work in an office, you might think that everyone’s favorite pastime is to complain about how inefficient IT is at helping to solve your technical issues in a timely manner. But surprise, surprise; according to a new study from Landesk, a majority of employees actually reported being very satisfied with their organization’s IT customer service.

Landesk announced the results of its 2015 Global State of IT Support Study, which surveyed 2,500 employees in the United States and Europe to determine how satisfied they are with their organization’s IT customer service. According to the survey, 80 percent of respondents said they would give their IT departments a grade of either “A” or “B” in terms of customer satisfaction, which seriously bucks the stereotype of inefficient IT workers.



Monday, 22 June 2015 00:00

Look Forward with Your Hybrid Cloud

The cloud industry is starting to look a lot like the wine industry: Experts galore are ready to declare what is and is not a quality cloud, and hybrids and cross-breeds incorporate various components to produce a wide variety of options for consumers.

The debate over the efficacy of the various cloud approaches now on the market will likely continue for some time to come, as neither public nor private infrastructure appears to be going anywhere soon. But remember that all data infrastructure solutions are a means to an end, so it is important to keep your ultimate goals in mind when pursuing any one strategy.

This can be a tricky thing to do, says IBM’s John Easton, because most IT professionals tend to view cloud solutions from their own perspectives as managers of traditional data center infrastructure. In fact, he says he can guess a person’s particular job based on their rationales for migrating to the cloud, such as improved systems management or greater scalability. But this ultimately diminishes the return on any cloud investment because it focuses on how the cloud can solve current problems rather than how it can open new opportunities for the future. This is why most hybrid cloud deployments have proven to be of middling success at best – they are geared largely toward cost-saving and infrastructure efficiency rather than more forward-looking data portability and development agility opportunities.



It’s a little-known fact that flash-based storage can be too much for most systems. Designed back in the days when slow hard disk drives (HDDs) carried out the reading and writing of data, today’s channels for information transport often can’t cut it when loaded up with flash. The result is bottlenecked applications: the combined might of multicore processors, abundant RAM and flash packs far more processing punch than can be relayed by the associated storage protocols and bus architectures.

Enter Non-Volatile Memory Express (NVMe). It's a PCIe-based approach to resolving those bottlenecks. And it’s about to capture the imagination of the storage world.

“I’ve been at this for more than 20 years and NVMe is one of the most revolutionary, most anticipated and most exciting developments I’ve seen,” said Doug Rollins, Senior Technical Marketing Engineer, Enterprise Solid State Drives for the Storage Business Unit at Micron Technology.



(TNS) — The Buckskin fire looks a little different on Matthew Krunglevich's computer screen, an adornment of yellow dots smeared across part of a southwestern Oregon map, with dashes of orange along the blaze's eastern and southern edges.

At a glance, this view from NASA's MODIS — Moderate Resolution Imaging Spectroradiometer — satellite doesn't look like much. But it actually tells Krunglevich, of the Oregon Department of Forestry, a lot. The splashes of color southwest of Cave Junction show where the fire is burning and where it's burning hottest: yellow equals warm, orange equals warmer. Predictably, the orange is shown where the fire is burning outward, where the flames are newer.

"It gives us an idea of — but a really rough approach — to how big a fire is, where there's heat activity on a fire on a broad scale," Krunglevich says.

It's one tool in a growing high-tech toolbox that can help crews prioritize resources as needed. Because in an area such as southwestern Oregon that's so consistently primed for summer wildfires, the more information, the better, fire officials say.



(TNS) — As the nation mobilizes to determine what motivated the gunman in the Charleston, S.C., massacre, the shootings highlight what a number of experts said Thursday is a chilling reality: The greatest danger from terrorism may be from our own ranks and within our own borders.

“Since 9/11, our country has been fixated on the threat of jihadi terrorism,” said Richard Cohen, president of the Southern Poverty Law Center. “But the horrific tragedy at the Emanuel AME reminds us that the threat of homegrown domestic terrorism is very real.”

Dylann Storm Roof, 21, was arrested Thursday in Shelby, N.C., ending a massive manhunt that began after the killing of nine people attending a Bible study at the Emanuel African Methodist Episcopal Church on Wednesday night.

Now comes the investigation into how and why it happened.



The term Internet of Things may have sprung to fame only recently, but its origin dates back several years. It was reportedly first used in 1999 by a researcher at the Massachusetts Institute of Technology (MIT).

But what exactly is the Internet of Things? Conceptually, the IoT is simple: it describes a reality where things are capable of exchanging information. To fully understand the IoT’s potential, imagine that a growing number of objects – not PCs, smartphones and tablets, but common everyday objects – become capable of communicating with one another, exchanging data collected from sensors, accelerometers and GPS systems to provide us with services and information based on these readings.

This type of communication among objects is generally referred to by the acronym M2M, representing the Machine to Machine communication that allows wireless and wired devices to converse.
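
To make the M2M idea concrete, here is a minimal sketch of one "thing" packaging a sensor reading and another consuming it. The device name and message fields are purely illustrative assumptions, not any standard M2M protocol:

```python
import json

def package_reading(device_id, sensor, value, unit):
    """Encode a sensor reading as a JSON message another device can parse."""
    return json.dumps({"device": device_id, "sensor": sensor,
                       "value": value, "unit": unit})

def handle_message(raw):
    """A receiving device decodes the message and acts on the reading."""
    msg = json.loads(raw)
    return f"{msg['device']}: {msg['sensor']} = {msg['value']} {msg['unit']}"

raw = package_reading("thermostat-01", "temperature", 21.5, "C")
print(handle_message(raw))  # thermostat-01: temperature = 21.5 C
```

In practice the transport would be a wireless or wired link (e.g. a publish/subscribe protocol), but the essence of M2M is exactly this: machine-readable messages exchanged without human intervention.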

But what are the possible applications for the Internet of Things?



There has been a lot of talk about the degree of enterprise readiness of the cloud. Some argue that it doesn’t have the performance capabilities of data center-based applications. Maybe the question we should be asking is whether the service is enterprise-ready. Many existing cloud services have a consumer heritage—fine for individual users and perhaps a very small business. And therein lies the problem. An enterprise-ready service should be designed from the ground up to operate in the cloud and provide enterprise-level performance, features and security.



Increasing complexity means that business continuity professionals need to rethink some of the paradigms of the practice, says Geary Sikich.


Business continuity professionals need to rethink some of the paradigms of the practice. All too often we fall back on what are considered the tried and true ways of doing things. This essentially leaves us in two camps: the first evolved out of information technology and disaster recovery; the second evolved out of emergency preparedness (tactical planning), financial risk management (operational) and strategic planning (strategic). Both camps leave much to be desired. The first, having renamed disaster recovery as business continuity, still retains a strong focus on systems continuity rather than true business continuity; but this is not a bad thing. The second has begun a forced merger of sorts, combining the varied practices at three levels (tactical, operational and strategic) and renaming the result enterprise risk management (ERM). This second group still retains strong perspectives on risk management; that is why I have divided it into three sub-groups (tactical, operational and strategic).



Small to midsize businesses (SMBs) may be finally realizing the extent to which cybercrimes can affect them, but do they realize just how intently hackers are targeting them? A report by Check Point Software says that SMBs have become “the cybercriminal’s ‘sweet spot’” due to a lower level of IT security combined with a decent amount of valuable information that can be monetized.

The Check Point report says that approximately 63 percent of SMBs are worried about malware and 38 percent are worried about possible phishing scams, yet 31 percent aren’t doing anything to protect against such threats. The report also cites statistics from the CyberSecurity Alliance indicating that 36 percent of cyberattacks target small businesses and that, of those businesses attacked, 60 percent are forced to close within six months, likely because the average cost of a data breach at an SMB is $36,000.



This week, I ventured up to West Glocester, Rhode Island, home of the coolest place any insurance broker, insurance client, or risk management journalist can visit: the FM Global Research Campus.

Because FM Global is intently focused on prevention of loss as the chief means of minimizing claims, the company maintains a 1,600-acre campus dedicated to property loss prevention scientific research. The biggest center of its kind, the research center features some of the most advanced technology to conduct research on fire, natural hazards, electrical hazards, and hydraulics. Here, experts can recreate clients’ warehouse conditions to test whether existing suppression practices would be sufficient in the event of a massive fire, for example. Fabricated hail or seven-foot 2x4s are shot from a cannon-like instrument at plywood, windows, or roofing to test whether these materials can withstand debris that goes flying in hurricane-strength winds. Hydraulic, mechanical and environmental tests are conducted on components of fire protection systems, like sprinklers, to ensure effectiveness overall and under the specific conditions clients face. Indeed, these hydraulic tests have led the company’s scientists and engineers to design and patent their own, more effective sprinklers, the rights to which are released so anyone can manufacture these improved safety measures.



(TNS) — Allstate said Wednesday that it is one step closer to using drones to assess damages after catastrophes.

The insurer, based in the Chicago suburb of Northbrook, said that a new ruling by the Federal Aviation Administration will allow the consortium it works with to research the benefits of flying drones to assess property claims.

The year-old Property Drone Consortium is led by EagleView Technology, whose services include aerial imagery and data analysis.

Allstate said that in a disaster, access to neighborhoods might be restricted by debris or local authorities and that drones could help claims professionals serve customers in spite of those restrictions.



A recent Information Management article argues that chief data officers (CDOs) are making “gradual gains” this year. The piece backs this up with a list of recent appointments, as well as a stat from Experian that says roughly 60 percent of chief information officers hope to hire CDOs this year.

With all due respect, I disagree. In fact, there are several signs that CDOs as a concept may falter, and their functions may be absorbed by other existing roles.

First, the list actually includes only one CDO appointment. That was at Clinical Ink, a company that develops health care patient engagement technology. Obviously, that’s a step forward, but if I may be frank, I’m a bit surprised a company like that didn’t already have a chief data officer, since their work is patient engagement.



All countries need to be prepared for the unanticipated spread of serious infectious diseases, says WHO.

After a meeting on the 17th June, the United Nations World Health Organization (WHO) declared that the Middle East Respiratory Syndrome, or MERS, outbreak that spread from the Middle East to the Republic of Korea does not constitute a ‘public health emergency of international concern’ but is nonetheless a ‘wake-up call’ for all countries to be prepared for the unanticipated spread of serious infectious diseases.

The Emergency Committee, convened by the WHO Director-General under the International Health Regulations to address the Middle East respiratory syndrome coronavirus (MERS-CoV) outbreak in the Republic of Korea, also recommended against the application of any travel or trade restrictions and considered screening at points of entry to be unnecessary at this time.

WHO did recommend “raising awareness about MERS and its symptoms among those travelling to and from affected areas” as “good public health practice.”

The Committee noted that there are still many gaps in knowledge regarding the transmission of this virus between people, including the potential role of environmental contamination, poor ventilation and other factors, and indicated that continued research in these areas was critical.

Meanwhile, in a JAMA Viewpoint article, Georgetown public health law professor Lawrence O. Gostin and infectious disease physician Daniel Lucey state that MERS-CoV requires constant vigilance and could spread to other countries including the United States. However, MERS can be brought under control with effective public health strategies.

In the Viewpoint, published online on June 17th, the authors outline strategies for managing the outbreak, focusing on transparency, trust and infection control in health care settings. The duo also outline weaknesses in the World Health Organization's framework designed to govern patents on certain viruses, which is likely to impact critical future research.

Key points Gostin and Lucey make about MERS-CoV infection control include:

  • Training health workers and conducting diagnostic testing of certain travelers;
  • Limiting quarantine use to well-documented exposures, using the least restrictive means possible;
  • Avoiding travel restrictions, which would be ineffective given the lack of evidence of MERS-CoV community transmission; and
  • Closing schools also should be avoided given the lack of community transmission of MERS-CoV.

In addition, Gostin and Lucey say the WHO's Pandemic Influenza Preparedness Framework fails to cover non-influenza pathogens like MERS-CoV, noting, "...there remain substantial holes in international rules needed to facilitate critical research."

Data center infrastructure is supposed to be the rock upon which higher order applications and services are built. So what are we to think when someone comes along and says we can do all kinds of wonderful things by severing the application’s ties to this foundation?

In a way, what is happening to data architectures mirrors what we can see in the data center. The floor is concrete, but the racks are made of metal. The servers themselves are not welded to the rack but can slide in and out for easy replacement. At each delineation, the goal is to produce maximum flexibility while still rooting the system in the strength of its supporting infrastructure.

The latest iterations of virtual infrastructure are taking this idea to an entirely new level, however, because they purport to remove infrastructure concerns entirely from the business model. This can be seen in solutions like Nutanix’s Xtreme Computing Platform (XCP), which aims for full application independence from what the company is now calling “invisible infrastructure.” With the app now enjoying full mobility, native virtualization and even consumer-level search capabilities, it subsumes virtually all of the provisioning, orchestration and other functions it needs to support business processes at scale. In this way, organizations can finally rid themselves of costly infrastructure concerns and focus on what matters to them: making money through app-level innovation.



WASHINGTON – Today, the U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) signed Memoranda of Understanding (MOU) with seven technology organizations to provide state, local, tribal and territorial governments with technology resources during a disaster to expedite response and recovery. Cisco Systems, Google, Humanity Road, Information Technology Disaster Resource Center, Intel, Joint Communications Task Force and Microsoft have joined FEMA’s new Tech Corps program – a nationwide network of skilled, trained technology volunteers who can address critical technology gaps during a disaster.

During major disasters or emergencies, trained technology volunteers can complement ongoing response and recovery efforts, including installing temporary networks; enabling internet, telephone, and radio communications; and providing other support, such as geographic information system (GIS) capacity, coding, and data analytics.  In 2002, Senator Ron Wyden (D-OR) proposed a mechanism for leveraging private sector technology capabilities to innovate the way federal, state, local and tribal governments respond to disasters. Tech Corps is based on this model, which was developed beginning in 2013 to assemble the initial group of companies for the voluntary program.

“When disaster strikes, we all have a role to play in helping survivors recover, and that includes the private sector,” said FEMA Administrator Craig Fugate. “Tech Corps volunteers will bring a vital skill set to our emergency management team to help the survivors we serve recover more quickly after disasters. We’re grateful to Senator Wyden and the private sector for contributing to this effort and we look forward to partnering with them to make communities stronger and safer.” 

“Tech Corps harnesses a deep well of technical expertise and private-sector manpower to make sure every resource is available immediately when disaster strikes,” said Senator Wyden. “Information technology is often critical to saving lives, and this program ensures that red tape won’t stand in the way of volunteer experts who can stand up temporary cell networks and Wi-Fi solutions that are so important in disaster areas. I’m hopeful today’s partners are the first of many to sign up to work hand-in-hand with emergency responders to help craft more resilient and effective responses to future disasters.”

Already, Tech Corps partners have been active on their own during national and global technology disaster response efforts, including providing support during Hurricane Sandy and the earthquakes in Nepal and Haiti. This initiative signifies a greater level of coordination between volunteers and the emergency management community through FEMA. 

To learn more about Tech Corps, please visit: fema.gov/tech-corps.


FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.

The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

OKLAHOMA CITY – Not all of the damage from flooding takes place while your home or business is under water. Long after the flood waters have receded, mold and mildew can present serious and ongoing health issues.

Oklahomans impacted by the severe storms and flooding that took place between May 5 and June 4 should take steps to protect the health of their family or employees by treating or discarding mold- and mildew-infected items as soon as possible.

Health experts urge those who find mold to act fast. Cleaning mold quickly and properly is essential for a healthy home or work place, especially for people who suffer from allergies or asthma.

Mold and mildew can start growing within 24 hours after a flood, and can lurk throughout a home or business, from the attic and basement to crawl spaces and store rooms. The best defense, according to the experts, is a top-to-bottom cleanup: clean, dry or discard moldy items.

Many materials are prone to developing mold if they remain damp or wet for too long. Start a post-flood cleanup by sorting all items exposed to floodwaters:

  • Wood and upholstered furniture and other porous materials can trap mold and may need to be discarded.
  • Carpeting presents a problem because drying it does not remove mold spores. Carpets with mold and mildew should be removed.
  • Glass, plastic and metal objects and other items made of hardened or nonporous materials can often be cleaned, disinfected and reused.

All flood-dampened surfaces should be cleaned, disinfected and dried as soon as possible. Follow these tips to ensure a safe and effective cleanup:

  • Open windows for ventilation and wear rubber gloves and eye protection when cleaning. Consider using a mask (rated N-95 or higher) if heavy concentrations of mold are present.
  • Use a non-ammonia soap or detergent to clean all areas and washable items that came in contact with floodwaters.
  • Mix 1.5 cups of household bleach in one gallon of water and thoroughly rinse and disinfect the area. Never mix bleach with ammonia, as the fumes are toxic.
  • Cleaned areas can take several days to dry thoroughly. The use of heat, fans and dehumidifiers can speed up the drying process.
  • Check for odors. Mold often hides in the walls or behind wall coverings. Find all mold sources and clean them properly.
  • Remove and discard all materials that can’t be cleaned, such as wallboard, fiberglass and other fibrous goods. Clean the wall studs where wallboard has been removed and allow the area to dry thoroughly before replacing the wallboard.

For other tips about post-flooding cleanup, visit www.fema.gov, www.oem.ok.gov, www.epa.gov, or www.cdc.gov.


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

The Oklahoma Department of Emergency Management (OEM) prepares for, responds to, recovers from and mitigates against emergencies and disasters. The department delivers services to Oklahoma cities, towns and counties through a network of more than 350 local emergency managers.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow us on Twitter at www.twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners, and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955 or visiting SBA’s website at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call (800) 877-8339.

Improving server utilization is like walking on a frozen pond during a spring thaw. The more comfortable the air temperature gets, the greater the danger of falling through.

With utilization, the higher you go, the less overhead you have when the inevitable data spikes arrive. Sure, you could have cloud-based IaaS at the ready, but now you are simply leasing underutilized resources rather than buying them.

This is why the tendency to view recent reports of underutilized servers with consternation is wrong-headed. The latest is Anthesis Group’s finding that 30 percent of servers worldwide are “comatose,” says eWeek’s Jeffrey Burt, representing about $30 billion in “wasted” IT infrastructure. This may cause non-IT people to wring their hands, but anyone with even a modicum of experience in data infrastructure will know that a 70 percent utilization rate is actually quite good—in fact, it is historically high given that in the days before virtualization, a typical server could sit idle maybe 80 percent of the time.
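
The utilization/headroom trade-off is easy to sketch. The figures below are purely illustrative:

```python
def can_absorb_spike(capacity, utilization, spike):
    """True if idle headroom covers a demand spike (units are arbitrary)."""
    headroom = capacity * (1 - utilization)
    return spike <= headroom

# Illustrative figures: a 1,000-unit cluster at 70% vs. 95% utilization.
print(can_absorb_spike(1000, 0.70, 250))  # True  -> 300 units of headroom
print(can_absorb_spike(1000, 0.95, 250))  # False -> only 50 units left
```

The point is that "wasted" capacity is often deliberate slack: push utilization too high and the same spike that the 70-percent cluster shrugs off sends the 95-percent cluster through the ice.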



While every organization is at risk of employee theft – with the typical company losing 5% of revenue to fraud each year – smaller organizations with fewer than 500 employees (72%) were the most targeted.

According to The 2015 Hiscox Embezzlement Watchlist: A Snapshot of Employee Theft in the U.S., four out of five of the smaller companies targeted had fewer than 100 employees, and more than half had fewer than 25. Smaller organizations also suffered the largest losses, according to the survey. Financial services companies were most at risk (21%), followed by non-profits, labor unions and municipalities.

Hiscox noted steps organizations can take to minimize employee theft, adding that these are most important for small- to medium-sized businesses, which can be hit harder by theft. In fact, the survey found that 58% of affected organizations recovered none of their losses.



(TNS) — Iowa Agriculture Secretary Bill Northey said Monday the bird flu outbreak ranks as Iowa’s worst animal health emergency and could cost federal and state agencies up to $300 million in the cleanup, disposal and disinfection process, on top of the sizable losses being incurred by producers.

“Animal-health wise, there is nothing that we’ve ever had like it,” said Northey, who held out hope the spread is “winding down,” since Iowa recently has reported fewer confirmed cases of the highly pathogenic flu that has led to the deaths and euthanizing of more than 32.7 million commercial layers and turkeys on 76 farms in 18 Iowa counties. All the infected birds in Iowa have been depopulated and humanely destroyed, he said.

Northey said hotter temperatures and decontamination efforts have slowed the outbreak, although state officials Monday said they were investigating a possible new case. He noted that Minnesota saw a resurgence in cases after a brief lull, and nearly 2,300 federal and state response personnel remained at work Monday in the field assessing Iowa’s situation and looking ahead to what might happen once fall weather returns along with migratory bird activity.



Often crisis management case studies focus on what went wrong in badly handled crises. In this article Charlie Maclean-Bristol FBCI takes five lessons from an incident that was well managed.

After commenting on so many organizations that get their crisis management wrong, it is refreshing to see one that has, in the main, got its response to a serious incident right. Merlin Entertainments’ handling of the response to a recent accident at its Alton Towers theme park has not been quite ‘text book’, but it has been close to it. On June 2nd two cars on the Smiler rollercoaster crashed into each other, resulting in four serious and twelve minor injuries to those on the ride. Subsequently one of the riders had to have part of her leg amputated. Often it takes a poor response and criticism for an organization to ‘put its house in order’ and improve its response. Here, Merlin got it right the first time.

So what are the five lessons learned from this incident?



Using Twitter and Google search trend data in the wake of the very limited US Ebola outbreak of October 2014, a team of researchers from Arizona State University, Purdue University and Oregon State University have found that news media is extraordinarily effective in creating public panic.

Because only five people were ultimately infected yet Ebola dominated the US media in the weeks after the first imported case, the researchers set out to determine mass media's impact on people's behavior on social media.

"Social media data have been suggested as a way to track the spread of a disease in a population, but there is a problem that in an emerging outbreak people also use social media to express concern about the situation," explains study team leader Sherry Towers of ASU's Simon A. Levin Mathematical, Computational and Modeling Sciences Center. "It is hard to separate the two effects in a real outbreak situation."



For many people, IT security is about keeping the bad guys out of the data centre by using firewalls to control external access and anti-malware programs to prevent hackers from infecting servers. That is only half the picture, however. A growing threat comes from people already within the security perimeter of the data centre: they have legitimate access to servers, but are misusing that access, either unintentionally or deliberately, to take data out. The challenge in resolving this kind of insider threat is that it is typically not a malware attack, but a personal ‘manual’ attack.



The Office of Personnel Management has some explaining to do.

Cyberthieves have pilfered the personal information of millions of federal employees – notably including the private data of those with security clearances – and the story seems to grow worse by the day.

While investigating a cyberattack on the information of about 4 million feds, officials discovered “a separate intrusion into OPM systems that may have compromised information related to the background investigations of current, former, and prospective Federal government employees, and other individuals for whom a federal background investigation was conducted,” Samuel Schumach, OPM’s press secretary, said Sunday.



WASHINGTON – Today, the Federal Emergency Management Agency (FEMA) launched a National Flood Insurance Program (NFIP) call center pilot program to serve and support policyholders with the servicing of their claims.

Flood insurance claims can be complicated, and policyholders may have questions in the days and weeks following a disaster.

The NFIP call center is reachable at 1-800-621-3362, and will operate from 8 a.m. to 6 p.m. (CDT) Monday through Friday. Specialists will be available to assist policyholders with the servicing of their claims, provide general information regarding their policies, and/or offer technical assistance to aid in recovery.

For those who prefer to put their concerns in writing, a “Request for Support” form is posted at www.fema.gov/national-flood-insurance-program, which can be filled out and emailed or faxed to 540-504-2360.

Call center staff will be able to answer questions, such as “How do I file a flood insurance claim? What type of documentation is needed? Can I still obtain disaster assistance even though I have a flood policy?” as well as more complicated insurance questions about the extent of coverage, policy ratings, and more.  The call center will also be open to disaster survivors who have general questions about the NFIP.

“Flood insurance provides residents with the ability to protect themselves financially against the most common disaster we see in America,” said Roy Wright, Deputy Associate Administrator for the Federal Insurance and Mitigation Administration. “We’re providing this new resource to ensure that the people we serve have another way to get information they may need to understand how flood insurance works and how to navigate the claims process.  This hotline also provides us with a direct connection to policyholders themselves should they have concerns to report about how their claims are being handled, enabling us to take prompt action to ensure that they receive every dollar they are owed under their policies.”

Flood insurance plays a critical role in assisting survivors on their road to recovery. Like other types of insurance, it does not cover all losses, but it is the first line of defense against a flood. While the policy payouts won’t make the insured whole, our top priority is to ensure policyholders get what they are due under their coverage. This initiative is part of FEMA’s ongoing commitment to effective, long-term improvements to the NFIP.



Tuesday, 16 June 2015 00:00

Selecting the Right Kind of Cloud

Saying that the cloud is becoming more specialized is like saying the days are getting longer now that summer is here: It is such a natural phenomenon that it barely needs to be stated.

But I’m going to state it anyway, because this facet of cloud computing alone will probably do more to capture critical enterprise loads and break down the psychological barriers to cloud adoption than any mere technological development.

Across a number of fronts, organizations are gaining the ability to deploy not just the cloud, but a highly specialized data ecosystem tailored to specific functions, industry verticals and even individuals. In a way, this follows that same pattern of software development in general, except that now the application software is backed by a cloud component that caters to its every whim.



Tuesday, 16 June 2015 00:00

Mastering IT Risk Assessment

The foundation of your organization’s defense against cyber theft is a mastery of IT risk assessment. It is an essential part of any information security program, and in fact, is mandated by regulatory frameworks such as SSAE 16, SOC 2, PCI DSS, ISO 27001, HIPAA and FISMA.

Compliance with those frameworks means that your organization not only has to complete an IT risk assessment but it must also assess and address the risks by implementing security controls.

In the event of a breach, an effective IT risk management plan – one that details exactly what your IT department is going to do and how they’re going to do it – together with implementation of the critical security controls has the potential to save your organization millions of dollars in direct response costs, legal fees, regulatory fines, and costs associated with rebuilding a damaged corporate reputation.
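
As a rough illustration of the core of an IT risk assessment, the sketch below scores a hypothetical risk register as likelihood times impact on a 5x5 scale. The assets, threats and scores are invented for illustration and are not prescribed by any of the frameworks named above:

```python
# Minimal qualitative risk assessment: score each risk as likelihood x impact
# and rank the register so controls can be applied to the worst risks first.
risks = [
    {"asset": "customer database", "threat": "SQL injection",   "likelihood": 4, "impact": 5},
    {"asset": "employee laptops",  "threat": "device theft",    "likelihood": 3, "impact": 3},
    {"asset": "backup tapes",      "threat": "loss in transit", "likelihood": 2, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]  # 1-25 on a 5x5 scale

# Address the highest-scoring risks first with security controls.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['score']:>2}  {r['asset']}: {r['threat']}")
```

Real frameworks add asset valuation, control mapping and residual-risk tracking, but the ranking step above is the decision-making heart of the exercise.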



Thanks to a new report from Trustwave, it is easy to see why cybercrime has become so prevalent. It pays very well.

The 2015 Trustwave Global Security Report (free download with registration) looked at all sorts of issues on the cybersecurity front, from spam to passwords to where compromises are actually happening. The report presents a fascinating, all-encompassing look at the state of cybersecurity today; unfortunately, the picture it paints isn’t pretty.

The bit of information that appears to have caught the most attention is how lucrative cybercrime is for hackers. The report stated that hackers receive an estimated 1,425 percent return on investment for exploit kit and ransomware schemes, or nearly $6,000 for a single ransomware campaign. That’s a stunning amount of money. TechWeek Europe explained why cybercrime is so lucrative:
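
The ROI arithmetic itself is simple: profit divided by cost. The sketch below uses hypothetical cost and revenue figures (not taken from the report) to show how a roughly 1,425 percent return works out:

```python
def roi_percent(revenue, cost):
    """Return on investment as a percentage: profit over cost."""
    return (revenue - cost) / cost * 100

# Hypothetical figures for illustration only (not from the report):
cost = 5_900       # e.g., exploit kit rental, hosting, traffic
revenue = 89_975   # e.g., ransom payments collected
print(round(roi_percent(revenue, cost)))  # 1425
```

For comparison, a legitimate business would be delighted with a fraction of that return, which goes a long way toward explaining the report's findings.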



(TNS) — Energy firms in Wyoming are being urged to take precautions against potential cybersecurity attacks.

Michael Bobbitt, a supervisory special agent with the FBI, told attendees at the Wyoming Infrastructure Authority's energy conference on Friday that companies should be aware of the growing number of threats on both the national and international levels.

Bobbitt is the team leader for the FBI's criminal and national cybersecurity squad in the agency's Denver office.

He said any business that uses computers faces the risk of being hacked or exposed to a cyberattack.



As important as it is for managed service providers (MSPs) to protect their clients from external threats, it can be just as important to protect organizations from themselves. By managing security and access in cloud data storage and cloud-based file sharing, MSPs can help to prevent employee misuse within an organization.

Over the past couple of years, news around the world has been littered with narratives of major security breaches by outside hackers. As organizations (and MSPs) rush to patch any openings in their defenses against external invaders, they had better be just as cognizant of the potential threats that can compromise their data from inside their own walls.



Remember the U.S. Office of Personnel Management (OPM) data breach that was reported earlier this month? OPM officials last week said the incident now appears to have affected millions of federal employees and contractors.

And as a result, the OPM once again tops this week's list of IT security news makers to watch, followed by Microsoft (MSFT), the "Punkey" malware and Blue Shield of California.

What can managed service providers (MSPs) and their customers learn from these IT security news makers? Check out this week's list of IT security stories to watch to find out:



Business continuity and disaster recovery are two common reasons why organizations consider cloud migration, but sometimes the decision to migrate is put off due to fears that the process will be difficult. In this article, Lilac Schoenbeck offers some tips to help smooth the migration path.

Are you looking to utilise the business continuity and disaster recovery advantages that the cloud offers? Are you running out of data centre space? Do you need to reduce the time spent maintaining physical hardware? The reasons to transition to cloud continue to stack up, and stories about cloud benefits and successes are only becoming more prominent. Still, many organizations and IT teams continue to be wary of making the move because of the challenges associated with migrating their applications.

The good news? Cloud migration does not have to be as daunting as it once was. Others have helped pave the way, establishing best practices and systematic approaches to ease the process. Here are six tips to help make your migration a smooth one:



In a world of constantly emerging threats, security is a tough job, but the concepts of best practice have been devised for a reason. The challenge for organizations is to strike a balance between unworkable change control practices and an anarchic environment that gives attackers ample opportunities to hide.

However strong the perimeter security, in the vast majority of organizations there are far too many opportunities for hackers or malware attacks to slide in undetected.

Forensic-level monitoring of system changes provides a means whereby subtle breach activity can be exposed, but just having the means to detect changes is only part of the solution.

In the same way that seemingly clear pond water is revealed to be teeming with life when placed under a microscope, the daily noise created by critical upgrades, system patches and required updates is overwhelming once it becomes visible. When it comes to breach detection, it is virtually impossible to distinguish the expected file and registry changes these produce from nefarious activity.
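The mechanics behind this kind of forensic change monitoring can be sketched simply: take a baseline of file hashes, compare it against the current state, and subtract a whitelist of changes expected from known patch cycles. The paths and whitelist here are hypothetical; the sketch only illustrates the baseline-and-diff idea, not any particular product.

```python
import hashlib
from pathlib import Path

def snapshot(root):
    """Baseline: map each file path under `root` to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def changed_files(baseline, current, expected=()):
    """Return files added, removed or modified since the baseline,
    minus whitelisted paths known to change during approved updates."""
    paths = set(baseline) | set(current)
    diffs = {p for p in paths if baseline.get(p) != current.get(p)}
    return diffs - set(expected)
```

The hard part the article describes is not the diff itself but curating the `expected` set so that legitimate patch noise does not drown out the one change that matters.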



Monday, 15 June 2015 00:00

What to Expect from a FEMA Inspection

After you register for assistance, an inspector from the Federal Emergency Management Agency (FEMA) will call you for an appointment to inspect your damaged property.

Q. Why is the inspector there?
A. Verifying disaster damage is part of the process to establish the amount and type of damage you suffered.  The inspectors have construction backgrounds and are fully qualified to do the job.

Q. How do I know the Inspector is from FEMA?
A. You should ask to see the inspector's identification.  All FEMA housing inspectors will have a FEMA badge displayed. Also, each disaster survivor is provided a unique FEMA registration number when they register for assistance.  The inspector will know your FEMA registration number.

If you have concerns with the legitimacy of a FEMA housing inspector, you should contact your local law enforcement as they will be able to validate their identification. 

Q. What does the inspector look for?
A. The inspector determines whether the house is livable by checking the structure, including heating, plumbing, electrical, flooring, wallboard, and foundation.

Q. How about personal property?
A. Damage to major appliances - washer, dryer, refrigerator, stove - is assessed. Other serious needs such as clothing lost or damaged in the disaster are surveyed.

Q. Do I need to have any paperwork on hand?
A. Some evidence that the property is your usual residence or evidence that you own the property will be required.  It might be a recent utility bill, mortgage payment record, or rent receipts.

Q. Will I find out the results of the inspection?
A. If you are eligible for assistance, you will receive a check in the mail.  You will be notified by letter if you are not eligible.  You have 60 days to appeal the decision, and the appeal process is outlined in the letter.

Q. What other inspections should I expect?
A. Depending on the types of assistance for which you may be eligible, your losses may be verified by FEMA, the U.S. Small Business Administration (SBA), and your local building inspector's office.

Heat is a form of energy, and energy is a commodity. And commodities, of course, can be sold for a profit.

So it is something of a mischaracterization to say that data centers are constantly dealing with the problem of waste heat when what is really going on is that they are failing to capitalize on their heat-generating capabilities.

But a few are starting to realize the commercial possibilities of the heat coming off the server racks. Probably the most innovative is the Foundry Project in Cleveland, Ohio, which is pumping heat from an underground data center to a $4.5 million co-located fish farm devoted to raising Mediterranean sea bass. The data center itself will measure about 40,000 square feet and is linked by three 100Gbps fiber networks. Foundry executives say they already have a client lined up but have yet to reveal a name. Meanwhile, the fish farm is expected to produce about 500,000 pounds per year, and waste from the fish will be delivered to a nearby orchard as fertilizer.



Forrester research analyst Michael Gualtieri made a bold prediction at this week’s Hadoop Summit. Gualtieri told attendees that 100 percent of all large enterprises eventually would adopt some form of Hadoop, according to Information Week Editor-at-Large Charles Babcock.

Babcock points out that Hadoop has a way to go, since actual deployment is currently around 26 percent, with only 11 percent planning to invest in the next 12 months.

Still, I think Gualtieri’s prediction is reasonable. Enterprises tend to be more conservative than, say, Internet start-ups, so typically they try to hit that sweet spot between disruption and too late to the game. In fact, Capgemini’s research found that leading businesses are already using Big Data to disrupt markets and threaten their competitors.

“In our study, a surprising 64% of respondents said that big data is changing traditional business boundaries and enabling non-traditional providers to move into their industry,” the report, released earlier this year, notes. “Companies report a significant level of disruption from new competitors moving into their industry from adjacent industries (27%), and over half (53%) expect to face increased competition from start-ups enabled by data.”



(TNS) — When Justin McQuillen died in 1994 after being hit by a pitched baseball, the technology for automated external defibrillators was not as sophisticated as it is today.

Today, the lightweight, portable devices can check a person’s heart rhythm, recognize when a shock is required and advise the rescuer when to administer it.

Some AEDs use voice prompts, lights and even text messaging to tell the user what steps to take. Most range in cost from $1,500 to $2,000, according to the American Heart Association, though less expensive models can be found.

McQuillen, 9, of Honey Brook, Pa., died in May 1994 after being struck in the chest with a baseball in a Twin Valley youth league game. An AED was not immediately available at the field.



OKLAHOMA CITY – Oklahoma residents whose properties were damaged in the recent storms and flooding are warned to be alert for, and urged to report, any potential fraud during recovery and rebuilding efforts, according to the Oklahoma Department of Emergency Management and the Federal Emergency Management Agency.

The aftermath of a disaster can attract opportunists and confidence artists. Homeowners, renters and businesses can follow some simple steps to avoid being swindled.

Be suspicious if a contractor:

  • Demands cash or full payment up front for repair work;
  • Has no physical address or identification;
  • Urges you to borrow to pay for repairs, then steers you to a specific lender or tries to act as an intermediary between you and a lender;
  • Asks you to sign something you have not had time to review; or
  • Wants your personal financial information to start the repair or lending process.

To avoid fraud:

  • Question strangers offering to do repair work and demand to see identification;
  • Do your own research before borrowing money for repairs. Compare quotes, repayment schedules and rates. If they differ significantly, ask why;
  • Never give any personal financial information to an unfamiliar person; and
  • Never sign any document without first reading it fully. Ask for an explanation of any terms or conditions you do not understand.

Disasters also attract people who claim to represent charities but do not. The Federal Trade Commission warns people to be careful and follow some simple rules:

  • Donate to charities you know and trust. Be alert for charities that seem to have sprung up overnight.
  • If you’re solicited for a donation, ask if the caller is a paid fundraiser, whom they work for, and the percentage of your donation that will go to the charity and to the fundraiser. If you don’t get a clear answer — or if you don’t like the answer you get — consider donating to a different organization.
  • Do not give out personal or financial information – including your credit card or bank account number – unless you know the charity is reputable.
  • Never send cash: you can’t be sure the organization will receive your donation.
  • Check out a charity before you donate. Contact the Better Business Bureau’s Wise Giving Alliance at www.give.org.

If you believe you are the victim of a contracting scam, price-gouging or bogus charity solicitations, contact local law enforcement and report it to the Oklahoma Office of the Attorney General. Find a complaint form online at www.ok.gov/oag. The Federal Trade Commission takes complaints at www.ftc.gov/complaint.

Many legitimate people — insurance agents, FEMA Disaster Survivor Assistance personnel, local inspectors and actual contractors — may have to visit your storm-damaged property. Survivors could, however, encounter people posing as inspectors, government officials or contractors in a bid to obtain personal information or collect payment for repair work. Your best strategy to protect yourself against fraud is to ask to see identification in all cases and to safeguard your personal financial information. Please keep in mind that local, state and federal employees do not solicit or accept money for their services to the citizens.

All FEMA employees and contractors will have a laminated photo ID. A FEMA shirt or jacket alone is not proof of identity. FEMA generally will request an applicant's Social Security or bank account numbers only during the initial registration process. However, FEMA inspectors might require verification of identity. FEMA and U.S. Small Business Administration staff never charge applicants for disaster assistance, inspections or help filling out applications. FEMA inspectors verify damages but do not recommend or hire specific contractors to fix homes.


Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

The Oklahoma Department of Emergency Management (OEM) prepares for, responds to, recovers from and mitigates against emergencies and disasters. The department delivers service to Oklahoma cities, towns and counties through a network of more than 350 local emergency managers.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow us on Twitter at www.twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners, and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955 or visiting SBA’s website at www.sba.gov/disaster.

I’ve seen the strides made in cloud security over the years, but a couple of new studies show that there is still a long way to go.

The study from Netskope found that sensitive data stored in the cloud has a one in five chance of being exposed. Okay, the flip side to that is a four out of five chance that your sensitive data won’t be exposed, but when you are dealing with health information, Social Security numbers, and other data that could result in identity theft for unsuspecting consumers, that number isn’t good enough – at least not for those who are still wary about migrating to the cloud.

Cloud storage apps are the primary culprit of data loss, accounting for 90 percent of all data loss prevention violations. This result was a surprise, Sanjay Beri, Netskope's CEO and founder, told eSecurity Planet:



As emergency management evolves as a profession and grows in diversity, there’s a blending of personalities, viewpoints and different structures that come to the fore. People will come from different backgrounds, experiences and professions and have different styles and perspectives. They can blend to become a healthy whole, said Nim Kidd, Texas Division of Emergency Management chief, in a keynote address at the 2015 National Homeland Security Conference this week in San Antonio.

Kidd came from the fire service and acknowledged that his experience and style is different from others rising in the emergency management ranks from the military, law enforcement, health care and academia. None of those have the market cornered on the “right way” to do things, and there are advantages and disadvantages to how each communicates and approaches situations.

For instance, law enforcement isn’t known for being the best at communicating information, for good reason and sometimes not so good. The military and fire service bring invaluable experience to the emergency management field, and what health care and academia lack in experience, they make up for in knowledge and information.



One of the most effective risk management philosophies is to work smarter, not harder, using holistic tools such as predictive analytics to minimize risk. More often than not, companies implement blanket management programs, applying the same strategies to all employees regardless of performance. With this approach, employers waste time and effort focusing on employees who are not at risk, leaving room for at-risk employees to go unnoticed. On an opposing front, many companies use the “squeaky wheel” approach, diverting all of their attention to employees who actively demonstrate troublesome behaviors. While this approach targets a greater number of at-risk employees, it still leaves room for some to go undetected.

Alternatively, a strategic employee-specific management program allows employers to identify at-risk employees regardless of how “squeaky” they are. The theory behind an employee-specific management program is simple – monitor your employees for changes that indicate risky behavior.
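A minimal sketch of the per-employee idea, assuming each employee's activity can be reduced to a numeric metric over time (weekly file downloads is a hypothetical choice): flag a new observation only when it deviates sharply from that employee's own history, rather than applying one blanket threshold to everyone.

```python
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a value more than `threshold` standard deviations away from
    this employee's own historical mean (a per-employee baseline)."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# An employee who usually downloads ~10 files a week suddenly downloads 500.
history = [9, 11, 10, 12, 8, 10]
print(is_anomalous(history, 500))  # True
print(is_anomalous(history, 11))   # False
```

A real program would track many signals and weigh false positives carefully; the point of the sketch is that the baseline is the individual employee, so a "quiet" employee's unusual behavior is just as visible as a squeaky wheel's.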



WASHINGTON – Today, the Federal Emergency Management Agency (FEMA) launched a new data visualization tool that enables users to see when and where disaster declarations have occurred across the country. As hurricane season kicks off, the tool helps provide important information about the history of hurricanes and other disasters in users' communities and what residents can do to prepare.

The data visualization tool is accessible at fema.gov/data-visualization and allows users to view and interact with a wide array of FEMA data. Through an interactive platform, users can view the history of disaster declarations by hazard type or year and the financial support provided to states, tribes and territories, and access public datasets for further research and analysis. On the site, you can see compelling visual representations of federal grant data as it relates to fire, preparedness, mitigation, individual assistance and public assistance.

“We have a wealth of data that can be of great use to the public,” said FEMA’s Deputy Administrator of Protection and National Preparedness Tim Manning. “By providing this information in a way that is visual and easy to understand, people will be moved to action to prepare their families and communities.”

The data visualization tool builds on FEMA’s commitment to transparency by making it easy to convert historical data – already available via the OpenFEMA initiative - into a readable and interactive map. Users can see the types of disasters that have occurred in their community and FEMA’s support to build and sustain the capabilities needed to prevent, protect, mitigate against, respond to, and recover from those threats and hazards in the future. The tool also provides ways for users to take action to prepare for future disasters by supporting community preparedness planning, providing information on individual preparedness actions people can take, or joining a local Citizen Corps program.

FEMA encourages all individuals to interact with the tool, learn more about the emergency management process, and provide feedback. FEMA will continue to develop additional visualizations based on feedback and the availability of public data.


FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.

The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

Thursday, 11 June 2015 00:00

Rising Concerns Over Next Global Pandemic

As South Korean authorities step up efforts to stop the outbreak of Middle East Respiratory Syndrome, or MERS, from spreading further, the president of the World Bank Jim Yong Kim has warned that the next global pandemic could be far deadlier than any experienced in recent years.

Speaking in Frankfurt earlier this week, Dr Kim said Ebola revealed the shortcomings of international and national systems to prevent, detect and respond to infectious disease outbreaks.

The next pandemic could move much more rapidly than Ebola, Dr Kim noted:

“The Spanish Flu of 1918 killed an estimated 25 million people in 25 weeks. Bill Gates asked researchers to model the effect of a Spanish Flu-like illness on the modern world, and they predicted a similar disease would kill 33 million people in 250 days.”



Thursday, 11 June 2015 00:00

Look Who’s Doing Risk Management

If you’re wondering how much risk management should become part of your organisation’s rulebook, you may already be looking around to see who else is doing it. Insurers and bankers are obvious examples, because their businesses are centred on risk calculation, whether in terms of setting insurance premiums or defining credit interest rates. Many insurers are also ready to discuss risk management with potential customers in a variety of different industry sectors. These can range from agriculture and aviation to sports and transportation. However, there are other perhaps unexpected examples that show how far the concept of risk management has spread in general.



Thursday, 11 June 2015 00:00

Is It Time for the Data Center OS?

It doesn’t take a lot of imagination to see the digital ecosystem as a series of concentric circles. On the processor level, there are a number of cores all linked by internal logic. The PC contains multiple chips and related devices controlled by an operating system. The data center ties multiple PCs, servers, storage devices and the like into a working environment, and now the cloud is connecting multiple data centers across distributed architectures.

At each circle, then, there is a collection of parts overseen by a software management stack, and as circles are added to the perimeter, the need for tighter integration within the inner architectures increases in order to better serve the entire data ecosystem.

It is for this reason that many data architects are warming to the idea of the data center operating system. With the data center now just a piece of a larger computing environment, it makes no more sense to manage pieces like servers, storage and networking on an individual basis than to have multiple OS’s on the PC, one for the processors, another for the disk drive, etc. As tech investor Sudip Chakrabarti noted on InfoWorld recently, the advent of virtualization, microservices and scale-out infrastructure in general is fueling the need to manage the data center as a computer so the distributed architecture can assume the role of the data center.



More than 20% of consumers use passwords that are more than 10 years old, and 47% use passwords that have not been changed in five years, according to a recent report by account security company TeleSign. What’s more, respondents had an average of 24 online accounts, but only six unique passwords to protect them. A total of 73% of accounts use duplicate passwords.

Consumers recognize their own vulnerability. Four out of five consumers worry about online security, with 45% saying they are extremely or very concerned about their accounts being hacked – something 40% of respondents had experienced in the past year.



(TNS) — The newly appointed director of the National Flood Insurance Program said the organization needs to focus more on the welfare of disaster victims and rethink gaps in coverage that bedeviled homeowners after superstorm Sandy.

Roy Wright, who takes over the federal program next week, said in an interview Tuesday that flood insurance policies have become laden with complex loopholes that nickel-and-dime homeowners and undermine their ability to rebuild after floods.

"The center of gravity needs to continue to shift in favor of the policyholder," Wright said.



Many organizations are hesitant to adopt cloud services for cloud storage and cloud-based file sharing.  Although there are many customers that don’t understand cloud security (and don’t want to), slow adopters present a special challenge for managed service providers (MSPs).

One way to sell organizations on the importance of their own security practices is to point out how far cloud services have come in terms of safety and reliability.  How can you do this?  Here are some ways that MSPs can convince slow-to-adopt organizations to take responsibility for their data security:



Andrew MacLeod argues that insights into, and more importantly understanding of, an organization’s culture help to ascertain the risk appetite of an organization and can therefore be used to enhance organizational resilience. For an organization to truly enhance its resilience it needs to embed a culture of resilience at every level.

By Andrew MacLeod BA (Hons) MBCI

“The concept of organizational culture must be recognised as one of vital importance to the understanding of organization and all activities and processes operating within and in connection with organization.” (Brooks, 2003)

As Brooks states, the concept of culture and therefore insights into its operation within an organization are fundamental. However, to fully understand how culture can enhance organizational resilience, one must be clear about what is meant by both organizational resilience and organizational culture. This paper will define organizational resilience in the contemporary context and explore what is meant by culture. It will be demonstrated that culture is a complex field of study and that every organization has its own unique culture which is interwoven with concepts of individual and national culture. This paper will argue that insights into, and more importantly understanding of, an organization’s culture help to ascertain the risk appetite of an organization and these insights can be used to enhance organizational resilience. It will be shown that for an organization to truly enhance its resilience it needs to embed a culture of resilience at every level.



Businesses often struggle on with legacy server rooms due to budget constraints and fear of upgrade risks. In this article Mark Allingham challenges BC managers to face up to this problem.

One of the basic rules of business continuity management is to ensure that everyday information technology systems are protected and fit for purpose, but often businesses struggle on with legacy server rooms. Mark Allingham challenges BC managers to face up to this problem.

The server room is the beating heart of any but the smallest business. You rely on your servers for vital files, essential information and the day to day running of the organization, so any risk of failure is a considerable threat to business continuity. Legacy server rooms with outdated equipment and limited capacity are liable to power outages, downtime and worse. So any business continuity manager should consider carefully whether their existing server room is fit for purpose.



It seems like once a week, we see yet another story about a security failure involving passwords. In May alone, for instance, the news came that an unpatched vulnerability in Oracle’s PeopleSoft could open a hole for thieves to steal passwords; Google revealed that those security questions that help you retrieve a lost password are anything but secure; and Starbucks blamed passwords for its own recent hack attack.

It’s no wonder, then, that passwords (and usernames) were a popular topic at the RSA Conference this year. One of those speaking about the problem of passwords, Phillip Dunkelberger, president and CEO at Nok Nok Labs, said a number of significant problems with passwords make them a poor single method of authentication.

“First, passwords are a symmetric secret – we enter a password on our PC or smartphone that is matched up on a server, this means that organizations are holding hundreds of millions of passwords in large databases. Despite using techniques such as salting and hashing of password databases, security professionals have found it practically impossible to secure this infrastructure, so passwords are very vulnerable to massive, scalable hacks,” he said.
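The salting and hashing Dunkelberger mentions slows attackers down even though it is not a complete fix. A minimal sketch using Python's standard library is below; PBKDF2 with a per-user random salt is one common construction, and the iteration count shown is illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a storable record: a per-user random salt plus a
    PBKDF2-HMAC-SHA256 digest. The same password hashed twice yields
    different records, defeating precomputed rainbow-table attacks."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=600_000):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Even with this in place, as the article notes, the server still holds a symmetric secret for every user, which is exactly why a leaked database remains so valuable to attackers.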



SURAT, India — “I don’t have to go to the gym,” says Urmil Kumar Vyas with an impish smile. “Don’t you think climbing 400 steps is enough exercise for a day?”

Vyas and I are wending our way toward a high-rise building in one of the wealthier zones of Surat, a city of 5 million in western India about five hours north of Mumbai. Vyas is a primary health worker in the Surat Municipal Corporation’s Vector Borne Diseases Control Department. He has spent 21 years on the job, and has seen his share of sickness and death. But his energy and sense of humor remain intact.

Vyas joined the city workforce in 1994, the year Surat exploded onto the front pages of newspapers worldwide in the aftermath of a virulent plague. More than 50 people died. Hundreds of thousands more, including migrant workers, fled the city out of fear; businesses across the city shut down.



Enterprises will account for 46 percent of Internet of Things (IoT) device shipments this year, BI Intelligence predicts. That’s not surprising when you consider the incredible predictions around IoT savings (billions, according to Business Insider) and IoT revenues ($14.4 trillion by 2022, according to this Forbes column).

But first, there will be raw data — terabytes of it, warns Elle Wood in a recent post for analytics vendor AppDynamics’ blog.

“With a sensor on absolutely everything – from cars and houses to your family members – it goes without saying there will be some challenges with these massive amounts of data,” Wood writes. “After all, IoT isn’t just about connecting things to the Internet; it’s about generating meaningful data.”



Wednesday, 10 June 2015 00:00

Quantifying supply chain risk

Today, more businesses around the world depend on efficient and resilient global supply chains to drive performance and achieve ongoing success. By quantifying where and how value is generated along the supply chain and overlaying the array of risks that might cause the most significant disruptions, risk managers will help their businesses determine how to deploy mitigation resources in ways that will deliver the most return in strengthening the resiliency of their supply chains. At the same time, they will gain needed insights to make critical decisions on risk transfer and insurance solutions to protect their companies against the financial consequences of potential disruptions.

As businesses evaluate their supply chain risk and develop strategies for managing it, they might consider using a quantification framework, which can be adapted to any traditional or emerging risk.
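At its simplest, such a quantification framework reduces to expected-value arithmetic: for each risk, multiply the annual probability of disruption by its financial impact, then sum. The supplier profile below is entirely hypothetical, and a real framework would also weigh disruption duration, recovery time and correlated failures.

```python
def expected_annual_loss(risks):
    """Sum of (annual disruption probability x financial impact) over
    all identified risks -- a crude expected-value model."""
    return sum(prob * impact for prob, impact in risks)

# Hypothetical supplier profile: (annual disruption probability, impact in $)
supplier_risks = [
    (0.05, 2_000_000),  # port closure delaying inbound components
    (0.10, 500_000),    # single-source component shortage
    (0.02, 5_000_000),  # fire at a key factory
]
print(expected_annual_loss(supplier_risks))  # 250000.0
```

Ranking suppliers or routes by this figure is one way to decide where mitigation spending or risk-transfer products would deliver the most return, as the article suggests.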



Helping your clients remain compliant with the laws and standards set forth by the governing bodies presiding over their industries is an essential component of the role of managed service providers (MSPs).  When it comes to protecting sensitive data being stored in the cloud or transmitted via cloud-based file sharing, MSPs often need to protect their clients from themselves.

Among the industries that appear to be fighting this battle against their own personnel, perhaps none is more scrutinized than the healthcare industry. While there are many strict stipulations in place for handling sensitive health data, there are also many employees that have access to the data from a host of endpoints.

The healthcare industry’s HIPAA regulations go a long way towards ensuring that the private, sensitive, personal information of patients is handled very carefully. What the regulations don’t stipulate well enough, however, is the management of an organization’s own administrative, physical, and technical safeguards.  According to HealthIT Security, “If a recent survey is any indication, health and pharmaceutical companies, along with other industries, might be falling behind when it comes to protecting sensitive data.”



Does resilience in your enterprise spring from its senior management as a source of inspiration to all? Or is it perhaps embedded in your organisational culture, lovingly nurtured and developed over the years? Either possibility would be gratifying. However, some recent information suggests that neither is the primary source of resilience. Researchers Sarah Bond and Gillian Shapiro surveyed 835 employees from a cross-section of firms in Britain and found that 90% of those employees considered their resilience to be inherently within themselves, while only 10% thought their organisation provided them with resilience. If this is true more generally, there are some important consequences for any enterprise to consider.



Not surprisingly, I’ve heard from a lot of people regarding the announcement of the Office of Personnel Management (OPM) breach, but what Andy Hayter, security evangelist for G DATA, told me in an email jumped out at me – in part because of the imagery but also because it was eerily similar to a thought that I had. Hayter said:

I have to think that it must appear to threat actors all over the globe that the U.S. government's IT systems are full of holes, like Swiss cheese, and the response from the U.S. is to play whack-a-mole every time, in a valiant attempt to close each hole. With all of these attacks, it’s likely that each one is arming cyber criminals with exactly what they need and want to execute another one, and the vicious cycle continues. Unfortunately every time there's another breach on a Federal agency, it spells out our vulnerabilities loud and clear to our adversaries, letting them know there are many more opportunities for them to hack our systems and networks over and over again.

Whack-a-mole security. It really is easy to think that way. The OPM breach is just the latest – and perhaps most damaging because of the vast amount of data that could be compromised – incident within the federal government, and now we are at a point where we’re going to wait for the next incident to pop up.



There really isn’t anything new under the sun. More than a century ago, Nikola Tesla made great strides toward his dream of the wireless transmission of electricity. Tesla came up short, but his dream is increasingly coming true today.

Popular Science and InformationWeek report on research from the University of Washington that could pave the way for devices to be charged by Wi-Fi. The InformationWeek story says that the approach, which of course is called power over Wi-Fi (PoWiFi), could work at up to 28 feet. Prototypes (temperature and camera sensors) are operational to 20 feet.

Popular Science has more detail, saying that about 1 watt of power is transmitted as a normal part of Wi-Fi operations. The technology is aimed at capturing and putting that energy to work. The 1 watt of power isn’t enough to charge phones or perform other higher-level jobs. However, many tasks associated with the Internet of Things (IoT) can be satisfied. Wrote Dave Gershgorn:

This technology isn’t new. Companies like Energous have already brought products to market that send power over similar Wi-Fi signals, and they claim to be able to charge cell phones. Yet the novel feature of PoWiFi is the ability to harness power with pre-existing hardware, and the University of Washington team says their routers transmit both power and data in the same signal.



Tuesday, 09 June 2015 00:00

Getting a Handle on This Dev/Ops Thing

Is Dev/Ops for real, or is it simply the latest marketing tool to get you to buy more stuff for your data center? Or is it a little of both, a potentially revolutionary change to enterprise infrastructure management provided you can see through all the Dev/Ops-washing that is going on?

As with most technology initiatives, the concept behind Dev/Ops is solid – it offers a more flexible approach to the data-resource allocation challenges present in hyperscale and Big Data environments. But by the same token, success or failure is usually determined by the execution, not the initial design. So the real challenge with Dev/Ops is not in selecting the right platform but in taking the designs and concepts currently in the channel and making them your own.



Who was responsible for the recent U.S. Office of Personnel Management (OPM) data breach? Congressman Michael McCaul told CBS News that Chinese hackers could be the culprits in the incident that resulted in the theft of personal information from more than 4 million current and former federal employees.

And as a result, the OPM tops this week's list of IT security newsmakers to watch, followed by U.S. HealthWorks, the Dyre malware and CTERA Networks.

What can managed service providers (MSPs) and their customers learn from these IT security news makers? Check out this week's list of IT security stories to watch to find out:



(TNS) — Drone photography could soon take off for Victoria, Texas’ emergency responders.

Compared to the time, cost and challenges associated with using helicopters for search and rescue, drones could be a game-changer for the future of emergency response, said Emergency Management Coordinator Rick McBrayer.

Emergency responders used a drone at no cost to taxpayers to track in real time the Guadalupe River flood through Victoria. Now, officials are exploring the legality and permitting process to use drones again.



Tuesday, 09 June 2015 00:00

Attivio Updates Big Data Indexing Engine

For all the excitement that Big Data often generates within an organization, one of the fundamental challenges most of them face comes down to data management plumbing. There’s no shortage of data, but organizing all of it in a way that makes it consumable by a Big Data analytics application is problematic.

To help IT organizations manage that process better, Attivio today launched an update to its namesake enterprise indexing engine, adding a range of self-service capabilities that let business analysts and data scientists identify and unify self-selected data tables from the universal index.

Attivio CEO Stephen Baker says Attivio is squarely focused on applying search and indexing technologies to better manage data assets within an enterprise. All too often, IT organizations have hundreds of enterprise applications, but no one is quite sure what data resides inside each. As a result, these same organizations wind up investing in hiring a data scientist, only to watch the person spend months trying to organize all the data inside the organization. Attivio, says Baker, provides a mechanism to reduce the manual effort associated with integrating all that data by as much as 80 percent.



What cloud services should managed service providers (MSPs) sell to customers, and how profitable can those services really be? These are questions that MSPs are grappling with every day right now. Service Leadership CEO Paul Dippell presided over several sessions at LabTech Automation Nation 2015 last week that provided perspective on these questions.

Here are some of the takeaways from a couple of Dippell's sessions, including an overview of the cloud market from his company and some real-world perspective from a panel of MSPs. Let’s start with an overview of the cloud market today.



(TNS) — In a narrow parking lot, Brett Kennedy and Sisir Karumanchi stand around what looks like a suitcase. But then four limbs extend from its sides, bending and clicking into position. Two spread out like legs and two rise up like arms as the robot goes through several poses, looking for all the world like a Transformer doing yoga.

This is RoboSimian, a prototype rescue robot whose builders at NASA's Jet Propulsion Laboratory hope can win the $2-million prize at the DARPA Robotics Challenge. The goal: to foster a new generation of rescue robots that could help save lives when the next disaster hits.

Twenty-four teams from around the U.S. and the globe have sent their best and brightest bots to compete in a grueling obstacle course — a robot Olympics, if you will.



(TNS) — On May 23, the extended Taylor family had just sat down for dinner at their River Road house when the phone rang. It was a pre-recorded call from Hays County emergency officials warning residents with homes along the Blanco River that the water was rising quickly and flooding was likely.

It was the first of several such calls his father-in-law took during the course of the meal, recalled Scott Sura. “But he sort of brushed it off. He’s been through several floods, and he wasn’t worried. In fact, he later went to bed.”

Across the river and downstream, on Flite Acres Road, Frances Tise said she and her husband Charles also fielded the emergency calls that evening. “But I had seen the river rise before, and it just came up to our backyard,” she said. “We just didn’t realize how fast it was coming up.”



AUSTIN, Texas – State and federal recovery officials urge Texans affected by the ongoing severe storms and floods to watch for and report any suspicious activity or potential fraud.

Even as government agencies and charitable groups continue to provide disaster assistance, scam artists, identity thieves and other criminals may attempt to prey on vulnerable survivors. The most common post-disaster fraud practices include phony housing inspectors, fraudulent building contractors, bogus pleas for disaster donations and fake offers of state or federal aid.

“Scam attempts can be made over the phone, by mail or email, or in person,” said Federal Coordinating Officer Kevin Hannes of Federal Emergency Management Agency (FEMA). “Con artists are creative and resourceful, so we urge Texans to remain alert, ask questions and require identification when someone claims to represent a government agency.”      

Survivors should also keep in mind that state and federal workers never ask for or accept money, and always carry identification badges with a photograph. There is no fee required to apply for or to get disaster assistance from FEMA, the U.S. Small Business Administration (SBA) or the state. Additionally, no state or federal government disaster assistance agency will call to ask for your financial account information; unless you place a call to the agency yourself, you should not provide personal information over the phone – it can lead to identity theft.

Those who suspect fraud can call the FEMA Disaster Fraud Hotline at 866-720-5721 (toll free). Complaints may also be made to local law enforcement agencies.

Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.  Follow us on Twitter at https://twitter.com/femaregion6.

The SBA is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955 or visiting SBA’s website at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call (800) 877-8339.


In a small, rural town in Southern Indiana, a public health crisis emerges.  In a community that normally sees fewer than five new HIV diagnoses a year, more than a hundred new cases are diagnosed and almost all are coinfected with hepatitis C virus (HCV).

How was this outbreak discovered, and what caused this widespread transmission? Indiana state and local public health officials – supported by CDC – set out to answers these questions and help stop the spread of HIV and HCV in this community.

The Outbreak

In January 2015, Indiana disease intervention specialists noticed that 11 new HIV diagnoses were all linked to the same rural community. This spike in diagnoses, in an area never before considered high-risk for the spread of HIV, launched a larger investigation into the cause and impact of these related cases.

The investigation began by examining the 11 newly diagnosed cases. This process involved talking to newly diagnosed individuals about their health and sexual behaviors, as well as past drug use. In the United States, HIV is spread mainly by having sex or sharing injection drug equipment such as needles with someone who has HIV.

Scanning electron micrograph of HIV-1 virions budding from a cultured lymphocyte.

In the case of the 11 related diagnoses in Indiana, almost all were linked to injection drug use. Investigators discovered that syringe-sharing was a common practice in this community, often to inject the prescription opioid Opana (oxymorphone, a powerful oral semi-synthetic opioid medicine used for pain). HIV can be spread through injection drug use when injection drug equipment, such as syringes, cookers (bottle caps, spoons, or other containers), or cottons (pieces of cotton or cigarette filters used to filter out particles that could block the needle), is contaminated with HIV-infected blood. The most common cause of HIV transmission from injection drug use is syringe-sharing. Persons who inject drugs (PWID) are also at risk for HCV infection, and co-infection with HCV is common among HIV-infected PWID: between 50% and 90% of HIV-infected persons who inject drugs are also infected with HCV.

The Investigation

“Contact tracing” is the process of identifying all individuals who may have potentially been exposed to an ill person, in this case a person infected with HIV.  Contact tracing involves interviewing the newly diagnosed patients to identify their syringe-sharing and sex partners.  These “contacts” are then tested for HIV and HCV infection, and if found infected are likewise interviewed to identify their syringe-sharing and sex partners. This cycle continues until no more new contacts are located.
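The tracing cycle described above is essentially an iterative graph traversal: interview, test, and repeat until no new contacts surface. As a rough illustration only (real contact tracing is a manual interview process, and the function and data names here are hypothetical), the loop can be sketched in Python:

```python
from collections import deque

def trace_contacts(index_cases, get_contacts, test_positive):
    """Iteratively trace contacts until no new ones are located.

    index_cases:   initially diagnosed individuals
    get_contacts:  returns a person's syringe-sharing and sex partners
    test_positive: returns True if a contact tests positive
    """
    interviewed = set()           # people whose contacts have been elicited
    positives = set(index_cases)  # index cases are already diagnosed
    queue = deque(index_cases)
    while queue:
        person = queue.popleft()
        if person in interviewed:
            continue
        interviewed.add(person)
        for contact in get_contacts(person):
            if contact in interviewed:
                continue
            if test_positive(contact):
                positives.add(contact)
                queue.append(contact)  # positives are interviewed in turn
            # negative contacts are tested but not interviewed further
    return positives
```

In a toy network where A shared syringes with B and C, and B with D, tracing from A reaches D even though A and D never met directly, which is exactly why the cycle repeats until no new contacts are found.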

As of May 18, contact tracing and increased HIV testing efforts throughout the community had identified 155 adult and adolescent HIV infections. The investigation has revealed that injection drug use in this community is a multi-generational activity, with as many as three generations of a family and multiple community members injecting together, and that, due to the short half-life of the drug, persons who inject drugs may have injected multiple times per day (up to 10 times in one case).

Early HIV treatment not only helps people live longer but also dramatically reduces the chance of transmitting the virus to others. People who do not have HIV but are at high risk for it can also take the drugs used to treat HIV in order to prevent acquiring it. This is known as pre-exposure prophylaxis (PrEP). Post-exposure prophylaxis, or PEP, is an option for those who do not have HIV but may have been exposed in a single event.

The Response


So what is the next step in addressing this staggering outbreak? First, public health officials must work to get every person exposed to HIV tested. All persons diagnosed with HIV need to be linked to healthcare and treated with antiretroviral medication. Persons not infected with HIV are counseled on effective prevention and risk reduction methods, including condom use, PrEP, PEP, harm reduction, and substance abuse treatment. Getting messages about the benefits of HIV treatment to newly diagnosed individuals, and prevention information to at-risk members of the community, are key components of controlling this outbreak.

The underlying factors of the Indiana outbreak are not completely unique. Across the United States, many communities are dealing with increases in injection drug use and HCV infections; these communities are vulnerable to similar HIV outbreaks. CDC asked state health departments to monitor data from a variety of sources to identify jurisdictions that, like this county in Indiana, may be at risk of an injection-drug-use-related HIV outbreak. These data include drug arrest records, overdose deaths, opioid sales and prescriptions, availability of insurance, emergency medical services, and social and demographic data. Although CDC has not seen evidence of another similar HIV outbreak, the agency issued a health alert to state, local, and territorial health departments urging them to examine their HIV and HCV surveillance data and to ensure prevention and care services are available for people living with HIV and/or HCV.

The work that has been done thus far, and the continued efforts being made in this response, highlight the importance of partnerships between federal, state, and local health agencies. The work done by the Indiana State Department of Health’s disease intervention specialists to link the initial HIV cases to this rural community, and the work of local health officials to respond quickly, investigate all possible exposures thoroughly, and spread important prevention information, demonstrate the critical importance of strong public health surveillance and response.

The Division of HIV/AIDS Prevention commends the efforts of all the individuals involved in controlling the HIV outbreak in Indiana. The response illustrates that together we are committed to improving the health of our communities across the nation.

In addition to announcing that it is making its core engine available as an open source Project Apex technology, DataTorrent has released an update to its Big Data analytics software for Hadoop that eliminates the dependencies organizations now have on developers to create these applications.

John Fanelli, vice president of marketing for DataTorrent, says the latest version of DataTorrent enables individuals to assemble Big Data analytics applications without having to write code. In addition, end users can make use of a library of visualizations to create dashboards in a matter of minutes.

Finally, DataTorrent 3.0 comes with pre-built connectors for integrating with both enterprise applications and custom Java applications in addition to graphical tools that make it simpler to ingest data into a Big Data application.



Are you embarking on an IT career? Are you maybe a few years in and looking to make a big move in your career if you can find the right opportunity?

What are your expectations for your next IT job? Perhaps you expect the following:

  • To be treated by management with respect.
  • To have invigorating, exciting work and to feel your work is appreciated.
  • To have co-workers you admire and who admire you.
  • To be compensated well—because you’re worth it!

What you might want to do right now is write these expectations down.

Then, go out in the backyard and LIGHT THEM ON FIRE.

Congratulations! You have just liberated yourself from job disillusionment and career self-sabotage.



In the aftermath of the 2008 global financial crisis, postmortems were convened in countries around the world to identify what went wrong. A unanimous conclusion was that Boards of Directors of public companies in general, and financial institutions in particular, need to do more to oversee “management’s risk appetite and tolerance” if future crises are to be avoided.

This finding represents a significant paradigm shift in role expectations while introducing a new concept the Financial Stability Board (FSB) has coined: effective “Risk Appetite Frameworks” (RAFs). Regulators around the world are now moving at varying speeds to implement these conclusions by enacting new laws and regulations. What regulators appear to be seriously underestimating is the amount of change necessary to make this laudable goal a reality.



I was leafing through a pile of old BCI documents when I stumbled across a paper detailing a presentation entitled “Resilience isn’t the future of business continuity”, given by Charlotte Newnham at the BCM World Conference and Exhibition in November 2012.

The presentation provided a number of facts and figures which explain a great deal about “actual” resilience capabilities. Approximately 50% of existing resilience departments were in the public sector, and 76% of organisations extended the resilience remit to incident/emergency management. While these figures seem positive for the resilience function, only 30% oversaw security or risk management and just 7% had any involvement in IT continuity.



AUSTIN, Texas – Recovery specialists have some sound advice for Texans whose homes and property took on floodwaters: Protect your family’s health and your own by treating or discarding mold- and mildew-infected items.

Health experts urge those who find mold to act fast. Cleaning mold quickly and properly is essential for a healthy home, especially for people who suffer from allergies and asthma, said the Federal Emergency Management Agency (FEMA).

Mold and mildew can start growing within 24 hours after a flood, and can lurk throughout a home, from the attic to the basement and crawl spaces. The best defense is to clean, dry or, as a last resort, discard moldy items.

Although it can be hard to get rid of a favorite chair, a child’s doll or any other precious treasure to safeguard the well-being of your loved ones, a top-to-bottom home cleanup is your best defense, according to the experts.

Many materials are prone to developing mold if they remain damp or wet for too long. Start a post-flood cleanup by sorting all items exposed to floodwaters:

  • Wood and upholstered furniture, and other porous materials can trap mold and may need to be discarded.
  • Carpeting presents a problem because drying it does not remove mold spores. Carpets with mold and mildew should be removed.
  • However, glass, plastic and metal objects and other items made of hardened or nonporous materials can often be cleaned, disinfected and reused.

All flood-dampened surfaces should be cleaned, disinfected and dried as soon as possible. Follow these tips to ensure a safe and effective cleanup:

  • Open windows for ventilation and wear rubber gloves and eye protection when cleaning. Consider using a mask rated N-95 or higher if heavy concentrations of mold are present.
  • Use a non-ammonia soap or detergent to clean all areas and washable items that came in contact with floodwaters.
  • Mix 1-1/2 cups of household bleach in one gallon of water and thoroughly rinse and disinfect the area. Never mix bleach with ammonia as the fumes are toxic.
  • Cleaned areas can take several days to dry thoroughly. The use of heat, fans and dehumidifiers can speed up the drying process.
  • Check out all odors. It’s possible for mold to hide in the walls or behind wall coverings. Find all mold sources and clean them properly.
  • Remove and discard all materials that can’t be cleaned, such as wallboard, fiberglass and cellulose insulation. Then clean the wall studs where wallboard has been removed, and allow the area to dry thoroughly before replacing the wallboard.

 For other tips about post-flooding cleanup, visit www.fema.gov, www.epa.gov, or www.cdc.gov.



In communicating with the business and the board about the consequences of data breaches, IT will always be asked to put dollar figures on those consequences, which can be difficult to do, even with increasing access to predictive analytics and historical data from any previous breaches in the organization. One of the most extensive benchmark studies IT can use to help with this is the Ponemon Institute’s annual “Cost of Data Breach Study: Global Analysis.” Now in its 10th year, and sponsored by IBM, the recently released 2015 edition covers 11 countries, 350 companies, and detailed data about the direct and indirect costs of data breaches.

Three major factors are contributing to a rapid increase in the average cost of a data breach and the average cost per breached record – the latter varying by industry – according to Chairman and Founder Dr. Larry Ponemon:

“First, cyber attacks are increasing both in frequency and the cost it requires to resolve these security incidents. Second, the financial consequences of losing customers in the aftermath of a breach are having a greater impact on the cost. Third, more companies are incurring higher costs in their forensic and investigative activities, assessments and crisis team management."
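The cost drivers Ponemon lists break down into per-record costs (notification, lost business) and fixed response costs (forensics, crisis team management). As a back-of-envelope illustration only (the function name and every dollar figure below are hypothetical, not Ponemon's published averages):

```python
def estimate_breach_cost(records, cost_per_record, fixed_response_costs=0.0):
    """Rough breach cost estimate: per-record costs (detection, notification,
    lost business) scale with the number of records exposed, while response
    costs (forensics, assessments, crisis management) are largely fixed.
    All inputs are illustrative assumptions, not study data."""
    return records * cost_per_record + fixed_response_costs

# e.g. 10,000 records at a hypothetical $150 per record,
# plus a hypothetical $250,000 in forensics and crisis management
total = estimate_breach_cost(10_000, 150.0, 250_000.0)  # 1,750,000.0
```

Even this crude split shows why per-record averages alone understate small breaches: the fixed response costs dominate when the record count is low.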



The adoption rates have been slower than that of other industries, but financial institutions are finally starting to leverage the cloud in greater numbers. But the real story isn’t that they’re adopting it—it’s what they are adopting it for. As we discussed in a recent post, financial firms are more concerned about the security risks of cloud-based file sharing than most MSPs would like to hear.

CRM, application development, email and back-end services—these are the functions that most financial firms are prioritizing. Why is file sharing noticeably absent? In an interview with eWeek, Luciano Santos, vice president of research and member services at the Cloud Security Alliance, alluded to the reason:

"Primarily the top security concerns were more focused around data protection. Data confidentiality, data governance and data breach were the top-ranked security concerns identified by the financial institutions that participated."



(TNS) — Florida has more homes at risk from the devastating damage of hurricane-powered storm surges than any other state, according to a new study by CoreLogic, a California-based real estate information firm.

While the designation will come as no surprise to anyone living smack in the path of hurricane alley, the numbers reported by CoreLogic are sobering. More than 2.5 million homes in the state are at risk for some kind of damage from storm surge, according to the study. Rebuilding costs statewide from an extreme worst-case surge could amount to $491 billion — more than the gross domestic products of Austria, Chile, Venezuela or a dozen other countries.

In the tri-county area between Miami and West Palm Beach, CoreLogic found more than a half million homes are at risk. The company estimated rebuilding costs for a worst-case flooding from storm surge at $105 billion.



(TNS) — The Hawaii National Guard is holding the largest disaster preparedness exercise in its history with more than 2,200 participants from multiple states responding to a simulated hurricane and other events across Oahu, Hawaii island, Maui and Kauai.

Some Chinook and Black Hawk helicopter activity will be seen, Waimanalo will request assistance — possibly for debris clearance — a mass-casualty exercise will take place at the Queen’s Medical Center-West Oahu, and harbor chemical spills will be dealt with in Honolulu and on Hawaii island, officials said.

“It combines the civilian government and military organizations, and that’s important because we need to get the organizations working together — understanding each other’s capabilities — before we get to a natural disaster, a real natural disaster event,” said Brig. Gen. Bruce Oliveira, the head of the Hawaii Army National Guard.



Celebrating Europe's finest in the business continuity industry

At an awards ceremony at La Maison du Cygne, a prestigious 17th-century building on the Grand Place in Brussels, Belgium, and once home to the city’s butchers’ guild, the Business Continuity Institute recognised the talent that exists in the business continuity industry across the continent as it held its annual European Awards.

The BCI Awards consist of nine categories – eight of which are decided by a panel of judges with the winner of the final category (Industry Personality of the Year) being voted upon by BCI members from across the region.

The winners were:

Continuity and Resilience Consultant of the Year 2015
Chris Needham-Bennett MBCI of Needhams 1834

Continuity and Resilience Professional of the Year 2015 (Private Sector)
Michael Crooymans CBCI of SOGETI

Continuity and Resilience Newcomer of the Year 2015
Jacqueline Howard CBCI of Marks and Spencer

Continuity and Resilience Team of the Year 2015
Ulster Bank Business Resilience Team

Continuity and Resilience Provider (Service/Product) of the Year 2015
Sungard Availability Services

Continuity and Resilience Innovation of the Year 2015
PinBellCom Ltd

Most Effective Recovery of the Year 2015

Industry Personality of the Year 2015
David Window MBCI of Continuity Shop

The BCI European Awards are one of seven regional awards programmes held by the BCI, which culminate in the annual Global Awards held in November during the Institute’s annual conference in London, England. All winners in the BCI European Awards are automatically entered into the Global Awards.

Cloud Endure has released the results of a recent survey into public cloud usage, downtime, availability and disaster recovery.

The 2015 Public Cloud Disaster Recovery Survey looks at disaster recovery challenges and best practices. It also benchmarks the best practices of companies that host web applications in the public cloud. The survey received responses from 109 IT professionals from North America and Europe.

Key findings include:

  • The number one risk to system availability is human error, followed by network failures and cloud provider downtime.
  • While the vast majority of the organizations surveyed (83 percent) have a service availability goal of 99.9 percent or better, almost half of the companies (44 percent) had at least one outage in the past three months, and over a quarter (27 percent) had an outage in the past month.
  • The cost of a day of downtime in 37 percent of the organizations is more than $10,000.
  • When it comes to service availability, there is a clear gap between how organizations perceive their track record and the reality of their capabilities. While almost all respondents claim they meet their availability goals consistently (37 percent) or most of the time (50 percent), 28 percent of the organizations surveyed don’t measure service availability at all. It is hard to tell how these organizations claim to meet their goals when they are not able to measure them.
  • The top challenges in meeting availability goals are budget limitations, insufficient IT resources, and lack of in-house expertise.
  • There is a strong correlation between the cost of downtime and the average hours per week invested in backup and disaster recovery.

Read the survey report (registration required).

Friday, 05 June 2015 00:00

The Real Cost of IT Complexity

IT complexity is one of the enterprise’s biggest challenges, affecting every facet of the organization, from employees to customers.

But how do you define IT complexity, and what is the impact? Lucky for us, Oracle commissioned IDC to look at organizations that simplified their IT environment and to develop an index to quantify IT complexity’s impact.

According to IDC, IT complexity can be defined “as the state of an IT Infrastructure that leads to wasted effort, time, and expense.” Conditions contributing to this include:

  • Heterogeneous environments
  • Using outdated technologies
  • Server, application or data sprawl
  • Lack of sufficient management tools and automation
  • Siloed IT




Global temperature trends.

(Credit: NOAA)

A new study published online today in the journal Science finds that the rate of global warming during the last 15 years has been as fast as or faster than that seen during the latter half of the 20th Century. The study refutes the notion that there has been a slowdown or "hiatus" in the rate of global warming in recent years.


The study is the work of a team of scientists from the National Oceanic and Atmospheric Administration's (NOAA) National Centers for Environmental Information* (NCEI) using the latest global surface temperature data.

"Adding in the last two years of global surface temperature data and other improvements in the quality of the observed record provide evidence that contradict the notion of a hiatus in recent global warming trends," said Thomas R. Karl, L.H.D., Director, NOAA's National Centers for Environmental Information. "Our new analysis suggests that the apparent hiatus may have been largely the result of limitations in past datasets, and that the rate of warming over the first 15 years of this century has, in fact, been as fast or faster than that seen over the last half of the 20th century." 

The apparent observed slowing or decrease in the upward rate of global surface temperature warming has been nicknamed the "hiatus." The Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report, released in stages between September 2013 and November 2014, concluded that the upward global surface temperature trend from 1998-2012 was markedly lower than the trend from 1951-2012.


Since the release of the IPCC report, NOAA scientists have made significant improvements in the calculation of trends and now use a global surface temperature record that includes the most recent two years of data, 2013 and 2014, the latter being the hottest year on record. The calculations also use improved versions of both sea surface temperature and land surface air temperature datasets. One of the most substantial improvements is a correction that accounts for the difference in data collected from buoys and ship-based data.

No slow down in global warming.

(Credit: NOAA)

Prior to the mid-1970s, ships were the predominant way to measure sea surface temperatures, and since then buoys have been used in increasing numbers. Compared to ships, buoys provide measurements of significantly greater accuracy. "In regards to sea surface temperature, scientists have shown that across the board, data collected from buoys are cooler than ship-based data," said Dr. Thomas C. Peterson, principal scientist at NOAA's National Centers for Environmental Information and one of the study's authors. "In order to accurately compare ship measurements and buoy measurements over the long-term, they need to be compatible. Scientists have developed a method to correct the difference between ship and buoy measurements, and we are using this in our trend analysis." 

In addition, more detailed information has been obtained regarding each ship's observation method. This information was also used to provide improved corrections for changes in the mix of observing methods.   

New analyses with these data demonstrate that incomplete spatial coverage also led to underestimates of the true global temperature change previously reported in the 2013 IPCC report. The integration of dozens of data sets has improved spatial coverage over many areas, including the Arctic, where temperatures have been rapidly increasing in recent decades. For example, the release of the International Surface Temperature Initiative databank, integrated with NOAA's Global Historical Climatology Network-Daily dataset and forty additional historical data sources, has more than doubled the number of weather stations available for analysis.

Lastly, the incorporation of additional years of data, 2013 and 2014, with 2014 being the warmest year on record, has had a notable impact on the temperature assessment. As stated by the IPCC, the "hiatus" period 1998-2012 is short and began with an unusually warm El Niño year. However, over the full period of record, from 1880 to present, the newly calculated warming trend is not substantially different than reported previously (0.68°C / Century (new) vs 0.65°C / Century (old)), reinforcing that the new corrections mainly have an impact in recent decades.
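The °C-per-century figures quoted here are least-squares linear trends fitted to annual temperature anomalies. A minimal sketch of that calculation, using made-up anomaly values purely to show the math (not NOAA's data):

```python
# Least-squares warming trend in degrees C per century, the same kind of
# quantity NOAA reports. The anomaly series below is synthetic, constructed
# to have a trend of exactly 0.7 C/century for illustration.

def trend_per_century(years, anomalies):
    """Slope of the least-squares fit, scaled from per-year to per-century."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
    return (sxy / sxx) * 100  # degrees per year -> degrees per century

years = list(range(2000, 2015))
anomalies = [0.40 + 0.007 * (y - 2000) for y in years]  # synthetic, 0.007 C/yr
print(round(trend_per_century(years, anomalies), 2))
```

The choice of start and end year matters greatly for short windows, which is why the article stresses that the 1998-2012 "hiatus" period began with an unusually warm El Niño year.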

On the Web

* Note: NOAA's National Centers for Environmental Information (NCEI) is the merger of the National Climatic Data Center, National Geophysical Data Center, and National Oceanographic Data Center as approved in the Consolidated and Further Continuing Appropriations Act, 2015, Public Law 113-235. From the depths of the ocean to the surface of the sun and from million-year-old sediment records to near real-time satellite images, NCEI is the nation's leading authority for environmental information and data. For more information go to: http://www.ncdc.noaa.gov/news/coming-soon-national-centers-environmental-information 


NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.

Friday, 05 June 2015 00:00

What to Do About Reputation Risk

Of executives surveyed, 87% rate reputation risk as either more important or much more important than any other strategic risks their companies face, according to a new study from Forbes Insights and Deloitte Touche Tohmatsu Limited. Further, 88% say their companies are explicitly focusing on managing reputation risk.

Yet a bevy of factors contribute to reputation risk, making monitoring and mitigating the dangers seem particularly unwieldy. These include business decisions and performance in the following areas:



Friday, 05 June 2015 00:00

Storm Surge: The Trillion Dollar Risk

More than 6.6 million homes on the Atlantic and Gulf coasts are at risk of hurricane-driven storm surge with a total reconstruction cost value (RCV) of nearly $1.5 trillion.

The latest annual analysis from CoreLogic finds that the Atlantic Coast has more than 3.8 million homes at risk of storm surge in 2015 with a total projected reconstruction cost value of $939 billion, while the Gulf Coast has just under 2.8 million homes at risk and nearly $549 billion in potential exposure.

Which states have the highest total number of properties at risk?

Six states—Florida, Louisiana, New York, New Jersey, Texas and Virginia—account for more than three-quarters of all at-risk homes across the United States. Florida has the highest total number of properties at various risk levels (2.5 million), followed by Louisiana (769,272), New York (464,534), New Jersey (446,148), Texas (441,304) and Virginia (420,052).



Now that management science has taught us how to quantify so many other things, crisis management is a good candidate for being awarded its own scale of seriousness too. The detail you put into such a scale will depend on how much crises afflict your enterprise. If you are battling a continual stream of problems, your scale may be finer (say, 1 to 10), in order to sort out the life-and-death situations from the nuisances. Otherwise, a high-medium-low system of ranking may be sufficient, as long as there are clear definitions for crises to be categorised correctly. So, how does this work in practice?



Thursday, 04 June 2015 00:00

Implications of the All-Flash Data Center

From a performance perspective, the all-Flash data center certainly makes a lot of sense. In an age when the movement of data from place to place is more important than the amount of data that can be stored or processed in any given location, high I/O in the storage array should be a top priority.

But while no one disputes the efficacy of Flash over disk and tape when it comes to speed, the question remains: Does the all-Flash data center still make sense for the enterprise? And if so, what impact will this have on other systems and architectures up and down the stack?

HP recently pushed the envelope on the all-Flash data center a little further with a new line-up of arrays and services for the 3PAR StoreServ portfolio. The set-up is said to improve performance, lower the physical footprint of storage and reduce cost to about $1.50 per usable GB, which is about 25 percent less than current equivalent solutions. The company is already reporting workload performance of 3.2 million IOPS with sub-millisecond latency among its Flash drives, and the 3PAR family’s Thin Express ASIC provides a high degree of data resiliency between the StoreServ array and the ProLiant server to reduce transmission errors.



(TNS) — Rice University civil engineering professor Philip Bedient is an expert on flooding and how communities can protect themselves from disaster. He directs the Severe Storm Prediction, Education and Evacuation from Disasters Center at Rice University.

On Memorial Day evening, Houston suffered massive flooding after getting nearly 11 inches in 12 hours. Bedient designed the Flood Alert System — now in its third version — which uses radar, rain gauges, cameras and modeling to indicate whether Houston's Brays Bayou is at risk of overflowing and flooding the Texas Medical Center. In an interview with Ryan Holeywell, editor of the Kinder Institute's "Urban Edge" blog, Bedient said more places need this kind of warning system.



Wednesday, 03 June 2015 00:00

Datameer Applies Data Governance to Hadoop

One of the biggest inhibitors to applying Hadoop in any production environment is the general lack of governance tools for IT organizations to use to manage access permissions for the data that resides there.

To address that issue, Datameer today announced it has embedded a raft of data governance tools inside its analytics software that runs natively on Hadoop.

Matt Schumpert, director of product management at Datameer, says that because its software runs in memory as a Hadoop application, responsibility for data governance within Hadoop naturally falls to Datameer.



Financial firms are tasked with a lot of different responsibilities, not the least of which is the responsibility to protect sensitive data and information.  When it comes to the resistance on the part of financial firms choosing to adopt cloud services for data storage and cloud-based file sharing, managed service providers (MSPs) need to preach security as everyone’s top priority.

According to How Cloud is Being Used in the Financial Sector, a recent study from the Cloud Security Alliance (CSA), a large number of security concerns are keeping financial firms on the sidelines looking in at cloud computing. Chief among those concerns is data security apprehension.



Wednesday, 03 June 2015 00:00

Nepal: Risk from the Theoretical to Reality

The Nepal earthquake, which triggered massive destruction from the Himalayan Mountains to India, is more than a tragic story of bad luck. It’s an example of how little we really understand risk and the consequences of our inability to fully absorb events such as earthquakes in Nepal and Haiti and other natural disasters that are so devastating.

Even our perception of these events is skewed.  According to the USGS website, the U.S. government’s official site for monitoring earthquakes, approximately one major earthquake of magnitude 8.0 or greater has occurred each year over the last 24 years.  We tend to discount major disaster in our own lives while believing there is a higher probability that others may suffer calamity.  That may explain why so many say we “never saw that coming” when disaster strikes.  Many of these quakes have occurred with little damage or no deaths, but we remember the ones with a high death toll and quickly dismiss the others.

Given our inability to look into the future, the question is: has our world become more or less risky?  Well, it depends!  For many of us, the perception of risk depends on our own circumstances.  Let’s take two people of similar age but from remarkably different backgrounds.



According to a new study conducted by PwC and commissioned by the UK Government to raise awareness of the growing cyber threat, the average cost of the single worst online security breach suffered by big businesses is between £1.46m and £3.14m, up from £600k – £1.15m in 2014. The Information Security Breaches Survey 2015 highlights the rising costs of malicious software attacks and staff related breaches, and illustrates the need for companies to take action. And it is all companies, not just big business, as the research also shows that the equivalent costs for small business is £75k – £311k, up from £65k – £115k a year ago.

It is not just costs that are high, but occurrence too, as the survey also revealed that 90% of large organisations reported they had suffered an information security breach, while 74% of small and medium sized businesses reported the same. The median number of breaches for large organisations was 14 (down from 16 in 2014) while for small businesses it was four (down from six last year). The problem is unlikely to go away as 59% of respondents to the survey expect there will be more security incidents in the coming year.

These figures may not come as a surprise to business continuity professionals who have consistently expressed concern about data breaches, the disruption they can cause and the cost as a consequence. The latest Horizon Scan report published by the Business Continuity Institute revealed that 74% of respondents to a survey expressed concern or extreme concern at the prospect of a data breach occurring and, along with cyber attacks, it has been a top three threat since the survey began.

Attacks from outsiders have become a greater threat for both small and large businesses with 69% of large organisations and 38% of small organisations being attacked by an unauthorised outsider in the last year, although Denial of Service (DoS) attacks have actually decreased with only 30% of large organisations and 16% of small organisations being attacked in such a way. The outsider threat may be high, but when asked about the single worst breach, 50% of organisations stated that it was due to inadvertent human error.

Digital Economy Minister Ed Vaizey said: "The UK’s digital economy is strong and growing, which is why British businesses remain an attractive target for cyber-attack and the cost is rising dramatically. Businesses that take this threat seriously are not only protecting themselves and their customers’ data but securing a competitive advantage."

Andrew Miller, Cyber Security Director at PwC, said: "With 9 out of 10 respondents reporting a cyber breach in the past year, every organisation needs to be considering how they defend and deal with the cyber threats they face. Breaches are becoming increasingly sophisticated, often involving internal staff to amplify their effect, and the impacts we are seeing are increasingly long-lasting and costly to deal with."

Companies are learning the hard way that there’s a downside to data democratization: more data silos.

“On the heels of the consumerization of enterprise software and the growing ubiquity of easy-to-use analytics tools, silos appear to be coming back in all their former collaboration-stifling glory as individual teams and departments pick and choose different tools for different purposes and data sets without enterprise-level oversight,” writes Katherine Noyes in a recent Computerworld article exploring this growing problem.

It’s hard to hear in this age of Big Data and data lakes, but in hindsight, it really isn’t surprising. SaaS made it possible for the lines of business to choose their own applications with nothing more than a credit card. Then Apple tipped the balance on personal devices. Finally, Amazon and others democratized storage and Big Data processing power. It only makes sense that analytics — and more data — would leave the centralizing influence of IT and segregate into silos.



As a Public Information Officer, Mike was used to communicating health information to the people of his state. When word came that a major hurricane was approaching, he knew people would be facing fear and uncertainty. How could he make sure that the right information got to the right people? How should he react to the public’s negative emotions and false information? Most importantly, how could he help to protect health and lives? Mike knew exactly where to begin: with the principles of CDC’s Crisis and Emergency Risk Communication training.

CDC’s Crisis and Emergency Risk Communication (CERC) program teaches you how to craft messages that tell the public what the situation means for them and their loved ones, and what they can do to stay safe.

CERC provides a set of principles that teach effective communication before, during, and after an emergency. The six principles of CERC are:

  1. Be First
  2. Be Right
  3. Be Credible
  4. Express Empathy
  5. Promote Action
  6. Show Respect

The CDC CERC program has resources, training, and shared learning where you can participate in online training and receive continuing education credits. CERC also has CERC in Action stories from other public health professionals who have successfully applied CERC to an emergency response.

Communicating during an emergency is challenging, but you’re not alone! CERC can help you figure out how to get the right information to the right people at the right time whether you’re dealing with a family emergency or a hurricane.

CERC in Action

Frozen powerline.

PHPR: Health Security in Action

This post is part of a series designed to profile programs from CDC’s Office of Public Health Preparedness and Response.

CERC and CERC training are a service provided by CDC’s Office of Public Health Preparedness and Response’s (OPHPR) Division of Emergency Operations.


Wednesday, 03 June 2015 00:00

Five Myths About the Commoditization of IT

“Commodity” is a bad word among technologists. It implies standardized, unchanging, noninnovative, boring, and cheap. Commodities are misunderstood. This post seeks to dispel some of the myths around the commoditization of IT services (i.e., the cloud).



(TNS) — When a powerful earthquake in March 2011 triggered a tsunami that devastated Japan’s Fukushima-Daiichi nuclear plant and raised radiation to alarming levels, authorities contemplated sending in robots first to inspect the facility, assess the damage and fix problems where possible. But the robots were not up to the task, and eventually humans had to complete most of the hazardous work.

Ever since, Defense Advanced Research Projects Agency (DARPA), an agency under the U.S. Department of Defense, has been working to improve the quality of robots. It is now conducting a global competition to design robots that can perform dangerous rescue work after nuclear accidents, earthquakes and tsunamis.

The robots are tested for their ability to open doors, turn valves, connect hoses, use hand tools to cut panels, drive vehicles, clear debris and climb a stair ladder — all tasks that are relatively simple for humans, but very difficult for robots.



AUSTIN, Texas – Texans who sustained property damage as a result of the ongoing severe storms and flooding are urged to register with the Federal Emergency Management Agency (FEMA), as they may be eligible for federal and state disaster assistance.

The presidential disaster declaration of May 29 makes disaster aid available to eligible families, individuals and business owners in Hays, Harris and Van Zandt counties.  

“FEMA wants to help Texans begin their recovery as soon as possible, but we need to hear from them in order to do so,” said FEMA’s Federal Coordinating Officer (FCO) Kevin Hannes. “I urge all survivors to contact us to begin the recovery process.”

People who had storm damage in Harris, Hays, and Van Zandt counties can register for FEMA assistance online at www.DisasterAssistance.gov or via smartphone or web-enabled device at m.fema.gov. Applicants may also call 800-621-3362 or (TTY) 1-800-462-7585 from 6 a.m. to 9 p.m. daily. Flood survivors statewide can call and report their damage to give the state and FEMA a better idea of the assistance that is needed in undesignated counties.

Assistance for eligible survivors can include grants for temporary housing and home repairs, and for other serious disaster-related needs, such as medical and dental expenses or funeral and burial costs. Long-term, low-interest disaster loans from the U.S. Small Business Administration (SBA) also may be available to cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations.

Eligible survivors should register with FEMA even if they have insurance. FEMA cannot duplicate insurance payments, but under-insured applicants may receive help after their insurance claims have been settled.

Registering with FEMA is required for federal aid, even if the person has registered with another disaster-relief organization such as the American Red Cross, or local community or church organization. FEMA registrants must use the name that appears on their Social Security card. Applicants will be asked to provide:

  • Social Security number
  • Address of the damaged home or apartment
  • Description of the damage
  • Information about insurance coverage
  • A current contact telephone number
  • An address where they can get mail
  • Proof of residency, such as a utility bill, rent receipts or mortgage payment record
  • Bank account and routing numbers if they want direct deposit of any financial assistance.

(TNS) — Thousands of Pinellas County, Fla., beach residents and business owners could hit an unexpected road block trying to return to the barrier islands after a storm evacuation.

Pinellas County Sheriff Bob Gualtieri said Monday his office, working with beach city governments, has developed a hang-tag identification system to allow drivers quick access to the islands after an evacuation.

However, since the program rolled out in February, only 17,000 hang tags have been handed out, while Gualtieri estimates about 88,000 people will need them.

“That gives me a lot of concern,” he said, urging people to get the tags as soon as possible.



What does the phrase “needle in a haystack” mean to you? For many, it implies the impossible or something that can’t be done. As an MSP, don’t you strive to do the seemingly impossible for your customers? It sure will endear them to you.

One feature that can help you triumph over “needle in a haystack” scenarios is granular recovery. Think back to a customer that got hit with CryptoLocker or perhaps had a rogue employee who deleted important files. No doubt your customers had that empty feeling that their valuable data was unrecoverable. With granular recovery, restoring that data is not only possible but easy. You can search documents, emails and attachments by keyword and restore exactly what you need. Now, won’t that impress your customers?



Downtime to the broadband connection is now one of the major threats facing today’s organizations, so why are many businesses not considering resilience when purchasing broadband or looking at how broadband failure fits into the disaster recovery plan? Mike van Bunnens, managing director, Comms365, explores the issue.

What is the most important consideration for a business buying a new broadband connection? From the way many businesses are making the investment decision, the answer appears to be cost: with most expecting to achieve the same rock bottom prices on offer in the domestic market. But with more and more businesses running VoIP and cloud based applications, their choice of broadband connection is essential. Any glitch in service will have a massive knock-on effect on productivity and customer relationships. So why are businesses not considering resilience or how broadband failure fits into the disaster recovery plan? Why are many not even ascertaining the speed and quality of the broadband options before moving to a new office premises?

A high quality, resilient broadband connection is now one of the most critical aspects of any business’ set up. So why are business owners still applying domestic thinking to business critical communications?



BSI is seeking feedback on the draft BS 12999 standard. ‘BS 12999 Damage Management - Stabilization, mitigation, and restoration of properties, contents, facilities and assets following incident damage’ is intended to provide recommendations to individuals and organizations involved in carrying out damage management. It will be applicable to domestic, commercial and public buildings and includes the following main contents:

  • Introduction
  • Scope
  • Terms, definitions and abbreviations
  • Damage incident instructions, intake and response planning
  • On-site damage assessment
  • Stabilization
  • Damage scoping
  • Damage recovery and restoration
  • Completion sign-off and handover.

The deadline for comments is June 30th 2015.

Click here to read the draft standard and take part in the consultation.

‘Agile’ is still a buzzword. That’s quite a feat in today’s high-speed business and technological environments, where concepts date so rapidly. The original ‘Manifesto for Agile Software Development’ appeared in 2001, some 14 years ago. Since then, the word and the concept it labels have been applied to different business areas, including marketing and supply chain operations. Recently, it has also cropped up in the phrase ‘agile recovery’. But is this taking the ‘agile concept’ too far?



Last week, we learned that cybercriminals undermined the identity verification of the IRS’ Get Transcript app and gained access to the tax returns of 104,000 US citizens, so it’s only fitting that in this analyst spotlight we interview one of the team’s leading analysts for identity and access management (IAM), VP and Principal Analyst, Andras Cser. Andras consistently produces some of the most widely read research not just for our team but across all of Forrester. And clients seek his insight across a number of coverage areas beyond IAM, including cloud security, enterprise fraud management, and secure payments. As the tallest member of our S&R team at 6’5”, Andras also provides guidance to clients on the emerging fields of height intel and altitude management.


Andras Cser Image


Before joining Forrester, Andras worked as a security architect at Netegrity and then CA Technical Services. He also worked in a number of technical and sales capacities at Sun Microsystems prior to joining Netegrity. In his roles on the vendor-side, he architected and implemented IAM and provisioning solutions at Fortune 500 companies.


Listen to this month’s podcast below to hear Andras talk about his most common client questions, counterintuitive insights, and vendors to watch. And as you can tell from our analyst interview, Andras prides himself on being clear and concise.



WASHINGTON – Today, the Federal Emergency Management Agency (FEMA) urges residents across the nation to prepare for the 2015 Atlantic Hurricane season, which begins today and runs through November 30. 

Hurricanes and tropical systems can cause serious damage on both coastal and inland areas. Their hazards can come in many forms including: storm surge, heavy rainfall, inland flooding, high winds, and tornadoes. To prepare for these powerful storms, FEMA is encouraging families, businesses, and individuals to be aware of their risks; know your sources of reliable information; prepare your home and workplace; and be familiar with evacuation routes.

“One hurricane hitting where you live is enough to significantly disrupt your life and make for a very bad hurricane season,” said FEMA Administrator Craig Fugate. “Every person has a role to play in being prepared – you should know if you live or work in an evacuation zone and take time now to learn that route so you’re prepared to protect yourself and your family from disaster.”

This year, FEMA is placing an emphasis on preparing communities to understand the importance of evacuations, which are more common than many people realize. When community evacuations become necessary, local officials provide information to the public through the media. In some circumstances, other warning methods, such as text alerts, emails, or telephone calls, are used. Information on evacuation routes and places to stay is available at www.ready.gov/evacuating-yourself-and-your-family.

Additionally, knowing and practicing what to do in an emergency, in advance of the event, can make a difference in the ability to take immediate and informed action, and enable you to recover more quickly. To help communities prepare and enhance preparedness efforts nationwide, FEMA is offering two new products.

  • FEMA launched a new feature to its App, available for free in the App Store for Apple devices and Google Play for Android devices. The new feature enables users to receive weather alerts from the National Weather Service for up to five locations anywhere in the United States, including U.S. territories, even if the mobile device is not located in the weather alert area. The app also provides information on what to do before, during, and after a disaster in both English and Spanish.
  • The Ready campaign and America’s PrepareAthon! developed a social media toolkit that you can download and share with others at www.ready.gov/ready2015. The kit contains information on actions communities can take to practice getting ready for disasters.

While much attention is often given to the Atlantic Hurricane Season, there are tropical systems that can affect other U.S. interests as well. The Eastern Pacific Hurricane Season runs from May 15 through November 30. The Central Pacific Hurricane Season runs from May 15 to November 30. To learn more about each hurricane season and the geographical areas they may affect, visit www.noaa.gov.

Additional tips and resources:

  • Learn how to prepare for hurricane season at www.ready.gov/hurricanes
  • Talk with your family today about how you will communicate with each other during a significant weather event when you may not be together or during an evacuation order. Download the family communications plan at www.ready.gov/family-communications.
  • For information on how to create an emergency supply kit, visit www.ready.gov/build-a-kit
  • Consider how you will care for pets during an evacuation by visiting www.ready.gov/caring-animals
  • Use the Emergency Financial First Aid Kit (EFFAK) to identify your important documents, medical records, and household contracts. When completing the kit, be sure to include pictures or a video of your home and your belongings and keep all of your documents in a safe space. The EFFAK is a joint publication from Operation Hope and FEMA. Download a copy at www.ready.gov/financial-preparedness.
  • If you own or manage a business, visit www.ready.gov/business for specific resources on response and continuity planning.
  • The National Weather Service proactively sends free Wireless Emergency Alerts, or WEAs, to most cell phones for hurricanes, tornadoes, flash flooding and other weather-related warnings. State and local public safety officials may also send WEAs for severe or extreme emergency conditions. If you receive a Wireless Emergency Alert on your cell phone, follow the instructions, take protective action and seek additional information from local media. To determine if your wireless device can receive WEA alerts contact your wireless carrier for more information or visit www.ctia.org/WEA.


FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.

The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

(TNS) — While the global fracking boom has stabilized North America’s energy prices, Chicago — America’s third largest city and the busiest crossroads of the nation’s railroad network — has become ground zero for the debate over heavy crude moved by oil trains.

With the Windy City experiencing a 4,000 percent increase in oil-train traffic since 2008, Chicago and its many densely populated suburbs have become a focal point as Congress considers a number of safety reforms this year.

Many oil trains are 100 or more cars long, carrying hydraulically fracked crude and its highly explosive, associated vapors from the Bakken region of Montana, North Dakota, Saskatchewan, and Manitoba.



Hackers illegally accessed the personal information of 104,000 taxpayers this spring, according to the U.S. Internal Revenue Service (IRS).

And as a result, the IRS tops this week's list of IT security newsmakers to watch, followed by Woolworths, Google (GOOG) and Kaspersky Lab.

What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:



(TNS) — It was the year they ran out of names.

The hurricane season that began 10 years ago Monday generated so many storms — 27 in all — that, for the first time since officials started using names in 1953, they went through a list of 21 names and had to start on the Greek alphabet: from Arlene on June 9, just nine days in, to Zeta, which finally fizzled on Jan. 6, 2006, a month after that manic 6-month season officially ended.

Right in the middle was Katrina, which raised serious issues that had little to do with meteorology. And for South Florida, so late in the season that its cleanup competed with Halloween preparations, was Wilma. It brought billions in damage, much of that to Palm Beach County, still recovering from two hurricanes three weeks apart in the previous year's "mean season."



(TNS) — Staring at an image of your home and neighborhood inundated with 2, 6 or maybe 9 feet of rushing water from a hurricane storm surge can be horrifying.

At least that’s what Pinellas County Emergency Management Director Sally Bishop is hoping.

As the 2015 hurricane season dawns on Monday, Bishop is unveiling her department’s newest tool for storm preparation: a Storm Surge Protector computer application that gives people a realistic view of what can happen when a hurricane comes ashore.



Last week the Ponemon Institute rolled out the results of yet another Global Cost of Data Breach report and, surprising very few people in the security world, the stats show costs rising again. Sponsored by IBM, the report benchmarked 350 companies across 11 countries. It found that the consolidated total cost of a breach has now risen to $3.8 million, about 23 percent higher than the figure back in 2013. They're compelling statistics for anyone in the managed services world trying to offer customers justification for improved security coverage.

According to the report, there are three big factors that are contributing to the rising costs of breaches.



The data breach at the IRS that left the personal information of 104,000 taxpayers in the hands of thieves is the latest wrinkle in a mammoth problem faced by tax authorities: Identity theft and its crippling consequences.

An unprecedented surge in online tax scams by increasingly sophisticated criminals has challenged the IRS to respond quickly to get ahead of the fraudsters, especially during this year’s tax season after hackers targeted TurboTax, the country’s largest online filing service.

The vulnerability of taxpayers’ personal data was identified last fall by the IRS’s independent watchdog as the agency’s number one problem. Tax officials estimate that the government has lost billions of dollars in recent years to fraudulent refunds filed by hackers who steal personal information on tax returns, then use it to claim a refund in a taxpayer’s name before he or she files.



FEMA Officials Encourage Those With Concerns about Hurricane Sandy Flood Insurance Claims to Call 866-337-4262

WASHINGTON – The Federal Emergency Management Agency’s (FEMA) National Flood Insurance Program (NFIP) announced the start of Hurricane Sandy flood insurance claims review. The review is part of a broad process to reform NFIP claims and appeals procedures.       

FEMA opened the Hurricane Sandy claims review process and began mailing letters to approximately 142,000 NFIP policyholders, offering them an opportunity to have their Hurricane Sandy claims reviewed. Policyholders who have not pursued litigation and have not already received the maximum amount under their policy are eligible for the review, and FEMA will contact them to explain how to request it.

“Flood insurance issues arising from Hurricane Sandy are of great concern to FEMA,” said Deputy Associate Administrator for Federal Insurance Brad Kieserman. “We are committed to administering a program that is survivor-centric and helps policyholders recover from flooding in a fair, transparent, and expeditious way. I encourage anyone who suspects they may have been treated unfairly to call 866-337-4262.”

Flooding is the most common natural disaster in the United States. Between 1980 and 2013, the United States suffered more than $260 billion in flood-related damages. Flood insurance is a vital service that protects communities from the most common and costly disaster we face, and those who purchase insurance must be able to count on it being there when it is needed to help rebuild their lives.

Policyholders who incurred losses from Hurricane Sandy from Oct. 27, 2012, through Nov. 6, 2012, and want their claim reviewed may contact FEMA by:

  • Calling toll-free at 866-337-4262.
  • Emailing an application, available for download online, to the email address provided with the application.
  • Faxing an application, available for download online, to 202-646-7970.
  • Individuals who are deaf, hard of hearing or have a speech disability and use 711 or VRS may call 866-337-4262; TTY users may call 800-462-7585.

As FEMA reviews Hurricane Sandy claim files, the agency will also begin overhauling the claims and appeal process and improving the customer experience. FEMA’s goals are excellent customer experience, responsiveness, transparency, low risk of waste, fraud and abuse, and continuous improvement. While settling these legal matters, FEMA is instituting additional oversight of Write Your Own insurance companies to hold them accountable.

FEMA will continue to work closely with Congress and federal, state, local, tribal, and community officials to ensure policyholders are paid every dollar to which they are entitled and to improve the flood insurance program going forward.



Monday, 01 June 2015 00:00

Real Tools to Manage Shadow IT

It is the rare enterprise these days that does not have some form of shadow IT in its midst. If you think otherwise, maybe it’s time to do a little digging into what your business groups have been up to.

But while the consensus is that the enterprise should embrace shadow IT rather than fight it, there has not been a whole lot of guidance as to how this should be done, other than vague recommendations about becoming more proactive and transitioning IT to cloud brokerage.

Lately, however, the industry has started to see a trickle of actual solutions that enhance the enterprise’s ability to get a handle on shadow IT – not to combat it, mind you, but to help integrate it into a broader computing architecture.



Monday, 01 June 2015 00:00

2015 Hurricane Season Opener

By now you’ll have read the latest forecasts calling for a below-average Atlantic hurricane season.

NOAA, Colorado State University’s Tropical Meteorology Project, North Carolina State University, WSI and the London-based consortium Tropical Storm Risk all concur in their respective outlooks that the 2015 hurricane season, which officially begins June 1, will be well below the norm.

TSR, for example, predicts Atlantic hurricane activity in 2015 will be about 65 percent below the long-term average. Should this forecast verify, TSR noted that it would imply that the active phase for Atlantic hurricane activity which began in 1995 has likely ended.

Still, it’s important to note that these forecasts come with a caveat: all predictions are just that, and the likelihood of issuing a precise forecast in late May is moderate at best. In other words, uncertainties remain.



According to the 2015 Makovsky Wall Street Reputation Study, released Thursday, 42% of U.S. consumers believe that failure to protect personal and financial information is the biggest threat to the reputation of the financial firms they use. What’s more, three-quarters of respondents said that unauthorized access to their personal and financial information would likely lead them to take their business elsewhere. In fact, security of personal and financial information matters far more to customers than a financial services firm’s ethical responsibility to customers and the community (23%).

Executives from financial services firms seem to know this already: 83% agree that the ability to combat cyber threats and protect personal data will be one of the biggest issues in building reputation in the next year.

The study found that this trend is already having a very real impact: 44% of financial services companies report losing 20% or more of their business in the past year due to reputation and customer satisfaction issues. When asked to rank the issues that negatively affected their company’s reputation over the last 12 months, the top three “strongly agree” responses in 2015 from communications, marketing and investor relations executives at financial services firms were:



Houston, the fourth-largest city in the United States, has been struggling through extreme storms and some of the worst flooding in years over the past few days. Roadways were blocked, drivers were left stranded, and homes were completely destroyed due to the flash flooding.

More than 1,000 residents have been displaced and area businesses have come to a screeching halt. Once the storms and flash flooding started, I reached out to some of my clients in the area to make sure they were okay and find out what they were doing to help affected individuals and businesses.



In the enterprise world, keeping a business afloat is not enough. The true mark of success comes when a company brings innovation and evolution to the forefront. And the most successful businesses find ways to constantly grow and change with the flow of the market.

But how does an enterprise go about identifying, fostering and delivering the right innovations? How is it possible to have this level of coordinated effort flow through departments and deliver the outcome the business needs? In a word, the answer is program management.

First, program management is not project management. Program management involves goals that generally affect the company as a whole—often the bottom line. And managing programs requires commitment. It involves long-term strategy and in many cases an ongoing dedication to improvement of processes, products and people.

Of course, program management isn’t something that can be started on a whim to bring about competitive change in a company. To help the enterprise begin a successful program management process, Satish P. Subramanian wrote the book, “Transforming Business with Program Management: Integrating Strategy, People, Process, Technology, Structure and Measurement.”



Engagement is critical to the success of a firm. Years ago, I did a competitive study between Sony and Dell. Sony had better looking, more reliable hardware; Dell’s stuff wasn’t as attractive and it broke a lot in comparison. Sony sucked at engaging with customers; Dell led the segment. The end result was that Sony failed and Dell succeeded.

Dell’s Annual Analyst Conference (DAAC) is this week and HP Discover is next week. Many of the analysts here at DAAC have decided not to attend Discover because they don’t feel HP is really relevant anymore and the others who are going have indicated that they are going to confirm that the firm is effectively dead. At this same time, I’m getting notes from folks who have left HP for Oracle and are sharing how much better Oracle is than HP, in their opinion.

I doubt the experienced executives at HP realize they are sending a strong message that they are effectively managing their company out of business, or that, if the company does fail, this is likely their last job, because the failure will inevitably stain their resumes. The reason they don’t see this is that they don’t engage, and this behavior starts at the top.

I spent some time with Michael Dell this trip. I follow Meg Whitman and have met with her in person as well, and the difference between the two is night and day.



Every organization with a Business Continuity Management (BCM) or Disaster Recovery (DR) program strives to keep its Business Continuity Plans (BCP) and Disaster Recovery Plans (DRP) in a usable state: one it believes will cover it in any situation. At a minimum, organizations want plans that let them respond to whatever occurs. But if an organization takes its program, and the related plans, seriously, then those plans are never fully complete.

For a plan to be truly viable and robust, it must address as many situations as possible and be flexible enough to adapt to unknown ones.

This includes capturing lessons learned from news headlines and then incorporating the new activities or considerations they suggest that may not be in the current BCM/DRP plan. These plans aren’t quick fixes or static responses to disasters; they are living, breathing documents that need new information to grow and become robust. This is why they should never be considered complete: as the organization grows and changes, and as the circumstances surrounding it change, so too must the BCM and DRP plans.



(TNS) — Despite more predictions Wednesday from experts that it will likely be a quieter than normal hurricane season, the information comes with two caveats — they don't know where the storms will go and below average doesn't mean zero.

NOAA predicts 6-11 named storms (winds of 39 mph or higher), of which 3-6 could become hurricanes (winds of 74 mph or higher), including 0-2 major hurricanes (winds of 111 mph or higher) for the 2015 hurricane season. The agency also projects a 70 percent likelihood of a below-average season.

In a similar report released last month, Colorado State forecasters William Gray and Phil Klotzbach also projected a season that won't make the average of 12 named storms, six hurricanes and two major hurricanes.



(TNS) — The National Bio and Agro-Defense Facility is more than just a big project for Kansas and Kansas State University – it will be the front line in protecting the nation’s food supply.

That was the consensus of federal and state leaders who gathered Wednesday to celebrate the start of construction on the $1.25 billion national laboratory complex that will be built across the street from Kansas State University’s football stadium.

“The NBAF laboratory will provide the nation with cutting-edge, state-of-the-art lab capabilities to help protect our food supply and the nation’s public health,” said U.S. Secretary of Homeland Security Jeh Johnson. “The NBAF addresses a serious vulnerability: biological or agricultural threats, deliberate or natural.

“We will now be able to ensure availability of vaccines and other rapid-response capabilities to curb any outbreak.”



Climate change is taking a toll on Texas, and the devastating floods that have killed at least 15 people and left 12 others missing across the state are some of the best evidence yet of that phenomenon, state climatologist John Nielsen-Gammon said in an interview Wednesday. 

"We have observed an increase of heavy rain events, at least in the South-Central United States, including Texas," said Nielsen-Gammon, who was appointed by former Gov. George W. Bush in 2000. "And it's consistent with what we would expect from climate change." 

But the state's Republican leaders are deeply skeptical of the scientific consensus that human activity is changing the climate, with top environmental regulators in Texas questioning whether the planet is warming at all. And attempts by Democratic lawmakers during the 2015 legislative session to discuss the issue have come up short.



A business model focused on cutting costs has obvious limitations. That’s why managed services have to be about much more than lowering the cost of IT. And as much as customers love a bargain, most understand this intuitively, often citing other objectives for adopting managed services – improved uptime, access to technology advances and better security among them.

A new poll of MSPs by the MSPAlliance found customers hire MSPs primarily because they want to pay more attention to their core business. “Fifty percent of MSPs point to ‘focusing on core competencies’ as one of the leading reasons customers buy their managed services,” said MSPAlliance CEO Charles Weaver.



Recognising business continuity talent in India

Business continuity may be a developing industry in India, but there is still a wealth of talent across the country. Those at the top of the profession were recognised at an awards ceremony at the India Business and IT Resilience Summit in Mumbai, where the Business Continuity Institute presented its annual India Awards.

The BCI Awards consist of seven categories: six are decided by a panel of judges, while the winner of the final category (Industry Personality of the Year) is voted on by BCI members from across the region.

The winners were:

Continuity and Resilience Consultant of the Year 2015
Kaustubh Vazalwar MBCI of Hewlett Packard

Continuity and Resilience Professional of the Year 2015 (Private Sector)
Kapil Punwani CBCI of Reliance Life Insurance Company Limited

Continuity and Resilience Team of the Year 2015
JP Morgan Chase, CIB Resilience Team

Continuity and Resilience Provider of the Year 2015 (Service/Product)
Sungard Availability Services

Continuity and Resilience Innovation of the Year 2015
Sungard Availability Services

Most Effective Recovery of the Year 2015
JP Morgan Chase, CIB Resilience Team

Industry Personality of the Year 2015
Ramachandran Vaidhyanathan MBCI of Cognizant Technology Solutions

The BCI India Awards are one of seven regional awards held by the BCI, which culminate in the annual Global Awards held in November during the Institute’s annual conference in London, England. All winners in the BCI India Awards are automatically entered into the Global Awards.

In investing, diversification is often seen as a good thing, unless you ask Warren Buffett, who is famously quoted as saying, “Wide diversification is only required when investors do not understand what they are doing.” Cloud services are the same way. So let’s change that quote a bit: “Diversification of cloud-based file sharing is only required when managed service providers (MSPs) do not understand what they are doing.”

The average organization uses 721 cloud services, according to a recent study by Skyhigh Networks. How do companies end up with so many cloud solutions? Is it a good or bad thing for their organization? MSPs must understand how companies get themselves into this situation and the downsides they face to be able to take the first steps to getting them out of it and helping them unify their cloud infrastructure.



Thursday, 28 May 2015 00:00

Backing Up Large Data Sets

Some MSPs may be understandably worried about taking on the responsibility of backing up large data sets. After all, just about any analyst you talk to is projecting data growth of 30% or more per year. So is it a wise move to add to existing service responsibilities by taking on an additional service such as cloud backup? The answer is a resounding yes.

Here are the facts. First, a major potential headache when it comes to cloud backup is the initial backup of a large data set from a new customer. That’s the one that could take a while. But it doesn’t take that long when you use an enterprise-class cloud backup solution.

A recent independent test by Mediatronics revealed Zetta.net could back up half a TB of data in less than 3 hours over a 1 Gbit connection. After that, an incremental backup, using a 5% change rate as a worst-case scenario, took only an hour. In reality, 5% is an aggressive change rate; surveys show the rate of change in a typical organization is only about 2% of the entire data set. This opens the door for a larger total dataset in the cloud.
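Those figures pass a rough sanity check. As a sketch (my own back-of-the-envelope arithmetic, not from the test report): transfer time is just data size divided by bandwidth, so at a perfectly utilized 1 Gbit/s link, half a TB takes a little over an hour, comfortably inside the reported 3-hour window.

```python
def transfer_hours(size_bytes, link_bps, efficiency=1.0):
    """Hours to move size_bytes over a link of link_bps bits/s at a given utilization."""
    return size_bytes * 8 / (link_bps * efficiency) / 3600

# 0.5 TB initial backup over 1 Gbit/s at line rate: about 1.1 hours
initial = transfer_hours(0.5e12, 1e9)

# 5% worst-case incremental change (25 GB): a few minutes at line rate,
# so the reported one-hour figure leaves ample room for real-world overhead
incremental = transfer_hours(0.05 * 0.5e12, 1e9)
```

Real-world throughput runs below line rate, but even at 50% link efficiency the initial backup still lands under the reported 3 hours.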



Thursday, 28 May 2015 00:00

Why insider threats are succeeding

Many companies still lack the means or motivation to protect themselves from malicious insiders; but the effects of insider threats are simply too big to ignore. According to a report by the market research company Forrester, 46 percent of nearly 200 technology decision-makers reported internal incidents as the most common cause of the breaches they experienced in the past year. Out of those respondents, almost half said the breach stemmed from a malicious insider.

In this article TK Keanini looks at the practical steps that organizations can take to protect data and systems from insider threats.



The Ponemon Institute has released its annual Cost of Data Breach Study: Global Analysis, sponsored by IBM. According to the benchmark study of 350 companies spanning 11 countries, the average consolidated total cost of a data breach is $3.8 million, representing a 23 percent increase since 2013.

The study also found that the average cost incurred for each lost or stolen record containing sensitive and confidential information increased six percent from a consolidated average of $145 to $154. Healthcare emerged as the industry with the highest cost per stolen record with the average cost for organizations reaching as high as $363. Additionally, retailers have seen their average cost per stolen record jump dramatically from $105 last year to $165 in this year's study.

"Based on our field research, we identified three major reasons why the cost keeps climbing," said Dr. Larry Ponemon, chairman and founder, Ponemon Institute. "First, cyber attacks are increasing both in frequency and the cost it requires to resolve these security incidents. Second, the financial consequences of losing customers in the aftermath of a breach are having a greater impact on the cost. Third, more companies are incurring higher costs in their forensic and investigative activities, assessments and crisis team management."

The first Cost of Data Breach study was conducted 10 years ago in the United States. Since then, the research has expanded to 11 countries. Ponemon Institute's Cost of Data Breach research is based on actual data of hundreds of indirect and direct cost categories collected at the company level using field-based research methods and an activity-based costing framework. This approach has been validated from the analysis of more than 1,600 companies that experienced a material data breach over the past 10 years in 11 countries.



I continue my exploration of actions you can take to improve your compliance program during an economic downturn with a review of what my colleague Jan Farley, the Chief Compliance Officer (CCO) at Dresser-Rand, called the ‘Desktop Risk Assessment’. Both the Department of Justice (DOJ) and the Securities and Exchange Commission (SEC) make clear the need for a risk assessment to inform your compliance program, and I believe that most, if not all, CCOs and compliance practitioners understand this well-articulated need. The FCPA Guidance could not have been clearer when it stated, “Assessment of risk is fundamental to developing a strong compliance program, and is another factor DOJ and SEC evaluate when assessing a company’s compliance program.” While many compliance practitioners have difficulty getting their arms around what a risk assessment requires and how precisely to use it, the FCPA Guidance makes clear there is no ‘one size fits all’ for just about anything in an effective compliance program.

One type of risk assessment is a full-blown, worldwide exercise in which teams of lawyers and fiscal consultants travel the globe, interviewing and auditing. This can be a notoriously expensive exercise, and if you are in Houston, the energy industry or any sector in the economic doldrums right now, it may not be something you can even seek funding for at this time. Moreover, you may also be constrained by reduced compliance personnel, so that you cannot perform a full-blown risk assessment even with internal resources.



By conventional standards, business continuity cannot exceed one hundred percent. Business continuity of less than 100% is obviously possible, although measurements of just how much less may only be approximate. However, if everything is working properly, full business continuity has been achieved. Does it make sense to then talk about ‘fuller than full’ or a business continuity index that is more than 100%?



Most of the commentary regarding the cloud these days (mine included) focuses on the myriad ways in which abstract, distributed architectures can remake the enterprise as we know it.

We talk of software-defined data environments, hyperscale infrastructure and advanced Big Data and mobile application environments that will allow organizations to shed their rusty legacy environments in favor of a brave new world of computing.

The trouble is, most organizations don’t want that – at least, not right away.

The simple fact of the matter is that radical change is frightening to most people, and the typical CIO or data management executive is driven not by a desire to deploy the latest and greatest technology but to implement solutions that contribute to the bottom line.



(TNS) — The recent rioting and unrest in Baltimore will cost the city an estimated $20 million, officials said Tuesday.

The expenses — which go before the city’s spending board for approval Wednesday — include overtime for police and firefighters, damage to city-owned property and repaying other jurisdictions for police and other assistance.

Henry J. Raymond, Baltimore’s finance director, said the city can temporarily cover the costs from its rainy-day fund while seeking reimbursement for up to 75 percent from the Federal Emergency Management Agency.

“The city remains on strong financial footing,” Raymond said. “Hopefully, with the FEMA reimbursement, it will reduce the financial stress that we’re under. In terms of the city’s overall revenue structure, we’re on firm footing and we’ll move forward.”



Thursday, 28 May 2015 00:00

BCI: The Cost of Data Breaches

According to a new study by the Ponemon Institute, sponsored by IBM, the average consolidated total cost of a data breach is $3.8 million, representing a 23% increase since 2013. The annual 'Cost of Data Breach Study' also found that the average cost incurred for each lost or stolen record containing sensitive and confidential information increased 6% from a consolidated average of $145 to $154.

"Based on our field research, we identified three major reasons why the cost keeps climbing," said Dr Larry Ponemon, chairman and founder, Ponemon Institute. "First, cyber attacks are increasing both in frequency and the cost it requires to resolve these security incidents. Second, the financial consequences of losing customers in the aftermath of a breach are having a greater impact on the cost. Third, more companies are incurring higher costs in their forensic and investigative activities, assessments and crisis team management."

Data breaches are a significant threat to organizations, as highlighted in the Business Continuity Institute's latest Horizon Scan report, which revealed that 82% of survey respondents were either concerned or extremely concerned about the threat of a cyber attack materialising, while 74% expressed the same level of concern about a data breach, making these the first and third greatest threats respectively.

Some of the highlights from the Ponemon Institute’s research include:

  • Board-level involvement and the purchase of insurance can reduce the cost of a data breach. The study looked at the positive consequences that can result when boards of directors take a more active role after an organization has a data breach. Board involvement reduces the cost by $5.50 per record; insurance protection reduces it by $4.40 per record.
  • Business continuity management plays an important role in reducing the cost of data breach. The research reveals that having business continuity management involved in the remediation of the breach can reduce the cost by an average of $7.10 per compromised record.
  • The most costly breaches continue to occur in the US and Germany at $217 and $211 per compromised record respectively. India and Brazil still have the least expensive breaches at $56 and $78 respectively.
  • The cost of data breach varies by industry. The average global cost of data breach per lost or stolen record is $154. However, if a healthcare organization has a breach, the average cost could be as high as $363, and in education the average cost could be as high as $300. The lowest cost per lost or stolen record is in transportation ($121) and public sector ($68).
  • Hackers and criminal insiders cause the most data breaches. 47% of all breaches in this year's study were caused by malicious or criminal attacks. The average cost per record to resolve such an attack is $170. In contrast, system glitches cost $142 per record and human error or negligence costs $137 per record. The US and Germany spend the most to resolve a malicious or criminal attack ($230 and $224 per record, respectively).
  • Notification costs remain low, but costs associated with lost business steadily increase. Lost business costs include abnormal turnover of customers, increased customer acquisition activities, reputation losses and diminished goodwill. The average cost has increased from $1.23 million in 2013 to $1.57 million in 2015. Notification costs decreased from $190,000 to $170,000 since last year.
  • Time to identify and contain a data breach affects the cost. The study shows the relationship between how quickly an organization can identify and contain data breach incidents and financial consequences. Malicious attacks can take an average of 256 days to identify while data breaches caused by human error take an average of 158 days to identify. As discussed earlier, malicious or criminal attacks are the most costly data breaches.
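Taken together, the per-record figures above suggest a rough back-of-the-envelope cost model. The sketch below combines them additively, which is an illustrative assumption rather than the Ponemon study's actual methodology; only the dollar figures come from the report.

```python
# Illustrative estimate of total data breach cost using the Ponemon
# per-record figures quoted above. The additive adjustment model is an
# assumption for illustration, not the study's actual methodology.

GLOBAL_AVG_PER_RECORD = 154.0   # average global cost per lost/stolen record
BOARD_INVOLVEMENT = -5.50       # reduction when the board takes an active role
INSURANCE = -4.40               # reduction from insurance protection
BCM_INVOLVEMENT = -7.10         # reduction when BCM helps remediate the breach

def estimated_breach_cost(records, base=GLOBAL_AVG_PER_RECORD,
                          board=False, insured=False, bcm=False):
    """Rough total cost for a breach of `records` compromised records."""
    per_record = base
    if board:
        per_record += BOARD_INVOLVEMENT
    if insured:
        per_record += INSURANCE
    if bcm:
        per_record += BCM_INVOLVEMENT
    return records * per_record

# A 10,000-record breach with all three mitigating factors in place:
print(estimated_breach_cost(10_000, board=True, insured=True, bcm=True))
```

Swapping `base` for one of the industry figures (e.g. $363 for healthcare) gives a sector-specific estimate under the same simplifying assumption.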

I. Bill Gates is an optimist.

Ask him, and he'll tell you himself. "I'm very optimistic," he says. See?

And why shouldn't Bill Gates be an optimist? He's one of the richest men in the world. He basically invented the form of personal computing that dominated for decades. He runs a foundation immersed in the world's worst problems — child mortality, malaria, polio — but he can see them getting better. Hell, he can measure them getting better. Child mortality has fallen by half since 1990. To him, optimism is simply realism.

But lately, Gates has been obsessing over a dark question: what's likeliest to kill more than 10 million human beings in the next 20 years? He ticks off the disaster movie stuff — "big volcanic explosion, gigantic earthquake, asteroid" — but says the more he learns about them, the more he realizes the probability is "very low."



Wednesday, 27 May 2015 00:00

Reinventing the Data Center Stack

Now that the cloud is becoming a common fixture in the enterprise, the IT industry is starting to look at how a cloud-facing, mobile-driven environment will affect that full data stack.

Naturally, this is mostly conjecture at this point because many leading experts still do not know how the technology, user requirements, business models and even entire industries will be affected by this transformation. From an historical perspective, the current decade is very similar to about 100 years ago as utility-based electrical grids were first powering up: People are in awe of an amazing new technology, even though its full ramifications cannot be discerned.

Still, there are those who are willing to give it a try, particularly when it comes to the all-software IT deployment capabilities that abstract architectures represent. MapR Technologies’ Jack Norris recently explored the potential of “re-platforming” the enterprise on a more data-centric footing. This will naturally require a new view of physical infrastructure, such as the current separation of compute and storage, but it also has implications higher up the stack, such as the need to maintain separate production and analytics architectures. This new stack will also require global resource management, linear scalability and real-time processing and systems configuration.



It’s been about eight months since IT services giant and top-ranked MSPmentor 501 2015 company Dimension Data announced it would deploy globally standardized managed services for data centers.

The service, built on the organization’s managed services automation platform, manages server, storage and networks for on-premise, cloud and hybrid data centers, the company said in a statement in September. Those services can be in the client’s data centers, colocation facilities, in the public cloud, in a private cloud, or in Dimension Data’s cloud.



Wednesday, 27 May 2015 00:00

Another Strand of the Resilience Web

One of the problems with understanding how resilient we can be in the future is that we expect the future to be based on our current normalities. We expect (and would probably like) a degree of stability based upon what we know and understand to be our current terms of reference. Unfortunately, things change; and alongside the political and international tectonic shifts that appear to be accelerating at the moment, we should also consider the structures and capabilities upon which we have long relied, and the fact that we may be losing control of them.

The structures of our societies, the underpinning elements of the way that we live, can also have a profound influence on our ability to live in the same way in the future. An interesting combination of debt and demographics is influencing the potential longevity of our economic structures, according to the European chief executive of Goldman Sachs Asset Management.



We have become not only acculturated to interruptions, but addicted to them. We have the mistaken belief that interruptions are a perfectly normal way of life, despite knowing deep down that “time is a precious commodity that we cannot afford to waste.”

Therein lies the essential message of Edward Brown, founder and president of Cohen Brown Management Group, a culture change and time management consulting and training firm in Los Angeles. But at least he’s trying to do something about it. He’s the author of The Time Bandit Solution: Recovering Stolen Time You Never Knew You Had, and he feels strongly enough about the issue to take time out for an in-depth email interview on the topic.

I learned a lot from that interview about the extent to which we allow ourselves to be interrupted, and the price we pay as a result. To set the stage for the discussion, Brown pointed out that there are two key types of interruptions that we tolerate: those coming from other people, and those coming from our devices. He said other people are inveterate time bandits, and the fact that their intent is innocent doesn’t matter:



(TNS) — After a major accident or disaster, rescue operations have always focused on the nuts and bolts — saving the survivors, searching for those who didn’t make it, securing the evidence.

Now an added dimension — the consumer perspective — has expanded how disaster planners think. Philadelphia emergency management officials say it guided their response to the Amtrak derailment that killed eight people and injured more than 200 on May 12.

Passengers are going through “the most traumatic time of their lives,” said Everett A. Gillison, Mayor Michael Nutter’s chief of staff and deputy mayor for public safety. “Seeing the world through their eyes really kind of forces us to always question: ‘Are we providing what we really need to provide to them?’ “

That includes understanding what frantic families are going through. “If you haven’t heard from somebody, you kind of have to assume the worst,” he said.



(TNS) — Climate change may be triggering an evolution in hurricanes, with some researchers predicting the violent storms could move farther north, out of the Caribbean Sea and the Gulf of Mexico, where they have threatened coastlines for centuries.

Hurricane season in the Atlantic Ocean began Monday, and forecasters are predicting a relatively quiet season. They say three hurricanes are expected over the next six months, and only one is expected to become a major hurricane.

Florida hasn’t been hit by a hurricane in a decade, and researchers are increasingly pointing to climate change as a potential factor.



(TNS) — On a day that brought a new round of fierce thunderstorms and torrential rains, authorities continued a grim search Monday for 12 people still missing after being swept from riverfront homes, and property owners returned to dramatic scenes of destruction.

San Marcos and Hays County officials revised upward the property damage wrought by the historic flood, saying 72 homes had been washed away. Texas Gov. Greg Abbott, who toured the scene, said the storms brought a punch that "you cannot candy coat" and declared a disaster area in 24 counties, including Bastrop and Hays.

Abbott said the flood in the Wimberley valley is "the highest flood we've ever recorded in the history of the state of Texas."

"It's a powerful message to anyone in harm's way of the relentless, tsunami-type power this wave of water can pose to people," he said.



Wednesday, 27 May 2015 00:00

The Importance of Risk Culture

When objective parties, armed with the benefit of 20/20 hindsight, can easily see warning signs that something was either wrong or wasn’t working, and executive management either missed or chose to ignore those same warning signs, it is fair to assert that management was encumbered with a blind spot. A culture that is conducive to effective risk management encourages open and upward communication, sharing of knowledge and best practices, continuous process improvement and a strong commitment to ethical and responsible business behavior.

Effective risk management doesn’t function in a vacuum and rarely survives a leadership failure. The risk management function can review, inform, advise, monitor, measure and even resign. It cannot control, decide or abort; that’s management’s job. Without an effective internal environment in place to ensure that adequate attention is given to protecting enterprise value, entrepreneurial behavior can run amok, completely unbridled and without boundaries or constraints. By “internal environment,” we mean the total package – the control environment, management’s operating style, the incentive compensation structure, a commitment to ethical and responsible business behavior, open and transparent reporting, clear accountability for results and other aspects of the organization’s culture.

Our premise is that ensuring an effective risk culture is an important task for executive management and the Board. Unfortunately, despite its importance, risk culture is often either given lip service or simply ignored.



You’re ready for advancement, you want to learn and you’re looking for an educational programme to encompass the needs of your current or planned role in the protection and preservation of your organisation’s functionality, viability and profitability. The MSc Organisational Resilience at Buckinghamshire New University will be good for you – here’s why:

You will become confident, capable and thorough in your knowledge and understanding of organisational resilience

You will understand how resilience needs to match the context of a changing global operating and threat landscape

You will develop the important skill of not just being able to talk about resilience, but also to take an analytical approach that allows you to offer balanced and evaluated solutions to real problems and issues



To ensure the availability of high performance, mission critical IT services, IT departments need both solid monitoring capabilities and dedicated IT resources to resolve issues as they occur. But even with the right tools in place, when an abundance of alerts and alarms starts streaming in, it can quickly become overwhelming, particularly when IT staff have been asked to focus time and attention on activities that both support the organization’s end users and add to the company’s bottom line.

Logicalis US suggests that organizations need to ask the following five key questions to help ensure that enterprise IT monitoring is fit for purpose:

1. Is your monitoring tool configured properly? Most organizations have off-the-shelf monitoring tools that gather information from all of the devices on their network. The information coming from these tools can be overwhelming, and while it may be helpful to have access to all of that data, weeding through it in crunch-time can be cumbersome. To limit alerts to those that are most important takes training, knowledge and expertise, which leads many organizations that want to manage IT monitoring in house to employ full-time experts just to configure and manage their monitoring tools.

2. Do you update regularly? Since rules are continually being added to monitoring tools, monitoring isn’t an ‘implement and forget it’ situation, which means IT departments spend a considerable amount of time making sure the tools they depend on for alerts are as current and up-to-date as possible.

3. Can your tool provide event correlation? A single network error can have a ripple effect impacting applications that would otherwise be completely unrelated. As a result, it’s critical that an IT monitoring tool provide event correlation to speed diagnosis and remediation in all affected areas.

4. Does your monitoring tool offer historical trending data? When managing an enterprise environment, IT pros need to analyze historical trend data to identify recurring issues as well as to do capacity planning which, in many cases, can help prevent issues before they arise. Some of today’s popular monitoring tools, however, either operate in real time or store historical data for 30 days or less. Knowing what your tool offers is important information since being able to intelligently analyze and manage an organization’s IT environment can depend on having access to this historical data long term.

5. Do you have the right expertise in house? In an enterprise IT environment, it’s important to consider internal staffing needs and the expertise required to manage the monitoring tools and process in house. Keeping an enterprise environment up and running is no longer IT’s value-add; it’s an expectation. Today, most organizations want their IT staff delivering business results, which is why it may make sense to consider outsourcing monitoring to a third party skilled in assessing and limiting incident reports to only the handful that a busy internal staff actually needs to address.
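The event correlation mentioned in question 3 can be illustrated with a toy sketch: alerts that arrive within a short window of an initial event are grouped into a single incident. The fixed time window and the sample alerts below are invented for illustration; production tools correlate using topology and dependency models, not timestamps alone.

```python
# Toy event correlation: group alerts that arrive within a fixed time
# window of an initial event, so one root cause (e.g. a network error)
# produces one incident instead of a page per affected application.
# The window rule and sample data are illustrative assumptions only.

WINDOW_SECONDS = 60

def correlate(alerts):
    """alerts: list of (timestamp, source, message), sorted by timestamp.
    Returns incidents, each a list of alerts grouped with the first event."""
    incidents = []
    current = []
    window_start = None
    for ts, source, msg in alerts:
        if window_start is None or ts - window_start > WINDOW_SECONDS:
            if current:
                incidents.append(current)
            current = [(ts, source, msg)]
            window_start = ts
        else:
            current.append((ts, source, msg))
    if current:
        incidents.append(current)
    return incidents

alerts = [
    (0,   "core-switch", "link down"),
    (5,   "app-db",      "connection timeout"),
    (12,  "app-web",     "upstream unreachable"),
    (300, "backup-job",  "job overdue"),
]
print(len(correlate(alerts)))  # prints 2: the first three alerts collapse into one incident
```

The payoff is exactly the ripple-effect scenario described above: the switch failure and its two downstream application alerts are diagnosed as one incident rather than three unrelated pages.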


On 12th December 2014 NATS, the UK's leading provider of air traffic control services, experienced a failure in its Swanwick flight data system. The outage resulted in widespread flight delays and cancellations. A report has now been published which details the events behind the outage and subsequent business continuity response.

Written by an enquiry panel led by Sir Robert Walmsley the report finds that:

  • Failure occurred on the 12th December because of a latent software fault that was present from the 1990s. The fault lay in the software’s performance of a check on the maximum permitted number of Controller and Supervisor roles.
  • The error was triggered when a number of new Controller roles were added to the system the day before.
  • The standard practice in NATS is that engineering recovery is coordinated through a group of designated engineers, known as the Engineering Technical Incident Cell (ETIC) and drawn from those available in the Systems Control Centre adjacent to the Operations Room. While some recovery actions are automated, ETIC manually controls all key recovery actions, e.g. the restoration of data, to ensure that decisions are made with due and careful deliberation; this is important, as the wrong decisions could have further downgraded performance.
  • Identifying a software fault in such a large system (the total application exceeds 2 million lines of code), within only a few hours, is a surprising and impressive achievement. This was made possible because system logs contain details of the interactions at the workstations.
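The latent fault described in the report is a classic boundary-check bug: a check that passes for every configuration the system has historically seen, then misbehaves when legitimate new data approaches the limit. The sketch below is a deliberately simplified, hypothetical illustration of that failure mode; the names, numbers and the flawed comparison are invented and bear no relation to the actual NATS code.

```python
# Hypothetical illustration of a latent limit-check fault of the kind
# described in the NATS report: the check is wrong, but harmlessly so,
# for every workload the system has historically seen. Names and
# numbers are invented for the example.

MAX_ROLES = 193          # hypothetical combined permitted maximum

def check_roles(controller_roles, supervisor_roles):
    total = controller_roles + supervisor_roles
    # Latent fault: the guard tests only controller roles against the
    # combined limit, so it silently under-counts. It should test `total`.
    if controller_roles > MAX_ROLES:
        raise ValueError("role limit exceeded")
    return total

# Works for years with typical workloads...
check_roles(150, 20)
# ...until new roles added the day before push the true total past the
# limit without ever tripping the flawed check:
assert check_roles(180, 20) > MAX_ROLES   # fault goes undetected
```

The point of the sketch is how such a bug stays dormant: no test or operational history exercises the configuration that exposes it, so the first trigger arrives in production, decades after the code was written.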

The detailed 93-page report is available here as a PDF and should be of interest to business continuity managers whatever their sector. It shows how legacy systems can have unexpected and unanticipated impacts, as well as giving useful details about the business continuity plans and strategies that were in place at the time of the incident.

The report makes clear that although this was a high profile incident which caused difficulties for NATS' direct customers and the supply chain, it was undoubtedly a business continuity success. Without a strong recovery team response and the pre-planned procedures that were in place the incident and disruption would have been much worse.

According to a new market research report published by MarketsandMarkets, the mass notification market is estimated to grow from $3.81 billion in 2015 to $8.57 billion in 2020. This represents a compound annual growth rate (CAGR) of 17.6 percent from 2015 to 2020.
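The quoted growth rate is simply the compound annual growth rate implied by the two end-point figures:

```python
# CAGR implied by the report's end-point figures: $3.81bn (2015) to
# $8.57bn (2020), i.e. five years of compounding.
start, end, years = 3.81, 8.57, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints 17.6%, matching the reported figure
```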

The major forces driving this market are the growing need for public safety, increasing awareness for emergency communication solutions, the requirement for mass notification for business continuity, and the trend towards mobility.

The report says that business continuity and disaster recovery and public safety compliance standards are boosting the sales of mass notification solutions.

Mass notification solutions providers are expected to collaborate and provide better competitive services to take advantage of the emerging mass notification market and to meet the need for complete crisis communication solutions.

Obtain the ‘Mass Notification Market by Solution (In-Building, Wide-Area, Distributed Recipient), by Application (Interoperable Emergency Communications, Business Continuity & Disaster Recovery, Integrated Public Alert & Warning, Business Operations), by Deployment, by Vertical & by Region - Global Forecast to 2020’ report from here.

Most people are visually oriented when it comes to taking in information. They also prefer analogue displays to digital ones. In other words, when it comes to understanding risk as part of business continuity, they like colours and graphics rather than numbers in a spreadsheet. That makes the risk heat map a popular choice for presenting summary risk information to non-risk experts or senior management. Typically, areas in red on the heat map indicate the biggest risks and areas in green the smallest/most acceptable risks. But is this approach in fact too limited?



Tuesday, 26 May 2015 00:00

New Approaches to IT Efficiency

Virtually everyone is in favor of an energy-efficient data center. But if that is the case, why has the industry struggled so mightily to reduce power consumption?

Even with the remarkable gains in virtualization and other advanced architectures, the data center remains one of the primary energy consumers on the planet, and even worse, a top cost-center for the business.

But the options for driving greater efficiency in the data center are multiplying by the day – from low-power, scale-out hardware to advanced infrastructure and facilities management software to new forms of power generation and storage. As well, there is the option to offload infrastructure completely to the cloud and refocus IT around service and application delivery, in which case things like power consumption and efficiency become someone else’s problem.



Editor’s Note: This is part of a series on the factors changing data analytics and integration. The first post covered cloud infrastructure; the second discussed new data types, and the third focused on data services.

Data keeps expanding, but only recently have organizations been able to store the data in useful ways. Now, organizations can theoretically keep data at the ready, whether it’s in the cloud, a data lake or an in-memory appliance.

Hopefully, it will soon be archaic to hear my doctor say, “Oh, we sent that x-ray to tape. We could get it — but it’s a huge hassle.”

The ability to store mass data is one of the five data evolutions that David Linthicum cited in his thesis on “The Death of Traditional Data Integration.” The ability to pool Big Data sets would not be disruptive, though, if it weren’t coupled with the ability to access it easily and as needed for analytics. As Informatica CEO Sohaib Abbasi points out, this “richness of big data is disrupting the analytics infrastructure.”



One of the often overlooked aspects of Big Data and the Internet of Things is the ability to model and simulate advanced data architectures. This is likely to become a crucial element in the emerging data-driven economy because it allows business leaders to further optimize their digital footprints in support of business goals without disrupting current operations.

As expected, there is a plethora of new simulation platforms hitting the channel that utilize both cloud and on-premises resources to, ironically, model cloud and on-premises infrastructure in support of advanced development and productivity applications.



ATLANTA – As the 2015 hurricane season begins, FEMA has launched a new feature to its mobile app to help you be prepared and stay informed about severe weather. The free feature allows you to receive weather alerts from five locations you select anywhere in the country, even if the phone is not located in the area. This tool makes it easy to follow severe weather that may be threatening your family and friends in other areas.

“Whether this year's hurricane season is mild or wild, it’s important to be prepared,” said Regional Administrator Gracia Szczech. “Despite forecasters’ predictions for a below-normal number of storms, fewer storms do not necessarily mean a less destructive season. FEMA is reinforcing preparedness basics and resources to help people be ready whether they live along the coast or farther inland.” Visit FEMA’s www.ready.gov/hurricanes for step-by-step information and resources for what to do before, during and after a hurricane.

Cellphones and mobile devices are a major part of our lives and an essential part of how emergency responders and survivors get information during disasters. According to a recent survey by Pew Research, 40 percent of Americans have used their smartphone to look up government services or information. Additionally, a majority of smartphone owners use their devices to keep up to date with breaking news, and to be informed about what is happening in their community.

The new weather alert feature adds to existing features in the app to help Americans through emergencies. In addition to this upgrade, the app also provides a customizable checklist of emergency supplies, maps of open shelters and Disaster Recovery Centers, and tips on how to survive natural and manmade disasters. The FEMA app also offers a “Disaster Reporter” feature, where users can upload and share photos of disaster damage. The app defaults to Spanish language content for smartphones that have Spanish set as their default language.

The latest version of the FEMA app is available for free in the App Store for Apple devices and Google Play for Android devices. Users who already have the app downloaded on their smartphones should download the latest update for the new alerts feature to take effect. To learn more about the FEMA app, visit: The FEMA App: Helping Your Family Weather the Storm.


eFax Corporate recently hosted a webinar to inform covered entities in healthcare of the dangers that today’s sophisticated cyber hackers pose to their electronic protected health information (ePHI) and other intellectual property.

We chose healthcare because it is a favored target among hackers and other “malicious actors,” as the FBI calls them. This is largely because the personal data that health providers hold includes information valuable to criminals: names, birth dates, Social Security numbers. According to the Department of Health and Human Services’ Office of Civil Rights, data breaches of health providers in 2014 affected as many as 10 million people. And breaches like these were up an astonishing 1,800% from 2008 to 2013!

But the common pitfalls and best practices we identified in this webinar relate not only to healthcare-related businesses; they can also apply to organizations in all industries. So here’s a brief overview of the key points we discussed in the webinar: details you might want to share with your corporate clients.



One of the things that IT security folks don’t appreciate about the proliferation of mobile computing devices everywhere is how trusting those devices are. Every mobile computing device just naturally assumes that a radio signal within its reach is a trusted source of Internet access.

It turns out, however, that digital criminals are starting to abuse that trust by setting up fake wireless networks to hijack those radio signals, using a process commonly referred to as “commjacking.” Once a fairly expensive ruse, commjacking can now be carried out with open source kits costing as little as $29, which enable criminals to set up a wireless network that for all intents and purposes looks like any other open wireless network. Once a mobile device connects to that network, the digital criminals that run it simply steal all the data they can, including everything from credit card numbers to any unencrypted emails.



Fighting corruption has reached new heights on the global agenda, driven by the recognition that corruption fuels inequality, poverty, conflict, terrorism and failures of development.  Governments in India, Brazil, the UK, Canada, China and some other countries have followed enforcement of the U.S. Foreign Corrupt Practices Act by promulgating national anti-corruption laws that focus on the bribery of public officials by companies, generally with sweeping extraterritorial authority. The appropriate corporate response, we are told, is to build anti-corruption compliance programs; regulators even offer the private sector detailed guidance about best practices. All this has spawned a lucrative consulting industry dominated by investigation companies and accounting and law firms – what the Economist refers to as “FCPA Inc.” With little excuse for ignorance, it would seem that enterprises need only adhere to guidance from regulators and roll out the mandated programs.

It’s not working. Compliance officers tell of delayed rollouts, inadequate budgets, company-wide coordination problems and their own lack of organizational influence. Even when companies get past operational issues, the evidence suggests that a “tick-the-box” approach to compliance is inadequate. Many of the companies currently under investigation by the U.S. Department of Justice and the Securities and Exchange Commission already had hugely expensive, state-of-the-art compliance programs. A recent OECD review of successful corruption prosecutions cites involvement by senior management or Chief Executive Officers in more than 50 percent of global anti-corruption cases to date — revealing deliberately unethical decision making by executives who decisively outrank Chief Compliance Officers. This narrative of systemic degradation is at odds with the dominant “rogue employee under the radar” explanation of wrongdoing. It exposes a legal system that has mistakenly, or perhaps willfully, chosen to focus on a misleading proxy indicator of performance: individual accountability.



It was only a matter of time before there was a serious security flaw affecting the Internet of Things (IoT). It comes by way of a vulnerability in NetUSB, which lets devices that are connected over USB to a computer be shared with other machines on a local network. The vulnerability, which could lead to remote code execution or denial of service if exploited, may affect some of the most popular routers in our homes and workplaces.

Details of the vulnerability were released by SEC Consult. According to Forbes, the weakness is somewhat rare, but it works this way:

When a PC or other client connects to NetUSB, it provides a name so it can be recognised as an authorised device. Whilst the authentication process is ‘useless’ as the encryption keys used are easy to extract … it’s also possible for an attacker who has acquired access to the network to force a buffer overflow by providing a name longer than 64 characters.
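The weakness described is a textbook unchecked-length copy into a fixed 64-byte buffer. The sketch below shows, in hypothetical Python rather than the kernel C of the real driver, the kind of length guard whose absence made the overflow possible; all names here are invented.

```python
# The NetUSB flaw is an unchecked copy of a client-supplied name into a
# fixed 64-byte buffer. A safe handler validates the length before use.
# This is a hypothetical Python sketch; the real code is kernel C, and
# these names are invented for illustration.

NAME_BUFFER_SIZE = 64

class ProtocolError(Exception):
    pass

def read_client_name(raw: bytes) -> str:
    """Reject any client name that would not fit the fixed-size buffer."""
    if len(raw) > NAME_BUFFER_SIZE:
        # The vulnerable code omitted this check, so an attacker with
        # network access could overflow the buffer with a long name.
        raise ProtocolError(f"client name exceeds {NAME_BUFFER_SIZE} bytes")
    return raw.decode("utf-8", errors="replace")

read_client_name(b"laptop-01")       # an ordinary device name is accepted
try:
    read_client_name(b"A" * 65)      # an attacker-length name is rejected
except ProtocolError:
    pass
```

As the article notes, the length check is only half the story: the authentication that precedes it was also "useless" because the encryption keys were easy to extract, so defense has to assume hostile clients can reach this code path.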



A period of upheaval is on the near-horizon for MSPs, and it’s going to be especially hard on providers overly focused on technology. They must adapt by shifting their focus to delivering business solutions, and seek opportunities in cloud and virtual desktop services.

“I think there’s going to be a lot of casualties over the next three to five years in the MSP space, and primarily it’s because many MSPs today have been started by technologists,” Tommy Wald, president of TW Tech Ventures in Austin, Texas, said in a recent interview with MSPmentor.



(TNS) — Colorado will spend $1.2 million over the next two years on a "revolutionary" fire prediction system that uses atmospheric weather data to predict the behavior of wildfires up to 18 hours in advance.

Gov. John Hickenlooper signed House Bill 1129 on Wednesday afternoon at a fire station in Arvada, implementing one of several bills lawmakers drafted in response to wildfires in El Paso County and elsewhere.

"This bill will predict the intensity and the direction of fires 12 to 18 hours ahead of time. That is really important so we know where to direct our planes, the aircraft we had a bill for last year, and our firefighters," said Rep. Tracy Kraft-Tharp, D-Arvada, who introduced the bill. "This is really revolutionary."



(TNS) — Congressman Tom Cole (OK-04) introduced legislation this week that would help families rebuilding their homes after disasters. Currently, the Small Business Administration provides homeowners, renters and personal-property owners with low-interest loans to help recover from a disaster.

The Tornado Family Safety Act of 2015, introduced by Cole, clarifies that SBA disaster loans can be used by homeowners for construction of safe room shelters within rebuilt homes.

“Oklahomans are no strangers to severe weather and the terrible destruction that can result from it,” said Cole. “Considering the yearly risk and unpredictability of tornadoes that exists, it is not a matter of ‘if’ but ‘when’ it will occur.

This legislation underscores the type of projects that are eligible for these SBA disaster loans, which includes loans for construction of safe rooms. Under current law, SBA can increase the size of a home disaster loan up to 20 percent of the total damage to lessen the risk of property damage by future disasters of the same kind.



The typical organization loses 5% of revenue each year to fraud – a potential projected global fraud loss of $3.7 trillion annually, according to the ACFE 2014 Report to the Nations on Occupational Fraud and Abuse.

In its new Embezzlement Watchlist, Hiscox examines employee theft cases that were active in United States federal courts in 2014, with a specific focus on businesses with fewer than 500 employees to get a better sense of the range of employee theft risks these businesses face. While sizes and types of thefts vary across industries, smaller organizations saw higher incidences of embezzlement overall.

According to the report, “When we looked at the totality of federal actions involving employee theft over the calendar year, nearly 72% involved organizations with fewer than 500 employees. Within that data set, we found that four of every five victim organizations had fewer than 100 employees; more than half had fewer than 25 employees.”



The task of staying on top of all of the alerts and alarms that security monitoring tools send out constantly is becoming an unsustainable burden for some IT departments. In balancing the setup and manning of these alerts (sometimes millions of them) while at the same time providing other mission-critical services to grow the business, something has to give. The problem was even implicated in the massive 2014 Target breach, in which relevant alarms were not noticed in a timely manner.

Security monitoring tools are all but useless without human IT resources to follow up on them, and quickly. It’s become a specialized service area for some enterprises, which want to outsource the monitoring to experts who do nothing else and who know the ins and outs of setting thresholds and balancing the monitoring of multiple systems.

Managed service provider Logicalis US has compiled five questions for CIOs considering bringing on a monitoring service provider to support IT’s security responsibilities.



The SMB Group released information on its State of SMB Adoption of Mobile Apps and Management Solutions recently. It was a relief to see that SMBs were finally recognizing the importance of mobile solutions to their businesses, with 55 percent of the small and 65 percent of the midsize businesses strongly agreeing that these are critical. However, Kaspersky Lab’s own report on BYOD shows that a surprising number of SMB owners “don’t see a danger” in their employees using personal devices at work.

The Kaspersky report provides data that shows that BYOD could be the real security issue for SMBs, according to CBR Online. In the report, 92 percent of those surveyed said they “keep sensitive corporate information on smartphones and tablets, which they use for both work and personal activities.” That is a dangerously high number of businesses that put a lot of trust in their mobile security efforts, despite the fact that they also think that “basic security tools provided within free solutions” are enough to protect that data. Most also say they don’t see a reason to budget more money toward better security.



Wednesday, 20 May 2015 00:00

BMC’s Remedy for IT Obsolescence

Of the companies I follow, one stands out with the singular mission of assuring that IT doesn’t again become obsolete in the face of ever more powerful direct-to-line-management offerings like Amazon Web Services. Most firms treat Amazon’s offering as a competitor or potential customer and miss that it is actually a very different beast. For the most part, Amazon isn’t going after IT as a customer; it is rendering IT obsolete by going after IT’s customers directly. In sales-channel terms, this is what Amazon did to retail: it made the retail store obsolete in order to sell directly to the store’s customers. In effect, Amazon changed the game. BMC is the only enterprise vendor that has figured out that the proper defense isn’t to fight Amazon or to sell to Amazon -- it is to protect IT.

The MyIT effort validates this strategy and the new Remedy 9 platform is the latest in the company’s quiver of arrows designed to help IT defend against obsolescence.

In short, BMC’s goal is to make IT a better choice for employees than any cloud service, partially by embracing them, but mostly by driving IT to focus on making IT’s own customers more satisfied.



Wednesday, 20 May 2015 00:00

Managing the Hybrid Application Stack

The best part about moving data operations to the cloud is that you no longer have to worry about provisioning and managing infrastructure. The drawback, of course, is that you have to shift to a service/application-centric approach to management and then somehow integrate that with all of your legacy management systems.

Fortunately, hybrid data management is gaining a fair bit of traction in the development community as vendors seek to get the jump on what is likely to be the dominant enterprise data architecture going forward. According to BlueStripe’s Vic Nyman, the hybrid data center is likely to contain a broad mix of virtualized infrastructure, operating systems and container platforms, as well as a variety of database formats, third-party web services and distributed applications. To manage such diversity, the enterprise will need to deploy key functions such as dynamic application mapping and updating, seamless multi-platform visibility, real-time response time measurement and reporting – and this is before we can even think about expanding to microservices and application component aggregation.
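Of the functions Nyman lists, real-time response-time measurement is the most straightforward to illustrate. Below is a hypothetical sketch of the idea: wrap any cross-platform service call and record its observed latency per service. The names (`measured`, `timings`, `fetch_invoice`) are assumptions for this example, not BlueStripe's actual API.

```python
import time
from functools import wraps

timings = {}  # service name -> list of observed latencies, in seconds

def measured(name):
    """Decorator that records how long each call to a service takes."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Record latency even when the call raises, so failures are visible too.
                timings.setdefault(name, []).append(time.perf_counter() - start)
        return wrapper
    return decorator

@measured("billing-api")
def fetch_invoice(invoice_id):
    time.sleep(0.01)  # stand-in for a call across the hybrid stack
    return {"id": invoice_id, "status": "paid"}

result = fetch_invoice(42)
print(result["status"], len(timings["billing-api"]))
```

A real hybrid-management tool would correlate these per-service latencies across virtualized hosts, containers, and third-party web services to build the dynamic application map the article describes; the sketch only shows the measurement primitive.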



(TNS) — When a bridge falls, when a water main fails or when a train crashes, news crews and commentators report on the sorry state of our nation’s infrastructure. Policymakers on both sides of the aisle say we need to do something to fix our roads and rails, our ports and pipes. This flurry of activity lasts for a few days, but then little to nothing happens.

Why isn’t there more action?

Despite infrastructure’s fundamental role in the health and safety of the American people and the economy, the United States has underinvested for decades. Today, infrastructure spending as a share of gross domestic product is about 2.5 percent, much lower than the 3.9 percent in peer countries such as Canada, Australia and South Korea. The figure for Europe as a whole is closer to 5 percent and between 9 and 12 percent for China.

The McKinsey Global Institute estimates that the United States should spend at least an additional $150 billion a year on infrastructure through 2020 to meet its needs. This investment is expected to add about 1.5 percent to annual GDP and create at least 1.8 million jobs.



Applications accepted for ocean, fisheries programs through July
Resilience means bouncing back. (Credit: NOAA)

Two new NOAA grant programs will help coastal communities and their managers create on-the-ground projects to make them more resilient to the effects of extreme weather events, climate hazards, and changing ocean conditions.

This builds on NOAA’s commitment to provide information, tools, and services to help coastal communities reduce risk and plan for future severe events.

NOAA’s National Ocean Service is supporting the effort with $5 million in competitive grant awards through the 2015 Regional Coastal Resilience Grant Program and NOAA Fisheries is administering the companion $4 million Coastal Ecosystem Resiliency Grants Program.

“Coastal communities around the country are becoming more vulnerable to natural disasters and long-term environmental changes,” said Holly Bamford, Ph.D., assistant NOAA administrator for NOAA's National Ocean Service performing the duties of the assistant secretary of commerce for conservation and management. “These new grant opportunities will help support local efforts to build resilience of U.S. coastal ecosystems and communities, while finding new and innovative ways to mitigate the threats of severe weather, climate change and changing ocean conditions.”

The National Ocean Service 2015 Regional Coastal Resilience Grant Program will help coastal communities and organizations prepare for and recover from adverse events while adapting to changing environmental, economic, and social conditions. The grants will be awarded to organizations to plan and implement resilience strategies regionally to reduce current and potential future risks. Proposals are due by July 24.

The NOAA Fisheries’ Coastal Ecosystem Resiliency Grants Program will focus on developing healthy and sustainable coastal ecosystems through habitat restoration and conservation. The winning proposals will demonstrate socioeconomic benefits associated with restoration of healthy and resilient coastal ecosystems, support healthy fish populations, and demonstrate collaboration among multiple stakeholders. Proposals are due by July 2.

Each grant proposal may request between $500,000 and $1 million in federal funds for the Regional Coastal Resilience Grant Program and between $200,000 and $2 million for the Coastal Ecosystem Resiliency Grants Program. Eligible funding applicants include nonprofit organizations, institutions of higher education, regional organizations, private (for-profit) entities, and local, state, and tribal governments.

Details on the grant programs can be found at the NOAA Fisheries Coastal Ecosystem Resiliency Grants webpage (http://www.habitat.noaa.gov/funding/coastalresiliency.html) and the NOAA Ocean Service Regional Coastal Resilience Grant Program webpage (http://www.coast.noaa.gov/resilience-grant/). To apply visit http://www.grants.gov/

NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.