Industry Hot News


(TNS) - A denser, taller and better-built Silverdale would make Kitsap County more resilient to major earthquakes, according to a hazard mitigation report commissioned by the Federal Emergency Management Agency.

The report urges the county to use land-use policies, property owner incentives and construction budgets to steer development patterns away from steep slopes, unstable soils, tsunami inundation zones and other areas that amplify earthquake dangers. At the same time, the county should encourage growth in Silverdale, where quake damage is expected to be lower, the report states.

"When the earthquake happens, the county is going to have to spend millions and millions of dollars to repair the hazardous areas," said Bob Freitag, co-director of the University of Washington's Institute for Hazard Mitigation Planning and Research, which produced the report. "It's in their interest — and in the interest of reducing risk to people — to begin shifting development out of those areas."



(TNS) - U.S. Rep. Suzan DelBene, D-Wash., held a forum Friday at Skagit Valley College to discuss the region’s natural hazards with local officials and first responders.

The forum discussing hazards such as landslides, earthquakes and volcanic eruptions was part of a series called “Community Preparedness & Resiliency: Planning Today for a Safer Tomorrow.”

“This is really something that’s important to all of us. The beauty of this region comes with a price. It’s a very active region,” U.S. Geological Survey Northwest Regional Director Rich Fererro said.



These days, IT employees might have a background in theater instead of computer science. They’re more likely than ever to be part-timers and short-timers instead of career public servants. Some of them are even configuring networks on a phone rather than a PC.

What, exactly, is going on here?

These are all signs of the new normal for government IT departments. Public CIOs and their staffs face disruption from almost every angle — the long-anticipated baby-boomer retirement wave is beginning to crest, younger employees have new ideas about how and where they want to work, and rapidly evolving technology is rewriting the resumes of typical tech workers.



Everyone is susceptible to a data breach, even the companies that provide the rest of us with warnings and advice about why and how breaches happen and what we should be doing to better protect ourselves.

The Verizon Data Breach Investigations Report is perhaps the most thorough examination of cyber threats conducted each year. I know I refer to it over and over again. When I first started getting emails about the Verizon breach, I honestly thought it was about this year’s report. I was wrong. These alerts were about an attack on Verizon Enterprise Solutions that compromised approximately 1.5 million customers. According to eWeek:

Attackers used a flaw in the company's Web portal for enterprise customers to steal data on its clients, Verizon said. . . . While Verizon did not confirm that 1.5 million customers were impacted, a spokesperson stressed that consumer data was not part of the breach.



Tuesday, 29 March 2016 00:00

Do you have a strong password?

SplashData recently announced the year's most-used passwords. You can follow this link to see the full list: https://www.teamsid.com/worst-passwords-2015/.

The top three were "password", "123456" and "12345678".

Passwords used to be simple six- to eight-character words, but with so many software packages able to crack simple passwords, we need to ensure that this most common security control actually works for us. Here are some basic rules. At a recent security seminar, a speaker suggested that a password needs to be at least 25 characters long to be effective (try remembering numerous 25-character passwords!).
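As an illustration only (not from the original article), here is a minimal Python sketch of the kind of length and character-variety checks a basic password policy might enforce; the 12-character minimum, the rule set and the tiny common-password list are assumptions for illustration, not a recommendation from the piece.

```python
import re

# Illustrative common-password sample, echoing the SplashData list above.
COMMON_PASSWORDS = {"password", "123456", "12345678", "qwerty", "football"}

def check_password(password, min_length=12):
    """Return a list of problems with a candidate password.
    The thresholds and rules here are assumptions for illustration only."""
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("no symbol")
    if password.lower() in COMMON_PASSWORDS:
        problems.append("appears on common-password lists")
    return problems

for candidate in ["password", "Tr0ub4dor&3", "correct horse battery staple"]:
    issues = check_password(candidate)
    print(candidate, "->", "OK" if not issues else "; ".join(issues))
```

A real deployment would check candidates against a much larger breached-password list rather than the tiny sample above.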



How would you rate the information security of government agencies in most countries? Pretty good, probably. You might not go as far as to bet that they would never, ever suffer a breach of security. Yet today’s scandals seem to concern entities in the private sector, even if there are some big names among them (Sony, Target, Anthem, to mention a few). So how about a central bank being robbed of US $81 million? And with the money being willingly transferred to the criminals’ accounts by none other than the US Federal Reserve?




An unexpected public health emergency can happen anywhere and to anyone. The right health or safety message at the right time from the right person can save lives. However, poor communication can also make an emergency situation much worse.

CDC’s crisis communicators are trained to speak to the public when the unthinkable happens to them, their families, and their communities. Crisis communicators use evidence-based communication strategies to deliver messages to help people stay safe and healthy during a disaster.

Crisis communicators work with scientists, doctors, and other experts during disasters to deliver information to people. To share information effectively, building partnerships is critical for reaching target audiences, both domestically and globally.

How CDC Crisis Communicators Work

Working with CDC scientists who are experts on disease outbreaks, natural disasters, biological threats, and more, crisis communicators determine the best way to get health messages to the people who need them.


From scientific research, we know that people process information differently during a disaster than they would otherwise do in their day-to-day lives – they respond better to simple, positive action steps. In an emergency, people need guidance as soon as it’s available, whether it’s complete or not. They need to hear the information from someone they trust, and they need information fast and often. These principles guide how messages are created during an emergency and how and when the messages are sent.

Having a crisis communication plan is critical. Although this plan will likely change as the crisis evolves, the initial outline helps to focus communication goals.

Even the best messages will be ineffective if the target audience does not receive them. Communicators must know who to talk to and how to make advice actionable. To do this, communicators rely heavily on partner engagement, develop messages for specific audiences, and use targeted messages. CDC crisis communicators focus on developing relationships with established community organizations, building relationships with spokespersons who are familiar with affected groups, and targeting at-risk populations.

Communicators use all of the information available to measure whether messages are reaching the right people at the right time. Monitoring and analyzing web traffic, social media and media coverage helps identify important information that is missing, rumors that should be addressed, and the impact that communication has on the public's response to a disaster. Communication plans are adjusted based on these analyses, and strategies are developed to improve the distribution of information to the people who need it.

CDC’s Ebola Response

In the last year and a half, CDC's emergency communication activities have focused on the Ebola outbreak in West Africa. CDC has created over 500 distinct communication materials, including infographics, tutorials, and guidance documents that help people understand how to protect themselves and their families from Ebola. These materials were developed to address various in-country needs, including the variety of languages spoken, low literacy levels, and cultural preferences. For Ebola-related communication, cultural considerations have helped us reach more people effectively. For instance, people in West Africa speak several languages, some of which are mainly spoken rather than written. Words were often replaced with pictures and illustrations to address language barriers, and radio, text messages, and social media channels were used to deliver messages.


CDC works alongside Ebola experts at home and in West Africa to adapt the content, format, and delivery of public health information. Local clinicians, both in Africa and the United States, needed guidance on how to help Ebola patients while protecting themselves, their staff, and other patients from getting sick with the virus. In West Africa, CDC provided in-person training for journalists, community leaders, and faith healers to help prepare them to protect their communities from Ebola. In the United States, CDC hosted in-person trainings, provided updates to healthcare organization networks, and conducted national-level conference calls that were attended by more than 16,000 healthcare providers and organizations.

Over the course of the Ebola response, CDC has also communicated with public health partners and reached more than 32,000 people and organizations who subscribed to the CDC Emergency Partner Newsletter. The CDC Emergency Preparedness and Response website is updated with the latest event information. CDC’s Center for Global Health has strong connections with health communicators around the world and these channels are used to reach the global public health community.

CDC crisis communicators are committed to lowering the rates of illness, injury, and death when disaster strikes by carefully crafting messages for specific audiences and delivering those messages through effective communication channels. Crisis communicators strive to make every message count.

Effective information security controls are a good defense against cyber threats. However, will that be enough to keep the doors of your company open for business over the long term?

Hiring a capable and effective information security staff, or increasing your budget to add information security jobs in your organization, may demonstrate upper management's support for keeping your company's data safe from hackers. But it may not be enough to guarantee an enterprise's ability to continually satisfy its customers.

Companies that can do that are often considered to have strong organizational resilience embedded within their corporate culture and in how they run their businesses on a day-to-day basis. Research indicates that many leadership teams agree that, to ensure lasting success, their organization must become 'resilient'.

Emergency notification systems delivered through email and phone have become an integral part of instant messaging. Colleges send out text messages warning students of serious incidents on campus, local authorities send out severe weather alerts, and police often send messages to those who have signed up to receive information during community emergencies. While the number of businesses and individuals using these types of notification systems is increasing, there have unfortunately been cases in which messages were inaccurate and caused confusion during a crisis. "Communication is key" may be starting to sound like a broken record, but effective communication really is the key to managing a crisis situation.



Here’s part two of our interview with Amir Michael, who spent most of the last decade designing servers for some of the world’s biggest data centers, first at Google and then at Facebook. He was one of the founders of the Open Compute Project, the Facebook-led open source hardware and data center design community.

Today, Michael is a co-founder and CEO of Coolan, a startup that aims to help data center operators make more informed decisions about buying hardware and make their data centers more efficient and resilient using Big Data analytics.



For some reason, bad ideas often attempt to make a comeback – typically, after enough time has passed and the very reason they were discarded or abandoned in the first place is forgotten.

Bad ideas certainly are not exclusive to popular culture; in fact, articles and case studies litter the internet documenting both public and private organizations attempting to resurrect failed models and strategies in hopes that new capabilities or use cases will finally make a particular idea just as good in practice as it was in theory or on paper.

In the wake of several high-profile, unpredictable, catastrophic incidents ("Black Swan Events") in 2012, Avalution received a number of requests from our clients to develop highly specific, scenario-based plans. Our article Planning for Every Scenario is "For the Birds" explains that Black Swan Events cannot be predicted, and advises that organizations that implement flexible response and recovery strategies, applicable to almost any type of scenario, enjoy the highest levels of success when faced with a disruptive incident.

However, the demand for scenario-based plans seems to be back.



New survey of Board members and executives worldwide sheds light on most pressing risk issues for organizations

More organizations are realizing that additional risk management sophistication is warranted given the fast pace at which complex risks are emerging, according to results of the fourth annual joint survey assessing the current risk environment by global consulting firm Protiviti and the Enterprise Risk Management (ERM) Initiative at the North Carolina State University Poole College of Management.

Released today, Executive Perspectives on Top Risks for 2016 (www.protiviti.com/TopRisks) summarizes the concerns of 535 Board members, C-suite and other top-level executives around the world and across industries. In the survey, respondents rate the significance of 27 risk issues for the coming year, spanning three risk categories: macroeconomic, strategic and operational.

Regulatory change and heightened regulatory scrutiny is the number one risk cited by survey respondents for the fourth consecutive year, highlighting its dominance on the minds of Board members and executives worldwide. The majority (60 percent) of respondents believe this risk will continue to have a significant impact on their organizations, indicating business executives remain highly concerned about the effect of the regulatory landscape on their strategic direction.



On the PC, various hardware and software resources must function in harmony in order to produce something useful. This is why smart people invented the operating system.

In the data center, you have pretty much the same resources – compute, storage, networking – except on a larger, more distributed scale. Most data centers feature any number of management systems, many of which are optimized for a particular resource or application, and these have served as the data center operating system with varying degrees of success.

But now that the data center is about to be redefined from hardware to software, and then distributed not just across a building or a campus but across town and around the world, the need for a cohesive data center operating system is becoming evident.



(TNS) — A new maritime SWAT team that can be rappel-ready by next year’s Sail Boston Tall Ships Regatta and a regional response shelter unit trained to comfort people and their pets top this year’s anti-terrorism priorities for Boston and its eight contiguous neighbors.

Other submissions were scrapped either because of cost — like Brookline’s bid for a $250,000 trailer “to support regional coordination in the event of simultaneous terrorist attacks” — or because they are not eligible for federal funding, like the Hub’s $200,000 appeal for intelligence-gathering drones.

Just two days after ISIS bombers attacked Brussels, killing 31 and injuring 260, safety, security and planning experts from Boston, Brookline, Cambridge, Chelsea, Everett, Quincy, Revere, Somerville and Winthrop agreed yesterday to a $4.56 million wish list of training and equipment needs.



(TNS) — In the wake of the terrorist attacks on the airport and subway system in Brussels, a group of lawmakers in Congress is pushing to increase funding to provide better security on the United States’ mass transit systems.

On Wednesday, 66 House Democrats urged the Homeland Security appropriations subcommittee to set aside $105 million to help local transit systems improve security. That’s $20 million more than President Barack Obama requested in his 2017 budget proposal and a drop in the bucket compared to the billions the country spends annually on aviation security.

An explosion in the Maelbeek Station on the Brussels Metro killed 20 people Tuesday. One of the suspected bombers, Khalid El Bakraoui, was killed in the suicide attack, which happened near the headquarters of the European Commission.



For any large-scale internet company, data center efficiency and profit margins are closely linked, and at scale like Google’s, data center efficiency is everything.

Designing for better efficiency is a never-ending process for Google’s infrastructure team, and since cooling is the biggest source of inefficiency in data centers, it has always gotten special attention.

“Since I’ve been at Google, we have redesigned our fundamental cooling technology on average every 12 to 18 months,” said Joe Kava, who’s overseen the company’s data center operations for the last eight years.

This efficiency chase has produced innovations in water and air-based data center cooling. The company has developed ways to use sea water and industrial canal water for cooling; it has devised systems for reclaiming and recycling grey water and for harvesting rain water.



Google’s new push into public cloud promises to accelerate the already torrid pace of business adoption and force managed service providers (MSPs) to reassert their value to customers - including some who question whether they still need third-party support.

The tech behemoth last week announced it's investing billions for more capacity and enterprise-critical enhancements, declaring its intention to seize market share in a competition now led by Amazon and Microsoft.

But the explosive proliferation of cut-rate cloud services threatens to disrupt many MSPs, with some needing to quickly add cloud offerings and others forced to resell clients on the importance of the managed services relationship in the new paradigm.

“It’s having an impact on our ability to acquire new customers because business owners think they can do this themselves,” said Joe Popper, who still works at the successful MSP he sold two months ago, after 25 years as its owner.

Aggressive marketing for products like Office 365 touts free migration and an average 20 percent savings in IT expenditure. Popper said he’s run across business owners who view the cloud as a means to cut costs by shedding monthly service subscriptions.

“'Cloud’ is one of the very few (IT) buzzwords – other than ‘Internet’ – that the layman understands,” Popper said. “The problem is that they think they can get everything out of the cloud ...  But if it blows up, they're in a world of hurt."



Monday, 28 March 2016 00:00

Biggest EMM Trends of 2015

Consumerization is quickly transforming business and IT models. Relatedly, as enterprises continue to see a rise in the here-to-stay bring-your-own-device (BYOD) movement, they need to provide their employees with secure, mobile access to apps and data on any device, across any network.

Serving as the “invisible middleman,” enterprise mobility management (EMM) gives IT and employees the tools and confidence they need to just say yes to workforce mobility.

With empowering employees to work and collaborate the way they prefer as a baseline (hearkening back to today’s trend of organizations adopting BYOD policies so people can use their own PCs and mobile devices for work), Citrix recently polled customers that deployed EMM in the cloud using XenMobile last year. Comprehensive insight can be seen in this infographic.



Ahmad Wani remembers Oct. 8, 2005. It was the day when, at 8:50 a.m. Pakistan Standard Time, a magnitude 7.6 earthquake struck his home in Kashmir, killing more than 70,000 people and displacing another 4 million. He recalls the devastation clearly, the homes left in shambles, the shortages of food and water, the many lives torn asunder in the course of seconds.

“Having been one of the lucky few who lived through the disaster, I could see rescue authorities going around trying their best to rescue people,” Wani said. “However, the scale of the disaster was so large that they couldn't identify who needed to be rescued first, and what the priorities for rescue were.”

First responders were burdened by a lack of proper tools to coordinate efforts and clear blocked roadways to extricate victims. The resulting pandemonium of the aftermath — and the national and international earthquakes that followed — led Wani, along with his fellow Stanford University alumni Nicole Hu and Timothy Frank, to create One Concern. The startup aspires to be one of the first to use artificial intelligence to save lives through analytical disaster assessment and calculated damage estimates. Wani said that with the platform, emergency operations centers (EOCs) can receive instant recommendations on response priorities and other insights to dispatch resources effectively.

One Concern’s efforts to pioneer machine learning services for state and local agencies have earned it a spot in Government Technology's GovTech100, a list of noteworthy companies to watch in the public-sector IT market.

Wani, who serves as One Concern’s CEO, elaborated on his startup’s origins and how the company is progressing after its recent beta launch.



Happy Near Miss Day! In 1989 an asteroid roughly the size of a mountain came within 500,000 miles of colliding with Earth. Geophysicists estimate that the impact of that asteroid—had it actually collided with Earth—would have released energy equivalent to the explosion of a 600-megaton nuclear bomb. Astronomers didn’t discover the asteroid—or evidence of how close the near miss was—until nine days after the asteroid had passed.

Just as we did not know about the potential doom of the asteroid until it was too late, many organizations are already compromised and simply haven’t discovered it yet. The average time it takes to detect a breach after attackers have infiltrated a network is somewhere in the 200-day range. That is an exceptionally long time to be oblivious to a threat that already exists inside your network.

In the movie Armageddon, there is a great line from Billy Bob Thornton’s character, who explains why NASA wasn’t able to detect a massive asteroid on a collision course with Earth. His character explains to the President that although NASA’s collision budget is $1 million, it only allows the organization to track 3 percent of the sky. He apologizes, saying, “Begging your pardon, sir, but it’s a big-ass sky.”



Whistleblowing has negative connotations in many organizations but, if encouraged by management and handled sensitively, it can be an important tool for business continuity and risk management. David Honour explains why…

Every day, in every organization, corners are cut, mistakes are made and risky behaviour takes place. Most of this is of little consequence, but sometimes the risk-taking becomes systemic, with consequences that threaten corporate reputation and even business survival.

Normally this everyday risk-taking does not come to the attention of senior managers; it is seen only by those who work with the risk-takers on a daily basis. These risks and their potential impacts are also invisible to the organization’s business continuity team. They don’t appear in the risk register and don’t become visible during the business impact analysis. But they exist. And they can have serious consequences. So the question is: how do you gain visibility of such risks once they reach a systemic level or when they threaten safety and security systems?



Thursday, 24 March 2016 00:00

BCI: Threats to the supply chain


Theft, conflict, weather and labour unrest are all disrupting our supply chains and costing tens of billions of dollars each year, according to the Supply Chain Risk Exposure Evaluation Network published by BSI.

Nearly $23 billion was lost due to cargo theft worldwide in 2015 from a variety of supply chain threats, predominantly driven by security concerns. South Africa has seen a 30 per cent increase in cargo truck hijackings over the last year, with thieves using high levels of violence and switching from targeting only high value goods to also targeting lower value items. More sophisticated attacks were observed in India throughout 2015, where criminal gangs masterminded new techniques to steal goods without breaking customs seals in order to avoid detection – a major risk for companies participating in international supply chain security programs.

In Europe, disruptions in trade caused by the ISIS terrorist group clearly highlighted the link between terrorism and the supply chain. Border controls in France following the November attacks in Paris are estimated to have cost the Belgian shipping industry $3.5 million. Elsewhere, the Jordanian trucking industry suffered $754 million in lost revenue since conflict began in the Middle East in 2011.

In addition to theft, business continuity-related threats such as extreme weather events and political and social unrest, led to significant losses for individual companies and national economies last year. 2015’s top five natural disasters caused a collective $33 billion of damage to businesses.

Labour unrest and factory strikes have also caused considerable financial damage across the world. Factory strikes in China increased by 58.3% over the previous year, largely driven by pay disputes as factory owners struggled to pay workers in a slowing economy, leading to protests. The withholding of wages was cited as a major cause in 75% of protests and generated losses of up to $27 million in the footwear industry.

Numerous cases of child and forced labour were exposed in 2015, highlighting the need for visibility into corporate supply chains to mitigate the risk of human rights abuses. Nearly 80% of Argentina’s textile industry was found to be sourcing from unregulated facilities, where forced labour, child labour and poor working conditions are common. BSI also noted an increase in the risk of child labour use in India due to loopholes in labour reforms approved in 2015.

Supply chain resilience is a real concern for business continuity professionals, with the latest Horizon Scan Report published by the Business Continuity Institute once again featuring it as one of the top ten threats: half of the respondents to a global survey expressed concern about the possibility of a disruption. It is easy to see why when you consider the BCI’s Supply Chain Resilience Report, which identified that nearly three quarters of respondents had suffered a supply chain disruption during the previous year and that 14% had experienced losses in excess of €1 million as a result.

Jim Yarbrough, Global Intelligence Program Manager at BSI commented: “Companies are facing an increasingly wide range of challenges to their supply chain, from human rights issues to acts of violent theft and natural disasters. Such complexity creates extreme levels of risk for organizations, both directly affecting the bottom line but perhaps more seriously, hidden threats to the supply chain which, if ignored, could do serious harm to a company’s hard-earned reputation.

The biggest threats to the global supply chain in 2016 include:

  1. Global cargo theft cost estimated to grow by a further $1 billion in 2016
  2. Continued tensions in South China Sea predicted to lead to further protests and disruptions
  3. On-going conflict in Syria will continue to impact supply chains
  4. ISIS is predicted to remain a significant threat to disrupt supply chains
  5. Labour unrest in China is predicted to persist, as a slowdown in the Chinese economy continues and more jobs move to neighbouring countries
  6. Weather disruptions e.g. La Nina phenomenon
  7. Global health crises e.g. Zika and Ebola

Thursday, 24 March 2016 00:00

Chief Compliance Officers are at Risk

Following the release of Thomson Reuters’ Personal Liability Report, Corporate Compliance Insights’ CEO, Maurice Gilbert, delved deeper into the findings with the report’s co-author, Stacey English, Head of Regulatory Intelligence at Thomson Reuters. Ms. English graciously provides her insight into both the role CCOs can play in mitigating personal liability risk and the future of the compliance profession.

Maurice Gilbert: What steps can a Chief Compliance Officer take to manage personal liability risk?

Stacey English: Personal liability for Chief Compliance Officers (CCOs) has grown alongside the liability of other senior managers in financial services firms. There are a range of measures CCOs can take to identify, manage and mitigate their rising personal liability. CCOs should ensure that their job descriptions are documented in up-to-date detail, covering exactly what their role entails and how those obligations are discharged. They also need to maintain an appropriate suite of robust evidence demonstrating that their regulatory obligations have been fully discharged.

CCOs are at the forefront of not only maintaining communications with all relevant regulators, but also tracking regulatory changes – including considering and learning the lessons from regulatory announcements in ways that shape the nature of regulatory expectations and associated personal liability. All relevant regulatory information needs to be – and to be seen to be – considered. This includes supranational or cross-border regulatory changes, the lessons to be learned from enforcement actions against firms undertaking similar business activities and any messages from speeches and other regulatory publications.



The world tends to repeat cycles, and the current attention to deficient audits is reminiscent of the years ushering in the Sarbanes-Oxley Act of 2002. However, this time it is different, as regulators are proactively scrutinizing external auditors and imposing enforcement actions against audit committee chairs.

In a January 25, 2016 speech, Andrew Ceresney, Director of the SEC’s Division of Enforcement, stated: “audit committee members who fail to reasonably carry out their responsibilities, and auditors who unreasonably fail to comply with relevant auditing standards in their audit work, can expect to be in our focus.” Regulators are increasingly holding gatekeepers of the financial reporting process more accountable. According to Mr. Ceresney, the SEC has recently brought cases against three audit committee chairs who either approved public filings they knew were reckless or should have known to be false because of other information available to them. He also mentioned that the SEC settled high-profile cases in recent months against two national audit firms and individual audit partners from those firms pertaining to false and misleading unqualified audit opinions. These are the first cases against national audit firms for audit failures since 2009.

While it is important to understand the regulators’ viewpoints and efforts, the levers for extracting value from the external audit process largely rest with the company. With the audit season concluded for companies whose fiscal year-ends are synced to the calendar year, now is an ideal time to assess the value of the external audit process. First, let’s visit some recent regulatory developments.



(TNS) - Claims have been filed seeking more than $8.4 million in reimbursement for property damage cited by the city of Joplin as a result of the 2011 tornado.

Leslie Haase, city finance director, presented a recap of storm rescue and recovery details Monday night to the City Council.

"As we are approaching the five-year anniversary, this is probably a good time for it," she said.

An anniversary observance is planned with a number of events starting May 13 and culminating May 22.



DLL, a global provider of leasing and finance solutions, created a comprehensive Business Continuity Management (BCM) program with the help of Strategic BCP and its ResilienceONE software. The program replaced a decentralized approach in which the company’s headquarters and most of its 35 global offices had their own Business Continuity (BC) plans, focused primarily on IT recovery, and there was no organization-wide contingency plan for BC in the event of a disaster or other work stoppage.

Each office appoints managers to facilitate planning and testing based on need. Larger facilities—those with a larger business impact—have more extensive plans than smaller ones. Each office also has BC coordinators who report to management. They test these plans at least once a year. Offices document critical business functions and identify components—including people and technology—that support their functions.



Qualcomm Ventures joined existing investors in a recent $27 million funding round for Stratoscale, an Israeli data center software company.

Stratoscale announced the new capital raise on Wednesday. This is the third round of funding for the company but the first time Qualcomm Ventures has participated. Over the past three years, Stratoscale has raised more than $70 million.

Stratoscale makes data center management software that’s hardware agnostic. The company claims its software allows information technology to grow and respond to real-time demands with greater ease and control.



What is a crisis?

A time of intense difficulty or danger. 

The diagram below shows what happens to a standard business operation: moving from left to right, we see normal operations, and then an incident happens. Crisis management doesn’t have to be triggered by a sudden incident such as a fire or a terrorist attack. It can be something more focused on your business. You may see an incident coming; for example, you may know there is going to be three feet of snow the following morning, so you may want to get your crisis management team together to work out what to do.



Mesosphere, the datacenter operating system startup working at the nexus of hyper-scale application development and cloud-based delivery of micro-services, announced a $73.5 million "strategic" investment round led by Hewlett Packard Enterprise and joined by new investor Microsoft Corp.

San Francisco-based Mesosphere also on Thursday (March 24) released the latest version of its Marathon container orchestration tool and a new continuous application development tool called Velocity. Marathon and Apache Mesos form the basis of the startup's flagship Datacenter Operating System (DCOS) designed to deploy and scale technologies such as Docker containers and Apache Spark.

Lak Ananth, managing director of Hewlett Packard Ventures, touted DCOS as nothing less than "the most exciting new enterprise operating system since Linux." The enterprise IT giant (NASDAQ: HPE), new investor Microsoft (NASDAQ: MSFT) along with other new and existing Silicon Valley venture firms are adding to Mesosphere's coffers which have swelled to $126 million in three funding rounds.



Majority of SMBs report that IT downtime could result in catastrophic loss

Small and medium sized businesses are increasingly focused on disaster recovery readiness, with as many as 84% noting that several days of IT downtime would result in moderate to catastrophic costs and loss. To make sure they are covered in the event of a disaster, more than half (53%) rely on more than one DR method to 'hedge their bets' to ensure recoverability. These results are according to a new study by Zetta.

Even while SMBs recognize the vital nature of their DR and business continuity strategy, they fail to monitor and ensure the security of their DR solutions. One-third (33%) of the IT professionals surveyed reported that they rarely test their DR plan, while an astounding 62% test their DR plan only once a year or less. Even more surprising, as many as 13% of IT professionals admit that their organization doesn't have a technical disaster recovery strategy, and another 14% report that they don't have offsite DR protection, making them vulnerable to an onsite disaster.

When asked about the top three most important factors in a DR solution, cost was cited by 65% of respondents, followed closely by speed of recovery (61%), reliability (60%) and simplicity (60%). Usability (54%) was identified more as a 'nice to have' than a top priority.

The latest Horizon Scan Report published by the Business Continuity Institute revealed that SMBs are no different to larger organizations when it comes to the main threats they face. In both cases the top three threats in the same order were: cyber attack, data breach and unplanned IT and telecom outages.

"SMBs have made increasing investments into their virtual infrastructure to save on costs and resources, yet DR strategies to protect those virtualized workloads are still falling behind in comparison," added Grossman. "The cloud is delivering impressive new options for these organizations with a strong combination of reliability, recovery speed and attractive pricing. As a result, organizations are able to efficiently assure disaster preparedness to avoid the increasing fear of a catastrophic loss."

(TNS) - The world must brace for the threat of further attacks in the wake of this morning's bloodshed in Brussels, terror experts said today.

 "We're all at risk" of Brussels turning into a coordinated multi-city, multi-country attack, said former Boston Police Superintendent-in-Chief Daniel Linskey, now managing director of the global investigations and securities firm Kroll.

"I wish it wasn't true, but it's out there. They've already talked about it, and us not talking about it won't change that," said Linskey.

At least 34 people have been reported dead in three attacks, which rocked the airport and bus terminal in the Belgian capital during this morning's rush hour. In response, Belgium has activated its highest threat level, essentially going into a lockdown.



Now, before I start, I just want to say that I’m sure there are a thousand ways to design an info sec department structure. I’m just going to cover off an example of how it might be set up.

I'm writing this because at one point in my life I assumed that having a single information security manager in a business was enough (or that even if the role was just bolted on to another job, it would do!). This is clearly not the case. I now see some of the complexities involved and I wanted to share them with those who might be as oblivious as I was!



If you think you know what Big Data is going to be like based on the volume of today’s workflows, well, to coin a phrase, “you ain’t seen nothin’ yet.”

The fact is that with the sensor-driven traffic of the Internet of Things barely under way, the full data load that will eventually hit the enterprise will be multiple orders of magnitude larger than it is today, and much of it will be unstructured and highly ephemeral in nature, meaning it will have to be analyzed and acted upon quickly or it loses all value.

The good news is that much of the processing will be done at the edge, where it can be leveraged for maximum benefit without flooding centralized resources. But a significant portion will still make it to the data center or the data lake, which means the enterprise will need to implement significant upgrades to infrastructure throughout the distributed data environment, and soon.



In the software-defined data center (SDDC), all elements of the infrastructure, such as networking, compute, servers and storage, are virtualized and delivered as a service. Virtualization at the server and storage levels is a critical component of the journey to an SDDC, since it enables greater productivity through software automation and agility while shielding users from the underlying complexity of the hardware.

Today, applications are driving the enterprise – and these demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. The problem is that in a world that requires near instant response times and increasingly faster access to business-critical data, the needs of tier 1 enterprise applications such as SQL, Oracle and SAP databases have been largely unmet. For most data centers the number one cause of these delays is the data storage infrastructure.

Why? The major bottleneck has been I/O performance. Despite the fact that most commodity servers already cost-effectively provide a wealth of powerful multiprocessor capabilities, most sit parked and in idle mode, unexploited. This is because current systems still rely on device-level optimizations tied to specific disk and flash technologies that don’t have the software intelligence that can fully harness these more powerful server system technologies with multicore architectures.



Wednesday, 23 March 2016 00:00

The Future of Disk Storage

Taller hard drives with multiple actuator arms supplied in multi-drive packages — these are just a few of Google's suggestions as it calls for a complete rethink of storage disk design.

In a white paper called "Disks for Data Centers," published last month, the company gives some hints as to how the hard disk drive might evolve in the coming years.

There's a need for change, the white paper asserts, because the fastest-growing use case for hard drives is for mass storage services housed in cloud data centers. YouTube alone requires 1 million GB of new hard drive capacity every day, and very soon cloud storage services will account for the majority of hard drive storage capacity in use, it says.



Two days before Christmas the lights went out across the Ivano-Frankivsk region of Ukraine. As many as 225,000 customers lost power, the result of coordinated cyberattacks on three power grids.

The hackers tricked utility employees into downloading malware – BlackEnergy – that was linked to Russian spy agencies and that had been used to probe power companies across the world, including those in the U.S. On attack day they remotely shut off current to about 60 substations, inserted new code that blocked staff from reconnecting and even “phone bombed” the companies’ switchboards to discombobulate employees rushing to get power flowing again.

The Ukrainians claimed it was the first time a power grid had been knocked out by hackers and quickly pointed a finger at Russia. Robert M. Lee was skeptical. In the midst of preparing for a Christmas wedding in Alabama, the ex-cyberwarfare Air Force officer needed proof. There had only been two known destructive attacks on critical infrastructure. He and several colleagues in the U.S. cyber community coordinated with contacts inside Ukraine to recover malware from the network. Lee was the first person to report about the malware after reviewing the public information and analyzing the grid’s control systems. It was soon apparent: This was the real deal, though Lee shies away from blaming Russia. “What surprised me is the bold nature of it. … It was so coordinated. All the stuff we’ve seen before looked like intelligence. This looked like military. That’s kind of alarming.”



Businesses typically put a great deal of time and resources into customer communications, from elaborate public relations plans to customer surveys. But when it comes to internal communications—those formal and informal means by which employers communicate with staff—communication is often taken for granted. Employees have a critical impact on the outcome of every project, as well as the overall success of your business. Unfortunately, it’s easy for an organization’s leaders to fumble the ball when attempting to improve employee communications.

Here are 10 things your workforce probably wishes you knew about communicating with them:



South Asia struggles to build resilience

1.4 billion people in South Asia, 81% of the region’s population, are acutely exposed to at least one type of natural hazard and live in areas considered to have insufficient resources to cope with and rebound from an extreme event, according to a new study by Verisk Maplecroft.

The research also highlights a lack of resilience to hazards across the region, especially in India, Pakistan and Bangladesh where governments have struggled to translate record levels of economic growth into improved resilience against natural hazards, leaving investors open to disruption to economic outputs, risks to business continuity and threats to human capital.

South Asian nations lag behind the world’s leading economies when it comes to mitigating the worst impacts of natural hazards. The Natural Hazards Vulnerability Index, which assesses a country’s ability to prepare for, respond to, and recover from a natural hazard event, rates Japan (183) and the U.S. (173) as ‘low risk,’ while China (126) is considered ‘medium risk’. In comparison, the weaker institutional capacity, financial resources and infrastructure of Bangladesh (37), Pakistan (43) and India (49) mean they are rated ‘high risk,’ leaving organizations under greater threat if a significant event occurs.

The data identifies flooding as one of the most substantial risks to communities and business in South Asia. In India alone, 113 million people, or 9% of the population, are acutely exposed to flood hazard, with a further 76 million exposed in Bangladesh and 10 million in Pakistan. Indeed, heavy monsoon rain during November and December last year sparked record flooding in South India, which cost the country upwards of US$3 billion and displaced more than 100,000 people.

Adverse weather has slowly been dropping down the ranked threats in the Horizon Scan Report published by the Business Continuity Institute, but is still considered to be a concern by over half (55%) of the business continuity professionals who responded to a global survey. Meanwhile, earthquake/tsunami is considered a concern by nearly a quarter (25%).

“This data highlights the scale of the task facing governments and business in mitigating the threats to populations and workforces from natural hazards in these high-risk regions,” states Dr James Allan, Director of Environment at Verisk Maplecroft. “With overseas investment pouring into the emerging Asian markets, companies have an increasing responsibility to understand their exposure and work with governments to build resilience.”

Google builds some of the largest, most sophisticated, and energy efficient data centers in the world. Unfortunately for data center professionals who don’t work for the company, Google data centers are closed to them.

Today at the first user conference for Google’s cloud infrastructure services in San Francisco, the company launched a 360-degree video tour of one of its data centers. The conference, called Google Cloud Platform Next, is where the company’s top management are attempting to make the case to the industry that Google is not only serious about enterprise cloud but plans to lead in the space, currently dominated by Amazon Web Services and, in a distant second position, Microsoft Azure.

The company also announced at the event the launch of two new cloud data centers, in Oregon and Tokyo, and plans to launch 10 more between now and the end of 2017.



(TNS) -- NYPD Commissioner William Bratton said on national television Wednesday that a deadly terrorist attack like the one in Brussels could happen in New York City.

“Certainly. It can happen anywhere in the world,” Bratton said in a live interview on CBS This Morning.

Although intelligence efforts and preventive measures are important deterrents, living in a free society exposes cities and metropolitan areas to such terrorist attacks, both Bratton and John Miller, NYPD’s deputy commissioner for intelligence and counterterrorism, said in the live interview.

Bratton called large metropolitan areas like New York City “soft targets” for terrorists.



The deadly terrorist bombings in Brussels this week have elicited an outpouring of support for the victims and for Belgium, along with renewed rage and consternation regarding ISIS. These are predictable reactions. What these acts also elicited, I’ve noticed, are numerous comments from many outlets that the attacks were not surprising.

The BBC, in fact, said the bombings were “not a surprise,” and security experts chimed in with similar assessments. Even Belgians themselves admit that the attack wasn’t shocking—Prime Minister Charles Michel lamented that “what we feared has happened.” Think about how much has changed in less than a generation. Now, when the capital of the EU and NATO becomes a war zone, many react as though this is business as usual.

When it comes to political violence and warfare, we (or at least Western Europe) are living in a brave new world. In fact, research I’ve conducted in recent weeks for a RIMS Executive Report on political risk confirms how much the paradigm has changed. Political risk experts I interviewed have been emphasizing this point. “I think it is truly a distinctive point in world affairs,” said one. Another confessed, “I’ve been doing this for nearly 20 years, and this is by far the most unstable, tenuous, deteriorating…risk environment I’ve ever seen.”



Tuesday, 22 March 2016 00:00

12 Steps to Make ERM a Team Sport

Unless ERM is treated as a team sport, with the company Board fully “on board,” the company will flounder when:

  • It is overwhelmed with other issues,
  • Unfamiliar risks related to specific situations occur, or
  • The sheriffs in the C-Suite who formerly interacted with the ERM designee Board member view the political risks of pointing out the “175-pound gorillas” in the room as too costly.

This puts one’s business at risk of a 175-pound gorilla growing into the proverbial 800-pound gorilla or, even worse, into 800 dead rats. In the blink of an eye and with brute strength, that dreaded multimillion-dollar roof comes crashing down.

In real life, there are never enough resources vis-à-vis people, money or time needed to take advantage of the myriad opportunities to solve all of the problems rapidly piling up on one’s desk. So, how does one increase Board and organization involvement in integrating enterprise risk management into the corporate DNA? And where can the right person be found to assist in reaching that goal?



The concept of biometrics is not new. International super spies have been accessing top secret information with fingerprint and retina scans, voice recognition, and other biometric methods in movies and TV shows for decades. Things like facial recognition and fingerprint scanning have finally made their way to mainstream devices used by average consumers, though, which raises the question of whether or not they provide adequate protection—or if they are more or less secure than the traditional username and password.

Better Than Nothing

It may not be a very convincing argument, or a compelling endorsement of biometrics, but biometric security is better than nothing.

There is an individual who has reached out to me a couple of times—Hitoshi Kokumai. He believes that biometric authentication that substitutes for a traditional password or PIN is inherently less secure than the password or PIN itself. He created a short video explaining that biometric authentication provides a false sense of security and results in “below-one factor authentication.”



(TNS) - James Young parked his pickup where the floodwaters lapped at Allie Payne Road in Orange and grabbed his kayak out of the bed.

Then, he paddled his way home.

This has been Young's routine for several days.

He wades through ankle-deep water in his flip-flops before climbing into his blue kayak and setting off down the Sabine. It's the only way in and out of his neighborhood.



The enterprise is eager to implement private and hybrid clouds even though full public infrastructure is likely to be less costly, more scalable and more flexible. At the same time, organizations are looking to supplement legacy virtual resources with advanced container platforms in support of broad service- and microservice-based data environments.

Clearly, there must be a way to bring all of these technologies together so that everyone is happy.

Microsoft is looking at containers as a key opportunity to draw more enterprise workloads to its Azure cloud. The company is close to releasing the next version of Windows Server, which features Hyper-V container technology to provide a distributed environment that the enterprise can use to deploy and manage self-contained virtual environments both on-premises and in the cloud. The company is rather late to the container game, as it was with the virtual machine, but its reach into legacy data environments is considerable, and many organizations will no doubt find it appealing to suddenly gain the ability to pool container services across hybrid clouds simply by upgrading their existing server environment.



One of the S&R team’s newest additions, Principal Analyst Jeff Pollard comes to Forrester after many years at major security services firms. His research guides client initiatives related to managed security services, security outsourcing, and security economics, as well as integrating security services into operational workflows, incident response processes, threat intelligence applications, and business requirements. Jeff is already racking up briefings and client inquiries, so get on his schedule while you still can! (As a side note, while incident response is generally not funny, Jeff is. He would be at least a strong 3 seed in a hypothetical Forrester Analyst Laugh-Off tournament. Vegas has approved that seeding.)


Prior to joining Forrester, Jeff served as a global architect at Verizon, Dell SecureWorks, and Mandiant, working with the world's largest organizations in financial services, telecommunications, media, and defense. In those roles he helped clients fuse managed security and professional services engagements in security monitoring, security management, red teams, penetration testing, OSINT, forensics, and application security.



Software-defined networking. Network functions virtualization. Virtual storage. These are the new buzzwords of the channel today. But are these trends actually as novel as they seem? Viewed from an historical perspective, not really.

SDN, NFV and scale-out storage — which we can collectively call software-defined everything, or SDx if you like acronyms — offer lots of benefits for data centers and the cloud. They abstract operations from underlying infrastructure, making workloads more portable, scalable and platform-agnostic. They also create new opportunities for building more secure infrastructure. And they can lower costs by letting you get next-generation functionality out of cheap commodity hardware.

It seems pretty certain that software-defined everything is the wave of the future. From Docker containers to carrier-grade SDN projects like ONOS, these technologies are progressing rapidly through the development and adoption stages and into production use. Some of them are not there yet, but they’re on the way.



Predicting the future of cybersecurity is a big deal in the security world. Every year, experts will put out their predictions of the biggest cybersecurity threats for the coming year. Sometimes, they actually get it right.

The folks at the Information Security Forum (ISF) have taken a longer-range view in their predictions with their Threat Horizon 2018 report (yes, you read the year correctly). The report contains three themes that we should be preparing for: technology adoption dramatically expands the threat landscape; the ability to protect is progressively compromised; and governments become increasingly interventionist. In a formal release, Steve Durbin, managing director of the ISF, stated:

We predict that many organizations will struggle to survive as the pace of change deepens.  Therefore, at least until a conscious decision is taken to the contrary, these three themes should appear on the radar of every organization, regardless of size.



Timing, as comedians say, is everything. It’s true if you’re on stage entertaining an audience. It’s also true if you’re trying to recover from IT disaster situations involving multiple systems that pass data between one another. In complex configurations with different intercommunicating systems running CRM, ERP, business intelligence, and process integration, one system going down can have a global impact on data consistency. The correct use of the recovery consistency objective or RCO, and understanding of timing issues, can help you recover data and consistency in the best way possible.

A quick refresher on RCO may be in order. Arithmetically, it is defined as: RCO = 1 - (number of inconsistent systems / total number of systems).

A little mental juggling of the possibilities shows that RCO can vary between a maximum of one (totally consistent) and a minimum of zero (totally inconsistent).
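
As a quick worked example (with hypothetical numbers, not figures from the article), the formula drops straight into a few lines of Python:

    # Recovery Consistency Objective (RCO) sketch, using the formula above:
    # RCO = 1 - (number of inconsistent systems / total number of systems)

    def rco(inconsistent_systems, total_systems):
        """Return the RCO for a landscape of interlinked systems."""
        if total_systems <= 0:
            raise ValueError("total_systems must be positive")
        return 1 - (inconsistent_systems / total_systems)

    # Hypothetical landscape: CRM, ERP, BI and an integration bus (4 systems),
    # of which only the BI system came back with stale data after recovery.
    print(rco(inconsistent_systems=1, total_systems=4))  # 0.75

In that made-up case, an RCO of 0.75 says that three quarters of the interlinked systems recovered to a mutually consistent state.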



Sometimes risk analysis can result in paralysis. Finding your risk tolerance and applying it to specific situations requires a nuanced approach.

I am always wary of anyone who tells me categorical rules – e.g. we do not do business in Russia because it is too risky. In this era of oversimplification, such statements border on intellectual dishonesty.

A careful approach to risk analysis always involves a cost benefit framework. Compliance is not a function that is dedicated to identifying risk and avoiding all potential risks. Compliance is part of an overall cost benefit risk analysis.



Cloud services are becoming increasingly common amongst businesses of all shapes and sizes, allowing for increased productivity while reducing on-premises infrastructure costs. As a result, Microsoft Office 365 and Exchange Online are now a common pairing that offers a hosted platform through which organisations can access their business data from anywhere in the world, whether from a workstation or a mobile device.

Whilst there’s no denying that there are a number of benefits when moving to Office 365, it’s important to identify the risks involved and consider any changes that may be required to your business practices. Here are four areas that your organisation should definitely be considering when making the move to a cloud-based platform or hybrid solution:



Sometime in the early 2000s, Amir Michael responded to a Craigslist ad for a data center technician job at an unnamed company. He applied, and the company turned out to be Google. After years of fixing and then designing servers for Google data centers, Michael joined Facebook, which was at the time just embarking on its journey of conversion from a web company that was running on off-the-shelf gear in colocation data centers to an all-custom hyperscale infrastructure.

He was one of the people who led those efforts at Facebook, designing servers, flying to Taiwan to negotiate with hardware manufacturers, doing everything to make sure the world’s largest social network didn’t overspend on infrastructure. He later co-founded the Open Compute Project, the Facebook-led effort to apply the ethos of open source software to hardware and data center design.

Today, he is the founder and CEO of Coolan, a startup whose software uses analytics to show companies how effective their choices of data center components are and helps them make more informed infrastructure buying and management decisions.



Tuesday, 22 March 2016 00:00

Why IT Needs To Tear Down Data Silos

The hype cycle sure is in full swing when it comes to the importance of data. We see headlines declaring that data is the new oil. We hear analysts talk about how to hire a good data scientist. The taunts of our peers echo off the hallways: "If you don't hire a chief data officer, you must not be one of the cool kids!" Well, ok, maybe things haven't gone quite that far yet. But still.

Let's get one thing straight here. Data is nothing new for IT. It is, after all, the "information" part of information technology. Yet we spend so much time contemplating data science, analytics, and data lakes. My observation is that very little time is spent on the basics of data.

Marcus Suer, curator of the #CIOchat Twitter chat, recently asked, "Does your enterprise get tangible business value from the data or business intelligence that you provide?"

It's a good question, perhaps the most important question about data. If we don't provide business value, why are we gathering data or engaging in analytics at all?



Tuesday, 22 March 2016 00:00

CDC: Zika, Mosquitoes, and Standing Water

Zika, Mosquitoes, and Standing Water

With spring weather and mosquito season coming soon in the United States, the Zika virus – and the mosquitoes that carry the virus – may be a major concern. Zika is currently affecting more than 30 countries and territories in the Americas and Pacific Islands. Zika virus is primarily spread through the bite of an infected Aedes aegypti mosquito. People and communities can take steps to reduce the number of mosquitoes in their homes and communities to protect themselves from Zika.

How Does Water Help Mosquitoes Breed?

Aedes aegypti is known as a “container-breeding mosquito” because it likes to lay eggs in and around standing water. Studies show that female mosquitoes prefer to lay eggs in water that collects or is stored in manmade containers.

Water-filled bioassay trays were used in these studies to attract resident female mosquitoes to deposit their eggs; the larvae were then collected from the trays.

Aedes aegypti mosquitoes lay eggs on the walls of water-filled containers. Eggs stick to containers like glue and remain attached until they are scrubbed off. The eggs can survive when they dry out—up to 8 months. When it rains or water covers the eggs, they hatch and become adults in about a week.

Reduce mosquitoes at home

Here are a couple of steps you can take to prevent mosquitoes from living and breeding around your home.

Remove standing water

Keep mosquitoes from laying eggs inside and outside of your home. Items in and around people’s homes can collect water. Once a week, empty and scrub, turn over, cover, or throw out containers that hold water, such as

  • vases
  • pet water bowls
  • flowerpot saucers
  • discarded tires
  • buckets
  • pool covers
  • birdbaths
  • trash cans, and
  • rain barrels.

These actions can help reduce the number of mosquitoes around areas where people live.

Follow safe water storage tips

If water must be stored, tightly cover storage containers to prevent mosquitoes from getting inside and laying eggs.

Reduce mosquitoes in the community

Communities also can take steps to reduce the number of mosquitoes and the chances of spreading disease.

Build systems that distribute safe water

If people have access to clean and safe water in their communities, they will not need to store it in and around their homes. Research has shown that when community-wide distribution systems are built, the number of mosquitoes decreases, because water is not being stored near areas where people live.

Improve sanitation

When water is contaminated with organic matter (for example, human or animal waste, grasses, and leaves), the chances that mosquito larvae will survive may increase because contaminated matter provides food for larvae to eat. Sanitation departments and wastewater treatment plants remove organic wastes and treat water with chlorine or other disinfectants. These activities may decrease mosquito populations and, simultaneously, prevent diarrheal diseases.

Water, sanitation, and hygiene* (WASH) are critical to keeping people healthy and preventing the spread of many different diseases, including Zika. World Water Day recognizes the importance of safe drinking water and improved sanitation and hygiene in the health of our world’s population.

*Basic sanitation includes access to facilities for the safe disposal of human waste, and the ability to maintain hygienic conditions, through services such as garbage collection, industrial/hazardous waste management, and wastewater treatment and disposal.

Learn more about World Water Day at www.unwater.org/worldwaterday and visit www.cdc.gov/healthywater/global for more information about CDC’s efforts to ensure global access to improved water, sanitation, and hygiene.

For more information on the Zika virus, and for the latest updates, visit www.cdc.gov/zika.

Monday, 21 March 2016 00:00

‘Data Centre Factory’ explained

The Challenge

Organisations rely more and more on IT to enable their business strategies to grow and change. As a result, they need an IT infrastructure that is more powerful, flexible and cost-effective than ever before. Today’s businesses need systems that are permanently available, providing ubiquitous access and delivering fast, flexible responses to their fast-changing business requirements.

To accomplish this, data centres need to exploit existing enterprise resources in a more efficient way and thereby become more flexible in order to react faster. To meet these challenges, organisations need to build a unique, network-based data centre infrastructure that combines the traditional server, storage and networking infrastructure to better support emerging business applications.



Monday, 21 March 2016 00:00

Bridging the Seismology Gap

Lucy Jones received one of the most prestigious awards for a federal employee last year to the surprise of no one. The United States Geological Survey (USGS) seismologist earned the Samuel J. Heyman Service to America Medal in the citizen services category for her work in seismic research.

That award came largely because of her work in 2014 in Los Angeles Mayor Eric Garcetti’s office for a project that aimed to shape public policy toward the imminent Big One the city faces. The work resulted in the Resilience by Design report and important legislation. Garcetti called Jones’ work with Los Angeles groundbreaking in the way it bridges the gap between seismic science and public action. Jones announced that her last day with the USGS is March 30.

What’s the Resilience by Design program?

Probably the most important part of the water plan actually was the creation of what we call the Resilience by Design program within the Department of Water and Power.

There is a full-time person in charge of seismic resilience for the system developing these retrofit projects and evaluating new projects as they come forward. 

The commitment is to a future of seismic-resilient pipes. The path to that is going to be starting with a network of hardened arteries. So they’re setting up priorities for replacement to maximize the network and get water to as much of the city as possible.



(TNS) - At a San Bernardino County department operations center activated in the wake of the Dec. 2 terrorist attack, a decision was made to marshal extra ambulances from Riverside County.

Three teams of five ambulances were assembled — one was sent to Redlands, another to Rancho Cucamonga and the third remained in Riverside on standby.

Given the uncertainty about what was really happening, the thought was there might be a secondary or even tertiary event, said Tom Lynch, EMS administrator for the Inland Counties Emergency Medical Agency, which coordinates emergency medical response between ambulances and hospitals in case of an emergency.



(TNS) - When the Big One hits, one of the safest places for your child will be in school. Under 1933’s Field Act, public K-12 schools and community colleges are built to a higher standard than virtually any other building in the state.

“Our buildings are less likely to fail” in the event of a major earthquake, said Jill Barnes, coordinator of Emergency Services for the Los Angeles Unified School District. “Although that said, we do still evacuate and inspect the buildings before we let the students back in.”

Students and staff are also taught what to do in case of a major earthquake.



Organizations ill-equipped to manage the risks posed by third parties

Almost nine in 10 (87%) organizations have faced a disruptive incident involving third parties in the last three years, with incidents including loss of data by a third party, or failure to deliver a service or product on time. The new study by Deloitte also highlighted the increasing frequency and impact of these disruptions, illustrating the significant need for organisations to invest in better governance and risk management related to third parties.

Of the organisations surveyed, nearly all (94.3%) felt low to moderate levels of confidence in the tools and technology currently used to manage their third party risk, with similar sentiment expressed in supporting risk management processes (88.6%). At the same time, 73.9% of respondents believe that third parties will play a highly important, or even critical, role in the year ahead.

Ensuring business continuity across your supply chain is a part of ensuring business continuity within your own organization and being able to manage through disruptive events. The latest Supply Chain Resilience Report published by the Business Continuity Institute noted that almost three quarters of organizations had experienced a supply chain disruption during the previous year and that half of those disruptions occurred below the tier one supplier.

Kristian Park, partner and global head of third party governance and risk management at Deloitte, commented: “With reliance on third parties set to grow, now is the time to address the ‘execution gap’ between risk and readiness. The impact of third party incidents ranges from reputational damage, regulatory and data breaches, through to actual lost revenue and future business. Increasing frequency of third party incidents, some high profile, has driven a shift in motivation of organisations to improve their risk management.”

Last year, Deloitte calculated that fines issued directly from third party failure have ranged from £1.3m to £35m, reaching £650m for those firms operating internationally and subject to global regulation. As the market value of a company is impacted by such fines, shareholders could incur losses of up to 10 times the fine with an average share price drop in the region of 2.55%.

Kristian Park adds: “The good news is that third party risk management is starting to feature consistently in board-level discussions. Our latest survey found over half (51.1%) of respondents were united in their ambition to have integrated third party risk management systems in place in the year ahead. Rolling out common and unified standards remains a challenge as businesses are increasingly decentralised. Encouragingly, though, 86% of those surveyed have already started.”


Are you an IT employee? If so, you’re probably aware of PagerDuty and DataDog (or similar alert monitoring services). In fact, you may even be the person that gets the calls when something goes terribly wrong in your cloud infrastructure or data center.

What if you could automate self-healing scripts and turn on sirens and alarms when PagerDuty detects an incident?
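
One way to picture this, purely as a sketch: alerting services can call a webhook when an incident fires, and a small listener can kick off a remediation script or trip an alarm. The route, payload field names and restart script below are illustrative assumptions, not PagerDuty’s documented webhook schema, so adapt them to your own integration.

    # Minimal webhook listener that reacts to monitoring alerts (Python/Flask).
    # The route, payload fields and remediation command are assumptions made
    # for illustration, not a documented PagerDuty schema.
    import subprocess
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/alerts", methods=["POST"])
    def handle_alert():
        event = request.get_json(silent=True) or {}
        if event.get("event_type") == "incident.triggered":    # assumed field name
            service = event.get("service", "unknown")           # assumed field name
            # Self-healing step: restart the affected service (hypothetical script).
            subprocess.run(["/usr/local/bin/restart-service.sh", service], check=False)
            # A physical siren or light could be toggled here via your own alarm API.
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)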



BATON ROUGE, La. – State and federal emergency management officials encourage Louisiana flood survivors to begin repairs as soon as they can.

Flood survivors do not need to wait for a visit from the Federal Emergency Management Agency or their insurance company to start cleaning up and make repairs. FEMA inspectors and insurance claims adjusters will be able to verify flood damage even after cleaning has begun.

It’s important for survivors to take photographs of damage and keep recovery-related receipts. Insurance companies may need both items, while FEMA may need receipts.

Survivors should check for structural damage before entering their homes and report any damage to local officials. They should also immediately throw away wet contents like bedding, carpeting and furniture because of health issues that may arise with mold.

Emergency management officials encourage survivors to register for FEMA assistance as soon as they can. They only need to register once and only one registration is allowed per household. Once registered, survivors should keep in touch with FEMA and update contact information if it changes.

FEMA assistance may help eligible homeowners and renters pay for a temporary place to stay, make repairs or replace certain damaged contents.

Individuals can register online at DisasterAssistance.gov or by calling toll-free 800-621-3362 from 7 a.m. to 10 p.m. daily. Multilingual operators are available.

Survivors who are deaf, hard of hearing or have a speech disability and use a TTY may call 800-462-7585. Survivors who use 711 or Video Relay Service or require accommodations while visiting a center may call 800-621-3362.

FEMA assistance is not taxable, doesn’t need to be repaid and doesn’t affect other government benefits.

Those who are referred to the U.S. Small Business Administration should complete and return the application for a low-interest disaster loan. It is not required to accept a loan offer but returning a completed application is necessary for FEMA to consider survivors for certain forms of disaster assistance.


We urge everyone to continue to use caution in areas where floodwaters remain. Monitor DOTD’s www.511la.org website for updated road closure information. Look for advisories from your local authorities and emergency managers. You can find the latest information on the state’s response at www.emergency.la.gov. GOHSEP also provides information on Facebook and Twitter. You can receive emergency alerts on most smartphones and tablets by downloading the new Alert FM App.  It is free for basic service.  You can also download the Louisiana Emergency Preparedness Guide and find other information at www.getagameplan.org.

Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status.  If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.  Follow us on Twitter at https://twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov.

The U.S. Small Business Administration (SBA) is the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps businesses of all sizes, private non-profit organizations, homeowners and renters fund repairs or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property. These disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations. For more information, applicants may contact SBA’s Disaster Assistance Customer Service Center by calling (800) 659-2955, emailing disastercustomerservice@sba.gov, or visiting SBA’s Web site at www.sba.gov/disaster. Deaf and hard-of-hearing individuals may call (800) 877-8339.

High school students thinking about a college education and career in the cybersecurity field may want to begin preparing now.

There are numerous programs to help high schoolers learn about cybersecurity, gain experience for potential summer internships, and enhance college applications.

Hacker Highschool

Hacker Highschool provides a set of free hands-on, e-book lessons designed specifically for teens to learn cybersecurity and critical Internet skills. These are lessons that challenge teens to be as resourceful and creative as hackers with topics like safe Internet use, web privacy, online research techniques, network security, and even dealing with cyber-bullies. The full program contains teaching materials in multiple languages, physical books with additional lessons, and back-end support for high school teachers and home schooling parents.

The non-profit ISECOM researches and produces the Hacker Highschool Project as a series of lesson workbooks written and translated by the combined efforts of volunteers worldwide. The result of this research is a series of books based on how teens learn best and what they need to know to be better hackers, better students, and better people.



The Information Commissioner’s Office really had no choice but to come down hard on the National Offender Management Service recently, after a portable hard drive used to back up the prisoner intelligence database went missing from a prison security department.
Nobody knew exactly when it had gone missing. It had last been used on 18 May 2013 for the weekly backup and, contrary to policy, had not been locked in the fireproof safe afterwards; when staff went to run the backup six days later, it was gone, meaning it could have been missing for almost a week. Not only was it missing, it was also unencrypted and had not been password protected. It contained sensitive information about almost 3000 prisoners, including names and dates of birth, length of sentence, offence(s), physical descriptions and distinguishing marks, plus intelligence information about drug use and links to other prisoners or organised crime - certainly not the sort of details you’d want to lose.
Although only nine staff members had access to the area where the hard drive was used and the area was controlled by a keypad system, the door to the Security Department could be opened by anyone on the prison staff.

Last week, I wrote about Jessica Kriegel, senior organization development consultant at Oracle, who argues that generational stereotypes, like the widespread notions we’ve all read about millennials as entitled, tech-savvy, structure-averse job-hoppers, are harmful to workplace fairness and productivity. In my interview with Kriegel, I also drilled down on the issue of stereotyping as it pertains to IT professionals, which warrants further discussion here.

I found Kriegel, a millennial herself, to be persuasive in her argument, which she makes in her new book, “Unfairly Labeled: How Your Workplace Can Benefit from Ditching Generational Stereotypes.” I also found her to be refreshingly candid. She didn’t miss a beat, for example, in responding to my question about what sorts of generational stereotyping she has found to be most common within Oracle:

I can only speak to the groups that I have worked with. I was brought in to work with the product development team. The managers were basically saying that millennials could not easily transition from college to corporate. They felt like millennials were bringing the college campus style to the corporate atmosphere. So I was brought in to resolve that issue—to teach the millennials how to be more professional, more corporate, and less casual and college-like. That manifested itself in many ways. Some of it had to do with dress code; some of it had to do with productivity; some of it had to do with expectations with regard to work/life balance.



Businesses competing on data must be masters of change. To keep pace with constantly shifting business models, markets, and customer expectations, companies must become more agile, which includes empowering employees with insights that are available at their fingertips.

Self-service analytics is one of the tactics separating industry leaders from laggards.

In today's world, "self-service" is no longer synonymous with passively consuming static reports pre-packaged by IT. It's more about building one's own reports, exploring data, and interacting with it.



Last month I presented at “Cyber Security Exchange Day,” hosted by the folks at Bryant University and OSHEAN. It was a great event, filled with lots of discussion about what’s happening in the world of cyber security and how the threat landscape is evolving and impacting all forms of IT.

Although the cloud is being more widely adopted, cloud security remains a top concern among enterprise IT professionals. In recent years, news headlines have been filled with enough stories about compromised data security to drive executives away from networked and cloud solutions and back to the proverbial days of stuffing cash in a mattress.

However, while these high-profile news stories drive much of the narrative around data security, the reality is that the vast majority of network security attacks are far more basic in nature. It’s important for organizations to recognize that threats to a computing environment are always present, and that they need to take a more practical approach to managing real, rather than simply perceived, threats.



Craig Huitema and Soni Jiandani blogged about Cisco’s latest ASIC innovations for the Nexus 9K platforms, and IDC did a write-up and video. In this blog, I’ll expand on one component of those innovations: intelligent buffering. First, let’s look at how switching ASICs may be designed today. Most switching ASICs are built with on-chip buffer memory and/or off-chip buffer memory. The on-chip buffer size tends to differ from one ASIC type to another, and, obviously, it is limited by die size and cost. Some designs therefore use off-chip buffer to complement the on-chip buffer, but this may not be the most efficient way of designing and architecting an ASIC or switch. That leads to another critical point: how the switch ASIC handles TCP congestion control, and how buffering affects long-lived TCP flows as well as incast/microburst traffic. Incast is a sudden spike in the amount of data going into a buffer because many sources send data to one output simultaneously; examples include IP-based storage, where an object may be spread across multiple nodes, and search queries, where a single request may fan out to hundreds or thousands of nodes. In both scenarios TCP congestion control doesn’t help, because the burst happens too quickly for it to react.

In this video, Tom Edsall summarizes this phenomenon and the challenges behind it.
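
To get a feel for why incast overwhelms a shallow buffer long before TCP congestion control can react, a back-of-the-envelope calculation helps. The numbers below are made-up round figures for illustration, not Nexus 9K specifications.

    # Back-of-the-envelope incast sketch: many senders answer one request at
    # once and the combined burst lands on a single output port's buffer.
    # All figures are hypothetical round numbers.
    senders = 200                        # nodes answering a scatter/gather request
    response_bytes = 64 * 1024           # 64 KB reply per node
    link_bytes_per_sec = 10e9 / 8        # 10 Gb/s links, in bytes per second

    burst = senders * response_bytes
    port_buffer = 2 * 1024 * 1024        # hypothetical per-port share of on-chip buffer

    arrival_time = response_bytes / link_bytes_per_sec   # replies arrive in parallel
    drain_time = burst / link_bytes_per_sec              # the output port drains serially

    print(f"burst {burst / 1e6:.1f} MB vs buffer {port_buffer / 1e6:.1f} MB")
    print(f"arrives in ~{arrival_time * 1e6:.0f} us, drains in ~{drain_time * 1e3:.1f} ms")
    # The whole burst shows up in tens of microseconds, far too fast for TCP
    # congestion control to react, so whatever exceeds the buffer is dropped.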



DENTON, Texas – Cleaning up after a flood? FEMA has some suggestions:
•    Check for damage. Check for structural damage before re-entering your home. If you suspect damage to water, gas, electric or sewer lines, contact authorities.
•    Remove wet contents immediately. Wet carpeting, furniture, bedding and anything else holding moisture can develop mold within 24 to 48 hours. Clean and disinfect everything touched by floodwaters.
•    Tell your local officials about your damages. This information is forwarded to the state so state officials have a better understanding of the extent of the damages.
•    Plan before you repair. Contact your local building inspections or planning office, or your county clerk’s office to get more information on local building requirements.
•    File your flood insurance claim. Be sure to provide: the name of your insurance company, your policy number and contact information. Take photos of any water in the house and damaged personal property. Make a detailed list of all damaged or lost items.
There are also questions about when Federal Assistance is available after a disaster. In simple terms, here’s the process:
A disaster happens. Local officials and first responders respond. These officials see that their communities need assistance in dealing with it. They ask the state for help. The state responds. Sometimes, the state sees that the response is beyond its resources. That’s when the state reaches out to FEMA for assistance.
Typically, before asking for a Major Disaster declaration, the state asks for a Preliminary Damage Assessment. This is done by teams composed of state and federal officials. They arrive in the disaster-damaged area, and local officials show them the most severely damaged areas that they can access.
Among the items considered are:
•    The amount of damage
•    How widespread the damages are, and the number of insured and uninsured properties involved
•    Special needs populations
•    Other disasters the state may be working.
Governors use this information to decide whether to request a disaster declaration. Once a governor decides to request a declaration, it is processed as quickly as possible.
If the President decides there’s a need, he signs a Major disaster declaration for either Individual Assistance, Public Assistance or both, for designated counties.
Individual Assistance means:
Individuals and business owners may be eligible for rental assistance, grants for repairs, or low-interest loans from the U.S. Small Business Administration (SBA) for damages to uninsured or underinsured property.
Public Assistance means:
Government entities and certain private non-profit agencies may be eligible to be reimbursed for the cost of repairs to uninsured or underinsured facilities, as well as some costs for labor and materials.
If there is a Major Disaster declaration, survivors may register for assistance at www.disasterassistance.gov, or by calling 1-800-621-3362 or (TTY) 1-800-462-7585.
The Preliminary Damage Assessment teams often take photographs of damaged areas. After a Major Disaster declaration, photographs of your damages are accepted as documentation, in addition to your receipts.

FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow us on Twitter at http://twitter.com/femaregion6 , and the FEMA Blog at http://blog.fema.gov. 

LOGICnow announced today the integration of the MAX Backup and Disaster Recovery offering into a comprehensive security solution, and its remote management software for managed services providers.

The backup and disaster recovery tool is part of the LOGICnow Layered Security suite, which can be operated as a software-as-a-service (SaaS) offering from the firm’s MAX Remote Management platform.

The technology is described as a holistic approach that allows IT professionals to get systems up and running “within minutes” after a data loss or cyber attack, and helps companies regain access to locked data following a ransomware disruption.



Software-defined data centers were originally met with suspicion. In fact, two years ago, “software-defined” anything was largely considered marketing hype. According to experts, only organizations with pre-existing homogeneous environments could take advantage of it. Times change. Today, the software-defined data center (SDDC) is transforming the service provider (SP) industry.

Nonetheless, service providers still face challenges, and these issues remain thorny:

  • Resource constraints: There are not enough qualified cloud professionals to meet demand for services. This talent shortage, along with the complexity of the technology, puts a crimp in the SP's ability to provide customers with innovative solutions that effectively differentiate, compete, and attract new business.
  • Pricing pressures: IT has always been challenged to demonstrate ROI.  Even organizations with aligned IT and business capabilities continue to look for ways to reduce data center costs.
  • Service Level Agreements: There is unrelenting pressure on solution providers to strengthen SLAs across the board just to hold onto their existing customers. It can be difficult for SPs to see a way out of this conundrum, which makes partnering with a vendor that provides the right technical solutions, training, and support critical to expanding their opportunities.



Friday, 18 March 2016 00:00

BCI: Don't be held for ransom

Don't be held for ransom

The threat of ransomware is growing rapidly, with 43% of IT consultants in a survey by Intermedia reporting that they have had customers fall victim to an attack. Some 48% saw an increase in ransomware-related support inquiries, while 59% of respondents expect the number of attacks to increase this year.

A ransomware outbreak creates two hard choices for businesses: Either spend multiple days recovering locked files from backups (which may be old outdated versions), or pay a ransom to an organized crime syndicate who will then be incentivized to launch further attacks.

In both scenarios, organizations are likely to face significant user downtime that overshadows the cost of the ransom. The 2016 Crypto Ransomware Report revealed that 72% of infected business users could not access their data for at least two days following a ransomware outbreak, and 32% lost access for five days or more. As a result, experts observed significant data recovery costs, reduced customer satisfaction, missed deadlines, lost sales and, in many cases, traumatized employees.

Richard Walters, SVP of Security Products at Intermedia, stated, “In the age of ransomware, what matters is how quickly employees are able to get back to work. Traditional backup and file sharing solutions are increasingly inadequate when it comes to addressing this growing concern, putting businesses at risk. Modern business continuity solutions that combine real-time backup, mass file restores and remote access combat threats by minimizing the crippling effects of downtime.”

The report also noted that ransomware should no longer be seen solely as a threat to individuals and small businesses. Nearly 60% of businesses hit by ransomware had more than 100 employees, and 25% were enterprises with more than 1,000 employees.

IT consultants are not the only group to express fears about such attacks with the cyber threat featuring high in the Business Continuity Institute's latest Horizon Scan Report. This report revealed that cyber attacks and data breaches are the top two threats according to business continuity professionals, with 85% and 80% of respondents to a survey expressing concern at the prospect of one of them occurring. It was only recently that the BCI published an article regarding a US hospital that had fallen victim to a ransomware attack, and had to pay up in order to access their data again.

Felix Yanko, President at Technology & Beyond, added, “As business IT consultants, we receive an astounding number of customer queries about suspicious emails and pop-ups. The world is becoming more cyber-aware, but ransomware’s depravity keeps it three steps ahead. CryptoLocker, for instance, will take down multiple offices in one sweep, should it infect a shared server. Trying to restore from ransomware attacks off traditional back-ups, businesses usually lose weeks of work due to lost files, plus a day or more of downtime while computers are wiped and reloaded. Companies must have the right security measures in place to mitigate the devastation of ransomware.”

Friday, 18 March 2016 00:00

Tech Trends That We Don’t Talk About

I was asked to do a presentation on trends for a large number of IT folks. Of course, concepts like hyper-converged computing, analytics, mobile, hybrid cloud, and the move away from passwords (finally) came to mind. But at the time, I also happened to be reading a book called Moonshot (I recommend it, by the way) by ex-Apple CEO John Sculley, in which he talks about trends that really don’t have that much to do with servers, services, processors, systems or networking gear.

Let’s talk about some of the trends behind the trends this week.



A crisis can occur at any time, anywhere. Every year, dozens of organizations find themselves in the news for the wrong reasons, whether it be because of a natural disaster, terrorist attack, or a scandal. Ensuring your staff is trained in crisis communication before an incident occurs could save your company and its image when an emergency situation strikes.

Every organization in any field can be susceptible to a media crisis without the proper preparation in crisis communication. Luckily, Kathryn Holloway and her company, Press Alert, have been helping hundreds of firms prepare for such crises by media-training their spokespeople, and now she’s here to help you! Kathryn joined us recently for a webinar where she uses relevant current events to outline a list of best practices for interacting with the press during a crisis.




As demonstrated in events like the 2009 H1N1 influenza pandemic and the Ebola response of 2014, children can be particularly vulnerable in emergency situations. Children are still developing physically, emotionally, and socially and often require different responses to events than adults. With children ages 0 to 17 representing nearly a quarter of the US population, addressing the specific needs of children when planning for natural, accidental, and intentional disasters has become a national priority.

Collaboration is Key


To practice preparedness among first responders, CDC and the American Academy of Pediatrics (AAP) joined forces to host a tabletop exercise on responding to an infectious disease threat at the federal, state, and local levels. Pediatric clinicians and public health representatives within federal region VI (i.e., the “TALON” states of Texas, Arkansas, Louisiana, Oklahoma, and New Mexico) worked in teams to develop responses to a simulated outbreak of pediatric smallpox. Representatives collaborated to identify potential disease contacts, develop plans for Strategic National Stockpile countermeasure distribution, and communicate effectively with other health leaders to meet pediatric care needs. Children tend to have different exposure risks, need different doses of medications, and have more diverse physical and emotional needs than adults during a public health emergency. This training exercise served as a model to increase the focus on the unique needs of children in emergency preparedness and response activities.

Bringing health professionals from different backgrounds together demonstrated how building connections during public health emergencies can improve response efforts and save lives. The day-long exercise gave participants the opportunity to see different problem-solving skills and unique viewpoints that other responders brought to the scenario.

One participant in the exercise, Curtis Knoles, MD, FAAP, commented, “The exercise gave a good understanding of next steps we need to take; identify all the players involved with the pediatrics community and get them tied into the state department of health.”


Practice like the Pros at Home

While the tabletop exercise focused on emergency planning and response on a broad level, there are many ways you can practice keeping your children safe during an emergency, too. Check out the resources below for ideas on how you can keep your family prepared!

  1. Make creating your emergency kit fun—let your kids pick out some snacks and games to include! Be sure to have a kit at home and in the car!
  2. Get your kids involved with emergency preparedness with Ready Wrigley games, coloring pages, and checklists
  3. Make and practice plans for where to go and how to communicate in case of an emergency

Thursday, 17 March 2016 00:00

Understanding the Value of Data

The enterprise has been sitting on a goldmine of valuable information for several decades now, but only recently has it had access to the technology to pull it all together and make sense of it. This is leading to a shift in the way organizations value both data and infrastructure – with data becoming increasingly important to the business model, while distributed cloud architectures and commodity hardware diminish the significance of infrastructure.

But raw data is like unrefined ore: There is potential there, but first it must be retrieved, cleaned, refined and then delivered to those who find it most desirable. For that, you need a top-notch data management platform.

According to a recent study by Veritas, many organizations are still squandering the value of data simply by not having a full understanding of what they have and how it can be utilized. More than 40 percent of data, in fact, hasn’t been accessed in three years. In some instances, this is due to compliance and regulatory issues, but in many cases it can be traced to improper management. Once data enters the archives, it tends to be lost forever even though it may still have value to present-day processes. As well, developer files and compressed files make up about a third of all stored data, even though the projects they supported are long gone. There is also a significant amount of orphaned data, unowned and unclaimed by anyone in the organization, and this is becoming increasingly populated with rich media files like video chats and graphics-heavy presentations.
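
A simple way to get a first read on how much of your own unstructured data has gone cold is to walk a file share and total up bytes by last-access time. The sketch below uses a placeholder path and a three-year threshold that mirrors the figure cited above; access times can be unreliable on volumes mounted with noatime, so treat the result as an approximation.

    # Rough sketch: estimate how much data has not been accessed in ~3 years.
    # ROOT is a placeholder path; atime may be unreliable on noatime mounts.
    import os
    import time

    ROOT = "/mnt/fileshare"                  # placeholder
    THRESHOLD = 3 * 365 * 24 * 3600          # roughly three years, in seconds
    now = time.time()

    stale_bytes = total_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue
            total_bytes += st.st_size
            if now - st.st_atime > THRESHOLD:
                stale_bytes += st.st_size

    if total_bytes:
        print(f"{stale_bytes / total_bytes:.0%} of stored bytes untouched for ~3 years")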



Thursday, 17 March 2016 00:00

Building Resilience, City by City

With escalating risks and uncertainty around the globe, cities are challenged with understanding and circumventing those risks to stay vital. Much as in the business world, municipalities are moving towards resilience—the capability to survive, adapt and grow no matter what types of stresses are experienced.

Recognizing that they have much to offer each other, communities and businesses are often working together to pool their experience and knowledge. Helping to foster this is a project called the 100 Resilient Cities Challenge, funded by the Rockefeller Foundation. The project has selected 100 cities around the world and provided funding for them to hire a chief resilience officer.

“Resilience is a study of complex systems,” said Charles Rath, president and CEO of Resilient Solutions 21. He spoke about resilience and his experiences with the 100 Resilient Cities Challenge at the recent forum, “Pathways to Resilience,” hosted by the American Security Project and Lloyd’s in Washington, D.C. “To me, resilience is a mechanism that allows us to look at our cities, communities, governments and businesses almost as living organisms—economic systems that are connected to social systems, that are connected to environmental systems and fiscal systems. One area we need to work on is understanding those connections and how these systems work.”



Active shooter incidents have become an increasingly significant threat in healthcare and hospital environments. According to a study conducted by the FBI titled Workplace Violence: Issues in Response, healthcare employees experience the largest number of Type 2 active shooter assaults (assaults on an employee by a customer, patient, or someone else receiving a service). [1] Also, in a 12-year study conducted by Johns Hopkins, hospital-based active shooter incidents in the United States increased from 9 per year in the first half of the study to 16.7 per year in the second half. [2]

Because of the increased active shooter risk that healthcare and hospital facilities face, it is crucial for decision-makers to integrate active shooter preparedness into their workplace violence prevention policy and to provide reality-based training and resources for their staff. Of equal importance is an emergency response procedure and communication strategy. Shooting incidents are unique in hospitals and healthcare settings and they require a clear, concise communication action plan.



It’s a common phrase you have probably heard throughout your career: A crisis management plan is a living document. It’s a reminder that any crisis plan should be updated continually to reflect a business, its employees and the threats that might impact normal operations.

However, in practice, ensuring your plan is current, and your team is up-to-date, requires a significant investment in time and patience, and can be downright challenging. But if your company makes crisis management a priority, it is possible.

Here are three key ways to ensure your plan and team are always up to date:




A breach a day is the new norm. In the past 12 months there have been a number of high-profile breaches. Take Sony, for example: it lost control of its entire network, and the hackers released feature-length movies onto torrent sites for people to download freely. This was very high profile at the time and incredibly damaging. TalkTalk had all of its customer information dumped onto the internet for everybody to use. The Xbox gaming network was hit over the Christmas period by a distributed denial of service attack – the hackers just wanted to do it for the fun of it! Famous political figures have also had their public profiles very notably defamed.

These hacks happen every day. A breach a day is the new norm.



DUPONT, Wash. – Washington suffered its worst wildfire season in state history in 2015. Raging fires burned more than one million acres of public and private lands. After two straight years of record-breaking wildfires, vast areas of the state face a much greater risk of flash flooding, debris flow and mudslides. But a team effort by all levels of government aims to reduce those threats to public safety.

The team—called the Erosion Threat Assessment/Reduction Team (ETART)—was formed by the Washington Military Department’s Emergency Management Division (EMD) and the Federal Emergency Management Agency (FEMA) after the Carlton Complex Fire of 2014. A new ETART was formed in October 2015 following the federal disaster declaration for the 2015 wildfires.

ETART participants include EMD, FEMA, the U.S. Army Corps of Engineers, the National Weather Service, the Confederated Tribes of the Colville Reservation, the Washington State Conservation Commission, the Washington State Department of Natural Resources, the Spokane, Okanogan and Whatcom conservation districts, and many others.

Led by the Okanogan Conservation District, ETART members measured soil quality, assessed watershed changes, identified downstream risks and developed recommendations to treat burned state, tribal and private lands.

“Without vegetation to soak up rainwater on charred mountainsides, flash floods and debris flows may occur after a drizzle or a downpour,” said Anna Daggett, FEMA’s ETART coordinator. “ETART brings together partners to collaborate on ways to reduce the vulnerability of those downstream homes, businesses and communities.”

Besides seeding, erosion control measures may include debris racks, temporary berms, low-water crossings and sediment retention basins. Other suggestions may include bigger culverts, more rain gauges and warning signs, and improved road drainage systems.

While public health and safety remains the top priority, other values at risk include property, natural resources, fish and wildlife habitats, as well as cultural and heritage sites.

“ETART addresses post-fire dangers and promotes collective action,” said Gary Urbas, EMD’s ETART coordinator. “With experienced partners at the table, we can assess and prioritize projects, then identify potential funding streams to fit each project based on scale, location and other criteria, which may lead to a faster and more cost-effective solution.”

Since the major disaster declaration resulting from wildfire and mudslide damages that occurred Aug. 9 to Sept. 10, 2015, FEMA has obligated more than $2.9 million in Public Assistance grants to Washington. Those funds reimburse eligible applicants in Chelan, Ferry, Lincoln, Okanogan, Pend Oreille, Stevens, Whatcom and Yakima counties, as well as the Confederated Tribes of the Colville Reservation, for at least 75 percent of the costs for debris removal, emergency protective measures, and the repair or restoration of disaster-damaged infrastructure.

After the 2014 Carlton Complex Fire, FEMA provided $2.4 million in Public Assistance grants specifically for ETART-identified projects. Those grants funded erosion control measures that reduced the effects of the 2015 wildfires—such as installing straw wattles, clearing culverts and ditches of debris, shoring up breached pond dams, and seeding and mulching burned lands.

FEMA also offers fire suppression grants, firefighter assistance grants, Hazard Mitigation Grants and National Fire Academy Educational Programs.

Affected jurisdictions, landowners and business owners continue to submit requests for grants, disaster loans, goods, services and technical assistance from local, state and federal sources to recover from the wildfires, protect the watersheds or reduce the risks associated with flooding and other natural hazards.

ETART recently issued its final report, which details its methodology, assessments, debris-flow model maps, activities and recommendations. Completed activities include:

  • Compiled and shared multi-agency risk assessments across jurisdictions through a public file-sharing site.

  • Developed and disseminated an interagency program guide to assist jurisdictions seeking assistance.

  • Transitioned ETART to a long-term standing committee to address threats, improve planning, and resolve policy and coordination issues that may thwart successful response and recovery efforts related to past fires and potential future events.

The “2015 Washington Wildfires Erosion Threat Assessment/Reduction Team Final Report” is available at https://data.femadata.com/Region10/Disasters/DR4243/ETART/Reports/. Visitors to this site may also access “Before, During and After a Wildfire Coordination Guide” developed by ETART.

More information about the PA program is available at www.fema.gov/public-assistance-local-state-tribal-and-non-profit and on the Washington EMD website at http://mil.wa.gov/emergency-management-division/disaster-assistance/public-assistance.

Additional information regarding the federal response to the 2015 wildfire disaster, including funds obligated, is available at www.fema.gov/disaster/4243.

A few short decades ago, safety planning was not considered a priority for the vast majority of corporations. Instead, most incidents and emergencies were handled as they occurred, as effectively as possible given the limited technology resources available at the time.

Today, the workplace health & safety department has evolved into something else entirely: it is now a must-have element of any corporation seeking to maximize occupational health and safety.

To fully understand the importance of corporate safety planning—and to glimpse how much it has changed our modern work environment—you only need to take a quick look at how far it’s come. Let’s take a look at how workplace safety programs have evolved, as well as what worked—and what didn’t:



One of the more frustrating aspects of analytics is the amount of time it takes to put data in a format that makes it useful. By some estimates, manually making data accessible to an analytics application can consume as much as 80 percent of an analyst’s time. Given the salary analysts command, the cost of prepping data can be considerable.

IBM today announced a partnership with Datawatch under which it will resell Datawatch Monarch, a self-service tool that enables end users to automate much of the data preparation work associated with running an analytics application. In this instance, IBM intends to provide access to Datawatch Monarch to end users making use of the IBM Cognos and IBM Watson Analytics services delivered via the cloud.

Datawatch Monarch makes it possible for an end user to automatically have all the data in a file turned into rows and columns that can be easily consumed by an analytics application. It also makes it possible to join dissimilar data, all of which can be reused across the organization.
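
The general pattern behind this kind of self-service preparation is easy to illustrate outside any particular product: pull a messy export into tidy rows and columns, then join it to a second source. The sketch below uses generic pandas calls with invented file and column names; it is not Datawatch Monarch’s API.

    # Generic data-preparation sketch with pandas (not the Datawatch Monarch API).
    # File names and column names are hypothetical.
    import pandas as pd

    # 1. Pull a messy export into rows and columns.
    orders = pd.read_csv("orders_export.csv", skiprows=2)    # skip report header lines
    orders.columns = [c.strip().lower().replace(" ", "_") for c in orders.columns]
    orders["order_date"] = pd.to_datetime(orders["order_date"], errors="coerce")
    orders["amount"] = pd.to_numeric(orders["amount"], errors="coerce")
    orders = orders.dropna(subset=["order_date", "amount"])

    # 2. Join dissimilar data: enrich with customer attributes from another system.
    customers = pd.read_excel("crm_customers.xlsx")
    prepared = orders.merge(customers, on="customer_id", how="left")

    # 3. Hand the tidy result to whatever analytics tool comes next.
    prepared.to_csv("prepared_orders.csv", index=False)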



In recent years, more and more cybersecurity incidents have taken place as a result of insecure third-party vendors, business associates and contractors. For example, the repercussions of the notorious Target breach from a vulnerable HVAC vendor continue to plague the company today. With sensitive data, trade secrets and intellectual property at risk, hackers can easily leverage a third party’s direct access into a company’s network to break in.

While such incidents may cause significant financial and reputational harm to the first-party business, there is hope. Regulators are introducing a growing number of legal requirements that an organization must meet with respect to third-party vendor risk management. As liability and regulations take shape, it is important to assess whether your company currently employs a vendor risk management policy and, if not, to understand how a lack of due diligence poses a significant risk to your organization’s overall cybersecurity preparedness.

A vendor management policy is put in place so an organization can tier its vendors based on risk. A policy like this identifies which vendors put the organization most at risk and then specifies which controls the company will implement to lessen this risk. These controls might include rewriting all contracts to ensure vendors meet a certain level of security or implementing an annual inspection.
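
In practice, the tiering step often boils down to a weighted score rolled up into bands. A minimal sketch follows, with made-up criteria, weights and thresholds purely for illustration:

    # Minimal vendor-tiering sketch: score each vendor on a few risk factors
    # and band the result. Criteria, weights and thresholds are illustrative only.
    WEIGHTS = {"handles_sensitive_data": 40, "network_access": 35, "no_recent_audit": 25}

    def vendor_tier(vendor):
        score = sum(weight for factor, weight in WEIGHTS.items() if vendor.get(factor))
        if score >= 60:
            return "Tier 1 - high risk: contractual security terms plus annual inspection"
        if score >= 30:
            return "Tier 2 - moderate risk: security questionnaire plus periodic review"
        return "Tier 3 - low risk: standard contract language"

    # Hypothetical vendor with direct network access and no recent audit.
    hvac_vendor = {"handles_sensitive_data": False, "network_access": True, "no_recent_audit": True}
    print(vendor_tier(hvac_vendor))   # Tier 1 - high risk ...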



Thursday, 17 March 2016 00:00

Agility Trumps Cost in the Cloud

Most cloud experts will tell you that the real advantage of shedding static, legacy infrastructure is not the cost savings, but the enhanced agility. By quickly and easily developing new applications and pushing them out, organizations can craft a more responsive and compelling experience to customers, which should translate into higher sales.

But even after the cloud environment has been deployed, this doesn’t happen by itself. The enterprise needs to make sure that cloud functionality exists across the data environment and that business managers know how to leverage the flexibility and agility that the new service-based infrastructure offers.

One of the ways to do this, of course, is rapid deployment and configuration of resources. But as Google and others are quick to point out, the goal is not simply to deploy a new environment and let it run but to constantly configure and reconfigure resources to produce optimal results with the lowest consumption. Google’s new Custom Machine Types feature supports this level of functionality by offering sub-minute configuration changes, which provide the twin benefits of highly accurate load balancing and the ability to quickly change underlying resources like compute and memory to meet shifting data requirements. Essentially, it gives the enterprise what it wants when it wants it, with only a fraction of the complexity that usually accompanies infrastructure change management.



Cybersecurity is finally getting the attention it requires, but based on recent studies I’ve seen and conversations I’ve had, organizations have a long way to go to create a security posture that matches reality.

AttackIQ CEO Stephan Chenette wrote a blog recently that discussed the fact that most organizations are unaware of their security posture until they suffer a breach or are alerted by a third party. Yet, he wrote:

In the face of ever-increasing numbers of attacks, the average enterprise deploys 75 distinct security products (1), receives more than 17,000 alerts per day (2), and spends an average of $115 per employee on security (3). As an industry, we are getting into a cycle of buying more security technologies and then hiring more security engineers to manage those technologies. We need to get a handle on our capabilities sooner rather than later.



NORTH LITTLE ROCK – Federal assistance is being offered to help Arkansas communities rebuild infrastructure to higher, more disaster-resistant standards and state officials are encouraging local governments to take advantage of that funding.

The assistance to communities is part of the aid that became available following the severe storms, tornadoes, straight-line winds, and flooding that occurred from Dec. 26, 2015, to Jan. 22, 2016.

“Generally, the federal Public Assistance program restores disaster damaged infrastructure to pre-disaster conditions,” said John Long, federal coordinating officer for the Federal Emergency Management Agency. “But when cost effective and technically feasible, it makes sense to rebuild to higher standards that can prevent future loss. FEMA makes available the funds to do so.”

FEMA’s Public Assistance program provides federal funds to reimburse a minimum of 75 percent of the costs for removing debris, conducting emergency protective measures and repairing levees, roads, bridges, public utilities, water control facilities, public buildings and parks. Mitigation funding may be considered in each project category.

Eligible applicants may include:

  • state agencies
  • local and county governments
  • private nonprofit organizations that own or operate facilities that provide essential government-type services

"Studies show that every $1 paid toward mitigation saves an average of $4 in future disaster-related costs,” said State Coordinating Officer Scott Bass of the Arkansas Department of Emergency Management Agency. "By adding mitigation money to repair costs, our goal is to reduce or eliminate damages from future disasters.”

As part of the process for applying for federal assistance, experts from ADEM and FEMA help identify projects that will qualify for the special mitigation program. Officials urge applicants to take advantage of the funds.

# # #

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

Managed services providers can find themselves navigating sticky privacy issues, balancing their duty to cooperate with law enforcement against their responsibility to safeguard customers’ data.

Executives at Stonehill Technical Solutions won't soon forget the day about six years ago when an FBI agent contacted the Laguna Hills, Calif., managed services provider and asked them to turn over the login credentials for a client whose business had – for undisclosed reasons – drawn the scrutiny of federal authorities.

At CEO David Bryden’s request, the agent sent over some documentation and a phone number to an FBI office, proof that the people on the phone were who they said they were.



A service level agreement (SLA) outlines how a managed service provider (MSP) will support its customers day after day, establishing the expectations for response times for service requests.

This agreement also serves as a legally binding contract, and as such, may protect an MSP against legal action. 

An SLA serves many purposes for an MSP and its customers, but did you know this agreement can deliver a key differentiator for a service provider as well?



Thursday, 17 March 2016 00:00

Mining Open Payments Spend Data

At a Food and Drug Law Institute webinar last week, Robin Usi, the Director for the Division of Data & Informatics (DDI), in the Data Sharing & Partnership Group of the CMS Center for Program Integrity, made clear that accuracy matters in the reporting of spend data to CMS pursuant to the Physician Payments Sunshine Act.  She further stated that CMS is working to identify inaccurate data reporters and that reporters of inaccurate data are prime targets for agency audit and/or compliance actions.

As pharmaceutical and device companies gear up to report 2015 payments and other transfers of value made to physicians and teaching hospitals as required by the Sunshine Act, many companies may be concerned that they will miss something when the March 31 deadline rolls around.  And while there are statutory penalties for not reporting – as Ms. Usi made clear at the FDLI webinar – the penalties for what is reported may be much more significant if the government views a company’s payments to physicians as kickbacks intended to induce the use of the company’s product.

The CMS Open Payments database offers tremendous data mining possibilities.  For 2014 – the first full year of Open Payments reporting – the database contains over 11 million transactions valued at almost $6.5 billion.  This obviously makes for unprecedented public visibility into the financial relationships between the almost 1,500 reporting companies and the over 600,000 physicians receiving payments.
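
For readers who want to explore the data themselves, the Python sketch below shows what that kind of mining might look like with pandas; the file name and the column names are assumptions based on a typical export of the general payments file, not a guaranteed schema.

    # Sketch: summarising Open Payments spend data with pandas.
    # Assumptions: 'general_payments_2014.csv' is a hypothetical local export
    # and the column names are illustrative, not a guaranteed schema.
    import pandas as pd

    payments = pd.read_csv("general_payments_2014.csv", low_memory=False)

    # Transaction count and total reported spend per reporting company.
    by_company = (
        payments
        .groupby("Applicable_Manufacturer_or_Applicable_GPO_Making_Payment_Name")
        ["Total_Amount_of_Payment_USDollars"]
        .agg(["count", "sum"])
        .sort_values("sum", ascending=False)
    )

    print(by_company.head(10))  # top ten companies by total reported payments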



Whether it’s taking steps toward a healthier lifestyle, preventing diseases, or preparing for an emergency or natural disaster, public law is an important tool to promote and protect public health. The Centers for Disease Control and Prevention’s Public Health Law Program (PHLP) develops legal tools and provides technical assistance to public health colleagues and policymakers to help keep their communities safer and healthier.

Emergency preparedness is one of the most important topics PHLP covers. Most emergency response systems are based on laws that regulate when and how state, tribal, local, territorial, and federal entities can engage in an emergency response. The legal nuances are often complicated and easy to miss. PHLP offers resources and training to empower state, tribal, local, and territorial communities to better understand, prepare, and respond to public health emergencies. Together, public health and public health law can protect people from harm and help communities better prepare for disasters.

For the past 16 years, PHLP has helped public health practitioners respond quickly—and with the right legal resources—in times of crisis. PHLP’s work can be divided into two main areas: PHLP’s research initiative and the program’s workforce development activities. Through its research initiative, PHLP conducts legal research using legal epidemiology research principles. PHLP’s research looks at various critical issues to interpret how the law plays a role in diseases and injuries affecting the entire country, and examines specific topics in state and local jurisdictions.

Gregory Sunshine presenting at a CDC TedMed talk

Gregory Sunshine, JD, a legal analyst at CDC, describes the role the agency plays in our public health and legal systems and explains how this affected state Ebola monitoring and movement protocols.

PHLP’s training helps health officials learn what they need to know to prepare for an emergency and what the law allows. In 2015, staff went on a legal preparedness “roadshow,” training more than 500 people in 11 different states in just a few short months. This training showed participants how to recognize legal issues that arise during public health emergencies, offered tools for planning and implementing effective law-based strategies during an emergency, and provided an opportunity to exercise their knowledge through a fictional response scenario.

PHLP also offers emergency response support for specific emergencies. During a public health emergency, such as the Ebola epidemic, PHLP helps partners use the law to stay ahead of quickly evolving situations. After the first case of Ebola was diagnosed in the United States, enhanced entry screening, which is allowed by law to protect Americans’ health, was implemented in five airports beginning October 11, 2014. The enhanced screening was put in place to help identify and monitor travelers from countries with Ebola outbreaks who could have been exposed to the disease or who had signs or symptoms of Ebola.

Stakeholders were concerned that variations in how each state monitored and controlled the movement of travelers from countries with Ebola outbreaks could cause confusion, so PHLP staff published the State Ebola Screening and Monitoring Policies on its website so travelers could access them in one easy location. This information helped people who were considering working in West Africa understand what the requirements might be after they returned home. Similar to what was done during the Ebola outbreak, the program recently published an analysis of emergency declarations and orders related to the West Nile virus as part of CDC’s response to the 2016 Zika outbreak.

PHLP helps public health partners across America answer legal questions on many emergency preparedness and response topics. Through legal research, trainings, and publishing of the latest information, PHLP is always ready to help their partners understand how to use law to protect the health and safety of the public. People interested in learning more about PHLP can visit PHLP’s website. For regular updates on public health law topics, including legal preparedness, subscribe to CDC’s Public Health Law News.

Link to TedMed Video: http://www.cdc.gov/phlp/videos/tedmed-ebola.html

Thursday, 17 March 2016 00:00

New Storage for a New Era

The enterprise storage industry is in transition and probably will be for some time as everything from the media to the array to software and even the physical location of all these components strives for relevancy in an increasingly complex digital economy.

The latest market research numbers should put aside all doubt that enterprise storage as we know it today is not long for this earth. But while this will certainly put a strain on the profits of the leading storage vendors, it is still too early to call for their demise. In fact, given the changes being made to their top platforms and their overall storage portfolios, there is every indication to suggest they are more than ready to roll with the changes.

As a whole, the enterprise data storage market saw a decline of 2.2 percent in the fourth quarter of 2015 even though shipments were actually up about 10 percent, according to IDC. Research manager Liz Conner pinned this on greater activity in areas like server-side storage and cloud-based deployments rather than the big storage arrays that dominated in the past. At the same time, all-Flash arrays saw a whopping 71.9 percent year-over-year gain to produce nearly $1 billion in revenues, while hybrid arrays drew nearly $3 billion to now account for about 28 percent of the overall storage market.



At the RSA Conference two weeks ago, a common question from both clients and former colleagues -- “So, what’s it like being an analyst?” -- led me to write this blog post.

In the interest of full disclosure, there were no massive epiphanies during my first year, but the transition from being on the vendor side for 15+ years to an analyst provided some perspectives, listed here in no specific order:



A shining light to business continuity professionals in North America

At DRJ Spring World 2016 Tuesday, the Business Continuity Institute presented its annual North America Awards to recognize the outstanding contribution of business continuity and resilience professionals and organizations across the region.

The BCI North America Awards consist of nine categories – eight of which are decided by a panel of judges with the winner of the final category (Industry Personality of the Year) being chosen by BCI members in a vote. As expected, the entries received during the year were all to a high standard and the panel of judges had a difficult task deciding upon the winners.

There can be only one winner in each of the categories however, and those showered in glory were:

Continuity and Resilience Consultant
Suzanne Bernier MBCI, President of SB Crisis Consulting

Continuity and Resilience Professional (Private Sector)
Linda Laun, Chief Continuity Architect at IBM

Continuity and Resilience Newcomer
Bradley Hove AMBCI, Consultant at Emergency Response Management Consulting Ltd

Continuity and Resilience Professional (Public Sector)
Ira Tannenbaum, Assistant Commissioner for Public/Private Initiatives at New York City Office of Emergency Management

Continuity and Resilience Team
Aon’s Global Business Continuity Management Team

Continuity and Resilience Provider (Service/Product)
Fusion Risk Management Inc and the Fusion Framework BCM Software

Continuity and Resilience Innovation
Fairchild Consulting, FairchildApp

Most Effective Recovery
Aon Global Business Continuity Management Team – Americas

Industry Personality
Brian Zawada FBCI, Director of Consulting Services at Avalution Consulting

In addition to these awards, the occasion was also used to present Des O'Callaghan FBCI with a BCI Achievement Award.

Sean Murphy MBCI of the US Chapter of the BCI and presenter of the Awards, commented: “I was impressed by the high calibre of the finalists and the winners this year. The fact that so many awards were hotly contested shows the depth of talent that is available in the business continuity and resilience profession in North America. The BCI North America awards are a great opportunity to recognise outstanding professionalism and innovation from the business continuity and resilience community and I was delighted to be part of this.”

The BCI North America Awards are one of seven regional awards held by the BCI and which culminate in the annual Global Awards held in November during the Institute’s annual conference in London, England. All winners of a BCI Regional Award are automatically entered into the Global Awards.

Faxing is still a key part of today’s business world--to the tune of 100 billion (yes, “b”) pages a year, according to research firm Davidson Consulting--for a number of reasons, including security, compliance and ease of use. In fact, a CIO Insight article tells us that 72% of U.S. companies still have fax machines.

However, here’s a caution flag for you. If your clients are still running their fax processes on aging, analog-era infrastructure--desktop fax machines, internal fax servers, gateway software, analog fax lines--this might be among the least secure protocols they use for transmitting their data. Traditional faxing can present security vulnerabilities at every step in the process.

Moreover, your clients probably don’t even realize the data they send and receive via traditional fax faces these security weak points--and, by extension, puts them at risk of non-compliance. This is where you can help them.



Thursday, 17 March 2016 00:00

Frankenstein's Information Technology

The scientist Victor Frankenstein never set out to create a monster; quite the reverse. In Mary Shelley’s novel, Frankenstein sought to assemble and reanimate a human from constituent parts. He creates life but is horrified by the result and rejects his creation, which, granted the gift of life, then seeks revenge on him.

Your IT estate evolves over time, although ‘evolves’ is perhaps not the best descriptor. Evolution implies some sort of Darwinian survival-of-the-fittest selection process. Your IT is more like a medieval town that tends toward unplanned, sprawling growth, with a lack of building codes resulting in precarious constructions. Pretty soon, the defensive value of the town walls is compromised by ramshackle lean-to buildings, sanitation proves inadequate, streets become alleyways and dangerous slums develop, along with conditions ripe for the spread of fire and disease. Over decades the town goes from protecting and nurturing its populace to constraining it. If the founders were still alive they would not recognise their creation.

What starts out as a carefully designed IT architecture accumulates complexity as new applications and functions are bolted on, acquisitions are integrated, and so on. Under pressure to make changes at pace and cheaply, it’s easy to create links between existing systems using whatever means are available and familiar. Before you realise it, your IT architecture looks more like a bowl of spaghetti with lines going everywhere. If you want to find out what is connected, the only way is to put your fork in, turn, lift and see what comes away on the end of it. That’s a mouthful which is messy and hard to eat.



Thursday, 17 March 2016 00:00

Resilience is a state of mind

What keeps an organization going when by all measures it shouldn’t be possible? I’ve known a small to medium sized family business take a hit that should have seen it fold. It kept going because of the sheer grit and determination of a third-generation owner/manager who was adamant that his business was not going down on his shift. When less emotionally invested managers might have thrown in the towel he just kept going, at pace, looking for an opening that would save his business and, guess what? He found it.

Lately we've been pummelled by news of horrendous human tragedies and disasters, but I was struck by a recent uplifting story. Storm ‘Desmond’ devastated parts of Cumbria (UK) and overwhelmed flood defences that had only been installed following a ‘one in 170 year’ flood back in 2009.
There is a fallacy that lightning never strikes the same place twice.



Thursday, 17 March 2016 00:00

Automation Is Key To Better Online Security

Locks are great for safeguarding things from unauthorized access. Keys are necessary for allowing authorized people to open those locks. What happens, though, when there are so many locks, and so many keys that you lose track? What happens when those keys fall into the wrong hands, or can be easily duplicated? That is the dilemma facing Internet security and privacy today.

Digital certificates and encryption keys form the backbone of security and privacy online. When those certificates and keys are poorly managed, however, it puts the network and data at risk. Actually, the risk is even greater than if you had no keys and certificates at all, because having them creates a false sense of confidence. The existence of the keys and certificates provides an illusion of security that can make it even easier for attackers to exploit poorly managed keys and certificates.
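
Automation starts with something as basic as a continuously run inventory and expiry check. The Python sketch below, which assumes direct TLS access to the (hypothetical) hosts listed and uses only the standard library, pulls each server certificate's expiry date so renewals can be flagged long before anything lapses.

    # Sketch: automated check of TLS certificate expiry dates.
    # Assumption: the host list is illustrative; a real inventory would be
    # much larger and fed from a certificate management system.
    import socket
    import ssl
    from datetime import datetime, timezone

    HOSTS = ["example.com", "example.org"]  # hypothetical inventory

    def cert_expiry(host: str, port: int = 443) -> datetime:
        """Fetch the server certificate and return its expiry as a UTC datetime."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'
        return datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                      tz=timezone.utc)

    for host in HOSTS:
        expiry = cert_expiry(host)
        days_left = (expiry - datetime.now(timezone.utc)).days
        print(f"{host}: expires {expiry:%Y-%m-%d} ({days_left} days left)")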



(TNS) - Sabine River flooding that shut down westbound Interstate 10 traffic on Tuesday is "extremely inconvenient" to regional commerce, a noted Texas economist said, but likely isn't as costly as other disruptions, such as the 2008 storm surge from Hurricane Ike.

But it's early yet in what might be called a slow-motion disaster as floodwaters from Toledo Bend Reservoir move downriver toward Sabine Lake and the open Gulf of Mexico.

As of Tuesday evening, eastbound traffic could still cross into Louisiana, though Texas officials were prepared to barricade it if the swollen river crept onto its lanes. Westbound traffic headed into Texas, meanwhile, was shunted north to Interstate 20 in Shreveport, Louisiana. That hours-long detour is expected to last through the weekend.



(TNS) - Washington's entire Metrorail system will close for at least 29 hours beginning at midnight tonight for emergency inspections following a tunnel fire that appeared similar to a fatal incident a year earlier, Metro General Manager/CEO Paul J. Wiedefeld said Tuesday.

The "full closure" will allow inspections of the system's third-rail power cables following an early morning tunnel fire on Monday, Wiedefeld said in a news release.

About 600 jumper cables will be examined along tunnel segments in the system.



(TNS) - Technicians are assessing damage at Fayette County EOC after its communications tower was struck by lightning Monday evening.

Kevin Walker, director of Fayette County Office of Emergency Management, said the tower took a direct lightning hit around 5:30 or 6 p.m. Monday and all communications, radio and phone, were lost.

The center immediately initiated mutual aid agreements with other counties, allowing neighboring centers to accept, dispatch and monitor all 911 calls, he said.

Operators and technicians worked through the night to restore service, and as of 3 a.m. Tuesday all phone lines and radios were back up and running, said Walker.



(TNS) - Emergency medical responders and law enforcement professionals trained Monday and Tuesday for one of the worst scenarios possible: an active shooter.

“This training is designed to prepare us to render medical aid in a shooting situation much sooner than we are currently able to,” said Eliza Shaw, training coordinator for Centre LifeLink EMS, which hosted the South Central Mountains Regional Task Force training initiative.

Task Force Director Phil Lucas said the training is beneficial to both emergency medical responders and law enforcement because it helps them establish expectations and learn each other’s procedures.



Cybersecurity is top of mind for IT and business resilience professionals all over the world. And, since mass notification systems rely on employee contact data, it’s crucial these systems are examined with scrutiny. While it’s unlikely an organization will have ultra-sensitive information (such as social security or credit card numbers) residing directly in a notification database, it’s nevertheless imperative personal data is well protected.

Here are seven tips for ensuring your emergency notification service remains in your full control.



Everyone wants the latest, the greatest, the most cutting-edge technology. This is easy to do with a tablet or a smartphone but not when it’s an integrated enterprise data environment. In this circumstance, the only thing worse than falling behind the technological curve is throwing your processes out of whack with fork-lift upgrades.

This is why data infrastructure must evolve rather than change outright. Sometimes the evolution is quick and, yes, disruptive; other times it is slow, almost to the point where users don’t even know it’s happening. But overall, the change must be steady and purposeful or else the enterprise will find itself unable to compete in the emerging digital economy.

Sounds simple, right? It isn’t, of course. But even though each move must be weighed against broader architectural goals rather than simply adding more storage or compute power as in the past, there are still ways to break down the overall process into key steps while still maintaining the flexibility to alter the plan as needed.



According to Frost and Sullivan, eight elements make a city “smart”: smart buildings, smart energy, smart mobility, smart health care, smart infrastructure, smart technology, smart governance and smart education, and smart citizens. Increasingly, city leaders are looking to the Internet of Things (IoT) and advances in technology to make their cities – and their citizens – work more efficiently and cost effectively. Smart building technology and sensor data analytics are being instituted in everything from lowering energy consumption to rethinking traffic flow to ordinary infrastructure maintenance.

Businesses, too, are adopting smart technology and sensor data analytics as a way to create better workspaces and improve employee productivity.



While developers and IT operations professionals have been excited about the concept of DevOps, data center operators, the people who run the infrastructure for the teams upstream, haven’t generally been involved in the conversation. Jack Story, distinguished technologist at Hewlett-Packard Enterprise, thinks that is a mistake.

And people make that mistake because there is a lot of confusion about what DevOps is and isn’t. In a session at this week’s Data Center World Global conference in Las Vegas, Story attempted to make the case that data center operators should be part of the DevOps process and explain what it is.

A lot of confusion about DevOps comes from the misconception that it is about tools and automation. “It is not about automating the processes that you have today,” Story said. “It is not a tool. It is a cultural and organizational change.”



Today’s information security landscape is a constantly evolving beast. As attack vectors continue to grow, attacks become more frequent and attackers evolve to be even more sophisticated.

This is what we call “the new normal.”

The need to continuously adapt to an increasingly hostile environment has resulted in a significant change from the familiar security measures that kept us “comfortable” only a scant five years ago.



The World Health Organization recently announced that the Zika outbreak was a “public health emergency of international concern”. The Zika virus is a mosquito-borne virus linked to serious neurological birth disorders. It is native mainly to tropical Africa, with outbreaks in Southeast Asia and the Pacific Islands. It appeared in Brazil last year and has since been seen in many Latin American countries and Caribbean islands. Public Health England announced that four cases of the Zika virus have been confirmed in the UK.  These cases are believed to have been ‘travel associated’ and not thought to have been contracted in the UK, though health officials expect to see more cases of travel associated infections.[1] As we operate in an increasingly global world, with a constant flow of employees traveling for business, how should your organisation prepare for this pandemic?



Streaming analytics, self-service options, and embedding big data insights into the applications that drive the business are the new priorities for organizations as they evaluate their big data strategies.

That's according to a new TechRadar report from Forrester Research that looks at the state of big data in businesses today.

Enterprise organizations have reached a new stage in big data adoption, and in 2016 they will be looking to embed the technology into the applications that power their businesses via integration and APIs.



Thursday, 17 March 2016 00:00

BCI: The rising threat of a product recall

The number of product recalls in the UK jumped by 26% to a new high of 310 in 2014/15 from 245 in 2013/14 according to a new study by law firm RPC.

The number of vehicle recalls rose dramatically in the last year after several high profile incidents within the motor industry. In the last year the UK has seen 39 different motor vehicle recalls, a 30% increase from the 30 recalled in 2013/14.

The scandal over General Motors’ failure to promptly recall cars with a potentially faulty ignition switch may have prompted other manufacturers to recall more swiftly and more frequently if they identified a potential problem with their cars. US federal agencies claimed the fault caused up to 124 deaths. GM recently agreed to pay $900 million to settle criminal charges and eventually recalled 800,000 cars.

Pressure on the motor industry has been further raised by the investigation into Volkswagen over emissions testing, which began in 2014. French carmaker Renault recently recalled 15,000 cars after questions were raised over emissions testing of its cars.

Product recalls may not feature high on the list of threats according to the Business Continuity Institute's latest Horizon Scan Report, but they are still a threat to some. Product quality incidents were a concern for 27% of respondents to a global survey and product safety incidents were a concern to 19%.

Gavin Reese, Partner at RPC, commented: “Sometimes it can take a huge scandal to break for an industry to sit up, take notice and ensure their products are watertight. Certainly the automotive industry is now very sensitive to accusations of being slow to recall faulty or non-compliant products. Car manufacturers are looking for irregularities more closely, as well as facing increased pressure from regulators and, therefore, it’s likely that 2016 will also see a high level of vehicle recalls.”

RPC also noted that the number of recalls relating to food and drink has significantly increased, by 50% this year from 56 to 84. After the horse meat scandal in 2013, the National Food Crime Unit was established which works to uncover incidents of food fraud in the UK. RPC says that the creation of this unit as well as the increasing importance being placed by supermarkets on their supply chains may have led to the rise in food product recalls in the last year.

Gavin Reese adds: “The horse meat scandal set off reverberations across the food industry and now a couple of years on tighter measures and an increased scrutiny have clearly made a big difference.”

Thursday, 17 March 2016 00:00

Your Site Has Been Hacked... Now What?

I get it, funds are limited and you probably think you’re too stressed out to even think about paying for website security protection. Your business is probably not “big enough” to be targeted by hackers and then again, what kind of destruction could they even cause? Yeah, that’s what I thought.

Early on in my business (about 4 months in to be exact), the Foxtail Marketing website was hacked. We lost our traffic, our good standing with Google, and our site was left unsalvageable. Because the hackers had infected every aspect of our site, we couldn’t trust any of the old code. We had to bite the bullet and buy a new (and expensive) website, wipe our servers, change every password we ever created, and pray that we didn’t leave any backdoors for the hackers to get in again.

Just like a spare tire, you’ll never understand how bad you need website security until it’s too late. But likely for you, it’s not too late. The following guide should help you understand just how badly you need this security.



(TNS) - A post that appeared on Facebook about 10 a.m. Wednesday declared “THIS IS AN EXERCISE” before detailing a mock emergency.

“There has been an imminent failure at the West Pass Dike. Diablo Dam has failed at 815. Newhalem and Diablo have been evacuated,” read the post from the Skagit County Department of Emergency Management. “Concrete and Hamilton have been evacuated and moved to higher ground. Evacuate to Concrete High School and Concrete Airport.”

As the post went up, an emergency coordination center on East College Way in Mount Vernon was already teeming with activity.



Yesterday’s archives can’t handle the challenges of today’s modern enterprise, let alone the challenges enterprises are bound to face in the future as communication channels rapidly evolve.

The variety of communication sources has dramatically changed, expanding to include instant messaging, unified communications, enterprise social networks, social media, and more.

Old school archives were built at a time when an organization’s primary communication vehicle was email, and social media sites like Twitter and Facebook had not yet gained massive popularity. Expecting these antiquated archives to perform in today’s world is akin to using your grandparents’ landline phone as your primary mode of contact.

So how does today’s archive differ from the archive of yesteryear?



Thursday, 17 March 2016 00:00

The Examiners are Coming!

As Paul Revere gazed out his window, seeing the signal lantern in the tower of the North Church, he cried out… “The Examiners are coming!!! The Examiners are coming!!!”

Paul’s wife, startled, leapt to her feet and excitedly remarked, “Now what do we do?”

First of all, do not panic. Most likely you have been operating for approximately 12 to 18 months as a de novo licensee, and hopefully your operations have not only been successful, but also profitable.  Some regulators prefer to contact a licensee by phone with the news that the examiners will be arriving shortly to conduct the first examination.  During this initial call, they will provide you with a list of required documentation to be prepared in anticipation of the examiners’ on-site arrival.  Other regulatory agencies choose to send out what the examiners refer to as a “first day letter request” (FDL).  The FDL details the required documentation to be readied for the examination and, in some instances, identifies and provides contact information for the Examiner-in-Charge (EIC) of the examination.



NORTH LITTLE ROCK –  FEMA offers a wide range of free resources for Arkansas homeowners who are either rebuilding after the winter storms or preparing for the next time disaster strikes.

FEMA maintains an extensive online library, including bilingual and multimedia resources, which describe the measures contractors or do-it-yourselfers can take to reduce risks to property. FEMA publications can be viewed online and downloaded to any computer.

For rebuilding information, go to www.fema.gov and click on “Plan, Prepare and Mitigate.” There are numerous links to resources and topics including “Protecting Homes,” “Protecting Your Business” and “Safe, Strong and Protected Homes and Communities.” There are also links to information about disaster preparedness.

The decision to rebuild stronger, safer and smarter may save lives and property in a future disaster.

http://www.fema.gov/protect-your-property - offers a comprehensive overview of available publications to help protect your home or business against hazards including earthquakes, fire, flood, high winds and others.

http://www.fema.gov/small-business-toolkit/protect-your-property-or-business-disaster - provides links to resources for protecting your community, your business and places of worship, and offers helpful links like these:

# # #

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.

When it comes to security and reports like those I’ve just read, I have to wonder if CEO stands for Chief Executive Ostrich, because there are a lot of them with heads buried in the sand, ignoring reality.

Take this new study by Cyphort and Ponemon Institute, for example. The email announcement I received regarding the study warned that CEOs are “completely clueless” about cyberattacks on their company, with a little more than one third of respondents saying they are never updated about security incidents. Why aren’t they learning about the attacks? The report, which surveyed 597 IT leaders in the private sector, found that 39 percent said the company didn’t have the intelligence data available to present to CEOs and convince them of the security risk. In turn, not only are companies being attacked, but it is taking way too long to detect that attack, with nearly a quarter saying it can take up to two years.

This could be because C-level executives make productivity a greater priority than security, according to the newest report from Barkly. The study found that while IT professionals want to put more emphasis on security, only 27 percent of executives want to prioritize security. Another big disconnect between IT and executives when it comes to security: The C-level suite thinks more software is the solution to improved security while IT professionals want to bump up employee education. The most ironic result of the survey was that IT pros say the uninformed employee is the network’s biggest threat while executives say it is insider threats. It’s almost like comparing green apples and red apples, isn’t it? But it does show that there is a serious lack of communication and understanding when it comes to security. As Jack Danahy, co-founder and CTO of Barkly, said in a formal statement:



Thursday, 17 March 2016 00:00

What is Your Reputation Worth?

“It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you’ll do things differently.” – Warren Buffett

For Volkswagen, the second largest auto manufacturer in the world, it took 78 years to build its reputation and one day to lose it. Volkswagen Group has about 340 subsidiary companies. It has operations in 150 countries, including 100 production facilities. The company sells passenger cars under the Audi, Bentley, Lamborghini and Porsche brands, and motorcycles under the Ducati brand.

VW admitted installing software in diesel cars to dupe emissions control tests by making them test cleaner than they actually were, and even used this information in marketing campaigns to promote these cars. Unfortunately for them, in 2014 a team of researchers at West Virginia University ran separate tests both in the lab and on the road, and to their surprise the road tests showed 40 times more emissions. After 14 months of denials, VW admitted it had installed “defeat” software that detected when a car’s emissions system was being monitored in the lab and altered the results. As a result of the fallout, the company’s CEO resigned, criminal charges were filed, and losses are estimated to be in the billions.

No one will know for sure how much this lapse in judgment will cost Volkswagen in the long run. It makes you wonder who made the decision to cheat. Was it just one engineer, or a team of engineers? How far up the chain of command did it go? Did the CEO know? It doesn’t matter because he was forced to resign and the damage had been done.



Is customer-facing breach notification and response a part of your incident response plan? It should be! This is the part where you notify people that their information has been compromised, communicate to employees and the public about what happened, and set the tone for recovery. It's more art than science, with different factors that influence what you do and how you do the notification and response. Unfortunately, many firms treat breach notification as an afterthought or only as a compliance obligation, missing out on an opportunity to reassure and make things right with their customers at a critical time when a breach has damaged customer trust.

At RSA Conference last week, I moderated a panel discussion with three industry experts (Bo Holland of AllClear ID, Lisa Sotto of Hunton & Williams, and Matt Prevost of Chubb) who offered their insights into what to do, how to do it, and how to pay for it and offset the risk as it relates to breach notification and response. Highlights from the discussion:



Given the sensitivity of the data stored in customer relationship management (CRM) applications, it should come as no surprise that there is a lot of concern over how to secure that data. To address that issue, Salesforce today extended a security policy engine service that now makes it possible to limit who gets to see which data stored in its applications in real time.

Seema Kumar, senior director of product marketing for Salesforce, says the Transaction Security service is an extension of Salesforce Shield that makes use of new event monitoring tools that IT organizations can then use to either block entirely or simply generate an alert when a user tries to access a certain type of data without permission. The IT organization can use Salesforce Shield to determine the specific action across a broad set of data.

In addition, Kumar says Salesforce will soon extend this capability to not only its own applications, but all the applications that tap into the same customer records stored in the Salesforce cloud ecosystem.
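
Conceptually, a transaction security policy evaluates an access event against a rule and either blocks the action, raises an alert, or lets it through. The generic Python sketch below is not Salesforce's API; it only illustrates that block-versus-alert decision, and the event fields and thresholds are hypothetical.

    # Generic sketch of a block-or-alert data access policy check.
    # This is NOT Salesforce's API; fields, rules and thresholds are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class AccessEvent:
        user: str
        object_type: str        # e.g. "Contact"
        record_count: int
        has_export_permission: bool

    def evaluate(event: AccessEvent) -> str:
        """Return 'block', 'alert' or 'allow' for a data access event."""
        # Block large contact exports by users without explicit export permission.
        if (event.object_type == "Contact" and event.record_count > 1000
                and not event.has_export_permission):
            return "block"
        # Alert (but allow) on any sizeable export so it can be reviewed later.
        if event.record_count > 100:
            return "alert"
        return "allow"

    print(evaluate(AccessEvent("jdoe", "Contact", 5000, False)))  # -> block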



Most data center servers operate at only 12 to 18 percent of their capacity, yet many companies aren’t taking advantage of the cost-saving potential offered by data center consolidation. Consider this: in the last five years, the US government saved nearly $2 billion by consolidating data centers. Companies like Microsoft, HPE, and IBM have likewise saved billions.

In an effort to cut costs and regain control of the data center environment, IT managers are asking that their environments be consolidated and made more efficient. The conversation revolves around aligning IT with business needs, which today often means greater IT agility. Managers and executives are trying to drive down cost and in doing so have prioritized data center consolidation and migration projects.

In creating a consolidation or data center migration plan, high-density server equipment, applications, virtualization technology, and end-user considerations all fall under the general scope.



Don’t put all your eggs in one basket, or so the saying goes. When it comes to phone system resilience, this would seem to be sound advice. After all, phone availability is critical for many organisations and relying on just one solution to guarantee that availability would be foolhardy. However, single points of failure may lie in wait for the unwary, even in situations as simple as putting in a toll free number for use in an emergency.



Tuesday, 08 March 2016 00:00

Solving the bulk password theft puzzle

Nick Lowe explores how current security measures against bulk data theft from organizations are broken, and how they can be fixed.

Another year, and another round of large-scale data breaches has started. We were barely a week into 2016 when Time Warner was forced to announce a breach of up to 320,000 users’ email account passwords; this followed 2015’s mega-breaches at organizations such as Ashley Madison, the US Government’s Office of Personnel Management, toy maker Vtech and many others.

Despite the scale of these ongoing data losses, and the reputational damage and remediation costs they cause, the methods for enterprise-level protection of bulk passwords and personally identifiable information (PII) have remained fundamentally unchanged over the past 20 years. And it’s evident that these approaches are simply not effective in preventing breaches.

A majority of data thefts are done from an organization’s bulk file storage. This is because once a successful attack is executed, whether via a social engineering exploit to gain administrator credentials, malware installation, or a privilege-escalation attack using known software flaws, the theft itself can be done remarkably quickly. A million username/password pairs may be stolen in just 60 seconds.
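
One baseline control, which limits the damage of such thefts rather than preventing them, is to store only salted, memory-hard password hashes so a stolen file cannot be reversed in bulk. Below is a minimal Python sketch using the standard library's scrypt; the cost parameters are illustrative and should be tuned to your hardware.

    # Sketch: storing and verifying salted scrypt password hashes.
    # The cost parameters (n, r, p) are illustrative; tune them for your hardware.
    import hashlib
    import hmac
    import secrets

    def hash_password(password: str) -> tuple:
        """Return a (salt, digest) pair for storage."""
        salt = secrets.token_bytes(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        """Recompute the hash and compare in constant time."""
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify("correct horse battery staple", salt, digest))  # True
    print(verify("guess", salt, digest))                         # False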



The Cyber Kill Chain describes the different stages of an attack, from initial reconnaissance to objective completion. In this article Richard Cassidy describes the different elements of the Cyber Kill Chain and how to use it.

Today’s attackers are becoming increasingly sophisticated, using advanced techniques to infiltrate a business’s environment. Unlike in the past when hackers primarily worked alone using ‘smash-and-grab’ techniques, today’s attackers prefer to work in groups, with each member bringing his or her own expertise. With highly skilled players in place, these groups are able to approach infiltration in a much more regimented way, following a defined process that enables them to evade detection and achieve their ultimate goal: turning sensitive, valuable data into a profit. With attackers ready to pounce on any business at any moment, how can businesses stay ahead and ensure their sensitive data remains safe? Most attacks follow a ‘process’ that reflects attackers’ behaviours, ranging from researching, to launching an attack and ultimately to data exfiltration: this is articulated as the ‘Cyber Kill Chain’.

The Cyber Kill Chain was developed by Lockheed Martin’s Computer Incident Response Team and describes the different stages of an attack, from initial reconnaissance to objective completion. This representation of the attack flow has been widely adopted by organizations to help them approach their defence strategies in the same way attackers approach infiltrating their businesses. As malicious activity continues to threaten sensitive data, whether personal data or company sensitive data, one certainty remains: attackers will continue to exploit weaknesses to infiltrate systems and extract data that they can turn into money. The best opportunity to get ahead of the hacker is to understand the steps he or she will go through, his or her motivations and techniques, and to build a security strategy around them.
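
The seven stages Lockheed Martin defined are often used as a simple tagging scheme for detections, which makes it easier to see how early or late in the chain activity is being caught. In the Python sketch below, only the stage names come from the Kill Chain itself; the detection names and their mapping are purely illustrative.

    # Sketch: tagging detections with Cyber Kill Chain stages.
    # The stage names are Lockheed Martin's; the detections are illustrative.
    KILL_CHAIN = [
        "Reconnaissance",
        "Weaponization",
        "Delivery",
        "Exploitation",
        "Installation",
        "Command and Control",
        "Actions on Objectives",
    ]

    # Hypothetical mapping of detection types to the stage they usually indicate.
    DETECTION_STAGE = {
        "port_scan": "Reconnaissance",
        "phishing_email": "Delivery",
        "exploit_attempt": "Exploitation",
        "new_persistence_key": "Installation",
        "beaconing_traffic": "Command and Control",
        "bulk_data_egress": "Actions on Objectives",
    }

    def earliest_stage(detections):
        """Return the earliest kill chain stage seen across a set of detections."""
        stages = [DETECTION_STAGE[d] for d in detections if d in DETECTION_STAGE]
        return min(stages, key=KILL_CHAIN.index) if stages else "none"

    print(earliest_stage(["beaconing_traffic", "phishing_email"]))  # Delivery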



Monday, 07 March 2016 00:00

BCM/DR: Face-to-Face Meetings

I’ve often run into people that have to ‘send an email’ with a question for a person that’s located only a few seats away. Are they afraid of that person? Why can’t they just get up and go see them for a couple of minutes to ask what they need to ask? It seems the art of face-to-face communication is disappearing in favor of CYA (Cover Your A…) and audit concerns. If it’s not written down then it can’t be true. What have we done to ourselves?

This happens a lot when it comes to developing strategies for Business Continuity Management (BCM) and other contingency related initiatives. We don’t go and ask people; we develop questionnaires, sent by snail mail or email, or we purchase an expensive online tool, fill it with questions that get interpreted in a myriad of ways and expect recipients to respond in a timely and comprehensive manner. Huh!



Over the last ten to twenty years, we have witnessed the expansion of federal criminal prosecution of health and safety matters. Environmental and food and drug regulatory enforcement has been supplemented by aggressive criminal enforcement.

In the last few years, we have seen some landmark criminal cases involving companies and executives for food safety violations. Compliance programs in these high-risk industries can literally be a matter of life and death. Judges are handing out tough criminal sentences when warranted.

Each week we hear about the outbreak of a new foodborne illness. Weeks after that, we then usually hear about a criminal investigation against the company and sometimes individual executives.



Monday, 07 March 2016 00:00

'No Price' to be Put on School Security

(TNS) - Local schools face tough choices on how much security is appropriate as last week’s shooting in Madison Twp. brought a nationwide issue close to home for the first time.

The challenge for schools is how far to go on a continuum with tons of options. More locks? More cameras? More guards? More drills? Adding metal detectors? Arming school staff? There’s no way to make everyone happy, as there are parents who support and oppose each of those steps.

“It’s a tough spot for schools and it comes down to one word — reasonableness. What is reasonable to reduce risk?” said Ken Trump, a national school safety consultant. “The majority of parents want safe schools, want risks reduced, want genuine preparedness.



(TNS) - For some 25 volunteers the objective Saturday morning was equal parts simple and perplexing: find "Joe," or maybe it's "Bob."

Jackson County Search and Rescue manager Mark Mihaljevich was purposely vague with details to the volunteers completing Search and Rescue Academy training. Joe's an elderly man, they don't know his last name, he's wearing a hunting vest and a hat, but they don't know what color.

In actuality, Joe is a duffel bag hidden somewhere on the rural county-owned Givan property off Agate Road, but the unclear details the search and rescue volunteers were given is a common beginning to a missing persons investigation.

"This is typical," instructor Micki Evans said.



There is an old saying that there are two things certain in life: death and taxes. I would like to add a third one–data security breaches. The Identity Theft Resource Center (ITRC) defines a data security breach as “an incident in which an individual name plus a Social Security, driver’s license number, medical record or financial records (credit/debit cards included) is potentially put at risk because of exposure.” The ITRC reports that 717 data breaches have occurred this year exposing over 176 million records.

On the surface, finding a pattern across all such breaches may appear daunting considering how varied the targeted companies are. However, the ITRC argues that the impacted organizations are similar in that all of the data security breaches contained “personally identifiable information (PII) in a format easily read by thieves, in other words, not encrypted.” Based on my experience, I’d expect that a significant portion of the data breaches compromised data in on-premises systems. Being forced to realize the vulnerability of on-premises systems, organizations are beginning to rethink their cloud strategy.

For example, Tara Seals declares in her recent Infosecurity Magazine article that “despite cloud security fears, the ongoing epidemic of data breaches is likely to simply push more enterprises towards the cloud.” Is the move to the cloud simply a temporary, knee-jerk reaction to the growing trend in security breaches or are we witnessing a permanent shift towards the cloud? Some industry experts conclude that a permanent shift is happening. Tim Jennings from Ovum, for example, believes that a driving force behind enterprises’ move to the cloud is that they lack the in-house security expertise to deal with today’s threats and highly motivated bad actors. Perhaps the headline from The Onion, which declares “China Unable To Recruit Hackers Fast Enough To Keep Up With Vulnerabilities In U.S. Security Systems,” is not so funny after all.
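
The ITRC's point about unencrypted PII is worth making concrete. The Python sketch below encrypts a single field using the third-party cryptography package's Fernet recipe; key management, which is the genuinely hard part in practice, is deliberately glossed over here.

    # Sketch: encrypting a PII field at rest with Fernet (symmetric encryption).
    # Requires the third-party 'cryptography' package. Key management (KMS/HSM,
    # rotation, access control) is deliberately omitted from this illustration.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # in practice, fetched from a KMS, not made inline
    cipher = Fernet(key)

    ssn_plaintext = b"123-45-6789"  # illustrative PII value
    token = cipher.encrypt(ssn_plaintext)

    print(token)                    # what a thief sees in a stolen file
    print(cipher.decrypt(token))    # recoverable only with the key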



A survey conducted by Lockheed Martin and the Government Business Council finds reasons to be hopeful about federal IT and challenges that need to be addressed

During Tuesday's House Judiciary Committee hearing on the challenge of balancing privacy with public safety, FBI Director James Comey faced skepticism about whether the agency had really fully explored how it might access an encrypted iPhone, currently the focus of a legal battle between the US government and Apple.

Though Comey insisted the FBI had sought assistance from other government agencies with cybersecurity expertise, not everyone was convinced.

Worcester Polytechnic Institute professor Susan Landau, in prepared remarks, said that law enforcement agencies should modernize their investigatory capabilities rather than relying on the assistance of the courts.



(TNS) - Illinois State University became the first university in Central Illinois and the second outside of northern Illinois to be designated as a “StormReady University” by the National Weather Service.

To earn the designation, the university had to meet seven criteria involving preparation to respond to severe weather conditions and weather emergencies, explained Chris Miller, warning coordination meteorologist with the National Weather Service office in Lincoln.

These included having designated storm shelters, multiple methods for issuing warnings, trained weather spotters and formal, written emergency plans that are tested.



Friday, 04 March 2016 00:00

Don’t Ask, Don’t Tell

We’re reading an item of interest from across the pond where the United Kingdom’s Institute of Directors (IoD) has issued a new report that gives insight into how companies tend to react if they are under a cyber attack.

The IoD study, supported by Barclays, revealed that most companies keep quiet, with under one third (28 percent) of cyber attacks reported to the police.

This is despite the fact that half (49 percent) of cyber attacks resulted in interruption of business operations, the IoD noted.



NORTH LITTLE ROCK – Arkansas residents who have registered with FEMA for disaster aid are urged by recovery officials to “stay in touch.” It’s the best way to get answers and resolve potential issues that might result in assistance being denied.

“Putting your life back together after a disaster is difficult,” said John Long, federal coordinating officer for FEMA. “While the process of getting help from FEMA is intended to be simple, it’s easy to understand how sometimes providing important information is overlooked or missed.”

Residents of Benton, Carroll, Crawford, Faulkner, Jackson, Jefferson, Lee, Little River, Perry, Sebastian and Sevier counties affected by the severe storms Dec. 26 – Jan. 22, 2016 may be eligible for disaster assistance and are encouraged to register for assistance with FEMA.

After registering, it’s important to keep open the lines of communication.  “It’s a two-way street,” said Long. “FEMA can’t offer assistance to survivors who – for whatever reason – have not provided all the necessary information.”

After registering with FEMA, applicants will receive notice by mail within 10 days on whether or not they qualify for federal disaster assistance.

  • If eligible, the letter explains how much the grant will be, and how it is intended to be used.
  • If ineligible – or if the grant amount reads “0” – you may still qualify. The denial may just mean the application is missing information or that you missed an appointment with an inspector.

Applicants who are denied assistance may call the Helpline to understand why, or go online to www.disasterassistance.gov or m.fema.gov. Becoming eligible for assistance may be as simple as supplying missing paperwork or providing additional information.

FEMA looks at a number of things to determine if a survivor will receive disaster assistance. The agency must be able to:

  • Verify an applicant’s identity.
  • Verify damages. If you believe the inspector didn’t see all of your damages, call the FEMA Helpline at 1-800-621-3362.
  • Verify home occupancy. Applicants need to provide proof of occupancy such as a utility bill.
  • Collect insurance information.

“FEMA personnel are here to help,” said Scott Bass, state coordinating officer with the Arkansas Department of Emergency Management. “Keep in touch. Use the Helpline. You’ll get answers to your questions and help with understanding the assistance process, and ways to move your personal recovery forward.”

To register for assistance:

  • call 800-621-3362 (FEMA). If you are deaf, hard-of-hearing or have a speech disability and use a TTY, call 800-462-7585. If you use 711-Relay or Voice Relay Services, call 800-621-3362; or
  • go to www.DisasterAssistance.gov

The toll-free telephone numbers will operate from 7 a.m. to 10 p.m. seven days a week. Multilingual operators are available.

# # #

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.


This year’s international Business Continuity Awareness Week is taking place from 16th-20th May 2016 and a set of four posters for promoting it is now available.

The theme for BCAW 2016 is ‘return on investment’, so all four posters display the message ‘Discover the value of business continuity’.

The posters are free to download either as a PDF in various shapes and sizes, or as a JPG. They are also available with or without bleeds depending on whether you would like to print from your own computer, or you would like to get them professionally printed. The BCI also encourages sharing of the image versions through social media channels to spread the message.

Obtain the posters

The Business Continuity Institute’s annual North America business continuity and resilience awards will be presented at a ceremony on March 15 at DRJ Spring World 2016 in Orlando. The shortlist of finalists is as follows:

Continuity and Resilience Consultant
Suzanne Bernier MBCI, President of SB Crisis Consulting
Christopher Duffy, Strategic BCP
Christopher Rivera MBCI, Lootok, Ltd

Continuity and Resilience Professional (Private Sector)
Pauline Williams-Banta, Business Continuity Manager, The Energy Authority
Aaron Miller MBCI, VP/Director of Business Continuity, Fulton Financial Corporation
Linda Laun, Chief Continuity Architect, IBM

Continuity and Resilience Newcomer
Bradley Hove AMBCI, Consultant, Emergency Response Management Consulting Ltd
Greg Greenwald, BCM Consultant, Lootok, Ltd
Bryan Weisbard, Head of Threat Intelligence, Investigations & Business Continuity, Twitter

Continuity and Resilience Professional (Public Sector)
Nina White, Business Continuity Manager, Talmer Bank and Trust
Ira Tannenbaum, Assistant Commissioner for Public/Private Initiatives, New York City Office of Emergency Management

Continuity and Resilience Team
Aon Business Continuity Team, Global/Americas Team
The Devry Online Service (DOS) Core Business Continuity Team
Aon’s Global Business Continuity Management Team
Health Partners Plan (HPP) Business Continuity Team
CBRE Business Continuity Management Team – Americas

Continuity and Resilience Provider (Service/Product)
Premier Continuum Inc ParaSolution BCM Software
Fusion Risk Management Inc, and the Fusion Framework BCM Software
xMatters Inc
AtHoc, a division of Blackberry
Strategic BCP® ResilienceONE® BCM Software

Continuity and Resilience Innovation
The Everbridge platform
Mars, Resiliency Summits, #WeGotThis, BCM Portal
Fairchild Consulting, FairchildApp

Most Effective Recovery
Aon Global Business Continuity Management Team – Americas

Industry Personality
Frank Leonetti FBCI
Howard Mannella MBCI
Lynnda Nelson
Pauline Williams-Banta
Brian Zawada FBCI

More details

Businesses often overlook the usefulness of service management tools that they already have at their fingertips as a way to streamline and effectively manage internal risk processes. Dean Coleman looks at some practical steps that businesses can take to utilise these for effective IT risk management.

IT is playing an increasingly prominent role within every organization and IT service managers need to be keenly aware of the importance of risk management to ensure they have control and influence over any issues likely to get in the way of the smooth running of the business. Technology is now so pivotal to the healthy running of the majority of companies that IT risk management has become a key discussion point on the corporate agenda of many boardrooms, as downtime of critical systems – whether due to accident or malicious intent – threatens to undermine the productivity of the entire organization. Yet, despite its importance, many organizations still use manual spreadsheets to manage risk. These spreadsheets are not dynamically linked to the IT estate, so they lack any ability to equate theoretical IT risk with the actual situation on the ground.

Businesses often overlook the usefulness of service management tools that they already have at their fingertips as a way to streamline and effectively manage internal risk processes. Many service management tools are likely to already have a database of IT assets and users, so it makes sense to link IT risk management to your overall service management capabilities. That being so, what are the practical steps that businesses can take to wrest back control of their IT assets and ensure that problems in one area of the business don’t have a knock-on effect on other functions?
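
As a rough illustration of what linking risk management to an existing asset database might look like, and not of any particular service management product, the Python sketch below joins a hypothetical asset inventory against a risk register and flags critical assets that have no recorded risk entry.

    # Sketch: linking a service management asset inventory to a risk register.
    # The data structures are hypothetical; real tools would expose this via APIs.
    assets = [
        {"id": "srv-001", "name": "payroll-db", "criticality": "high"},
        {"id": "srv-002", "name": "intranet-web", "criticality": "low"},
        {"id": "srv-003", "name": "erp-app", "criticality": "high"},
    ]

    risk_register = {
        "srv-001": {"risk": "single power feed", "owner": "facilities"},
        # srv-003 has no entry: a gap between theoretical risk and the real estate
    }

    def uncovered_critical_assets(assets, register):
        """Return critical assets with no corresponding risk register entry."""
        return [a for a in assets
                if a["criticality"] == "high" and a["id"] not in register]

    for gap in uncovered_critical_assets(assets, risk_register):
        print(f"No risk entry for critical asset {gap['name']} ({gap['id']})")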



Friday, 04 March 2016 00:00

People and Resilience (Part Two)

In the second article in a three-part series exploring ‘people and resilience’, Paul Kudray looks at a common misconception: that when disaster strikes employees will automatically rally round and play their part in helping the organization recover.

I’m sure you’re familiar with the phrase: “I hate my job!” You may even have used it: possibly on more than one occasion.

You and I know there are people who have dream jobs; they work in their favourite place, doing the things they love to do, and they even have great bosses! Yes, it happens!

The employers they work for may even have a great resilience plan. Everyone in the organization may be aware of it and each person may know what to do when the proverbial hits the fan. In short it’s a fantastic resilient organization, based around the people who make it work.



The current approach to business continuity, which generally focusses on ‘what could happen’, has significant limitations, says Graham Goodenough. In this article he explains why this is the case; and suggests a better, more positive, method.


The term ‘resilient enterprise’, as used in this article, applies to a business that has been purposely designed to adapt to significant increases, or decreases, in production or service demand from the market it serves, and that can make those adjustments within an acceptable time frame that is not financially detrimental to the business. Building this ability into critical activities, for both normal operations and unplanned disruptions, gives the organization the flexibility to deliver capacity as needed and to maintain business income, whatever the cause of the disruption.



US government agencies are no longer allowed to build or expand data centers unless they prove to the Office of the Federal CIO that it’s absolutely necessary, according to a new memo released by the White House’s Office of Management and Budget.

The new Data Center Optimization Initiative replaces the now six-year-old Federal Data Center Consolidation Initiative and has much stricter goals and additional rules meant to reduce the government’s sprawling data center inventory and the amount of money it takes to maintain it.

The government spent about $5.4 billion on physical data centers in fiscal year 2014. The new initiative’s goals are to reduce data center spending by $270 million in 2016, by $460 million in 2017, and by $630 million in 2018, for a total of $1.36 billion in savings over the next three years.



Mergers generally fail and large mergers generally fail spectacularly, so I get why many of my peers think the Dell/EMC merger will be a train wreck. They also thought Dell couldn’t be taken private because, generally, for a company like Dell, the path would be virtually impossible particularly if you had a corporate raider like Carl Icahn working against you.  

But here’s the thing: I’ve spent a lot of time looking at merger processes. I ran a merger clean-up team when I was at IBM (and I was really busy), and I’ve looked at Dell’s process in depth, one that was initially developed at IBM but refined at Dell. I learned there is nothing like it. Granted, a large merger will stress any process but, given EMC’s structure and Dell’s approach, there should be little customer impact for 12 to 18 months, and much of that initial impact should be positive.

In most every other large merger, there would be a reason to run for the hills, largely because most large companies don’t want to learn from their mistakes and would rather focus on shooting the people that made them. But Dell is very different. It actually has an incredibly successful merger process that, for some screwy reason, no one else seems to want to emulate.

I’ll compare the HP/Compaq merger that I thought was idiotic to the Dell/EMC merger, so you get a sense of what makes this different.



(TNS) - A new study has provided the first evidence that the Zika virus may be the cause for a spike in cases of a severe neurological disorder called the Guillain-Barré syndrome (GBS).

The study, published in the medical journal Lancet, showed 42 patients developed symptoms of GBS, which causes the immune system to attack parts of the nervous system.

The neurological symptoms include acute motor axonal neuropathy, which is characterised by severe paralysis. It also caused respiratory problems in about a third of the patients who needed medical assistance to breathe properly, the report said.

However, none of the patient-subjects died.



Whether they love police, hate police, or anything in between, most community members want to know more about police. Police themselves, on the other hand, are hesitant to share information, for good reason, at least most of the time. The key to successful sharing, especially with tools collectively known as social media, is to find the balance between letting people have more information and not giving out so much it causes problems.

Generally, when talking about community engagement in social media, the typical advice is to follow people, provide good content, answer questions, be transparent, and so on. These basics are important, essential even. But where the rubber meets the road is what lies a mile or so beyond the next curve.



When data center operators examine data center cost, they generally look at high-level metrics, such as gigabytes of storage or Power Usage Effectiveness. These do matter of course, but to get to the real cost, you have to zero in on lower-level components.

Do you know how much the flash drives on your servers cost? How about the CPUs or DRAM cards? A different vendor supplies each one of those components, and they make a big difference in total cost of ownership of every data center.

Web-scale data center operators like Google and Facebook learned this lesson long ago. For years, they have been re-examining each individual component of their IT gear, looking for ways to get it cheaper.



Cloud computing offers a wide range of solutions to companies, and online backup is one of the best: it keeps important data safe from disruptions and disasters, and provides a way to keep applications and data off-site in a highly secured environment.

There are great advantages to using backup technology, such as automation functionality and encrypted data. Some business experts state that the cloud is not a secure place for important data; however, online backups can be encrypted to keep data safe. External hard drive storage, by contrast, is not secure, and could be stolen or misplaced. Online backup is also reasonably priced, giving companies an opportunity to keep important files and documents safe from disruption and disaster at a reasonable rate.



Promoting Business Continuity Awareness Week

The countdown has begun for Business Continuity Awareness Week (16-20 May 2016). We are only a few months away, and we have now published the posters that will be used to promote the week. The theme for BCAW this year is return on investment, so all four posters display the message ‘discover the value of business continuity’, as ultimately we want to get the message across that business continuity can have benefits other than the obvious returns when disaster strikes.

The posters are free to download either as a PDF in various shapes and sizes, or as a JPG. They are also available with or without bleeds, depending on whether you would like to print from your own computer or have them professionally printed. Make sure you display these posters prominently in your workplace or any other suitable location, and share the image versions through your social media channels to really spread the message.

Competition Time

Business Continuity Awareness Week is your opportunity to help raise awareness of business continuity and highlight the value of your profession, so make sure you get involved. Ways you can take part include, but are not limited to: hosting a webinar, publishing a paper, recording a video, or writing a blog, all of which should demonstrate the theme for the week.

As an added incentive, all those who post a blog on the BC Eye blog site will be entered into a prize draw to win £250 worth of Amazon vouchers.

Get your thinking caps on and get creating. For further information, just contact Andrew Scott.

Good luck!

Thursday, 03 March 2016 00:00

The Life Cycle of a Data Center

Your data center is alive.

It is a living, breathing, and sometimes even growing entity that constantly must adapt to change. The length of its life depends on use, design, build, and operation.

Equipment will be replaced, changed, and modified to meet your data center’s individual specification, balancing total cost of ownership against risk and redundancy measures.

Just as with a human being, the individual care and love you show your data center can lengthen the life of your partnership.



It takes a long time and a lot of expensive bandwidth to push 100 terabytes of data across a Wide Area Network.
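A rough back-of-the-envelope calculation shows why (assuming decimal units, a perfectly utilised link and no protocol overhead):

```python
# Illustrative arithmetic: time to move 100 TB over a WAN at various line rates.
DATA_BITS = 100 * 10**12 * 8  # 100 terabytes expressed in bits (decimal units)

for label, megabits_per_second in [("100 Mbps", 100), ("1 Gbps", 1_000), ("10 Gbps", 10_000)]:
    seconds = DATA_BITS / (megabits_per_second * 10**6)
    print(f"{label}: ~{seconds / 86400:.1f} days at full, uninterrupted line rate")
```

Even a sustained 1 Gbps link would be tied up for more than nine days, which is the gap the shipping approach described next is designed to close.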

Amazon’s answer to moving those kinds of data volumes from customer data centers to its cloud data centers has been to ship its customers high-capacity storage servers. The customer uploads their data to the server, which then gets shipped back to Amazon for upload to the cloud.

Amazon announced the service last year. Today, the company started offering the same service, but in reverse. If a customer has accumulated a lot of data in their AWS environment and wants to move it elsewhere, Amazon will put it on its Snowball data shipping servers and ship them to the customer.



Think you already have enough on your plate, dealing with Wi-Fi and other network security in your organisation? You may have to add lighting to the list as well. A French start-up, Oledcomm, has been developing Internet by light, cunningly christened (you guessed it) Li-Fi. The technology is based on the concept of light flashes from an LED, rather like Morse Code on steroids. According to its inventors, Li-Fi also has at least two sizable advantages in terms of connectivity that, hopefully, will not be undermined by the existence of yet another attack vector.



Digital business requires digital business continuity

In the latest edition of the Business Continuity Institute's Working Paper Series, Rudy Muls MBCI draws on his extensive experience to examine cyber resilience and its implications for business continuity practice. He also demonstrates possible opportunities for business continuity professionals to collaborate with their information security counterparts.

Cyber resilience is a topic of interest among practitioners as evidenced by the wealth of research on the subject. The BCI's most recent Horizon Scan Report revealed that cyber attacks and data breaches top the list of threats practitioners are most concerned about. The results of a global survey showed that 85% and 80% respectively expressed concern about the prospect of these threats materialising.

The paper concludes that there must be greater coordination and collaboration between those working in business continuity and information security, going as far as to say there could even be integration between the two functions. Furthermore, there should be more exercises to make staff and management aware of the cyber risk and how to react to incidents, as the involvement of all lines and areas of business management early in the incident management process is very important.

To download your free copy of ‘Digital business requires digital business continuity’, click here.

During military service (Reserve Captain) and as a voluntary fireman, Rudy Muls MBCI has gained a wealth of experience in crisis situations, and provided training in rescue and life saving techniques. During his professional career within an international financial institution he has been employed in different IT related positions, the most fulfilling of which started in 2010 when he was able to combine all his experience as business continuity manager and information security officer.

DRJ Spring World 2016

Disaster Recovery Journal Spring World 2016 is taking place March 13-16, 2016 at Disney’s Coronado Springs Resort in Orlando, FL. We’re looking forward to another amazing show with numerous educational sessions and awesome people!

We have a lot planned during DRJ Spring World, and we hope you’ll join us:

Please take a look below for more details. We look forward to seeing you soon!


Monday, March 14, 2016  |  12:00-1:15 PM  |  Coronado E  |  Lunch Provided
Speakers: Brian Zawada and Dustin Mackie, Avalution Consulting

Reserve your seat in advance: bccatalyst.com/drj

Catalyst makes business continuity and IT disaster recovery planning easy and repeatable for every organization. Join us to learn how Catalyst:

  • Delivers the fastest implementation on the market
  • Covers the ENTIRE continuity lifecycle
  • Generates truly insightful (and automatic!) program metrics
  • Saves you time by automating all administrative tasks
  • Provides the lowest total cost of ownership


BOOTHS 707 & 709
Stop by our booths during exhibit hours to meet our team, learn about our business continuity and IT disaster recovery consulting and software solutions, and enter for a chance to win a hoverboard (don’t worry – we’ll ship it home for the winner)!

Want to learn more about our products and services before the show? Check out:



Solutions Track 3

When: Sunday, March 13, 2016  |  4:00-5:00 PM
Speakers: Michael Bratton and Bill DiMartini, Avalution Consulting

Many organizations design IT Disaster Recovery solutions like they’re booking a one-way flight – able to get to their destination but without a plan for how they’ll get back home. Even if plans include procedures to return to the restored data center, they are rarely tested and validated. This session is for you if you are responsible for the development and maintenance of your organization’s IT Disaster Recovery Plan or for auditing IT Disaster Recovery Programs.

General Session 3

When: Monday, March 14, 2016  |  10:30-11:45 AM
Moderator: Tracey Forbes Rice, Fusion Risk Management
Panelists: Brian Zawada, Avalution Consulting, Ann Pickren, MIR3, John Jackson, Fusion Risk Management

Where will continuity be in 10 years? What‘s new in the continuity tool box? This panel of subject matter experts consisting of DRJ’s executive council members will be discussing the BCI 20/20 visionary think tank project and what the future holds for the professionals of this industry. Discussion will include eliminating blind spots, and recognizing the risk posed by near and far-sighted thinking. The panel will be thinking outside the box with the goal of developing a 360 degree view of risk in today’s leading organizations. Join this lively discussion to form a vision of what the future holds for this profession.

Senior Advanced Track 2

When: Monday, March 14, 2016  |  2:45-3:45 PM
Speakers: Brian Zawada, Avalution Consulting, John Jackson, Fusion Risk Management

The Horizon Scan Survey seeks to consolidate the assessment of near-term business threats and uncertainties based on in-house analysis of business continuity (BC) practitioners worldwide. This session will present and discuss the results of the survey.

Tuesday, 01 March 2016 00:00

How Do You Save Government IT?

A majority of IT projects undertaken by government fail to deliver satisfactory results, cost more than anticipated or take longer to implement than planned. How often do we, as project managers and government employees, hear things like this: “It’s software development; it’ll take as long as it takes.” “I know it’s what I told you to build, but it’s not what I need.” “Tell me again why it’s going to cost an additional $50,000.”

Faced with tighter budgets, increasing expectations and closer public scrutiny, government IT organizations are under extreme pressure to deliver technology solutions that meet the needs of their users quickly and at low cost. Where the traditional project management approach has failed, agencies need to find alternatives to address these heightened expectations. Agile development has sparked the interest of public-sector change-makers as a way to save government IT from the debacle of skyrocketing costs and redundant systems.

For public agencies in particular, agile is a new way of approaching project delivery, yet many of the concepts it employs are not particularly new. They have been used in software development under names like prototyping, extreme programming or rapid application development. Frameworks like Scrum bring a structured methodology to these same concepts. It’s about breaking up large, complex projects into easily digested pieces and routinely getting feedback to make sure what is being delivered is in line with what’s needed.



The storage industry has historically been left out of the conversation when discussing the innovative and ground-breaking feats coming out of the technology world. Now, that focus is shifting thanks to three emerging trends in the enterprise: the move away from integrated systems to software-on-commodity hardware architectures, the focus on utilization rates of physical resources, and the increasing need to support millions of individual workloads.

Companies such as Facebook, Google and Amazon have devoted massive resources to build and maintain customized data center infrastructures from the ground up. In doing so, these companies have realized tremendous levels of scalability, flexibility, and efficiency. Enterprises today are experiencing large and growing amounts of data storage requirements and are focused on achieving the same benefits, driving these trends.



IBM Corp. announced plans today to acquire Cambridge, Mass.-based Resilient Systems, Inc., a privately held cybersecurity firm with 100 employees.

Resilient specializes in incident response, which helps IT security teams bolster their defenses against data breaches. The Resilient incident response platform is deployed in more than 100 of the Fortune 500 corporations – which is IBM’s sweet spot.

“We are thrilled with our plans to have the Resilient team join IBM Security,” said Marc van Zadelhoff, general manager, IBM Security. “The Resilient team includes some of the best security talent in the industry, along with leading products that enable clients to automate and consistently manage all aspects of responding to a security incident.”



(TNS) - The last damaging earthquake in Washington struck 15 years ago, on Feb. 28, 2001.

The next one is scheduled for June 7.

The ground isn’t expected to actually shake this spring. But nearly 6,000 emergency and military personnel will pretend it is during a four-day exercise to test response to a seismic event that will dwarf the 2001 Nisqually quake: A Cascadia megaquake and tsunami.

Called “Cascadia Rising,” the exercise will be the biggest ever conducted in the Pacific Northwest. Which is fitting, because a rupture on the offshore fault called the Cascadia Subduction Zone could be the biggest natural disaster in U.S. history.



Renewable energy is tricky to use, and it’s even trickier to use in data centers, which have to be running around the clock, regardless of whether or not the sun is shining or the wind is blowing.

For data center operators that have turned to renewable energy, the three answers have been a) using a combination of renewable generation and energy storage to supplement a data center’s power supply, not replace it; b) investing in renewable energy generation for the same grid that feeds the data center – the grid that also has coal, nuclear, and other traditional energy sources; and c) simply buying Renewable Energy Credits equivalent to some or all energy a data center consumes.

Researchers behind an experimental project in Massachusetts hope to push the progress further by studying, over time, the performance of a solar-powered micro data center launched this month. The test bed is called Mass Net Zero Data Center, or MassNZ. The project’s goal is to help researchers understand how to reduce data center energy consumption and increase data centers’ ability to use renewable energy.



What do you think happens when the computer reservation system of an airline company crashes? Well, a major airline experienced that exact situation last September – watch this three-minute video and learn about the domino effect.

When a problem occurs with an airline computer system, it creates a ripple effect that can quickly become a real mess as passengers are stuck in airports. The airline will soon order that all aircraft be grounded. Passengers will start complaining and calling into the reservation desk to book other flights. In addition, labor laws will prevent the crew from working or flying.



The RSA security conference is being held this week in San Francisco where security pros come together to discuss strategy. IBM made several security announcements this morning ahead of the conference, headlined by the purchase of Resilient Systems.

Instead of trying to prevent an attack, Resilient gives customers a plan to deal with a breach after it’s happened. While IBM offers pieces for protecting and defending the network, no security system is fool-proof and there will be times when hackers slip through the defenses (or the attack comes from within).

“What happens when an attack happens, which unfortunately has become an inevitability? You need resilience to get back up and running and minimize the damage. There has to be muscle memory of what you will do and how you will react,” Caleb Barlow, VP of security at IBM, told TechCrunch.



WASHINGTON – The Federal Emergency Management Agency (FEMA) is pleased to announce that the application period for the 2016 Individual and Community Preparedness Awards is open. The awards highlight innovative local practices and achievements by individuals and organizations that made outstanding contributions toward making their communities safer, better prepared, and more resilient.

Emergency management is most effective when the entire community is engaged and involved. Everyone, including faith-based organizations, voluntary agencies, the private sector, tribal organizations, youth, people with disabilities and others with access and functional needs, and older adults can make a difference in their communities before, during, and after disasters.

FEMA will review all entries and select the finalists. A distinguished panel of representatives from the emergency management community will then select winners in each of the following categories:

  • Outstanding Citizen Corps Council 
  • Community Preparedness Champions
  • Awareness to Action
  • Technological Innovation
  • Outstanding Achievement in Youth Preparedness
  • Preparing the Whole Community
  • Outstanding Inclusive Initiatives in Emergency Management (new category)
  • Outstanding Private Sector Initiatives (new category)
  • Outstanding Community Emergency Response Team Initiatives
  • Outstanding Citizen Corps Partner Program
  • America’s PrepareAthon! in Action (new category)

Winners will be announced in the fall of 2016 and will be invited as FEMA’s honored guests at a recognition ceremony. The winner of the Preparing the Whole Community category will receive the John D. Solomon Whole Community Preparedness Award.

To be considered for this year’s awards, all submissions must be received by March 28, 2016, at 11:59 p.m. EDT and must feature program activities taking place between January 1, 2015, and March 28, 2016. Applications should be submitted to citizencorps@fema.dhs.gov.

More information about the awards is available at ready.gov/preparedness-awards.


FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.

The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

While most IT organizations are still held accountable for security breaches, many of them are now judged by the way they respond to a breach when one inevitably occurs. To help those IT organizations put a consistent incident response plan in place, IBM today announced at the RSA Security 2016 conference that it has acquired Resilient Systems Inc.

Caleb Barlow, vice president of IBM Security, says Resilient Systems, one of the pioneering vendors in the category, extends IBM’s security portfolio beyond protecting against and detecting threats to include programmatically responding to them when they occur. Instead of wasting days trying to figure out what needs to be done in the event of a breach, Barlow says organizations need to have a plan in place that everyone in the organization can follow. That plan, adds Barlow, needs to cover everything from remediating the breach to informing the media and appropriate government agencies.



Google announced a number of new security features for Gmail users in the enterprise today. Last year, the company launched its Data Loss Prevention (DLP) feature for Google Apps Unlimited users that helps businesses keep sensitive data out of emails. Today, it’s launching the first major update of this service at the RSA Conference in San Francisco.

The DLP feature allows businesses to set rules for what kind of potentially sensitive information is allowed to leave and enter its corporate firewall through email.

The most important new feature here is that DLP for Gmail can now also use optical character recognition to scan attachments for potentially sensitive information (think credit card numbers, driver’s license numbers, social security numbers, etc.) and objectionable words (maybe a swear word or a secret project’s codename).
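As a purely illustrative sketch (not Google’s implementation), a content rule of this kind often boils down to pattern matching plus a sanity check; the example below flags candidate credit card numbers in outbound text using a regular expression and the Luhn checksum:

```python
# Illustrative only: a toy DLP-style content rule that flags candidate
# credit card numbers in outbound text (regex match plus Luhn checksum).
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum used to weed out random digit strings."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    """Return substrings that look like card numbers and pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(match.group())
    return hits

print(find_card_numbers("Order ref 4111 1111 1111 1111, thanks!"))  # flags the Visa test number
```

Real DLP engines layer many such detectors, run them over OCR output from attachments, and let administrators choose whether to warn, quarantine or block the message.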



Tuesday, 01 March 2016 00:00

How IT Decisions Impact Your Data Center

The modern business is directly tied to the capabilities of IT. Most of all, your data center now shapes how you create business goals and entire strategic directives. This means that business leaders and data center facilities managers must work in unison to create a truly cohesive ecosystem.

And decisions and actions on the IT side of the house can have a profound impact on mechanical systems and, in turn, on the operating costs and capacity of the data center.

When all sides of the house collaborate, there are specific benefits to the business and the entire data center environment. Consider these top challenges that collaboration aims to overcome:



Buyers, beware! While a car with one careful previous owner (we’ve all heard that one, right?) may still be a viable purchase proposition, somebody else’s security may be ill-suited to your organisation. Second-hand security can crop up in situations like company mergers and acquisitions. One of the challenges is to see beyond what the other party is telling you. Your prospective business partner may be assuring you with all the honesty in the world that security in its firm covers all requirements. However, what is true for one organisation does not necessarily carry over to another.



Over the many years I’ve been working in a clean room, I’ve grown quite familiar with hard drives and the many pros and cons they can present. Generally speaking, hard drives can be a pretty resistant medium when used correctly and a technology I confidently use for storing my personal files. However, I know bad things can happen to good data, as I have witnessed countless instances of damage and failure in these devices that can cause data loss.

In this post I will focus on physical issues in hard drives (HDDs) as the problems faced by this technology are completely different from those experienced by other alternatives available in the market, such as solid state drives (SSD).



If you are wondering whether a mobile solution would be right for your crisis management plan, start with a look at how much business life has changed in recent years.  Then ask whether your organization is keeping up or lagging behind when it comes to crisis planning.

In the past, it was sufficient to add crisis plans and emergency instructions to company intranets or send them by email. That was a huge improvement over handing executives in the company a binder with the plans.

But now we are well into the twenty-first century, and the whole concept of crisis management has evolved.  Beyond planning for fires, floods, and strikes, organizations must prepare to cope with workplace violence, terrorist attacks, epidemics, data loss, data breaches, reputation damage, and a host of other possibilities that were not even thought about twenty or thirty years ago. Some of these crises will occur with no warning, and reach catastrophic levels in minutes or hours.



Tuesday, 01 March 2016 00:00

HPE Looks to Encrypt Mobile Data

With more data than ever being generated by mobile computing devices, securing that information has become a major challenge for IT organizations that often don’t control either the endpoint or even the network being used to transmit data.

At the RSA Security 2016 conference today, Hewlett-Packard Enterprise (HPE) moved to address that issue with the release of HPE SecureData Mobile, a solution that extends HPE encryption software to devices running Apple iOS and Google Android operating systems.

Chandra Rangan, vice president of marketing for HPE Security, says that given the lack of control most IT organizations have over mobile computing, it’s imperative that they find a way to encrypt data both when it’s at rest and in motion. In fact, a scan of 36,000 Apple iOS and Google Android applications conducted by HPE found that many of these applications routinely collect geolocation and calendar data. That information, notes Rangan, can in turn be used by hackers to enable all kinds of socially engineered attacks. In fact, the desire to get at that data helps explain why in 2015 there were 10,000 new Android threats discovered each day. And while Apple iOS devices benefit from being on a closed network, the number of malware exploits aimed at Apple iOS rose 230 percent in 2015.



Looking to make it simpler and less expensive to back up data, Oracle today unveiled an update to Oracle StorageTek Virtual Storage Manager System software that enables Oracle customers to back up and archive data directly into the Oracle cloud.

That move comes on the heels of acquiring Ravello Systems, a provider of a nested hypervisor technology that makes it simpler to deploy hybrid cloud computing environments, at a cost of $500 million.

Steve Zivanic, vice president of the Storage Business Group at Oracle, says version 7.0 of StorageTek Virtual Storage Manager System makes it possible for IT organizations to back up and archive data from both mainframes and distributed systems to a common public cloud. In the case of the mainframe in particular, the cost savings associated with not having to locally back up data on to a mainframe platform are substantial, says Zivanic.



Tuesday, 01 March 2016 00:00

BCI: The rules of business continuity


Why do we have business continuity management programmes? Is it because we want to make sure our organizations are able to respond to a disruption? Probably yes! It is common sense that we would want to be prepared for any future crisis.

In some cases, however, it is also because there is a legal obligation to do so. Many organizations are tightly regulated depending on what sector they are in or the country they are based in, and therefore must have plans in place to deal with certain situations. Furthermore, the rules and regulations that govern us are often being revised, and sometimes it can be difficult to keep up with which ones are applicable.

There is a solution however. The Business Continuity Institute has published what it believes to be the most comprehensive list of legislation, regulations, standards and guidelines in the field of business continuity management. This list was put together based on information provided by the members of the Institute from all across the world. Some of the items may not relate directly to BCM, and should not be interpreted as being specifically designed for the industry, but rather they contain sections that could be useful to a BCM professional.

The ‘BCM Legislations, Regulations, Standards and Good Practice’ document breaks the list down by country and for each entry provides a brief summary of what the regulation entails, which industries it applies to, what the legal status of it is, who has authority for it and, of course, a link to the full document itself.

The BCI has done its best to check the validity of these details but takes no responsibility for their accuracy and currency at any particular time or in any particular circumstances. The BCI is reliant on those working in the industry to provide updates to this document, so if you do come across any inaccuracies then please contact Patrick Alcantara and advise him of the required updates.

(TNS) - For Harvey County Sheriff T. Walton and Community Chaplain Jason Reynolds, the past four days have been a blur.

While Walton was tasked with responding to a very dangerous situation, Reynolds was tasked with supporting first responders like Walton and all the others who showed up immediately at the mass shooting at Hesston’s Excel Industries, where four people, including the shooter, were killed Thursday and 14 others injured.

Finally, Monday was an opportunity for the two men to sit side-by-side and speak briefly of what they experienced.

For Walton, the tragedy began unfolding as he learned of a shooting victim near 12th and Meridian in Newton. As he was dealing with that incident, another 911 call came through.

“Everyone is coming to me and I hear of more shootings on the radio. I am trying to figure this out,” Walton said.



Extending security to mobile devices and increasing the resilience of the enterprise against hackers are the two big moves Hewlett-Packard Enterprise will be announcing today at the RSA Conference in San Francisco.

The announcements mark a change of thinking at HPE, as the company wants to do a better job of weaving security into its service offerings and of responding to security issues "at machine speed," according to Chandra Rangan, vice president of marketing for HPE Security Products.

The company redefined the issues of today's threat landscape in its HPE Security Research Cyber Risk Report 2016 Report. Looking at mobility threats, HPE used its Fortify on Demand threat assessment tool to scan more than 36,000 iOS and Android apps for needless data collection. Nearly half the apps logged geo-location, even though they didn't need to. Nearly half of all game and weather apps collected appointment data, even though that information is not needed, either. Analytics frameworks used in 60% of all mobile apps can store information that can be vulnerable to hacking. Logging methods can also expose data to hacking.



The growing complexity of today’s enterprise computing environment means critical corporate data is stored in increasingly fragmented and heterogeneous infrastructures. Ensuring all this decentralized data is backed up in case of breach or disaster is a major cause of anxiety for both business executives and senior IT professionals.

That’s because comprehensive data protection is really not core to most people’s jobs – most of you have other things to worry about, and you just hope and pray that the systems you’ve implemented have backed up your data and will recover it in case of a disaster. But you’ve got your fingers crossed because you’re really not that confident that they will.

According to Jason Buffington, principal analyst for data protection at ESG, improving data backup and recovery systems has been a top five IT priority and area of investment for the past several years. That’s because continually-evolving computing infrastructures and production platforms are forcing companies to reexamine their data protection strategies. “When an organization goes from 30 percent virtualized to 70 percent, or from on-premises email servers to Office 365 in the cloud, these evolutions to your infrastructure drive the need to redefine your data protection strategy,” says Buffington. “Legacy approaches for data protection can’t protect all of the data in these more complex environments.”



A study published Thursday confirmed that the 100,000 tons of methane that flowed out of Aliso Canyon was the largest natural gas leak disaster to be recorded in the United States, and that it doubled the methane emission rate of the entire Los Angeles basin.

Researchers with the University of California's Irvine and Davis campuses, along with the National Oceanic and Atmospheric Administration (NOAA) found during the peak of the leak that "enough methane poured into the air every day to fill a balloon the size of the Rose Bowl."

University officials called it a first-of-its-kind study on the gas leak, published in the journal Science.

"The methane releases were extraordinarily high, the highest we've seen," said UCI atmospheric chemist Donald Blake in a statement. Blake, who has measured air pollutants worldwide for more than 30 years, collected surface air samples near homes in Porter Ranch.



Managing and analyzing big data -- the exponentially growing body of information collected from social media, sensors attached to "things" in the Internet of Things (IoT), structured data, unstructured data, and everything else that can be collected -- has become a massive challenge. To tackle the task, developers have created a new set of open source technologies.

The flagship software, Apache Hadoop, an Apache Software Foundation project, celebrated its 10th anniversary last month. A lot has happened in those 10 years. Many other technologies are now also a part of the big data and Hadoop ecosystem, mostly within the Apache Software Foundation, too.

Spark, Hive, HBase, and Storm are among the options developers and organizations are using to create big data technologies and contribute them to the open source community for further development and adoption.



The Zika virus, a mosquito-borne virus linked to neurological birth disorders, continues to be a serious problem worldwide. More cases in the US are being announced every day, with 14 new cases of sexually transmitted Zika virus announced by the CDC just this week, several of them among pregnant women. The CDC wrote in a recent statement, “These new reports suggest sexual transmission may be a more likely means of transmission for Zika virus than previously considered.”

As the Zika outbreak progresses, Zika preparedness and planning becomes a critical talking point for leaders in the public and private sectors. Questions such as how to handle an infected employee in the office, or where to direct citizens so they can acquire accurate, up-to-date information, need to be addressed and answered to ensure the highest level of citizen and employee safety through Zika preparedness.



Monday, 29 February 2016 00:00

Disaster Outreach, Hollywood-Style

(TNS) - The county’s emergency planning agency is betting that moviegoers, after watching a 300-foot tsunami barrel through a Norwegian fjord toward a small town, will be more receptive to information about disaster preparedness.

The Clark Regional Emergency Services Agency will host a screening of the disaster thriller The Wave at 6 p.m. March 4 at Kiggins Theatre in Vancouver. It’s the first of what agency Emergency Management Coordinator Eric Frank hopes will be a recurring disaster movie night.

A movie night might draw a bigger and different crowd than the agency’s other modes of outreach, he said. “We do a lot of events every single year, but we know we’re still missing some demographics in there.”



Living with Climate Change: How Communities Are Surviving and Thriving in a Changing Climate (Jane Bullock, George Haddow, Kim Haddow, Damon Coppola) is a wide-ranging look at many aspects of past and present disaster mitigation efforts across the United States. The authors look at these efforts through the lens of climate change, and they understand that the debate on the cause of a warming climate is not accepted in all political circles. The book includes a number of case studies that look specifically at the previous benefits of the FEMA Project Impact program.

The body of the text comes primarily from a wide selection of contributors with direct experience in academia, as well as from emergency management practitioners. While the book’s anticipated primary use might be as a classroom text for undergraduate and graduate students pursuing degrees in emergency management, it also has broad application for practicing emergency managers at the local, state and federal levels. We are entering a new era where climate impacts are beginning to reveal themselves. Emergency managers will need a resource that documents what has worked in the past and can be applied to a new and undetermined future in which climate change exacerbates what were previously considered rare weather phenomena.

With new and more aggressive hazards come the need to understand terminology that is being used in different contexts. The two-page monograph by Cooper Martin, in which he tries to explain the difference between the terms “sustainability” and “resilience,” is quite helpful.



(TNS) - Jakki Lewis was nearing the end of her first day of work at Excel Industries on Thursday, when she heard gunshots.

"I never did see him. We just heard bullets," Lewis said. "He was running all over the plant, chasing people."

Another employee, a man armed with a long gun and a pistol, pulled into the parking lot of the plant where about 1,000 people work, manufacturing lawn mowers, and started shooting. He walked inside, where he shot three people near the front office, Harvey County Sheriff T. Walton said later.

After hearing shots, Jeff Lusk, who was at Excel for an interview at 5 p.m., said he saw the shooter and then got under a desk.



(TNS) -- Area hospitals are riddled with cybersecurity flaws that could allow attackers to hack into medical devices and kill patients, a team of Baltimore-based researchers has concluded after a two-year investigation.

Hackers at Independent Security Evaluators broke into one hospital's systems remotely to take control of several patient monitors, which would let an attacker disable alarms or display false information.

The team strolled into one hospital's lobby and used an easily accessible kiosk to commandeer computer systems that track medicine delivery and bloodwork requests — more opportunities for malicious hackers to create mayhem.

The firm worked with the knowledge and cooperation of a dozen hospitals, including hospitals in Baltimore, Towson and Washington. They did not release the names of the hospitals.



Iron Mountain, the nearly 70-year-old “information management” company that grew out of a big early 20th century underground mushroom growing operation, has joined a White House program created to push companies and government agencies to improve their data center energy efficiency.

President Barack Obama’s administration rolled out the Better Buildings Initiative in parallel with its clean energy investment program in 2011. The Better Buildings Challenge, one part of the initiative, called on companies and agencies to make specific energy efficiency improvement commitments for their facilities in return for access to some technical assistance from the government, shared best practices, and, of course, good publicity.

So far, Boston-based Iron Mountain is one of 11 private-sector data center operators to have accepted the challenge, pledging to reduce energy intensity of eight of its data centers by 20 percent in 10 years. The others are eBay, Facebook, Intel, Intuit, Home Depot, Staples, and Schneider Electric, as well as data center providers Digital Realty Trust, CoreSite Realty, and Sabey Data Centers.



Working from home and being able to take work out of the office makes working life easier, but it can be a nightmare for data privacy. With an estimated 56 percent of employees reporting that they either very frequently or frequently stored sensitive data on their laptops, smartphones, tablets, and other mobile devices, the chances of confidential information getting lost or into the wrong hands are very high.

Bring-Your-Own-Device (BYOD) is part of the modern workplace. It’s becoming more and more normal for business information to be stored in or accessed by devices that are not fully controlled by IT administrators, and the possibility of data breaches caused by personal devices that aren’t properly protected is also on the rise.

Protecting business information on mobile devices can be as simple as encrypting files and/or password-protecting the device. That won’t stop devices being lost, but IT administrators will be able to selectively remove sensitive encrypted data, and the chances of someone using stolen information maliciously are much smaller if it is not possible to get straight into any files that may be sensitive. The issue is clouded, however, when the device actually belongs to the employee and not the business.
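As a minimal sketch of the ‘encrypt the files’ option (illustrative only; it assumes the third-party cryptography package, and the passphrase and file contents are invented), business data can be sealed with a key derived from a passphrase so that a lost or stolen device yields only ciphertext:

```python
# Illustrative only: encrypt a sensitive file with a key derived from a
# passphrase, so losing the device does not mean losing the plaintext.
# Requires the third-party "cryptography" package (pip install cryptography).
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive a Fernet-compatible key from a passphrase using PBKDF2."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

salt = os.urandom(16)                      # store alongside the ciphertext
key = key_from_passphrase("correct horse battery staple", salt)

token = Fernet(key).encrypt(b"Quarterly salary review - confidential")
print(Fernet(key).decrypt(token))          # only recoverable with the passphrase
```

In practice the key management, and the ability to selectively wipe the encrypted data, would sit with the organisation’s mobile device management tooling rather than in an ad hoc script, but the principle is the same.
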
Monday, 29 February 2016 00:00

Data Breach Planning and Preparation

Responding to a data breach is one of the more challenging events any company can face.  On the one hand, a data breach requires nearly instantaneous decision making.  Which servers are affected and should be removed from the network (but not shut off)?  Who should be notified?  Should law enforcement, a regulator or the insurer be contacted first?  When should the breach be made public, if at all?  What experts should be engaged, how much do their services cost and can that budget be approved on a Sunday night?  And what is the home phone number for the Director of IT?

Even for the most agile of companies, informed and responsible decision making requires the input of an array of constituencies, some of whom rarely, if ever, have been in the same room together. The classic example is the C-Suite and IT personnel.  The executives may have a difficult time understanding the scope of the breach, and the language IT speaks is decidedly not the language of the boardroom. The legal requirements can be contradictory—for example, a regulator (or the FBI) may ask that you notify no one, but your insurer may require notice within 10 days to trigger coverage.  The scope of the breach may be unknown, resulting in over-protection or even paralysis based on the lack of information.  These complications multiply with the size and public profile of the organization.
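One way to avoid that Sunday-night scramble is to encode the answers to those questions before the incident occurs. The sketch below is a hypothetical, simplified runbook (the contacts and deadlines are invented for illustration) that records who must be notified and by when, relative to the moment the breach is discovered:

```python
# Illustrative only: a minimal breach-notification runbook encoded as data,
# so "who do we call, and by when?" is decided before the incident.
from datetime import datetime, timedelta

RUNBOOK = [
    # stakeholder,              deadline after discovery,  contact (hypothetical)
    ("Incident response lead",  timedelta(hours=1),        "ir-lead@example.com"),
    ("Outside counsel",         timedelta(hours=4),        "counsel@example.com"),
    ("Regulator (if required)", timedelta(hours=72),       "via outside counsel"),
    ("Cyber insurer",           timedelta(days=10),        "claims@example-insurer.com"),
]

def notification_deadlines(discovered_at: datetime):
    """Return (stakeholder, deadline, contact) tuples, soonest deadline first."""
    rows = [(name, discovered_at + delta, contact) for name, delta, contact in RUNBOOK]
    return sorted(rows, key=lambda row: row[1])

discovered = datetime(2016, 2, 28, 21, 30)  # the proverbial Sunday night
for name, deadline, contact in notification_deadlines(discovered):
    print(f"{name}: notify by {deadline:%Y-%m-%d %H:%M} ({contact})")
```

The point is not the code but the discipline: deadlines such as an insurer’s notice window are captured in advance, so they are not discovered for the first time in the middle of the breach.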



Small businesses are bracing for another year of costly compliance change and complexity from Washington, D.C. While they expect a cascade of regulations, their focus is on three priorities—the Affordable Care Act, Fair Labor Standards Act overtime regulations and mandatory paid family and medical leave.



Storage is one of the hottest IT topics today. Acquisitions are happening regularly, as more users are moving to flash and new types of storage controller ecosystems. We’re seeing powerful hybrid systems emerge and even more impact around extending environments to cloud storage. Throughout all of this, organizations must understand how to utilize these new types of storage resources, and where they apply to their data centers.

The challenge to virtualization and storage engineers is this: How do you manage and work with all the new storage capabilities? Even more important, how can you dynamically manage workload storage requirements within a virtual environment?



We sat down with VMware CEO Pat Gelsinger during the 2016 Mobile World Congress to learn more about the company's strategic partnership with IBM. Gelsinger also opened up about how the Dell-EMC deal has been affecting VMware's business, and shared an update on partner relationships.

BARCELONA – VMware's latest strategic partnership with IBM, the challenges it's faced as part of the Dell-EMC merger, and the status of partner relationships were among the topics discussed by VMware CEO Pat Gelsinger during an interview with InformationWeek at Mobile World Congress here.

On Feb. 22, IBM and VMware announced a strategic partnership that aims to enable enterprise customers to easily extend their existing workloads, as they are, from their on-premises software-defined data center to the cloud. As part of the deal, according to Gelsinger, IBM is "taking the full set of VMware technologies -- vSphere, NSX, plus our storage, plus our management -- and delivering that full set to the IBM cloud customers. IBM as an enterprise cloud provider is very significant, with 45 data centers worldwide, and they are making very vast investments into that strategy."



Friday, 26 February 2016 00:00

Security as High as the Cloud

Over the past several years, the cloud-based software-as-a-service (SaaS) model has proven to be a popular choice for enterprise applications, delivering efficiencies and value to organizations in many ways. Chief among these benefits are avoiding the major undertaking and licensing costs of deploying business-critical software across the organization and relieving IT of the burdens typically associated with maintaining on-premises software—including performing upgrades, installing patches and managing availability. Additionally, cloud-based solutions can enhance flexibility and scalability for enterprise applications and workloads. Of course, the benefits to be gained from adopting SaaS solutions in the enterprise must be balanced against potential risks. Exploring the path to ensuring your cloud applications are highly secure needs to be top priority.



(TNS) - McLean Fiscal Court approved the purchase of a critical communication service that is expected to help emergency management personnel keep the public better informed and alert.

The court approved the purchase of AlertSense, a public alert system that Emergency Management Director David Sunn said he believes could ultimately be a money saver for the county.

In the event of a critically dangerous event such as a hazardous material spill, the fire department, Sunn said, would be able to use AlertSense to determine a certain radius around the spill and send automatic phone calls or text messages to residents within the radius. That's important, he added, since the county includes vast portions of rural land where communication can be scarce.



(TNS) - Cedar Rapids Mayor Ron Corbett said Wednesday officials are bracing for the increasing possibility that new federal flood protection money, which once seemed locked in, will never arrive.

At stake could be $70 million to $80 million for flood walls, levees and pump stations to protect low-lying areas from rising tides on the east bank of the Cedar River. Congress authorized $73 million in spending in 2014, but never appropriated the money.

“We are in serious risk of never being funded,” Mayor Ron Corbett said during his State of the City address.

The sentiment marks a transition for a city rocked by flooding in 2008 from hopeful waiting to wondering if it’s time to plot a Plan B. Eight years later, Cedar Rapids still is recovering.



In his final budget proposal, President Obama is asking for an increase in spending on cybersecurity -- $19 billion, which is $5 billion more than last year. The requested increase is a response to the rise in cybersecurity threats being made against government agencies.

The budget request follows a trend as we’re seeing more organizations bumping up their cybersecurity budgets.  In fact, estimates are that cybersecurity spending will continue to rise, with expectations of more than $170 billion spent on security by 2020.

But is all this spending actually doing anything to improve cybersecurity? A new study from Venafi hints that perhaps much of that money is being wasted because it isn’t effective against certain kinds of attack. The problem, according to the CIOs surveyed, is that layered security defenses aren’t able to tell the difference between which keys and certificates should be trusted and which shouldn’t. A whopping 86 percent of those CIOs believe that stolen encryption keys and digital certificates are going to be the next big attack vector, which is a serious problem because, according to Information Age:


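To make the point about keys and certificates concrete, here is a simplified sketch (not Venafi’s product; the pinned fingerprint and host are placeholders) of certificate pinning, where a client refuses to trust a server whose certificate does not match a known-good fingerprint, even if that certificate otherwise validates against a public CA:

```python
# Illustrative only: compare the SHA-256 fingerprint of the certificate a
# server presents with a pinned, known-good value recorded in advance.
import hashlib
import socket
import ssl

# Placeholder fingerprint: in practice this is recorded when the certificate
# is issued and distributed to clients through a trusted channel.
PINNED_SHA256 = {"example.com": "0" * 64}

def fingerprint_matches(host: str, port: int = 443) -> bool:
    """Fetch the server certificate over TLS and check it against the pin."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    observed = hashlib.sha256(der_cert).hexdigest()
    return observed == PINNED_SHA256.get(host)

print(fingerprint_matches("example.com"))  # False until a real pin is recorded
```

A mismatch is a signal to distrust the connection even though the certificate may look valid, which is the distinction the surveyed CIOs say layered defenses fail to make.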

The parade of data center REITs reporting exceptional Q4 and full-year 2015 results has just become even more impressive.

CyrusOne (CONE) crushed results across the board during 2015, including record leasing of 30MW across more than 200,000 square feet of data center space in the fourth quarter alone. The company is expanding capacity across six markets, but its biggest expansion plans are in New Jersey.

CyrusOne CEO Gary Wojtaszek said the flexibility for his customers to lease anywhere from a single rack to 10MW of capacity was a key reason for success in 2015. He also pointed to the company’s ability to deliver data halls in just a few months’ time at less than $7 million per megawatt.



The 2015-16 El Nino season is far from over, and for many parts of the United States, the last couple of months have not been easy.  In fact, the City of Pacifica, CA declared a state of emergency last month after pounding waves and powerful winds caused destruction up and down the coastline [1].  The effects of El Nino span globally too – Stephen O’Brien, a United Nations’ under-secretary-general, said that El Nino has pushed the planet into “uncharted territory.” According to O’Brien, “the impacts, especially on food security, may last as long as two years [2].”

But has this El Nino season gone as planned? Back in December of 2015, we sat down with David Gold and Mike Gauthier of Weather Decision Technologies who took us through several prediction scenarios and preparation techniques for the impending El Nino season.  Fast forward two months and we are back to take a look at how the current season is panning out.  The results may surprise you.



Cybercrime and cyber security attacks hardly seem to be out of the news these days and the threat is growing globally. Be it a major financial institution or an individual, nobody would appear immune to malicious and offensive acts targeting computer networks, infrastructures and personal computer devices. Firms clearly must invest to stay resilient.

Indeed, according to the latest results of the 2016 Global Asset Management and Administration Survey from Linedata, a NYSE Euronext-listed IT vendor providing solutions to the investment management industry around the world, cybercrime is being viewed as the “greatest business disruptor” over the next five years. Alongside this, however, regulation remains a priority for financial firms.

The 20-page survey, which was conducted by the fintech vendor in the fourth quarter of 2015 and canvassed two hundred market participants  either face-to-face at Linedata Exchange events in London and San Francisco or via an online survey, found that more than a third (36%) of respondents were concerned about the threat from cyber criminals.



It’s no secret that Microsoft already has a lot of cloud data centers around the world. And the company is planning to build a whole lot more as it attempts to bite further into Amazon’s stranglehold on the cloud services market.

As it continues to build out its global cloud data center empire, Microsoft has to make sure it’s doing it in the most environmentally responsible way it can. It is one of tech’s biggest names and as such, it is under a lot of scrutiny by environmentalists and the public.

To help the cause, Microsoft has created a new role, dedicated specifically to data center sustainability. Not corporate sustainability, not energy strategy, not data center strategy, but data center sustainability. This week, the company announced it has hired Jim Hanna, who until recently led environmental affairs at Starbucks, to fill that role.



Dell Inc. said Tuesday that it has received U.S. regulatory clearance to proceed with its planned $67 billion purchase of data storage company EMC Corp.

Round Rock, Texas-based Dell Inc. has passed a mandated waiting period under antitrust laws that are intended to allow the U.S. Federal Trade Commission time to review the purchase. If no FTC action is taken, the purchase can proceed.

But the Dell Inc. deal still has to receive regulatory approvals from other jurisdictions and from EMC shareholders. The Reuters news service reported last week that European regulatory approval is expected.



Application containers, namely Docker containers, have been heralded as the great liberators of developers from worrying about infrastructure. Package your app in containers, and it will run in your data center or in somebody’s cloud the same way it runs on your laptop.

That has been the promise of the technology, built on the long-existing concept of Linux containers, around which the San Francisco startup Docker devised its application building, testing, and deployment platform. While developers love the concept of Docker, the IT managers who oversee the infrastructure those applications eventually have to be deployed on have processes, policies, requirements, and tools that weren’t necessarily designed to support the way apps in Docker containers are deployed, or the rapid-fire software release cycle those containers are ultimately meant to enable.

This week, Docker rolled out into general availability its answer to the problem. Docker Datacenter is meant to translate Docker containers and the set of tools for using them for the traditional enterprise IT environment. It is a suite of products that enables the IT organization to stand up an entire Docker container-based application delivery pipeline that is compatible with IT infrastructure, tools, and policies already in place in the enterprise data center.
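
As a concrete illustration of that “build once, run anywhere” workflow, here is a minimal sketch using the Docker Engine’s Python SDK (the docker package). It assumes a local Docker Engine is running; the image tag myapp:latest, the build path, and the exposed port are hypothetical, and the sketch shows the generic container workflow rather than Docker Datacenter itself.

    import docker

    # Connect to the local Docker Engine (assumes Docker is installed and running).
    client = docker.from_env()

    # Build an image from a Dockerfile in the current directory.
    # "myapp:latest" is a hypothetical tag used only for illustration.
    image, build_logs = client.images.build(path=".", tag="myapp:latest")

    # Run the same image anywhere a Docker Engine is available:
    # a laptop, an on-premises data center, or a cloud VM.
    container = client.containers.run(
        "myapp:latest",
        detach=True,
        ports={"8000/tcp": 8000},  # hypothetical application port
    )

    print(container.logs())

    # Clean up when finished.
    container.stop()
    container.remove()

The same image, unchanged, can then be pushed to a registry and deployed by operations teams on enterprise infrastructure, which is the handoff that products like Docker Datacenter are meant to manage.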



You have probably heard the old saying that “a lie will go round the world while truth is pulling its boots on.”  But you may not have considered this: “A crisis can do half its damage before the crisis plan is even found!”

And every minute a crisis goes unmanaged, costs may be piling up.

For example, the longer your people go without clear guidance, or, worse, wait to execute your crisis management plans, the more likely it is that your situation will escalate. And what if the instructions for shutting down a manufacturing line come too late? That expensive equipment could end up a total loss.



CHICAGO — With a forecast that includes the potential for heavy snow and high winds, the U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) Region V encourages everyone to get prepared.

“If you must leave home in dangerous weather conditions, take precautions to get to your destination safely,” FEMA Region V Administrator Andrew Velasquez III said. “Taking simple steps to prepare before the storm not only keeps you safe, but others as well.”

Follow the instructions of state and local officials and listen to local radio or TV stations for updated emergency information. If you are told to stay off the roads, stay home, and when it is safe, check on your neighbors or friends nearby who may need assistance.

Find valuable tips to help you prepare for severe winter weather at www.ready.gov/winter-weather or download the free FEMA app, available for your Android, Apple or Blackberry device. Visit the site or download the app today so you have the information you need to prepare for severe winter weather.

Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema.  Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

Thursday, 25 February 2016 00:00

Security Concerns Continue Amid Cloud Adoption

The Internet of Things (IoT) generates a lot of data, which organizations can store in the cloud. But how are they keeping it all safe?

Many companies are realizing they face this challenge and are ramping up efforts to improve data security as they embrace new platforms, including IoT and cloud-based applications, according to a recent survey conducted by 451 Research.

The survey, sponsored by data and cloud security vendor Vormetric, polled 1,114 senior IT executives, representing companies ranging from $50 million to more than $2 billion in annual sales.



February is American Heart Month. In light of that, it seems only fitting that we should check the pulse of a challenge faced by many in Healthcare IT: disaster recovery.

In a training class several weeks ago, Ryan, an incredibly enthusiastic sales engineer, and I had a conversation about disaster recovery. “Disaster recovery is so much more than the question of, ‘Will I pass the audit?’” he began. “Buildings fall apart, water rises, systems fail, snow falls, power surges,” he explained, making imaginary drawings in the air to emphasize his points. “Anything that stops hospital operations for a period of hours is definitely a disaster.”

“The great thing is that Citrix is on top of it,” he confidently added. Ryan backed that statement with a contrasting tale of two US hospitals – one in Texas that was plagued by human error and another in the Southwest that experienced equipment failure after a power surge.



5 Things You Really Need to Know About Zika Virus

Outbreaks of Zika have been reported in tropical Africa, Southeast Asia, the Pacific Islands, and most recently in the Americas. Because the mosquitoes that spread Zika virus are found throughout the world, it is likely that outbreaks will continue to spread. Here are 5 things that you really need to know about the Zika virus.

Zika is primarily spread through the bite of an infected mosquito.

Many areas in the United States have the type of mosquitoes that can become infected with and spread Zika virus. To date, there have been no reports of Zika being spread by mosquitoes in the continental United States. However, cases have been reported in travelers to the United States. With the recent outbreaks in the Americas, the number of Zika cases among travelers visiting or returning to the United States will likely increase.

These mosquitoes are aggressive daytime biters. They also bite at night. The mosquitoes that spread Zika virus also spread dengue and chikungunya viruses.

The best way to prevent Zika is to prevent mosquito bites.

Protect yourself from mosquitoes by wearing long-sleeved shirts and long pants. Stay in places with air conditioning or that use window and door screens to keep mosquitoes outside.  Sleep under a mosquito bed net if air conditioned or screened rooms are not available or if sleeping outdoors.

Use Environmental Protection Agency (EPA)-registered insect repellents. When used as directed, these insect repellents are proven safe and effective even for pregnant and breastfeeding women.

Do not use insect repellent on babies younger than 2 months old. Dress your child in clothing that covers arms and legs. Cover crib, stroller, and baby carrier with mosquito netting.

Read more about how to protect yourself from mosquito bites.

Infection with Zika during pregnancy may be linked to birth defects in babies.


Zika virus can pass from a mother to the fetus during pregnancy, but we are unsure of how often this occurs. There have been reports of a serious birth defect of the brain called microcephaly (a birth defect in which the size of a baby’s head is smaller than expected for age and sex) in babies of mothers who were infected with Zika virus while pregnant. Additional studies are needed to determine the degree to which Zika is linked with microcephaly. More lab testing and other studies are planned to learn more about the risks of Zika virus infection during pregnancy.

We expect that the course of Zika virus disease in pregnant women is similar to that in the general population. No evidence exists to suggest that pregnant women are more susceptible or experience more severe disease during pregnancy.

Because of the possible association between Zika infection and microcephaly, pregnant women should strictly follow steps to prevent mosquito bites.

Pregnant women should delay travel to areas where Zika is spreading.

Until more is known, CDC recommends that pregnant women consider postponing travel to any area where Zika virus is spreading. If you must travel to one of these areas, talk to your healthcare provider first and strictly follow steps to prevent mosquito bites during the trip.

If you have a male partner who lives in or has traveled to an area where Zika is spreading, either do not have sex or use condoms the right way every time during your pregnancy.

For women trying to get pregnant, before you or your male partner travel, talk to your healthcare provider about your plans to become pregnant and the risk of Zika virus infection. You and your male partner should strictly follow steps to prevent mosquito bites during the trip.

Returning travelers infected with Zika can spread the virus through mosquito bites.


During the first week of infection, Zika virus can be found in the blood and passed from an infected person to a mosquito through mosquito bites. The infected mosquito must live long enough for the virus to multiply and for the mosquito to bite another person.

Protect your family, friends, neighbors, and community! If you have traveled to a country where Zika has been found, make sure you take the same measures to protect yourself from mosquito bites at home as you would while traveling. Wear long-sleeved shirts and long pants, use insect repellent, and stay in places with air conditioning or that use window and door screens to keep mosquitoes outside.

For more information on the Zika virus, and for the latest updates, visit www.cdc.gov/zika.

We’re constantly hearing about how the lack of rain in much of the Southwest has contributed to the worst drought in the history of the region, but the subject of water doesn’t come up much with respect to data centers.

However, it should garner just as much attention—specifically water treatment programs—according to Data Center World speaker Robert O’Donnell, managing partner of Aquanomix.

“The water management program is a huge risk in data centers; one that many facility owners don’t understand or give enough credence to,” he says.



Thursday, 25 February 2016 00:00

The Hybrid Cloud: Your Cloud, Your Way

Cloud computing has become a significant topic of conversation in the technology industry and is being seen as a key delivery mechanism for enabling IT services. Today’s reality is that most organizations already are using some form of cloud because it opens up new opportunities and has become engrained in the fabric of how things are done and how business outcomes are achieved.

Cloud offers a host of service and deployment models, both on- and off-premises, across public, private, and managed clouds. We see some organizations starting with public cloud because of the perceived ease of entry and lower costs. Some groups, such as test and development teams, use public clouds because they need to quickly stand up infrastructure, test and run their applications, and then take them down, and this can’t always be supported by their existing IT team. Other companies, such as startups, use public clouds because they simply don’t have the resources to build, own and manage a private cloud infrastructure today. We’re also seeing a rather significant shift back toward private clouds, which are becoming much easier and quicker to deploy and still come with IT control and peace-of-mind security benefits.

That said, every organization’s cloud is a unique reflection of its business strategies, priorities and needs; and this is why there is a great variation in how companies go about implementing their own specific clouds.



Thursday, 25 February 2016 00:00

Zika Virus Exposes Weaknesses in Public Health

State health officials were heartened when President Barack Obama this month asked Congress for $1.8 billion to combat the spread of the Zika virus because they fear they don't have the resources to fight the potentially debilitating disease on their own.

Budget cuts have left state and local health departments seriously understaffed and, officials say, in a precarious position if the country has to face outbreaks of two or more infectious diseases -- such as Zika, new strains of flu, or the West Nile and Ebola viruses -- at the same time.

"We have been lucky," said James Blumenstock of the Association of State and Territorial Health Officials, of states' and localities' ability to contain the flu, West Nile and Ebola threats of the last five years.



(TNS) - At least three people have died in severe weather in the southern states of the United States, where tornadoes, damaging hail and flash floods left a swath of destruction.

Tornadoes churned across many states, from Louisiana to Georgia, but the most destructive were in Louisiana and Mississippi.

More than 30 people were injured in the storms. Two people died in the hamlet of Convent, Louisiana, after a tornado demolished more than 160 mobile homes.

A third person died in a trailer park in Purvis, Mississippi.

The storm left tens of thousands of people without power in Louisiana, and John Bel Edwards, the state governor, declared a state of emergency in seven parishes.

The powerful storm developed when the jet stream dived across the region on Tuesday. A jet stream is a fast-flowing ribbon of air, blowing high above the Earth's surface, which can dictate the path of storms and can also encourage their development.



The hybrid cloud is going mainstream as more companies seek to capitalize on the benefits of both the private and public cloud.

But this tech transition is not without its sundry challenges, particularly when it comes to security - and that’s where managed service providers can play key roles as customers transform their IT infrastructures.

Many smaller companies view the hybrid cloud as a sensible balance between offloading storage and compute to a public cloud and keeping a firm’s computational services entirely on premises. The good news is that, unlike bigger enterprises, SMEs moving to hybrid clouds won't need to jury-rig older legacy infrastructure - a process that can open security holes in the network. MSPs can steer that migration to the hybrid cloud with "clean" deployments that start from scratch.



A new survey of 1,080 IT professionals conducted by cloud services company Evolve IP indicated the cloud has "gained corporate alignment, increased real business benefits and has near ubiquitous adoption."

Evolve IP's "2016 North American Cloud Adoption Survey" revealed 86 percent of respondents said they believe cloud computing represents "the future model of IT."



Thursday, 25 February 2016 00:00

Nixle in Action: Preparing for a Power Outage

Over the past decade, the number of power outages in the United States has increased. A recent federal study found that the U.S. electric grid loses power 285% more often than it did 30 years ago. [1] These surprising numbers are mainly attributed to aging infrastructure, a growing population, and more severe weather patterns. On top of the financial burden this places on businesses, residents’ daily lives are affected by these unexpected failures.

What can residents do to be best prepared in the event of a power outage? One of the key elements of being prepared is having a line of communication. During a power outage, watching the news for information from local officials is not an option. Having a system to send out a mass text or email notification is a huge advantage when traditional means of communication are cut off. During an outage, residents are often left in the dark about how long the power will be out, what caused it, and whether the problem is being solved. By using Nixle, police departments and other officials can keep a line of communication open with residents to update them on the progress of the outage.
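
Nixle delivers this capability as a managed service for public agencies. Purely to illustrate the underlying idea, here is a minimal sketch of a mass SMS alert sent through a third-party SMS API (Twilio’s Python library is used as the example; the credentials, phone numbers, and message text are hypothetical, and this is not how Nixle itself is implemented).

    from twilio.rest import Client

    # Hypothetical credentials for an SMS provider account.
    ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    AUTH_TOKEN = "your_auth_token"
    ALERT_NUMBER = "+15550100000"  # hypothetical sending number

    client = Client(ACCOUNT_SID, AUTH_TOKEN)

    # Opted-in residents (hypothetical numbers); a real system would pull
    # these from a subscriber database.
    subscribers = ["+15550100001", "+15550100002", "+15550100003"]

    body = ("POWER OUTAGE: Crews are repairing a downed line near Main St. "
            "Estimated restoration 6:00 PM. Further updates to follow.")

    # Send the same alert to every subscriber.
    for number in subscribers:
        client.messages.create(body=body, from_=ALERT_NUMBER, to=number)

A production notification platform layers opt-in management, geographic targeting, delivery tracking, and email and voice channels on top of this basic loop.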



(TNS) -- When people in the Kansas City area need emergency help, they can now send a text message to 911.

Text-to-911 service has been growing more common among cities across the country in recent years and is now fully operational at all emergency dispatch centers in the Kansas City metro area, the Mid-America Regional Council announced last week.

Sending a text to 911 instead of calling could be a lifesaving option for people in situations where they can’t speak safely, such as home invasions or active shooter incidents, according to MARC.