
Industry Hot News


Preparing a business for the unknown requires a series of important steps to protect your employees and your operations. For many business owners, this foundation starts with an emergency plan and grows to include a business continuity plan, an inclement weather policy, and perhaps even a lone worker policy to keep employees safe.

So, you’ve made your emergency plans and identified the best people to lead your teams through each phase. Now, it’s time to practice with the low-cost but high-impact emergency planning event known as a tabletop exercise.



What You Need to Know for 2019 – and Beyond

In the fast-moving world of cybersecurity, predicting the full threat landscape is near impossible. But it is possible to extrapolate major risks in the coming months based on trends and events of last year. Anthony J. Ferrante, Global Head of Cybersecurity at FTI Consulting, outlines what organizations must be aware of to be prepared.

In 2018, cyber-related data breaches cost affected organizations an average of $7.5 million per incident — up from $4.9 million in 2017, according to the U.S. Securities and Exchange Commission. The impact of that loss is great enough to put some companies out of business.
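The year-over-year jump in those SEC figures works out to roughly a 53 percent increase per incident. A quick sketch of the arithmetic, using the averages quoted above (in millions of dollars):

```python
# Average cost per cyber-related data breach, per the SEC figures quoted above
cost_2017 = 4.9  # millions of dollars
cost_2018 = 7.5

increase = cost_2018 - cost_2017           # absolute rise: about $2.6M
pct_increase = increase / cost_2017 * 100  # relative rise year over year

print(f"Up ${increase:.1f}M per incident, a {pct_increase:.0f}% increase")
```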

As remarkable as that figure is, associated monetary costs do not include the potentially catastrophic effects a cyberattack can have on an organization’s reputation. An international hotel chain, a prominent athletic apparel company and a national ticket distributor were just three of several organizations that experienced data breaches in 2018 affecting millions of their online users — incidents sure to cause public distrust. It’s no coincidence that these companies were targeted — all store valuable user data that is coveted by hackers for nefarious use.

These events and trends should serve as eye openers for what’s ahead this year, as malicious actors are becoming more sophisticated and focused with their attacks. Consider these 10 predictions over the next 10 months:



Thursday, 21 February 2019 17:01

10 Corporate Cybersecurity Predictions

Companies think their data is safer in the public cloud than in on-prem data centers, but the transition is driving security issues.

More business-critical data is finding a new home in the public cloud, which 72% of organizations believe is more secure than their on-prem data centers. But the cloud is fraught with security challenges: Shadow IT, shared responsibility, and poor visibility put data at risk.

These insights come from the second annual "Oracle and KPMG Cloud Threat Report 2019," a deep dive into enterprise cloud security trends. Between 2018 and 2020, researchers predict, the number of organizations with more than half of their data in the cloud will increase by a factor of 3.5.

"We're seeing, by and large, respondents are having a high degree of trust in the cloud," says Greg Jensen, senior principal director of security at Oracle. "From last year to this year, we saw an increase in this trust."



ASSP TR-Z590.5-2019 provides guidance from safety experts on proactive steps businesses can take to reduce the risk of an active shooter, prepare employees and ensure a coordinated response should a hostile event occur. It also provides post-incident guidance and best practices for implementing a security plan audit.

Active shooter fatalities spiked to 729 deaths in 2017, more than three times our country’s previous high. A business must know where its threats and vulnerabilities exist. Our consensus-based document contains recommendations on how a business in any industry can better protect itself in advance of such an incident. Based on the collaborative work of more than 30 professionals experienced in law enforcement, industrial security and corporate safety compliance, the report aims to drive a higher level of preparedness against workplace violence.



A new toolkit developed by the Global Cyber Alliance aims to give small businesses a cookbook for better cybersecurity.
Small and mid-sized businesses have most of the same cybersecurity concerns as larger enterprises. What they don't have are the resources to deal with them. A new initiative, the Cybersecurity Toolkit, is intended to bridge that gulf and give small companies the ability to keep themselves safer in an online environment that is increasingly dangerous.

The Toolkit, a joint initiative of the Global Cyber Alliance (GCA) and Mastercard, is intended to give small business owners basic, usable security controls and guidance. It's not, says Alexander Niejelow, senior vice president for cybersecurity coordination and advocacy at Mastercard, that there's no information available to small business owners. He points out that government agencies in the U.S. and the U.K. provide a lot of information on cybersecurity for businesses.

It's just that, "It's very hard for small businesses to consume that. What we wanted to do was remove the barriers to effective action," he says, going beyond broad guidance to give them very specific instructions presented "if at all possible in a video format, and clear, easy to use tools that they could use right now to go in and significantly reduce their cyber risk so they could be more secure and more economically stable in both the short and long term."



Bankers around the world are rightly worried about the threats posed by digital disruptors getting in between them and their retail banking customers. But Forrester’s newest research reveals that executives should be just as worried — perhaps even more worried — about another market that is being upended: Small business banking.

Small and medium-sized businesses (also called small and medium-sized enterprises or SMEs) are crucial sources of revenues and profits at most banking providers, so the prospect of bank brands losing their relevance among SMEs should keep bankers awake at night.

Here are just a few of the insights you’ll find in our new research report:



New data from CrowdStrike's incident investigations in 2018 uncover just how quickly nation-state hackers from Russia, North Korea, China, and Iran pivot from patient zero in a target organization.

It takes Russian nation-state hackers just shy of 19 minutes to spread beyond their initial victims in an organization's network - yet another sign of how brazen Russia's nation-state hacking machine has become.

CrowdStrike gleaned this attack-escalation rate from some 30,000-plus cyberattack incidents it investigated in 2018. North Korea followed Russia at a distant second, taking around two hours and 20 minutes to move laterally; followed by China, at around four hours; and Iran, at around five hours and nine minutes.

"This validated what we've seen and believed - that the Russians were better [at lateral movement]," says Dmitri Alperovitch, co-founder and CTO of CrowdStrike. "We really weren't sure how much better," and their dramatically rapid escalation rate came as a bit of a surprise, he says.

Cybercriminals overall are slowest at lateral movement, with an average of nine hours and 42 minutes to move from patient zero to another part of the victim organization. The overall average time for all attackers was more than four-and-a-half hours, CrowdStrike found.
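The "breakout times" reported above are easier to compare once normalized to a single unit. A minimal sketch, using the figures as quoted in the article (Russia's "just shy of 19 minutes" is approximated as 19):

```python
# CrowdStrike's 2018 average breakout times (initial foothold to lateral
# movement), converted to minutes for side-by-side comparison.
breakout_minutes = {
    "Russia":         19,          # "just shy of 19 minutes" (approximated)
    "North Korea":    2*60 + 20,   # ~2h 20m
    "China":          4*60,        # ~4h
    "Iran":           5*60 + 9,    # ~5h 09m
    "Cybercriminals": 9*60 + 42,   # ~9h 42m
}

# Fastest adversary first
for actor, minutes in sorted(breakout_minutes.items(), key=lambda kv: kv[1]):
    print(f"{actor:<15} {minutes/60:4.1f} h  ({minutes} min)")
```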



Navigating the Information Age Without Saving Everything

Data retention is a persistent challenge for in-house counsel, but developing workable information governance policies and procedures needn’t be a taxing exercise; in fact, they can generate measurable cost savings to the company. Here, Buckley LLP’s Caitlin Kasmar highlights the importance of being equipped with the right advice at the right time to save in-house counsel the stress of dealing with the challenges of document retention compliance.

The posture of in-house counsel toward information governance and data retention is in the midst of a noticeable and rapid shift from “are we retaining the right information?” to “please, please tell me I can get rid of some of this stuff.”

Those urgent pleas are fed not by data storage costs, which continue to decline, but by savvy in-house lawyers anticipating a subpoena or lawsuit, confronting a decade’s worth of retained emails and calculating compliance costs.

How are in-house counsel expected to advise their business clients on data retention when, in the typical company, numerous legal holds have piled up over time, executives may be effectively exempt from whatever retention/destruction policy is in place and no audit process exists to ensure records are actually deleted in compliance with the policy? The right advice at the right time can save in-house counsel the stress of dealing with these tricky — and, let’s face it, not particularly glamorous — issues.
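At its core, the audit process that is so often missing is a filter over document age and active legal holds. A minimal sketch of such a check, with hypothetical field names and a flat seven-year retention period chosen purely for illustration:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=7 * 365)  # hypothetical flat 7-year policy

def deletable(doc: dict, active_holds: set, today: date) -> bool:
    """A record may be deleted only if it is past retention AND is not
    covered by any active legal hold (hypothetical schema)."""
    expired = today - doc["created"] > RETENTION
    on_hold = bool(active_holds & set(doc["matters"]))
    return expired and not on_hold

docs = [
    {"id": 1, "created": date(2010, 5, 1), "matters": {"case-A"}},
    {"id": 2, "created": date(2011, 3, 9), "matters": set()},
    {"id": 3, "created": date(2018, 1, 2), "matters": set()},
]
active_holds = {"case-A"}
today = date(2019, 2, 19)

to_delete = [d["id"] for d in docs if deletable(d, active_holds, today)]
print(to_delete)  # [2] – doc 1 is on hold, doc 3 is within retention
```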



Tuesday, 19 February 2019 15:25

'Do I Really Need To Keep This?'

(TNS) - Peggy Wood kept sitting up in bed.

She snatched a legal pad and added to a scattered list of things she used to own.

She imagined she was at her old desk in the Driftwood Inn, and jotted what she saw. Six glaze brushes, an embroidery machine, dressmaker’s scissors. A Nikon camera. Lights for that camera, and a backpack. Perfume she spritzed on before going out to shoot photos.

Each item was a chain link in a new insurance filing after Hurricane Michael ruined the Inn she and her family spent four decades building.

The Woods had received a little more than $2 million in insurance payments by January, mostly from flood policies. They still hoped for at least another $1 million from wind coverage but did not know how much it would cost to rebuild the sprawling motel and its outbuildings, 24 units in all.

$3 million? $10 million?



Rich Campagna explores the security and compliance risks associated with data stored in – and accessible from – cloud applications, setting out best practices for assuring end-to-end protection.

With cloud adoption rapidly expanding across an immense range of industries, enterprises around the globe are eagerly embracing the benefits that can be gained from moving their mission-critical services to the public cloud.

Despite the fact that major cloud vendors invest heavily in security, with Microsoft alone dedicating more than $1 billion a year to internal security investments, companies need to understand the hidden risks associated with migrating to the cloud.

That entails senior company executives coming to grips with the security and compliance risks associated with data stored in – and accessible from – cloud applications, and who takes responsibility should the unthinkable occur.



Tuesday, 19 February 2019 15:22

Mind the gap: cloud security best practices

(TNS) - Cambria County Commissioners approved two contracts Thursday that will allow for new connections with other counties and improve existing ones when it comes to 911 communication.

During a regular meeting, the commissioners unanimously approved a 911 fund statewide interconnectivity grant with the Pennsylvania Emergency Management Agency (PEMA), for $439,653.

Robbin Melnyk, county 911 coordinator, said this money will be used to upgrade and renew licenses for two large pieces of equipment purchased by Cambria County and 14 surrounding counties a few years ago.

A second grant of $96,607 will go toward maintenance and monitoring of Cambria County’s software, connecting it with Blair and Somerset counties, Melnyk said.



Let’s be honest: Everything related to a traditional crisis is more likely to cause heartburn than joy.

When most people think of a traditional crisis plan, they envision something “comprehensive” that will prepare them for every conceivable situation. They think of an exhaustive process of research and planning and bulky binders filled with color-coded tabs.

The reality is far simpler. You cannot prepare for every situation. Trying to do so is a fool’s errand. The best plan provides a view from 30,000 feet. It defines the broad strokes of what to say and do (or not), determines who’s in charge of what, specifies who speaks for the organization and why it’s important not to talk out of school.

The main barrier to green-lighting a crisis plan is inertia, for two reasons: it seems arduous, which causes procrastination, and you have so many other priorities competing for your attention and resources.

It’s time to change things up and declutter traditional crisis plans!



No longer can privacy be an isolated function managed by legal or compliance departments with little or no connection to the organization's underlying security technology.

Recent advancements in machine learning and big data analytics have made data more important today than ever before. Companies are now investing heavily in protecting their customers' data; for instance, Facebook has pledged to double its safety and security team to 20,000 people.

Since the introduction of Europe's General Data Protection Regulation (GDPR) in 2018, data protection officers (DPOs) have become the subject of the latest hiring frenzy. Large organizations that are mandated to hire a DPO based on the GDPR's criteria are struggling to find the right person for the job. But how does a DPO fit into the typical security organization?

At the end of the day, a DPO should report directly to top management on all regulation and privacy topics. As such, the perfect candidate must have in-depth knowledge of GDPR and other regulations. Your DPO should also view the responsibilities of GDPR compliance as an opportunity to drive your business forward.



Monday, 18 February 2019 16:56

Privacy Ops: The New Nexus for CISOs & DPOs

Preventing Legal Risks and Liabilities

The #MeToo movement has hammered home for employers the critical importance of keeping sexual harassment out of the workplace. However, a recent federal court case underscores how sexual harassment can occur in ways that defy what many employers might think of as the typical pattern. The ruling by the U.S. District Court for the Eastern District of Pennsylvania comes in a case that has nothing to do with a male boss or co-worker behaving inappropriately with a female colleague. It hinges instead on allegations that a supervisor failed to properly respond to sexual harassment of an employee by a non-employee.

That might bring to mind the Hollywood trope of a hardworking waitress forced to regularly endure catcalls or worse by a male customer, but Hewitt v. BS Transportation defies even this familiar scenario. It involves a lawsuit over alleged male-to-male sexual harassment in the world of big rigs and fuel refineries. In court documents, truck driver Carl Hewitt alleges that his supervisor at BS Transportation failed to take prompt remedial action in response to sexual harassment of Hewitt by a male worker at a fuel distribution company’s refinery. Hewitt routinely traveled to the Pennsylvania facility to pick up fuel bound for NASCAR racecars.



Businesses don't have sufficient staff to find vulnerabilities or protect against their exploitation, according to a new report by Ponemon Institute.

For enterprise IT groups, responding to the volume of new vulnerabilities is growing more difficult – compounded by a chronic lack of skilled cybersecurity professionals to deal with the issues.

That is one of the major conclusions reached in a new report, "Challenging State of Vulnerability Management Today: Gaps in Resources, Risk and Visibility Weaken Cybersecurity Posture," published by Ponemon Institute and sponsored by Balbix.

When asked about the difficulties of maintaining an adequate security posture, 68% of the more than 600 cybersecurity professionals surveyed listed "staffing" as a primary issue. These staffing shortages don't exist exclusively at small organizations, either, with 72% of those surveyed from organizations with more than 1,000 employees.



Backup technology has evolved over the years, but the time has come to take a completely fresh approach, says Avi Raichel. In this article Avi explains: Why backup is a CTO concern; What CTOs need to do to update the backup strategies in place; How CTOs can help the business become IT resilient.

It’s no secret that backup is one of the most important things that a business can invest in, and it’s because of this that the evolution of backup has been such a grand one. The very first computer backups were made on punch cards, then on large reels of magnetic tape, and the media have consistently evolved – from tape, to spinning disk, and then on to flash. However, what hasn’t changed with backup is the central idea of creating ‘golden copies’ of data, to be used ‘just in case’.

This idea is now, arguably, archaic. Traditional backups that only provide a snapshot in time are no longer compatible with modern demands. In this age, businesses, particularly digital ones, need to be ‘always-on’ – 24/7, 365 days a year. Because of this, recovery point objectives (RPOs) of seconds and recovery time objectives (RTOs) of minutes are essential.

Essentially, a business needs to be able to recover as quickly as possible from the second it went down – not from a backup made the night before. This dependence on periodic backups, rather than continuous data protection, may be why nearly half of businesses have suffered an unrecoverable data event over the last three years according to the latest IDC State of IT Resilience report.
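The RPO distinction above comes down to simple arithmetic: the data you lose in an incident is everything written since the last good copy, and RPO is the worst case of that you are willing to tolerate. A minimal sketch comparing a nightly backup with continuous replication (the timestamps are illustrative only):

```python
from datetime import datetime, timedelta

def data_loss(failure: datetime, last_copy: datetime) -> timedelta:
    """Data lost in an incident: everything written since the last good copy."""
    return failure - last_copy

failure = datetime(2019, 2, 19, 14, 30)  # hypothetical mid-afternoon outage

# Last night's 1:00 a.m. backup vs. a ~10-second continuous replication lag
nightly    = data_loss(failure, datetime(2019, 2, 19, 1, 0))
continuous = data_loss(failure, failure - timedelta(seconds=10))

print(nightly)     # 13:30:00 of data lost with a nightly backup
print(continuous)  # 0:00:10 of data lost with continuous replication
```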



Steering Clear of Antitrust Pitfalls

Knowing how to engage in competitor interactions is often more art than science. There are few clear lines of conduct to guide information exchanges made for legitimate business reasons. But broad principles do exist to help you consider your options carefully. Vedder Price’s Brian McCalmon discusses.

Throughout the country, sales managers, supervisors and executives attend antitrust trainings with varying degrees of regularity and detail. Antitrust as a corporate and individual pitfall is familiar to most doing business in the United States and abroad. If asked, most sales executives and line personnel can list the most dangerous and easily spotted scenarios to avoid: Don’t ask competitors about their pricing plans; don’t talk to competitors about customers; if competitors begin to discuss forbidden topics in a trade association meeting, stand up, announce your departure for the record and abruptly exit. This is all Antitrust 101.

But there is an Antitrust 102 and 103, and situations calling for a deeper understanding of antitrust may be thrust upon senior executives before they have had time to digest the consequences of a bad choice in the moment. Some risks may be so unobvious that the executive may never see the antitrust consequences at all. And a healthy respect for the antitrust laws, coupled with a poor understanding of them, has led to the unnecessary stifling of potentially efficient corporate initiatives. A deeper understanding of how communications with competitors, suppliers and customers may violate competition law can reduce risk and allow more efficient and procompetitive arrangements to flourish.



What does a business continuity or disaster recovery plan consist of? In a nutshell, it’s what needs to happen in case you can’t continue normal operations due to an “activity” that may have affected your organization. I am not trying to minimize this in the least. That’s just the tip of the iceberg. We NEED plans. We need to know what to do so that when we have to make critical decisions, the information is at our fingertips (especially when it’s an automated tool). Building these plans is vital to the survival of the business, should something occur. Most of our organizations are regulated and required to have plans. It’s not only a type of insurance policy, but it makes us feel better knowing it’s in place…but what happens when you need to activate that plan? Just as critical as the plan itself are the people needed to respond and assist in the recovery efforts. People execute the plan. Someone needs to flip the switch. Without people, your effort, time and planning will not be much help.

With that said, we need to make sure we prepare our employees, so they know what to expect and what is expected. How do we do that? We teach them. We exercise the plans and involve those people.

Most organizations don’t do full-scale exercises with their entire staff. It costs a lot of money and resources and takes up a lot of time from the work day. This would be the most desirable type of exercise and something we should all aim to achieve. If you can conduct something like this, that’s fantastic! If not, consider starting by setting up a tabletop exercise to walk through what’s currently in place in your plans.



A wireless device resembling an Apple USB-Lightning cable that can exploit any system via keyboard interface highlights risks associated with hardware Trojans and insecure supply chains.

During a month-long hiatus between jobs, Mike Grover challenged himself to advance a project he'd been working on for over a year: Creating a USB cable capable of compromising any computer into which it's inserted.

His latest iteration, the Offensive MG or O.MG cable, resembles an Apple-manufactured Mac USB-Lightning cable but incorporates a wireless access point into the USB connector, allowing remote access from at least 100 feet away, according to Grover. A video demonstration shows Grover taking control of a MacBook and opening up Web pages from his phone.

The cable takes advantage of a known weakness. To make keyboards, mice, and other input devices as easy to connect as possible, operating system makers have made computers accept the identification, through the Human Interface Device (HID) protocol, of any device plugged into a USB port. An attacker can use this weakness to create a device that acts like a keyboard to issue keystrokes, or a mouse to issue clicks.



(TNS) - It’s been a year since the Valentine’s Day murder of 17 students and staff members and the wounding of 17 others at Marjory Stoneman Douglas High School in Parkland, Florida.

Since then, schools around the country have taken steps to beef up security.

In this area, several schools have made great strides to improve the safety of the students and teachers.

Many of the improvements deal with how people enter school buildings.

“The number one thing that we’ve done: we put a kiosk system in where when you come in you have to bring your [driver’s] license in now. We know everybody who comes in and out of our building. So will that stop a shooting? No, but we actually have a better understanding of who is going to be in our building or not,” said Mel Rentschler, superintendent at Allen East schools.



Friday, 15 February 2019 15:06

Preparing for the Next School Shooting

Doron Pinhas looks at the common factors behind various high-profile technology outages in 2018 and proposes a practical approach which will help organizations reduce unplanned downtime in 2019.

Flying these days is almost never a pleasure, but in 2018 it was a downright nightmare, with dozens of glitches and outages that kept planes grounded. 2018 wasn't such a great year for other industry sectors, either. Financial service customers also had a rough year accessing their funds and performing urgent financial transactions. In the UK, for example, banks experienced outage after outage. Three of Britain's biggest banks - HSBC, Barclays and TSB - all experienced outages on a single day, making online banking impossible, and there were dozens of other incidents peppered throughout the year.

And if your business lives on cloud platforms and SaaS, you might have found yourself running ragged at times trying to access your IT with all of the major cloud platforms suffering from outages throughout the year as well.

It may be 2019 now, but the fundamental gaps that led to those service disruptions haven't been resolved, so we can expect more such outages this year, and probably every year until companies figure it out – which, if you’re a business continuity or IT professional, raises the question: what should I do to avoid outages?



Some have even turned to alcohol and medication to cope with pressure.

A quarter of chief information security officers (CISOs) suffer from mental and physical health issues as a result of tremendous and growing work pressure, a new survey shows.

Contributing to the strain are concerns about job security, inadequate budget and resources, and a continued lack of support from the board and upper management.

Domain name registry service provider Nominet recently polled 408 CISOs working at midsize and large organizations in the United Kingdom and United States about the challenges they encounter in their jobs.

A whopping 91% of the respondents admitted to experiencing moderate to high stress, and 26% said the stress was impacting them mentally and physically. A troubling 17% of the CISOs who took Nominet's survey admitted to turning to alcohol and medication to deal with the stress, and 23% said their work was ruining personal relationships.



Paul Barry-Walsh argues that as complexity increases in society, so do interdependencies. To prevent cascading disasters, organizations need to implement firebreaks which will ensure that they do not become the weak link in the supply chain.

There is a characteristic of this field which is self-evident to its professionals: as we develop as a society, we become increasingly reliant on more and more suppliers delivering products or services. Should just one component of the supply chain be disrupted, the service or product cannot be delivered, and the result can be chaos. This is simply a manifestation of Adam Smith’s contention that the increased division of labour allows increasing output. However, with ever more suppliers, and the implementation of just-in-time production, the loss of just one small component disrupts the entire chain. This is as true for services as it is for manufacturing, and after Adam Smith we should perhaps refer to this as ‘Adam’s Law’.

To illustrate this, imagine a Venetian banker in the 16th century. He would need ledgers, quills and ink, possibly a desk, and to operate in a secure environment under the rule of law, but that’s about it. Now consider his modern counterpart. Just to provide the most basic of modern-day services, the banker needs to operate both within and under the rule of law, and needs sophisticated computers, a base to operate from, communication devices, and an army of people to run the operation: accountants, data entry staff, lawyers, compliance people and then HR to manage them.

That’s a complex web of people and products just to do the simplest banking operation. This complexity brings with it vulnerability; if staff are denied access to the office, if there is no electricity, (or water) then the organization cannot function. If you cannot function, there will be a knock-on effect for the counterparties, due to the interconnectedness of our society. If just one bank fails, this has a domino effect on other financial institutions and counterparties.



(TNS) — Garfield County, Okla., Sheriff's Office is offering training in active-attack response to area schools and also will provide the course to employees at the county courthouse.

Acting Sheriff Jody Helm said this is the third year the sheriff's office has offered training to county schools. Previous training topics concerned weapons in schools and drugs in schools.

"They've been really receptive," Helm said.

Deputy Lloyd Cross presented the training, from the Advanced Law Enforcement Rapid Response Training at Texas State University, Wednesday to the staff of Kremlin-Hillsdale High School.

Cross said the goal was to present the information to administrators and teachers and not determine policy for the school system.



Many times when we talk about communications plans and campaigns, we focus on the tactics. Which makes sense – these are the things we can see. The clever social media post, the direct mail piece, the slick website. But the true way to evaluate a communications plan or marketing campaign is through measurement.

My favorite way to illustrate the different types of measures and how they work comes from the book Effective Public Relations, Ninth Edition. This is the book I used to study for my Accreditation in Public Relations, and it’s still on my shelf, dog-eared and bursting with post-it notes. I have adapted their graphic into my own, which you can see here:



Friday, 15 February 2019 14:57

How to measure communications plan success

When each member of your security team is focused on one narrow slice of the pie, it's easy for adversaries to enter through the cracks. Here are five ways to stop them.

Today, enterprises consist of complex interconnected environments made up of infrastructure devices, servers, fixed and mobile end-user devices and a variety of applications hosted on-premises and in the cloud. The problem is that traditional cybersecurity teams were not designed to handle such complexities. Cybersecurity teams were originally built around traditional IT—with a specific set of people focused on a specific set of tools and projects.

As enterprise environments have grown, this siloed approach to cybersecurity no longer works. When each member of your security team is only focused on one narrow slice of the pie, it’s far too easy for adversaries to enter through the cracks. The following are critical steps chief information security officers (CISOs) must take in order to establish a dream team for the new age of cybersecurity.



Truth is, in most of the reports we write about how to prepare your company for the future, two major recommendations always come out: Get your C-level leaders on board, and cultivate a culture that can transform your business. The first is crucial yet obvious, and I’ve grown tired of writing it. The second, culture, is equally obvious, but it’s also huge. Yes, we have statistically measured the role of culture in successful digital transformations and found that culture is the strongest predictor of whether you’ll make it. But culture is enormous, and changing it can feel overwhelming.

Today we offer a lifeline of incredible value. Culture can encompass a myriad of things, but it is best measured at the level of individual employees. Do they like being there? Do they support the mission of the organization? Do they feel supported in trying to accomplish the goals of the company? All of these things matter, but today the responsibility for engaging employees is diffused across the org. HR helps but focuses on narrow metrics while not touching on the business strategy. Leaders occasionally try to motivate with enthusiasm, but they don’t rigorously account for the impact of their demands on the employee base. And when you add technology, it’s clearly not IT’s job to make sure people feel like the tech is helping them as much as it’s helping the customer. Drowning yet?

That’s where our lifeline comes in: “Introducing Forrester’s Employee Experience Index.” Rather than simply telling you to go engage your employees, we’ve systematized the process. We’ve spent two years surveying more than 13,800 employees in seven countries. Drawing from the best of three decades of organizational psychology research, we’ve constructed a tool that identifies what an engaged worker looks like and then worked backward from there to figure out what factors either help or hurt employee engagement. The result is a clear blueprint for inspiring, empowering, and enabling your employee base. 



Did “data analytics” ruin baseball? Depends on whom you ask: the cranky old man in a Staten Island bar or the nerd busy calculating Manny Machado’s wRC+ (it was 141 in 2018, if you cared to know). 

What is indisputable, though, is that the so-called “Sabermetrics revolution” rapidly and fundamentally changed how the game is played – this is not your grandpa’s outfield!

And data is eating the whole world, not just baseball. Now it’s coming for the legal profession, of all places. The Financial Times recently published an article on how law analytics companies are using statistics on judges and courts to weigh how a lawsuit might play out in the real world. One such company does the following (per the article): 



Friday, 15 February 2019 14:53


Findings from Dun & Bradstreet

According to a report by Dun & Bradstreet, compliance and procurement professionals indicate that fraud tops the list of challenges, and technological advances exacerbate the problem. While technology is an enabler to these industries by creating the potential for improved efficiency and data management, in some instances, it may be putting organizations at greater risk for fraud if not implemented properly. Brian Alster discusses the approach compliance leaders should take to protect the organization.

Compliance professionals didn’t have it easy in 2018; significant regulations spanned industries globally – touching finance, trade and data in a big way. Among the related challenges of this business environment, the risk of fraud remains near the top of the list for many companies, a majority of whom have seen incidences of fraud negatively impact their business. Detection methods to combat fraud evolve over time, but so, too, do the fraudsters, turning the situation into a never-ending game of cat and mouse.

A majority (72 percent) of respondents to the second Dun & Bradstreet Compliance and Procurement Sentiment Report say fraud has had an impact on their company’s brand. In an effort to uncover the top issues and concerns among both compliance and procurement professionals, Dun & Bradstreet surveyed more than 600 professionals from the U.S. and U.K., delving into a range of questions about their roles, as well as their impressions of the industry overall. With this second report, we were able to measure changes in overall sentiment compared with the benchmark conducted earlier last year, and we dove deeper into fraud concerns and the use of technology.



Friday, 15 February 2019 14:46

Fraud A Top Concern For Compliance Leaders

Extra, extra! Read all about it!

TOPO declares that 86% of account-based organizations report improved close rates, and 80% say account-based strategies are driving increased customer lifetime value!

TribalVision channels the ITSMA when it reports that companies implementing account-based marketing (ABM) strategies typically see a 171% increase in annual contract value!

Really? Wow. Huh, it doesn’t look that way from where I’m standing.

From my (tenuous?) perch atop Forrester’s ABM research pile, it looks like FOMO* (more than anything) is driving marketers to take up the ABM banner. Our research, trends studies, and customer interactions show that ABM continues as a popular topic among B2B marketers and sellers. But many claims hit an almost hysterical note: Do this now or be left behind!



Online dating profiles and social media accounts add to the rich data sources that allow criminals to tailor attacks.

US-CERT and Cupid don't often keep company, but this Valentine's Day is being marked by new threats to those seeking romance and new warnings from the federal cybersecurity group.

A notice from US-CERT points to an FTC blog post about how consumers can protect themselves from online scams involving dating sites, personal messaging systems, and the promise of romance and companionship from online strangers.

The general warning comes as specific scams are being exposed by online researchers. For example, researchers at Agari Data have followed a Nigeria-based group dubbed "Scarlet Widow" since 2017 as they exploited vulnerable populations, moving from romantic "attacks" against isolated farmers and individuals with disabilities to business email compromises that raised the financial stakes.



Thursday, 14 February 2019 15:34

Scammers Fall in Love with Valentine's Day

NEW YORK and SAN FRANCISCO — An authoritative legal-industry report on the current state of artificial intelligence (AI) in contract analysis and data extraction and its applications within the legal community was released today. Leading industry analyst firm Ari Kaplan Advisors was engaged by Seal Software to design and conduct unbiased research, the findings of which provide clarity on how legal departments at large corporations perceive and practically apply AI-driven contract analytics in a broad range of matters.

The report is derived from comprehensive interviews with professionals, predominantly at Fortune 1000 organizations, who exercise influence over the adoption and deployment of AI technology. Law department leaders from American Express (NYSE:AXP), Hewlett Packard Enterprise (NYSE:HPE), Nokia (NYSE:NOK), Novartis (NYSE:NVS), Atos (EURONEXT:ATO), Transocean Ltd. (NYSE:RIG), SI Group Inc., CyrusOne (NASDAQ:CONE), PagerDuty and Olympus Corporation of the Americas, among others, shared their views in the benchmarking study. All but one of the participants were lawyers, about two-thirds of whom were with organizations that had more than $5 billion in revenue, and most worked at companies with more than 5,000 employees.

“It was a privilege to speak with so many industry leaders and I am proud to share their perspectives about the promise and practical application of this technology,” said Ari Kaplan, principal of Ari Kaplan Advisors. “I hope this report fuels a productive dialogue that drives the legal community forward.”



(TNS) - Bay County and the cities of Springfield and Callaway will begin their final passes of free Hurricane Michael debris removal on March 11.

Residents in the two cities and incorporated areas of the county are encouraged to have all debris on their curbs by March 10 to help with the pickup. The final wave of cleanup will last through mid-April, after which any debris will be removed at homeowners' expense, officials said.

"We've got to get this place cleaned up," said Philip Griffitts, chairman of the Bay County Commission. "We continue to see illegal dumping ... we've got to set a date now or we'll never get this done."

While Springfield and Callaway decided to partner with the county on their final debris passes, other cities in the area still have their own schedules. Property owners in other cities can contact their local governments for information on when debris collection will end there.



In today’s school environment, effective communication is a complex undertaking. The average public school in America has more than 500 students.  Meanwhile, colleges and universities can easily have upwards of tens of thousands of students. On top of that, the different members of a school community—students, faculty, staff members, and parents—tend to have wildly different communication preferences and behaviors.

Administrators need to quickly send school-wide notifications about weather delays and closings. Teachers need to send classroom updates to all of their students’ parents. Parents and students also need to communicate effectively with teachers and administrators. Whatever the case, regular, well-executed communication is vital in a school setting. But how can schools most effectively and efficiently communicate to keep everyone safe, informed, and up to date? The key lies in a modern mass notification system for schools.



All data belonging to US users, including backup copies, has been deleted in the catastrophe, VFEmail says.


An unknown attacker appears to have deleted 18 years' worth of customer emails, along with all backup copies of the data, at email provider VFEmail.

A note on the firm's website Tuesday described the attack, first reported by KrebsOnSecurity, as causing "catastrophic destruction."

"This person has destroyed all data in the US, both primary and backup systems. We are working to recover what data we can," the note read. VFEmail was established in 2001 and provides free and paid email services, including bulk email services in the US and elsewhere.

The attack, described in a series of tweets from the firm, seems to have occurred on Monday and targeted all of VFEmail's externally facing servers across data centers. Though the servers were running different operating systems and did not all share the same authentication, the attacker managed to access and reformat every one of them.



Digital intelligence (DI) – the practice of understanding and optimizing digital customer engagements – has been around for as long as the internet itself. But it has not remained stagnant. The practices and technologies needed to support DI have continued to be revolutionized by the digital disruption. And the means by which customers interact with a brand have skyrocketed in recent years showing no signs of slowing down. In a recent press release, IHS Markit estimated the number of internet-connected devices will grow to 125 billion by 2030, up from around 27 billion in 2017.
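As a quick back-of-the-envelope check (my arithmetic, not the article's), the two IHS Markit figures cited above imply a compound annual growth rate of roughly 12.5 percent:

```python
# Implied compound annual growth rate (CAGR) behind the cited projection:
# ~27 billion internet-connected devices in 2017 -> 125 billion in 2030.
start_devices = 27e9   # 2017 estimate
end_devices = 125e9    # 2030 projection
years = 2030 - 2017

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end_devices / start_devices) ** (1 / years) - 1
print(f"Implied device growth: {cagr:.1%} per year")
```

In other words, the projection assumes the installed base keeps growing by about an eighth every year for thirteen years.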

Understanding and optimizing digital customer engagement in today’s environment demands a dizzying combination of DI tech. Forrester recently analyzed the DI market to make sense of it. We published our findings in The Forrester Tech Tide™: Digital Intelligence Technologies, Q1 2019.



(TNS) — The strongest and potentially wettest storm of the winter season is bearing down on Southern California this week, threatening to unleash debris flows in burn areas in Orange and Riverside counties as the region’s wild winter continues.

The atmospheric river-fueled storm, packed with subtropical moisture, will take aim at large swaths of the already-soaked state beginning early Wednesday and lasting through Thursday.

The amount of precipitation from the storm will vary depending on the region, with San Diego, Orange and Riverside counties likely to be pounded with up to 2 inches of rain along the coast and up to 10 inches at higher elevations. This could create a dangerous situation for residents in recent burn areas, according to the National Weather Service.

Forecasters predict the Holy fire burn scar will see 2.5 to 6 inches of rain, while the area affected by the Cranston fire last year will likely experience 3 to 8 inches of precipitation through Thursday. That has the potential to trigger debris flows and flooding, according to the weather service.



(TNS) — A stubbed toe, a scraped knee, a twisted ankle.

Call 911 in Pinellas County, Fla., about any of those injuries and at least four people in two vehicles will show up.

But a new proposal — already implemented in Hillsborough County and across the country — that's being considered by county government and some of the cities, including St. Petersburg, would reduce the response for certain minor medical issues.

The goal: "We preserve our resources for the most severe calls, and ultimately improve our response times on the most critical emergencies," said St. Petersburg Fire Rescue Division Chief Ian Womack to City Council on Thursday. "The general principle is, if you over-resource low-priority calls, that unit is then committed to the low priority call."



What does the term “digital transformation” mean to you?

Is it about digital customer experiences? Digital operations? Transforming business models? Leveraging software ecosystems? Is it a floor wax? A dessert topping?

Digital transformation (DT) as a term loses meaning when it involves everything under the sun. Over the past few years, we’ve seen companies label anything and everything as “digital transformation” — no wonder DT initiatives meander and stall.

Companies that succeed with transformation initiatives keep a laser focus on using technology to deliver business results. How? Not with long cycles of business requirements and software implementations.



(TNS) - After the Camp fire destroyed their home in Paradise, Calif., last November, Anastasia Skinner, 26, her husband and their three young children left their community and moved in with a relative in Nevada. But the schools there didn’t offer two of her special-needs children the care they required.

So it was welcome news when Skinner, who was pregnant when she fled the fire, heard that officials were allowing residents to move back and live on their fire-scarred properties in temporary dwellings. With the insurance money they collected, Skinner and her husband purchased a used RV for $10,000 and headed back to Paradise.

For about a month, they made a home of their small RV. Money was tight, and they spent more on water, propane and gas than what they paid for the monthly mortgage on their now-destroyed house of six years. But, unlike thousands of others, at least the Skinners had a home.



TPRM in the Wake of GDPR

Cisco has just released a Data Privacy Benchmark Study, revealing that outsourcers are taking seriously their responsibility to protect customers’ data. Tom Garrubba, Senior Director and CISO at Shared Assessments, offers his perspective on third parties’ performance of late.

Those of us in the privacy profession knew it was only a matter of time until privacy-minded organizations would see the benefits of their internal analysis and hard work. Their efforts to refine and/or create policies, procedures, standards and practices that better secure and guard privacy during the handling of their customers’ personally identifiable information are paying off.

Evidence of this came to light in the new Cisco Data Privacy Benchmark Study, published in late January 2019. The study indicates both outsourcing organizations and service providers are modifying the way they are doing business. Organizations increasingly understand the importance of recent regulations such as the General Data Protection Regulation (GDPR), which mandates protections of the personal data for citizens throughout the EU. This understanding is gaining traction as organizations grapple with similar U.S.-state privacy regulations and guidance, such as the California Consumer Privacy Act (CCPA). From a compliance perspective, this is a breath of fresh air, since organizations are required to provide evidence they’ve documented (and thus have a handle on) their internal processes and all the hands through which their data passes.



Catastrophes can take many forms ‒ from an active shooter to a chemical hazard or natural disaster ‒ and businesses must always have emergency response plans ready for those situations.

Authorities will be dispatched to your workplace as quickly as possible in the event of an emergency. Your emergency preparedness plan must be designed to help employees quickly respond in order to save lives and avoid further injury.

Here is how organizations should approach three of the most common emergencies:



Recently, the United States experienced a once-in-a-lifetime weather event when temperatures dropped drastically to record lows.

The National Weather Service in Chicago predicted it would be the chilliest Arctic outbreak since records have been kept. Biting winds caused the wind chill to hit life-threatening lows. In Thief River Falls, the AccuWeather RealFeel® Temperature was -77° F!

The real threat of the polar vortex was felt in the workplace as work days were canceled, employees called in, and the post office shut down. The postal motto promises that “neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds” – but the frigid temperatures did, delaying shipping for two days.

Severe weather is becoming a more common risk to businesses worldwide. Houston, Texas, has had three 500-year floods in the last five years. The California wildfires affected the agriculture industry, impacting wine, fruit, nut, livestock, and poultry production. The 2011 tsunami and subsequent nuclear event in Japan caused a suspension of production at Toyota, Nissan, Honda, Mitsubishi, and Suzuki.



Monday, 11 February 2019 16:00

What’s your severe weather risk?

If you are reading this for the snark and jokes, thank you. We are so sorry to disappoint you, because we’re not sure how to make corporate climate risks funny. Instead, let’s have a sober discussion about climate risk, and you can leave the jokes in the comment section.

What Is It?

The Green New Deal is an ambitious proposal in the US to combat climate change. Named after President Franklin D. Roosevelt’s New Deal to combat the Great Depression, the Green New Deal is a massive stimulus package aimed to address climate change, as well as the rising social, economic, and political inequality in the US that comes with it. It calls for economic mobilization not seen since World War II and the New Deal and aims to cut greenhouse gas emissions (GHGs) in half by 2030, shift 100 percent of national power generation to renewable sources, upgrade all infrastructure and transportation for energy efficiency, decarbonize the largest polluting industries (manufacturing and agriculture), fund the capture of GHGs, and virtually eliminate poverty in the US by including everyone in the prosperity that this transition would provide.

Although it’s unlikely to pass through the US Senate this time around (never mind the President’s desk), we believe that businesses must adopt the goals of the deal to avoid going extinct.



Ugh. Everyone is talking about the citizen data scientist, but no one can define it (perhaps they know one when they see one). Here goes — the simplest definition of a citizen data scientist is: non-data scientist. That’s not a pejorative; it just means that citizen data scientists nobly desire to do data science but are not formally schooled in all the ins and outs of the data science life cycle. For example, a citizen data scientist may be quite savvy about what enterprise data is likely to be important to create a model but may not know the difference between GBM, random forest, and SVM. Those algorithms are data scientist geek-speak to many of them. The citizen data scientist’s job is not data science; rather, they use it as a tool to get their job done. Here is my definition of the enterprise citizen data scientist:

A businessperson who aspires to use data science techniques such as machine learning to discover new insights and create predictive models to improve business outcomes.
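For readers curious about the geek-speak above, here is a minimal, purely illustrative scikit-learn sketch (my example, not from the article) comparing the three algorithm families named — GBM, random forest, and SVM — on synthetic data:

```python
# Illustrative comparison of the three algorithm families mentioned above,
# fit to the same synthetic classification dataset via scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# A toy dataset standing in for "enterprise data"
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "GBM": GradientBoostingClassifier(n_estimators=50, random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
}

for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.2f}")
```

The point of the definition is that a citizen data scientist cares about the business question behind `X` and `y`, not about which of these three estimators wins the cross-validation bake-off.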



Monday, 11 February 2019 15:31

Who Are You, Citizen Data Scientist?

Weather tools help Team Rubicon respond quicker and reduce risks

By Glen Denny, President, Enterprise Solutions, Baron Critical Weather Solutions

Team Rubicon is an international disaster response nonprofit with a mission of using the skills and experiences of military veterans and first responders to rapidly provide relief to communities in need. Headquartered in Los Angeles, California, Team Rubicon has more than 80,000 volunteers around the country ready to jump into action when needed to provide immediate relief to those affected by natural disasters.

More than 80 percent of the disasters Team Rubicon responds to are weather-related, including crippling winter storms, catastrophic hurricanes, and severe weather outbreaks – like tornadoes. While always ready to serve, the organization needed better weather intelligence to help them prepare and mitigate risks. After adopting professional weather forecasting and monitoring tools, operations teams were able to pinpoint weather hazards, track storms, view forecasts, and set up custom alerts. And the intelligence they gained made a huge difference in the organization’s response to Hurricanes Florence and Michael.

Team Rubicon relies on skills and experiences of military veterans and first responders

About 75 percent of Team Rubicon volunteers are military veterans, who find that their skills in emergency medicine, small-unit leadership, and logistics are a great fit with disaster response. That background also helps them hunker down in challenging environments to get the job done. A further 20 percent of volunteers are trained first responders, while the rest are volunteers from all walks of life. The group is a member of National Voluntary Organizations Active in Disaster (National VOAD), an association of organizations that mitigate and alleviate the impact of disasters.

By focusing on underserved or economically-challenged communities, Team Rubicon seeks to make the largest impact possible. According to William (“TJ”) Porter, manager of operational planning, Team Rubicon’s core mission is to help those who are often forgotten or left behind; they place a special emphasis on helping under-insured and uninsured populations.

Porter, a 13-year Air Force veteran, law enforcement officer, world traveler, and former American Red Cross worker, proudly stands by Team Rubicon’s service principles, “Our actions are characterized by the constant pursuit to prevent or alleviate human suffering and restore human dignity – we help people on their worst day.”

Weather-related disasters pose special challenges

The help Team Rubicon provides for weather-related disasters runs the gamut: removing trees from roadways, clearing paths for service vehicles, bringing in supplies, conducting search and rescue missions (including boat rescues), dealing with flooded-out homes, mucking out after a flood, mold remediation, and just about anything else needed. While Team Rubicon had greatly expanded its equipment inventory in recent years to help with these tasks, the organization lacked the deep level of weather intelligence that could help it understand and mitigate risks – and keep its teams safe from danger.

That’s where Baron comes into the story. After learning of the impressive work Team Rubicon is doing at the Virginia Emergency Management Conference, a Baron team member struck up a conversation with Team Rubicon, asking if they had a need for detailed and accurate weather data to help them plan their efforts. Team Rubicon jumped at the opportunity and Baron ultimately donated access to its Baron Threat Net product. Key features allow users to pinpoint weather hazards by location, track storms, view forecasts and set up custom alerts, including location-based pinpoint alerting and standard alerts from the National Weather Service (NWS). The web portal weather monitoring system provides street level views and the ability to layer numerous data products. Threat Net also offers a mobile companion application that gives Team Rubicon access to real-time weather monitoring on the go.

This suited Team Rubicon down to the ground. “In years past, we didn’t have a good way to monitor weather,” explains Porter. “We went onto the NWS, but our folks are not meteorologists, and they don’t have that background to make crucial decisions. Baron Threat Net helped us understand risks and mitigate the risks of serious events. It plays a crucial role in getting teams in as quickly as possible so we can help the greatest number of people.”

New weather tools help with response to major hurricanes

The new weather intelligence tools have already had a huge impact on Team Rubicon’s operations. Take the example of how access to weather data helped Team Rubicon with its massive response to Hurricane Florence. A day or so before the hurricane was due to make landfall, Dan Gallagher, Enterprise Product Manager and meteorologist at Baron Services, received a call from Team Rubicon requesting product and meteorological support. Individual staff had been using the new Baron Threat Net weather tools to a degree since gaining access to them, but the operations team wanted more training and support in the face of what looked like a major disaster barreling towards North Carolina, South Carolina, Virginia, and West Virginia.

Gallagher, a trained meteorologist with more than 18 years of experience in meteorological research and software development, quickly hopped on a plane, arriving at Team Rubicon’s National Operations Center in Dallas. His first task was to meet operational manager Porter’s request to help them guide reconnaissance teams entering the area. They wanted to place a reconnaissance team close to the storm – but not in mortal danger. Using the weather tools, Gallagher located a spot north of Wilmington, NC between the hurricane’s eyewall and outer rain bands that could serve as a safe spot for reconnaissance.

The next morning, Gallagher provided a weather briefing to ensure that operations staff had the latest weather intelligence. “I briefed them on where the storm was, where it was heading, the dangers that could be anticipated, areas likely to be most affected, and the hazards in these areas.”

Throughout the day, Gallagher conducted a number of briefings and kept the teams up to date as Hurricane Florence slowly moved overland. He also provided video weather briefings for the reconnaissance team in their car en route to their destination.

Another crew based in Charlotte was planning the safest route for trucking in supplies based on weather conditions. They wanted help in choosing whether to haul the trailer from Atlanta, GA or Alexandria, VA. “I was not there to make a recommendation on an action but rather to give them the weather information they need to make their decision,” explains Gallagher. “As a meteorologist, I know what the weather is, but they decide how it impacts their operation. As soon as I gave a weather update, they could make a decision within seconds and act on it.” Team Rubicon used the information Gallagher provided to select the Alexandria, VA route; their crackerjack logistics team was then able to quickly make all the needed logistical arrangements.

In addition to weather briefings, Gallagher provided more detailed product training on Baron Threat Net, observed how the teams actually use the product, and learned how the real-time products were performing. He also got great feedback on other data products that might enhance Team Rubicon’s ability to respond to disasters.

Team Rubicon gave very high marks to the high-resolution weather/forecast model available in Baron Threat Net. They relied upon the predictive precipitation accumulation and wind speed information, as well as information on total precipitation accumulation (what has already fallen in the past 24 hours).

The wind damage product showing shear rate was very useful to Team Rubicon. In addition, the product did an excellent job of detecting rotation, including picking out the weak tornadoes spawned from the hurricane that were present in the outer rain bands of Hurricane Florence. These are typically very difficult to identify and warn people about, because they spin up quickly and are relatively shallow and weak (with tornado damage of EF0 or EF1 as measured on the Enhanced Fujita Scale). Gallagher had seen how well the wind damage product performed in larger tornado cases but was particularly gratified at how well it helped the team detect these smaller ones.

For example, Lauren Vatier of Team Rubicon’s National Incident Management Team commented that she had worked with Baron Threat Net before the Florence event, but using it so intensively made her more familiar with how to use the product and really helped cement her knowledge. “Before Florence I had not used Baron Threat Net for intel purposes. Today I am looking for information on rain accumulation and wind, and I’m looking ahead to help the team understand what the situation will look like in the future. It helps me understand and verify the actual information happening with the storm. I don’t like relying on news articles. Now I can look into the product and get accurate and reliable information.”

Vatier also really likes the ability to pinpoint information on a map showing colors and ranges. “You can click on a point and tell how much accumulation has occurred or what the wind speed is. The pinpointing is a valuable part of Baron Threat Net.” The patented Baron Pinpoint Alerting technology automatically sends notifications any time impactful weather approaches; alert types include severe storms and tornadoes; proximity alerts for approaching lightning, hail, snow and rain; and National Weather Service warnings. She concludes, “I feel empowered by the program. It ups my confidence in my ability to provide accurate information.”

TJ Porter concurred that Baron Threat Net helped Team Rubicon mobilize the large teams that deployed for Hurricane Florence. “It is crucial to put people on the ground and make sure they’re safe. Baron Threat Net helps us respond quicker to disasters. It also helps the strike teams ensure they are not caught up in other secondary or rapid onset weather events.”

Porter explains that the situation unit leaders actively monitor weather through the day using Baron Threat Net. “We are giving them all the tools at our disposal, because these are the folks who provide early warnings to keep our folks safe.”

Future-proofing weather data

Being on the ground with Team Rubicon during the Hurricane Florence disaster recovery response gave Baron’s Gallagher an unusual opportunity to discuss other ways Baron weather products could help respond to weather-related disasters. According to Porter, “We are looking to Baron to help us understand secondary events, like the extensive flooding resulting from Hurricane Florence, and to understand where these hazards are today, tomorrow, and the next day.”

In addition, Team Rubicon is committed to targeting those areas of greatest need, so they want to be able to layer weather information with other data sets, especially social vulnerability, including the location of areas with uninsured or underinsured populations. Says Porter, “Getting into areas we know need help will shave minutes, hours, or even days off how long it takes to be there helping.”

In the storm’s aftermath

At the time this article was written, hundreds of Team Rubicon volunteers were deployed as part of Hurricane Florence response operations and later in response to Hurricane Michael. Their work has garnered them a tremendous amount of national appreciation, including a spotlight appearance during Game 1 of the World Series. T-Mobile used its commercial television spots to support the organization, also pledging to donate $5,000 per post-season home run plus $1 per Twitter or Instagram post using #HR4HR to Team Rubicon.

Baron’s Gallagher appreciated the opportunity to see in real time how customers use its products, saying “The experience helped me frame improvements we can develop that will positively affect our clients using Baron Threat Net.”

A quality business continuity management (BCM) program is made up of six separate plans covering everything from emergency response to IT disaster recovery. In today’s post, we’ll explain what the six plans are and share some tips to help your organization devise them.

Somebody once said, “A goal without a plan is just a wish.” A lesser-known variation of the quote (much lesser known) is, “A goal with six plans is a BCM program.”

These six plans are the ones you need to be able to respond, recover, and return to normal operations after a business disruption. What are the six? The answer is coming up.

Before we begin, note that our title says every BCM program should have these plans. There are a couple of exceptions, as I’ll explain below.

Here are the six plans, in order of importance:



(TNS) - The full scope of a project aimed to prevent roughly 200 Highland properties from being included in a flood map was presented to the city council this week.

The Federal Emergency Management Agency (FEMA) periodically updates its flood maps to take into account new developments, changes in topography, etc., and how those changes may put surrounding areas at a greater risk for flooding.

In 2017, Highland officials began studying potential problem areas on preliminary flood maps FEMA released to replace those created in 1986.

Those drafts tripled the 1986 floodplains, increasing the number of “high-risk” parcels from 135 to 365, and showed flood elevations upstream of the CSX railroad of about six and a half feet, adding roughly 100 acres to the floodplain.



(TNS) - When there's a potential tornado threat to Dallas-Fort Worth, outdoor sirens let residents know, but not every Texas city operates the same warning system.

The city of Dallas has 162 sirens to warn residents about an imminent weather emergency. Fort Worth has 153. But Austin, Houston and San Antonio have gone a different route.

Austin Miller was born in downtown Dallas and grew up in Richardson and Garland, before moving to Houston. While living in the Houston area — first in Sugar Land and later in Cypress — Miller noticed the absence of the sirens he had grown used to in North Texas.



Organizations may be tempted to dismiss artificial intelligence as something which is currently out of their reach, but Thorsten Kurpjuhn says that this is definitely not the case. In fact, AI can help businesses of all sizes to ensure network uptime and protection.

Business reliance on IT has grown exponentially over the past few years. Not only has this put a strain on existing IT network set-ups, but it has also seen the role and expectations of the network administrator change beyond all recognition in a bid to keep everything running smoothly and securely.

There was a time when those in charge of the network knew where they stood and had the time and resources to deal with reliability and unexpected security issues – which were a less frequent occurrence. But in a world where technology underpins every activity and transaction, there is now a need to spin multiple moving plates to ensure operational efficiency.

Managing hybrid cloud networks, reacting to the overwhelming amount of big data residing on the network, the growing number of connected mobile devices all wanting to access the WiFi, and the ever-increasing risk and prevalence of cyber threats are now the order of the day, making network monitoring a very different beast.



(TNS) - Law enforcement officials on the East End in New York are investigating a spate of prank emergency calls that have triggered heavy police responses and are forcing officials to do more to authenticate time-consuming and potentially dangerous incidents.  

The bogus calls, known as swatting because they often draw a police department’s SWAT team and other first responders, have targeted high-profile people such as celebrities, as well as those in the gaming community, a law enforcement source said.

“Generally they involve a report of a murder, or a kidnapping or both,” said Southampton Town Police Chief Steven Skrynecki. “These events trigger a significant and serious response.”

They can also be costly and dangerous.



(TNS) - Have you ever seen an emergency alert on your phone or heard a radio program interrupted by a harsh tone followed by a warning?

Here’s what you need to know about emergency alerts and the authorities behind them:

What are emergency alerts?

Whenever there’s a serious emergency affecting a large group of people, it can be important to deliver information swiftly and through reliable channels.

In 2006, then-President George W. Bush signed an executive order to set up an “effective, reliable, integrated, flexible and comprehensive system” to alert and warn the American people in situations of war, terrorist attack, natural disaster or other hazards to public safety.

Under that order, the Federal Emergency Management Agency created something called the Integrated Public Alert & Warning System, which is now used by government and emergency agencies across the United States to communicate with the American people in times of trouble. IPAWS can be used to deliver many different kinds of emergency alerts, including Amber Alerts, severe weather warnings and messages like the kerosene alert sent in Baltimore County on Wednesday.



NORTHPORT, N.Y. – Cybersecurity Ventures is excited to release this special first annual edition of the Cybersecurity Almanac, a handbook containing the most pertinent statistics and information for tracking cybercrime and the cybersecurity market.

Cisco’s commitment to security and partnerships starts at the top, and it’s one of the reasons why we’re collaborating with them. “At Cisco, security is foundational to everything we do,” said Chuck Robbins, Chairman and CEO. Last year Cisco blocked seven trillion threats, or 20 billion threats a day, on behalf of their customers, according to Robbins.

Cisco and Cybersecurity Ventures have compiled 100 of the most important facts, figures, statistics, and predictions to help frame the global cybercrime landscape, and what the cybersecurity industry is doing to help protect governments, citizens, and organizations globally.

Cybersecurity Ventures formulates its own ground-up research — and vets, synthesizes, and repurposes research from the most credible sources (analysts, researchers, associations, vendors, industry experts, media publishers) — to provide readers with a bird’s-eye view of the most dangerous cyber threats and the most important solutions.



By Alex Winokur, founder of Axxana


Disaster recovery is now on the list of top concerns of every CIO. In this article we review the evolution of the disaster recovery landscape, from its inception until today. We look at the current understanding of disaster behavior and, consequently, of disaster recovery processes. We also try to cautiously anticipate the future, outlining the main challenges associated with disaster recovery.

The Past

The computer industry is relatively young. The first commercial computers appeared in the 1950s—not even seventy years ago. The history of disaster recovery (DR) is even younger. Table 1 outlines the appearance of the various technologies necessary to construct a modern DR solution.


Table 1 – Early history of DR technology development


From Magnetic Tapes to Data Networks

The first magnetic tapes for computers were used as input/output devices. That is, input was punched onto punch cards that were then stored offline to magnetic tapes. Later, UNIVAC I, one of the first commercial computers, was able to read these tapes and process their data. Later still, output was similarly directed to magnetic tapes that were connected offline to printers for printing purposes. Tapes began to be used as a backup medium only after 1954, with the introduction of the mass storage device (RAMAC).


Figure 1: First Storage System - RAMAC


Although modern wide-area communication networks date back to 1974, data has been transmitted over long-distance communication lines since 1837 via telegraphy systems. These telegraphy communications have since evolved to data transmission over telephone lines using modems.

Modems were first deployed at scale in 1958 to connect United States air defense systems; however, their throughput was very low compared to what we have today. The FAA’s clustered system used communication links originally designed for computers to communicate with their peripherals (e.g., tapes). Local area networks (LANs) as we now know them had not been invented yet.

Early Attempts at Disaster Recovery

It wasn’t until the 1970s that concerns about disaster recovery started to emerge. In that decade, the deployment of IBM 360 computers reached a critical mass, and they became a vital part of almost every organization. Until the mid-1970s, the perception was that if a computer failed, it would be possible to fall back to paper-based operation as was done in the 1960s. However, the widespread rise of digital technologies in the 1970s led to a corresponding increase in technological failures; at the same time, theoretical calculations, backed by real-world evidence, showed that switching back to paper-based work was not practical.

The emergence of terrorist groups in Europe like the Red Brigades in Italy and the Baader-Meinhof Group in Germany further escalated concerns about the disruption of computer operations. These left-wing organizations specifically targeted financial institutions. The fear was that one of them would try to blow up a bank’s data centers.

At that time, communication networks were in their infancy, and replication between data centers was not practical.

Parallel workloads. IBM came up with the idea of using the FAA clustering technology to build two adjoining computer rooms, separated by a steel wall, with one cluster node in each room. The idea was to run the same workload twice and to be able to immediately fail over from one system to the other if one of them was attacked. A closer analysis revealed that in the case of a terror attack, the only surviving object would be the steel wall, so the plan was abandoned.

Hot, warm, and cold sites. The inability of computer vendors (IBM was the main vendor at the time) to provide an adequate DR solution made way for dedicated DR firms like SunGard to provide hot, warm, or cold alternate sites. Hot sites, for example, were duplicates of the primary site; they independently ran the same workloads as the primary site, as communication between the two sites was not available at the time. Cold sites served as repositories for backup tapes. Following a disaster at the primary site, operations would resume at the cold site by allocating equipment, restoring from backups, and restarting the applications. Warm sites were a compromise between a hot site and a cold site. These sites had hardware and connectivity already established; however, recovery was still done by restoring the data from backups before the applications could be restarted.

Backups and high availability. The major advances in the 1980s were around backups and high availability. On the backup side, regulations requiring banks to have a testable backup plan were enacted. These were probably the first DR regulations to be imposed on banks; many more followed through the years. On the high availability side, Digital Equipment Corporation (DEC) made the most significant advances in LAN communications (DECnet) and clustering (VAXcluster).

The Turning Point

On February 26, 1993 the first bombing of the World Trade Center (WTC) took place. This was probably the most significant event shaping the disaster recovery solution architectures of today. People realized that the existing disaster recovery solutions, which were mainly based on tape backups, were not sufficient. They understood that too much data would be lost in a real disaster event.

SRDF. By this time, communication networks had matured, and EMC became the first to introduce storage-to-storage replication software, called Symmetrix Remote Data Facility (SRDF).


Behind the Scenes at IBM

At the beginning of the nineties, I was with IBM’s research division. At the time, we were busy developing a very innovative solution to shorten the backup window: backups were the foundation for all DR, and the existing backup windows (dead hours during the night) were starting to be insufficient to complete the daily backup. The solution, called concurrent copy, was the ancestor of all snapshotting technologies, and it was the first intelligent function running within the storage subsystem. The WTC event in 1993 left IBM fighting the “yesterday’s battles” of developing a backup solution, while giving EMC the opportunity to introduce storage-based replication and become the leader in the storage industry.


The first few years of the 21st century will always be remembered for the events of September 11, 2001—the date of the complete annihilation of the World Trade Center. Government, industry, and technology leaders realized then that some disasters can affect the whole nation, and therefore DR had to be taken much more seriously. In particular, the attack demonstrated that existing DR plans were not adequate to cope with disasters of such magnitude. The notion of local, regional, and nationwide disasters crystalized, and it was realized that recovery methods that work for local disasters don’t necessarily work for regional ones.

SEC directives. In response, the Securities and Exchange Commission (SEC) issued a set of very specific directives in the form of the “Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System.” These regulations, still in effect today, bind all financial institutions. The DR practices that were codified in the SEC regulations quickly propagated to other sectors, and disaster recovery became a major area of activity for all organizations relying on IT infrastructure.

The essence of these regulations is as follows:

  1. The economic stance of the United States cannot be compromised under any circumstance.
  2. Relevant financial institutions are obliged to resume operations correctly, without any data loss, by the next business day following a disaster.
  3. Alternate disaster recovery sites must use different physical infrastructure (electricity, communication, water, transportation, and so on) than the primary site.

Note that Requirements 2 and 3 above are somewhat contradictory. Requirement 2 necessitates synchronous replication to facilitate zero data loss, while Requirement 3 basically dictates long distances between sites—thereby making the use of synchronous replication impossible. This contradiction is not addressed within the regulations and is left to each implementer to deal with at its own discretion.
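A rough back-of-the-envelope calculation shows why the two requirements conflict: every synchronous write must wait for a full round trip between the sites, and light in optical fiber covers only about 200,000 km per second. The sketch below is purely illustrative; the function name and distances are assumptions for this example, not figures from the regulations.

```python
# Back-of-the-envelope latency added by synchronous replication.
# Light in optical fiber travels at roughly 200,000 km/s (~2/3 c),
# and each synchronous write waits for a full round trip before
# it can be acknowledged.

def sync_write_latency_ms(distance_km, fiber_speed_km_per_s=200_000):
    """Added latency per synchronous write, in milliseconds."""
    round_trip_km = 2 * distance_km
    return round_trip_km / fiber_speed_km_per_s * 1000

for d_km in (5, 50, 500):
    print(f"{d_km:>3} km between sites -> ~{sync_write_latency_ms(d_km):.2f} ms per write")
```

At metro distances the penalty is negligible, but at the hundreds of kilometers implied by full infrastructure separation it begins to dominate transaction latency, which is why Requirement 3 effectively rules out synchronous replication.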

The secret to resolving this contradiction lies in the ability to reconstruct missing data if or when data loss occurs. The nature of most critical data is such that there is always at least one other instance of this data somewhere in the universe. The trick is to locate it, determine how much of it is missing in the database, and augment the surviving instance of the database with this data. This process is called data reconciliation, and it has become a critical component of modern disaster recovery. [See The Data Reconciliation Process sidebar.]


The Data Reconciliation Process

If data is lost as a result of a disaster, the database becomes misaligned with the real world. The longer this misalignment exists, the greater the risk of application inconsistencies and operational disruptions. Therefore, following a disaster, it is very important to realign the databases with the real world as soon as possible. This process of alignment is called data reconciliation.

The reconciliation process has two important characteristics:

  1. It is based on the fact that the data lost in a disaster exists somewhere in the real world, and thus it can be reconstructed in the database.
  2. The duration and complexity of the reconciliation is proportional to the recovery point objective (RPO); that is, it’s proportional to the amount of data lost.

One of the most common misconceptions in disaster recovery is that RPO (for example, RPO = 5 minutes) refers to how many minutes of data the organization is willing to lose. What RPO really means is that the organization must be able to reconstruct and reconsolidate (i.e., reconcile) that last five minutes of missing data. Note that the higher the RPO (and therefore, the greater the data loss), the longer the RTO and the costlier the reconciliation process. Catastrophes typically occur when RPO is compromised and the reconciliation process takes much longer.

In most cases, the reconciliation process is quite complicated, consisting of time-consuming processes to identify the data gaps and then resubmitting the missing transactions to realign the databases with real-world status. This is a costly, mainly manual, error-prone process that greatly prolongs the recovery time of the systems and magnifies risks associated with downtime.
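The gap-identification step of reconciliation can be sketched in a few lines. This is a hypothetical illustration, assuming an external source of record (for example, a payment network’s log) survives the disaster; the names `Transaction`, `find_data_gap`, and `reconcile` are invented for this example and come from no real DR product.

```python
# Hypothetical sketch of post-disaster data reconciliation:
# compare the surviving database against an external source of
# record and resubmit whatever transactions are missing.

from dataclasses import dataclass

@dataclass(frozen=True)
class Transaction:
    txn_id: int
    payload: str

def find_data_gap(source_of_record, surviving_db):
    """Transactions that exist in the real world but were lost from the DB."""
    survived = {t.txn_id for t in surviving_db}
    return [t for t in source_of_record if t.txn_id not in survived]

def reconcile(source_of_record, surviving_db):
    """Realign the database by resubmitting missing transactions in order."""
    missing = find_data_gap(source_of_record, surviving_db)
    return surviving_db + sorted(missing, key=lambda t: t.txn_id)

# Five real-world transactions; the last two were lost in the disaster.
source = [Transaction(i, f"order-{i}") for i in range(1, 6)]
db = source[:3]
print([t.txn_id for t in find_data_gap(source, db)])  # [4, 5]
print(len(reconcile(source, db)))                     # 5
```

In practice there is no single clean source of record: each missing transaction must be hunted down across external systems and resubmitted, often by hand, which is why the duration of reconciliation grows with the amount of data lost.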


The Present

The second decade of the 21st century has been characterized by new types of disaster threats, including sophisticated cyberattacks and extreme weather hazards caused by global warming. It is also characterized by new DR paradigms, like DR automation, disaster recovery as a service (DRaaS), and active-active configurations.

These new technologies are for the most part still in their infancy. DR automation tools attempt to orchestrate a complete site recovery through invocation of one “site failover” command, but they are still very limited in scope. A typical tool in this category is the VMware Site Recovery Manager (SRM). DRaaS attempts to reduce the cost of DR-compliant installation by locating the secondary site in the cloud. The new active-active configurations try to reduce equipment costs and recovery time by utilizing techniques from the high availability context; that is, techniques designed to recover from a component failure rather than a complete site failure.

Disasters vs. Catastrophes

The following definitions of disasters and disaster recovery have been refined over the years to make a clear distinction between the two main aspects of business continuity: high availability protection and disaster recovery. This distinction is important because it crystalizes the difference between disaster recovery and recovery from a single component failure, which is covered by highly available configurations; in doing so, it also accounts for the limitations of using active-active solutions for DR.

A disaster in the context of IT is either a significant adverse event that causes an inability to continue operation of the data center or a data loss event where recovery cannot be based on equipment at the data center. In essence, disaster recovery is a set of procedures aimed to resume operations following a disaster by failing over to a secondary site.

From a DR procedures perspective, it is customary to classify disasters into 1) regional disasters like weather hazards, earthquakes, floods, and electricity blackouts and 2) local disasters like local fires, onsite electrical failures, and cooling system failures.

Over the years, I have also noticed a third, independent classification of disasters. Disasters can also be classified as catastrophes. In principle, a catastrophe is a disastrous event where, in the course of a disaster, something very unexpected happens that causes the disaster recovery plans to dramatically miss their service level agreement (SLA); that is, they typically exceed their recovery time objective (RTO).

When DR procedures go as planned for regional and local disasters, organizations fail over to a secondary site and resume operations within pre-determined parameters for recovery time (i.e., RTO) and data loss (i.e., RPO). The organization’s SLAs, business continuity plans, and risk management goals align with these objectives, and the organization is prepared to accept the consequent outcomes. A catastrophe occurs when these SLAs are compromised.

Catastrophes can also result from simply failing to execute the DR procedures as specified, typically due to human errors. However, for the sake of this article, let’s be optimistic and assume that DR plans are always executed flawlessly. We shall concentrate only on unexpected events that are beyond human control.

Most of the disaster events that have been reported in the news recently (for example, the Amazon Prime Day outage in July 2018 and the British Airways bank holiday outage in 2017) have been catastrophes related to local disasters. If DR could have been properly applied to the disruptions at hand, nobody would have noticed that there had been a problem, as the DR procedures were designed to provide almost zero recovery time and hence zero down time.

The following two examples provide a closer look at how catastrophes occur.

9/11 – Following the September 11 attack, several banks experienced major outages. Most of them had a fully equipped alternate site in Jersey City—no more than five miles away from their primary site. However, the failover failed miserably because the banks’ DR plans called for critical personnel to travel from their primary site to their alternate site, but nobody could get out of Manhattan.

A data center power failure during a major snow storm in New England – Under normal DR operations at this organization, the data was synchronously replicated to an alternate site. However, 90 seconds prior to a power failure at the primary site, the central communication switch in the area lost power too, which cut all WAN communications. As a result, the primary site continued to produce data for 90 seconds without replication to the secondary site; that is, until it experienced the power failure. When it finally failed over to the alternate site, 90 seconds of transactions were missing; and because the DR procedures were not designed to address recovery where data loss has occurred, the organization experienced catastrophic down time.

The common theme of these two examples is that in addition to the disaster at the data center there was some additional—unrelated—malfunction that turned a “normal” disaster into a catastrophe. In the first case, it was a transportation failure; in the second case, it was a central switch failure. Interestingly, both failures occurred to infrastructure elements that were completely outside the control of the organizations that experienced the catastrophe. Failure of the surrounding infrastructure is indeed one of the major causes for catastrophes. This is also the reason why the SEC regulations put so much emphasis on infrastructure separation between the primary and secondary data center.

Current DR Configurations

In this section, I’ve included examples of two traditional DR configurations that separate the primary and secondary centers, as stipulated by the SEC. These configurations have predominated in the past decade or so, but they cannot ensure zero data loss in rolling disasters and other disaster scenarios, and they are being challenged by new paradigms such as that introduced by Axxana’s Phoenix. While a detailed discussion would be outside the scope of this article, suffice it to say that Axxana’s Phoenix makes it possible to avoid catastrophes such as those just described—something that is not possible with traditional synchronous replication models.


Figure 2 – Typical DR configuration


Typical DR configuration. Figure 2 presents a typical disaster recovery configuration. It consists of a primary site, a remote site, and another set of equipment at the primary site, which serves as a local standby.

The main goal of the local standby installation is to provide redundancy to the production equipment at the primary site. The standby equipment is designed to provide nearly seamless failover capabilities in case of an equipment failure—not in a disaster scenario. The remote site is typically located at a distance that guarantees infrastructure independence (communication, power, water, transportation, etc.) to minimize the chances of a catastrophe. It should be noted that the typical DR configuration is very wasteful. Essentially, an organization has to triple the cost of equipment and software licenses—not to mention the increased personnel costs and the cost of high-bandwidth communications—to support the configuration of Figure 2.


Figure 3 – DR cost-saving configuration


Traditional ideal DR configuration. Figure 3 illustrates the traditional ideal DR configuration. Here, the remote site serves both for DR purposes and high availability purposes. Such configurations are sometimes realized in the form of extended clusters like Oracle RAC One Node on Extended Distance. Although traditionally considered the ideal, they are a trade-off between survivability, performance, and cost. The organization saves on the cost of one set of equipment and licenses, but it compromises survivability and performance. That’s because the two sites have to be in close proximity to share the same infrastructure, so they are more likely to both be affected by the same regional disasters; at the same time, performance is compromised due to the increased latency caused by separating the two cluster nodes from each other.


Figure 4 – Consolidation of DR and high availability configurations with Axxana’s Phoenix

True zero-data-loss configuration. Figure 4 represents a cost-saving solution with Axxana’s Phoenix. In case of a disaster, Axxana’s Phoenix provides zero-data-loss recovery to any distance. So, with the help of Oracle’s high availability support (fast start failover and transparent application failover), Phoenix provides functionality very similar to extended cluster functionality. With Phoenix, however, it can be implemented over much longer distances and with much smaller latency, providing true cost savings over the typical configuration shown in Figure 2.

The Future

In my view, the future is going to be a constant race between new threats and new disaster recovery technologies.

New Threats and Challenges

In terms of threats, global warming creates new weather hazards that are fiercer, more frequent, and far more damaging than in the past—and in areas that have not previously experienced such events. Terror attacks are on the rise, thereby increasing threats to national infrastructures (potential regional disasters). Cyberattacks—in particular ransomware, which destroys data—are a new type of disaster. They are becoming more prolific, more sophisticated and targeted, and more damaging.

At the same time, data center operations are becoming more and more complex. Data is growing exponentially. Instead of getting simpler and more robust, infrastructures are getting more diversified and fragmented. In addition to legacy architectures that aren’t likely to be replaced for a number of years to come, new paradigms like public, hybrid, and private clouds; hyperconverged systems; and software-defined storage are being introduced. Adding to that are an increasing scarcity of qualified IT workers and economic pressures that limit IT spending. All combined, these factors contribute to data center vulnerabilities and to more frequent events requiring disaster recovery.

So, this is on the threat side. What is there for us on the technology side?

New Technologies

Of course, Axxana’s Phoenix is at the forefront of new technologies that guarantee zero data loss in any DR configuration (and therefore ensure rapid recovery), but I will leave the details of our solution to a different discussion.

AI and machine learning. Apart from Axxana’s Phoenix, the most promising technologies on the horizon revolve around artificial intelligence (AI) and machine learning. These technologies enable DR processes to become more “intelligent,” efficient, and predictive by using data from DR tests, real-world DR operations, and past disaster scenarios; in doing so, disaster recovery processes can be designed to better anticipate and respond to unexpected catastrophic events. These technologies, if correctly applied, can shorten RTO and significantly increase the success rate of disaster recovery operations. The following examples suggest only a few of their potential applications in various phases of disaster recovery:

  • They can be applied to improve the DR planning stage, resulting in more robust DR procedures.
  • When a disaster occurs, they can assist in the assessment phase to provide faster and better decision-making regarding failover operations.
  • They can significantly improve the failover process itself, monitoring its progress and automatically invoking corrective actions if something goes wrong.

When these technologies mature, the entire DR cycle from planning to execution can be fully automated. They carry the promise of much better outcomes than processes done by humans because they can process and better “comprehend” far more data in very complex environments with hundreds of components and thousands of different failure sequences and disaster scenarios.

New models of protection against cyberattacks. The second front where technology can greatly help with disaster recovery is the cyberattack front. Right now, organizations are spending millions of dollars on various intrusion prevention, intrusion detection, and asset protection tools. The evolution should be from protecting individual organizations to protecting the global network. Instead of fragmented, per-organization defense measures, the global communication network should be “cleaned” of threats that can create data center disasters. So, for example, phishing attacks that would compromise a data center’s access control mechanisms should be filtered out in the network—or in the cloud—instead of reaching and being filtered at the endpoints.


Disaster recovery has come a long way—from naive tape backup operations to complex site recovery operations and data reconciliation techniques. The expenses associated with disaster protection don’t seem to go down over the years; on the contrary, they are only increasing.

The major challenge of DR readiness is in its return on investment (ROI) model. On one hand, a traditional zero-data-loss DR configuration requires organizations to implement and manage not only a primary site, but also a local standby and remote standby; doing so essentially triples the costs of critical infrastructure, even though only one third of it (the primary site) is utilized in normal operation.

On the other hand, if a disaster occurs and the proper measures are not in place, the financial losses, reputation damage, regulatory backlash, and other risks can be devastating. As organizations move into the future, they will need to address the increasing volumes and criticality of data. The right disaster recovery solution will no longer be an option; it will be essential for mitigating risk, and ultimately, for staying in business.

Thursday, 07 February 2019 18:15

Disaster Recovery: Past, Present, and Future

(TNS) - The Georgia Emergency Management and Homeland Security Agency encourages Georgians to become proactive about preparing for severe weather by participating in Severe Weather Preparedness Week (Feb. 4-8).

“This state has an unpredictable history when it comes to severe weather,” said GEMA/HS Director Homer Bryson. “Whether it’s hurricanes, tornadoes or severe thunderstorms, Georgians need to be sure of one thing … that they’re prepared for any disaster. During Severe Weather Preparedness Week, we’re dedicated to educating our citizens on how to better prepare for sudden weather events.”

Spring (March, April, May) is typically the time when the threat of tornadoes, damaging winds, large hail and frequent lightning from severe storms is at its highest across Georgia. Take advantage of Severe Weather Preparedness Week to review your family's emergency procedures and prepare for weather-related hazards.



RPA is software that mimics the activity of a human being in carrying out tasks within a business process, thereby freeing human capital to be utilized in other areas. The software bots are programmed to do manual tasks and are relatively lightweight in that they reside on top of existing systems and applications. Recent surveys indicate that anywhere from 30-50% of RPA projects fail. Ever wondered why there are so many instances of companies not making it past the initial stages of their RPA initiative? The lack of a consistent process to identify the right automation opportunities and prioritize them inevitably results in organizations fumbling early in their RPA journey, and in some cases giving up on it altogether. Identifying and prioritizing candidates for automation are critical steps before one can pilot RPA and build the business case to move forward.

Figure 1 outlines the four-step approach we recommend to begin the RPA journey, each step of which involves engaging the right stakeholders, who not only have the authority to make decisions but also have sufficient insight into the process areas under consideration.
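One consistent way to prioritize automation candidates is a simple weighted scoring model. The sketch below is only an illustration of the idea; the criteria, weights, and process names are invented for this example and are not taken from the four-step approach in Figure 1.

```python
# Illustrative weighted scoring for ranking RPA candidates.
# Each criterion is rated 0-1; a higher total means a better candidate.

WEIGHTS = {"rule_based": 0.4, "volume": 0.3, "stability": 0.2, "error_rate": 0.1}

def rpa_score(process, weights=WEIGHTS):
    """Weighted sum over the scoring criteria (ignores non-criterion keys)."""
    return sum(weights[k] * process[k] for k in weights)

candidates = [
    {"name": "invoice entry",     "rule_based": 0.9, "volume": 0.8,
     "stability": 0.9, "error_rate": 0.6},
    {"name": "vendor onboarding", "rule_based": 0.5, "volume": 0.3,
     "stability": 0.4, "error_rate": 0.2},
]

ranked = sorted(candidates, key=rpa_score, reverse=True)
for c in ranked:
    print(f"{c['name']}: {rpa_score(c):.2f}")
```

Highly rule-based, high-volume, stable processes rise to the top of the ranking, which matches the article's point that a consistent selection process should come before any pilot.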



The ability of an organization to continue operating during a disruption has never been more important. So it’s no surprise that ISO 22301, the internationally recognized standard for a business continuity management system (BCMS), is being updated to make sure it remains relevant to today’s business environment.

As the first ISO standard based on the High Level Structure (HLS), it has a strong foundation that now aligns with many other internationally recognized management system standards such as ISO 9001 quality management and ISO/IEC 27001 information security management. However, there are areas of improvement highlighted by users, particularly around less prescriptive procedures and updated terms and definitions, that need to be considered to ensure it remains relevant in a changing business landscape.



(TNS) - Some three months after Tropical Storm Michael caused damage in North Carolina, the Federal Emergency Management Agency has declared 21 counties eligible for federal aid.

Michael — which made landfall in the Florida panhandle in October and then made its way north through the Carolinas — caused flooding and wind damage through central North Carolina. The tropical storm, which had been a hurricane when it made landfall in Florida, came just a few weeks after Hurricane Florence ravaged the eastern part of the state with flooding and wind damage.

The federal disaster designation from FEMA will allow city and county governments, state agencies, some non-profits and religious institutions to be paid back for money used to repair buildings and infrastructure.

“This is good news for cities, towns and counties that suffered damages from Michael, which came right on the heels of Hurricane Florence,” N.C. Gov. Roy Cooper, who requested the designation from FEMA, said in a statement. “Cleaning up from Michael took a lot of local government resources, and this will help communities recover those funds.”



(TNS) - More than four months after Hurricane Florence battered the state, rivers of waste are still flowing to landfills in eastern North Carolina in volumes that their managers say they have never before seen.

Uprooted trees, broken furniture, sodden carpets, soggy sheet rock, smashed fencing, crushed carports and moldy clothing make up the mix of items destroyed by the September storm and subsequent flooding.

The trash piling up at some sites may not be disposed of until summer — or perhaps not until next year. Caravans of trucks are bringing new waste daily, and solid waste workers are logging major overtime to keep up with the load.



Local emergency managers are looking for about 200 volunteers to be "crisis actors" and help make more realistic an evacuation drill in March.

Chatham Emergency Management Agency will conduct a full-scale exercise March 26 at the Coastal Georgia Center and the Savannah Civic Center. This exercise will test the Evacuation Assembly Area plan for the county. The plan is implemented when the public needs transportation assistance during a county evacuation order, as happened with Hurricanes Irma and Matthew. The last time the exercise was conducted was 2015.

Volunteers should be willing to play the role of actors, simulating the general population. The volunteers will be transported from the Coastal Georgia Center to the Civic Center, where they will be "screened and processed" before being returned to the Coastal Georgia Center. They will be transported back and forth multiple times throughout the exercise, but will have opportunities to rest. Food will be provided at both sites.



Software attacks, theft of intellectual property or sabotage are just some of the many information security risks that organizations face. And the consequences can be huge. Most organizations have controls in place to protect them, but how can we ensure those controls are enough? The international reference guidelines for assessing information security controls have just been updated to help.

For any organization, information is one of its most valuable assets, and data breaches can cost heavily in terms of lost business and cleaning up the damage. Thus, the controls in place need to be rigorous enough to protect it and must be monitored regularly to keep up with changing risks.

Developed by ISO and the International Electrotechnical Commission (IEC), ISO/IEC TS 27008, Information technology – Security techniques – Guidelines for the assessment of information security controls, provides guidance on assessing the controls in place to ensure they are fit for purpose, effective and efficient, and in line with company objectives.

The technical specification (TS) has recently been updated to align with new editions of other complementary standards on information security management, namely ISO/IEC 27000 (overview and vocabulary), ISO/IEC 27001 (requirements) and ISO/IEC 27002 (code of practice for information security controls), all of which are referenced within.



As a higher education administrator, you know better than anyone the importance of timely communication on a campus, especially in a crisis.

In 2018, we saw schools across the country suffer from violence, and together we had to accept that campuses are targets for what was once the unthinkable.

Then there are other risks such as severe weather conditions or even day-to-day communications that need to be addressed. The one thing that is clear is the importance of having effective communication strategies in place to ensure campus safety.

Don’t assume that your campus is immune to crisis. The number of shootings on or near college campuses increased by 153% between the 2001 and 2006 academic years. And shooting incidents are predicted to increase during the next decade. Take action in 2019 to improve communication with your many stakeholders, including students, faculty and staff, families, community members, and others.

We have the tips you need to make better communication a reality for your campus in 2019.



So you live in Wisconsin, far away from any hurricanes and ocean storm surges. You don’t need flood insurance, right?

Wrong. The big thaw spreading across the Midwest is a perfect lesson for why you do, in fact, need flood insurance.

The Wall Street Journal reported that in Lone Rock, Wisconsin, temperatures rose 80 degrees in three days, from minus 39 to 41. That kind of temperature swing is a recipe for floods.

Most obviously, melting ice and snow can swell rivers. But especially worrisome are “ice jams,” which form when frozen rivers melt into large ice chunks that can lodge together and block the river’s flow. In the worst cases, these artificial dams cause serious flooding in the area around the river.



3 Predictions for 2019

From Google’s GDPR violation to data breaches happening just hours after the new year, 2019 is off to a crazy start, especially for risk managers. In anticipation of the months ahead, LogicGate CEO Matt Kunkel predicts what GRC professionals should be prepared for in 2019. 

There’s no doubt risk managers stayed busy in 2018. From the GDPR rollout in May to numerous data breaches, these events come as no surprise to industry observers. To industry pros, data breaches are no longer seen in terms of “if,” but “when.” Every year, we continue to see companies collect enormous volumes of personal data, increasing the pressure placed on risk managers. However, in 2018, companies were finally held accountable for failing to protect customer data – just ask Mark Zuckerberg.

Looking ahead, what can GRC professionals expect in 2019? Below, I discuss three issues GRC professionals should be prepared for in 2019.



Monday, 04 February 2019 16:53

The State Of GRC

Rick Cudworth and Abigail Worsfold from the Deloitte crisis and resilience team provide a review of the new PD CEN/TS 17091 European technical specification for crisis management, which was launched in December 2018. 

A new technical specification for crisis management calls for a more strategic approach to the discipline. PD CEN/TS 17091 ‘Crisis Management – Building a Strategic Capability’ is a welcome intervention designed to help organizations develop this important capability. In this article we highlight four specific areas where the new technical specification advances good practice and provides more detailed guidance:

Crisis Management as a strategic capability

The technical specification’s expanded title – ‘building a strategic capability’ – is significant.

First, when things go wrong, and inevitably they will at some point, responding effectively will help keep the organization on track. Research published by Aon and Pentland Analytics (Reputation Risk in the Cyber Age – The Impact on Shareholder Value, August 2018) shows that companies that effectively respond to a crisis will out-perform those that don’t in terms of shareholder value. Organizations that see crisis management as a strategic discipline, are more likely to respond effectively when a crisis occurs.



Align Your Risk Approach to Your Unique Business Realities

LockPath’s Colby Smith discusses the reasons an integrated approach to risk management is an imperative – chief among them digital processes, global business and a reliance on third parties.

Digital transformation, globalization and outsourcing have given rise to unprecedented productivity, innovation, efficiency, collaboration and knowledge. However, with these business improvements come new risks.

Modern business risks are multifaceted: they impact operations and compliance simultaneously, morph from cyber to supply chain risk or any number of combinations. The interdependencies created by digital systems, third parties and automation can lead to a cascade of negative incidents if the interplay of risk factors is not carefully considered and managed. Enterprises can no longer afford to silo risk management efforts. Too many blind spots and hazards lie in the space between departmentalized programs.

Integrated risk management (IRM) practices and technology solutions are designed to address an enterprise’s particular ecosystem of risks. Gartner defines IRM as “a set of practices and processes supported by a risk-aware culture and enabling technologies that improve decision-making and performance through an integrated view of how well an organization manages its unique sets of risks.” Gartner recently introduced a new Magic Quadrant for IRM, confirming its growing importance as an advanced approach to dealing with ever-changing combinations of cyber, operational, geopolitical, regulatory, legal and financial risks.



Friday, 01 February 2019 15:06

The Case For Integrated Risk Management

(TNS) - More than a dozen people bustled about the basement of The River - A Community Church in New Kensington at midday Wednesday.

Pots of chicken noodle and vegetable broth soups simmered on a stovetop while flaky, doughy biscuits baked in the oven below.

Some volunteers prepped a refreshment area offering a variety of drinks and snacks — freshly brewed coffee, bottled water, fruit juice, hot chocolate and made-from-scratch cookies as well as store-bought ones. Others set up several tables and chairs and put on display an array of donated items up for grabs — beanies, scarves, gloves, coats and baby blankets.

Shortly before 3 p.m., a pair of men lugged outside a large sign labeled “Warming Center” and planted it in the church’s snow-blanketed lawn with an arrow pointing passersby to the basement entrance.



A new report published by Lloyds explores the impacts and economic costs of a future highly effective ransomware attack and concludes that the global economy is not ready to deal with such an attack.

The report, ‘Bashe attack: Global infection by contagious malware’, explores a scenario in which a ransomware attack is launched through an infected email, which once opened is forwarded to all contacts and within 24 hours encrypts all data on nearly 30 million devices worldwide.

The report estimates a cyber attack on this scale could cost $193bn and affect more than 600,000 businesses worldwide, and states that the global economy is underprepared for these types of incidents, with 86 percent of the total economic losses uninsured, leaving an insurance gap of $166bn.
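The report's headline figures are internally consistent, as a quick check of the arithmetic shows:

```python
# Checking the figures from the Lloyd's 'Bashe attack' scenario:
# an estimated $193bn total economic loss, 86% of it uninsured.
total_loss_bn = 193
uninsured_share = 0.86

insurance_gap_bn = total_loss_bn * uninsured_share   # uninsured losses
insured_bn = total_loss_bn - insurance_gap_bn        # covered losses

print(f"Insurance gap: ${insurance_gap_bn:.0f}bn")   # matches the reported $166bn
print(f"Insured losses: ${insured_bn:.0f}bn")
```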



(TNS) - Safety across the region was a priority Wednesday, Jan. 30, as arctic air and wind drove wind chill values to as low as 65 below zero.

Hundreds of flights were cancelled across the Midwest on Wednesday. At Chicago's O'Hare International Airport, over 600 flights — about half of the flights originating from the airport — were cancelled and over 600 flights into O'Hare were also cancelled, according to flight tracker FlightAware.

Holland resident and Holland United Church of Christ pastor Bryan Berghoef has been in Toronto for work, and said his flights to Chicago have been cancelled for the past three days.

"It's very cold here," Berghoef said. "It's one below zero and feels like 11 below. Fortunately, I am staying with friends, so not stuck in the airport or at a hotel."



Just like athletes, business continuity practitioners need to be mentally tough in order to perform well in the face of the many stresses that go with their role. In today’s post, we’ll look at some of the factors that make business continuity uniquely stressful and share some tips to help you execute at a high level even when you’re under pressure.

If there’s one thing you can say for sure about the Super Bowl this weekend, it’s that it will feature exceptional athletes and coaches striving to perform to the best of their ability under great pressure. The team that does the best job of maintaining its poise and focus will probably win.

In business continuity, we talk a lot about the need to make our organizations resilient, but we must also be resilient as individuals, leaders, and crisis managers. Just like in sports, our mental robustness will be a big factor in how well we perform.



Who doesn’t love a meeting? Well, quite a few of us, actually. Here Loulla-Mae Eleftheriou-Smith asks the experts for their advice on minimizing meeting time while maximizing results, whether that means standing gatherings or curated guest lists

Meetings, scheduled or impromptu, can be something of a minefield when they’re conducted badly. Getting pulled into team discussions, summoned for a company update, or even just grabbed for a quick catch-up that ends up taking 45 minutes is easily done. But let them overrun your day and it can feel as if you have no time left for your actual work.

With research from eShare(1) showing that the average worker attends 4.4 meetings a week – more than half of which are thought to be unnecessary – it’s easy to see why. But when meetings are done well, they benefit everyone in the room. Here we speak to a range of experts about their tips for making meetings more productive.



Thursday, 31 January 2019 16:07

How to Have More Productive Meetings

2018 brought no calm to the smartphone industry, and Apple’s Q1 2019 earnings are an indication that the trend will continue this year. After years of consistent double-digit growth rates, a slowdown in smartphone sales that started in 2017 continued in 2018. Some reasons behind this include:

  • Major economies slowed. In 2018, many of the world’s large economies began to lose steam (or even come to a halt). While the economic challenges in Latin America and some African countries have become familiar in the past few years, 2018 saw China’s GDP growth dip, as well. This has had a direct impact on smartphone demand in China and has affected the overall global smartphone picture.
  • Smartphone penetration in key markets reached saturation. Smartphone demand has slowed due to market saturation in most North American and European countries, so the primary opportunities there stem from device replacement (“replacement opportunity”). Forrester forecasts that these economies will add no more than 91 million new smartphone subscribers — just 11% of the total number of new smartphone subscribers that will be added through 2023. Moreover, increased prices mean that consumers will replace their phones less frequently, further slowing smartphone demand.


The secret to leading a remote team effectively comes down to communication and cultivating caring relationships. Etan Smallman reports

While a prized window seat, flexible working hours and access to a fancy coffee machine may all make us feel more positive about work, they pale in comparison to the importance of having a good boss.

According to Gallup’s 2017 State of the American Workplace, about half of US workers have left a job to get away from a terrible boss. And only 21% of workers think their performance “is managed in a way that motivates them to do outstanding work.” In contrast, other studies show that a boss who is able to engage his team can expect higher productivity levels and reduced costs from staff turnover.

If you manage a team, you may be asking yourself how you can be this second kind of leader – especially if you’re someone managing a team of remote workers. Luckily, best-selling author and self-styled "Doctor of Happiness" Andy Cope is on hand to help. He’s spent more than a decade studying the impact positivity has on inspiring others and argues that a leader’s job is not to inspire people, but to be inspired.

Following the release of his latest book, Leadership: the Multiplier Effect, he explores how this applies to the new flexible world of work.



Technology has changed virtually everything: the way we work, the way we live, the way we play, and, at a basic level, how humans relate to each other.

On the business front, it has fueled the power shift from institutions to customers, powered the emergence of startups and super-scaled platforms that are undercutting and remaking traditional markets, and changed the way employees work.

Technology’s impact is both revolutionary and unsurprising. Ironically, it has not (yet) changed the fundamental model of IT. But that is going to change.

It’s not just the inevitable force of technology. Here is a shortlist of dynamics that conspire to create a far different future for IT:



If you want to be a remote worker, you may need to convince your boss. Daniel Mobbs has the lowdown on the skills required, from effective time management to confident communications

While remote working is becoming increasingly popular, not everyone is a natural-born remote employee. Where some people are primed to excel beyond the four walls of the office HQ, others are more likely to crash and burn without plans and strategies in place for a new way of working. “Candidates must be honest about their own ability to handle the heightened responsibilities and expectations that accompany remote work,” says Anthony Curlo, CEO of IT recruiting and staff augmentation firm DaVinciTek(1).

The good news is that these skills and strategies can be learned and developed by almost anyone. Look at successful remote workers and you’ll see many of the same traits popping up again and again. So if you’re trying to convince your boss (or even yourself) that remote working is the right thing for you, start by honestly asking yourself how many of the characteristics below you share in common with your free-range colleagues.



Thursday, 31 January 2019 15:58

Do You Have What It Takes to Work Remotely?

(TNS) — Erie County officials stressed caution to Western New Yorkers during their first briefing of what could be another eventful weather day.

They advised to adhere to travel advisories issued in several municipalities, notably in the Northtowns, which has received the brunt of the storm. Travel advisories are in place for the city of Buffalo and northern Erie County.

"To underestimate this would be a mistake," said Greg Butcher, Erie County Deputy Commissioner for Homeland Security and Preparedness. "I think it’s the totality of all of the things brought together. The snow event itself, the high winds, mixed with the extremely cold temperatures are the things we need to be concerned about."



(TNS) — Camp Fire survivors Lisa Butcher and Randy Viehmeyer remember waking up one night to the screams of a nearby shelter resident reliving the nightmare of watching her dog burn alive.

Having bounced from one chaotic and sometimes dangerous shelter to another, the couple said they’ve experienced a kind of volatile “hell” since their Paradise home burned down last November during the Camp Fire, the deadliest and most destructive wildfire in California history.

And with the final remaining evacuation shelter for victims set to close Thursday in Chico, their fate is once again up in the air.



(TNS) — Even before the worst of the polar vortex gripped the Midwest, emergency rooms here already had treated several patients with cold-weather-related injuries.

Now, the arctic air that's enveloping the Rock River Valley gets truly dangerous with wind chill values on Wednesday that could plummet as low as 54 degrees below zero, according to the National Weather Service.

The record low temperatures spurred dire warnings from meteorologists, widespread closures of schools and businesses and led to a disaster proclamation from Gov. J.B. Pritzker, who said a wide variety of state resources will be available to help communities affected by the winter storms and bitter cold weather.



It’s hard to imagine pitchers and catchers reporting in a mere 12 days while another polar vortex rips through the Midwest.  Arctic blasts plunging the thermometer to 27 degrees below zero in some states? It’s safe to say our friends in Minnesota won’t be throwing the baseball around in the backyard this week. 

But a baseball-less future is probably the least of your worries right now. Extreme cold is dangerous – and expensive, if the pipes in your house freeze and burst. Water damage could cost you as much as $5,000, if not more. 



(TNS) — Dangerously cold weather is expected to hit much of the state through Thursday, which means folks ought to prepare for their own safety and the safety of animals.

According to the National Weather Service in Aberdeen, temperatures are expected to drop to 35 below in the area tonight, and that's not even the wind chill.

The record low is 32 below, which was set in 1916.

State Climatologist Laura Edwards said the area saw similar frigid weather in January 2014.

"We do see cold like this, but not every year. Again, that doesn't mean we can ignore how much it can harm people and animals," Edwards said.

"You can get frostbite in as little as 10 minutes. Being protected is the biggest concern," she said.



Forrester has just published our forecast for US tech employment and compensation (see “US Tech Talent Market Outlook: Low Unemployment And Rising Wages Present New Challenges for CIOs”). It has some foreboding news for CIOs and for tech vendors: Tech talent will be harder to find and more expensive over the next two years.

The good news is that the supply of tech workers has largely kept up with demand — annual wage growth for tech workers has generally hovered between 2.0% and 2.5% since 2015. But the current data available for 2018 suggests that wage growth is starting to accelerate. This acceleration poses a special threat to CIOs, who could find themselves paying premiums for certain tech roles in high demand.

Here’s a summary of our forecast for the US tech labor market over the next two years:
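To see why even a modest acceleration matters, here is a hedged illustration of how annual wage growth compounds over a two-year horizon. The salary and the accelerated rate below are assumptions chosen for demonstration, not figures from the Forrester forecast.

```python
# Hypothetical example: how a jump in annual wage growth compounds
# into payroll cost over two years. Numbers are illustrative assumptions.

def compound(salary: float, rate: float, years: int) -> float:
    """Salary after `years` of steady annual growth at `rate`."""
    return salary * (1 + rate) ** years

base = 100_000  # hypothetical tech salary

steady = compound(base, 0.0225, 2)     # historical ~2.0-2.5% growth
accelerated = compound(base, 0.04, 2)  # assumed accelerated growth

print(f"Steady:      ${steady:,.0f}")
print(f"Accelerated: ${accelerated:,.0f}")
print(f"Premium over two years: ${accelerated - steady:,.0f}")
```

Under these assumptions, a shift from roughly 2.25% to 4% annual growth adds several thousand dollars per employee within two years — which, multiplied across an IT organization, is the premium the forecast warns CIOs about.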



(TNS) — Saying they feel an urgency to act fast, California officials this week will launch the main phase of wildfire debris removal in Butte County, scene of November’s devastating Camp Fire.

But a potential problem has emerged: Nearly half of the property owners in the hill country around Paradise have not given the government permission to enter their properties to do the work.

County officials this week said they are making an extra push to get the word out to people who own burned property, informing them that they are required to have their land cleaned of ash and other fire debris, either through a free state-run program or by hiring their own contractor and paying for it themselves.



(TNS) - Gov. Tony Evers declared a state of emergency in Wisconsin on Monday, because of the heavy snows that have fallen and the extreme cold still to come.

"I’m concerned about the safety and well-being of our residents as this major storm and bitter cold moves in," Evers said in a release.

The state of emergency authorizes the adjutant general of the Wisconsin National Guard to call up military personnel to active duty if the need arises, and for all state agencies to be available if called on.

This request came from Wisconsin Emergency Management in case Guard units are needed to assist with emergencies in any affected parts of the state.



(TNS) — Equipment owned by California's three largest utilities ignited more than 2,000 fires in three years — a timespan in which state regulators cited and fined the companies nine times for electrical safety violations.

How the state regulates utilities is under growing scrutiny following unprecedented wildfires suspected to have been caused by power line issues, blazes that have destroyed thousands of homes and killed dozens of people.

Lacking the manpower and sophisticated technology necessary to monitor more than 250,000 miles of power lines across the state, regulators rely on something of an honor system, with utilities responsible for ensuring all trees and vegetation are cut back far enough from electrical equipment before the onset of dry, high-fire danger conditions.



You’d like to think that if you would just come up with a few good ideas, work hard, and have a stroke of luck, then your business would succeed. Alas, it’s not that simple. There are things that could do great damage to your company’s prosperity, and, what’s worse, they could come out of the blue. You might not even be aware that they’re slowly taking place, yet there they are, doing damage to your business. Below, we take a look at some of these threats, which can frequently fly under the radar.



A Landmark Settlement with Lessons Learned for Compliance Officers

Walgreens reached a settlement on Tuesday, concluding a six-year investigation into the company’s pharmacy and drug-pricing practices, initiated by a whistleblower and former pharmacy manager. CCI reports on the specifics of the case and the historic settlement.

The attorney representing the whistleblower in a historic case against Walgreens says the landmark settlement sends a clear message to compliance officers everywhere.

“Compliance officers, your job is vital,” said Andrew M. Beato of Stein Mitchell Beato & Missner LLP, speaking by phone Wednesday to Corporate Compliance Insights. “Compliance officers are the last line of defense (for a company) before it goes to this type of outcome.”

Walgreens will pay $60 million in the largest-ever settlement by a pharmacy chain for overcharging for drugs.  Announced Tuesday, the settlement was the result of a complaint filed in 2012 by a former pharmacy manager in Florida.



Monday, 28 January 2019 14:55

The Walgreens Whistleblower

Safety is the first priority for any company that seeks to protect employees and customers. Knowing the hazards that exist in workplace offices, equipment, and machinery is the first step toward preventing injury or even death.

The Occupational Safety and Health Administration (OSHA) publishes a list of its most frequently cited violations in the workplace. By examining this list, employers can analyze the dangers inherent in their workplaces and plan to avoid them.

“Knowing how workers are hurt can go a long way toward keeping them safe,” said National Safety Council President and CEO Deborah A.P. Hersman. “The OSHA Top 10 list calls out areas that require increased vigilance to ensure everyone goes home safely each day.”



Friday, 25 January 2019 16:21


The best business continuity managers run their programs like entrepreneurs running their own companies. In today’s post, I’ll share seven tips to help you adopt this world-beating, program-enhancing attitude.

One of the best models for running a BCM program is that of an independent entrepreneur. This is because when you’re leading a business, you can’t hide. You have to deliver the goods, take responsibility for your work, and satisfy your customers.

If you take this approach to your role as a BCM manager, you and your staff will find greater fulfillment in your work, your managers and stakeholders will be pleased and impressed, your program will thrive, and your company will be better protected.



3 Trends and Predictions

In the year ahead, companies will need to find meaningful and measurable ways to align and integrate risk management with core business objectives to pursue and meet their company’s goals. LockPath’s Sam Abadir discusses how, as organizations of all sizes and types undertake this vital work, he sees the risk ecosystem, increased board engagement and compliance accountability as three trends that will challenge their progress and innovation.

2018 was quite the year. Between regulatory regimes, global competition and cyber threats, the cautionary tales of what can happen dominated headlines. The Equifax investigation findings were disturbing, Google had multiple cascading incidents and the Marriott breach effects continue to unfold.

Maintaining, growing and evolving the ecosystem of digital equipment, services and data is central to the modern enterprise; however, digital business moves fast and creates risk in its wake. The ability to remain viable in today’s business depends greatly on the effectiveness of risk management and compliance programs, as well as digital systems and processes, data governance and security practices.



Friday, 25 January 2019 16:18

The Key To Risk Management Success In 2019

As the volume and variety of cyber attacks on businesses continue to grow, the need for better incident response has never been greater. Stephen Moore discusses how to build an effective CSIRT and the role it can play in protecting an enterprise in the event of a breach.

A few years ago, the idea of a dedicated computer security incident response team (CSIRT) may have seemed luxurious. Fast forward to the present day and for many it’s become essential. A CSIRT differs from a traditional security operations center (SOC), which focuses purely on threat detection and analysis. Instead, a CSIRT is a cross-functional response team, consisting of specialists who can deal with every aspect of a security incident, including members of the SOC team. The effort could include the technical aspects of a breach, assisting legal, managing internal communications, and even creating content for those who must field media enquiries.

Key roles and responsibilities within a CSIRT

In addition to the conventional duties of a SOC, a CSIRT must also fulfil a variety of non-technical, but equally important roles and responsibilities. This requires a much wider set of skills, and getting the right balance of personnel is key. Some members may be full-time, while others are only called in occasionally, but they will all bring key skills to the table if and when they are needed.

At a minimum, an effective CSIRT will contain the following members:



In a volatile market environment and with the edict to ‘do more with less’, many financial institutions are beginning efforts to re-engineer their risk management programs, according to a new survey by Deloitte.

Seventy percent of the financial services executives surveyed said their institutions have either recently completed an update of their risk management program or have one in progress, while an additional 12 percent said they are planning to undertake such a renewal effort. A big part of this revitalization will be leveraging emerging technologies, with 48 percent planning to modernize their risk infrastructure by employing new technologies such as robotic process automation (RPA), cognitive analytics, and cloud computing.

"Financial institutions face a formidable set of challenges posed by today's more complex and uncertain risk environment," said Edward Hida, a partner with Deloitte Risk and Financial Advisory at Deloitte US and the author of the report. "With budget cuts common — and a big focus on effectiveness and efficiency as the torrent of regulatory change has slowed — this will require institutions to rethink their traditional assumptions and employ fundamentally new approaches."



(TNS) - Anchorage property values increased slightly overall in 2018, though a few hundred homeowners may be looking at a lower home value because of damage caused by the Nov. 30 earthquake and its aftershocks, city assessors say.

About a week ago, the Anchorage property-tax appraisal office began mailing tens of thousands of assessment notices known as green cards. They show the city’s estimate of what a property would sell for on Jan. 1. It’s a critical step in determining a household or business property tax bill, which pays for essential city services such as police, firefighters and snowplowing.

This year, those with significant damage from the 7.0 earthquake may be asked to pay less.

In recent weeks, property appraisers worked with building-safety officials to identify damaged buildings. Officials have been using a system of green, yellow and red tags to indicate damage. Red-tagged buildings are unsafe to enter; a yellow tag means limited occupancy.



(TNS) — Matt Brown, emergency services chief for Allegheny County, Wednesday urged municipalities battered by landslides and recurring flooding to compete for a new pot of federal funding.

Up to $10 million in hazard-mitigation grants has been made available statewide, despite FEMA denying Western Pennsylvania’s request to declare a disaster because of a spate of landslides and related challenges last year.

FEMA denied the local request but granted federal disaster status to 10 counties in Eastern Pennsylvania in December — and 15 percent of that funding, or about $9 million to $10 million, will be split among municipalities in need scattered across the state.



Alex Janković claims that some managed service providers have successfully equated business continuity with IT disaster recovery, resulting, at best, in confusion among those new to the profession and, at worst, in the development of business continuity plans that are not fit for purpose.

There is something which bothers me as a management consultant in the business continuity and information technology fields. Have you tried to search for the terms ‘Business Continuity’ or ‘Business Continuity Planning’ on Google or Bing search engines recently? Please do and the results may surprise you. Once you skip over a few Google ads and relevant, but not local, articles, you will find link after link to articles written by local managed service providers (MSPs).

If you are wondering what an MSP is, TechTarget defines it as “a company that remotely manages a customer's IT infrastructure and/or end-user systems, typically on a proactive basis and under a subscription model.” But I digress.

If you are brave enough to click on any of those links, you will be met with a carefully designed and written corporate landing page. Each will open with some high-level, vaguely relevant business continuity jargon, but within the first few sentences the narrative will shift from business continuity to IT disaster recovery. If you care to continue reading, these MSPs will then start to pitch whatever product or vendor they are licensed to sell and distribute. The page's message, tone, and focus are ultimately geared around the capabilities of that product, not around the business continuity planning process or methodology itself. On top of that, MSPs will also offer to help your organization develop business continuity or IT disaster recovery plans, which will no doubt be built around the products they are trying to sell you, and will probably be developed without truly understanding the ins and outs and the complexity of your business.



(TNS) - At least five people are dead after an armed man opened fire inside a SunTrust bank in Sebring on Wednesday afternoon, prompting a standoff, authorities said.

Sebring Police Chief Karl Hoglund said the victims were “senselessly murdered” by Zephen Xaver, 21, who surrendered after a Highlands County Sheriff’s Office SWAT team entered the bank.

“Today’s been a tragic day in our community,” Hoglund said. “We’ve suffered a significant loss at the hands of a senseless criminal doing a senseless crime.”

Officials have not publicly identified the victims.



WASHINGTON, DC – AOAC INTERNATIONAL (AOAC) and the International Organization for Standardization (ISO) announce that they have entered into a cooperation agreement for the joint development and approval of common standards and methods. The partnership significantly increases the global relevance and impact of AOAC/ISO standards and methods.

“The AOAC and ISO partnership broadens global acceptance of standards and methods, benefitting all stakeholders and consumers,” said Brad Goskowicz, CEO of Microbiologics, Inc. and President of AOAC INTERNATIONAL. “AOAC and ISO’s commitment and global leadership pave the way for methods to ultimately advance to the Codex method process for consideration as International Standards.”

ISO Secretary-General Sergio Mujica added, “ISO’s partnerships with other relevant organizations are extremely important, as we believe that the best way to meet market needs and provide global solutions is by bringing together the world’s best experts. This agreement will therefore benefit the industry through the joint development of standards that are globally accepted and recognized by Codex. We look forward to collaborating further with AOAC via this agreement to produce effective International Standards.”



Thursday, 24 January 2019 15:18

AOAC and ISO announce cooperation agreement

The Crisis Communications Team (CCT) is the team of professionals within the organization that manages the communication function during a crisis.

This team works closely with the Crisis Management Team (CMT) that makes the important decisions pertaining to crisis communications, business continuity and disaster recovery, the three important management activities that need to be undertaken efficiently and effectively during a crisis.

How do you get the CCT to do its job? Well, for it to start working it needs to be switched on.

Sounds simple, right? Sadly, this is where things often go wrong in how crisis communication plans are conceived. What follows are four considerations for anyone designing a CCT activation process.



If you’ve adopted a mass notification system, you’ve taken an important step towards crisis readiness.

To ensure a successful response to your next emergency, take the time needed now to prepare and fully communicate your emergency response plan to ensure that your crisis communication is quick, responsive, accurate, and efficient.

Preparation is important in any scenario, but especially in emergency response planning and execution. This means having plans in place for known threats, establishing communication strategies, training staff and more. It can truly make the difference between experiencing utter chaos and assisting in establishing community safety.

A well-known example of this comes from evaluating the response to Hurricane Katrina. FEMA has highlighted several preparation-related challenges that came to define Katrina:



(TNS) - The next time a natural disaster threatens the Lincoln area, those coordinating emergency management will be working from a new home in south Lincoln.

In the northwest corner of the Lancaster County Youth Services Center at 1200 Radcliff St., the Lincoln-Lancaster County Emergency Operations Center provides a more spacious hub better equipped to handle a crisis for days on end if need be.

The facility includes beds and showers in the event emergency management officials need to stay at the center for extended periods of time.

“It’s essentially a small dorm or a quiet room,” said Director of Emergency Management Jim Davidsaver, a former Lincoln Police Department captain. “If you just need a break, the new facility gives you that opportunity.”



(TNS) — After spending an exorbitant amount of money on food for county workers during Hurricane Irma, Sarasota County has signed a catering contract with a much cheaper vendor should another natural disaster strike.

The County Commission recently approved a three-year contract with Metz Culinary Management, of Sarasota, as its primary vendor, and Mattison's Catering, also of Sarasota, as its secondary vendor to provide meals for county workers stationed in the Emergency Operations Center in the event of a disaster. The contract with Metz would cost $30.50 per person, per day, for four meals around the clock — compared with the $26 per person, per meal, that the county spent during Irma, which ultimately cost taxpayers $130,000.

While county residents hunkered down at home or in shelters as Irma thrashed the region on its trek up the Florida peninsula on Sept. 10 and 11 in 2017, about 400 county employees enjoyed Mattison's Catering, county records show. Under the $130,000 deal, Mattison's staff prepared and delivered enough food to serve up to 5,000 meals from lunch on the Saturday before the storm hit through lunch the following Tuesday — a cost of $26 per person, per meal, according to the purchase order.



What Recent News Means for the Future

The compliance landscape is changing, necessitating changes from the compliance profession as well. A team of experts from CyberSaint discuss what compliance practitioners can expect in the year ahead.

Regardless of experience or background, 2019 will not be an easy year for information security. In fact, we realize it’s only going to get more complicated. However, what we are excited to see is the awareness that the breaches of 2018 have brought to information security – how more and more senior executives are realizing that information security needs to be treated as a true business function – and 2019 will only see more of that.

Regulatory Landscape

As constituents become more technology literate, we will start to see regulatory bodies ramping up security compliance enforcement for the public and private sectors. Along with the expansion of existing regulations, we will also see new cyber regulations come to fruition. While we may not see U.S. regulations similar to GDPR on a federal level in 2019, conversations around privacy regulation will only become more notable. What we are seeing already is the expansion of the DFARS mandate to encompass all aspects of the federal government, going beyond the Department of Defense.



In our hyper-connected world, IT security covers not just our data but virtually everything that moves – including machinery. Cyber-attacks or IT malfunctions in manufacturing can pose risks to the safety measures in place, thus having an impact on production and people. New international guidance to identify and address such risks has just been published.

“Smart” manufacturing, or that which takes advantage of Internet and digital technology, allows for seamless production and integration across the entire value chain. It also allows for parameters – such as speed, force and temperature – to be controlled remotely. The benefits are many, including being able to track performance and usage and improved efficiencies, but it also exacerbates the risk of IT security threats.

Increasing the speed or force of a machine to dangerous levels, or lowering cooking temperatures to result in food contamination, are just some examples of where cyber-attacks can not only disrupt manufacturing but pose serious risks to us. Happily, a new ISO technical report (TR) has just been published to help manufacturers prepare for and mitigate these risks.



Rapid growth in the use of public cloud services for core business operations is changing the technological landscape. But in the rush towards taking advantage of the agility that public cloud offers are organizations in danger of neglecting a core area of business continuity?

In the last eighteen months the acceleration of public cloud services has been overwhelming. It is no coincidence that the arrival of UK-based Microsoft Azure and Amazon Web Services instances made more organizations willing to move workloads and data into public cloud services, and has seen these services go from strength to strength.

It is estimated that more than 60 percent of organizations use Office365 email services, for part, if not all of their messaging users. The most popular public cloud services like Microsoft Office365, Azure, Salesforce, Google Suite and Amazon Web Services have lowered the barrier to entry for small and medium sized businesses to access IT, and many larger organizations have also seen the benefits of moving to a pay monthly model. Getting access to these professional business applications, billed in a low-cost subscription model is helping accelerate business growth and agility.



A litany of disruptions and corporate scandals in 2018 showed that, while making profits, organizations will be held responsible for their actions in an increasing shift towards more ethical business practices.

Last year did not turn out to be great for businesses: there were mounting data privacy concerns around the globe; cyberattacks continued to hobble cities and disrupt business operations in the US; and Brexit uncertainty left UK industries worried. Meanwhile, shocking bank and corporate scandals sparked renewed regulatory interest in Europe, India, and Japan.

Amidst these larger issues, several new laws and regulations came into effect, adding to the complexity of an already challenging business landscape.

With so much that happened over the past year, here are some of the events and stories that stood out:



How prepared are your employees and organization to navigate the next major blizzard?

If you don’t know how you will keep employees informed and safe while maintaining business continuity, there is more you can do.

Imagine your employees waking up in the morning after a snowstorm has hit. The Weather Channel details the icy roads and slippery streets. Social media is crowded with photos and updates of vehicles buried in the snow and children celebrating the snow day. But what about work? Your employees must decide if they should brave the hazardous roads or stay home and potentially miss a workday. With no incoming messages and calls that lead only to voicemail, your employees become confused and frustrated.

Keep in mind that if your employees don’t know what to do in the event of a snow or ice storm, they can put themselves at risk. They may attempt to report to work but find the office closed. Alternatively, they could get into an accident on the way. Please don’t put them, or yourself, in that position.



(TNS) - The Northridge earthquake that hit 25 years ago offered alarming evidence of how vulnerable many types of buildings are to collapse from major shaking.

It toppled hundreds of apartments, smashed brittle concrete structures and tore apart brick buildings.

Since then, some cities have taken significant steps to make those buildings safer by requiring costly retrofitting aimed at protecting those inside and preserving the housing supply.

But many others have ignored the seismic threat. And that has created an uneven landscape that in the coming years will leave some cities significantly better prepared to withstand a big quake than others.



For the past five years Continuity Central has conducted an online survey asking business continuity professionals about their expectations for the year ahead. This article provides the results of the most recent survey and identifies some interesting changes from previous years…


134 survey responses were received, with the majority (78.4 percent) being from large organizations (companies with more than 250 employees). 12.7 percent were from small organizations (50 or fewer employees) and 8.9 percent were from medium sized organizations (51 to 250 employees).

The highest percentage of respondents was from the USA (38.5 percent), followed by the UK (23.1 percent). Significant numbers of responses were also received from Canada (6.1 percent) and Australasia (5.4 percent).

Change levels

The survey asked respondents: ‘What level of changes do you expect to see in the way your organization manages business continuity during 2019?’

12 percent of respondents expect to see no change in the way their organization manages business continuity. 54.1 percent expect to see small changes, whilst a third (33.9 percent) are anticipating large changes.

The 88 percent of respondents expecting to see changes were asked to provide details of the one area that is likely to have the biggest impact on business continuity practices or strategies within their organization. Key themes that emerged were as follows:

‘Making major revisions to BCM strategies and/or BCP(s)’ topped the list of changes that business continuity managers expected to see in 2018 and, in 2019, this was again top of the list, with 22 percent of respondents saying that this was the biggest change they expected to see.



(TNS) — A critical emergency alert system designed to warn UC Davis students and staff failed to fully notify the campus until more than an hour after Davis police Officer Natalie Corona was shot and killed blocks from the university, officials announced, calling the breakdown “unacceptable.”

The WarnMe-Aggie Alert sends text and email messages to UC Davis students and staff and is designed to alert 70,000 people. But the system initially notified only a fraction of those people about the events unfolding less than a mile from the campus and locked campus public safety officials out of some notification lists.

“The system failure we saw on January 10 was unacceptable and we will take all necessary measures to ensure 100 percent performance in the future,” said UC Davis Chancellor Gary S. May in a statement Tuesday.

The chancellor’s downtown Davis residence is also just blocks away from where the 22-year-old Corona was gunned down and where others were sent fleeing when Kevin Limbaugh opened fire from his bicycle as the rookie officer responded to a traffic stop.



Business continuity practitioners have plenty of reasons to be advocates for good fire and life safety practices at their organizations, even if that’s not one of their core responsibilities.

In today’s post, we’ll share 13 tips business continuity management (BCM) professionals can follow to make sure their companies are doing what they should to promote fire and life safety for their staff and facilities.



Moving Beyond Day-to-Day Data Cleansing

In the financial services industry, regulation on due process and fit-for-purpose data has grown increasingly prescriptive, and the risks of failing to implement a data quality policy can be far-reaching. In this article, Boyke Baboelal of Asset Control looks at how organizations can overcome these challenges by establishing and implementing an effective data quality framework consisting of data identification and risk assessment, data controls and data management.

Too many financial services organizations fail to implement effective data quality and risk management policies. When data comes in, they typically validate and cleanse it first before distributing it more widely. The emphasis is on preventing downstream systems from receiving erroneous data. That’s important, but by focusing on ad hoc incident resolution, organizations struggle to identify and address recurring data quality problems in a structural way.

To rectify this, they need the ability to carry out continuous analysis targeted at understanding their data quality and reporting on it over time. Very few organizations across the industry are currently doing this, and that’s a significant problem. After all, however much data cleansing an organization does, if it fails to track what was done in the past, it will not know how often specific data items contained gaps, completeness issues or accuracy issues, nor understand where those issues are most intensively clustered. Financial markets are in constant flux and can be fickle, and the rules that screen data need to be periodically reassessed.
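As a rough illustration of the kind of tracking described above, the sketch below logs every validation issue against the data item it affected and reports those that recur over time. It is a minimal, hypothetical example: the `QualityLog` class, its method names, and the sample market-data items are invented for illustration and are not drawn from any vendor's product.

```python
# Hypothetical sketch: record data-quality incidents over time so recurring
# issues can be reported, rather than only cleansed ad hoc and forgotten.
from collections import Counter, defaultdict
from datetime import date


class QualityLog:
    """Logs every validation/cleansing action per data item."""

    def __init__(self):
        # data item -> list of (date, issue type) events
        self._issues = defaultdict(list)

    def record(self, item: str, issue: str, when: date) -> None:
        """Note that `item` exhibited `issue` on date `when`."""
        self._issues[item].append((when, issue))

    def recurring_issues(self, threshold: int = 2) -> dict:
        """Return items whose issue types occurred at least `threshold` times."""
        report = {}
        for item, events in self._issues.items():
            counts = Counter(issue for _, issue in events)
            recurring = {i: n for i, n in counts.items() if n >= threshold}
            if recurring:
                report[item] = recurring
        return report


log = QualityLog()
log.record("EUR/USD close", "gap", date(2019, 1, 7))
log.record("EUR/USD close", "gap", date(2019, 1, 14))
log.record("AAPL volume", "stale value", date(2019, 1, 8))

# Only the EUR/USD gap recurs, so only it appears in the report.
print(log.recurring_issues())  # {'EUR/USD close': {'gap': 2}}
```

A real framework would persist these events and cluster them by feed, source, and asset class, but even this simple aggregation shows how tracking past cleansing actions exposes structural problems that ad hoc incident resolution hides.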



(TNS) — Tyler Cooper had a pile of work he needed to tackle at his desk in John Deere’s Cary office on Tuesday, but he and some of his coworkers decided to spend the day doing housework instead.

They boarded a bus in the early morning and headed to the rural Whitestocking community outside Burgaw, a section of Pender County where the Cape Fear River ran 10 feet deep across the landscape during flooding from Hurricane Florence last September.

They climbed into coveralls, put on protective goggles and breathing masks, and crawled under a house to start yanking out insulation still damp from the flood.

“There were a lot of people impacted by the storm,” Cooper said, dragging out torn sheets of ruined yellow fluff. “I just wanted to help out.”

Tens of thousands of homes across Eastern North Carolina were damaged by floodwaters from the storm, and five months later, many still have not been stripped to the studs so they can dry out and be rebuilt.



No business owner wants to think about a violent event happening at their workplace, but each year, more than 2 million American employees report having been a victim of some type of workplace violence. According to the U.S. Bureau of Labor Statistics, 409 workers were fatally injured in work-related attacks in 2014. To put that into perspective, that’s about 8 percent of the 4,821 workplace fatalities in the same year.

What Are The Main Types Of Workplace Violence?

OSHA defines workplace violence as “any act or threat of physical violence, harassment, intimidation, or other threatening disruptive behavior that occurs at the work site. It ranges from threats and verbal abuse to physical assaults and even homicide. It can affect and involve employees, clients, customers, and visitors.”

The National Institute for Occupational Safety and Health reports that the types of workplace violence can be categorized into four buckets:



Cloud adoption is increasing and, at the same time, advances in technology are occurring at a rapid pace. In this article Joe Kinsella looks at six trends which business continuity and enterprise risk managers need to be aware of.

As we move through 2019, organizations will begin to fully embrace the technological advances that move companies beyond standard adoption, and instead prompt organizations to redefine how they use cloud across lines of the business, specific applications and wider infrastructures. After years of discussing Cloud 2.0, we have finally welcomed in a new era of cloud. Looking ahead at this Cloud 3.0, we will continue to see impressive cloud adoption across all industries and with this, a resolute determination from the cloud industry to build solutions and integrated data tools that best meet user needs. The cloud space is evidently getting more crowded, complicated and competitive - but what will be the key trends that we can expect to see this year?

#1 The rise of multicloud deployment

In 2019, multicloud will become the most dominant approach as organizations deploy diverse clouds and operations within a single heterogeneous infrastructure in order to meet their different services and needs. It will continue to be used as a key strategy in bringing choice to the organization to pick and choose specific solutions as business requirements become more challenging due to increasing demand for digital transformation. 

Leaders will choose multicloud strategies to avoid dependence on a single cloud provider and to mitigate the risk of a single point of failure, reducing impact and financial risk across the entire enterprise. At a time when security threats are at their greatest, leveraging two or more service providers will greatly decrease the risk of disaster during downtime.



Survivors are still reeling from 2018’s natural disasters as the numbers are coming in: Global losses from disaster last year were 11,000 people dead or missing and $155 billion in damages, according to Swiss Re Group, the world’s second-largest reinsurer.

Estimates were 10,000 dead or missing and $350 billion for 2017, when three record-breaking hurricanes swept through the west Atlantic in the span of a month, and scientists only expect figures to get worse as climate change progresses. A study published Monday by the National Academy of Sciences showed Antarctica’s annual ice loss has increased sixfold since 1979, melting faster in each successive decade.

Tech companies that study data and make government software have taken notice, and a handful of platforms already offer ways to help governments prepare for natural disasters — Hazus is the hazard modeling tool already used by the U.S. Federal Emergency Management Agency, Esri is adding data analytics and AI to its GIS platform, and Denver-based Geospiza maps at-risk populations.



Like many people who work in Business Continuity, I didn’t enter the professional world with the intention of becoming a part of this niche industry. For the past 10 years I worked in public education as a Social Studies teacher, which at the beginning of my career, I thought I would do until retirement. However, as life rarely goes to plan, my mindset began to change as I developed a desire for a new career path, and I began to investigate other job fields where my skills from teaching would carry over. After a year-long search, I came across a company called BC in the Cloud which provided business continuity and disaster recovery planning software to other businesses. I believe my response to that was, “Oh cool…what is business continuity and disaster recovery?” After an explanation and some research, I found myself with two major trepidations about taking this career jump. The first concern was whether this career path offered the internal reward of helping other people that I would be giving up by leaving teaching. The other major concern I had was whether I possessed the knowledge and/or skill set necessary to be successful in this field.



Tuesday, 15 January 2019 16:14

Attitude for Resilience

(TNS) - The sun had long set when the two men trudged up to the justice center in Shawnee and peered inside. They pulled open the door and entered the modern brick building that houses the police department and city offices, filming the scene with their cellphones.

No one approached them as they walked around, filming various items in the building. Then Patrick Roth turned the camera on his cohort, Tim Harper, who was wearing an orange shirt and a “Make America Great Again” ball cap and toted a gun in a holster on his right hip.

“Guys, he’s got four mags,” Roth said on the video he later posted on his YouTube channel, News Now Patrick, zooming in on the magazines that hold extra rounds of ammunition. “We’re in a police department. He’s open carrying.”

“Seventy-six rounds,” Harper said of his magazine capacity.

The two continued filming in the building for nearly 4 minutes before heading to the adjacent fire station to shoot more video.

The episode ended peacefully. That isn’t always the case.



(TNS) - With considerable fanfare, Los Angeles Mayor Eric Garcetti started the year by trumpeting a cellphone app that will instantly notify users in Los Angeles County when an earthquake of 5.0 or bigger begins to hit.

The pilot program, officially unveiled Jan. 3, can provide crucial seconds — even dozens of seconds — for people to duck and cover or otherwise take potentially lifesaving actions.

Dubbed ShakeAlertLA, it’s the first earthquake early warning system of its type in the country.

But that also means the rest of California continues without such alerts.



A friend of mine likes to say that New York City is so expensive that just leaving your apartment will cost you $20. It cost me $100 to leave my apartment the other day – in fines for leaving a piece of furniture by the curb on a day not designated for “bulk trash removal.”

I get it: leaving bulky trash all over the sidewalk for days on end is an antisocial thing to do, especially in a crowded city. I wouldn’t have felt great about myself if a kid had somehow tripped and hurt herself on my discarded garbage.

My landlord could also have landed in legal trouble had that happened. That’s because NYC law makes property owners responsible for keeping sidewalks “reasonably safe” and clear of debris (with some exceptions). “Reasonably safe” also includes shoveling snow and ice – something I’m always grateful for after the occasional NYC blizzard.



Cyber disruptions – and their impact on both reputations and profitability – have risen to the top of nearly every recent risk study. These increasing concerns weigh heavily on Executive Suites and Boards.

In the IT realm, CIOs and CISOs now focus their efforts on mitigating those risks and planning responses to potential data breaches, malware and other cyber threats. As a result, more and more organizations have begun developing Cyber Security Incident Response Plans (CSIRPs).

Developing these plans in their own ‘silo’ – without considering the cyber incident impacts on general business operations – can be negligent and potentially dangerous. Integration of CSIRPs with existing Business Continuity and Disaster Recovery Plans can make the entire organization more resilient and prepared to respond to outages of any and every type.

Likewise, Business Continuity plans that simply focus on restoring day-to-day operations under specific scenarios may lack the necessary strategies and tactics to successfully respond to cybersecurity threats that may be at the root of a potential disruption.



The worst time to think about emergency planning is when the threat of an emergency looms over your business. When that happens, no business owner is glad they pushed off emergency planning “to Q1” or the hazy future: “We’ll get to that later.”

A better way for businesses to prepare is with the “all hazards” approach to emergency planning. The all hazards approach is defined by the Centers for Medicare and Medicaid Services as an “integrated approach to emergency preparedness planning that focuses on capacities and capabilities that are critical to preparedness for a full spectrum of emergencies or disasters.”



The human factor in business continuity is one of the most important and also one of the most overlooked keys to success in creating an effective business continuity management (BCM) program. In today’s post, we’ll discuss what you as a BCM leader can do to make sure you and your BCM team possess the personal and professional qualities needed to succeed in the vital task of making sure your organization is resilient and protected.

Join Michael Herrera for A High-Performance BCM Program Starts with You at DRJ Spring World in Orlando, Florida on Tuesday, March 26, 2019.

We talk a lot about technology and resources in business continuity, but you want to know something? It’s the human factor that determines whether a BC program thrives or flounders. We’ve seen rich programs that are a mess and ones run on a shoestring that are top-notch. Success is not up to how well-funded your program is, it’s up to you.

At first glance, being a good manager and building a good team might seem as mysterious as witchcraft. It’s not really that mysterious. It can be broken down, understood, and then mastered like anything else.

We’ll start with looking at you as a manager, then we’ll move on to discussing how you can assemble a top-flight team.



Multi-cloud environments provide substantial business continuity and disaster recovery benefits, but recent research suggests that compliance issues may be holding some companies back from establishing them. Mark Hickman looks at the issue…

Almost every enterprise is using the cloud in some way, whether for infrastructure services or to provide software-as-a-service applications to users. For some time, confidence has been growing in the cloud’s role in IT infrastructure, to the point that we are hearing increasing talk of serverless computing – where a company places its entire infrastructure in the cloud, which dynamically expands and contracts resources to meet business needs.

In the future, serverless computing may become a reality.  But for now, IT staff continue to battle with the challenges of managing the hybrid environments they already have, rather than feeling able to push everything to the cloud.  These complex hybrid environments often include multiple operating systems and cloud service providers, as well as increasingly common use of virtualized servers and hyperconverged infrastructure (HCI).

WinMagic recently conducted research to try to establish whether companies are getting the benefits they want from cloud technology and what, if anything, is holding them back from greater use, maybe even slowly moving towards this new serverless computing world. There were some really interesting findings. The role that good security and compliance policies play in realising the business benefits was clear: 87 percent of IT decision makers (ITDMs) surveyed said they limit their use of the cloud because of the complexity of managing regulatory compliance.



Charlie Maclean Bristol offers five predictions for areas that will have an impact on business continuity during 2019…

Volatility globally is going to get worse

In 2019 there will be an acceleration of the global shift from globalism, socially liberal politics and the USA as the unchallenged superpower to a world of populism, nationalism and trade wars, with China and Russia challenging US hegemony. There has been a lot in the news post-Christmas about sales being down for certain shops on the UK high street, so again we can see that the switch to online is accelerating and disrupting existing businesses. This means there will be even more uncertainty for businesses over the next year. All of this suggests that organizations need to be more resilient and nimbler, able to horizon scan for new challenges and to adapt to changing circumstances as they occur.

One of the roles of business continuity managers is to look outside the organization to identify possible supply chain issues, incidents elsewhere which could affect the organization, and possible new threats. Over the year we should redouble our efforts to carry out this role effectively.



If you are still running SQL Server 2008/2008 R2, you have probably heard by now that as of July 9, 2019, it will no longer be supported. However, realizing that a significant number of customers still running on this platform will not be able to upgrade to a newer version of SQL Server before that deadline, Microsoft has offered two options that provide extended security updates for an additional three years.

The first option requires the annual purchase of “Extended Security Updates”. Extended Security Updates cost 75% of the full license cost annually and also require that the customer is on active Software Assurance, which is typically 25% of the license cost annually. So effectively, to receive Extended Security Updates you are paying for new SQL Server licenses annually for three years, or until you migrate off SQL Server 2008/2008 R2.

However, there is a second option. Microsoft has announced that if you move your SQL Server 2008/2008 R2 instances to Azure, you will receive the Extended Security Updates at no additional charge. There are, of course, the hourly infrastructure charges you will incur in Azure, plus either the cost of pay-as-you-go SQL Server instances or the Software Assurance charges if you want to bring your existing SQL licenses to Azure. But that cost includes the added benefit of running in a state-of-the-art cloud environment, which opens up opportunities for enhanced performance and HA/DR scenarios that you may not have had available on-premises.
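The on-premises arithmetic above can be sketched in a few lines. The license figure below is a hypothetical placeholder; the percentages are the ones quoted in the article (ESU at roughly 75% of the full license cost per year, plus Software Assurance at roughly 25%):

```python
# Sketch of the Extended Security Updates (ESU) cost math described
# above. The license cost is a hypothetical placeholder; the rates are
# the article's figures (ESU ~75%/year, Software Assurance ~25%/year).

def annual_esu_cost(full_license_cost: float,
                    esu_rate: float = 0.75,
                    sa_rate: float = 0.25) -> float:
    """Effective annual cost to stay patched on SQL Server 2008/2008 R2."""
    return full_license_cost * (esu_rate + sa_rate)

license_cost = 10_000.0          # hypothetical per-server license cost
per_year = annual_esu_cost(license_cost)

# ESU + SA together add up to one full license price every year.
print(f"Annual ESU + SA:  ${per_year:,.0f}")
print(f"Over three years: ${per_year * 3:,.0f}")
```

The point of the sketch is simply that 75% + 25% = 100%: staying on the old platform costs a full license price per year, which is the comparison to weigh against the Azure option.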



A new year means new challenges for your organization’s business continuity management (BCM) program, but you don’t have to face them alone. In today’s blog, we’ll sketch out some of the ways BC consultants can provide you with targeted, cost-efficient assistance across nine key areas of business continuity.

Related on MHA Consulting:  Client’s Guide to Hiring a BC Consultant 

Last week, MHA Consulting and BCMMETRICS CEO Michael Herrera wrote a post on the BCMMETRICS blog called “CEO’s Crystal Ball: Business Continuity Trends for 2019.” In it, he made predictions across nine areas ranging from demand and management to threats and continuity software.

In today’s post, I’m going to look at the same categories Michael did but with a focus on how a business continuity consultant might be able to help you in those areas in 2019.



“Forty-nine innocent people killed at one time,” said Orange County Mayor Jerry Demings, who responded to Pulse as the county sheriff on June 12, 2016. “That’s something akin to what you might see on a battlefield in war.”

The mass shooting at the LGBTQ club traumatized not only Pulse patrons but also Orlando-area police, firefighters and paramedics summoned to the crime scene and others whose duties brought them face-to-face with the aftermath.

It still haunts some emergency-room doctors and nurses — and employees of the Medical Examiner’s Office who had to collect bodies from the grim scene, said Donna Wyche, manager of Orange County’s division of Mental Health and Homeless Issues.



Industry experts estimate that annual losses from cybercrime could rise to USD 2 trillion by next year. With countless new targets added every day, especially mobile devices and connected “things”, a joined-up approach is essential.

The attraction of cybercrime to criminal hackers is obvious: tangled webs of interactions, relatively low penalties, disjointed approaches on money laundering and potentially massive payouts. The key is preparation and seeing vulnerabilities, and resilience, in terms of interactions with overall management systems, and that’s where information security management systems (ISMS) standard ISO/IEC 27001 comes in.

This is the flagship of the ISO/IEC 27000 family of standards, which was first published more than 20 years ago. Developed by ISO/IEC JTC 1, the joint technical committee of ISO and the International Electrotechnical Commission (IEC) created to provide a point of formal standardization in information technology, it has been constantly updated and expanded to include more than 40 International Standards covering everything from the creation of a shared vocabulary (ISO/IEC 27000) and risk management (ISO/IEC 27005) to cloud security (ISO/IEC 27017 and ISO/IEC 27018) and the forensic techniques used to analyse digital evidence and investigate incidents (ISO/IEC 27042 and ISO/IEC 27043 respectively).

These standards are not only about helping to manage information security but will also help to identify and bring criminals to justice. For example, ISO/IEC 27043 offers guidelines that describe processes and principles applicable to various kinds of investigations, including, but not limited to, unauthorized access, data corruption, system crashes, or corporate breaches of information security, as well as any other digital investigation.



Thursday, 10 January 2019 15:11

How to Tackle Today’s IT Security Risks

Data, and the cloud that hosts it, has an almost infinite value for businesses that know how to process it – as long as the proper strategy is in place to unleash its potential. Orange Business Services helps customers turn their data into a true business asset, thanks to a little assistance from ISO/IEC’s IT service management standard.

With the digital revolution, businesses are producing more data than ever before. This data is no more than a raw material, but an organization’s ability to transform it into useful information can unlock a world of opportunities. Thanks to cloud computing, organizations can have access to powerful IT capabilities – and with more flexibility than ever, they can externalize all or part of their information systems, workspaces, servers, applications and storage.

Although the cloud has been around for over a decade, the biggest objection still hindering its adoption is ongoing concern about data security and integrity. Systems integrators that can successfully offer cloud-hosted security and access control solutions will find themselves well-positioned for the future, with the ability to deliver a wide range of managed and remote services to customers while boosting the overall value of their company.

Orange Business Services is one such company. As the B2B branch of the Orange Group, which boasts 260 million customers across 28 countries and an annual sales revenue of EUR 41 billion, the global ICT provider aims to be a leading performer in the “data journey”. Supporting organizations through every step of their digital transformation, it offers customers expertise in the collection, transfer, security, storage, processing, analysis and sharing of data, and value creation. To deliver support on such a broad scale, Orange Business Services needs to operate seamless global processes managed under a corporate governance model that applies worldwide.



Thursday, 10 January 2019 15:09

Enabling the Data Journey with ISO/IEC 20000-1

With technology becoming ever more sophisticated, offering enhanced opportunities but also new vulnerabilities and threats, there is a danger that organizations of every type leave themselves open to malicious attack or data breaches on a massive scale. Risk management, therefore, is just as vital in cyberspace as it is in the physical world. But what are these cyber-risks? How can International Standards help mitigate them? And is it really the case that the only answer is even more sophisticated technology?

The Oxford English Dictionary definition is certainly clear enough: “risk”, it says, is “a situation involving exposure to danger”. Risk must be taken to achieve results, but it must also be managed to achieve positive outcomes and avoid negative consequences.

Avoiding risk is impossible. Taking risks is an inevitable and necessary part of all our lives, both personally and professionally. Indeed, if any company or organization in today’s highly competitive world were to pretend that there were no risks in what it did – in effect, that risk did not exist – then quite apart from defaulting on its statutory and legal obligations, it would very quickly fold and disappear from sight.

But risk can also be a force for good. Managing risks successfully can have positive results, and companies need to take risks in order to achieve their objectives. Organizations quite naturally need a degree of certainty before taking important strategic decisions, and it is essential to understand that risk is really about the likely impact of uncertainty on those decisions. In short, risk is about managing decisions in a complex, volatile and ambiguous world, one that is fast becoming even more complex and ambiguous.



Thursday, 10 January 2019 15:07

The Quest for Cyber-Trust

(TNS) — In 90 days, Panama City has picked up 20 years' worth of debris from the roadside, with plenty more to go.

"Until you start picking it up, you don't realize how bad we got hammered," quipped Mayor Greg Brudnicki.

Since Hurricane Michael swept through the region with a grudge against trees, city contractors have picked up 2.5 million cubic yards worth of debris, Panama City solid waste manager Shane Daugherty said, with an estimated 1.5 million cubic yards to go.

This is in addition to the clean-up happening elsewhere in the county. Bay County has picked up around 5 million cubic yards of debris, and when all is said and done officials are expecting to have about 9 million cubic yards of debris cleared, according to Brudnicki.

"To put that in perspective, in Hurricane Irma there was only 2.5 million cubic yards of debris in 50 counties," Brudnicki said. "By the time we get through, we will have 7 or 8 times as much."



It’s practically impossible to begin the New Year without thinking about some resolutions. While setting goals for the new year is pretty common for most of us, so are the results—we may consider them worthy endeavors, but they’re too hard to keep. Perhaps it’s time to reconsider these grand resolutions and look at them more as pledges or promises to make a change—after all, it’s harder to break a promise, right? Also worth noting is how your New Year’s resolutions can positively affect you, your family, your company, and your disaster recovery planning—if you’re up to the task! As you balance your home and work promises for 2019, here are some suggestions for pledges you can set:


  • Lower your stress level by taking a walk around the neighborhood.
  • Cut down on your family’s screen time and play a board game together.
  • Decrease your company’s risk by reviewing your Business Continuity/Disaster Recovery (BC/DR) plan for gaps and implementing your plan to plug those holes.



3 Tips to Help Organizations Come Out on Top

“Compliance audit” is one of the last things a financial advisory firm hopes to hear, but for most it’s an inevitable, unavoidable fact of life. Fortunately, there are steps financial advisory firms can take to reduce the time and work an audit requires while paving the way to a successful outcome. Nuance’s Stacy Leidwinger discusses.

The words “compliance audit” tend to strike fear and anxiety in even the most reputable, meticulously run businesses. The reason is simple: compliance audits are viewed as unwelcome intrusions that detract from more strategic ends, all while absorbing an unsavory amount of time and resources. According to recent statistics, a typical financial audit today costs at least $10,000, and that figure is expected to rise.

A proactive focus on document management and document-based business processes – particularly the implementation of digital workflows – can help decrease the unnecessary time and pain associated with an audit. While organizations across industries face different types of documentation requirements in demonstrating compliance, here are three universal strategies that can help organizations navigate an audit more smoothly and increase their chances of passing with flying colors.



Wednesday, 09 January 2019 16:36

Preparing Your Documents To Survive An Audit

(TNS) - Some 1,200 properties in the Anchorage area are awaiting public earthquake damage inspections, more than a month after the powerful quake shook Southcentral Alaska, city officials say.

The mounting backlog has overwhelmed Anchorage’s building department, and comes as a deadline approaches for disaster aid. Jan. 29 is the last day to apply for a state individual assistance grant at ready.alaska.gov. Grants could provide up to $17,450 to cover damage that affects a person’s ability to live in the home.

Alaska Gov. Mike Dunleavy has also requested federal aid, a monthslong process that could open up more money for repairs.

A building inspection is not a prerequisite to apply for state or federal aid, officials stressed. In fact, even if the extent of damage to a building isn’t confirmed, don’t wait to start an application, said Jeremy Zidek, spokesman for the state’s Division of Homeland Security and Emergency Management. New applications will not be accepted after Jan. 29, Zidek said.



(TNS) — State firefighters are taking on the colossal task this year of updating maps that highlight the most fire-prone areas in California.

Fire officials in Marin say the maps, last updated more than a decade ago, are a helpful planning resource. But in California’s current climate, some say, those projections aren’t as relevant as they once were — the whole state is susceptible to flames.

Fueled by extreme winds and dry conditions, California in recent years has experienced many of the largest, most destructive wildfires in the state’s history. Some of those infernos have pushed deep into urban areas, which means nearly everybody is at risk, said Daniel Berlant, an assistant deputy director with the state Department of Forestry and Fire Protection.

Cal Fire’s high-hazard projection maps mandate management techniques in California’s most flammable regions.



(TNS) - Storm experts at the National Hurricane Center in Miami are working through the government shutdown, but with other federal agencies on furlough, some of the hurricane analysis and forecast modeling that happens during the winter months is on hold.

It's a pause that could delay life-saving upgrades to forecasts, said Eric Blake, the National Weather Service union steward at the National Hurricane Center.

"This is our main time of year when we improve on things," Blake said. "But it's basically at a standstill, at least on the modeling side, and the longer it goes on, the less likely they will improve by the start of hurricane season."

The Environmental Modeling Center in College Park, Md., which studies how to put better physics into hurricane forecast models to increase intensity and track accuracy, is almost entirely shut down.



Turning a Key Vulnerability into a Victory

No matter what an organization’s major market is, it is probably subject to regulatory compliance requirements, such as PCI, SOX, FISMA and HIPAA. Failing to comply with any of these requirements could result in a failed audit, which can incur hefty penalties. This article by Markku Rossi of SSH.COM shares one little-known reason why organizations are vulnerable to failing a compliance audit.

No matter your organization’s major market or sector, whether you are in the Fortune 5000 or want to be, you are subject to regulatory compliance requirements such as PCI, SOX, FISMA, GDPR, HIPAA or similar. Failing to comply with the relevant requirements could result in a failed audit, which can incur hefty penalties or loss of business continuity.

Many compliance risk factors are hidden in the plumbing of your organization’s IT infrastructure. This article reveals one little-known reason why your organization is vulnerable to failing a compliance audit, as well as best practices for ensuring you’re prepared the next time you need to demonstrate compliance.



2019 promises to be an exciting year in business continuity and IT/disaster recovery. Below is my take on the new developments we are likely to see in management, budgeting, threats, software, and other key areas.

The more things change, the more they stay the same. What will 2019 bring for the world of BCM and IT/DR?

To answer this question, I drew on my 25-plus years of experience in the field, the expertise of my fellow MHA and BCMMETRICS consultants, and my CEO’s crystal ball.



Why It’s Time for a Next-Gen Solution

As organizations’ computing infrastructure expands, legacy asset management systems are becoming inadequate. To keep pace with this technological change – and remain compliant – companies must adopt a next-generation approach. Mark Gaydos of Nlyte discusses.

Imagine being charged by your cable company for movies you never watched. How about being charged an extra $100 on your monthly electric bill for energy consumed by devices that are not plugged in? When these companies come after you for money, how can you prove that the charges are incorrect? Corporations utilizing licensed software or rented servers face similar situations. Without asset management to identify which hardware and software assets are being utilized – or not utilized – it is nearly impossible to track and manage all your IT assets.

For foolproof corporate compliance, organizations must have in place a technology asset management (TAM) solution that will provide full transparency, enabling the company to determine if it is spending too much on maintenance or license costs, and providing insight into how many servers are actually up and running. When hardware and software vendors come knocking on your door looking for money, TAM will prove you are in the “right” and validate your defense.

Software and hardware companies are not forgiving; they make a living off renting licenses and applying maintenance fees. Corporations neglecting asset management will face stiff penalty fees for not operating within compliance. To avoid these risks and ensure companies pass vendor audits, TAM can help optimize asset usage and cut down on all those unnecessary maintenance costs by discovering what is actually being used and what can be taken off the network.



(TNS) — Los Angeles has unveiled its long-anticipated earthquake early warning app for Android and Apple smartphones, which is now available for download.

ShakeAlertLA, an app created under the oversight of Mayor Eric Garcetti and the city, is designed to work with the U.S. Geological Survey’s earthquake early warning system, which has been under development for years. It’s designed to give users seconds — perhaps even tens of seconds — before shaking from a distant earthquake arrives at a user’s location.

“ShakeAlertLA sends you information when a 5.0 or greater earthquake happens in Los Angeles County, often before you feel shaking,” the app says.

Garcetti is scheduled to make an official announcement unveiling the system Thursday morning. The app, which is also available in Spanish, was built under a contract with AT&T. It was published quietly online on New Year’s Eve, and by Wednesday morning, users of social media had already found it and begun tweeting their excitement about the release of the app.
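Why a warning can outrun the shaking comes down to wave physics: the damaging S-waves travel at only a few kilometers per second, so a site far from the epicenter has time to receive an alert first. The back-of-the-envelope sketch below illustrates this; it is not how ShakeAlert actually computes alerts, and the wave speed and detection delay are assumed typical values, not USGS figures:

```python
# Back-of-the-envelope lead-time estimate: why a warning can arrive
# "seconds -- perhaps even tens of seconds" before shaking. The wave
# speed and detection delay are assumed typical values, not figures
# from ShakeAlertLA or the USGS.

S_WAVE_KM_S = 3.5  # damaging shear waves; typical crustal speed

def warning_seconds(distance_km: float, detection_delay_s: float = 5.0) -> float:
    """Seconds between the alert and damaging shaking at a site.

    Assumes the sensor network needs `detection_delay_s` to detect the
    quake near the epicenter and issue the alert, which then travels
    to phones effectively instantly.
    """
    return max(0.0, distance_km / S_WAVE_KM_S - detection_delay_s)

for d in (10, 50, 100):
    print(f"{d:3d} km from epicenter: ~{warning_seconds(d):.0f} s of warning")
```

The consequence, visible in the sketch, is that sites very close to the epicenter get little or no warning, while distant sites can get tens of seconds.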



(TNS) — Riverside property owners, anglers and others with interest in stream levels around Oregon have a new way to check for potential flooding.

The Oregon Office of Emergency Management last month released a new online dashboard. The tool includes an interactive map and a list of how many stream gauges are expected to be nearing flood stage, or have minor flooding, moderate flooding or major flooding. As of early this week, no gauges around Oregon indicated flooding.

Last month, a storm brought heavy rain to parts of Oregon. Daniel Stoelb, the geographic information system coordinator for the state Office of Emergency Management, created the stream gauge dashboard so people could quickly check how high water was rising.



Communication and immediate, accurate information are critical when emergencies occur.

In fact, anything from a mere disruption to an urgent event can snowball into a widespread crisis when those who are impacted don’t know what is happening or how to respond.

This is where emergency mass notification systems (EMNS) play a vital role. Capable of quickly sending out urgent messages to many parties across various channels, EMNS can be invaluable when it comes to:

  • Keeping people informed so they know what to do during emergency events
  • Reducing panic and minimizing the negative impacts of emergencies.
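Conceptually, an EMNS is a fan-out: one urgent message, many recipients, many channels. The sketch below illustrates the idea only; the channel functions are placeholders that just print, not any vendor's API:

```python
# Conceptual fan-out sketch for an emergency mass notification system.
# The channel functions are illustrative placeholders, not a real API.

from typing import Callable, Dict, List

def send_sms(to: str, msg: str) -> None:
    print(f"[SMS   -> {to}] {msg}")

def send_email(to: str, msg: str) -> None:
    print(f"[EMAIL -> {to}] {msg}")

def send_push(to: str, msg: str) -> None:
    print(f"[PUSH  -> {to}] {msg}")

CHANNELS: Dict[str, Callable[[str, str], None]] = {
    "sms": send_sms,
    "email": send_email,
    "push": send_push,
}

def notify_all(recipients: List[str], message: str) -> int:
    """Send one urgent message to every recipient on every channel."""
    sent = 0
    for person in recipients:
        for send in CHANNELS.values():
            send(person, message)
            sent += 1
    return sent

notify_all(["ops-team", "site-leads"], "Evacuate Building B immediately.")
```

Sending on every channel at once is deliberate: during an emergency you cannot know which channel a given person will see first.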



Many experts believe that the chance of an influenza pandemic or similar outbreak sweeping across the globe is high and growing higher every year. In today’s post, we’ll set forth some of the things your organization should be doing to prepare for this grave eventuality, starting with a pandemic plan.


A few days ago Bill Gates posted his annual “What I learned at work this year” address for 2018, and he devoted a good portion of the address to talk about the dangers of a pandemic.

After noting the reality of such dangers as terrorism and climate change, Gates writes, “But if anything is going to kill tens of millions of people in a short time, it will probably be a global epidemic.” He adds that an epidemic similar to the Spanish flu outbreak of 1918 could kill 30 million people worldwide in six months.

Anticipating Gates, a 2017 article in CNN set forth the “Seven reasons we’re at more risk than ever of a global pandemic.” The reasons include population growth and increased urbanization, climate change, and the rise in international travel.



Today Forrester closed the deal to acquire SiriusDecisions.  

SiriusDecisions helps business-to-business companies align the functions of sales, marketing, and product management; Sirius clients grow 19% faster and are 15% more profitable than their peers. Leaders within these companies make more informed business decisions through access to industry analysts, research, benchmark data, peer networks, events, and continuous learning courses, while their companies run the “Sirius Way” based on proven, industry-leading models and frameworks.

Why Forrester and SiriusDecisions? Forrester provides the strategy needed to be successful in the age of the customer; SiriusDecisions provides the operational excellence. The combined unique value can be summarized in a simple statement:

We work with business and technology leaders to develop customer-obsessed strategies and operations that drive growth. 



Thursday, 03 January 2019 15:49

Forrester + SiriusDecisions

(TNS) — Richard Barry meticulously searched the ruins of a Paradise mobile home park for things he wished he would not find — bones or teeth, any signs of life lost in the devastating Camp Fire.

Barely visible within a pile of dust, Barry noticed what appeared to be a small skull. His mind raced with a sickening thought: Oh my God, it's going to belong to a child.

But it didn't. The skull belonged to an antique porcelain doll left in the rubble.

"At the time we didn't know what we were looking for," Barry said. "We knew we were looking for human remains, but I don't think anybody was prepared for what that meant. The fire burned so intensely that the only thing that was left was calcium, or human bone."



It has been more than two decades since AOL popularized email with the catchy “you’ve got mail” greeting. So ubiquitous was it in its heyday that it was the title of a romcom starring Tom Hanks and Meg Ryan. Since then, however, the way that people use the internet to communicate has evolved significantly. AOL and Yahoo! recently shuttered their once popular instant messenger services as mobile messaging, social networking, photo sharing, and video chat have risen to prominence. You’ve Got Mail has made way for The Social Network. But despite these innovations, email remains a digital mainstay.

As part of the research behind our recently published Forrester Analytics: Email Marketing Forecast, 2018 To 2023 (US), we uncovered three email trends that might surprise you:



Wednesday, 02 January 2019 16:26

You’ve Still Got Mail

Disaster Recovery as a Service (DRaaS) is becoming more popular as a business continuity technology. Mick Bradley looks at what is currently possible using DRaaS and how the technology will develop in the future.

2018 was another year of IT outages and cyber attacks. Despite significant cyber security budgets, malicious attacks and careless mistakes by employees have unnecessarily interrupted business processes, leading to financial losses as well as diminished customer confidence. With this in mind, many organizations are looking at how to better secure their IT systems to make sure they don’t see more of the same in 2019.

Unfortunately, despite investing in the latest security technology, there is often nothing that can be done to stop these incidents from happening. Even with the tightest security, some attacks still manage to sneak through. If organizations want to mitigate these risks, one thing they can do is invest in minimising the impact on customers and end users. However, Arcserve research has shown that despite the fact that nearly half of global IT decision-makers feel like they have less than one hour to recover business-critical data before it starts impacting revenue, only 26 percent are extremely confident in their ability to do so.

So, how can organizations address this issue? How can they make sure that 2019 is not another year punctuated by unnecessary disruptions to business processes?



We have nothing to fear from advanced technology in the workplace, Jason Stockwood tells Etan Smallman. Rather, we should make it a valuable servant of humanity

The advent of artificial intelligence has seen many business owners and employees panic about the prospect of industry turmoil and mass layoffs. But is "the rise of the machines" necessarily something to fear?

Jason Stockwood, CEO of Simply Business and the man named by The Sunday Times as the UK’s best leader, doesn’t think so. In his new book, Reboot: A Blueprint for Happy, Human Business in the Digital Age, he argues for a sense of optimism about harnessing technology to build empowered teams and fuel creativity in your company. Here, he talks exclusively to Regus Magazine.



“You who are on the road must have a code that you can live by ... teach your children well ... and feed them on your dreams.”

Graham Nash was spot on — about machine learning. Training a well-adjusted machine learning model that won’t get in trouble and embarrass you in public requires the same assiduous guidance, supervision, and ongoing engagement required to raise a successful, ethical child. Give them good training data, validate their predictions, and give them a set of business rules (a moral code) to prevent catastrophic errors in judgment, and you will be well on your way to raising a machine learning model (or child) that you can be proud of.

Machine learning models, like children, are endowed with enormous potential — potential to do great things that benefit society, as well as the opposite. However, major gaffes with machine learning in recent years highlight that we have a long way to go in improving the development of these models. Take, for example, Microsoft’s now-infamous Tay bot. Designed to mimic the style and vernacular of a teenager, Tay was a short-lived AI chatbot whose goal was to learn from the experience of interacting with users on Twitter and offer playful responses. Within a day of corresponding with internet trolls, however, the friendly Tay was spouting racial epithets along with other polemics and had to be quickly shut down by its creators. It’s easy to blame the users who corrupted Tay, but the humans who created the bot were just as culpable. If they had approached the development of Tay as if they were raising a child, they would have taken steps that would have prevented Tay’s corruption, or at least made it much harder.
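The "set of business rules (a moral code)" idea above can be made concrete as a deterministic guard layer that vets every model output before it reaches users. The sketch below is purely illustrative; the toy model, blocklist, and fallback message are all stand-ins, not a real moderation system:

```python
# Illustrative "business rules as a moral code" guard: a deterministic
# layer that vets every model output before it is published. The toy
# model, blocklist, and fallback are stand-ins, not a real system.

BLOCKED_TERMS = {"slur1", "slur2"}   # placeholder deny-list
FALLBACK = "Sorry, I can't respond to that."

def toy_model(prompt: str) -> str:
    """Stand-in for a learned chatbot that parrots what it is fed."""
    return f"You said: {prompt}"

def guarded_reply(prompt: str) -> str:
    """Apply the business rule before anything reaches users."""
    reply = toy_model(prompt)
    if set(reply.lower().split()) & BLOCKED_TERMS:
        return FALLBACK              # rule fires: suppress the output
    return reply

print(guarded_reply("hello there"))  # passes the guard unchanged
print(guarded_reply("slur1"))        # guard intercepts the echoed term
```

The design point is that the guard is rules-based and sits outside the model: even if trolls corrupt what the model learns, the worst outputs never leave the system.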



Wednesday, 02 January 2019 16:21

Machine Learning Model I Learned From My Kids

In today’s blog, I’ll share our business continuity management guide so that it might help you wrap your mind around starting a BCM program.

So you’ve been tasked with getting a business continuity management (BCM) program up and running for your organization. Congratulations, it’s an exciting field and an important responsibility.

This information might also be helpful to folks who have a program up and running and wonder what they might be missing.



How to create and sell an awkward crisis exercise scenario

It’s often the case that the most severe crises that cause lasting harm to a company’s brand and valuation are internally generated. These crises are the result of wrong-doing, sometimes even outright criminal behavior, by members of the C-suite, all the way up to the CEO. Crises caused by deeply flawed leaders also occur frequently in non-profits, government, religious and educational institutions.

The question for crisis managers tasked with planning exercises and drills is this: How can you possibly sell the idea of a crisis exercise whose scenario explicitly implicates your organization’s top leadership with serious criminal offenses? And if you think that kind of an exercise is unnecessary or not worth the heartache of convincing others that it’s an important exercise scenario, consider the following:

In the corporate world, take VW’s “dieselgate” as Exhibit A. Here the highest echelons of the company were involved in an elaborate deceit of entire governments and their regulators around the world, as well as millions of international customers. VW’s stock has yet to recover from the 30 percent drop it experienced overnight when the deceit first became known. While this unprecedented auto industry crisis began in 2015, the effects continue unabated today. Billions of dollars more in fines are now being levied, and more senior executives are being arrested and jailed. Most recently, Audi CEO Rupert Stadler was arrested for his alleged role in the diesel scandal.



Guidance for Executive Management and the Board

Protiviti’s Jim DeLoach discusses strategies to enhance the risk assessment process, from ensuring the proper stakeholders are involved to accounting for disruptive change and moving beyond “enterprise list management.”

An effective risk assessment is fundamental to risk management and the board’s risk oversight process. Successful risk assessments help directors and executive management identify emerging risks and face the future confidently.

An enterprise risk assessment (ERA) is a systematic and forward-looking analysis of the impact and likelihood of potential future events and scenarios on the achievement of an organization’s business objectives within a stated time horizon. The process begins with an articulation of the enterprise’s governing business objectives as reflected in its strategy and performance goals. It applies predetermined risk criteria to well-defined risk scenarios that could lead to the organization falling short of achieving those objectives. Often, the assessment results are displayed on a grid or map for review by decision-makers to ensure risk owners are appropriately assigned and risk responses and metrics are in place. Many organizations have some sort of ERA process in place.
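As a minimal sketch of how such a grid can be assembled (the risk names, the 1–5 scoring scale and the severity bands below are hypothetical illustrations, not part of Protiviti’s guidance):

```python
# Hypothetical sketch of an enterprise risk assessment (ERA) grid: each
# risk scenario gets a likelihood and impact score on a 1-5 scale and is
# bucketed for review. Risk names, scale and bands are illustrative only.

RISKS = {
    "Supply chain disruption": (4, 5),  # (likelihood, impact)
    "Key system outage":       (3, 4),
    "Regulatory change":       (2, 3),
}

def bucket(likelihood: int, impact: int) -> str:
    """Combine the two scores into a simple severity band."""
    score = likelihood * impact
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"

for name, (lik, imp) in RISKS.items():
    print(f"{name}: likelihood={lik}, impact={imp} -> {bucket(lik, imp)}")
```

Multiplying likelihood by impact is a common simplification; in practice the resulting grid feeds the review step described above, where decision-makers confirm that each high-band item has an assigned owner, response and metrics.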



3 Emerging Areas of Concern

Information Security Forum recently released “Top Emerging Threats for 2019,” an annual outlook on the top global security threats businesses may face in the coming year. ISF’s Steve Durbin provides insight into some of the most pressing threats organizations should be aware of.

It’s that time of year again – time for each and every one of us to reminisce on the past year and make resolutions for how we can do better in the year ahead.

In the year ahead, organizations must prepare for the unknown so they have the flexibility to endure unexpected and high-impact security events. To take advantage of emerging trends in both technology and cyberspace, businesses need to manage risks in ways beyond those traditionally handled by the information security function, since innovative attacks will most certainly impact both business reputation and shareholder value.

Based on comprehensive assessments of the threat landscape, the Information Security Forum recommends that businesses focus on the following security topics in 2019:

  • The Increased Sophistication of Cybercrime and Ransomware
  • The Impact of Legislation
  • Smart Devices Challenge Data Integrity



Unfortunately, recent news events have shown that workplaces need to do a better job of preparing for instances of workplace violence. In today’s post, we’ll share 10 steps you can take to mitigate the risk of active shooter incidents and similar events at your organization.

In recent years, society has learned through tragedies such as the Sandy Hook and Parkland shootings the importance of taking measures at schools to mitigate the threat of gun violence to children. Most schools now routinely conduct drills to prepare for such an event.

For whatever reason, workplaces lag behind schools in this area—perhaps out of an assumption that while children need to be protected and trained, adults will automatically know what to do. This assumption isn’t true.

The fact is, workplaces are also vulnerable. Adults also need to be drilled in what to do, and workplaces also need to be prepared. This includes the facilities you are responsible for as a business continuity professional.



And Striking the Right Balance to Fight It

The reality is that the vast majority of corporations have a fraud problem to some degree. It’s a growing problem – one indicator pointing to a rise in overall economic crime globally. Michael Volkov outlines various methods to detect and prevent fraud and gives us a peek into the mind of a corporate fraudster.

“For the love of money is the root of all evil…” – 1 Timothy 6:10, King James Version, The Bible

Corporate bribery requires money. How is that for something obvious?

Companies face a variety of threats; one enduring threat is the risk of fraud or theft. Unfortunately, employee fraud is all too common.

PwC’s 2018 Global Economic Crime and Fraud Survey reported that “only 49 percent of global organizations said they’d been a victim of fraud and economic crime. However, we know this number should be much higher. So what about the other 51 percent?” PwC suggests that the other 51 percent of corporate organizations are blissfully ignorant and ignoring their obvious fraud problem.



Thursday, 20 December 2018 15:54

The Growing Problem Of Corporate Fraud

(TNS) - A tornado tore through a neighborhood near Port Orchard Tuesday afternoon, ripping roofs off of buildings and pulling trees to the ground, authorities said.

One home’s roof was torn off completely, revealing its second-story rooms to news helicopters above. Twenty homes were evacuated after a possible gas leak.

“Based on radar imagery and images we’ve seen from social media, it looks like a tornado touched down near Port Orchard around 2 p.m.,” said Kirby Cook, a meteorologist with the National Weather Service.

About 50 buildings were affected, said Deputy Scott Wilson, of the Kitsap County Sheriff’s office.



Hurricane Harvey hit Texas in August 2017. Just weeks later, Irma made landfall in Florida, followed by Maria in Puerto Rico. The so-called “HIM” storms struck the U.S. 16 months ago, but insured loss numbers have yet to be finalized. Why?

There are at least two reasons: the storms happened in rapid succession, wreaking havoc on the claims settlement process; and the storms caused significant business interruption losses, which can take time to settle.



(TNS) - Kristi Proctor has had a small, federally-provided travel trailer parked beside the shattered remains of her home for nearly two weeks.

Now, if only she were given the keys to unlock it.

Proctor, whose Panama City home was one of many destroyed by Hurricane Michael, registered with the Federal Emergency Management Agency for direct housing assistance soon after the storm, only to wait about two months for a trailer to arrive. Crews connected water and sewer to the trailer and were supposed to then return to connect the power and hand Proctor the keys.

As of Friday evening, Proctor was still waiting, instead staying in a neighbor's spare bedroom with her son.

"I don't understand the holdup," Proctor said. "If they just get the power hooked up, we could be in it."

For more than a month, FEMA has worked with county and city officials to bring various trailers and modular homes to the area for residents whose homes were destroyed by the hurricane. However, according to FEMA, as of last week only around 44 trailers had been placed in the county. Meanwhile, around 1,400 people are registered in the agency's direct housing program.



Improving customer service, productivity and efficiency are just a few of the many benefits of a service management system. ISO has updated two standards in its service management series, with new features, topics and tips from the top.

According to a Forbes report, IT service management is highly important to most executives, and a lack of a service management approach hurts competitiveness due to too much time and money spent on ongoing maintenance and management rather than new initiatives.

A service management system (SMS) supports the management of the service life cycle, from planning to delivery and improvement, offering better value for customers as well as those delivering the service. It gives ongoing visibility, allowing for continual improvement in effectiveness and efficiency.

Published jointly by ISO and the International Electrotechnical Commission (IEC), the ISO/IEC 20000 series of standards provides comprehensive guidance on virtually every aspect of SMS and two key parts have just been updated.



Implementing cloud storage best practices can be challenging. Follow these tips below to choose the best enterprise cloud storage plan for your business needs – without wasting time and money.

We cover both these critical storage tasks:



Wednesday, 19 December 2018 15:16

Cloud Storage Best Practices

(TNS) - Fresh details emerged Monday about another of the key blunders in the response to the Parkland massacre — miscommunication about video footage that led police to think the shooter was still in the three-story building when he wasn’t.

The mistake meant that officers proceeded with more extreme caution and most paramedics, for their own safety, were not permitted to enter the school.

“Had we known the shooter wasn’t there, we probably could have flooded that building a lot faster knowing that we’re just going to go in there and just start trying to recover victims and wounded people,” George Schmidt, a member of Coral Springs Police Department’s SWAT operations team, told investigators, according to transcripts of interviews with the Florida Department of Law Enforcement.



For the past five years Continuity Central has conducted an online survey asking business continuity professionals about their expectations for the year ahead. We are repeating the survey again this year and the interim results are now available. They are as follows...

Initial demographics

So far, the majority of respondents (76.5 percent) are business continuity professionals operating in large organizations; 9.5 percent are from medium-sized and 14 percent from small organizations. The top responding country so far is the United States, where 39 percent of respondents are located, followed by the UK (22.5 percent) and Australia (6 percent).



Once a year, Santa and his merry band of reindeer take an exhausting trek around the world, distributing gifts and goodies and hoping to make everyone’s holiday wishes come true.

Now, imagine what would happen if in his rush to get the “Toy of the Year” to all those excited boys and girls, Santa disregarded his trusted IT elves’ warning about potential spikes in traffic? Let’s just say there would be a riot of Black Friday-level proportions.

Unfortunately, Santa isn’t the only one who tends to make this costly mistake.

Oftentimes, companies are so focused on ensuring the holiday season is a sales success, they fail to properly allocate resources to account for periods of IT workload fluctuation.

Given that high volumes of traffic go hand-in-hand with the holidays, post-Christmas IT outages can be all too common. As a result, any failure to prioritize load distribution and scalability is just bad for business.

Technical difficulties. Failure to register. Systems outage. These are the last things companies want to hear about – especially during their busiest time of year.

Santa may obsess over his list, hoping to discover who’s been naughty or nice. But, let’s be clear, those are far from the only things he should be checking twice.


The Real Costs to Companies

People get emotional over cyber data breaches, and the media loves to report on the latest hack attack that exposed millions of users’ information. Other than reputational damage (which is quickly forgotten, given the 24/7 news cycle), why should risk managers, executives and business owners care? Because it’s expensive. So expensive that it could hurt profits for years.

Compliance departments, risk managers and executives may not appreciate the financial damage that a data breach will cause. Ask any company executive or risk manager who has experienced a data breach and you will hear stories of disorder, the blame game, unanswered questions and the expense. The disorder and expense increase if the company did not have a data breach and incident response plan in place before the cyber incident was discovered. Why? Because it is more expensive to hire professionals and forensic experts in the midst of the data breach than to negotiate reasonable rates before the event.

Even after the breach is contained, the breached company may become the target of investigations, regulatory fines, litigation costs, reputational harm and lost profits that can affect the company’s bottom line for years.

If you are wondering why a data breach is so expensive and where these expenses come from, keep reading.



Tuesday, 18 December 2018 15:32

What Makes A Cyber Data Breach Expensive?

The business risks associated with global climate change are enormously complex and nearly infinite in quantity. Your firm’s climate-related risks, however, are much more manageable (albeit complex and numerous, as well). No two organizations are exposed in exactly the same way.

Rising temperatures, sea-level rise, and more frequent and severe extreme weather events have already wreaked havoc on the growth and well-being of organizations and communities around the world. Since the 1980s, the number of annual weather-related loss events has tripled, and this trend is only worsening. The latest National Climate Assessment (the fourth) paints a grim picture for the future US economy if the world does not take significant action, immediately, to stem climate change: annual economic losses of $500 billion.

Among a long list of contributing factors, the largest include the impacts on labor productivity, heat-induced mortality, and damage to coastal assets. The effects we’re witnessing now and projecting in the near future, coupled with stakeholder pressure, make it clear that customer-obsessed business and technology leaders have one choice: “Adapt To Climate Change Or Face Extinction.”



By Alex Becker, vice president and general manager of Cloud Solutions, Arcserve

If you’re like most IT professionals, your worst nightmare is waking up to the harsh reality that one of your primary systems or applications has crashed and you’ve experienced data loss. Whether caused by fire, flood, earthquake, cyber attack, programming glitch, hardware failure, human error, whatever – this is generally the moment that panic sets in.

While most IT teams understand unplanned downtime is a question of when, not if, many wouldn’t be able to recover business-critical data in time to avoid a disruption in business. According to new survey research commissioned by Arcserve of 759 global IT decision-makers, half revealed they have less than an hour to recover business-critical data before it starts impacting revenue, yet only a quarter cite being extremely confident in their ability to do so. The obvious question is why.


Navigating modern IT can seem like stumbling through a maze. Infrastructures are rapidly transforming, spreading across different platforms, vendors and locations, but still often include non-x86 platforms to support legacy applications. With these multi-generational IT environments, businesses face increased risk of data loss and extended downtime caused by gaps in the labyrinth of primary and secondary data centers, cloud workloads, operating environments, disaster recovery (DR) plans and colocation facilities.

Yet, despite the complex nature of today’s environments, over half of companies resort to using two or more backup solutions, further adding to the complexity they’re attempting to reduce. Never mind delivering on service level agreements (SLAs) or, in many cases, protecting data beyond mission-critical systems and applications.

It seems modern disaster recovery has become more about keeping the lights on than proactively avoiding the impacts of disaster. Because of this, many organizations develop DR plans to recover as quickly as possible during an outage. But, there’s just one problem: when was their most recent backup?  


Day-old sushi is your backup. That’s right, if you’ve left your California Roll sitting out all night, chances are it’s the same age as your data if you do daily backups. One will cause a nasty bout of food poisoning and the other a massive loss of business data. Horrified or just extremely nauseated?

You may be thinking this is a bit dramatic, but if your last backup was yesterday, you’re essentially willing to accept more than 24 hours of lost business activity. For most companies, losing transactional information for this length of time would wreak havoc on their business. And, if those backups are corrupted, the ability to recover quickly becomes irrelevant.
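To make that arithmetic concrete, here is a minimal sketch (the dates are hypothetical and not tied to any particular backup product) showing that the worst-case data loss under a fixed schedule is simply the time elapsed since the last successful backup:

```python
from datetime import datetime, timedelta

def worst_case_data_loss(last_backup: datetime, outage: datetime) -> timedelta:
    """Worst-case recovery point: everything since the last good backup is lost."""
    return outage - last_backup

# Daily backups: an outage just before tonight's run loses almost a full day.
last_backup = datetime(2018, 12, 17, 23, 0)
outage = datetime(2018, 12, 18, 22, 30)
print(worst_case_data_loss(last_backup, outage))  # 23:30:00
```

The same calculation is why shortening the interval between captures, rather than speeding up the restore, is what actually reduces lost business activity.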

While the answer to this challenge may seem obvious (backup more frequently), it’s far from simple. We must remember that in the quest to architect a simple DR plan, many organizations make the one wrong move that becomes their downfall: they use too many solutions, often trying to overcompensate for capabilities offered in one but not the others.

The other, and arguably more alarming reason, is a general lack of understanding about what’s truly viable with any given vendor. While many solutions today can get your organization back online in minutes, the key is minimizing the amount of business activity lost during an unplanned outage. It’s this factor that can easily be overlooked, and one that most solutions cannot deliver.


Imagine, for a moment, you have a power failure that brings down your systems and one of two scenarios plays out. In the first, you’re confident you can recover quickly, spinning up your primary application in minutes only to realize the data you’re restoring is hours, or even days, old. Your manager is frantic and your sales team is furious as they stand by and watch every order from the past day go missing. In the second scenario, you’re confident you can recover quickly and spin up your primary application in minutes. This time, however, the data was synced just a few seconds or minutes ago. This is the difference between a blip on the radar of your internal and external customers and potentially hundreds of thousands (or more) in lost revenue, not to mention damage to your and your organization’s reputation, which ranks right up there with the financial loss.

For a variety of reasons ranging from perceived cost and complexity to limited network bandwidth and resistance to change, many shy away from deploying DR solutions that could very well enable them to avoid IT disasters. However, leveraging a solution that can keep your “blip” from turning brutal is easily the best kept secret of a DR strategy that works, and one that simply doesn’t.


Many IT leaders agree that the volume of data lost during downtime (your recovery point objective, or RPO) is equally, if not more important than the time it takes to restore (your recovery time objective, or RTO). The trick is wading through the countless solutions that promise 100 percent uptime, but fall short in supporting stringent RPOs for critical systems and applications. These questions can help you evaluate whether your solution will make the cut or leave you in the cold:

  1. Does the solution include on-premises (for quick recovery of one or a few systems), remote (for critical systems at remote locations), private cloud you have already invested in, public cloud (Amazon/Azure) and purpose-built vendor cloud options? Your needs may vary and the solution should offer broad options to fit your infrastructure and business requirements.
  2. How many vendors would be involved in your end-to-end DR solution, including software, hardware, networking, cloud services, DR hypervisors and high availability? How many user interfaces would that entail? The patchwork-based solution from numerous vendors may increase complexity, time to manage and internal costs – and more importantly it will increase risks of bouncing between vendors if something goes wrong.
  3. Does the solution provide support and recovery for all generations of IT platforms, including non-x86, x86, physical, virtual and cloud instances running Windows and/or Linux?
  4. Does the solution offer both direct-to-cloud and hybrid cloud options? This ensures you can address any business requirement and truly safeguard your IT transformation.
  5. Does the solution deliver rapid, sub-five-minute, push-button failover? This allows you to continue accessing business-critical applications during a downtime event, as well as power on / run your environment with the click of a button.
  6. Does it support both rapid failover (RTOs) and RPOs of minutes, regardless of network complexity? When interruption happens, it’s vital that you can access business-critical applications with minimal disruption and effectively protect these systems by supporting RPOs of minutes.
  7. Does the solution provide automated incremental failback to bring back all applications and databases in their most current state to your on-premises environment?
  8. Does your solution leverage image-based technology to ensure no important data or configuration is left behind?
  9. Is your solution optimized for low bandwidth locations, being capable of moving large volumes of data to and from the cloud without draining bandwidth?
  10. In the event of a disaster, does the solution give you options for network connectivity, such as point-to-site VPN, site-to-site VPN and site-to-site VPN with IP takeover?
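One simple way to use the checklist above is to treat each question as a yes/no check and shortlist only solutions that clear a minimum bar; the vendor names and scores in this sketch are made up for illustration:

```python
# Hypothetical sketch: scoring candidate DR solutions against the ten
# questions above as yes/no checks. Vendor names and scores are made up.

candidates = {
    "Vendor A": 9,   # number of the ten questions answered "yes"
    "Vendor B": 6,
}

def shortlist(scores: dict, minimum: int = 8) -> list:
    """Keep only solutions that clear the minimum number of 'yes' answers."""
    return [name for name, yes_count in scores.items() if yes_count >= minimum]

print(shortlist(candidates))  # ['Vendor A']
```

In a real evaluation you would likely weight the questions (for example, RPO support more heavily than UI count) rather than count them equally.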

The true value you provide your organization and your customers is the peace of mind and viability of their business when a disaster or downtime event occurs. And even when it’s business as usual, you’ll be able to support a range of needs - such as migrating workloads to a public or private cloud, advanced hypervisor protection, and support of sub-minute RTOs and RPOs - across every IT platform, from UNIX and x86 to public and private clouds.

By keeping these questions in mind, you’ll be better prepared to challenge vendor promises that often cannot be delivered and to select the right solution to safeguard your entire IT infrastructure - when disaster strikes and when it doesn’t. No more day-old sushi. No more secrets.

About the Author

As VP and GM of Arcserve Cloud Solutions, Alex Becker leads the company’s cloud and North American sales teams. Before joining Arcserve in April 2018, Alex served in various sales and leadership positions at ClickSoftware, Digital River, Fujitsu Consulting, and PTC.

New report says that Panorays “is best for S&R [security and risk] pros that want a dedicated tool to conduct all cyber-TPRM [third party risk management] activity.”

NEW YORK – Panorays, a rapidly growing provider of automated third-party security management, today announced that it has been named a Strong Performer in The Forrester New Wave™: Cybersecurity Risk Rating Solutions, Q4 2018 evaluation.

Analysts found that Panorays provides “a competitive cyber-risk rating solution with well-rounded capabilities” and “is best for S&R pros that want a dedicated tool to conduct all cyber-TPRM activity. S&R pros seeking a tool that provides solid cyber-risk ratings along with other TPRM features for cybersecurity will find Panorays an intriguing option.”

The report reflects Panorays’ performance across 10 criteria. Panorays received a “differentiated” rating — the highest possible — in the criteria of internal and enterprise risk context, risk assessments and review portal, and global reach. Forrester evaluated nine vendors across three categories: current offering, strategy and market presence.



At many companies, emergencies are made worse because the crisis management team does not have ready access to vital, up-to-date data. In today’s post, we’ll talk about what information the crisis team might need—and how you can make sure they have it.

One of the essential questions you must consider when devising your Crisis Management and IT/Disaster Recovery strategies is: Will the crisis management team have quick access to the critical information they need to carry out their role?

They’d better.

Information is critical to our businesses. We cannot make good decisions without it.



(TNS) - To protect itself from the next major hurricane, Texas will have to build storm-surge barriers, shore up wetlands, buy out residents who live in vulnerable areas, rethink development plans and raise the first floors of existing buildings, suggests a sweeping new report prepared for Gov. Greg Abbott and released Thursday afternoon. 

The new recommendations come from Abbott's Commission to Rebuild Texas, led by Texas A&M Chancellor John Sharp. In September 2017, Abbott charged Sharp with the task of rebuilding Texas "ahead of schedule, under budget and with a friendly smile."

The report calls Hurricane Harvey a warning that should not be ignored.  "The enormous toll on individuals, businesses and public infrastructure should provide a wake-up call underlining the urgent need to 'future-proof' the Gulf Coast - and indeed all of Texas - against future disasters," the study says.  The investigation, based on hundreds of hours of interviews and dozens of scientific papers, is comprehensive in its scope, covering  issues as broad as the need to streamline emergency response and as specific as the need to improve oversight and availability of contractors.



Developing software is not always a straightforward procedure. An International Standard to apply the principles of the world’s most widely used quality management system enables engineers to smooth out the process. It has just been updated.

ISO/IEC/IEEE 90003, Software engineering – Guidelines for the application of ISO 9001:2015 to computer software, is designed as a checklist for the development, supply and maintenance of computer software. The recently updated version combines the proven benefits of ISO 9001 with some of the world’s most important support documents in software engineering, allowing an organization to benefit from international best practice in improving quality at every step of the life cycle. This includes everything from the supply, acquisition, operation and maintenance, to the circular process of continuous improvement.

Developed in conjunction with the International Electrotechnical Commission (IEC) and the Institute of Electrical and Electronics Engineers (IEEE), the standard was recently revised to align with the most recent version of ISO 9001 (published in 2015), with new concepts relevant to current software development added.



(TNS) - Thirty teenagers, members of the Providence Student Union, rallied at City Hall Wednesday to demand the removal of police officers from their high schools.

The student rights organization also called for an increase in guidance counselors and a shift toward restorative justice, which uses dialogue and understanding to resolve disciplinary issues. Instead of relying on school resource officers, students asked for safety teams whose members are trained in deescalation techniques and conflict resolution.

"Today, I am fighting for every student ever pinned down, arrested or harassed by school resource officers," said student Jayson Rodriguez. "We don't need SROs. We need more counselors. We need more nurses. We need more mental health providers. We need more social workers."



Not a day seems to go by without news of another cyber incident. As human beings, we learn to get used to things and become desensitised to events that don’t directly involve us. Is this happening when it comes to cyber threats? Mike Smith thinks so…

1989 was a year of positive milestones which would have a profound impact on the way we live and work today. The World Wide Web was invented, the Berlin Wall was torn down, and the first GPS satellite went into orbit. However, not everything about the year was a cause for celebration. Alongside these progressive developments came the creation of the world’s first computer worm. Initially crafted to test the size of the Internet, the worm spread out of control, causing devastation and alerting businesses to the importance of investing in security products, including firewalls. This was the first defensive measure in the cyber security industry, and now in 2018, a year plagued by cyber attacks, it is one of the most basic.

Cyber complacency?

In the past, cyber attacks used to be so infrequent that hearing about just one breach in the news would be reason enough to invest in protection. Nowadays, not a day goes by without news of another hack being disseminated around the world. The temptation to roll your eyes, say ‘not another one’, and shut your browser is palpable. As human beings, we are adaptable; we learn to cope, learn to get used to things, and become desensitised to events that don’t directly involve us.

But becoming fatigued and showing complacency is one of the most dangerous things we can do.



(TNS) - Officials are still assessing the damages left in the aftermath of a tornado that hit several properties near the Tenkiller Lake area on Nov. 30.

"The numbers from the preliminary assessments shown roughly 189 structures with major and destroyed damages," said Cherokee County Emergency Manager Mike Underwood. "Federal Emergency Management Agency and Small Business Administration are in the areas impacted and are assessing the numbers of damaged homes and businesses and this will take one to two days to finish."

The storm produced an EF2 tornado, which passed through five different counties, extending nearly 60 miles.

Cherokee County Undersheriff Jason Chennault said that when the storm hit, every available deputy responded for a search-and-rescue. Most of the authorities who were on the scene brought their own chainsaws, which many ended up using because of the amount of damage left behind.



In many organizations the way that data backup is handled hasn’t changed much over the years, despite the fact that we are in the middle of a digital revolution! Gijsbert Janssen van Doorn looks at why this is and calls for organizations to move towards continuous data protection.

Digital transformation has meant that the way products and services are purchased has changed – essentially, the market has evolved along with business models. Companies increasingly cannot afford to lose data, and with the constant headlines of ransomware attacks, phishing scams, data leaks and security breaches, this is becoming an ever-increasing challenge. The question is, if the world of IT is changing, why isn’t backup?

Backup 1988 vs. 2018

The traditional backup architecture has been around since the invention of IT. It works by copying data to a different storage architecture, at a fixed point in time, generally at night. This is usually done as quickly as possible as it has an impact on the IT environment’s performance level and infrastructure. However, backup hasn’t changed much from this in the last thirty years, and so isn’t keeping up with the modern age.

Backup revolves around securing data to prevent data loss. Over time, companies have come to realise that they are heavily dependent on their data – fewer and fewer companies can cope with any kind of data loss. For example, an online shop cannot afford to lose twelve hours of data due to an outage – not only would it lose twelve hours of orders and sales, but its reputation would also be damaged in the long run. To put this into context, a recent study found that nearly half of businesses have suffered an unrecoverable data event in the last three years, and of these, 20 percent reported a loss of customers, and 19 percent had direct damage to the company reputation. All of this combined can, unfortunately, sometimes mean the end of the business.
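The online-shop example can be sketched in a few lines (the order rate and capture intervals here are hypothetical): the orders lost in an outage are simply those placed since the last capture point, which continuous data protection shrinks toward zero:

```python
# Hypothetical sketch of the online-shop example: orders placed since
# the last backup or journal entry are unrecoverable after an outage.
# The order rate and capture intervals are illustrative only.

ORDERS_PER_HOUR = 120

def orders_lost(hours_since_last_capture: float) -> int:
    """Orders placed since the last capture point are lost on failover."""
    return int(ORDERS_PER_HOUR * hours_since_last_capture)

# Nightly backup, outage twelve hours after the last run:
print(orders_lost(12))        # 1440 orders gone
# Continuous data protection, journal written seconds ago:
print(orders_lost(5 / 3600))  # effectively zero
```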



(TNS) - Four minutes after a 7.0 earthquake shattered morning routines in Southcentral Alaska on Nov. 30, residents got a smartphone alert warning of a tsunami risk and instructing them to seek higher ground.

The alert was canceled less than an hour later, but not before Anchorage residents called 911 to ask if they should seek higher ground. Officials later said it was meant more for coastal communities like Seward and Homer than Anchorage.

But the episode got a lot of people wondering: Could Anchorage actually be inundated by a tsunami?

Scientists, shy of speaking in absolutes, won’t say it is impossible. But it is extremely unlikely.

“We cannot say the probability is zero -- but it is a very small probability,” said David Hale, an oceanographer with the National Tsunami Warning Center, located in a building inland in Palmer.



The role of a business continuity specialist is rapidly changing with the constant introduction of new technology.  We can’t help but wonder, how will the profile of a business continuity manager evolve?

For those of you reading this blog who might not be familiar with business continuity, let’s rewind to the ’80s.  This was a time when the concept of protecting your entire organization emerged.  Previously, companies were focused on identifying critical systems and how to recover them quickly after an event, which would explain why many business continuity professionals transitioned from an IT background.  Paul Kirvan, who wrote the article “Business continuity managers must expand their training to stay viable,” spoke of this time saying, “…we have seen the development of the term resilience, which some suggest is the evolution of BC.”

Fast forward to the ’90s and we start to see technology being used in malicious ways, posing threats to network security. This quickly became the number one concern of technology professionals.  It was at this time that business continuity planning became more prevalent and accepted by organizations.



In today’s post, we’ll lay out some crisis response steps your organization can and should take to protect itself in our new business landscape where disaster is just a tweet away.

In today’s world, a seemingly minor snafu can become an existential crisis in the blink of an eye thanks to social media platforms such as Facebook and Twitter.

This makes it more important than ever that your organization is ready for trouble before it strikes.



Thursday, 13 December 2018 15:22

Crisis Response in Today’s Breakneck World

(TNS) - People in Chesapeake will soon get faster emergency care with the new mobile app Pulsara.

It will be used by Chesapeake Regional Medical Center’s medical professionals and the Chesapeake Fire Department (EMS) to treat patients more efficiently.

They are the first in the region to roll out the app. It costs less than $150,000, and it was made possible through funding by the Chesapeake Regional Health Foundation.

“It will give us a head start on being prepared or getting the appropriate treatment started,” said Dr. Lewis Siegel at Chesapeake Regional. “We started training last week, but we’ve been getting very familiar with it for a while.”

Medical errors cause up to 400,000 deaths per year and 80 percent of errors stem from miscommunication, according to Pulsara’s website. The new technology is supposed to help decrease those numbers.



(And Why U.S. Companies Should Take Note)

The General Data Protection Regulation (GDPR), Europe’s sweeping data protection law, has been in effect for six months, and while fines have yet to be levied against U.S. companies for breach of the law, enforcement is beginning to take hold. Anne Shannon Baxter of Access Partnership discusses what organizations with cross-border operations should know.

The General Data Protection Regulation (GDPR) has been in effect for six months, and U.S. companies are still struggling to understand its ramifications. As readers of this publication are aware, the European Union law applies to any foreign companies processing the personal data of data subjects residing in the EU, regardless of the company’s location. This means that businesses in the U.S. that offer goods and services, monitor the behavior of individuals or have an establishment within the EU are liable.

There have not been any fines levied against U.S. companies for breach of the law at the time of writing, but this won’t be the case for long and, with fines of up to €20 million or 4 percent of annual global turnover (whichever is higher), the risk can’t be brushed off.
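The "whichever is higher" rule means the exposure scales with company size; a quick sketch of the maximum-fine calculation (an illustration of the rule as stated above, not legal guidance):

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper tier of GDPR administrative fines: the greater of
    EUR 20 million or 4 percent of annual global turnover."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# A firm with EUR 100m turnover is capped by the flat EUR 20m floor...
print(max_gdpr_fine(100_000_000))    # -> 20,000,000

# ...while for a EUR 2bn firm, 4 percent of turnover governs instead.
print(max_gdpr_fine(2_000_000_000))  # -> 80,000,000
```

For any company with more than €500 million in annual turnover, the percentage term is the binding one.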

Adding to the difficulty, enforcement of the GDPR so far has focused on big technology companies, making it harder for many businesses to ascertain whether or not they are sufficiently prepared. Because fines under the GDPR are retroactive, companies must ensure they do not get complacent about their compliance.



Thursday, 13 December 2018 15:20

How GDPR Enforcement Is Shaping Up In Europe

(TNS) - The federal government will soon start an $18 million project to rebuild Pleasure Island's beaches damaged by hurricanes Matthew and Florence, with Washington picking up most of the tab.

According to the Army Corps of Engineers, the contract includes the periodic nourishment scheduled for Carolina Beach and Kure Beach every three years, but "is unique because it also includes additional repairs to the shoreline for sand lost due to the passage of Hurricane Matthew in 2016 and Hurricane Florence in 2018."

Jonathan Bingham, chief of programs at the corps' Wilmington office, said the federal government will pay for about $14.2 million of the cost, with state and local governments picking up the remaining $3.8 million.

That federal cost includes roughly $3.4 million in additional federal funds, requiring no state or local match, for storm damage repair, Bingham said.



(TNS) - West Virginia left nearly $13 million in federal funds on the table that could have been used to hire staff to better respond to major flooding.

Shana Clendenin, the mayor of Clendenin, told state lawmakers Tuesday that she asked officials and legislators about the grant program, in person, in November 2017, to no avail.

On Sunday, more than one year after the meeting, the Legislative Auditor’s Office published a report stating that, among other failures, the Department of Homeland Security and Emergency Management did not apply for more than $12.7 million in available FEMA funds earmarked for disaster management costs, including grantees’ salaries, benefits and supplies.

“It was my position that the state had the resources available to hire experts to ensure the financial welfare of West Virginia and its sub-recipients,” Clendenin said Tuesday. “The direct response was less than adequate and they seemed puzzled by our questions.”



(TNS) - Debris cleanup for last month’s wildfires in Butte, Ventura and Los Angeles counties could cost at least $3 billion, officials said Tuesday.

Mark Ghilarducci, director of the state’s Office of Emergency Services, said he expects debris removal to begin next month once crews finish cleaning up hazardous materials at the sites.

The cost, which Ghilarducci said is speculative at this point, is more than twice the $1.3 billion it cost to clean up the destruction from the Wine Country fires in October 2017. The funding will come from a combination of federal, state and local governments.

In a change from last year’s effort, the state will manage contracts and hiring for debris removal, rather than the U.S. Army Corps of Engineers.

Ghilarducci expects the debris removal to take a year.



So you’re looking for a lone worker safety device. Lone workers perform some of the most dangerous jobs, and it’s essential that you find a way to keep them safe and connected. But where do you start?

Key Considerations

There are so many different types of lone worker safety devices out there, it can be overwhelming. Start by asking some key questions about what your lone workers need from the device.

Who are your lone workers?

Think about who will be using the device. Are your lone workers tech-savvy? Are they used to using extra equipment on the job? If not, then finding a user-friendly solution is especially important. The device needs to be one that will fit seamlessly into your lone employees’ workflow.



The past year was a big one for disaster recovery teams, for better or for worse. Major challenges included the prevalence of ransomware, GDPR compliance and the cloud.

There has been a big push to elevate the maturity of disaster recovery in 2018. With the news filled with so many system outages, data breaches, cyberattacks and more, it's evident that DR needs to play a front-and-center role in the operations of every organization today.

So, where is DR in 2018 missing the mark?

In short, the biggest overarching observation is that organizations aren't completely prepared for three primary disaster recovery challenges: security, compliance and cloud. I'd like to spend the remainder of this article discussing these angles and where I think DR teams haven't quite made the cut in 2018.



Wednesday, 12 December 2018 15:19

3 common disaster recovery challenges from 2018

Ah, Florida. Home to sun-washed beaches, Kennedy Space Center, the woeful Marlins – and one of the most costly tort systems in the country.

A significant driver of these costs is Florida’s “assignment of benefits crisis.”

Today the I.I.I. published a report documenting what the crisis is, how it’s spreading and how it’s costing Florida consumers billions of dollars. You can download and read the full report, “Florida’s assignment of benefits crisis: runaway litigation is spreading, and consumers are paying the price,” here.

An assignment of benefits (AOB) is a contract that allows a third party – a contractor, a medical provider, an auto repair shop – to bill an insurance company directly for repairs or other services done for the policyholder.



Key Issues Being Discussed in the Boardroom and C-Suite

Leaders of organizations in virtually every industry, size of organization, and geographic location are reminded all too frequently that they operate in what appears to many to be an increasingly risky global landscape. Escalating concerns about the rapidly changing business environment and the potential for unexpected surprises vividly illustrate the reality that organizations of all types face risks that can disrupt their business model over time and damage reputation almost overnight. Boards of directors and executive management teams cannot afford to manage risks casually on a reactive basis, especially considering the rapid pace of disruptive innovation and technological developments in an ever-advancing digital world.

Protiviti and North Carolina State University’s ERM Initiative are pleased to provide this report focusing on the top risks currently on the minds of global boards of directors and executives. This report contains results from our seventh annual risk survey of directors and executives to obtain their views on the extent to which a broad collection of risks is likely to affect their organizations over the next year.



Wednesday, 12 December 2018 15:11


System outage cartoon

Maps, a compass and Rudolph’s red nose. This is all Santa has to rely on if his navigation system goes down during his gift-giving trek around the world. Something tells us there’s going to be a lot of sad faces on Christmas morning if he has to revert to manual navigation.

If it seems absurd that manual processes are the only failsafe, look at the many companies that find themselves following a similar path as Saint Nick. Walmart, J. Crew, Lowe’s, Ulta and other retailers all experienced technical difficulties on Black Friday as online traffic peaked.

In a perfect world, systems would always run smoothly with zero disruptions. Yet, things happen, especially around the holidays: The high volume of traffic around Black Friday, Christmas and New Year’s Eve, snow and ice storms that cause power outages, not to mention the ever-present risk of cyberthreats.

Your best bet for traffic spikes is high availability. You need to design an infrastructure that allows for high amounts of uptime without interruption. In other words, your system needs to be able to function properly even if individual components fail.
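One common way to keep serving when a component fails is to put redundant replicas behind a failover loop, so a request only fails if every replica is down. A generic sketch, not tied to any particular vendor's stack:

```python
class ReplicaDown(Exception):
    """Raised when a single replica cannot serve the request."""

def handle_request(replicas, request):
    """Try each replica in turn; the request fails only if
    every replica is down (no single point of failure)."""
    errors = []
    for replica in replicas:
        try:
            return replica(request)
        except ReplicaDown as exc:
            errors.append(exc)  # note the failure and move to the next replica
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

# Simulated pool: the primary is down, the secondary answers.
def primary(req):
    raise ReplicaDown("primary offline")

def secondary(req):
    return f"served {req} from secondary"

print(handle_request([primary, secondary], "checkout"))  # served checkout from secondary
```

Real load balancers add health checks and timeouts on top of this basic pattern, but the principle is the same: the failure of one component must not become the failure of the service.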

For power outages and other disasters, backups and disaster recovery plans can get critical systems back online as quickly as possible. Most important, though, is to test your systems and your recovery plans, especially ahead of big events like holiday shopping.

Failing to do so could leave you with nothing more than a compass, a map, and a lot of unhappy customers. And that might actually be worse than being on the naughty list.



(TNS) - John Gaylord, shift supervisor at the county’s emergency dispatch center, is caught in a moment of crisis. Or, as he would call it, a moment.

“911, can I help you?”

A caller was sitting with a cyclist who had been struck by a vehicle in a hit-and-run. Gaylord ticked down his list of questions and instructions: “Where are you? Is he alert? Is he breathing? Bleeding? Don’t move him if you can avoid it. Keep your head on a swivel. Help is on the way.”

Gaylord has been an emergency dispatcher with Clark Regional Emergency Services Agency, or CRESA, for nearly 30 years. He’s hard to rattle.

“If not explained correctly, it’s coarse and callous,” he said, sipping on a diet cranberry ginger ale Thursday afternoon. The six screens in front of him — seven, if you count his supervisor’s iPhone — are constantly flashing and changing, updating him on the movement happening across the county in real time.



With the tech sector booming and unemployment low, cybersecurity talent can be hard to recruit and retain. The implications for business resilience are particularly worrisome.

Career website Cyber Seek estimates more than 300,000 cybersecurity jobs remain unfilled in the U.S. It’s predicted that by 2021, there will be a global shortfall of 3.5 million cybersecurity jobs, according to Cybersecurity Ventures. There’s even a bipartisan bill in the House of Representatives, the Cyber Ready Workforce Act, designed to address the cybersecurity workforce shortage crisis.

C-suite executives say the inability to “identify and fill gaps in cyber talent,” along with the capacity to build a “cyber-savvy workforce,” are among their top concerns in regards to business resiliency, according to a Business Insurance study.

Here’s what you need to know about the impact of the cybersecurity talent shortage on business resilience, security, cloud computing and other IT functions—and what you can do about it.



(TNS) — The most authoritative and complete report on climate change and its impact on the U.S. has dire warnings for the Southeast: destructive wildfires like those seen in 2016 are likely to be more commonplace as the world's changing climate creates more fire-prone conditions.

The National Climate Assessment, released the day after Thanksgiving, projects a fourfold increase over the next 30 years in both the area burned by wildfire and suppression cost as forests dry out during longer and more prevalent droughts.

"The report essentially projects that wildfire risk will increase fairly substantially over the next 50 years," said James Vose, a federal coordinating lead author of the report and senior research ecologist at U.S. Forest Service. "That's not only in the West but also to some extent in the Southeastern U.S., as well."

More than 300 experts from 13 government agencies, universities and other institutions, including climate scientists, took part in the 1,600-plus-page report. It is the first such report under the Trump administration and the fourth overall. The report is mandated by law.



(TNS) - Flashing sirens atop emergency vehicles and first responders rushing to scenes have been in the foreground of the tragedies to recently strike Cass County.

But each one of those emergencies, as with just about all local emergencies, began with a 911 call received by a headset-clad dispatcher before an array of computer monitors in a downtown Logansport office. Their job is to collect information, often from panicking callers, before determining what kind of help to send. And just because they're not at the scene of a harrowing event, it doesn't mean the job doesn't take its toll.

Cass County E911 dispatches for over 20 agencies, including five law enforcement, 11 fire, two EMS, animal control, Cass County Government Building security and an emergency management agency while maintaining regular contact with the Indiana State Police and Indiana Department of Natural Resources.

Dan McDonald, Cass County E911 director, said it's been "a tough couple months" for those working in local public safety.



In May 2017, the computing world endured one of the single largest hacks in history.

The WannaCry ransomware wreaked havoc on systems across the globe. China, Russia, the United Kingdom, and even the US – few countries were spared.

Some reports indicated that the impact from WannaCry reached over 400,000 computers in 150 countries.

While the attack carried with it an oddly amusing name, the situation was anything but funny. Not just home computers, but banks, hospitals, and telecom companies were all impacted by the malicious software.

So what exactly was the WannaCry ransomware attack?

Well, it was a hostage situation demanding a ransom.

What was the hostage held for ransom?




These predictions were written by Eoin Carroll, Taylor Dunton, John Fokker, German Lancioni, Lee Munson, Yukihiro Okutomi, Thomas Roccia, Raj Samani, Sekhar Sarukkai, Dan Sommer, and Carl Woodward.

As 2018 draws to a close, we should perhaps be grateful that the year has not been entirely dominated by ransomware, although the rise of the GandCrab and SamSam variants show that the threat remains active. Our predictions for 2019 move away from simply providing an assessment on the rise or fall of a particular threat, and instead focus on current rumblings we see in the cybercriminal underground that we expect to grow into trends and subsequently threats in the wild.

We have witnessed greater collaboration among cybercriminals exploiting the underground market, which has allowed them to develop efficiencies in their products. Cybercriminals have been partnering in this way for years; in 2019 this market economy will only expand. The game of cat and mouse the security industry plays with ransomware developers will escalate, and the industry will need to respond more quickly and effectively than ever before.

Social media has been a part of our lives for more than a decade. Recently, nation-states have infamously used social media platforms to spread misinformation. In 2019, we expect criminals to begin leveraging those tactics for their own gain. Equally, the continued growth of the Internet of Things in the home will inspire criminals to target those devices for monetary gain.

One thing is certain: Our dependency on technology has become ubiquitous. Consider the breaches of identity platforms, with reports of 50 million users being affected. It is no longer the case that a breach is limited to that platform. Everything is connected, and you are only as strong as your weakest link. In the future, we face the question of which of our weakest links will be compromised.

—Raj Samani, Chief Scientist and McAfee Fellow, Advanced Threat Research



Monday, 10 December 2018 16:28

McAfee Labs 2019 Threats Predictions Report

The 2018 hurricane season officially ended on November 30. The National Oceanic and Atmospheric Administration’s (NOAA) storm counts for the season were: 15 named storms, including eight hurricanes. Two of these were “major” hurricanes (Category 3, 4 or 5).

To put that into perspective, the average hurricane season has 12 named storms, including six hurricanes, of which three are major. That makes 2018 a little worse than a “normal” year, and well within NOAA’s predictions before the start of the season on June 1.

Fortunately, these numbers are down from the especially destructive 2017 season, which included the so-called “HIM” storms (Harvey, Irma, and Maria). In 2017 there were 17 named storms, including 10 hurricanes, of which six were major.

But that is little comfort to the people affected by the two major hurricanes, Florence and Michael.



Monday, 10 December 2018 16:26


What the CCPA Signals About the Future

California is leading the way to pass meaningful legislation on data privacy and cybersecurity. The new California Consumer Privacy Act (CCPA) is a strong complement to the EU’s GDPR, although many businesses will need to comply with both regulations. This primer by CipherCloud’s Anthony James on the CA AB 375 details the many new rights and entitlements for California consumers and what companies should do to comply by January 1, 2020.

California just passed the California Consumer Privacy Act, also known as California AB 375, which goes into effect on January 1, 2020. This California regulation is part of the whirlwind of global legislation impacting data privacy and cybersecurity. California is not alone in efforts to legislate the protection of data privacy. Earlier this year, on Capitol Hill, U.S. Senator Ron Wyden (OR) introduced a discussion draft (SIL18B29) for a proposed national Consumer Data Protection Act. SIL18B29 includes very tough penalties for companies that violate your data privacy, even potentially including prison time for offending CEOs.

U.S. Senators Elizabeth Warren (MA) and Senator Mark Warner (VA) have also sponsored a bill now in draft (S.2289) for a national Data Breach Prevention and Compensation Act. This act is focused on credit bureaus and other entities that hold consumer data. These definitions could extend further to a variety of business types, including digital marketing firms and more.

Outside of the United States, there is also considerable legislative activity around data privacy. Most visible and very much in the news at the Paris Peace Forum, President Emmanuel Macron announced the Paris Call for Trust and Security in Cyberspace. The Paris Call is intended to get nation-state-level agreement to basic principles of cybersecurity behavior. Earlier this year, on May 28, the European Union (EU) General Data Protection Regulation (GDPR) became operational as the toughest data privacy law worldwide. The GDPR defines many difficult requirements that must be met by any business utilizing the sensitive and private data of European Community citizens.



Excise stamps are a seal of approval that producers of consumer goods have paid their dues – and that the products are the real McCoy. Not only do they ensure government revenues, they also help detect the illegal and counterfeit products that abound. A new standard for the security of tax stamps has just been published to make them more effective and protect the goods on which they are applied.

Alcohol and cigarettes are the most common items on which tax is levied, as governments aim to both raise revenues and deter the consumption of health-endangering products. But the range of taxes is on the rise as many countries are introducing new ones, such as the sugar tax on soft drinks, with the same objectives in mind. For this system to work effectively, tax stamps are required to demonstrate that the duty has been paid and that the product is legitimately available in the intended market.

However, where there is tax, there are always attempts at tax avoidance, breeding criminal activity that puts illicit and counterfeit products on the market, many of which may be harmful to the health of consumers. A foolproof tax stamp, however, is an effective way of literally stamping down on the problem.



Most organizations continue to devote insufficient thought and resources to the task of assessing and managing risk in their supply chains. This leaves them vulnerable to disruptions in their supply chain and even completely unaware of the various risks that are lurking there.

In today’s post, I’ll sketch out the process your organization’s business continuity (BC) office should follow to assess and mitigate supply chain risk.

Supply chain risk management is an area where there are still significant exposures and risks in business today. This has been a really difficult area for many BC offices to get their arms around.

For those who want to get on top of this issue, today’s post will be an overview of what needs to be done.

Basically, assessing and managing supply chain risk comes down to four things:

  1. Establishing the proper governance for the process
  2. Identifying who your critical suppliers are
  3. Assessing risk at your critical suppliers
  4. Mitigating risk from your critical suppliers

We’ll talk a little about each one below.



Best Practices for Merging Security and Compliance

Within many organizations today, security and compliance teams are running in isolation. This introduces significant enterprise risk, as the security team might be doing what’s best to combat advanced attackers, but their actions may not be in compliance with corporate, industry or federal guidelines. Similarly, the compliance team might be laser-focused on adhering to regulations, but their strategy might be introducing security risks. Tim Woods, VP of Technology Alliances at FireMon, outlines the challenges of operating security and compliance in silos.

Every compliance initiative – whether regulatory or internal – poses the same central question: Are you monitoring for change? While the question is a simple one, for many companies, the answer remains elusive.

Whenever there’s a data breach, compliance failure or system outage, the first thing business leaders want to know is: What changed? And, too often, the response from security and compliance teams is “nothing,” when, in fact, change is happening – they just don’t know about it. By no means are these teams attempting to mask the truth; they are simply being forthright with the limited information available to them.

Maintaining awareness of network and access changes is an important element in achieving a strong security and compliance posture, along with reliable network operations and services. But change management is a complex challenge for many companies for two reasons: 1) limited team collaboration and 2) lack of visibility.



(TNS) - If you see construction crews drilling into the western slope of Rattlesnake Ridge, where 8 million tons of rock and dirt are inching down the hillside, the Yakima Valley Office of Emergency Management says not to be alarmed.

On Wednesday, the agency said a contractor will be installing new monitoring equipment into the hillside near the site of the slow-moving landslide. The purpose of informing the public early was so passersby didn’t become concerned when they saw workers on the ridge just east of Union Gap.

“The reason for this update is to provide advanced notification to the public in order to reduce the presumptions that may arise if left without explanation,” the agency said in a news release.



Most organizations rely on their network infrastructure to support business processes.

When your system goes down, it’s likely that your business does, too. Successful organizations typically have a system in place to assist in these situations called an IT alerting system. Through an IT alerting system, you can detect and mitigate network issues fast — reducing the potential for time-consuming and costly business disruptions.

Because of the number of IT alerting solutions on the market today, many organizations find themselves trying to force-fit a solution designed for emergency notification into a tool for IT incident management alerts and escalations. Unfortunately, this isn’t a good match for driving efficiency, quality and customer support. IT alerting systems require robust features to support the growing complexity of most organizations today.
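The escalation behavior at the heart of such systems can be sketched as a simple on-call chain: notify the first responder, and move up the chain whenever an alert goes unacknowledged (the names here are hypothetical, not any specific product's API):

```python
def escalate(alert, on_call_chain, acknowledged):
    """Walk the escalation chain until someone owns the alert.

    `acknowledged(person)` stands in for waiting out an ack timeout;
    it returns True if that responder confirmed the alert in time.
    """
    notified = []
    for person in on_call_chain:
        notified.append(person)      # send the notification
        if acknowledged(person):
            return person, notified  # incident owned; stop escalating
    return None, notified            # chain exhausted: trigger a major-incident process

# Example: the primary misses the page, so the alert escalates to the team lead.
owner, paged = escalate(
    "db-latency-high",
    ["primary-oncall", "team-lead", "ops-manager"],
    acknowledged=lambda person: person == "team-lead",
)
print(owner)  # team-lead
print(paged)  # ['primary-oncall', 'team-lead']
```

Production systems layer schedules, notification channels and deduplication on top, but unacknowledged-alert escalation is the core loop that keeps an outage from sitting unseen.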

Let’s break it down into the top five functions of an IT alerting system:



Thursday, 06 December 2018 15:13

What Is an IT Alerting System?

The holiday season, with its festive atmosphere, gift giving, and family visits and traditions is many people’s favorite time of year. However, it also brings a few unique hazards. (What other time of year do you have to worry about a large, illuminated tree in your living room falling over on top of people?)

In today’s post, we’ll share MHA Consulting’s holiday safety tips on how you can stay safe at this special time of year, whatever holiday you might observe.

Have you gotten into the holiday spirit yet (if that’s something that you like to do)?

I have—and all it took was my traveling from Arizona, where it’s been in the 60s, to the Upper Midwest, where it was in the 20s and snowing. (Another thing that helped was my wife saying that this year, all our gift giving will be done online.)



Having worked in and with the automotive industry for around 25 years, I have seen how the challenges that OEMs face, given their size and structures, often inhibit the business agility needed to provide lasting customer value in an age of digital disruption. The focus has always been skewed toward the product experience and product features, and toward defining greatness by “number of cars.”

Mobility as a driver for change has existed for more than 10 years, but the increased competitiveness from nontraditional players has created new challenges for OEMs and forced them to rethink their role. It has produced more service-oriented ideas such as car-sharing schemes, partnerships with ride-hailing services, and closer collaboration with urban planners.

Despite these changes, I think that the focus is still on the “number of cars.” The recent merger of Mercedes-Benz car2go and BMW DriveNow highlights the need to increase fleet size to be able to compete with nontraditional automotive players, and the main message I took away from the MQ! The Mobility Quotient 2018 Innovation Summit was that autonomous cars, smarter service offerings around cars, and better working together with urban planners would somehow manage the mobility expectations of the future. Considering that the physical format of mobility remains unchallenged — it still looks like a car — the future seems secure for the OEM.



Thursday, 06 December 2018 15:10

The Future Of Mobility Is Data, Not Cars

Compliance programs exist for the purpose of protecting against misdeeds, and the most effective programs are those that exist within a culture of ethics. Michael Volkov discusses the truism that a company’s culture and its compliance controls are mutually reinforcing.

I do not think there is much disagreement on the basic purpose of an ethics and compliance program. After all, one of the primary sources for compliance programs continues to be the United States Sentencing Guidelines, which very clearly affirm the stated purpose of a corporate compliance program.

To play devil’s advocate for a minute, let’s consider the following: the United States Sentencing Guidelines are just what they say they are – guidelines for the criminal sentencing of a corporation. They are not the be-all and end-all of corporate compliance guidance.

And where does the importance of an ethical culture fit in? Well, an ethical culture is perhaps the best control a company can implement as a way to “prevent and detect” compliance issues.

I do not intend to repeat myself (though that is precisely what I am about to do), but companies with ethical cultures have lower rates of employee misconduct, lower rates of employee turnover, increased productivity and overall improved financial performance. Hopefully, no one will dispute that point (although there may be disagreement as to how to define an “ethical culture”).



Thursday, 06 December 2018 15:09

The Purpose Of A Compliance Program

(TNS) — Although there are nearly 50 storm shelters in Morgan, Limestone and Lawrence counties, government officials say there’s a need for more and they're worth the cost.

“When you have a storm shelter making the difference between life and death, it’s a great investment for the community,” said Morgan County District 2 Commissioner Randy Vest. “You can’t put a price on a human life. I wouldn’t want to.”

Presently, Morgan County has two additional storm shelters on the drawing board. When completed, the county will have 19 certified public shelters.

The Oak Ridge community has one under construction at fire station No. 2 on Vaughn Bridge Road. At last week's Morgan County Commission meeting, commissioners approved a shelter for the Tri-County Volunteer Fire Department in the Ryan-Hulaco area. A state grant will provide the money for the Tri-County shelter, and NARCOG is handling the grant writing and paperwork.



3 Arguments for Integrating RMIS and GRC Processes

Gartner suggests that integrated risk management (IRM) is the next evolution of risk management practices. This piece from Riskonnect’s Dawn Ward explores IRM practices and what they mean specifically for GRC and enterprise risk.

As risk controls and appetites evolve, managers continue to work toward improving their risk management programs. They’re becoming more informed about governance, risk and compliance (GRC) processes and how these can be leveraged with risk management information systems (RMIS) to better identify and mitigate risks. However, some managers still experience a disconnect.

For many, this disconnect stems from a lack of understanding of the long-term benefits. Merging GRC processes with RMIS brings numerous advantages to enterprise operations, but many leadership teams fail to grasp the enterprise-wide changes these bring, opting instead to leave them as independent processes. So why should management teams look to merge GRC and RMIS processes?

Here are three reasons why you should consider integrating these risk areas to elevate your risk management operations.


