Winter Journal

Volume 31, Issue 4

Industry Hot News

These predictions were written by Eoin Carroll, Taylor Dunton, John Fokker, German Lancioni, Lee Munson, Yukihiro Okutomi, Thomas Roccia, Raj Samani, Sekhar Sarukkai, Dan Sommer, and Carl Woodward.

As 2018 draws to a close, we should perhaps be grateful that the year has not been entirely dominated by ransomware, although the rise of the GandCrab and SamSam variants show that the threat remains active. Our predictions for 2019 move away from simply providing an assessment on the rise or fall of a particular threat, and instead focus on current rumblings we see in the cybercriminal underground that we expect to grow into trends and subsequently threats in the wild.

We have witnessed greater collaboration among cybercriminals exploiting the underground market, which has allowed them to develop efficiencies in their products. Cybercriminals have been partnering in this way for years; in 2019 this market economy will only expand. The game of cat and mouse the security industry plays with ransomware developers will escalate, and the industry will need to respond more quickly and effectively than ever before.

Social media has been a part of our lives for more than a decade. Recently, nation-states have infamously used social media platforms to spread misinformation. In 2019, we expect criminals to begin leveraging those tactics for their own gain. Equally, the continued growth of the Internet of Things in the home will inspire criminals to target those devices for monetary gain.

One thing is certain: Our dependency on technology has become ubiquitous. Consider the breaches of identity platforms, with reports of 50 million users being affected. It is no longer the case that a breach is limited to that platform. Everything is connected, and you are only as strong as your weakest link. In the future, we face the question of which of our weakest links will be compromised.

—Raj Samani, Chief Scientist and McAfee Fellow, Advanced Threat Research



Monday, 10 December 2018 16:28

McAfee Labs 2019 Threats Predictions Report

The 2018 hurricane season officially ended on November 30. The National Oceanic and Atmospheric Administration’s (NOAA) count for the season was 15 named storms, including eight hurricanes, two of which were “major” (Category 3, 4, or 5).

To put that into perspective, the average hurricane season has 12 named storms, including six hurricanes, of which three are major. That makes 2018 a little worse than a “normal” year, and well within NOAA’s predictions before the start of the season on June 1.

Fortunately, these numbers are down from the especially destructive 2017 season, which included the so-called “HIM” storms (Harvey, Irma, and Maria). In 2017 there were 17 named storms, including 10 hurricanes, of which six were major.

But that is little comfort to the people affected by the two major hurricanes, Florence and Michael.



Monday, 10 December 2018 16:26


What the CCPA Signals About the Future

California is leading the way to pass meaningful legislation on data privacy and cybersecurity. The new California Consumer Privacy Act (CCPA) is a strong complement to the EU’s GDPR, although many businesses will need to comply with both regulations. This primer by CipherCloud’s Anthony James on CA AB 375 details the many new rights and entitlements for California consumers and what companies should do to comply by January 1, 2020.

California has just passed the California Consumer Privacy Act, also known as AB 375, which goes into effect on January 1, 2020. The regulation is part of a whirlwind of global legislation impacting data privacy and cybersecurity, and California is not alone in its efforts to legislate the protection of data privacy. Earlier this year, on Capitol Hill, U.S. Senator Ron Wyden (OR) introduced a discussion draft (SIL18B29) of a proposed national Consumer Data Protection Act. The draft includes very tough penalties for companies that violate your data privacy, potentially including prison time for offending CEOs.

U.S. Senators Elizabeth Warren (MA) and Mark Warner (VA) have also sponsored a bill, now in draft (S.2289), for a national Data Breach Prevention and Compensation Act. The act focuses on credit bureaus and other entities that hold consumer data, and its definitions could extend to a variety of business types, including digital marketing firms.

Outside of the United States, there is also considerable legislative activity around data privacy. Most visibly, and very much in the news, President Emmanuel Macron announced the Paris Call for Trust and Security in Cyberspace at the Paris Peace Forum. The Paris Call is intended to secure nation-state-level agreement on basic principles of cybersecurity behavior. Earlier this year, on May 25, the European Union (EU) General Data Protection Regulation (GDPR) became operational as the toughest data privacy law worldwide. The GDPR defines many difficult requirements that must be met by any business utilizing the sensitive and private data of European Community citizens.



Tax stamps are a seal of approval that producers of consumer goods have paid their dues – and that the products are the real McCoy. Excise stamps not only ensure government revenues, they also help detect the illegal and counterfeit products that abound. A new standard for the security of tax stamps has just been published to make them more effective and protect the goods to which they are applied.

Alcohol and cigarettes are the most common items on which tax is levied, as governments aim to both raise revenues and deter the consumption of health-endangering products. But the range of taxes is on the rise as many countries are introducing new ones, such as the sugar tax on soft drinks, with the same objectives in mind. For this system to work effectively, tax stamps are required to demonstrate that the duty has been paid and that the product is legitimately available in the intended market.

However, where there is tax, there are always attempts at tax avoidance, breeding criminal activity that puts illicit and counterfeit products on the market, many of which may be harmful to the health of consumers. A foolproof tax stamp, however, is an effective way of literally stamping down on the problem.



Most organizations continue to devote insufficient thought and resources to the task of assessing and managing risk in their supply chains. This leaves them vulnerable to disruptions in their supply chain and even completely unaware of the various risks that are lurking there.

In today’s post, I’ll sketch out the process your organization’s business continuity (BC) office should follow to assess and mitigate supply chain risk.

Supply chain risk management is an area where there are still significant exposures and risks in business today. This has been a really difficult area for many BC offices to get their arms around.

For those who want to get on top of this issue, today’s post will be an overview of what needs to be done.

Basically, assessing and managing supply chain risk comes down to four things:

  1. Establishing the proper governance for the process
  2. Identifying who your critical suppliers are
  3. Assessing risk at your critical suppliers
  4. Mitigating risk from your critical suppliers

We’ll talk a little about each one below.
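As a rough illustration only, steps 2 through 4 can be sketched as a simple scoring pass over a supplier list. This is a hypothetical Python sketch; the supplier names, thresholds, and criticality rule are invented for illustration, not a prescribed methodology:

```python
# Hypothetical sketch of steps 2-4: identify critical suppliers, then flag
# those whose assessed risk warrants mitigation. All data are illustrative.

SUPPLIERS = [
    # (name, annual spend in $M, single-sourced?, assessed risk 0-10)
    ("Acme Components", 12.0, True, 7),
    ("Globex Logistics", 3.5, False, 4),
    ("Initech Packaging", 0.8, False, 8),
]

def is_critical(spend_m, single_source, spend_threshold_m=5.0):
    """Step 2: treat high-spend or sole-sourced suppliers as critical."""
    return single_source or spend_m >= spend_threshold_m

def needs_mitigation(risk, risk_threshold=6):
    """Steps 3-4: assessed risk above the threshold triggers mitigation work."""
    return risk >= risk_threshold

critical = [(name, risk) for name, spend, sole, risk in SUPPLIERS
            if is_critical(spend, sole)]
to_mitigate = [name for name, risk in critical if needs_mitigation(risk)]
```

Governance (step 1) is organizational rather than computational, which is why it has no counterpart in the sketch.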



Best Practices for Merging Security and Compliance

Within many organizations today, security and compliance teams are running in isolation. This introduces significant enterprise risk, as the security team might be doing what’s best to combat advanced attackers, but their actions may not be in compliance with corporate, industry or federal guidelines. Similarly, the compliance team might be laser-focused on adhering to regulations, but their strategy might be introducing security risks. Tim Woods, VP of Technology Alliances at FireMon, outlines the challenges of operating security and compliance in silos.

Every compliance initiative – whether regulatory or internal – poses the same central question: Are you monitoring for change? While the question is a simple one, for many companies, the answer remains elusive.

Whenever there’s a data breach, compliance failure or system outage, the first thing business leaders want to know is: What changed? And, too often, the response from security and compliance teams is “nothing,” when, in fact, change is happening – they just don’t know about it. By no means are these teams attempting to mask the truth; they are simply being forthright with the limited information available to them.

Maintaining awareness of network and access changes is an important element in achieving a strong security and compliance posture, along with reliable network operations and services. But change management is a complex challenge for many companies for two reasons: 1) limited team collaboration and 2) lack of visibility.
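To make “monitoring for change” concrete, here is a minimal, hypothetical Python sketch of change detection by snapshot hashing and key diffing. The firewall-rule names and the SHA-256 fingerprint are illustrative assumptions, not a description of any vendor’s product:

```python
# Hypothetical sketch: answer "what changed?" by fingerprinting configuration
# snapshots and diffing their keys. All rule names are illustrative.
import hashlib
import json

def fingerprint(config: dict) -> str:
    """Stable hash of a configuration snapshot (detects THAT change happened)."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def diff(before: dict, after: dict):
    """Report added, removed, and modified keys (shows WHAT changed)."""
    added = sorted(after.keys() - before.keys())
    removed = sorted(before.keys() - after.keys())
    modified = sorted(k for k in before.keys() & after.keys()
                      if before[k] != after[k])
    return added, removed, modified

before = {"fw-rule-22": "deny any", "fw-rule-23": "allow 10.0.0.0/8"}
after = {"fw-rule-22": "allow any", "fw-rule-24": "deny 0.0.0.0/0"}
```

A periodic fingerprint comparison gives teams an honest answer to “did anything change?”, and the key diff gives them the detail the original answer lacked.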



(TNS) - If you see construction crews drilling into the western slope of Rattlesnake Ridge, where 8 million tons of rock and dirt are inching down the hillside, the Yakima Valley Office of Emergency Management says not to be alarmed.

On Wednesday, the agency said a contractor will be installing new monitoring equipment in the hillside near the site of the slow-moving landslide. The agency informed the public early so that passersby wouldn’t become concerned when they saw workers on the ridge just east of Union Gap.

“The reason for this update is to provide advanced notification to the public in order to reduce the presumptions that may arise if left without explanation,” the agency said in a news release.



Most organizations rely on their network infrastructure to support business processes.

When your system goes down, it’s likely that your business does, too. Successful organizations typically have a system in place to assist in these situations called an IT alerting system. Through an IT alerting system, you can detect and mitigate network issues fast — reducing the potential for time-consuming and costly business disruptions.

Because of the number of IT alerting solutions on the market today, many organizations find themselves trying to force-fit a solution designed for emergency notification into a tool for IT incident management alerts and escalations. Unfortunately, this isn’t a good match for driving efficiency, quality and customer support. IT alerting systems require robust features to support the growing complexity of most organizations today.

Let’s break it down into the top five functions of an IT alerting system:
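One of those functions, escalating unacknowledged alerts up an on-call chain, can be sketched in a few lines. The contact names and the simulated acknowledgment in this Python sketch are hypothetical, not any product's actual behavior:

```python
# Hypothetical sketch: walk an on-call chain until someone acknowledges.
from dataclasses import dataclass, field

@dataclass
class Alert:
    message: str
    escalation_chain: list          # contacts, in escalation order
    acked: bool = False
    notified: list = field(default_factory=list)

def escalate(alert: Alert, ack_from: str) -> Alert:
    """Notify each contact in turn; stop once `ack_from` acknowledges."""
    for contact in alert.escalation_chain:
        if alert.acked:
            break
        alert.notified.append(contact)   # stand-in for an SMS/call/page
        if contact == ack_from:          # simulate an acknowledgment
            alert.acked = True
    return alert

alert = escalate(
    Alert("router-7 unreachable",
          ["on-call-primary", "on-call-secondary", "duty-manager"]),
    ack_from="on-call-secondary",
)
```

Note that the duty manager is never paged: escalation stops at the first acknowledgment, which is exactly the noise reduction these systems are meant to provide.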



Thursday, 06 December 2018 15:13

What Is an IT Alerting System?

The holiday season, with its festive atmosphere, gift giving, and family visits and traditions is many people’s favorite time of year. However, it also brings a few unique hazards. (What other time of year do you have to worry about a large, illuminated tree in your living room falling over on top of people?)

In today’s post, we’ll share MHA Consulting’s holiday safety tips on how you can stay safe at this special time of year, whatever holiday you might observe.

Have you gotten into the holiday spirit yet (if that’s something that you like to do)?

I have—and all it took was my traveling from Arizona, where it’s been in the 60s, to the Upper Midwest, where it was in the 20s and snowing. (Another thing that helped was my wife saying that this year, all our gift giving will be done online.)



Having worked in and with the automotive industry for around 25 years, I have seen how the challenges that OEMs face, given their size and structures, often inhibit the business agility needed to provide lasting customer value in an age of digital disruption. The focus has always been skewed toward the product experience and product features, with greatness defined by the “number of cars.”

Mobility as a driver for change has existed for more than 10 years, but the increased competitiveness from nontraditional players has created new challenges for OEMs and forced them to rethink their role. It has produced more service-oriented ideas such as car-sharing schemes, partnerships with ride-hailing services, and closer collaboration with urban planners.

Despite these changes, I think the focus is still on the “number of cars.” The recent merger of Mercedes-Benz car2go and BMW DriveNow highlights the need to increase fleet size to compete with nontraditional automotive players. The main message I took away from the MQ! The Mobility Quotient 2018 Innovation Summit was that autonomous cars, smarter service offerings around cars, and closer collaboration with urban planners would somehow manage the mobility expectations of the future. As long as the physical format of mobility remains unchallenged — it still looks like a car — the future seems secure for the OEM.



Thursday, 06 December 2018 15:10

The Future Of Mobility Is Data, Not Cars

Compliance programs exist for the purpose of protecting against misdeeds, and the most effective programs are those that exist within a culture of ethics. Michael Volkov discusses the truism that a company’s culture and its compliance controls are mutually reinforcing.

I do not think there is much disagreement on the basic purpose of an ethics and compliance program. After all, one of the primary sources for compliance programs continues to be the United States Sentencing Guidelines, which very clearly affirm the stated purpose of a corporate compliance program.

To play devil’s advocate for a minute, let’s consider the following: the United States Sentencing Guidelines are just what they say they are, guidelines for criminal sentencing of a corporation. They are not the “be all, end all” of corporate compliance guidance.

And where does the importance of an ethical culture fit in? Well, an ethical culture is perhaps the best control a company can implement as a way to “prevent and detect” compliance issues.

I do not intend to repeat myself (though that is precisely what I am about to do), but companies with ethical cultures have lower rates of employee misconduct, lower rates of employee turnover, increased productivity and overall improved financial performance. Hopefully, no one will dispute that point (although there may be disagreement as to how to define an “ethical culture”).



Thursday, 06 December 2018 15:09

The Purpose Of A Compliance Program

(TNS) — Although there are nearly 50 storm shelters in Morgan, Limestone and Lawrence counties, government officials say there’s a need for more and they're worth the cost.

“When you have a storm shelter making the difference between life and death, it’s a great investment for the community,” said Morgan County District 2 Commissioner Randy Vest. “You can’t put a price on a human life. I wouldn’t want to.”

Presently, Morgan County has two additional storm shelters on the drawing board. When they are completed, the county will have 19 certified public shelters.

The Oak Ridge community has one under construction at fire station No. 2 on Vaughn Bridge Road. At last week's Morgan County Commission meeting, commissioners approved a shelter for the Tri-County Volunteer Fire Department in the Ryan-Hulaco area. A state grant will provide the money for the Tri-County shelter, and NARCOG is handling the grant writing and paperwork.



3 Arguments for Integrating RMIS and GRC Processes

Gartner suggests that integrated risk management (IRM) is the next evolution of risk management practices. This piece from Riskonnect’s Dawn Ward explores IRM practices and what they mean specifically for GRC and enterprise risk.

As risk controls and appetites evolve, managers continue to work toward improving their risk management programs. They’re becoming more informed about governance, risk and compliance (GRC) processes and how these can be leveraged with risk management information systems (RMIS) to better identify and mitigate risks. However, some managers still experience a disconnect.

For many, this disconnect stems from a lack of understanding the long-term benefits. Merging GRC processes with RMIS brings numerous advantages to enterprise operations, but many leadership teams fail to grasp the enterprise-wide changes they bring, opting instead to leave these as independent processes. So why should management teams look to merge GRC and RMIS processes?

Here are three reasons why you should consider integrating these risk areas to elevate your risk management operations.



(TNS) — Santa Fe Public Schools will consider entering into a memorandum of understanding with the police department that would allow for police access to on-campus surveillance cameras, but only in emergency situations.

The school board on Tuesday heard from attorney Geno Zamora who talked about the impact such an agreement would have on the Family Education Rights and Privacy Act, or FERPA, a federal law that protects the privacy of students.

Zamora said the MOU would have to define “access,” but assured the board that any MOU would be “limited to emergency situations.”

Superintendent Veronica Garcia said that up to now the school district had a “gentleman’s agreement” to allow police access in certain situations. She said she had allowed police access to surveillance video from Capital or Santa Fe high schools a handful of times since a school shooting in Parkland, Fla., last February.



As cloud storage grows more popular, cloud storage security has become an urgent topic. It's a topic that businesses realize can prove challenging. Creating a set of best practices that ensures data security presents a broad array of issues and risks.

That’s because cloud storage revolves around anywhere, anytime access to data and encompasses a broader set of users, applications and data sources. Even if a cloud isn’t breached, it’s possible for hackers to break into individual accounts on Google Drive, Dropbox, Box, Microsoft OneDrive and other cloud storage providers.

Here’s a look at what it takes to address cloud storage security and better protect cloud data:



(TNS) - For the second time in 2018, Crawford County will not qualify for state or federal emergency management assistance following a tornado event.

County Emergency Management director Brad Thomas said county officials in their damage assessment after an EF-1 tornado hit the county east of Rudy and an EF-2 hit Van Buren on Friday tallied four of the 125 points needed to meet the threshold for federal assistance. Thomas also said the total wasn't "anywhere close" to what they would need for state assistance.

A county usually qualifies for state or federal emergency management assistance from damage to roads or government buildings, Thomas said. He said such assistance in the case of tornadoes would largely come from points tallied from uninsured homes with damage.

Thomas in his damage assessment tallied two uninsured homes with damage, he said. County officials in April tallied 11 uninsured homes that were either destroyed or had major damage after a tornado that month hit the Mountainburg area.



5 Risky Mistakes Companies Make

Third-party relationships result in a majority of FCPA resolutions and investigations. Dan Wendt, member at Miller & Chevalier, discusses why third-party due diligence should be a central part of any anti-corruption program and shares insights into some of the customary ways companies fall short in terms of anti-corruption due diligence.

Agents and third parties create most bribery problems these days. It is relatively rare for companies to create legal liabilities under the Foreign Corrupt Practices Act (FCPA) or similar laws by making payments directly to government officials. Instead, a majority of corporate FCPA resolutions in recent years involve fact patterns in which the infringing companies make payments to a third party, which in turn passes on some of the payments to a government official. For this reason, the U.S. Department of Justice (DOJ) and U.S. Securities and Exchange Commission (SEC) repeatedly emphasize the importance of third-party due diligence as a critical part of an effective anti-corruption compliance program. Moreover, in conducting investigations of foreign bribery, both the DOJ and SEC initially ask for all relevant third-party due diligence files in order to assess corporate wrongdoing.

Consequently, anti-corruption third-party due diligence is critically important for companies, especially companies that operate in high-risk jurisdictions. In general, third-party due diligence may be more of an art than a science, but there are many pitfalls or shortcuts that should be avoided. This article summarizes a few common mistakes seen in third-party due diligence efforts.



Thursday, 06 December 2018 15:01

Common Pitfalls In Third-Party Due Diligence

With the possibility of a winter storm bringing chilly temperatures and more severe weather to South Carolina this week, one state agency is warning residents to prepare.

A worst-case scenario is being presented by the S.C. Emergency Management Division. It is urging people to get ready for this storm, and for more that could follow this winter, by stocking up on supplies that could be needed should snow or ice line the roads.

Two popular items typically mentioned before any storm hits were not on the SCEMD's list -- milk and bread. In fact, the agency said to skip both.

"Bread and milk shouldn't be on your list," SCEMD posted on Facebook.

One person commented on the post that "French Toast Season is upon us," a nod to the hysteria of people emptying store shelves of bread and milk, which has become a punchline before many significant weather events.

In spite of the jokes, the SCEMD has a real list of items to stockpile -- at home and work -- in case of a winter storm. It includes:



Alex Sakaguchi believes that in 2019 we will see more innovators experimenting with blockchain use cases that demonstrate many of blockchain’s data protection benefits; he also believes that the predictive capabilities artificial intelligence (AI) offers can give organizations more control over downtime. He explains below:

Going into 2019, we’re going to see the technology market continue to transform and adapt to new customer demands. In particular, IT and data companies will be collecting, analysing and providing insights about vast volumes of data – more so than ever before. Businesses will need to start thinking about the future of how they perform these tasks, and how to take advantage of new solutions that can make the jobs and lives of the people responsible for these tasks easier. New solutions can also guarantee more security and reliability, enabling better relationships with customers.

In particular, data and IT staff will tend to incorporate more blockchain technology into their workflows for data security and protection. And as the stream of business data keeps growing, leveraging technologies that enable predictive insights will give IT staff the tools to know how and when to upgrade technology to ensure there are no business disruptions.



(TNS) - The Board of Supervisors unanimously approved a $4.5-million plan Tuesday to reduce the strain on the Los Angeles County Fire Department, which responds to devastating incidents such as the Woolsey fire but also provides 24-hour emergency services across 2,300 square miles.

The supervisors’ plan seeks to educate the public and leaders in the 59 municipalities the department serves about the “new realities” in staffing, equipment and facilities — and to gather their input about their experiences during recent fires.

The new effort comes months after the board raised concerns about the department’s aging stations and equipment, and weeks after the Woolsey fire burned more than 90,000 acres in Los Angeles and Ventura counties.

The supervisors said the fire has highlighted a new reality for the department: It must respond to wildfires in an era of drought conditions and prolonged periods of dry, windy weather while also providing emergency medical services to 4 million county residents.



What’s the difference between solid state drives (SSDs) and hard disk drives (HDDs)? Even more important, when you’re adding storage, should you buy an SSD or an HDD?

The answer depends on understanding the balance of cost, performance, capacity, and reliability between these two storage technologies. In many cases the ultimate goal is to combine HDD and SSD in a manner geared to your workloads and budget.

So what’s best for your needs? Let’s dive in.

Difference between SSD and HDD

Although both SSD and HDD perform similar jobs, their underlying technology is quite different. Let’s look inside:
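The earlier point about combining HDD and SSD to match your workloads can be expressed as a simple tiering rule. A hypothetical Python sketch, with thresholds invented purely for illustration rather than as sizing guidance:

```python
# Hypothetical sketch: place hot, latency-sensitive data on SSD and cold
# bulk data on HDD. Thresholds are illustrative only.

def choose_tier(accesses_per_day: int, size_gb: float,
                hot_threshold: int = 100, bulk_threshold_gb: float = 500):
    """Return the storage tier for a dataset under a naive two-rule policy."""
    if accesses_per_day >= hot_threshold and size_gb < bulk_threshold_gb:
        return "SSD"   # performance-sensitive working set
    return "HDD"       # capacity-oriented cold or bulk storage

# A frequently hit database volume lands on SSD; a large cold archive on HDD.
assert choose_tier(accesses_per_day=5000, size_gb=80) == "SSD"
assert choose_tier(accesses_per_day=2, size_gb=4000) == "HDD"
```

Real tiering engines weigh far more signals (queue depth, read/write mix, endurance budgets), but the cost-versus-performance trade-off they automate is the same one described above.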



Wednesday, 05 December 2018 15:55


(TNS) — The Matanuska-Susitna (Mat-Su) Borough held a news conference about the magnitude 7.0 Point MacKenzie earthquake on Monday, four days after the shaking stopped.

The borough came under fire over the weekend for its lack of earthquake communications to its 100,000-plus residents, with complaints coming from some residents — including the borough’s own mayor.

Responding Sunday on Facebook to criticism about the relative lack of information, Mayor Vern Halter described “a total breakdown from the standpoint of public relations and information sharing from the Borough.”

The public relations department “was cut out of the process,” Halter said.



After a few quiet years, Thought Machine, the fintech behind a new core banking platform called Vault, is stealing the limelight, first with its announcement of a strategic partnership with Lloyds Banking Group, then with a similar partnership with the digital bank Atom. And today we’ve learned about another partnership, this time with IBM. We think this sudden activity reveals wider trends in three areas we have been following closely: fintech, banking transformation, and digital transformation services.

Fintech has been all the rage, and Thought Machine, with its ex-Google leaders, smart contracts, cloud, encryption, and continuous deployment, certainly fits the hype. But here’s the problem: Few CIOs are courageous (or mad) enough to select a startup to run their core operations. When evaluating emerging vendors — including fintech — we advise organizations to look at the business value that the solution will generate, the likelihood of failure, and the potential impact of that failure. Unfortunately for fintech startups aiming to disrupt the market of core banking systems, these are business-critical solutions. And the risk and cost of potential failure when replacing them is high. More importantly, not all banking leaders agree that the change will generate enough business value to justify that risk (we disagree).

The ones that do agree subscribe to the vision of digital business and the transformation needed to get there. Lloyds Banking Group and Atom Bank fit that bill perfectly. Lloyds has just entered the third phase of its business transformation. As we wrote in the case study detailing the Lloyds journey, the first phase focused on digital banking channels but was unable to tackle customer experience problems holistically and across channels. The solution was to embrace a new operating model. But, as at most banks, the current application landscape limited what Lloyds could do, so simplification and progressive modernization of Lloyds’ IT and data architecture are the key objectives of the third transformation phase (2018–2020). Atom Bank, meanwhile, is one of the UK’s digital challengers competing with agile players such as Monzo, Revolut, and Starling Bank; both Monzo and Starling built their banking platforms from scratch to give themselves the flexibility to experiment with business models.



(TNS) - The ground would stop shaking. Then the water in the pipes would drain away.

As Anchorage grapples with the aftermath of Friday’s powerful earthquake, a new study says Seattle would lose all water pressure within 24 hours of a catastrophic quake and would need at least two months to entirely restore water service in the city. Suburbs served by Seattle Public Utilities (SPU), which commissioned the study, would also lose service – including Bothell, Woodinville, Kirkland, Redmond and Bellevue.

With Seattle facing 15 to 20 percent odds of a severe earthquake in the next 50 years, the study says the city should spend $850 million through 2075 to mitigate water-system risks posed by the “Big One,” playing catch-up to California cities that already have taken dramatic steps.

“For this very catastrophic earthquake, we’re looking at very significant impacts,” said Alex Chen, SPU’s water-planning director. The utility provides drinking water to 1.4 million people.



Advice on selecting the best cloud storage for consumers isn't applicable if you're a larger SMB with sensitive information or an enterprise. Why not? Because cloud storage is complicated, with many considerations to weigh in choosing the best option for your business.

Sure, some consumers even consider free cloud storage. That's clearly not an option for a business with a demanding workload for its storage platform.

Businesses must consider factors such as:

  • Compliance
  • Security
  • Management
  • Cost
  • Performance
  • Scalability
  • SLAs

Moreover, the interplay between each one of these factors plays a part in business cloud storage decisions. Let’s look at the top ten considerations for making the right choice for your growing online storage presence.
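One way to reason about that interplay is a weighted scorecard across the factors listed above. The weights, per-factor scores, and provider labels in this Python sketch are entirely hypothetical:

```python
# Hypothetical sketch: compare storage options with a weighted scorecard.
# Weights and scores (1-5) are illustrative only.

WEIGHTS = {"compliance": 3, "security": 3, "cost": 2, "performance": 2}

def total_score(scores: dict) -> int:
    """Weighted sum over the factors we chose to weight."""
    return sum(weight * scores.get(factor, 0)
               for factor, weight in WEIGHTS.items())

provider_a = {"compliance": 5, "security": 4, "cost": 2, "performance": 3}
provider_b = {"compliance": 3, "security": 3, "cost": 5, "performance": 4}
```

With this compliance-heavy weighting, provider_a narrowly wins; shifting weight toward cost would flip the ranking, which is exactly the interplay between factors the article describes.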



Do cities need a “smart city platform”? It depends. Clients have been asking Forrester about our thoughts on new IoT-enabled smart city platforms launched by vendors focused on transforming city government infrastructure and applications. To answer that question, we set out to understand when and where cities might need one, and how to decide which platform is right for them. Take a look at our new report, “Smart City Platforms Enable The Insights-Driven City.”

The bottom line: New platforms can help cities become “smart” — capturing and leveraging data to enable better decisions — but there isn’t a single path.

Cities Strive To Become Insights-Driven

We’ve heard about smart cities for years. Cities have experimented with different initiatives on their journey to becoming “smart.” But city leaders now understand that the key to becoming smart is having access to information, and using that information to improve citizen services and city operations. Nearly half (47%) of local government stakeholders recognized the importance of using data and analytics insights to drive business decision making in 2017, up from 20% in 2015. Does that translate into the need to coordinate data capture and establish the common infrastructure for the city that these new platforms provide?



Tuesday, 04 December 2018 16:05

Do Cities Need A “Smart City Platform”?

(TNS) — Dr. Juliana Barrett, an ecologist focused on the state's coastal habitats, hopes students in her University of Connecticut Climate Corps class learn at least two basic lessons: Before buying a property, find out if it's in a flood zone; and don't get Lyme disease.

"If they go away from the course and remember those, I've done my job," said Barrett, an associate extension educator with Connecticut Sea Grant at UConn's Avery Point campus in Groton.

Barrett's lessons are fitting for the Northeast region, which faces seasonal shifts, extreme weather, sea level rise, erosion, increased flooding and higher risks of diseases — including West Nile virus and Lyme disease — carried by fleas, ticks and mosquitoes, according to a stark U.S. climate report released on Nov. 23.



From the California wildfires and Hurricanes Florence and Michael to rampant data exposures, the Russian VPNFilter, and other major cyberattacks, 2018 has forced many organizations to put their business continuity systems and disaster recovery plans to the test.

As 2018 comes to a close, here’s a look at what experts believe 2019 likely has in store in terms of the threats to – and the evolution of – business continuity and disaster recovery.



You probably remember the “Sand Palace,” the lone house standing after Hurricane Michael made landfall in the Florida panhandle in October.

It’s a powerful story about one man’s stand against nature’s destructive power. But the Sand Palace is also a story about insurance.

There are generally two aspects of insurance. One is to pay out claims to make people whole again after a loss. Another is to incentivize behavior that makes those losses less likely to happen. In insurance-speak, we call that “mitigation.”

Consider the Sand Palace in that context. According to an AIR Worldwide analysis, the house was built to be even more resilient than Florida’s already-stringent building codes: reinforced concrete, limited windows, minimal space below the roof to prevent uplift, a first floor 15 feet above ground, and more.



Jonathan Meltzer examines four different options for ensuring application-level continuity through high availability and disaster recovery provisions in a hybrid or exclusively Azure cloud environment.

Cloud failures - both major and minor - are inevitable. What is not inevitable is extended periods of downtime or unacceptable data loss caused by any resulting service outages.

One particularly devastating outage occurred in the South Central US Region of Microsoft’s Azure cloud, which experienced a catastrophic failure on 4th September 2018. A severe thunderstorm triggered a series of problems that ultimately brought down an entire data center. Many customers were offline for a full day, and some for over two days. Microsoft has since addressed the problems that led to the outage, but the incident will not soon be forgotten by IT professionals tasked with ensuring application-level continuity.

The untold stories from that memorable day involve the many customers that were back up and running within minutes of the outage. Indeed, there are ways all Azure customers can prepare to survive virtually any outage with very little downtime, and minimal or no data loss, even when catastrophe strikes.

This article examines four different options for ensuring application-level continuity in a hybrid or exclusively Azure cloud environment. Two of the options are general-purpose, and two are unique to Microsoft’s SQL Server database, a popular application in the Azure cloud.



Global Developments in Data Privacy

Cyber crime and increasingly imminent cyber threats have left businesses and citizens concerned about digital privacy, and some governments are responding with legislation. A.T. Kearney’s Paul Laudicina discusses the legal protections various countries are enacting and what the future looks like in terms of data protection regulation.

In an era of data breaches, hacks and digital identity theft, everyone is understandably increasingly worried about protecting their digital lives. Governments are starting to respond with new data protection regulations. The European Union’s General Data Protection Regulation (GDPR), which came into effect in May, is the most significant regulatory marker so far. And nearly 70 percent of global executives anticipate other countries will draw inspiration from the GDPR to create their own data privacy regulations, according to A.T. Kearney’s recent Views from the C-Suite survey. Businesses must prepare to adapt.

The effects of GDPR are wide ranging. The law requires user consent in data collection, data privacy protections, data breach notifications and safe handling of cross-border data transfers. This effectively shifts data collection standards from a passive “opt-out” to an active “opt-in” approach for users. Businesses across the world that collect or have EU citizens’ data — regardless of where they are domiciled — have raced to meet GDPR requirements to avoid steep fines.

GDPR is already putting pressure on other countries to change their approach to privacy protection. As the EU continues to expand and diversify its trade ties with other major economies, more countries will be incentivized to harmonize their regulations with key provisions of the GDPR. The Comprehensive Economic and Trade Agreement with Canada (CETA), for example, includes chapters on the protection of personal information and cross-border data flows. And as a complement to the EU-Japan Economic Partnership Agreement, both parties have agreed to recognize each other’s data protection systems. This includes a commitment from Japan to take further steps to protect the personal data of EU citizens.



Another Friday, Another Breach Announcement

Today, Marriott announced that it uncovered four-plus years of a previously unknown, unexpected, and unauthorized data breach that includes travel details, passport numbers, and credit card data. Five hundred million customers found out this morning when Marriott announced a multiyear breach dating back to 2014. Longstanding defects in Starwood’s database and network security allowed attackers to capture names, addresses, dates of birth, passport numbers, communication preferences, arrival and departure information, and more.  In short, an exceedingly valuable trove of data for the attackers. Marriott has followed the by-now familiar data breach announcement playbook: it apologized, promised customers it cares about security, provided a website and dial-in number, and also offered credit monitoring.

Cybersecurity M&A Due Diligence Rears Its Ugly Head

We can’t know the internal details of the acquisition due diligence process, but a thorough cybersecurity due diligence effort should have flagged the database and network security weaknesses that the attackers – by then resident in Starwood’s network for two years — exploited.  The strategic nature of M&A activity means cybersecurity issues might not stop an acquisition, but they certainly can lower the price and create arbitrage and risk-transference opportunities, as seen with Verizon and Yahoo.  Lesson for CEOs and CISOs going through M&A: Don’t skimp on cybersecurity due diligence.

The Surveillance And Data Economy Problem

When companies collect massive amounts of data in the name of customer experience, they also accept the obligation and responsibility of protecting that data. In this case, Starwood — and Marriott, by acquisition — failed in that responsibility. Consider the following:



Unfortunately, external audits of business continuity management (BCM) programs have a bad habit of going off the rails. In today’s post, I’ll lay out the kinds of things that commonly go wrong in these BCM audits—and share some tips for how you can increase the odds that if and when your program is audited, the process will be reasonable, rational, and productive.


Getting audited is a fact of life for many business continuity programs, especially those in the financial industry, banking, healthcare, and other highly regulated areas.

Such audits are generally valuable and important. They make sure BCM programs are in compliance with relevant standards and guidelines, protecting the organizations, their employees, their clients and customers, and the industries of which they are a part.

Most auditors are knowledgeable, well-meaning people who approach their work in a spirit of partnership and moderation. Their goal is not to punish BCM programs but to help them get better. I have worked with many such auditors in the 20 years I’ve been a BCM consultant.



The United States has designated 16 critical infrastructure sectors – communications, financial services, energy, emergency services, information technology, and more – whose services are crucial to the success and prosperity of the country. Sometimes, however, these services are taken for granted.

Critical Infrastructure Security and Resilience Month, observed in November, serves as a reminder of the importance of maintaining the security and availability of the systems and organizations that sustain our way of life.

Critical Infrastructure Security and Resilience Month


From a resilience standpoint, critical industry sectors tend to do extremely well. Whether because regulations require them to test their business resilience or because testing is simply sound business practice, they are generally prepared for challenges that may arise.



Some view winter weather as a welcome excused absence from work or school. Others must still find their way into the office. What they don’t want to encounter on the way are slick sidewalks, power outages, or – worst of all – inching through icy gridlock only to learn, after they’ve battled the weather, that the office is in fact closed. “Sorry” simply won’t suffice.

Reduce your risk for injuries and dissatisfied employees by doing your part to protect and inform them on bad weather days. You may not be able to stop the snow, rain, and wind, but you can ensure every employee has a safe way to an office that is in working condition.



Friday, 30 November 2018 15:04


(TNS) - As rain began to fall in Southern California from the latest storm, towns and counties prepared for the imminent problems it could cause, especially in areas previously affected by wildfires.

Amid worries of mudslides and debris flow, parts of Riverside County where the Holy fire had burned in August were placed under mandatory evacuation Wednesday afternoon. The rain also prompted Orange County officials to issue voluntary evacuations for communities that may receive mud and debris flowing from Holy fire burn areas.

Though no mandatory evacuations were put in place in Los Angeles County, officials in the Los Angeles area also remained cautious as the ingredients for potentially dangerous conditions were in place.



The holiday season brings a festive atmosphere to many workplaces, but the relaxed attitude can create unique challenges in terms of facility security. In today’s post I’ll share some tips to help you keep your workplace safe—during the holidays and all year long.



The other day I went walking with my toddler grandson, and I was impressed by the way he waved to every person (and bird) we saw on the street. His openness and friendliness were fun to see, but what kept his trust of strangers from being potentially dangerous was that I was there keeping an eye on him.


During the holidays, a similar spirit of trust and relaxation can come into the workplace. At the same time, the holidays can be especially busy in terms of delivery people coming and going with flowers and gifts, and office parties bringing in many people we might not know.

Unfortunately, not all of the outsiders who might try to gain access to your facility at this time of year are as innocent as my grandson. Some might take advantage of the opportunity to take things, snoop around, or worse.

As you can see, the holidays can pose special challenges when it comes to facility security.



(TNS) — The 2018 Central Pacific hurricane season will end Friday much like it started — with a couple of months of smooth sailing and no tropical cyclones on the horizon.

It was the middle part that got rather dicey. The Central Pacific experienced six powerful tropical cyclones over a two-month period from August to the beginning of October, making it one of the most active seasons on record for the ocean basin between 140 and 180 degrees west.

During a five-week stretch, the islands seemed to be under constant threat from hurricanes rolling in from the Eastern Pacific.

One of them was a Category 5 monster known as Hurricane Lane, the scariest hurricane facing Honolulu in decades. But after dumping more than 4 feet of rain on the east side of Hawaii island, the storm slowed, weakened and largely bypassed Oahu.



(TNS) - A man who had a heart attack minutes after saying what he thought were his last goodbyes to his children following Hawaii’s infamous ballistic missile alert on Jan. 13 is suing the state for an undisclosed amount.

James Sean Shields and his girlfriend, Brenda Reichel, filed a lawsuit in First Circuit Court against the state and Vern Miyagi, former administrator of the Hawaii Emergency Management Agency, or HI-EMA, for the false alarm they claim triggered the heart attack.

The mistaken alert, which caused statewide panic, occurred when HI-EMA sent a text alert to most cellphones in the state warning of an incoming ballistic missile attack and advising, “This is not a drill.” HI-EMA did not send out an official retraction of the false alarm for 38 minutes.

The couple received the message on their cellphones and were “extremely frightened and thought they were shortly going to die,” according to the lawsuit filed Tuesday by Honolulu attorney Sam King.



(TNS) - Pennsylvania needs to provide funding and incentives, and take other actions to fight a "public safety crisis" resulting from a dramatic decline in volunteer firefighters during the last 40 years in the state – and nationally – according to a report released Wednesday.

About 300,000 people volunteered as firefighters in Pennsylvania in the 1970s, and that number has dwindled to fewer than 38,000, said the commission of Pennsylvania lawmakers, municipal officials and emergency service professionals that produced the report.

More than 90 percent of the state's roughly 2,400 fire companies are volunteer.

Emergency service groups estimate volunteer first responders save Pennsylvania communities about $10 billion per year. Volunteer agencies throughout Pennsylvania have turned to hiring full-time or part-time staff to fill the volunteer gap.



We are entering the decade of data. Governments collect enormous amounts of data daily from a wide variety of sources. Much of that data is pigeon-holed into the stovepipes of the programs that asked for it and use it for, we hope, a utilitarian purpose. However, this data could be put to better use by making it available to a wider spectrum of agencies and the public.

Shaun Bierweiler is president at Hortonworks Federal, and vice president of U.S. Public Sector at Hortonworks. He has more than 15 years of experience helping the public sector navigate the intersection of technology and business. He has spent most of the last decade helping agencies leverage big data and enterprise open source solutions to accomplish critical missions. He is a graduate of the University of Florida, and the University of Maryland's Robert H. Smith School of Business. 

Bierweiler responded to a series of questions about how data could be put to better use by opening up the availability of the data. 



7 Steps to Minimize Risk

Today’s boards are increasingly aware of the significance and scrutiny given to their oversight obligations in the #MeToo era. Depending on the soundness and swiftness of their actions, an organization can be seen as capable and intent on “doing the right thing,” or it can appear to be slow to act or even unresponsive when faced with significant allegations. Having good outside resources who can quickly and independently investigate sexual harassment issues is foundational to a good response, but – while being prepared is important – the board needs to dig deeper to ask the question, “What are we doing to prevent the issue in the first place?”

In today’s environment, sexual harassment claims are becoming more visible and present at all levels. Professional and personal behavior which may have been previously tolerated or ignored is now under a spotlight with new rules and expectations in the workplace. These new developments have caught many, including board members, off guard and have put organizations in a defensive posture.

However, boards can do a lot to help prepare and protect their organizations. This starts with communicating their expectations to both senior management and employees. Of course, board guidance is only as good as the demonstrated behavior of its board members. If board members struggle with professional compliance and integrity issues, it’s hard to be convincing, relative to accountability.



(TNS) — A team of officials with the Kentucky Center for School Safety visited Country Heights Elementary on Kentucky 54 on Tuesday, while a second team visited Sorgho Elementary School.

The teams were there by invitation to look for any potential security issues in the way the schools do business.

"I've been to Daviess County a lot," said Ed McCaw, a former school principal in Lexington and a member of the team that visited Country Heights Elementary. The group comes to the county district each year to review two schools, he said.

"Daviess County has been very interested in our services," McCaw said, and the organization also conducts free security audits for schools in the Owensboro district.



What does a security breach or malicious hacker attack cost? For organizations that lack a fully resilient infrastructure, hidden costs can include operational interruptions, loss of customer trust, lawsuits and compliance regulation fines.

Consider the costs an organization can incur from ransomware.

In March 2018, Atlanta’s city government was hit with a ransomware attack, in which criminals demanded roughly $51,000 in bitcoin to restore the city’s systems. Atlanta didn’t pay. 


Consequently, according to Engadget, more than one-third of the city’s necessary programs went offline or were disabled in part. Worse, Atlanta’s city attorney office lost six of its 77 computers and 10 years of documents. The Atlanta police department lost its dash cam recordings. Initially, the cost of recovering from the attack was an estimated $2 million—but that soon increased by another $9.5 million. 

Here are some examples of the hidden costs a security incident may bring, with tips on how to avoid them through business resilience best practices. 



Wednesday, 28 November 2018 16:42

The hidden costs of security breaches

Emergency communication tools and response protocols are critical to public safety, disaster recovery, and business continuity whenever crises arise.

While public and private entities can have different needs whenever emergencies occur, understanding how they respond, how they leverage technology, and the challenges they face during these times is integral to identifying:

  • The current trends shaping critical response actions and plans
  • The opportunities for refining response protocols, crisis communications, and emergency plans
  • How your organization compares with industry peers.

To this end, OnSolve recently conducted the biannual Crisis Communications & Emergency Notification Survey. Overseen by research firm Disaster Resource Guide (DRG), this survey included nearly 500 emergency decision makers in the U.S. public and private sectors.



How often are you faced with making a trade-off decision where you need to make a sacrifice in one area to gain an advantage in another? A classic example in computer science is the space/time trade-off where a software program runs faster with more memory. Other examples include cost versus quality, spending versus investing, and even relaxing versus exercising.
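The space/time trade-off mentioned above can be made concrete with a minimal Python sketch (illustrative only, not from any of the articles cited here): memoizing a recursive function spends extra memory on a cache of prior results to buy a dramatic reduction in running time.

```python
from functools import lru_cache

def fib_slow(n):
    """Naive recursion: almost no extra memory, but exponential time."""
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Memoized: trades memory (one cache entry per n) for linear time."""
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

# Both compute the same values, e.g. fib_slow(10) == fib_fast(10) == 55,
# but fib_fast(500) returns instantly while fib_slow(500) would
# effectively never finish.
```

Neither version is “better” in the abstract; which one you want depends on whether memory or time is the scarcer resource — exactly the kind of sacrifice-in-one-area-for-gain-in-another decision described here.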

The phrase “the best of both worlds” refers to a win-win situation, the best of all possible worlds in which you can enjoy the benefits of two different opportunities at once without having to make that trade-off. That was what made the Apple iPhone so popular – the innovation was in providing a very powerful device that was still easy to use. Apple overcame the trade-off of functionality versus ease of use, resulting in a truly best-of-both-worlds product: one so intuitive that your grandmother could use it, without the design sacrificing functionality or capability to achieve that simplicity.

For organizations who want the best of both worlds – the ability to reduce data protection costs, achieve secure and reliable backups, and gain rapid and simple system restores – Sungard AS and Veritas deliver fully managed services across AWS environments for business resilience.

The challenge with cloud-based backup has always been the trade-off of quick access to backed up data in your data center, versus the flexibility of storing data off-site in the public cloud. For Sungard Availability Services (Sungard AS), we believe that “the best of both worlds” aptly describes our new Managed Backup – AWS solution, which provides customers with a fully managed backup service that brings together the scale, economics and power of storing data in the public cloud with the ability to restore quickly from a local appliance in a cost-effective offering.



(TNS) — As every day goes by, searchers are finding fewer human remains in the rubble of California’s deadliest wildfire.

At the same time, they are slowly narrowing the list of the missing, which on Monday stood at 203 – down by about 1,000 from last week.

But amid this work, authorities are coming to terms with the possibility that the search for victims of the Camp fire might never be complete and that some human remains won’t ever be recovered.

“Is it possible that there could be a circumstance where someone was completely consumed by fire and therefore we wouldn’t have something that we could collect? I would say it is within the realm of possibility, unfortunately,” Butte County Sheriff Kory Honea said Monday.



(TNS) — When the Woolsey fire swept into the exclusive neighborhood of Bell Canyon, resident Yen Hsieh grabbed her late music teacher’s 200-year-old cello, some belongings and her son’s betta fish Sparky and fled, not sure whether her home would be destroyed.

More than 30 homes in the gated Ventura County community were lost, but Hsieh’s survived. Bell Canyon was protected by both county firefighters and a private crew covered by her homeowner’s insurance policy.

Hsieh said it’s not clear which of the firefighters — private or publicly funded — were responsible for saving her home, but she’s grateful.



We’re putting the finishing touches on a major research project on the assignment of benefits problem in Florida, a phenomenon in which a quirk in that state’s laws becomes a lever with which the less-than-scrupulous can supersize a claim settlement.

Our paper looks at how the problem has spread across lines of business – from no-fault insurance to homeowners to auto physical damage claims – and across the state – what started in South Florida has metastasized into the Interstate 4 corridor. Even far west on the Panhandle, Escambia County (Pensacola) has had 346 assignment of benefits lawsuits this year through November 9. Five years ago it had 20.

Our research focused on the growth from one line of business to another and the spread of the problem over time. Artemis.bm has an interesting take on the knock-on effect from the way the problem is rolling through Hurricane Irma claims. Artemis is a website that is expert in alternative sources of insurance capital like catastrophe bonds, collateralized reinsurance and industry loss warranties.



Wednesday, 28 November 2018 16:35


We habitually use metrics, even when they are not relevant. Our instincts are based on deeply ingrained responses to situations and inherent investment biases (time and cost). This annoys many a modern business, as these metrics have nothing to do with the quality experienced by the customer or end user in real life, giving them the impression that our IT teams are not relevant. The problem is further exacerbated within sourcing & vendor management (SVM) realms as:

  • SVM professionals often over-engineer metrics because they don’t trust their suppliers.
  • Measurements are not aligned with strategic objectives, distancing SVM teams further from the business and their customers.

Allow us to elaborate.



On 6th November 2018 the Business Continuity Institute issued a  new survey-based report entitled ‘The Continuity & Resilience Report: Raising the impact of Business Continuity’. The report has received significant criticism from business continuity consultant and author David Lindstedt. In this article, Continuity Central’s editor, David Honour, examines David Lindstedt’s reaction to the report and offers an assessment of some of the points made.


The BCI’s Continuity & Resilience Report: Raising the impact of Business Continuity was launched at BCI World and is the result of 853 responses to a survey. 43 percent of respondents were based in Europe, 22 percent in North America, and 10 percent were based in Australasia.

The BCI summarises the Continuity and Resilience Report as follows:

“This study aims at highlighting the role of business continuity and its relationship with other management functions within the organization, such as information security, risk management or physical security. The findings address a range of issues to measure the impact of business continuity, such as its levels of investment, top management buy-in and its role during a crisis.

“It emerges that business continuity plays a central role across different scenarios, such as adverse weather, cyber attacks or the outbreak of a new pandemic. Furthermore, organizations tend to increasingly appreciate its value over time, as they can see return on investment.”

The report states that the results of the study show that employing business continuity over time supports:

  • The reduction of the cost of the response;
  • The improvement of employee morale;
  • Customer retention.

It also claims that the top three benefits of business continuity are:

  • Faster recovery: claimed by 87 percent of respondents;
  • Safety and accountability of staff: 80 percent of respondents;
  • A reduction of the costs of disruptions: 77 percent of respondents.

In his article "The BCI Report: Echo Chambers, Disturbing Graphics, and Status Quo" written in response to the report, David Lindstedt initially focusses on the latter finding concerning the top three benefits of business continuity, stating:

"The claim is based on a single opinion question asked of BC, ERM, and other preparedness practitioners. It was a simple survey question where preparedness planners were asked to select any number of perceived benefits. This is not a finding. This is an echo chamber, filtering on a narrow subset of people to reemphasize specific beliefs in support of preexisting positions. It may be of passing interest to learn what preparedness planners believe are the most beneficial aspects of their work, but this is not evidence upon which to invest resources, develop public policy, define regulatory requirements, or justify the costs of a BC program. Just because preparedness practitioners believe their efforts provide specific benefits does not make it so."



Planning a journey of any type can be an exhilarating experience. For some people, the destination is more interesting than the journey; they focus on the goal or the milestones of the expedition to set their direction. But to others, the journey is what it’s all about. They enjoy the “life” that happens along the way, the story that unfolds as they make their way toward the destination. Tennis great Arthur Ashe put it like this: “Success is a journey, not a destination. The doing is often more important than the outcome.”

For businesses, planning the journey to cloud transformation with Amazon Web Services (AWS) can be just as exciting. Sungard Availability Services (Sungard AS) views this type of journey as a chance to use a roadmap called the AWS Cloud Transformation Maturity Model, or CTMM, to map the maturity of an IT organization’s process, people, and technology capabilities as they move through the four stages of the journey to AWS Cloud: project, foundation, migration, and optimization.



As California grapples with increasingly deadly wildfires with seemingly few real solutions, one small but effective way of saving communities is getting more attention and traction: deploying a network of infrared cameras on mountaintops and other high hazard areas.

The AlertWildfire network, consisting of some 80 cameras dispersed among California forests, has already proven its worth on several occasions. As recently as last week in San Diego County, cameras caught two fire start-ups and allowed fire personnel to put them out with the appropriate amount of manpower.

Last December, cameras helped keep the Lilac Fire under 5,000 acres by allowing fire personnel to make key decisions early, before the fire got out of hand.

“The likelihood of pinning down that fire without a massive response was low,” said Graham Kent, director of the Nevada Seismological Lab in Reno, Nev., and a chief architect of the AlertWildfire camera network. “In that case, it could have been the difference between a billion dollars or not.”



Seasonal shopping days, such as Black Friday and Cyber Monday, while great for consumers, can be manically stressful for IT teams, who are tasked with keeping IT systems running smoothly and securely despite sometimes unprecedented surges in traffic. Any downtime on Black Friday could cost a retail business huge amounts, and lead to disgruntled customers.

To help IT teams prepare for seasonal shopping days, six IT experts give their tips for maintaining availability:

Gary Watson, CTO of StorCentric and Founder of Nexsan, highlights the importance of IT resilience. He said: “Seasonal shopping days such as Black Friday test the peak capacity of IT resources, often to breaking point. With many retailers expecting to see a rise in profits – and more customers coming through the door, either physically or digitally – everything right down to the fulfilment is tested. With this in mind, it’s important to ensure any IT environment can meet the retailer’s needs and isn’t being stretched beyond its limits. This also means testing the elasticity and capacity demands and ensuring there are adequate cost control measures in place when it comes to scalability.”

Echoing his sentiments, Jon Lucas, director, Hyve Managed Hosting said: “With a massive £1.4bn spent in online sales in the UK last Black Friday, the stage is definitely set for the biggest shopping date of the year. Floods of consumers wait in anticipation to get the best deals, and in turn bombard websites with traffic. The question is, is your hosting ready? All the discounted pricing, online advertising, and most importantly your merchandise, will be a waste if your website fails to cope with spikes and influxes. After all, over a third of UK shoppers say a non-functioning website immediately damages their opinion of a brand, causing perceptions of an ‘unprofessional’ and ‘poorly managed’ business. Make sure you have a scalable solution in place that can run tests on your website beforehand, adjust resources as and when needed and easily manage bursts of activity to provide a reliable, secure platform when you need it most.”



(TNS) — Last year, as the Tubbs Fire scorched its way across Napa and Sonoma counties, environmental researchers at the University of California, Davis, fielded questions about the health impact of chronic exposure to smoke from a wildfire that torched trees and urban structures alike.

UCD’s Kent Pinkerton and Rebecca Schmidt and other researchers had the same questions. They sought studies on urban wildfires and found no answers.

“Everyone had concerns about their health and what was in the smoke,” said Schmidt, who studies how environmental exposures influence child development. “We just didn’t have any answers to those questions, and when we looked and searched to see what was out there, we really found there wasn’t much.”

Yet studies on air pollution do offer up clues to what happens when the human body is under assault from microscopic particles in the air, Pinkerton said. The UCD professor has spent decades studying the effects of air pollution on lung inflammation and disease.



The Importance of Knowing the Difference

With the weekly reports of cyber events and data breaches, cybersecurity is a trending topic. While the concern is warranted, there is some confusion between information privacy and information security. Just because information is private does not necessarily mean it is secure. Executives, compliance professionals and IT departments need to understand the difference and take necessary measures to secure private data in the cyber age.

With the recent rash of data breaches and cyber incidents, companies and individuals alike are understandably concerned about cybersecurity. In a world where consumers are more aware of personal information being collected for financial gain, yet security of this information is almost an afterthought, a data breach can ruin a company’s reputation and financial reports.

The weekly reports of data breaches have resulted in privacy and cybersecurity being on everyone’s lips, yet people do not realize that there is a difference between the two concepts. While the two may overlap, it is important for executives and compliance departments to understand the difference and how the law applies to each.



Wednesday, 21 November 2018 15:05

“Privacy” Doesn’t Equal “Security”

IBM’s recent announcement that it is acquiring open source cloud software business Red Hat inspired Cutter Consortium Senior Consultant Balaji Prasad to think about the notion of hybrid in a broad sense, and also with respect to hybrid clouds.

So, what is hybrid? Implicit in the notion, explains Prasad, is plurality — there is more than one thing in play. These things, while independent, must somehow tie together with a similar purpose. Otherwise, they’d just be distinct yet complementary components.

A hybrid cloud — a public and private cloud duo — combines two different implementations of a similar capability. Writes Prasad in a recent Business & Enterprise Architecture Advisor:

[Public and private clouds] “have similarities in that they both bring similar abilities to abstract away physical infrastructure and operational concerns, while being different in the variety of services, internal integration, and control that can be exercised by the enterprises that choose one or the other. Or, both. That last part — almost an aside — is particularly important.”



Wednesday, 21 November 2018 15:04

Pondering Hybrid

The Great Southern California Shakeout in October was a great opportunity to remind millions of the constant threat of earthquakes in the region. For the Santa Ana Unified School District, it was also a way to practice interoperability and test a new communication system.

The use of the Maxxess Ambit private, two-way messaging and intel system made the annual event more realistic than in years past because of the large amount of data funneled to the district’s EOC, where it had to be analyzed in real time.

Ambit is described as a human sensor interface that delivers an overall picture and view of a scenario via remote communications that improves situational awareness. It can be used several different ways. It consists of the mobile app used in the field and the dashboard, used in a command center or EOC, on which users receive all the data from the mobile apps and can see the users’ locations.

The dashboard has a running list of every message that is sent by a user on a mobile app and a map with pins showing the location of all the users in the field.



Raising Awareness Through Compliance Training

When discussing the problem of bullying, we often focus on the victims and overlook the bullies themselves and their fate. As bullies join the workforce, they continue to find targets, and the severity of their bullying behavior escalates. To address the 60.3 million U.S. workers bullying affects, companies must inform and educate their employees through an effective compliance strategy.

Having just come out of National Bullying Prevention Month in October, it’s important to evaluate the progress our society has made in preventing acts of bullying, as well as to consider what needs to be done moving forward. While it’s great to have one month dedicated to bringing attention to this issue, it needs to be top of mind 365 days a year. This is especially true considering that more than one in five students report being bullied.

Bullying has been discussed widely in our culture – through TV shows, movies and pop culture – and it now spans much further than the elementary lunchroom. One problem: we often overlook the bullies themselves and their fate beyond school.

Many don’t think to ask the question, “What becomes of the bullies?” After graduation, do they recognize their poor behavior and transform into model citizens and employees? Unfortunately, research concludes that this is not the case. As bullies join the workforce, they continue to find targets, and the severity of their bullying behavior escalates. The Workplace Bullying Institute (WBI), which defines workplace bullying as repeated, harmful abusive conduct that is threatening, intimidating or humiliating, or that involves work sabotage or verbal abuse, estimates that bullying affects 60.3 million U.S. workers.



Wednesday, 21 November 2018 15:01

How To Address Workplace Bullying

Research conducted for Accenture has found that two-thirds of UK workers have experienced mental health challenges, with reduced productivity being one of the impacts. Given that business resilience is concerned with maintaining productivity whatever the cause of reduced output, helping employees with their mental health challenges is not only a moral imperative but is also a benefit to the organization.

The survey of more than 2,000 workers found that mental health issues are far more prevalent than the one in four figure that is often cited. For three out of four people (76 percent), mental health challenges — either their own or those of others — had affected their ability to enjoy life, with 30 percent reporting they are ‘occasionally, rarely, or never’ able to enjoy and take part fully in everyday life.

The findings come as the taboo that has long surrounded mental health starts to break down, as 82 percent of respondents said they are more willing to speak openly about mental health issues now than they were just a few years ago.



(TNS) - Leigh Bailey, 54, was awakened not by her phone, warning her about an incoming fire that would soon destroy her town, but by a neighbor pounding on her door.

Bailey had no idea how bad the fire was about to become. So she went back inside around 9:15 a.m., had a cup of tea and some coffee cake, and slowly packed some clothes and loaded her dog and cat into the car before heading out of her home in Magalia, just north of Paradise.

She escaped — but barely, on a narrow dirt road she stumbled upon while driving through thick smoke after her GPS failed.

“We had absolutely no evacuation orders,” Bailey said. “No call, no emergency text, nothing — and neither did anyone I know.”

This has been a recurring problem.



3 Steps to Bolster Privacy

The California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR) may signal a coming global standard for data protection. Why? Business. The pressure is ever increasing to protect data, meaning we are likely to see an uptick in individual state data protection laws here in the U.S. and more outside the U.S. and the EU. Here are three practical steps to take toward ensuring stronger data privacy for customers.

No doubt, since May you’ve experienced an influx of emails from every company you’ve ever done business with, letting you know about their updated privacy policies. This was due to GDPR going into effect. These emails varied in their compliance with the new regulation; some did it right by asking for explicit consent for their new policies. Many others just sent emails saying they assumed your implied consent, meaning that if you don’t unsubscribe, they assume you consent. Yet others didn’t even bother to send that kind of noncompliant email.



Effective risk management requires reliable, proven emergency notification systems and protocols.

In fact, without the right crisis notification procedures and tools in place:

  • Risks can snowball and become increasingly difficult to manage.
  • Emergency situations can get critical and complicated quickly, potentially putting more people in harm’s way.

Revealing more about the relationship between emergency notification and risk management, the following highlights the top 3 ways these elements work together to reduce the potential for harm and keep employees safe whenever crises may arise.



Supply chain cartoon

It’s in your company’s best interest not to overlook disaster recovery (DR). If you’re hit with a cyberattack, natural disaster, power outage or any other sort of unplanned disturbance that could potentially threaten your business – you’ll be happy you had a DR plan in place.

It’s important to remember that your business is made up of a lot of moving parts, some of which may reside outside your building and under the control of others. And just because you have the foresight to prepare for the worst doesn’t mean the companies in your supply chain will also take the same precautions.

Verify that all participants within your supply chain have DR and business continuity plans in place, and that these plans are routinely tested and communicated to employees to ensure they can hold up their end of the supply chain in the event of a disaster. If you don’t, the wheels might just fall off your DR plan.


(TNS) - They were as young as 8. As old as 101. At its height Sunday, the list stretched on for 26 pages, offering a staggering 1,202 names.

Most were linked to towns where they may have lived; only about a third had ages. It appeared to include whole families. Seven people from Paradise with the same last name, the oldest 72. A couple in their 80s, another in their 60s and possibly their mother.

The list is a compilation of all the people who were reported missing — and remain unaccounted for — since the devastating Camp fire erupted in Butte County in the early hours of Nov. 8, consuming entire neighborhoods in just hours. That number dropped Sunday for the first time in days, from 1,202 to 993. But it raises a startling question: Could that many people really have died in the blaze?

Authorities say probably not.

The data are far from perfect. Some people may be listed twice, or more. Others may be safe somewhere, unaware that someone is looking for them.



A new survey-based report by EY and the Institute of International Finance (IIF) has concluded that, as technology and ongoing competitive disruption force banks to reinvent themselves, the risk management function must undergo a revolution with risk management professionals balancing their roles and operating models.

The report, ‘Accelerating digital transformation: four imperatives for risk management’, finds that risk groups:

  • Link strategy and risk appetite (67 percent);
  • Identify forward-looking or emerging risks (53 percent);
  • Assess strategy and business models from a risk appetite perspective (36 percent);
  • Help influence firm risk culture and behaviors (34 percent); and
  • Implement effective risk management structures (31 percent).

Four imperatives that boards, senior management, chief risk officers (CROs) and other key executives will have to address to stay competitive, maintain trust, and successfully achieve their digital transformation ambitions are highlighted. The four imperatives are: adapting to a risk environment and risk profile that is changing faster and more intensively than ever; leveraging risk management to enable business transformation and sustained growth; delivering risk management effectively and efficiently; and managing through and recovering from disruptions.



I remember a few years ago when, as enterprise architects, we sat around in the office of the VP of architecture and planned our data strategy on the whiteboard. Replace that clunky warehouse with a modern appliance? Check. Enterprise data model? Of course! The dreaded data governance plan? Yup, but our discussion was about IBM’s latest tooling, not the fact that there was already a data code management group that didn’t talk to IT or care what we did. That was before Hadoop, data lakes, and the open source revolution, but it was just at the dawning of the age of the customer. Had we known then what was coming at us . . . ah well.

My point is that many of our clients sit in similar, if a bit more modern, situations today. I talk to these clients every day, and I want to help them prepare not for the challenges they face today but for what is coming down the pipe. The single biggest overriding factor that should concern CIOs, their staff, and their business is the relentless accelerating pace of business, driven by information technology. The age of the customer began because information technology empowered customers and firms began to offer them more information and more choices. Today, the overriding force is the power that emerging technology is bringing to innovation and business-model change efforts. With each generation of emerging technology accelerating the returns of the next, what we see is sustained, technology-driven business acceleration.



Tuesday, 20 November 2018 15:14

Is Your Data Strategy Ready To Keep Up?

(TNS) - The number of reported dead in Butte County’s Camp Fire did not increase overnight, remaining at 77 on Monday morning, Cal Fire said in an incident report.

The toll had increased by one on Sunday, when the remains of one person were found in an outdoor location in Butte Creek Canyon. Of the 77 dead, 67 have been tentatively identified, according to the Butte County Sheriff’s Office.

A total of 993 people were still reported missing, said Miranda Bowersox, a spokeswoman for the Butte County Sheriff’s Office, on Sunday.

There are now 1,318 people accounted for in the area, Bowersox said, an increase of 604 from Saturday.

The deadliest and most destructive wildfire in California history, the Camp Fire has destroyed more than 15,000 structures total, including more than 11,700 homes, according to the Monday morning incident report.



Everbridge to Power Mobile Alerts for the World’s Largest Holiday Parade

BURLINGTON, Mass. – Everbridge, Inc. (NASDAQ: EVBG), the global leader in critical event management and enterprise safety software applications to help keep people safe and businesses running, today announced that the City of New York has deployed its market-leading critical event management platform to alert attendees of the 2018 Macy’s Thanksgiving Day Parade in the event of an emergency, disruption or need to share important information during the festivities. Everbridge powers the statewide emergency notification platform NY-ALERT and the New York City emergency notification platform Notify NYC.

As tens of millions watch from home, more than three million people are expected to attend the nationally televised parade, which kicks off at 9 am ET on Thursday, November 22 and stretches 2.5 miles through Manhattan (from 77th Street and Central Park West to 7th Avenue and 34th Street).

Residents and visitors who are planning to line the parade route are being encouraged to register for alerts by texting THXGIVING18 to 692692 (NYC-NYC) to receive critical updates directly from New York City Emergency Management. Text messages may include safety, traffic, weather, and event alerts, including street closures and detours, transit delays, parade disruptions, reunification locations for missing persons, and updates along the parade route.
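The keyword-based opt-in described above can be illustrated with a minimal sketch. This is a hypothetical illustration only: the function names and the way inbound texts are matched to events are assumptions, not Everbridge’s actual API; only the THXGIVING18 keyword and shortcode come from the announcement.

```python
# Hypothetical sketch of an event-keyword SMS opt-in flow.
# Handler and data-structure names are illustrative, not Everbridge's API.

SUBSCRIPTIONS = {}  # phone number -> set of event keywords

EVENT_KEYWORDS = {"THXGIVING18": "2018 Macy's Thanksgiving Day Parade alerts"}

def handle_inbound_sms(phone: str, body: str) -> str:
    """Subscribe a sender to alerts when their message matches an event keyword."""
    keyword = body.strip().upper()
    if keyword in EVENT_KEYWORDS:
        SUBSCRIPTIONS.setdefault(phone, set()).add(keyword)
        return f"You are subscribed to {EVENT_KEYWORDS[keyword]}."
    return "Keyword not recognized."

def broadcast(keyword: str, message: str) -> list:
    """Return the (phone, message) pairs an alerting service would deliver."""
    return [(phone, message)
            for phone, kws in SUBSCRIPTIONS.items() if keyword in kws]
```

In practice the same keyword table would drive the targeted updates the city describes — street closures, transit delays, reunification locations — by broadcasting only to numbers that opted in to that event.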

Notify NYC will also once again deploy Everbridge for the city’s New Year’s Eve celebration on December 31, as an estimated one million people gather in Times Square to usher in 2019.

Everbridge and its Community Engagement solution make it easy for residents, visitors and attendees to sign up to receive important safety and event information on their cellphones through an event-based keyword. The technology has been deployed at similar large-scale events including the Pride Parade in San Francisco, Mardi Gras in New Orleans, the March For Our Lives rally in Washington, D.C., the Presidential Inauguration, and many championship sports celebrations across the country. It is also regularly used by officials during severe weather situations, including Hurricane Lane, Hurricane Irma, and the California wildfires.

About Everbridge

Everbridge, Inc. (NASDAQ: EVBG) is a global software company that provides enterprise software applications that automate and accelerate organizations’ operational response to critical events in order to keep people safe and businesses running. During public safety threats such as active shooter situations, terrorist attacks or severe weather conditions, as well as critical business events including IT outages, cyber-attacks or other incidents such as product recalls or supply-chain interruptions, over 4,200 global customers rely on the company’s Critical Event Management Platform to quickly and reliably aggregate and assess threat data, locate people at risk and responders able to assist, automate the execution of pre-defined communications processes through the secure delivery to over 100 different communication devices, and track progress on executing response plans. The company’s platform sent over 2 billion messages in 2017 and offers the ability to reach over 500 million people in more than 200 countries and territories, including the entire mobile populations on a country-wide scale in Sweden, the Netherlands, the Bahamas, Singapore, Greece, Cambodia, and a number of the largest states in India. The company’s critical communications and enterprise safety applications include Mass Notification, Incident Management, Safety Connection™, IT Alerting, Visual Command Center®, Crisis Commander®, Community Engagement™ and Secure Messaging. Everbridge serves 9 of the 10 largest U.S. cities, 8 of the 10 largest U.S.-based investment banks, all 25 of the 25 busiest North American airports, six of the 10 largest global consulting firms, six of the 10 largest global auto makers, all four of the largest global accounting firms, four of the 10 largest U.S.-based health care providers and four of the 10 largest U.S.-based health insurers. 
Everbridge is based in Boston and Los Angeles with additional offices in Lansing, San Francisco, Beijing, Bangalore, Kolkata, London, Munich, Oslo, Stockholm and Tilburg. For more information, visit www.everbridge.com, read the company blog, and follow on Twitter and Facebook.

Fitness and I have always had a love/hate relationship. For me, it comes and goes in waves, no matter what the driving factors are – feeling better, being a good role model for my kids, an upcoming vacation or just recovering from a stretch of poor eating choices. One thing always remains the same: the only way to get back on track is through planning, training and lots of EXERCISING. My go-to plan usually involves running and setting a goal, such as signing up for a 10-mile or 13.1-mile race. Because of my gaps in exercising, I usually opt for a couch-to-13.1 type of training program. We can take this same approach to get a business continuity program (BCP) up and running or back on track. And unlike my training, with a little automation we can close those gaps and stay consistent with the planning and maintenance that will drive the steady evolution of your program.

Let’s face it: life is busy, so it’s crucial to make every effort count. When it comes to training for a race, I wish I could tap my sneakers three times and make the process “auto-magic.” Knowing that’s impossible, I turn to the next best thing: technology. I would be completely lost without my running apps. This technology aids the training process by telling me when to run, rest and recover, and to top it off there is a built-in run coach to guide me along the way. Before you know it, I’m couch-to-13.1 ready in a short 12-14 weeks!

These same principles hold true for a business continuity program. No matter what the driving factors are – regulatory requirements or the safety of your employees and stakeholders – these drivers hold us and our organizations accountable. Next comes the technology. A platform like BC in the Cloud takes away some of that heavy lifting. By leveraging technology, we can make this chore much easier.



Monday, 19 November 2018 17:18

BCP Couch to Continuity

It’s the time of year when millions of Americans sign up for health insurance, and insurance providers encourage their clients to go in for a checkup or get annual screenings to see where they stand health-wise. The steps to a healthy lifestyle are fairly obvious: eat the right foods, exercise, lose weight (if you’re overweight), reduce stress, protect your skin, and so on. But if you want to lead a more resilient lifestyle – one in which you have greater influence over your circumstances, and the ability to face challenges with courage – the steps are a little different: stay flexible, learn life lessons, stay connected, take action, release tension.

The same philosophy can be applied to enterprises looking to private clouds or hyperscale clouds like Amazon Web Services (AWS) to transform their business. You have to know where you stand in order to implement a program designed to leverage all the resiliency tools in your IT toolkit:

  • Infrastructure architecture
  • Application architecture
  • Backup and Recovery architecture
  • Security posture
  • Governance and Change Management

Having the expertise and resources to execute effectively is where many organizations find they need help. Just as humans are more likely to get sick or injured when their environment is compromised (such as catching the flu or fracturing a wrist from a fall), your applications can fail – not just the infrastructure where they reside.



Monday, 19 November 2018 17:17

Healthy clouds mean healthy business

The Strategic 3 Questions

There are a number of ways organizations can go wrong in terms of strategy. Linda Henman discusses some of the most common ways she’s seen leaders go astray.

Several years ago, I helped the president of a manufacturing company run a strategy initiative. When I walked into the room on the first day, 26 eager faces greeted me. Well-intended though he was, my client had inadvertently turned our strategy session into an execution — a session that promised to kill the strategy before it had a chance to live.

Those 26 people had shown up wondering how they could do what needed to be done, and we hadn’t even discussed what should happen, why that would be important and, most importantly, the benefits of making changes. These professionals had arrived well prepared to talk about vision, mission, plans and values. But they hadn’t given enough thought to what I call the Strategic Three Questions:

  1. A year from now, what do we want to be true that is not true now?
  2. Why is this important?
  3. What benefit will the company enjoy if you achieve this?



There we were . . . a round table of CX leaders from across Southeast Asia, senior executives with years of experience running large, successful teams and chipping away at the journey to turn our organizations into customer-obsessed enterprises. We shared our recent wins and successes and learned from each other how to go faster, stronger, more courageously . . . as we do every time we get together.

Then the question from one of our fold: “Will I still have a job in 2025?” The question was met with a stunned pause in conversation (quite a feat for a group of CX professionals!). As facilitator, I jumped in to explore: “Do you mean you predict that your organization will get tired of the CX transformation and give up?” I thought this a valid assumption, given I’d seen this happen so often.

The executive explained, “No, not that at all. In fact, quite the opposite. We are getting real momentum, and teams and leaders across the organization are taking up the skills of things like design thinking and journey mapping and integrating them into their ways of working. We’ve been pushing out dashboards and insights that everyone can access . . . but what does this leave for me and my team to be working on?”

The rest of the group understood: Her success was great, as well as what she’d been working toward, but what role do our teams have when “customer centricity” becomes the way everyone is working?



Monday, 19 November 2018 17:13

Will CX Pros Still Have A Job In 2025?

Ideas to Maximize Hotline Effectiveness

It could be a good sign if the phones aren’t ringing at your organization’s hotline – or it could be indicative of a failing ethics and compliance program. Ron Kral discusses how to maintain a successful hotline program.

Is your whistleblower hotline alive or dying a slow death? Whether it’s an effort to jumpstart your hotline program or simply to harvest ideas for continuous improvement, you will want to keep reading.

It’s been 15 years since the Sarbanes-Oxley Act led Rule 10A-3 of the Exchange Act to direct the NYSE, Nasdaq and other national securities exchanges and associations to require listed companies’ audit committees to establish formal procedures for addressing complaints. Specifically, listed public company audit committees were required to establish procedures for the receipt, retention and treatment of complaints regarding accounting, internal accounting controls or auditing matters on a confidential and anonymous basis. Thus, the whistleblower hotline trend was born.

Of course, many other organizations voluntarily jumped on the whistleblower hotline trend, and rightfully so. Surveys by the Association of Certified Fraud Examiners have historically concluded that tips are by far the leading detection method of occupational fraud.[1] While hotlines have long proved to be effective, too many organizations put this effort on cruise control rather than looking for opportunities to maximize the value of their hotline investment.



Monday, 19 November 2018 17:00

Keeping Your Whistleblower Hotline Alive

(TNS) - The town’s emergency services radio system needs a root-to-branch upgrade, Police Chief Dennis Woessner told the Town Council this week.

Emergency “radio coverage in East Hampton is sub-standard,” Woessner said in delivering a stark report to the council.

“Our radio system is failing - and it’s only going to get worse,” Woessner said.

“This is a glaring issue. A functioning radio system is not a luxury, it’s a necessity,” Woessner said.

As much as 20 to 25 percent of the town lies in “dead zones” where police officers are left without the ability to reliably communicate with headquarters, he said.

The acceptable standard is “95 percent coverage 95 percent of the time,” Woessner said.



(TNS) - It’s an iconic if horrifying shot of the Camp fire pulverizing Paradise — a large ball of grayish-black smoke with fire radiating on the right, taken less than two hours after the Northern California inferno started a week ago.

The photo ran on the websites of the New York Times, Washington Post and Time magazine. It was taken on an iPhone from the roof of the Chico Enterprise-Record’s office by the paper’s editor, David Little.

The responsibility fell to the Chico native because the newspaper’s only photographer is on medical leave. The image also ran prominently in the Enterprise-Record’s Friday print edition.

“It was just the first photo we posted on our website that morning and stayed there till (the) afternoon,” Little said. Until “we got some real photographers in town.”

Little has run the small paper and several others, which are part of the Digital First Media Group, for almost 20 years. The Enterprise-Record’s staff was 45 when he started; now it’s 10 with four part-timers pitching in. Journalists from their sister papers in the San Francisco Bay Area were dispatched to assist with coverage.



This is part 1 in a 2-part series on serverless cloud computing.

Rapidly expanding connectivity options and increased development in more secure cloud infrastructures are leading organizations to research, migrate, and develop applications in the cloud. These organizations are taking advantage of a reliable, scalable, and secure managed infrastructure to form the basis of their development environments. Cloud providers, like AWS, are expanding services at a rapid rate to meet demand; serverless computing is one such offering, and the AWS Well-Architected Framework provides guidance for designing with it.

The AWS Well-Architected Framework

The five pillars of the Well-Architected Framework, according to AWS, are operational excellence, security, reliability, performance efficiency, and cost optimization. Each pillar has associated principles and best practices to ensure AWS architectures and applications are designed and built optimally.
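To make the serverless model concrete, here is a minimal sketch of a function written for AWS Lambda’s standard Python entry point, `lambda_handler(event, context)`. The payload shape assumes an API Gateway proxy integration; the greeting logic is purely illustrative and not from the article.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda-style handler: echo a greeting from an API Gateway event.

    The (event, context) signature is Lambda's standard Python entry point;
    the event shape below assumes an API Gateway proxy integration, where the
    request body arrives as a JSON string under the "body" key.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the handler can be exercised without any AWS infrastructure:
if __name__ == "__main__":
    fake_event = {"body": json.dumps({"name": "serverless"})}
    print(lambda_handler(fake_event, None))
```

The appeal from a Well-Architected perspective is that the provider owns the infrastructure concerns — scaling, patching, availability — while the developer’s responsibility shrinks to the function body and its configuration.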



(TNS) — When the Camp fire barreled toward this Sierra foothill town last Thursday morning, officials had a crucial choice to make right away: How much of Paradise should be evacuated?

The decision was complicated by history and topography. Paradise sits on a hilltop and is hemmed in by canyons, with only four narrow winding routes to flee to safety. During its last major fire in 2008, authorities evacuated so many people that roads became dangerously clogged.

So this time, they decided not to order an immediate full-scale evacuation, hoping to get residents out of the neighborhoods closest to the fire first, before the roads became gridlocked.

But it soon became clear that the fire was moving too fast for that plan, and that the whole town was in jeopardy. A full-scale evacuation order was issued at 9:17 a.m., but by then the fire was already consuming the town.

At least 56 people were killed — most of them in their homes, some trying to flee in their cars and others outside, desperately seeking shelter from the flames. More than 10,000 structures were lost in what is by far the worst wildfire in California history.



The number of management systems has risen dramatically in recent years, reflecting the needs and demands of more and more organizations looking to improve their performance across a wide range of areas and sectors. And most companies have more than one. ISO’s useful guide to integrating management system standards – whether they be from ISO or not – has just been updated.

From improving quality to energy efficiency, environmental performance or even road traffic, the use of management systems has grown rapidly in recent years, reflecting increasingly complex operating environments and contexts. The quest for continual improvement and sustained performance has prompted the need for a handbook to help guide organizations through effective management system design that is agile and integrated, to respond and grow.

ISO 9001 (quality), ISO 50001 (energy) and ISO 14001 (environment) are some of ISO’s most well-known and used management system standards (MSS), amongst more than 60 that make up the ISO portfolio, which also covers areas such as occupational health and safety (ISO 45001), food safety (ISO 22000), education (ISO 21001) and information security (ISO 27001). Unlike other types of standards, MSSs have an impact on many different aspects and functions of an organization and, increasingly, companies have more than one.



“This isn’t working.”  “I’ve changed.”  “I don’t see a future with you.”  Those ‘breakup’ lines may apply to your Business Continuity Management software or your latest paramour.

Not all relationships succeed.  When things go awry, goodbye may be the best solution.  But that may seem impossible – even when you know it’s necessary.  You’ve invested countless hours and piles of capital (both monetary and political) populating your current BCM software.  When it no longer meets your growing needs – or the vendor ceases support – you must make a choice.

Could you lose your data?  Starting over may seem too expensive and burdensome.  Or you may fear that ditching your BCM app will leave your organization vulnerable, with nothing but static copies of its plans.  Change seems too risky!

It really isn’t. Change could improve your existing situation.  When you search for replacement software, ensure the new vendor can reuse your old data, store your existing plans, and give your planners and responders what they need.  But don’t expect to get exactly what your old vendor provided.



There’s more to Business Continuity (BC) disaster recovery drills than tabletop exercises, but you wouldn’t know it from the practices of most organizations. Most companies limit themselves to tabletop drills, the most basic type of recovery exercise.

In today’s post, I’ll introduce you to the full range of BC drills, and explain why it’s important for your organization to push itself and tackle the more demanding and realistic types of exercise.



It’s the time of year when we start looking ahead to the New Year and the possible changes that may occur in the threat landscape. In this article, Ian Kilpatrick makes ten predictions for changes that may occur in the cyber security environment.

Increase in crime, espionage and sabotage by rogue nation-states

With the ongoing failure of significant national, international or UN level response and repercussion, nation-state sponsored espionage, cyber crime and sabotage will continue to expand. Clearly, most organizations are simply not structured to defend against such attacks, which will succeed in penetrating defences. Cyber security teams will need to rely on breach detection techniques.

GDPR - the pain still to come

The 25th of May 2018 has come and gone, with many organizations breathing a sigh of relief that it was fairly painless. They’ve put security processes in place and can say they are en route to compliance – so everything is OK? Not quite. We are still awaiting the first big GDPR penalty; when it arrives, organizations will suddenly start looking seriously at what they really need to do. Facebook, BA, Cathay Pacific and others have suffered breaches recently, and will face different levels of corporate cost as a result, depending on which side of the May 25th deadline they sit. So GDPR will still have a big impact in 2019.



Thursday, 15 November 2018 17:20

Ten cyber security predictions for 2019

In today’s post, we’re going to answer questions about risk management that we’ve been asked recently by readers of our blogs and by MHA Consulting clients.

What’s so important about risk and risk management that it gets its own Readers’ Mailbag?

Read on for the answer to that and several other interesting questions on the topics of measuring, monitoring, and managing risk.



Thursday, 15 November 2018 17:17

All About Risk Management: Reader’s Mailbag

A few years ago, a renowned workplace consultant named Jay Forte wrote an article about hiring employees who are “customer-ready.” He says these are the people who deliver exceptional service in that ‘moment of relevance’ where inspiration, emotion and product availability meet to produce a spark. According to Forte, one of the best examples of a “customer ready” company is Build-A-Bear, the toy company that makes personalized stuffed animals based on each customer’s special story.

If being “customer ready” means being able to deliver a customer experience that is tested and proven to produce successful outcomes, then Sungard AS is up to the challenge as the newest member of the AWS Solution Space Partner program.

The AWS Solution Space Partner program features partners who offer customer-ready solutions based on architectures validated by AWS. Sungard AS, with its Cloud Recovery – AWS solution that helps companies transform their business resiliency posture, is the newest member of the program.

But for companies looking to find the most resilient, robust technology to run their business, finding a “Build-a-Bear” type partner is a little more difficult. There are thousands of technology choices, from hardware to software to consultants and specialized apps. Not to mention that technology evolves at a lightning-fast pace, rendering it out-of-date within shorter and shorter timeframes.

One thing organizations can do is check with their technology providers to find “customer-ready” partners – vendors who have already been tested and certified to meet their needs. At Amazon Web Services (AWS), for example, the work is already done through its Solution Space Partner program featuring partners who offer customer-ready solutions based on architectures validated by AWS.



(TNS) — Doreen Zimmerman of Paradise lost her home of 29 years last week, fleeing with her new litter of a dozen puppies in the family car as flaming embers rained down.

She isn’t sure if she will return. “Will I be able to sleep if I live there?” she asks. “I don’t know.”

One thing she says she knows: She’s suing.

Zimmerman, a real estate appraiser, works for a California law firm that has repeatedly filed class action lawsuits against Pacific Gas & Electric and other utility companies, alleging their equipment failures caused many of California’s horrific string of recent wildfires.



Free cloud storage is one of the best online storage deals – the price is right. 

Free cloud backup provides a convenient way to share content with friends, family and colleagues. Small businesses and individuals can take advantage of free online file storage to access extra space, for backup and recovery purposes or just store files temporarily.

Free cloud storage also tends to have paid options that are priced for individuals, small businesses, and large enterprises – so they will grow with you. The cloud storage pricing can vary considerably for these options.

The following are the best free cloud backup services, with their associated advanced cloud storage options:

(Hint: some businesses have discovered that the most free cloud storage results from combining several free cloud services.)



Thursday, 15 November 2018 17:11

6 Best Free Cloud Storage Providers

“Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds.”

These famous words about postal carriers are etched in granite over the entrance to the New York City Post Office on 8th Avenue. While this noble sentiment might have worked for postal carriers in days of yore, today’s workers have other guidance. Today, most organizations rely on an inclement weather policy to let employees know whether they should come into work or stay home when bad weather strikes.

Nearly every climate has its share of challenging weather, including frigid temperatures, snowstorms, tornadoes, floods, hurricanes, and the threat of wildfires. Creating a clear inclement weather policy is important so employees know ahead of time what to expect when the weather turns bad.



Quickly Assembling and Organizing your Crisis Team is Critical to the Success of Your Response. A Crisis Coordinator is Essential!

What is a Crisis Coordinator? 

Each member of a crisis team has a functional role – e.g. Corporate Security, Human Resources, Corporate Communications, etc.  The Crisis Coordinator plays an organizing and facilitating role for the team, allowing the functional leads to focus on the specific priorities and tasks for their function.  In fact, the Crisis Coordinator should not have a functional role during a crisis, enabling them to focus on organizing the team, information and actions.



Thursday, 15 November 2018 17:00

The Crisis Coordinator – Organizing the Team

Nearly half of the firms we survey are prioritizing innovation as key to their business strategy. When I step back and think about it, the why is easy to understand; exponential changes like Moore’s Law in the past and Metcalfe’s Law today are accelerating the pace of business. In a recent study of 26 senior technology decision makers, 100% said the pace of business was faster today than it was five years ago. Point is, you feel it, I feel it…everybody feels it. And it’s that pressure to keep up that is driving firms to value innovation, set up innovation teams, and task these teams with applying emerging technology.

For the first time ever, emerging technology investment has superseded customer understanding as the number one thing firms want to do to be more innovative. One firm we interviewed said,

“Emerging technology is going to be essential to the future of our business. We know we need it to survive and compete.”

Here is a key challenge you will face with applying emerging technology to innovate…



(TNS) -  The day before firefighter radio transmissions revealed a malfunctioning PG&E power line may have triggered the state’s most destructive wildfire, a business owner in this tiny town near the Camp Fire’s origin said she received an email from the utility alerting her that workers had to fix a sparking problem on a nearby power line.

In the email received Wednesday, the company said they’d be coming out to work on one of their nearby towers that “were having problems with sparks,” said Betsy Ann Cowley, owner of Pulga, a former abandoned railroad town turned retreat popular with techies.

“This needs to become a class action lawsuit,” the former Oakland resident said.

Just what might have caused the sparking is unclear, but the radio transmissions reviewed by Bay Area News Group and an alert sent to state regulators indicate a transmission line created a hazard about 15 minutes before the blaze was first reported. Firefighters found downed power lines and a fast-moving fire beneath the high-tension wires when they arrived at the fire’s origin about a mile northeast of Pulga, by the dam.

A PG&E spokesman said he was looking into Cowley’s claims.



The Forrester New Wave™: Cybersecurity Risk Ratings, Q4 2018

Earlier today, we published “The Forrester New Wave™: Cybersecurity Risk Ratings, Q4 2018” evaluation. We take a close look at the nine most important vendors in this rising market, reviewing their current capabilities, customer references, and strategic road maps. This includes vendor profiles, with our analysis and buyer recommendations to support security and risk leaders in their quest to find the right cyber-risk rating tool.

Vendors covered (alphabetically): BitSight, FICO, iTrust, NormShield, Panorays, Prevalent, RiskRecon, SecurityScorecard, and UpGuard.

Third-Party Risk From The Attacker’s View

Cyber-risk rating tools show their value right away. They will scan and score your third-party risk environment and identify glaring gaps of key partners as early as your initial meeting. Especially with intuitive dashboards, reports, and risk insight all immediately in hand, it’s easy to get excited and pounce on the first solution that comes your way. Before you commit, though, determine how you will use the cyber-risk rating tool within your existing third-party risk management (TPRM) activities, noting that:



(TNS) — Fuel. Topography. Wind. Temperature. Humidity.

Sometimes they all coalesce in the worst ways possible, creating a blaze so destructive and deadly that entire towns are engulfed without warning. That was the case in California this weekend, when the Camp Fire destroyed the town of Paradise, and another swept through hundreds of homes near Malibu. Statewide, at least 44 people have been killed, with more than 200 people still missing. Some 7,500 homes and buildings have been lost.

But there’s one factor that can’t be qualified, quantified, studied or predicted: an element of firefighting that may one day run out in Eastern Washington.

“We’ve been lucky,” said Spokane Fire Chief Brian Schaeffer. “Very, very, very lucky.”

Similar to California, Washington and the rest of the Pacific Northwest have been dealing with an increasingly extended and costly fire season – spurred by rising temperatures due to climate change. This year was the worst wildfire season on record, with more than 1,700 wildfires reported — most of which started in Eastern Washington.



A missing 2-year-old boy in Morton County, North Dakota was recently found with the help of the CodeRED Mobile Alert app.

According to the Morton County Sheriff’s Office, the boy was reported missing on Thursday, October 11th after wandering away from his home.

A CodeRED Mobile Alert was sent to residents nearby informing them of the missing child along with a description. A neighbor who received the alert on her phone immediately called police when she saw the missing child outside her window. The quick action of this resident allowed officers to arrive on scene and return the child to his family.

Noting the key role that the CodeRED app played in quickly locating the missing boy, Morton County Sheriff Kyle Kirchmeier explained that:

Obviously [the mobile alert from the CodeRED App] speeds up the whole process of what we are doing. Instead of going door to door with law enforcement… you now have everybody notified of what is going on, and this is a case where it worked out very well.



All businesses work hard to communicate effectively with their employees. In the age of the mobile phone, text alerts have become a popular way for businesses to quickly and effectively disseminate information.

Companies use mass-texting for many purposes. Some use text alerts to communicate with customers. Others use them for marketing purposes, to engage with prospects. For the purpose of this article, though, we will be focusing on what you should know about one specific type of communication: mass-texting to communicate with employees.

On the surface, mass texting your employees to notify them of weather advisories, team news, travel schedules, office closings, and other updates seems like a good idea. After all, most of us tote along our cell phones everywhere (including to work), so it’s easy enough to blast out a text to a large group. Simple, free, and done. What’s not to love?



(TNS) - A forecast that includes several days of gusting Santa Ana winds has fire officials worried about the possible spread of the 83,000-acre Woolsey fire straddling Ventura and Los Angeles counties, officials said Sunday.

The fire, which has killed two people and forced more than 250,000 from their homes, was 10 percent contained as of Sunday morning.

But expected wind gusts of 40 mph or stronger over the next several days have officials concerned that the fire could spread quickly, and they urged residents who were still home to leave immediately.

“Maybe 10 or 20 years ago you stayed in your homes when there was a fire and you were able to protect them,” Ventura County Fire Chief Mark Lorenzen said. “We’re entering a new normal. Things are not the way they were 10 years ago.”



Many organizations successfully endure a major business disruption and then drop the ball by not looking back at the event and drawing lessons from it for the future. In today’s blog, we’ll look at how performing a Post-Incident Analysis (PIA) can help you turn a disruption into a powerful opportunity for learning and improvement.

In many fields, it’s routine to look back at past events to learn lessons that can be applied in the future. The Army conducts after-action reports, secret agents get debriefed, and football teams review game film.

However, in the world of Business Continuity (BC) and IT/Disaster Recovery (IT/DR), such post-event reviews are surprisingly rare.



Gemma Platt shares five critical steps that businesses need to take in order to embed and embrace ISO 27001 risk assessments within their data protection processes.

If 2017 was the worst year for cyber attacks, according to the Online Trust Alliance, 2018 hasn't been much better. While we haven't yet seen a cyber incident on the scale of 2017's huge WannaCry and NotPetya ransomware attacks, which hit thousands of organizations globally, there have been many high-profile and damaging breaches.

Incidents like these can impact businesses in a number of ways. Operational systems can grind to a halt, leading to lost sales and revenues. The reputational repercussions of a data breach can travel fast in the age of social media, and be almost impossible to recover from. Finally, the introduction of GDPR adds a level of complexity that can leave businesses liable for fines of up to 4 percent of their annual global turnover in the wake of an incident.

Given that last year’s major incidents had such a significant impact on large international enterprises with huge cyber security resources at their disposal, it's understandable that small and medium-sized businesses feel they have little chance of being able to defend themselves in the ever-growing threat landscape. However, that's not the case. Developing an effective cyber security posture is a procedure that can be followed and continuously measured; that process is ISO 27001.



This month’s blog post explores the importance of understanding how an organization’s culture impacts its resilience.  This is part one of a two-part series on culture and part four in our continuing series on organizational resilience.

Culture serves four functions, including providing a sense of identity to members and promoting a sense of commitment.  Culture helps members of the organization attribute sense and meaning to organizational events.  Culture reinforces the values in the organization.  Culture serves as a control mechanism for shaping behavior.

Edgar Schein is best known for his groundbreaking work in the 1980s on the topics of Organizational Culture, Learning, and Change at the MIT Sloan School of Management.

This post references several of Edgar Schein’s most famous quotes as we explore culture and its impact on an organization’s resilience.  One important aspect to note is the difficulty separating the impact of the leadership of the organization from its culture.



These are the five major developments Jerry Melnick, president and CEO, SIOS Technology, sees in cloud, High Availability and IT service management, DevOps, and IT operations analytics and AI in 2019:


1. Advances in Technology Will Make the Cloud Substantially More Suitable for Critical Applications

Advances in technology will make the cloud substantially more suitable for critical applications. With IT staff now becoming more comfortable in the cloud, their concerns about security and reliability, especially for five-nines of uptime, have diminished substantially. Initially, organizations will prefer to use whatever failover clustering technology they currently use in their datacenters to protect the critical applications being migrated to the cloud. This clustering technology will also be adapted and optimized for enhanced operations in the cloud. At the same time, cloud service providers will continue to advance their service levels, leading to the cloud ultimately becoming the preferred platform for all enterprise applications.

2. Dynamic Utilization Will Make HA and DR More Cost-effective for More Applications, Further Driving Migration to the Cloud

Dynamic utilization of the cloud’s vast resources will enable IT to more effectively manage and orchestrate the services needed to support mission-critical applications. With its virtually unlimited resources spread around the globe, the cloud is the ideal platform for delivering high uptime. But provisioning standby resources that sit idle most of the time has been cost-prohibitive for many applications. The increasing sophistication of fluid cloud resources deployed across multiple zones and regions, all connected via high-quality internetworking, now enables standby resources to be allocated dynamically only when needed, which will dramatically lower the cost of provisioning high availability and disaster recovery protections.
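A rough sketch of the allocate-on-demand pattern described above (the `CloudProvider` class and its method are invented for illustration, not a real cloud SDK): a standby node is provisioned only when a health check fails, so nothing idle is billed in the meantime.

```python
# Hypothetical sketch: allocate a standby node only on failure detection,
# rather than keeping idle capacity provisioned around the clock.
# CloudProvider and failover_if_unhealthy() are illustrative names.

class CloudProvider:
    """Toy stand-in for a cloud API that bills per allocated node."""
    def __init__(self):
        self.allocated = []

    def allocate_node(self, region):
        node = f"standby-{region}-{len(self.allocated)}"
        self.allocated.append(node)
        return node

def failover_if_unhealthy(provider, healthy, region="us-east-1"):
    """Provision and promote a standby only when the primary is down."""
    if healthy:
        return None  # nothing provisioned, nothing billed
    return provider.allocate_node(region)  # promote this node to primary

provider = CloudProvider()
assert failover_if_unhealthy(provider, healthy=True) is None
print(failover_if_unhealthy(provider, healthy=False))  # prints standby-us-east-1-0
```

The cost difference is the point: a traditional warm-standby design pays for the standby node continuously, while this pattern pays only between failure detection and recovery.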

3. The Cloud Will Become a Preferred Platform for SAP Deployments

Given its mission-critical nature, IT departments have historically chosen to implement SAP and SAP S4/HANA in enterprise datacenters, where the staff enjoys full control over the environment. As the platforms offered by cloud service providers continue to mature, their ability to host SAP applications will become commercially viable and, therefore, strategically important. For CSPs, SAP hosting will be a way to secure long-term engagements with enterprise customers. For the enterprise, “SAP-as-a-Service” will be a way to take full advantage of the enormous economies of scale in the cloud without sacrificing performance or availability.

4. Cloud “Quick-start” Templates Will Become the Standard for Complex Software and Service Deployments

Quick-start templates will become the standard for complex software and service deployments in private, public and hybrid clouds. These templates are wizard-based interfaces that employ automated scripts to dynamically provision, configure and orchestrate the resources and services needed to run specific applications. Among their key benefits are reduced training requirements, improved speed and accuracy, and the ability to minimize or even eliminate human error as a major source of problems. By making deployments more turnkey, quick-start templates will substantially decrease the time and effort it takes for DevOps staff to set up, test and roll out dependable configurations.
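The core idea behind such templates can be sketched in a few lines: a declarative description of resources is expanded into the ordered provisioning calls an automation layer would execute. The template format and `provision()` function below are invented for illustration, not any vendor's actual schema.

```python
# Illustrative sketch of a quick-start template: declarative input,
# deterministic ordered provisioning actions as output.

TEMPLATE = {
    "name": "web-app-quickstart",
    "resources": [
        {"type": "network", "cidr": "10.0.0.0/16"},
        {"type": "vm", "size": "medium", "count": 2},
        {"type": "database", "engine": "postgres"},
    ],
}

def provision(template):
    """Expand a template into the ordered actions an automation layer would run."""
    actions = []
    for res in template["resources"]:
        if res["type"] == "vm":
            # Fan out VM entries so each instance is created explicitly.
            actions += [f"create vm ({res['size']})" for _ in range(res["count"])]
        else:
            actions.append(f"create {res['type']}")
    return actions

print(provision(TEMPLATE))
```

Because the same template always expands to the same action list, deployments become repeatable, which is exactly how these wizards squeeze out the human error mentioned above.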

5. Advanced Analytics and Artificial Intelligence Will Be Everywhere and in Everything, Including Infrastructure Operations

Advanced analytics and artificial intelligence will continue becoming more highly focused and purpose-built for specific needs, and these capabilities will increasingly be embedded in management tools. This much-anticipated capability will simplify IT operations, improve infrastructure and application robustness, and lower overall costs. Along with this trend, AI and analytics will become embedded in high availability and disaster recovery solutions, as well as cloud service provider offerings to improve service levels. With the ability to quickly, automatically and accurately understand issues and diagnose problems across complex configurations, the reliability, and thus the availability, of critical services delivered from the cloud will vastly improve. 

Lesser-Known Risks for Corporations and Consumers

Big data corporations are always seeking new ways to capture data from consumers, and some of the tactics they employ can expose their targets – and at times, their employers – to significant privacy risk. Greg Sparrow discusses some of the least-known privacy dangers consumers face today.

Now more than ever before, “big data” is a term that is widely used by businesses and consumers alike. Consumers have begun to better understand how their data is being used, but many fail to realize the hidden dangers in everyday technology. From smartphones to smart TVs, location services and speech capabilities, oftentimes user data is stored without your knowledge. Here are some of the most common, yet least-known privacy dangers facing consumers today.



Thursday, 08 November 2018 15:32

5 Hidden Privacy Dangers

(TNS) - More than 100 people gathered at a government center in Apple Valley Monday night to send a message to all that they stand against all forms of hate and evil.

“We as a community will not tolerate intolerance and crimes based on intolerance, and we, as a community, stand in solidarity arms locked, ready to defend and protect, every citizen here in Minnesota,” U.S. Attorney Erica MacDonald said.

MacDonald was one of more than nine officials who spoke as part of a community meeting held in response to the shooting at the Tree of Life synagogue in Pittsburgh, Pennsylvania. The event, which lasted more than an hour, covered a variety of topics, including federal hate crime laws, identifying and reporting hate crimes, and best practices for creating safe and secure houses of worship.



Forrester predicted in 2018 that digital would reach the hard stuff, the core of business: organization structure, operating models, and endless discussion of platforms. These themes are far from done; however, 2019 sees a distinct shift in emphasis — winter is coming.

In 2019, economic uncertainty stalks the halls, and budgets will see pressure. Many firms are failing at full-scale “big bang” enterprise transformation; 21% already think they’ve finished — and the rest risk losing their way. For digital leaders, necessity is the mother of innovation as firms accelerate their recession planning and focus on supporting operational improvements that drive tangible benefit for both firm and customer.

Forrester released its annual digital business predictions with clear advice that now is the time to reboot transformation. Firms should focus on digitizing experiences in the service of customers, using technology and partners to create short-term gains that underpin long-term ambition, and drive operational change from there.



(TNS) - Gusty northern winds, dry vegetation and low humidity across the Bay Area have created prime circumstances for wildfires, prompting a red flag warning Wednesday and possible power shutoffs for Northern California residents.

The National Weather Service in Sacramento issued the red flag warning to take effect from Wednesday night to Friday morning in the North Bay mountains and East Bay hills. Those areas are expecting wind gusts up to 45 mph, and any fires that spark could spread rapidly. The biggest threats exist in the hills of eastern Napa County and areas around Atlas Peak, Mount Diablo and Mount Hamilton, officials said.

As a result of the warning, Pacific Gas and Electric Co. announced Tuesday that customers in parts of nine counties — Butte, Lake, Napa, Nevada, Placer, Plumas, Sierra, Sonoma and Yuba — may have their power preemptively cut Thursday as a safety precaution.

PG&E officials are working with first responders and local authorities to monitor weather conditions before deciding to turn off power.



The hype around big data got several things right, one being that data volumes are big and growing quickly. Digital transformation — as well as the growing adoption of IoT and the ubiquity of the device (AKA, “mobile phone”) that we all carry with us — has accelerated the growth of data. Having successfully put that data to use within their own operations, many companies now recognize that others can benefit from their data assets, too. Data commercialization has become increasingly common, and the supply of data products and services will continue to grow.

  • Almost half of companies currently sell their data. Today, 47% of organizations report that they share or sell their data for revenue. When asked how they commercialize, 54% report exposing an API to the data for systematic or real-time access, while 38% report selling an application that enables users to see trends and insights in the data.
  • New data brokers and marketplaces facilitate data commercialization. New channels to market help aspiring data commercializers. Alternative data brokers such as Quandl and Eagle Alpha identify new sources of data, which are often firms in fintech or eCommerce with transaction data that provides investors insights into future market movements. Nascent data marketplaces — Dawex, DataStreamX, dmi.io — pitch an easy way to get data to market, with click-through categorization and licensing models.
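The most common commercialization route in the survey above — exposing an API over a dataset for systematic access — can be sketched as follows. Everything here is hypothetical: the records, the licensed key list, and the `query_data()` interface are invented to illustrate the pattern of key-gated, filtered access to a data product.

```python
# Hedged sketch of a data-product API: licensed subscribers query
# records by date through a key-gated endpoint. All names are invented.

import json
from datetime import date

API_KEYS = {"acme-fund"}  # licensed subscribers (illustrative)

RECORDS = [
    {"day": "2018-11-01", "transactions": 1240},
    {"day": "2018-11-02", "transactions": 1315},
]

def query_data(api_key, since):
    """Return licensed records on or after `since` as JSON, gated by API key."""
    if api_key not in API_KEYS:
        raise PermissionError("unknown or unlicensed API key")
    cutoff = date.fromisoformat(since)
    rows = [r for r in RECORDS if date.fromisoformat(r["day"]) >= cutoff]
    return json.dumps(rows)

print(query_data("acme-fund", "2018-11-02"))
```

In a real offering the key check would sit in an API gateway and the records in a database, but the commercial shape is the same: the license determines who may call, and the query surface determines what insight they can extract.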



Last year, Forrester predicted that firms would struggle with new technologies, particularly artificial intelligence. This prediction came true: Firms continued with AI experiments that lacked meaningful results. Adoption has now slowed (51% adoption in 2017; 53% adoption in 2018). And budgets remain low in contrast to the ROI and transformation expectations for AI (under $2M for 2018). Will firms claim defeat?

On the contrary — in 2019, Forrester predicts that firms will address the pragmatic side of AI now that they have a better understanding of the challenges and embrace the idea that “no pain means no AI gain.” The AI reality is here. Firms are starting to recognize what it is and isn’t, what it can do and what it cannot. And they are seeing the real challenges of AI versus what they assumed the challenges would be. Firms will focus their attention on the data foundations, take creative approaches to building and holding on to AI talent, weave intelligence into business processes, and begin to establish the mechanisms for understanding why AI is acting the way it is.

Here are the takeaways from the predictions.



Wednesday, 07 November 2018 15:20

Predictions 2019: Expect A Pragmatic Vision Of AI

(TNS) - It was May 1, 2019, and a tornado was heading toward the Kirksville R-III School District.

That was the premise of an exercise conducted at the district's administration building Wednesday, when emergency responders and school officials gathered to consider how the district and the city would respond in an emergency weather event.

"This is not a test," Kirksville Police Department Chief Jim Hughes, who led the exercise, said. "We're not gonna succeed or fail, and even in a real natural disaster, not everybody gets through it, but at some point it gets resolved. The point of doing these types of exercises is to get that resolution as positive as possible."

Hughes led the room through a scenario in which the school district's campus is hit by a tornado rated a four on the Enhanced Fujita scale, meaning it has wind speeds up to 200 miles per hour and can do substantial damage, such as leveling homes and throwing cars through the air. Participants talked through what they would do at every step of the hypothetical situation, from receiving news of a tornado watch to reuniting children with parents after the danger had passed.



(TNS) - The process of rebuilding is just starting for the families struck by the worst fire to hit the area in 25 years.

Steve and Candie Smith lost their 47th Avenue home in south Kennewick in a matter of minutes Aug. 11, but it’s likely going to be more than a year before their house is rebuilt.

“We’re really happy with how things are happening,” said Candie, though she admits it’s been a slow process.

Tri-City fire officials are studying what went wrong, what went well and how they can do more to protect homes and lives.

They hope to turn what they learned from the 5,000-acre, wind-driven wildfire that destroyed five homes and damaged three others into a safer future.



There’s a recurring assumption in discussions about internet of things (IoT) platforms: The platform providers make their money by mining insights from data loaded into their platform. They sell those insights back to the customer who put the data there in the first place and will also sell them to anyone else who can pay.

We hear this assumed, or “known about,” notion again and again. But it’s not really true.

Every vendor responding to Forrester’s Q2 2018 Global Data Ownership In Industrial IoT Platforms Online Survey was clear that the customer owned their own data, even when stored in the vendor’s platform.

More interestingly, most (11 of the 17 vendors responding to that specific question) cannot even use anonymized customer data, and almost all (16 of the 17) cannot make money from letting third parties download customer data.



Monday, 05 November 2018 15:48

IoT Platforms Do Not Steal Customer Data

Cloud storage uses a highly virtualized infrastructure to provide enterprises with scalable storage resources that can be provisioned in a pre-defined way or provisioned dynamically as required by the organization.

Enterprises are increasingly adopting cloud storage options because they need more capacity, elastic capacity and a better way to manage storage costs over time. The growing amounts of enterprise and cloud data are proving too difficult for IT departments to manage using their data centers alone.

Not surprisingly, enterprises are supplementing what they have with cloud data storage in the form of private cloud, public cloud or both. Among the benefits: the capability to leverage cloud storage pricing, which offers great budget flexibility.
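One way to picture that budget flexibility is a toy cost comparison; the prices below are invented assumptions, not real provider rates:

```python
# Illustrative only: comparing fixed on-prem capacity cost with
# pay-as-you-go cloud storage. All prices are invented assumptions.

def onprem_monthly_cost(provisioned_tb, cost_per_tb=15.0):
    # You pay for provisioned capacity whether or not it's used.
    return provisioned_tb * cost_per_tb

def cloud_monthly_cost(used_tb, cost_per_tb=23.0):
    # You pay only for what you actually store this month.
    return used_tb * cost_per_tb

# With 100 TB provisioned but only 40 TB used, elastic pricing wins
# this month, even at a higher unit price.
print(onprem_monthly_cost(100))  # 1500.0
print(cloud_monthly_cost(40))    # 920.0
```

The crossover point shifts as usage grows, which is why many enterprises blend private and public cloud rather than choosing one.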



Monday, 05 November 2018 15:47

How Does Cloud Storage Work?

Moving Forward Through Ambiguity

Companies trying to find their way to compliance with the General Data Protection Regulation are struggling to ensure that their supply chains are compliant as well. The GDPR readiness of suppliers typically remains a black box, but it’s not due to willful resistance. Matan Or-El, co-founder and CEO of Panorays, explains and spotlights a path ahead.

In order for organizations to be GDPR-compliant, they must also ensure that their vendors adhere to the same set of regulations. Enforcing GDPR standards on suppliers is no easy task, especially when some of the rules are still open to interpretation. Still, businesses must take action to ensure that each vendor is aligned with the new regulations.

As companies grow, so do their supply chains. And while this sudden surge in team members, partners and vendors can be a boon for business, it also comes with significant risk. The European Union’s recent General Data Protection Regulation (GDPR) that took effect in May aims to help keep this risk in check. But it also means ensuring that each and every supplier is held to the same standard, because in order for companies to be GDPR-compliant, their suppliers must be as well.

Of course, enforcing GDPR standards on suppliers is much easier said than done. In theory, it should be as simple as learning whether or not each vendor adheres to GDPR as part of the vetting process, but it’s not as easy as that. Steps must be taken, such as a full audit of the supply chain, to pinpoint suppliers and key areas that aren’t yet aligned. Working with several vendors simultaneously makes this task an even greater challenge, but it’s one that can’t be overlooked, as the responsibility of complying with regulations falls squarely on the shoulders of the umbrella organization.
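As a toy illustration of that vetting step, the sketch below (all vendor names and fields are hypothetical) flags suppliers that handle EU personal data but have not yet attested to GDPR alignment:

```python
# Hypothetical sketch: flagging suppliers that haven't confirmed GDPR
# alignment during a supply-chain audit. Names and fields are invented.

suppliers = [
    {"name": "Acme Hosting", "handles_eu_data": True,  "gdpr_attested": True},
    {"name": "DataWidgets",  "handles_eu_data": True,  "gdpr_attested": False},
    {"name": "PaperCo",      "handles_eu_data": False, "gdpr_attested": False},
]

def needs_review(s):
    # Only vendors touching EU personal data without an attestation
    # need follow-up; others fall outside scope for this check.
    return s["handles_eu_data"] and not s["gdpr_attested"]

flagged = [s["name"] for s in suppliers if needs_review(s)]
print(flagged)  # ['DataWidgets']
```

In practice the audit would cover far more attributes (data categories, sub-processors, transfer mechanisms), but the triage logic is the same: scope first, then attestation.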



Static RAM (SRAM) and dynamic RAM (DRAM) are different types of RAM, with contrasting performance and price levels. Both play a key role in today's technology.

SRAM is a memory chip that is faster and uses less power than DRAM.

DRAM is a memory chip that can hold more data than an SRAM chip, but it requires more power.

First, some background. Random access memory (RAM) is a semiconductor device placed on a processor that stores variables for CPU calculations. RAM provides memory locations for requested data (registers). The CPU receives a data read instruction with the data’s memory address or location. The CPU sends the address to the RAM controller.


The controller, in turn, sends the address to the proper pathway, opening path transistors and reading each capacitor value. The read data is transmitted back to the CPU’s cache.

The speed of these read/write operations is determined by the memory’s timings. Faster timings with less lag between them result in faster access times and lower latency; slower timings result in lower performance and higher latency. Bandwidth also affects performance: the larger the bandwidth, the more data per second the RAM can move.
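As a rough illustration of how latency and bandwidth interact, here is a minimal sketch (with invented example numbers; none of these values come from the article) that models a read as a fixed access latency plus a bandwidth-limited transfer:

```python
# Hypothetical illustration: rough transfer-time model for a RAM read.
# All numbers are invented for the example, not taken from the article.

def transfer_time_us(payload_bytes: int, latency_ns: float,
                     bandwidth_gbps: float) -> float:
    """Estimate read time as fixed access latency plus payload size
    divided by bandwidth. Returns microseconds."""
    bandwidth_bytes_per_ns = bandwidth_gbps  # 1 GB/s == 1 byte/ns
    return (latency_ns + payload_bytes / bandwidth_bytes_per_ns) / 1000

# Lower latency and higher bandwidth both shorten the read.
slow = transfer_time_us(64_000, latency_ns=80, bandwidth_gbps=10)
fast = transfer_time_us(64_000, latency_ns=50, bandwidth_gbps=25)
print(slow, fast)
```

For small payloads the fixed latency term dominates (which is where fast timings matter most); for large payloads, bandwidth dominates.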



Monday, 05 November 2018 15:40


Tabletop recovery exercises are great, but they only get you so far.

In today’s blog, we’ll sketch out the full range of disaster recovery exercises that organizations can conduct, explain the limitations of a tabletop-only exercise regimen, and point you toward some tips that can help you raise your recovery-exercise game. 

Related on MHA Consulting: Beginner’s Guide to Recovery Exercises


Disaster Recovery Exercises in a Nutshell

Disaster recovery (DR) exercises are the activities organizations conduct to see how they would perform if faced with a real disaster that threatened to disrupt their operations.

DR exercises are not training activities. They are designed to reveal gaps in your plans and preparedness so you can rectify them before a real emergency strikes. The practice gained during an exercise is a fringe benefit rather than its primary purpose.

Typically we divide exercises into business continuity DR exercises, which focus on the recoverability of business processes, and IT/DR exercises, which address disasters threatening IT systems.



As a new year approaches, take some time to be sure that your communication and disaster response plan is up to date and ready for 2019.

Put your agency in the best possible position to ensure continuity and appropriate response.

Let’s take a look at what you might expect in 2019, sharing some crisis communication tips along the way. We’ll also review best practices that took place related to Hurricane Florence.

Watch Trends, Strengthen Your Plan

While changes and continuity threats affect different agencies in different ways, you should stay abreast of these anticipated trends and account for them in your plan. The time you spend improving your plan and closing any gaps in it will pay off in effectiveness and in a more responsive strategy.



Navigating Privacy and Compliance

As the recent data breach by Facebook has made clear, meeting strict GDPR guidelines is difficult. Cory Cowgill, CTO at Fusion Risk Management, discusses GDPR requirements and their impact on data retention and security.

If you are part of nearly any enterprise organization, then May 25, 2018 is likely burned into your memory forever. That was the date when a new landmark privacy law, the General Data Protection Regulation (GDPR), took effect in the European Union (EU). Many articles were written leading up to the law taking effect, and many have been written since. Nowadays, most articles have headlines like, “Taking Ownership in a Post-GDPR Age,” and, “Solving the Remaining Challenges of GDPR.”

Quite clearly, GDPR remains top of mind for business leaders all over the world – and meeting the strict guidelines for compliance remains a struggle.

GDPR consolidated all privacy laws in the EU into one consistent regulation. It expanded the privacy rights granted to individuals in every EU country and placed many new obligations on organizations that market to, track or handle personal data of individuals residing in the EU, no matter where the organization is located. The last bit is key: even if you are a U.S.-based company, if you are storing or have access to the personal data of individuals who live in the EU, GDPR regulations hold your business responsible.



Friday, 02 November 2018 14:16

Risky Business In A GDPR World

(TNS) - It’s uncertain whether a Department of Veterans Affairs stockpile of drugs and medical supplies — intended to be used in a crisis — is equipped to handle future terrorist attacks or biological or natural disasters, according to a report released Wednesday by the VA Office of Inspector General.

After the 9/11 terrorist attacks, the VA created a stockpile that could be used to treat veterans, VA employees and others in case of another mass casualty incident. Through what’s called the Emergency Cache Program, drugs and medical supplies, valued at about $44 million, are stored in stockpiles at 141 VA hospitals across the country.

IG staff inspected 26 stockpiles in February and found expired and missing drugs. The department is also skipping some of its annual inspections and activation drills, inspectors reported.



Unitil turns to Sungard AS for BC plans


This time of year, the trees of New England put on a spectacular color show as the cooler fall weather brings the annual turning of the leaves. Tourists affectionately known as “leaf peepers” come from around the country to view the stunning red, orange and gold foliage as a prelude to the holiday season.

But something tourists probably didn’t expect last week was a rash of tornadoes ripping through the Northeast region. At least three tornadoes touched down in three different New England states, leaving downed power lines, ripped off rooftops, fallen trees and shattered windows in their paths.

Freak catastrophes like this are one of the reasons Unitil, a public utility holding company with affiliates that serve approximately 105,000 electric customers and 81,300 natural gas customers in Maine, New Hampshire and Massachusetts, turned to Sungard Availability Services (Sungard AS) for Business Continuity plans earlier this year.

Unitil - Energy for Life

Through its work with Sungard AS, Unitil gained BC plans for 19 business units and a DR plan for IT operations, each essential to achieving its overarching goal: to fold all the discrete BC deliverables into a holistic, companywide business resilience mission.

When a 2008 ice storm devastated the above-ground utility infrastructure across the U.S. northeast, it left millions without power. Like other New England utility companies, Unitil dealt with the extensive damage. Taking the lessons learned to heart, in 2009 the company established a dedicated team to focus exclusively on emergency management and business continuity as part of a multi-year plan.



Bringing in a business continuity (BC) consultant can make your life easier, but it’s not necessarily easy to find one that is right for your organization—or to know when and how to make the best use of their expertise.

In today’s post, we’ll discuss situations when you might find it especially helpful to bring in a BC consultant, areas where they can best help you, and how to find one that meets your company’s needs.

In today’s world, organizations are increasingly turning to third-party vendors to handle areas that are outside their core business functions. The same factors that make this approach advantageous for functions such as security, accounting, cloud-based IT, and software as a service (SaaS) are also true for business continuity.

Very few companies have business continuity management (BCM) as one of their core competencies, although all serious organizations need a BCM program.



Thursday, 01 November 2018 14:08

Client’s Guide to Hiring a BC Consultant

Earlier this year, space scientists discovered that the universe is expanding faster than they originally thought, based on findings from the Hubble Space Telescope. This special telescope measures the distance to other galaxies by evaluating a type of star that dims and brightens in a predictable pattern. It’s important to know that galaxies are moving farther away from our own at an accelerating pace because, in a few trillion years, this could lead to a dangerous cooling of the universe as well as stars running out of fuel. Thank goodness that’s a long way off and there’s plenty of time to prepare.

Something else expanding at an alarming rate is data; the content we all generate and consume online is accelerating more rapidly than ever before. By the time 2020 rolls around, every person online will create around 1.7 megabytes of new data every second. Add to that the 44 zettabytes (44 trillion gigabytes) of data that will already exist in the digital universe by then, and we’re talking about an expansion rate that could rival that of the universe – in theory, that is.
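A quick back-of-the-envelope check of those figures, assuming roughly four billion people online (an assumption not stated in the article):

```python
# Back-of-the-envelope check of the data-growth figures quoted above.
# The online-population estimate (~4 billion) is an assumption.

MB = 10**6
ZB = 10**21

per_person_per_sec = 1.7 * MB          # bytes per person per second
online_population = 4 * 10**9          # assumed
seconds_per_day = 86_400

new_data_per_day_zb = (per_person_per_sec * online_population
                       * seconds_per_day) / ZB
print(round(new_data_per_day_zb, 2))   # 0.59 -- roughly 0.6 ZB per day
```

At that rate, new data each day would amount to more than one percent of the entire 44 ZB digital universe, which is why archiving at cloud scale is a growth business.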

ZL Technologies blog

Sungard AS’ Enterprise Cloud Service offers ZL Technologies multiple layers of protection and failover options to keep business up and running, plus the flexibility to scale to a customer’s environment, allowing them to expand as they grow.

Fortunately, there’s plenty of data space available in the cloud. Cloud archiving company ZL Technologies, for example – which started out two decades ago as an email security company – archives all those gigabytes of data into cloud environments backed up and secured by Sungard Availability Services (“Sungard AS”).



Maturing Risk Management in Light of COSO Updates

Recent updates to the COSO framework serve to clarify the significance of the connection between risk, strategy and performance. Protiviti’s Jim DeLoach discusses how organizations can get the most out of their ERM programs and three keys to advancing ERM.

In 2017, the Committee of Sponsoring Organizations (COSO) of the Treadway Commission released its updated framework on Enterprise Risk Management. While the concepts in the update aren’t new, the emphasis is markedly different, with a focus on what’s important in maximizing the value of ERM. I would argue that, in light of the updated framework, all companies should take a fresh look at their risk management in the new digital era in which they do business.

Since the 2007-2008 financial crisis, many ERM implementations have been oriented around answering three questions:

  1. Do we know what our key risks are?
  2. Do we know how they’re being managed?
  3. How do we know?

In responding to these three questions, executive management and boards in some companies have made progress in differentiating the truly critical enterprise risks from the risks associated with day-to-day business operations.



Thursday, 01 November 2018 14:03

Does ERM Really Matter In Your Organization?

The benefits of a strong company culture are well known – engagement, productivity and loyalty, to name a few. But, how can you ensure emotional investment from employees who aren’t in the office?

The biggest and most powerful global companies are often the ones with the clearest sense of "company culture," according to American performance expert and consultant Chris Dyer, whose latest book, The Power of Company Culture, won the award for best business book at the 2018 Independent Press Awards. However, this raises a tricky question for firms that have large remote workforces: How do you instill a set of company values when most of your staff never see each other?

According to a study by Regus parent group IWG earlier this year, 70% of people work outside one of their main company offices at some point in the week, with 53% doing so for half of the week or more. And research in the Harvard Business Review found that remote staff are more likely to feel mistreated and left out by colleagues.

So, is it possible to have a company culture when your employees aren’t necessarily based in the same place? And if so, what does it look like? Chris Dyer, who has spent the last 20 years consulting for leading organizations including Citigroup, Honda and Century 21, has the answers…



Thursday, 01 November 2018 14:01

Great bosses do this for their remote employees

(TNS) — Muscatine County residents currently living in floodplain areas or near one should contact Planning and Zoning to see if required flood insurance is in their future, according to Administrator Eric Furnas.

“It’s best for the homeowners, if they’re concerned, to look it up and find out,” Furnas said.

New floodplain maps were generated by FEMA. The information is available through websites including FEMA and Iowa Department of Natural Resources, but Furnas said the maps may be difficult to read if people are unfamiliar with the Geographic Information System.

The Flood Insurance Study maps determine which properties are in the floodplain and also which property owners will have to purchase flood insurance as a result. He explained that the new floodplain maps have some properties either entering the floodplain or exiting it and to generate a list of affected homeowners would take “thousands of hours.” The best way for residents to find out if they’re affected is to contact county planning and zoning.



Forget boasting about sleepless nights and round-the-clock schedules – the secret to meaningful, productive and creative work is a good night’s sleep, says Matt Burgess

At the top of the business world, it’s fashionable not to sleep, and to boast about having worked more than 100 hours per week. Tesla and SpaceX CEO Elon Musk has claimed there have been times when he hasn’t left his factory “for three or four days,” former Yahoo! CEO Marissa Mayer says she could work a 130-hour week by sleeping only four hours per night, US president Donald Trump has claimed to sleep for just three to four hours per night, and Disney boss Bob Iger apparently wakes up at 4:30am to start his day.

In the UK, the national Sleep Council has found the average person goes to bed at 11:15pm and gets six hours and 35 minutes of sleep per night. (The council recommends adults get between six and nine hours of sleep per night.)

“Nobody can function at their best [without enough sleep],” says Neil Stanley, author of How to Sleep Well and a sleep expert who has researched the topic for 36 years. “Essentially, if you’ve been awake for 17 hours a day, you are as impaired as if you are over the drunk-driving limit. Every aspect of your health is affected by sleep.”

Stanley is not alone in warning against the negative impact skipping sleep can have. Consistently skipping sleep causes the body to wear down. The UK’s National Health Service (NHS) says regular poor sleep can put people at greater risk of obesity, heart disease and diabetes – not to mention the mental impact of fatigue, a short temper and lack of focus.



Thursday, 01 November 2018 13:51

How well did your employees sleep last night?

A COMSAT Perspective

We’ve seen it happen all too often: large populations devastated by natural disasters such as earthquakes, tsunamis, fires and extreme weather. As we’ve witnessed in the past, devastation isn’t limited to natural occurrences; it can also be man-made. Whatever the event, natural or man-made, first responders and relief teams depend on reliable communication to provide those most affected the help they need. Dependable satellite communication (SATCOM) technology can be the difference between life and death, between expedient care and delay.

Devastation can occur in the business community, as well. For businesses and government entities that depend on the Internet of Things (IoT), as most do, organizations can face tremendous loss without a communication, or continuity, plan.

How do we stay constantly connected by land, sea or air in vulnerable situations? Today’s teleport SATCOM technology provides reliable, scalable and cost-effective operational resiliency for anyone who depends on connectivity, including IoT.

Independent of the vulnerabilities of terrestrial land lines, today’s modern teleports provide a variety of voice and data options that include offsite data warehousing, machine-to-machine (M2M) access, and secure, reliable connections to private networks and the World Wide Web.

Manufacturing, energy, transportation, retail, healthcare, financial services, smart cities, government and education are all closing the digital divide and becoming more and more dependent on connectivity to conduct business. They all require disaster recovery systems and reliable communications that only satellite communications can provide when land circuits are disrupted.

COMSAT, a Satcom Direct (SD) company, with the SD Data Center, has been working to provide secure, comprehensive, integrated connectivity solutions to help organizations stay connected, no matter the environment or circumstances. COMSAT’s teleports, a critical component in this process, have evolved to keep pace with changing communication needs in any situation.

“In the past, customers would come to COMSAT to connect equipment at multiple locations via satellite using our teleports. Today, the teleports do so much more. They act as a network node, data center, meet-me point and customer support center. They are no longer a place where satellite engineers focus on antennas, RF, baseband and facilities. Today’s teleports are now an extension of the customer’s business, ensuring they are securely connected when needed,” said Chris Faletra, director of teleport sales.

COMSAT owns and operates two commercial teleport facilities in the United States. The Southbury teleport is located on the east coast, about 60 miles north of New York City. The Santa Paula teleport is located on the west coast, 90 miles north of Los Angeles.

Each teleport has operated continuously for more than 40 years, since 1976. The teleports were built to high standards for providing life and safety services, along with a host of satellite system platforms from meteorological data gathering to advanced navigation systems. As such, they are secure facilities connected to multiple terrestrial fiber networks and act as backups for each other through both terrestrial and satellite transmission pathways.

Both facilities are data centers equipped with advanced satellite antennas and equipment backed up with automated and redundant electrical power sources, redundant HVAC systems, automatic fire detection and suppression systems, security systems and 24/7/365 network operations centers. The teleports are critical links in delivering the complete connectivity chain.

“Our teleport facilities allow us to deliver global satellite connectivity. The teleports provide the link between the satellite constellation and terrestrial networks for reliable end-to-end connectivity at the highest service levels,” said Kevin West, chief commercial officer.

COMSAT was originally created by the Communications Satellite Act of 1962 and incorporated as a publicly traded company in 1963 with the initial purpose to serve as a public, federally funded corporation intended to develop a commercial and international satellite communications system.

For the past five decades, COMSAT has played an integral role in the growth and advancement of the industry, including being a founding member of Intelsat, operating the Marisat fleet network, and founding the initial operating system of Inmarsat from its two Earth stations.

While the teleports have been in operation for more than 40 years, the technology is continuously upgraded and enhanced to proactively support communication needs. For many years, the teleports provided point-to-point connectivity for voice and low-rate data.

Now data rates are being pushed to 0.5 Gbps with thousands of remotes on the network. The teleports also often serve as the Internet service provider (ISP). They have their own diverse fiber infrastructure to deliver gigabytes of connectivity versus the megabytes of connectivity that were required not so long ago.

All in the Family

In addition to growing the teleport’s capabilities through technological advancements, COMSAT is now a part of the SD family of companies, which further expands its offerings.

SD Land and Mobile, a division of Satcom Direct, offers a wide variety of satellite phone, mobile satellite Internet units and fixed satellite Internet units. SD Land and Mobile ensures SATCOM connectivity is available no matter how remote the location or how limited the cellular and data network coverage may be.

Data security is a critically important subject today. The SD Data Center, a data center wholly owned by Satcom Direct, brings enterprise-level security capabilities to data transmissions in the air, on the ground and over water. The SD Data Center also provides industry-compliant data center solutions and business continuity planning for numerous industries including healthcare, education, financial, military, government and technology.

“Together, we deliver the infrastructure, products and data security necessary to keep you connected under any circumstance. We have a complete suite of solutions and capabilities for our clients,” said Rob Hill, business development.

Keeping Up with Market Needs and Trends

COMSAT’s pioneering spirit is reflected in the company’s ongoing analysis of, and adjustment to, current market needs and trends. The aero market is currently the fastest-growing market, with new services and higher data rates being offered almost daily. The maritime, mobility and government markets are thriving as well.

No matter what direction the market is headed, COMSAT’s teleports and the SD family of companies will be ready to help clients weather the storm. comsat.com

To learn about SD Land & Mobile, head over to satcomstore.com

For additional information regarding the SD Data Center, access sddatacenter.com

COMSAT’s provision of teleport services are managed by Guy White, director of US teleports. As station director for COMSAT’s Southbury, Connecticut, and Santa Paula, California, teleports, Mr. White is responsible for the day-to-day operations and engineering of both facilities, including program planning, budget control, task scheduling, priority management, personnel matters, maintenance contract control, and other tasks related to teleport operations.

Mr. White began his career in the SATCOM industry in 1980 as a technician at the Southbury facility. Since then, he successively held the positions of senior technician, lead technician, maintenance technician and customer service engineer at Southbury, until he assumed the position of operations manager in 1992 at COMSAT’s global headquarters in Washington D.C. He returned to Southbury as station engineer in 1995 and has served as station director of the Southbury teleport since May of 2000. Mr. White’s responsibilities expanded to include the Santa Paula teleport in May of 2008.

More and more higher education institutions are including mass notifications as a vital component of their crisis communications plans.

They’ve discovered that mass notifications are an efficient and effective way to communicate with campus stakeholders in a crisis, such as an active shooter scenario or a weather emergency.

However, campuses are missing out. Most can maximize their use of mass notification systems beyond crises and use notifications creatively for important, non-emergency communication as well. Let’s look at some ways in which every division of a campus can take better advantage of a mass notification system and bolster communication with stakeholders, including students, faculty and staff, families, media, alumni, governing board, community members and others.
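As a hypothetical sketch of that idea, the snippet below reuses the audience-to-channel groups a campus might already maintain for emergencies to route a routine announcement (all names and channels are invented):

```python
# Hypothetical sketch: routing a non-emergency notice through the same
# mass-notification groups a campus already maintains for crises.

groups = {
    "students": ["sms", "email", "app"],
    "faculty":  ["email"],
    "alumni":   ["email"],
}

def notify(audiences, message):
    # Reuse the emergency channel map for routine announcements,
    # fanning the message out to every channel each audience uses.
    return [(a, ch, message) for a in audiences for ch in groups[a]]

sent = notify(["students", "faculty"], "Registration opens Monday")
print(len(sent))  # 4 deliveries: 3 student channels + 1 faculty channel
```

Keeping routine traffic on the same system also means stakeholders already recognize the channel when a real emergency alert arrives.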



Reducing Cost and Complexity, Increasing Accuracy and Reliability


As we mark the 10-year anniversary of the 2008 banking crisis – considered by many to be the country’s worst financial crisis since the Great Depression — the issue of financial compliance is trending. In this article, Verint’s VP of Financial Compliance Strategy, Phil Fry, explores the current state of the industry. What more can companies do to manage risk, control costs and turn regulatory burden into strategic advantage?

With the need to comply with regulatory mandates such as the Dodd-Frank Act in the U.S. and MiFID II in the EU, effective, reliable and accurate communications capture — as well as ongoing, proactive communication monitoring and fraud analysis — is essential in today’s elevated compliance era.

The cost of regulatory compliance has risen dramatically in recent years, driven largely by the addition of staff dedicated to testing, monitoring and other oversight responsibilities. Recent research shows that complying with the multiplicity of laws and regulations governing financial transactions costs the banking and financial services industry 5 to 10 percent of annual sales and consumes scarce senior management time.

Many institutions are coming to the realization that continuing to throw endless amounts of resources at the compliance conundrum is not a sound business strategy. Researchers at McKinsey & Company, in a report on this topic, state, “At many financial institutions, compliance and risk practitioners are beginning to question the sustainability of the resource-intensive approach to managing compliance risks.”



Wednesday, 31 October 2018 14:32

Reinventing Financial Compliance

Charlie Maclean-Bristol, FBCI, looks at the physical effects that occur when individuals are faced with an incident and the methods that can be used to counteract them.

You know the feeling: you are told the incident you never wanted to happen has just occurred, which sets off the sinking feeling in your stomach, the clammy hands and the trickle of sweat down your back. Then the adrenaline kicks in and you are ready to respond. For some, this is their moment (think Mayor Giuliani after 9/11) and they thrive on high-adrenaline incidents. All the mundane planning is over; this is their moment to respond and to lead or support their organization to survival, victory or even opportunity! Others run around in circles in panic, not knowing where to start, or are paralysed ‘rabbits in the headlights’ doing nothing, knowing that all eyes are on them and the team is looking for leadership, for the response plan to be activated and for the fight back to begin.

In their classic paper 'Designs for Crisis Decision Units', Carolyne Smart and Ilan Vertinsky outline the characteristics of an incident, which are:

  • High level of emotional and physical stress;
  • Limited amount of time for response – leading to further pressure and stress;
  • Threat to high priority goals – which again leads to stress.



Minimizing D&O Cyber Liability


Companies of all sizes face constant cyber threats, ranging from corporate espionage and the piracy of proprietary information to digital thieves stealing funds from online accounts. While directors and officers must be concerned about these cyber threats to corporate assets, in recent years, widespread data breaches – particularly those involving consumer information – have emerged as a significant source of liability for directors and officers themselves. The technological safeguards and procedures for responding to cyberattacks are complex and often involve sophisticated technologies. Nevertheless, officers and directors must understand the steps the company is taking to protect its digital assets.

Recent class action litigation in the wake of catastrophic data breaches has demonstrated how potential litigants may seek to hold directors and officers liable when a breach of corporate security measures occurs.

For instance, in September 2017, credit monitoring and reporting firm Equifax announced a cyber “incident” which may have disseminated personal and credit information of as many as 143 million U.S. customers. One securities class action complaint filed in the wake of the breach asserted a direct nexus between oft-pled allegations that the company failed to maintain adequate measures to protect its data systems and the precipitous decline in Equifax’s stock price following the announcement of the breach. This connection between a data breach and a decline in stock price creates demonstrable damages, even though the potential harm resulting from misuse of the misappropriated information is incalculable.



Urbanization is alive and growing: our cities are tipped to house an additional 2.4 billion people over the next 30 years. “Building Sustainable and Resilient Cities” is the theme of this year’s United Nations World Cities Day, and ISO standards are proving to be essential tools to do exactly that.

How do you enhance a city’s attractiveness, and preserve its environmental, social and cultural assets, when faced with a growing population?

Since becoming the first community in Europe to be certified to ISO International Standard ISO 37101, Sustainable development in communities – Management system for sustainable development – Requirements with guidance for use, Sappada in Italy now benefits from better managed local complexities, new initiatives for education and environmental protection, new ways of promoting their area and a system to measure and monitor sustainability performance – all the while increasing community engagement.



Tuesday, 30 October 2018 14:30

Writing the future on World Cities Day

(TNS) — For days, Lorraine DePriest has been patching the hole in her abdomen with only toilet paper and clear packing tape.

They’re the closest things to medical supplies the 55-year-old in Panama City public housing has after Hurricane Michael swept through and destroyed the small monthly stockpile of colostomy bags she has relied on for years.

When the storm’s driving rain soaked through the bags she had and mildew crept in, DePriest washed and reused her single remaining bag until it became unusable five days later.

“I wasn’t going to put mildew on this,” she said, tapping the opening covered by wadded tissue and tape. “This is a desperate need right now. I don’t care about anything else.”



(TNS) — A child rides his bike about 40 feet, turns and gives a few more pushes before it's time to turn around again. He does this over and over, zipping past an open doorway where a man sleeps inside one of a motel's run-down units. The bed's box spring is directly on the floor. A mattress is turned upright.

Many rooms here brim with bulging garbage bags and Rubbermaid storage bins, holding the remnants of what was left after Hurricane Michael ravaged homes and upended lives Oct. 10.

By a door in one room is a part of a car engine. For many in this post-hurricane world, what's not broken, discolored in black mold or under rubble, suddenly seems irreplaceable and of value.



Embedding business continuity into your organization is a fundamental step to integrate business continuity awareness and practice into business as usual activities and organizational culture. This stage of the Business Continuity Management (BCM) lifecycle is a critical one and also one of the most difficult - as changing the culture of an organization is not always easy.

Here are some top tips to help you embed business continuity within your organization:

1. Use a collaborative approach

Collaboration is an essential factor at every stage of the BCM lifecycle. When it comes to embedding business continuity you can use this approach to get all teams and departments on board and effectively change the culture of your organization.

Make sure you involve all teams in the process, including top management, and that you communicate the benefits and opportunities that business continuity brings to the table. This will help each team understand how they can benefit from business continuity and how collaborating with other teams can make them more resilient.

The BCI offers a wide range of thought leadership resources that show the benefits of business continuity in the short and long-term. Why not share these with your teams and have them come together to discuss how business continuity can benefit their activities?



The number of connected devices worldwide is growing exponentially and this “Internet of Things” affects every area of our lives from electricity to agriculture. A recently published International Standard will help ensure these systems are seamless, safer and far more resilient.

From autonomous vehicles to precision agriculture, smart manufacturing, e-health and smart cities, the Internet of Things (IoT) is already everywhere – and growing. It involves integrating “things” within IT systems, thus enabling electronic devices to interact with the physical world. 

The applications are endless, but as the phenomenon explodes, so too does the need for trust, security and a base from which the technology can be developed further, with robust measures and systems in place.

ISO/IEC 30141, Internet of Things (IoT) – Reference architecture, provides an internationally standardized IoT Reference Architecture using a common vocabulary, reusable designs and industry best practice. 



Organizations often pressure their business continuity consultants to report that their BC programs are better than they really are. This stinks for the consultant, and it’s not good for the company, either.

Usually, in writing these blogs I try to share things I think might help the reader do better at BC.

Today, I want to write one that’s more about getting something off my chest.

I’m doing this from the point of view of a BC consultant with 25 years’ experience who is regularly hired by leading companies from all around the country to advise them about business continuity.

What is it I want to get off my chest? I’ll tell you in a moment. First, however, I’d like you to join me in a little thought experiment.



Franklin Mint Federal Credit Union has provided financial products and services to members in the greater Philadelphia region and beyond since 1970. FMFCU houses over $1 billion in assets, making it the largest financial institution in Delaware County and the 10th largest in Pennsylvania. With 375 employees spread across 40 locations—many of them freestanding branch buildings—FMFCU needs a reliable mass notification system.

John Hargrove, the company’s CIO and VP of IT, has been with the credit union for nearly 30 years.



Too many business continuity professionals lack a clear and deep understanding of their own business continuity management (BCM) programs. In today’s post, we’ll examine the benefits of accurate program self-knowledge and suggest ways that you can obtain such knowledge and benefit from it.


Many sages in the past recognized the importance of self-knowledge.

The Greek philosopher Aristotle said, “Knowing yourself is the beginning of all wisdom.” His forerunner Socrates said, “The unexamined life is not worth living.” And one of the most enduring sayings from ancient Greece is, “Know thyself.”

Of course, obtaining self-knowledge is not necessarily easy. As Ben Franklin said, “There are three things extremely hard: steel, a diamond, and to know one’s self.”

In my opinion, all of these sayings hold true in the context of business continuity. It is both very important for business continuity professionals to have a clear and accurate understanding of their programs, and sometimes hard for them to obtain it.



Web applications are increasingly a gateway to successful cyber attacks. In this article Aatish Pattni looks at the threats posed to web application security - and how these can be successfully addressed.

One increasingly targeted component of organizations' IT estates is web applications. Recent victims of attacks targeting web applications have included British Airways, leading to the theft of customer payment card details; Equifax, where over a million records containing personally identifiable information were stolen; the GitHub software developer platform, which was taken down in the largest ever DDoS attack; and a number of European banks, who saw their Internet banking applications taken offline in the WebStresser attack.

These attacks on web applications can be grouped into two main categories: data breaches that aim to exfiltrate sensitive data for re-use or re-sale; and distributed denial of service (DDoS) attacks that are designed to take websites offline to impact revenue. Both of these methods offer criminals a potentially easy, low-cost, high-reward target, but it's not as if this is a new attack vector. So, what has prompted this recent escalation in the web application war?



Thomson Reuters and ACAMS report shows transformative changes in practices and processes from CDD Rule


MINNEAPOLIS-ST. PAUL, Minn. – The commencement of anti-money laundering (AML) requirements for financial institutions issued in May 2018 has caused increasing numbers of firms to shift their hiring and focus of compliance staff toward more efficient customer due diligence (CDD) practices, rather than addressing and managing regulatory change.

That’s one of several key takeaways from the 2018 Thomson Reuters U.S. Anti-Money Laundering Insights Report. The report, compiled with the Association of Certified Anti-Money Laundering Specialists (ACAMS), provides data on how financial institutions are addressing these challenges in the current environment.

The certainty provided by the Financial Crimes Enforcement Network’s (FinCEN) CDD Rule already has had – and will continue to have – a dramatic impact upon the operations and practices of firms: the survey found that 28 percent of respondents anticipate an increase in staffing for AML compliance purposes, a large rise compared to 8 percent in 2017. This focus has led to fewer CDD or AML regulatory enforcement actions, with 22 percent of organizations experiencing regulatory action, down from 31 percent in 2017.



When people turn 50, they sometimes do unusual things to celebrate the event. The Huffington Post published a list of things everyone should experience at least once when they hit the half-century mark: Try surfing. Take Tango lessons. Adopt a pet from a shelter. Camp on a remote beach in Greece. Go without internet for a month. See a Broadway show.

But for most, turning 50 is simply a time to reassess their purpose in life and embrace maturity. Even the ATM industry, which recently celebrated the 50th birthday of the ATM, commissioned a study by the ATM Industry Association to figure out how to reinvent itself. While the “next gen” ATM may look more like your smartphone, with almost half a million ATMs working in the U.S., it’s impossible to view the ATM as a thing of the past. Service, parts, security, branding and system management are paramount to today’s financial institutions offering ATMs to provide the best ROI.

Surrey, UK – This is where a company like Cennox comes in. Cennox is a global leader in banking and retail support services, with solutions ranging from technology services to specialized equipment and parts, security and alarm solutions, and branch transformation. Relied upon by the world’s leading financial institutions, Cennox is the partner of choice for thousands of banks, financial institutions, commercial operations and retail organizations. Think of Cennox as a “fountain of youth” for its clients.



You worry about how to protect your lone workers and employees who don’t always work in the office. It’s understandable – bad things can happen to them, and sometimes they do. Lone workers all over the world face dangers that those in the office probably don’t even think about. As such, if you’re going to have employees in the field or working by themselves, then lone worker safety must be a priority for your organization.

Smart lone workers must be vigilant in knowing what to do when faced with a serious situation. They will face things like falls, accidents, injuries, personal threats, traffic emergencies, natural disasters, and sudden weather changes.

Do You Know How To Protect Your Lone Workers?

The most dangerous jobs often require more than one employee to be on site, specifically for safety reasons. But even a seemingly safe environment can have hidden dangers. Employees should always have a good understanding of the risks and dangers of their job. You have a duty of care to train them specifically in how to stay safe in their particular job.



Wednesday, 24 October 2018 14:08


One of the most notable features about NFPA’s standards development process is that it is a full, open, consensus-based process that encourages public participation in the development of its standards. A great way for your voice to be heard is to submit a Public Input (a suggested revision to a new or existing NFPA standard) during a Standard’s revision cycle. It is free, easy, and done through our  submission system.

The following Standards are accepting public input for their next revision cycle:



(TNS) — If an earthquake, volcanic eruption, wildfire or flood hits the Yakima Valley, you might not see firefighters or paramedics in your neighborhood for a while.

The experience in other disasters has shown that professional first responders can be overwhelmed as they deal with urgent needs, or they might not be able to get to where people need help because roads and bridges are out.

Instead, help for your neighborhood may come from people in green vests and hard hats like Paul Jenkins, a volunteer coordinator with the county’s Community Emergency Response Team.

“We think the government is going to send the cavalry in any minute,” Jenkins said. “Sometimes, the individual citizen can get there before the cavalry.”

CERT, as the program is known, provides training so people can be better prepared for disasters themselves, as well as help their neighbors and others. They can also assist first responders by staffing an emergency operations center, freeing up personnel for other duties, said Horace Ward, senior emergency planner with the Yakima County Office of Emergency Management.



8 Tips to Implement Now

Shane Whitlatch, EVP at FairWarning, outlines the key controls companies should have in place to quickly and confidently respond to an OCR audit should they be selected.

The best time to prepare for an audit is before you’re in one. Fortunately, requirements for various regulations are widely available so that there’s no guesswork involved and you can make sure you’re compliant ahead of time. So, you can start preparing for an Office for Civil Rights (OCR) HIPAA audit long before the notification letter hits your mailbox.

Even if you aren’t chosen for a random HIPAA audit, you can still face penalties for noncompliance if you experience a patient complaint or a breach. Taking the opportunity to proactively strengthen your privacy and compliance program will help you maintain control of your patient data and avoid costly and time-consuming compliance headaches.



Monday, 22 October 2018 14:53

Advance Preparation For An OCR HIPAA Audit

Increase your business continuity (BC) knowledge and expertise by checking out this list of an even dozen top BC resources.

Business continuity is a sprawling, fast-changing, and challenging field. Fortunately, there are a lot of great resources out there that can help you in your drive to improve your knowledge and protect your organization.

In today’s post, I round up a “dynamic dozen” resources that you should be aware of in your role as a business continuity professional.

Some of these might be old friends and others might be new to you. In any case, you might find it beneficial to review the websites and other resources on this list as you update your strategies, perform risk assessments, and identify where to focus your future efforts.

Read on to become a master of disaster. And remember that the most important resource in any BC program is capable, knowledgeable, and well-educated people.



Patrick Smith traces the history of IT disaster recovery and explains why he believes that it is time for the discipline to be pensioned off alongside RTOs and RPOs.

For businesses today, regardless of industry, the outage of a key IT system ranks among the most serious technology challenges they can face. In fact, the Business Continuity Institute’s 2018 Horizon Scan Report estimates that unplanned outages are the third biggest risk to businesses globally. Beyond the financial ramifications of downtime, the long-term reputational consequences are significant as customer confidence is dented.  Rebuilding trust after a major IT failure can be a multi-year process.

In the 1970s when data center / centre managers first came into being, they began to understand how dependent on computers their organizations would soon become. With that in mind they instigated the notion of disaster recovery – an insurance policy should one or more applications, storage components, databases or network elements go offline.

As IT developed into the 1990s and the dawn of the Internet era, our connectivity to and reliance upon computer systems became far more intense. As computers began to undertake real-time processing, not just batch processing, it was even more important that IT did not miss a beat. While there were global incidents caused by earthquakes, floods and other natural disasters, downtime was more likely to occur due to challenges with utilities, technology change or human error.

Two closely linked disciplines emerged: business continuity, or how the firm kept delivering its goods and services in case of an incident, and disaster recovery, otherwise known as how to get the IT environment back online after a problem.



Bottom line:

Dell EMC, as a leading scale-out NAS vendor, offers a top solution with this unit, which delivers latency of less than 1 ms and is geared to support I/O-intensive unstructured data workloads. It does this using a modular architecture that can handle a wide variety of workloads using flash, disk, and the cloud, all from a single file system architecture.

There is no doubt that the Isilon F800 delivers impressive performance numbers. Pricing, though, may be an issue for some, as buyers are paying a substantial sum for performance and capacity. Unfortunately, the company declined to disclose exact figures.

Company description:

As a member of the Dell Technologies family of businesses, Dell EMC is a private company providing infrastructure that helps organizations protect information. This includes hybrid cloud and big data solutions incorporating converged infrastructure, servers, storage, and cybersecurity technologies. Dell EMC was formed in 2016, bringing together Dell (founded 1984) and EMC (founded 1979).



Data governance plays a key role in a meaningful analytics strategy. While the idea doesn’t spark much excitement, its importance should not be overlooked or underestimated.

A Hare was making fun of the Tortoise one day for being so slow.

‘Do you ever get anywhere?’ he asked with a mocking laugh.

‘Yes,’ replied the Tortoise, ‘and I’ll get there sooner than you think.’

Aesop’s fable, The Hare & the Tortoise, seems timelier than ever for business operations in the digital world. We rush into using the latest and greatest tools, and we often forget to take a step back and make sure we are doing it the right way.

Understanding the Road to Meaningful Insights

Most companies, across industries, use cost reduction and operational efficiency as key performance metrics. Like the hare in the age-old fable, businesses race to the finish line (operational efficiency) without understanding the impact or scope.

Visualization tools such as Tableau, Microsoft Power BI and QlikView have become industry standards for identifying operational efficiencies. However, simply seeing your data will not provide the long-term benefits companies expect. Taking the time to understand the road to meaningful data insights can lead to massive cost savings in building a long-term solution.



Used to be, hackers would spend most of their time hitting big companies with deep pockets and troves of customer data. 

But the times have changed. Launching a hack is cheaper and easier than ever before. Because of this, lots of hackers are playing small-ball by going after small businesses.

Their calculations make sense. A ransomware payout might only be a few hundred dollars, but if hackers can hit hundreds of businesses simultaneously, their ill-gotten loot adds up pretty quickly. 



Disaster recovery is a headache that every IT department has suffered and in this arena, as in so many others, the cloud offers a better choice, says Laz Vekiarides. In fact, not only is a secondary data center for DR no longer needed, it’s actually no longer a sustainable option...

The days of the secondary data center / centre are numbered, and that is a good thing for the enterprises that have struggled to build them, fund them and maintain them solely for disaster recovery purposes. When on-premises disaster recovery was the only option, IT teams had no choice but to grit their teeth and take on the cost and resource burdens of physical secondary data centers. Today, though, the growing cloud adoption rate and availability of cloud-forward co-location providers have transformed the data center world. One result: the industry has more efficient and cost-effective choices, including hybrid cloud DR.

Key questions to ask before moving DR to the cloud

Nothing is easy in IT, and no data center leader should believe promises about quick or simple transformations from on-prem secondary data centers to cloud or hybrid models. The move is a complex one, and the stakes are high. Before teams even begin to migrate their disaster recovery, they must carefully consider their IT strategies and their business needs.

Among the questions teams should ask before embarking on migration are these:



By Cassius Rhue, Director of Engineering at SIOS Technology

All public cloud service providers offer some form of guarantee regarding availability, and these may or may not be sufficient, depending on each application’s requirement for uptime. These guarantees typically range from 95.00% to 99.99% of uptime during the month, and most impose some type of “penalty” on the service provider for falling short of those thresholds.

Most cloud service providers offer a 99.00% uptime threshold, which equates to about seven hours of downtime per month. And for many applications, those two-9’s might be enough. But for mission-critical applications, more 9’s are needed, especially given the fact that many common causes of downtime are excluded from the guarantee.
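As a quick check on that arithmetic, the downtime allowed by an uptime guarantee can be computed directly. The sketch below assumes a 30-day (720-hour) billing month:

```python
# Maximum downtime permitted per month by a given uptime guarantee.
# Assumes a 30-day (720-hour) billing month.

HOURS_PER_MONTH = 30 * 24  # 720

def downtime_hours(uptime_percent: float) -> float:
    """Hours of downtime per month allowed by an uptime percentage."""
    return HOURS_PER_MONTH * (1 - uptime_percent / 100)

for pct in (95.00, 99.00, 99.99, 99.999):
    print(f"{pct:>7}% uptime allows {downtime_hours(pct):8.4f} hours/month")
```

At 99.00% this yields 7.2 hours, the "about seven hours" cited above; at five-9's (99.999%) the allowance shrinks to roughly 26 seconds per month.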

There are, of course, cost-effective ways to achieve five-9’s high availability and robust disaster recovery protection in configurations using public cloud services, either exclusively or as part of a hybrid arrangement. This article highlights limitations involving HA and DR provisions in the public cloud, explores three options for overcoming these limitations, and describes two common configurations for failover clusters.

Caveat Emptor in the Cloud

While all cloud service providers (CSPs) define “downtime” or “unavailable” somewhat differently, these definitions include only a limited set of all possible causes of failures at the application level. Generally included are failures affecting a zone or region, or external connectivity. All CSPs also offer credits ranging from 10% for failing to meet four-9’s of uptime to around 25% for failing to meet two-9’s of uptime.
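That credit structure amounts to a simple tiered lookup. The thresholds and percentages below are illustrative assumptions drawn from the ranges just described, not any specific CSP's actual terms:

```python
# Illustrative SLA service-credit tiers based on the ranges described
# above. Thresholds and credit percentages are assumptions for this
# sketch; real CSP agreements define their own exact tiers.

def service_credit_percent(measured_uptime_pct: float) -> int:
    """Percent of the monthly bill credited for missed uptime."""
    if measured_uptime_pct >= 99.99:   # met four-9's: no credit due
        return 0
    if measured_uptime_pct >= 99.0:    # missed four-9's
        return 10
    return 25                          # missed two-9's

print(service_credit_percent(99.995))  # 0
print(service_credit_percent(99.5))    # 10
print(service_credit_percent(98.0))    # 25
```

Note how small the compensation is relative to the business cost of an outage, which is the underlying argument for adding availability provisions of your own.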

Redundant resources can be configured to span the zones and/or regions within the CSP’s infrastructure, and that will help to improve application-level availability. But even with such redundancy, there remain some limitations that are often unacceptable for mission-critical applications, especially those requiring high transactional throughput performance. These limitations include each master being able to create only a single failover replica, requiring the use of the master dataset for backups, and using event logs to replicate data. These and other limitations can increase recovery time during a failure and make it necessary to schedule at least some planned downtime.

The more significant limitations involve the many exclusions to what constitutes downtime. Here are just a few examples, taken from actual CSP service level agreements, of application-level failure causes that are excluded from “downtime” or “unavailability”:

  • factors beyond the CSP’s reasonable control (in other words, some of the stuff that happens regularly, such as carrier network outages and natural disasters)
  • the customer’s software, or third-party software or technology, including application software
  • faulty input or instructions, or any lack of action when required (in other words, the inevitable mistakes caused by human fallibility)
  • problems with individual instances or volumes not attributable to specific circumstances of “unavailability”
  • any hardware or software maintenance as provided for pursuant to the agreement


To be sure, it is reasonable for CSPs to exclude certain causes of failure. But it would be irresponsible for system administrators to treat these exclusions as excuses; it falls to them to ensure application-level availability by some other means.

Three Options for Improving Application-level Availability

Provisioning resources for high availability in a way that does not sacrifice security or performance has never been a trivial endeavor. The challenge is especially difficult in a hybrid cloud environment where the private and public cloud infrastructures can differ significantly, which makes configurations difficult to test and maintain, and can result in failover provisions failing when actually needed.

For applications where the service levels offered by the CSP fall short, there are three additional options available based on the application itself, features in the operating system, or through the use of purpose-built failover clustering software.

The HA/DR options that might appear to be the easiest to implement are those specifically designed for each application. A good example is Microsoft’s SQL Server database with its carrier-class Always On Availability Groups feature. There are two disadvantages to this approach, however. The higher licensing fees, in this case for the Enterprise Edition, can make it prohibitively expensive for many needs. The more troubling disadvantage is the need for different HA/DR provisions for different applications, which makes ongoing management a constant (and costly) struggle.

The second option involves using uptime-related features integrated into the operating system. Windows Server Failover Clustering, for example, is a powerful and proven feature that is built into the OS. But on its own, WSFC might not provide a complete HA/DR solution because it lacks a data replication feature. In a private cloud, data replication can be provided using some form of shared storage, such as a storage area network. But because shared storage is not available in public clouds, implementing robust data replication requires using separate commercial or custom-developed software.

For Linux, which lacks a feature like WSFC, the need for additional HA/DR provisions and/or custom development is considerably greater. Using open source software like Pacemaker and Corosync requires creating (and testing) custom scripts for each application, and these scripts often need to be updated and retested after even minor changes are made to any of the software or hardware being used. But because getting the full HA stack to work well for every application can be extraordinarily difficult, only very large organizations have the wherewithal needed to even consider taking on the effort.

Ideally there would be a “universal” approach to HA/DR capable of working cost-effectively for all applications running on either Windows or Linux across public, private and hybrid clouds. Among the most versatile and affordable of such solutions is the third option: the purpose-built failover cluster. These HA/DR solutions are implemented entirely in software that is designed specifically to create, as their designation implies, a cluster of virtual or physical servers and data storage with failover from the active or primary instance to a standby to assure high availability at the application level.

These solutions provide, at a minimum, a combination of real-time data replication, continuous application monitoring and configurable failover/failback recovery policies. Some of the more robust ones offer additional advanced capabilities, such as a choice of block-level synchronous or asynchronous replication, support for Failover Cluster Instances (FCIs) in the less expensive Standard Edition of SQL Server, WAN optimization for enhanced performance and minimal bandwidth utilization, and manual switchover of primary and secondary server assignments to facilitate planned maintenance.

Although these general-purpose solutions are storage-agnostic, enabling them to work with storage area networks, shared-nothing SANless failover clusters are normally preferred because they eliminate potential single points of failure.

Two Common Failover Clustering Configurations

Every failover cluster consists of two or more nodes, and locating at least one of the nodes in a different datacenter is necessary to protect against local disasters. Presented here are two popular configurations: one for disaster recovery purposes; the other for providing both mission-critical high availability and disaster recovery. Because high transactional performance is often a requirement for highly available configurations, the example application is a database.

The basic SANless failover cluster for disaster recovery has two nodes with one primary and one secondary or standby server or server instance. This minimal configuration also requires a third node or instance to function as a witness, which is needed to achieve a quorum for determining assignment of the primary. For database applications, replication to the standby instance across the WAN is asynchronous to maintain high performance in the primary instance.
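The witness's role in that quorum calculation can be illustrated with a toy majority check. This is illustrative pseudologic only, not any vendor's clustering API:

```python
# Toy quorum check for a two-node cluster plus a witness (3 voters).
# A node may hold the primary role only while it can reach a strict
# majority of the configured voters; the witness breaks the tie when
# the two data nodes lose contact with each other.

def has_quorum(reachable_voters: int, total_voters: int = 3) -> bool:
    """True when a strict majority of voters is reachable."""
    return reachable_voters > total_voters // 2

# Primary sees itself plus the witness: 2 of 3 votes, keeps running.
print(has_quorum(2))  # True
# Isolated standby sees only itself: 1 of 3 votes, must not promote.
print(has_quorum(1))  # False
```

The majority rule is what prevents a "split brain": two isolated nodes can never both count a majority of three voters, so at most one can act as primary.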

The SANless failover cluster affords a rapid recovery in the event of a failure in the primary, making this basic DR configuration suitable for many applications. And because it is capable of detecting virtually all possible failures, including those not counted as downtime in public cloud services, it will work in a private, public or hybrid cloud environment.

For example, the primary could be in the enterprise datacenter with the secondary deployed in the public cloud. Because the public cloud instance would be needed only during planned maintenance of the primary or in the event of its failure—conditions that can be fairly quickly remedied—the service limitations and exclusions cited above may well be acceptable for all but the most mission-critical of applications.

This three-node SANless failover cluster has one active and two standby server instances, making it capable of handling two concurrent failures with minimal downtime and no data loss.

The figure shows an enhanced three-node SANless failover cluster that affords both five-9’s high availability and robust disaster recovery protection. As with the two-node cluster, this configuration will also work in a private, public or hybrid cloud environment. In this example, servers #1 and #2 are located in an enterprise datacenter with server #3 in the public cloud. Within the datacenter, replication across the LAN can be fully synchronous to minimize the time it takes to complete a failover and, therefore, maximize availability.

When properly configured, three-node SANless failover clusters afford truly carrier-class HA and DR. The basic operation is application-agnostic and works the same for Windows or Linux. Server #1 is initially the primary or active instance that replicates data continuously to both servers #2 and #3. If it experiences a failure, the application would automatically failover to server #2, which would then become the primary replicating data to server #3.

Immediately after a failure in server #1, the IT staff would begin diagnosing and repairing whatever caused the problem. Once fixed, server #1 could be restored as the primary with a manual failback, or server #2 could continue functioning as the primary replicating data to servers #1 and #3. Should server #2 fail before server #1 is returned to operation, as shown, server #3 would become the primary. Because server #3 is across the WAN in the public cloud, data replication is asynchronous and the failover is manual to prevent “replication lag” from causing the loss of any data.
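The promotion order described above can be sketched as a simple priority list. This too is illustrative pseudologic; real clustering software also tracks replication state and enforces the manual-failover policy for the WAN node:

```python
# Sketch of primary selection in the three-node cluster described
# above: server #1 is preferred, then #2, with #3 (across the WAN)
# promoted only after both datacenter nodes have failed.

def next_primary(priority_order, failed):
    """Return the highest-priority surviving node."""
    for node in priority_order:
        if node not in failed:
            return node
    raise RuntimeError("no surviving node to promote")

order = ["server1", "server2", "server3"]
print(next_primary(order, failed=set()))                   # server1
print(next_primary(order, failed={"server1"}))             # server2
print(next_primary(order, failed={"server1", "server2"}))  # server3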

With SANless failover clustering software able to detect all possible failures at the application level, it readily overcomes the CSP limitations and exclusions mentioned above, and makes it possible for this three-node configuration to be deployed entirely within the public cloud. To afford the same five-9’s high availability based on immediate and automatic failovers, servers #1 and #2 would need to be located within a single zone or region where the LAN facilitates synchronous replication.

For appropriate DR protection, server #3 should be located in a different datacenter or region, where the use of asynchronous replication and manual failover/failback would be needed for applications requiring high transactional throughput. Three-node clusters can also facilitate planned hardware and software maintenance for all three servers while providing continuous DR protection for the application and its data.

By offering multiple, geographically-dispersed datacenters, public clouds afford numerous opportunities to improve availability and enhance DR provisions. And because SANless failover clustering software makes effective and efficient use of all compute, storage and network resources, while also being easy to implement and operate, these purpose-built solutions minimize all capital and operational expenditures, resulting in high availability being more robust and more affordable than ever before.

# # #

About the Author

Cassius Rhue is Director of Engineering at SIOS Technology, where he leads the software product development and engineering team in Lexington, SC. Cassius has over 17 years of software engineering, development and testing experience, and a BS in Computer Engineering from the University of South Carolina. 

Speed up recovery process, improve quality and add to contractor credibility


By John Anderson, FLIR

Thermal imaging tools integrated with moisture meters can speed up the post-hurricane recovery process, improve repair quality, and add to contractor credibility. A thermal imaging camera can help you identify moisture areas faster and can lead to more accurate inspections with fewer call backs for verification by insurance companies. Many times, a good thermal image sent via email may be sufficient documentation to authorize additional work, leading to improved efficiency in the repair process.

Post-event process

Contractors need to be able to evaluate water damage quickly and accurately after a hurricane or other storm event. This can be a challenge using traditional tools, especially pinless (non-invasive) moisture meters that offer a nondestructive measurement of moisture in wood, concrete and gypsum. Operating on the principle of electrical impedance, pinless moisture meters read wood using a scale of 5 to 30 percent moisture content (MC); they read non-wood materials on a relative scale of 0 to 100 percent MC. [1] While simple to use, identifying damage with any traditional moisture meter alone is a tedious process, often requiring at least 30 to 40 readings. And the accuracy of the readings is only as good as the user’s ability to find and measure all the damaged locations.

Using a thermal imaging camera along with a moisture meter is much more accurate. These cameras work by detecting the infrared radiation emitted by objects in the scene. The sensor takes the energy and translates it into a visible image. The viewer sees temperatures in the image as a range of colors: red, orange and yellow indicate heat, while dark blue, black or purple signify the colder temperatures associated with evaporation or water leaks and damage. Using this type of equipment speeds up the process and helps track the source of the leak—providing contractors with a visual to guide them and confirm where the damage is located. Even a basic thermal imaging camera, one that is used in conjunction with a smart phone, is far quicker and more accurate at locating moisture damage than a typical noninvasive spot meter.

Infrared Guided Measurement (IGM)

An infrared (IR) thermal imaging camera paired with a moisture meter is a great combination. The user can find the cold spots with the thermal camera and then confirm moisture is present with the moisture meter. This combination is widely used today, prompting FLIR to develop the MR176 infrared guided measurement (IGM™) moisture meter. This all-in-one moisture meter and thermal imager allows contractors to use thermal imaging and take moisture meter readings for a variety of post-storm cleanup tasks. These include inspecting the property, preparing for remediation, and—during remediation— assessing the effectiveness of dehumidifying equipment. The tool can also be used down the road after remediation to identify leaks that may—or may not—be related to the hurricane.

During the initial property inspection, the thermal imaging camera visually identifies cold spots, which are usually associated with moisture evaporation. Without infrared imaging, the user is left to blindly test for moisture—and may miss areas of concern altogether.

While preparing for remediation, a tool that combines a thermal imaging camera with a relative humidity and temperature (RH&T) sensor can provide contractors with an easy way to calculate the equipment they will need for the project. This type of tool measures the weight of the water vapor in the air in grains per pound (GPP), relative humidity, and dew point values. Restoration contractors know how many gallons of water per day each piece of equipment can remove and, using the data provided by the meter, can determine the number of dehumidifiers needed in a given space to dry out the area.
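The sizing arithmetic described above amounts to dividing the estimated daily water load by each unit's removal capacity and rounding up. The sketch below is purely illustrative: the numbers are hypothetical, and real restoration jobs size equipment against manufacturer ratings and industry drying standards rather than a single division.

```python
import math

def dehumidifiers_needed(water_gallons_per_day: float,
                         unit_capacity_gallons_per_day: float) -> int:
    """Round up: partial coverage leaves part of the space wet."""
    return math.ceil(water_gallons_per_day / unit_capacity_gallons_per_day)

# Hypothetical example: meter readings suggest ~22 gal/day must be removed,
# and each dehumidifier on hand removes 9 gal/day.
print(dehumidifiers_needed(22, 9))  # → 3
```

Because equipment is billed per hour, rounding up rather than down is the conservative choice the article's cost-versus-coverage trade-off implies.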

The dehumidifiers reduce moisture and restore proper humidity levels, preventing the build-up of air toxins and neutralizing odors from hurricane water damage. Since the equipment is billed back to the customer or insurance company on a per-hour basis, contractors must balance the costs with the need for full area coverage.

During remediation, moisture meters with built-in thermal imaging cameras provide key data that contractors can use to spot check the drying process and equipment effectiveness over time. In addition, thermal imaging can be used to identify areas that may not be drying as efficiently as others and can guide the placement of drying equipment.

The equipment is also useful after the fact, if, for example, contractors are looking to identify the source of small leaks that may or may not be related to the damage from the hurricane. Using a moisture meter/thermal camera combination can help them track the location and source of the moisture, as well as determine how much is remaining.

Remodeling contractors who need to collect general moisture data can benefit from thermal imaging moisture meters, as well. For example, tracing a leak back to its source can be a challenge. A leak in an attic may originate in one area of the roof and then run down into different parts of the structure. A moisture meter equipped with a thermal imager can help them determine where the leak actually started by tracing a water trail up the roof rafter to the entrance spot.

Choosing the right technology

A variety of thermal imaging tools are available, depending upon whether the contractor is looking for general moisture information, or needs more precise information on temperature and relative humidity levels.

For example, the FLIR MR176 IGM™ moisture meter with replaceable hygrometer is an all-in-one tool equipped with a built-in thermal camera that can visually guide contractors to the precise spot where they need to measure moisture. An integrated laser and crosshair helps pinpoint the surface location of the issue found with the thermal camera. The meter comes with an integrated pinless sensor and an external pin probe, which gives contractors the flexibility to take either non-intrusive or intrusive measurements.

Coupled with a field-replaceable temperature and relative humidity sensor, and automatically calculated environmental readings, the MR176 can quickly and easily produce the right measurements during the hurricane restoration and remediation process. Users can customize thermal images by selecting which measurements to integrate, including moisture, temperature, relative humidity, dew point, vapor pressure and mixing ratio. They can also choose from several color palettes, and use a lock-image setting to prevent extreme hot and cold temperatures from skewing images during scanning.

Also available is the FLIR MR160, which is a good tool for remodeling contractors looking for general moisture information, for example, pinpointing drywall damage from a washing machine, finding the source of a roof leak that is showing up in flooring or drywall, as well as locating ice dams. It has many of the features of the MR176 but does not include the integrated RH&T sensor.

Capturing images with a thermal camera builds contractor trust and credibility

Capturing images of hurricane-related damage with a thermal camera provides the type of documentation that builds contractor credibility and increases trust with customers. These images help customers understand and accept contractor recommendations. Credibility increases when customers are shown images demonstrating conclusively why an entire wall must be removed and replaced.

When customers experience a water event, proper photo documentation can bolster their insurance claims. The inclusion of thermal images can improve insurance payout outcomes and speed up the process.

Post-storm cleanup tool for the crew

By providing basic infrared imaging functions, in combination with multiple moisture sensing technologies and the calculations made possible by the RH&T sensor, an imaging moisture meter such as the MR176 is a tool the entire remediation crew can carry during post-storm cleanup.


[1] Types of Moisture Meters, https://www.grainger.com/content/qt-types-of-moisture-meters-346, retrieved 5/29/18

Expert service providers update aging technology with minimal disruption


By Steve Dunn, Aftermarket Product Line Manager, Russelectric Inc.

Aging power control and automation systems carry risk of downtime in mission-critical power systems: replacement components become harder to find, and so does the knowledge needed to replace the devices within. As components age, their risk of failure increases. Additionally, as technology advances, these same components are discontinued and become unavailable, and over time service personnel lose the know‐how to support the older generation of products. At the same time, though, complete replacement of these aging systems can be extremely expensive, and may also require far more downtime or additional space than these facilities can sustain.

The solution, of course, is the careful maintenance and timely replacement of power control and automation system components. By replacing only some components of the system at any given time, customers can benefit from the new capabilities and increased reliability of current technology, all while uptime is maintained. In particular, expert service providers can provide in-house wiring, testing, and vetting of system upgrades before components even ship to customers, ensuring minimal downtime. These services are particularly useful in healthcare facilities and datacenter applications, where power control is mission-critical and downtime is costly.

Automatic Transfer Switch (ATS) controllers and switchgear systems require different types of maintenance and upgrades because their components differ; however, the cost savings and improved uptime that maintenance and upgrades can provide are available to customers with either type of system. The following maintenance programs and system upgrades can extend the lifetime of a power control system, minimize downtime in mission-critical power systems, and save costs.

Audits and Preventative Maintenance

Before creating a maintenance schedule or beginning upgrades, bringing an expert technician into a facility to audit the existing system provides long-term benefits and the ability to prioritize. With a full equipment audit, a technician or application engineer who specializes in upgrading existing systems can examine an existing system and provide customers with a detailed migration plan for upgrading it, in order of priority, as well as a plan for preventative maintenance.

Whenever possible, scheduled preventative maintenance should be performed by factory-trained service employees of the power control system OEM, rather than by a third party. In addition to having the most detailed knowledge of the equipment, factory-trained service employees can typically provide the widest range of maintenance services. While third-party testing companies may only maintain power breakers and protective relay devices, OEM service providers will also maintain the controls within the system.

Through these system audits and regular maintenance plans, technicians can ensure that all equipment is and remains operational, and they can identify components that are likely to become problematic before they actually fail and cause downtime in a mission-critical system.

Upgrades for ATS Control Systems with Minimal System Disruption

In ATS controller systems, control upgrades can provide customers with greater power monitoring and metering. In addition, replacing the controls for aging ATS systems ensures that all components of the system controls are still in production, and therefore will be available for replacement at a reasonable cost and turnaround time. In comparison, trying to locate out-of-production components for an old control package can lead to high costs and a long turnaround time for repairs.

The most advanced service providers minimize downtime during ATS control upgrades by pre-wiring the controls and fully testing them within their own production facilities. When Russelectric performs ATS control upgrades, a pre-wired, fully-tested control package is shipped to the customer in one piece. The ATS is shut down only for as long as it takes to install the new controls retrofit, minimizing disruption.

In addition, new technology also improves system usability, similar to making the switch from a flip phone to a smartphone. New ATS controls from Russelectric, for example, feature a sizeable color screen with historical data and alarm reporting. All of the alerts, details and information on the switch are easily accessible, providing the operator with greater information when it matters most. This upgrade also paves the way for optional remote monitoring through a SCADA or HMI system, further improving usability and ease of system monitoring.

Switchgear System upgrades

For switchgear systems, four main upgrades are possible in order to improve system operations and reliability without requiring a full system replacement: operator interface upgrades, PLC upgrades, breaker upgrades, and controls retrofits. Though each may be necessary at different times for different power control systems, all four upgrades are cost-effective, extend system lifespans, and minimize downtime.

Operator Interface Upgrades for Switchgear Systems

Similar to the ATS control upgrade, an operator interface (OI) or HMI upgrade for a switchgear power control system can greatly improve system usability, making monitoring easier and more effective for operators. This upgrade enables operators to see the system power flow, as well as to view alarms and system events in real time.

Also similar to ATS control upgrades, upgrading the OI ensures that components will be in production and easily available for repairs. The greatest benefit, though, is giving operators real-time visibility into system alerts without requiring them to walk through the system itself searching for indicator lights and alarms. Though upgrading this interface does not impact the actual system control, it provides numerous day-to-day benefits, enabling faster and easier troubleshooting and more timely maintenance.

Upgrades to PLC and Communication Hardware without Disrupting Operations

Many existing systems rely on legacy PLC architecture that is at or approaching end-of-life. PLC upgrades bring a switchgear control system to the newest technology with minimal program changes. Relying on expert OEM service providers can also simplify the process of upgrading PLC and communications hardware, protecting customers’ investments in power control systems while extending noticeable system benefits.

A PLC upgrade by Russelectric includes all new PLC and communication hardware for the controls of the existing system, but maintains the existing logic and converts it for the latest technology. Upgrading the technology does not require new logic or operational sequences. As a result, the operation of the system remains unchanged and existing wiring is maintained, which greatly reduces the likelihood that the system will need to be fully recommissioned and minimizes the downtime needed for testing. By converting existing logic and testing components in its own production facility before shipping them for installation, Russelectric can keep a system operational through the entire upgrade process. Its installation sequence systematically replaces the PLCs one at a time, converting the communications from PLC to PLC as components are replaced, so the system stays operational throughout and the risk of mission-critical power system downtime is minimized.

Breaker & Protective Relay Upgrades for Added Reliability and Protection

Breaker upgrades are often necessary to ensure system protection and reliability, even after many years of normal use. Two different types of breaker modifications or upgrades are available for switchgear power control systems: breaker retrofill and breaker retrofit. A retrofill upgrade replaces an existing breaker with an entirely new device. Retrofill upgrades maintain existing protections, lengthen service life, and provide added benefits such as power metering and other add-on protections, like arc flash protection, while maintaining UL approvals.

Breaker retrofits can provide these same benefits, but they do so through a process of reengineering an existing breaker configuration. This upgrade requires a somewhat more labor-intensive installation, but provides generally the same end result. Whether a system requires a retrofit or retrofill upgrade is largely determined by the existing power breakers in a system.

For medium voltage systems, upgrading protective relays from single-function solid-state or mechanical protective devices to multifunction protective devices improves both the protection and the reliability of a system. Upgrading to multifunction protective relays provides enhanced protection, lengthens the service life of a system, and provides added benefits of power metering, communications and other add-on protections, like arc flash protection.

Russelectric prewires and tests new doors with the new protective devices ready for installation, allowing for minimal disruption to a system and easy replacement.

Controls Retrofits Revive Aging Systems

For older switchgear systems that predate PLC controls, one of the most effective upgrades for extending system life and serviceability is a controls retrofit. This process includes a fully new control interior, interior control panels, and doors. This enables customers to replace end-of-life components, update to the latest control equipment and sequence standards, and access benefits of visibility described above for OI upgrades. 

The major consideration and requirement is to maintain the switchgear control wiring interconnect locations, eliminating the need for new control wiring between other switchgear, ATSs, and generators. Retrofitting the controls rather than replacing them allows the existing wiring to be maintained, providing a major cost savings for the system upgrade.

Just as with ATS controls retrofits, Russelectric builds the control panels and doors within its own facilities and simulates the non-controls components from the customer’s system that are not being replaced. In doing so, technicians can fully test the retrofit before replacing the existing controls. What’s more, Russelectric can provide customers with temporary generators and temporary control panels so that the existing system can be strategically upgraded, one cubicle at a time, while maintaining a fully operational system.

Benefits of an Expert Service Provider

As described throughout this article, relying on expert OEM service providers like Russelectric amplifies the benefits of power control system upgrades. With the right service provided at the right time by industry experts, mission-critical power control systems, like those in healthcare facilities and datacenters, can be upgraded with a minimum of downtime and costs. OEMs are often the greatest experts on their own products, with access to all of the drawings and documentation for each product, and are therefore most able to perform maintenance and upgrades in the most effective and efficient manner.

Some of the most important cost-saving measures for power control system upgrades can only be achieved by OEM service providers. For example, maintaining existing interconnect control wiring between power equipment and external equipment provides key cost savings, as it eliminates the need for electrical contractors in installing a new system. Given that steel and copper substructure hardware can greatly outlast control components, retrofitting these existing components can also provide major cost savings. Finally, having access to temporary controls or power sources, pre-tested components, and the manufacturer’s component knowledge all helps to practically eliminate downtime, saving costs and removing barriers to upgrades. By upgrading a power control system with an OEM service provider, power system customers with mission-critical power systems gain the latest technology without the worry of downtime and huge costs associated with full system replacement.

Using Analytics to Gain a Competitive Edge


Predictive analytics is quickly changing the way businesses allocate their budgets and gain an edge over competitors. However, even companies with highly sophisticated predictive analytics programs still often run into challenges. Kris Hutton, Director of Product Management at global enterprise governance SaaS provider ACL, offers some ways to leverage your predictive analytics model.

A growing number of companies are taking their predictive or advanced analytics strategy to a new level. They have moved beyond proof of concept and started to execute on a model designed to predict future targets that can help them either create value or identify loss. These areas run the gamut, from marketing campaigns to sales to supply chain and vendors.

In a hypercompetitive business environment, predictive analytics is fast becoming a way for organizations to gain an edge over competitors and allocate budgets more effectively.

However, even for companies with highly sophisticated predictive analytics programs, challenges abound. Here are three ways to sustain the effort and ensure it’s generating positive returns.



Thursday, 18 October 2018 14:54

3 Ways To Leverage Predictive Analytics

This document gives guidelines for monitoring hazards within a facility as a part of an overall emergency management and continuity programme by establishing the process for hazard monitoring at facilities with identified hazards.

It includes recommendations on how to develop and operate systems for the purpose of monitoring facilities with identified hazards. It covers the entire process of monitoring facilities.

This document is generic and applicable to any organization. The application depends on the operating environment, the complexity of the organization and the type of identified hazards.



Bill Villella awoke Tuesday at about 5 a.m. and stepped into foot-high water that had accumulated in his mobile home.

It had been raining all night -- as much as 13 inches fell in a span of 48 hours, according to the National Weather Service.

Villella's wife Laura woke to her husband's yelling and, still in her pajamas, grabbed her phone, purse and medications. The carpets were coming up and the ceiling was drooping. The couple waded through the home as the water continued to rise, reaching the kitchen, where Laura grabbed a meat grinder and broke a window to try to get out.

But the water outside was roughly 2 inches from the window sill. So they waited.



(TNS) - Now that the most damaging aspects of the storm have passed, local emergency management officials are in recovery mode.

"Before things happened, we already had a process in place," said acting Emergency Management Director and Incident Commander Adrienne Owen. "We just transitioned into recovery mode and headed to normalcy."

Hurricane Michael ripped through the Panhandle Wednesday, leaving a trail of downed trees, power lines and broken communication systems in its wake. Most of the damage reported has been to private property, officials said.



Thursday, 18 October 2018 14:46

EOC: We're in Recovery Mode

Many organizations are at risk to some extent by single points of failure, resources that have no redundancy and whose loss could have a significant impact.

In fact, this is a surprisingly widespread problem which could leave many if not most organizations hanging by a thread, whether they know it or not.

In today’s blog, I’ll sketch out some of the main issues surrounding SPOFs and share some tips for protecting yourself against their impact.



Resilience is “the capacity to recover quickly from difficulties, or toughness.” With the rise in both natural disasters and cyberthreats, today’s businesses must ensure not only their physical resilience, but the resilience of their IT systems so they can continually provide a great customer experience.

But how do you know if you’re prepared for the worst? Test, test, test. In fact, one method of testing is known as “chaos engineering,” which is defined as “the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.” (From http://principlesofchaos.org/)


The goal of chaos testing is to expose weaknesses in your systems before they manifest themselves as some end-user service being down. By doing this on purpose, you and your systems become better at handling unforeseen failures.


Typically, though, we don’t look only for a service’s complete failure; high latency in a service’s response is a failure too. Couple this with the fact that almost all modern IT systems are highly distributed in nature, and we face other issues, like cascading failures, that are very hard to foresee from a test team’s perspective.
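A simple form of the experiments described above is latency injection: wrap a dependency call so that some fraction of requests are artificially delayed, then observe whether timeouts, retries, and fallbacks behave as designed. The sketch below is an illustrative toy, not a production fault-injection framework; the function names, probability, and delay values are all hypothetical.

```python
import random
import time

def with_chaos_latency(call, probability=0.2, delay_seconds=2.0):
    """Return a wrapper that delays a random fraction of calls.

    In a real experiment you would run this against a staging or
    production-like environment and watch downstream behavior.
    """
    def wrapped(*args, **kwargs):
        if random.random() < probability:
            time.sleep(delay_seconds)   # simulate a slow dependency
        return call(*args, **kwargs)
    return wrapped

# A stand-in "service" call for demonstration purposes.
def lookup_user(user_id):
    return {"id": user_id, "name": "demo"}

chaotic_lookup = with_chaos_latency(lookup_user, probability=0.5, delay_seconds=0.1)
print(chaotic_lookup(42)["id"])
```

The point of the exercise is not the delay itself but what it exposes: any caller that has no timeout, no retry budget, or no fallback will surface as a weakness before a real slowdown takes an end-user service down.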



You may remember my last blog article about blockchain, in which I talked about how business continuity and disaster recovery will benefit from the use of blockchain technology. Now we ask, “How should businesses identify whether blockchain is a good tool for the job?” Decentralization is a blockchain’s main characteristic, with game theory and cryptography at the core of its protocol design. It includes features such as immutability, liveness and safety guarantees for transactions, and resistance to censorship or attack from malicious actors.

Public and Private Blockchain

Did you know there are public and private blockchains? Public blockchains are networks that anyone can access and add transactions to. This ensures no single entity has control; instead, everyone must trust the network’s protocol. Bitcoin and Ethereum, for example, run on public blockchains.

Private blockchains, such as Hyperledger, cater to companies that engage in global transactions, such as supply chain and financial services. Access to Hyperledger is permissioned and controlled, determining who can both join and access the network.



Wednesday, 17 October 2018 15:14

Deciding on Blockchain

Growth. It’s what CEOs are after. Sometimes it’s straightforward. Other times growth requires a company to move into new or unfamiliar areas. How can a company do this? Many have internal units or programs dedicated to driving breakthrough innovation. But, according to Cutter Consortium Senior Consultant Rick Eagar, there’s another, new model — the Breakthrough Incubator — that’s helping companies develop and launch radically new products, services or businesses that deliver significant value.

Eagar and colleagues Max Senechal, Michael Kolk, Tim Barder, Mitch Beaumont, and Kurt Baes describe the Breakthrough Incubator approach, and its benefits, in their recently published Executive Update, “The Breakthrough Incubator: A Radical Model for Innovative New Businesses.” The model, they write, delivers major benefits in terms of speed, cost, and likelihood of success.

The Breakthrough Incubator model is a breakthrough innovation itself. Typically, it starts with the top team’s desire to create a new business based on innovative products or services in an area that is non-core to the existing business. Rather than conducting the program in-house, the company engages an outside partner to take on the entire innovation process. The model goes beyond classic open innovation because it externalizes an entire innovation, product development, and new business creation effort. Says Eagar:



Wednesday, 17 October 2018 15:12

The Holy Grail of Innovation Management

Who Needs A Small Business Communication System?

Small companies can fall victim to a dangerous mindset of thinking they are too small for a modern communication system. But operating without one can limit productivity and put your people at risk–no matter how big your company is.

Companies are beginning to take notice. Implementing a small business communication system helps keep your employees safe and connected.



Earlier this summer the GDPR, or General Data Protection Regulation, superseded the European Data Protection Directive (EDPD) to become the new keystone of data protection in Europe. Its broader scope includes consumer information from personal identifiers such as social security numbers, to data on a person’s race, politics, web browsing history, and even biometrics. GDPR’s expanded reach covers not only the data of citizens in all 28 EU member states, but also the data collected on EU citizens by any company worldwide – even if they don’t have a business presence in the EU.


Sungard AS introduced XRS to Cloud Recovery – Amazon Web Services, an SLA-backed service for disaster recovery, assuring availability and recoverability of service.

Companies in the business of automated debt collection know that having access to a person’s most private financial information, credit card balances and debt load is highly sensitive. Industry regulations are already in place to protect individuals. But companies like Ireland-based Expert Revenue Systems (XRS) – which specializes in credit control, collections, debt recovery and litigation solutions – are under the gun to make sure they protect clients even further.



Tuesday, 16 October 2018 14:11

XRS Embraces New Data Protection Law

Corporate social responsibility in crisis management is more important than ever. Whether it’s looking after clients or taking the right steps to protect employees, organizations have a legal and moral duty to look after their people when a crisis happens. Dr Liz Royle explores this subject, explaining how organizations can prepare for and respond to a ‘Psychological Critical Incident.’

In the age of social media, an organization that doesn’t prioritise the wellbeing of staff and customers will quickly find itself the next viral scandal – whether it’s a data breach that puts customers’ financial lives at risk or a violent incident that has left staff traumatised.

The new ISO 22330 guidance for managing the people aspects of business continuity states the importance of putting people first during a workplace crisis.

This can be broken down into four key stages:

  1. Preparation through awareness, analysis of needs, and learning and development;
  2. Coping with the immediate effects of the incident (respond);
  3. Managing people during the period of disruption (recover);
  4. Continuing to support the workforce after returning to business as usual (restore).

ISO 22330 states that ‘An employer can be deemed to have breached their duty of care by failing to do everything that was reasonable in the circumstances to keep the employee safe from harm.’



(TNS) - Avoidance is a great way to mitigate risk. As in, get out of trouble’s way, or don’t be there to begin with.

When it comes to hurricanes, that good advice is getting harder to heed. In Florida, we continue to build along our coasts, often just a few feet above sea level. New homes spring up on barrier islands. Condo towers rise where mangroves once grew.

Since 1970, the state has added nearly 15 million residents, most of them flowing into storm-prone counties that border the Gulf or the Atlantic.

We aren’t alone. The other Gulf states, including Texas, have peppered their waterfronts with development. So have Georgia and the Carolinas. Some inland states allow construction in flood plains, and then rebuild each time the rivers overflow.



Why Business Continuity Must Be Part of Your Strategy

Carrying insurance, having a plan, limiting liability… these are all important steps to minimize risk associated with a disruptive event. But without a dynamic business continuity management program, brand equity could suffer significantly. David Nolan, CEO and founder of Fusion Risk Management, rebuts seven common misconceptions about business continuity.

Imagine a runner on a treadmill following a preset workout program. Even as the treadmill speeds up during the higher-intensity phases, as long as the runner is prepared for changing conditions, she will stay in sync with the machine. But if the runner falters or stops and the treadmill keeps going, she’ll stumble, fall and may even end up injured.

A business trying to remain competitive and profitable in today’s world is like the runner trying to keep pace with the machine. If a business is prepared for whatever adverse circumstances come up, the organization can take it in stride and keep moving forward. If a business is not prepared, then it will experience disruptions – and, like a runner who gets injured, the business may find it difficult to recover.

To keep the business running and revenue flowing, executives must include business continuity in their overarching company strategy, and that requires a fundamental understanding of what business continuity is and what it means for the organization.



Tuesday, 16 October 2018 14:06

Is Your Company Prepared For The Worst?

Our guest blogger, Lynne McChristian, is an I.I.I. representative based in Tallahassee, about 100 miles from where Hurricane Michael came to shore.

 By Lynne McChristian

After a major natural disaster, there are various levels of survivor conditions – ranging from total devastation to mild inconvenience. In comparison to what people are experiencing in Mexico Beach and the Panama City areas of Florida, my inconveniences are extremely inconsequential. I was asked for a first-person account, and here’s where things stand on a Sunday afternoon.

In my Tallahassee neighborhood, we have been without power since about 2:20 p.m. on Wednesday. This is Day 5 of powerlessness. The air conditioners are silent in the 88-degree heat, but the rumble of portable generators is a bit overbearing, especially at night. The choice is to keep the refrigerator contents cool, or sleep.



Tuesday, 16 October 2018 14:05


(TNS) - On Sept. 2, 2017, volunteer firefighter Chris Martin spread the word to his neighbors. The Jolly Mountain fire was raging nearby. Pack up important possessions and prepare to leave at a moment’s notice.

The flames never made it to town. People stayed put, but many now live with a new sense of vulnerability.

“This was a game changer for us,” said Martin, a Roslyn volunteer firefighter who handed out the evacuation notices.

This month, on a crisp fall day, Martin once again was trying to protect the town. But this time, instead of warning of a fire, he joined 30 other men and women in setting fire to 32 ridge-top acres he owns above Roslyn.



(TNS) - Nassau County's Amateur Radio Emergency Services (NCARES) team is helping state emergency management authorities with communications networking and technical expertise in Panhandle counties hard hit by Hurricane Michael.

The hurricane destroyed critical infrastructure throughout several counties west of Tallahassee. Many are relying on volunteer Ham Radio operators utilizing the State Amateur Radio Network (SARnet) to relay information about structural damage, supply shortages and requests for assistance from the Panhandle to Northeast Florida. Those needs can be put directly into Emergency Management's web-based disaster information and mission request system, said Martha Oberdorfer, spokeswoman for Nassau County Emergency Management.

A nonprofit, NCARES pays for all of its equipment and operations through donations and two annual barbecue fundraisers. Volunteers donate their time and resources to Nassau County Emergency Management, staffing the County Watch Office, Oberdorfer said.



IBM Services has released the results of a global Ponemon Institute study exploring the impact that business continuity management can have on the cost and frequency of data breaches; it shows 10 ways in which BCM provides quantifiable benefits.

The ‘2018 Cost of Data Breach Study: Impact of Business Continuity Management’ survey report, sponsored by IBM and conducted by the Ponemon Institute, reinforces the call for new solutions to combat evolving cyber threats around the world. The longer it takes to identify, contain, and recover from a data breach, the more time, money, and resources it consumes throughout an organization.

According to the research, BCM programs can reduce the per capita cost of a data breach, the mean time to identify (MTTI) and the mean time to contain (MTTC) a breach, and the likelihood of experiencing such an incident over the next two years.

On average, responding companies that prioritize business continuity management saved 44 days in the identification of the incident and 38 days in the containment of the data breach.
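The MTTI and MTTC figures cited are averages over incident timelines. A minimal sketch of how such metrics can be computed, with made-up incident dates (the variable names and data are illustrative only, not from the Ponemon study):

```python
from datetime import date
from statistics import mean

# Hypothetical incident log: (occurred, identified, contained) dates.
# MTTI = mean days from occurrence to identification;
# MTTC = mean days from identification to containment.
incidents = [
    (date(2018, 1, 3),  date(2018, 6, 12), date(2018, 8, 20)),
    (date(2018, 2, 14), date(2018, 8, 1),  date(2018, 10, 15)),
]

mtti = mean((ident - occurred).days for occurred, ident, _ in incidents)
mttc = mean((contained - ident).days for _, ident, contained in incidents)

print(f"MTTI: {mtti:.0f} days, MTTC: {mttc:.0f} days")
```

Tracked this way, the 44-day and 38-day savings the study reports would show up directly as lower MTTI and MTTC values year over year.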



(TNS) - Cleanup continues from Hurricane Michael, which struck South Georgia Wednesday evening through early Thursday morning.

In a Saturday afternoon Facebook post, the City of Moultrie reported about 60 downed trees in the city and more than 300 county-wide.

On the Colquitt County Board of Commissioners Facebook page, the county listed 19 roads as still being closed about 2 p.m. Saturday.



In a survey report released by Deloitte, almost all (96 percent) of CEOs and board members say that they expect their organizations will face serious threats or disruptions to their growth prospects in the next two to three years. Despite that, many are not adequately prioritizing the strategic planning and investment needed to identify, respond to and mitigate critical risks.

‘Illuminating a path forward on strategic risk’, a survey of 400 CEOs and board members from US organizations with $1 billion or more in annual revenue, explores the leaders' posture on four critical and interconnected strategic risks:

  • Brand and reputation;
  • Culture;
  • Cyber;
  • Extended enterprise.

"This survey validates what we're seeing in the marketplace - that many CEOs and board members are risk-aware but not adequately risk-prepared," said Chuck Saia, CEO, Deloitte Risk and Financial Advisory, Deloitte & Touche LLP. "Leaders know there are threats on the horizon, but many are not viewing or managing them strategically or understanding how threats are interconnected. Many are still using traditional approaches, tools, and technologies to detect and manage threats. Today's risk environment requires leaders to challenge the status quo, prioritize investments and identify and analyze threats before they emerge. Simply put, accelerating performance and growth requires a different way of thinking about risk."  



One of the biggest areas of unmitigated risk we see across all industries is supply-chain risk. Most organizations are not adequately protected against the loss of critical third-party suppliers.

In today’s post, I’ll share some thoughts about the pervasive supply-chain risk problem, as well as some ideas on what you can do about it.

From Mandatory Requirement to Valuable Business Enabler

Just because you have prohibited employees from communicating about business matters via any channel apart from email doesn’t mean your bases are covered. Mike Pagani explains how prohibition is not prevention, and why, for organizations in highly regulated fields, more concrete steps must be taken to mitigate risks and potential violations.

When it comes to archiving electronic communications used for business, the traditional drivers have been ensuring regulatory compliance for email and being prepared for legal events when they arise. Today, however, there are new drivers causing organizations to invest in advanced archiving technology and to expand its scope beyond simply email.

The reality is that businesses of all sizes and types are seeking to leverage the productivity gains and expanded reach newer and emerging non-email communications methods and channels offer – think social media, text messaging and collaboration platforms like Slack, Microsoft Teams and others. In many cases, compliance and legal teams are playing catch up, as despite policies that prohibit the use of channels other than email, employees will use them when the need outweighs the risk in their minds – even if they have attested to adhering to the usage policies. In short, prohibition does not equal prevention.

Compliance and legal professionals within regulated businesses have been in a tough spot in recent years. Prohibiting the use of the new channels has been their typical response to limit risks and potential violations by restricting the number of channels allowed for business. Regulators have also stepped up their guidance and enforcement, being very clear that if a channel is used for business, it must be properly retained and supervised – just like email. The problem is that the approach of prohibiting or restricting usage does not enable the business to leverage the benefits of the newer channels and still leaves the organization vulnerable when employees use them anyway.

Ungoverned text messaging is especially problematic, as nearly every employee has an ability to use it, and it is the undisputed channel of choice when time-sensitive responses are needed. According to CTIA, the average response time for a text message is just 90 seconds, while the average response time for an email is 90 minutes.



Were you well-prepared for Hurricane Michael? Good. Hurricanes are extremely dangerous.

But if you’re not careful, what happens after the storm can be just as harmful as the hurricane itself.

Beware the shady contractor. It’s a terrible story: someone’s home is damaged from a hurricane. A contractor shows up at their property and offers to complete immediate emergency repairs. All the homeowner needs to do is sign some paperwork and, the contractor assures them, their insurance company will pay for the repairs – easy as that!

Wrong. Shady contractors are not your friend. If you live in Florida, then the paperwork they want you to sign is often an “assignment of benefits” (AOB), a document that gives the contractor the right to receive payouts from your insurance company directly for repairs. (You can read all about how it works – or doesn’t work, as the case may be – on the Florida state website.)



A recent Continuity Central survey looked at business continuity plan success rates and asked for thoughts on the best ways to debrief after an incident. The survey has now been closed and the results are available, providing some interesting insights. Responses were received from around the world, with the most responses being from people based in the United States (34 percent), the United Kingdom (34 percent) and Australia (8 percent). The survey was conducted online using Survey Monkey.

An initial question ‘Does your company have business continuity plans?’ was asked to qualify survey respondents, with the results being compiled only from those who stated that their company did have business continuity plans.



(TNS) - The Florida Panhandle woke up to harrowing scenes of destruction Thursday in the wake of monster Hurricane Michael, the worst storm on record to ever hit the area and the fourth most powerful to strike U.S. shores.

After carving an agonizing path of destruction across the Florida Panhandle, Georgia and southeastern Alabama for nearly 10 hours and killing at least two people, the fierce storm finally slowed from top sustained winds of 155 mph to a tropical storm at midnight and continued to weaken early Thursday.

By 8 a.m., winds had slowed to 50 mph as Michael crossed South Carolina, about 40 miles west of Columbia. The storm had picked up speed to a fast 21 mph and should continue weakening. But it could regain some strength when it emerges over the Atlantic and becomes a post-tropical storm, National Hurricane Center forecasters said.



4 Steps to Prepare Your Business for Winter

Winter may conjure up imagery suitable for a Norman Rockwell painting: sitting by the fire with a hot drink in hand, enjoying the twinkling lights and decorations, and watching through the window as snowflakes drift lazily through the air. But the reality is that the business impact of winter weather is anything but idyllic.

The economic impact of a simple snowstorm can be upwards of $1 billion. And it’s not just companies in the path of those epic nor’easters that need to take heed. Last winter, unusually cold weather as far south as Florida even caused several theme parks to close.

Every business faces changing risks as winter approaches—whether winter brings rain, snow, or plummeting temperatures. But being prepared for the many hazards of winter weather can help you better manage the impact of such incidents on your employees, your customers, and your bottom line.

Here are four steps you should take now to prepare your business for the winter months ahead:



Friday, 12 October 2018 16:01


The Power of Location: How to Use Your Employee Data to Protect

Organizations have tons of data: customer data, market data, financial data, product data, and of course, employee data. Outside of common HR functions, however, much of this employee data is untapped or disconnected. When it comes to protecting employees, integrating various data points is critical, particularly when it comes to emergency communications.

There are four types of employee data at the fingertips of organizations that are often overlooked. When combined, these pieces of data give security leaders everything they need to continually monitor and protect employees.



We are living in a digital age where the traditional boundaries between the physical and virtual spheres are becoming increasingly blurred. This has given rise to the Fourth Industrial Revolution, which is characterized by disruptive technologies such as artificial intelligence, robotics, nanotechnology and the Internet of Things. On World Standards Day, we highlight the crucial role of International Standards.

The Fourth Industrial Revolution affects almost every industry in every country as innovative cyber-physical systems evolve. The convergence of technologies holds immense opportunities, but also presents an array of ethical, economic and scientific challenges. The rapid pace of change has no historical precedent and society cannot help but question the issues related to long-term sustainability.

International Standards can help shape our future. Not only do standards support the development of tailor-made solutions for all industries, they are also the tools to spread best practices, knowledge and innovation globally. International Standards have always had a pivotal role in enabling the smooth adoption of technologies.



The Federal Emergency Management Agency (FEMA) developed IPAWS to alert the public across multiple channels, including radio, television, wireless devices and other communication platforms.

It was designed to be deployed when an emergency threatens life and property and getting information to as many people as possible is urgent. Frequently, IPAWS is used to alert the public when a child is missing, but it can also be used to alert citizens about impending natural disasters or man-made incidents such as chemical spills.

During the last year, IPAWS has come under increased scrutiny due to perceived mishandling of situations such as the California wildfires and a false alarm that took place in Hawaii. In response, and to reaffirm how important and reliable the system is, FEMA will issue a new set of guidelines for IPAWS users in the coming months and into 2019.

Let’s take a closer look at IPAWS updates you can expect through 2019.



In business continuity, we have a tendency to focus on what’s wrong with our programs or organizations. However, it’s important that we also take time to recognize what we’re doing right.

Today’s post explains why this is worthwhile—and will also help you get started on identifying which parts of your business continuity management (BCM) program are actually in pretty good shape.



(TNS) - Guzzling the superheated waters of the Gulf of Mexico and tempted by a slack atmosphere, Hurricane Michael powered to a record-shattering Category 4 goliath Wednesday with an intensity that trounced some of the most elite cyclones in history.

Its growth from an unassuming tropical storm on Sunday to a 155-mph beast flirting with Cat 5 status was unexpected by meteorologists who watched astonished as Michael’s minimum sea level pressure ticked down to a mind-blowing 919 millibars at landfall.

That’s lower than 1992’s Hurricane Andrew and 2005’s Hurricane Katrina, ranking Michael 3rd in records dating to the 1800s for lowest minimum pressure at landfall, according to Colorado State University hurricane expert Phil Klotzbach.




The pros and cons of cloud storage are, to be sure, debated with great enthusiasm. For every advocate of public cloud storage, there appears to be a naysayer ready to run it down. For every dream migration of data to the cloud there appears to be a cloud nightmare lurking.

So what’s the real story? Is the cloud all bad or all good?

Like actual clouds, the answer is seldom black and white. Plenty of shades of gray are apparent when you look up to the sky or view the murky world of cloud storage. So let’s take a look at some of the primary cloud storage pros and cons.



Thursday, 11 October 2018 14:20

Cloud Storage Pros and Cons

(TNS) - After transforming from a major to a mighty storm in a matter of hours, a ferocious Hurricane Michael roared ashore east of Panama City on Wednesday with pounding 155 mph winds.

The storm, the first Category 4 hurricane ever to hit the Panhandle, made landfall at 1 p.m. Central Daylight Time, five miles northwest of Mexico Beach, a quiet beach town with a population of about 1,200. The wind speed fell just 2 mph short of a more dangerous Category 5.

As it churns inland, National Hurricane Center forecasters warn that the back half of the hurricane will continue to spread dangerous storm surge and winds. Flood waters could reach as high as 14 feet in some places.



In recent weeks, we’ve seen several IT failures that left thousands of customers frustrated across the country.

First, Cisco Webex experienced a complete outage, and users were still experiencing intermittent issues 24 hours later. The interruption was apparently caused by a rogue script that began deleting the virtual machines hosting the service. As Cisco put it, “This was a process issue, not a technical issue.”

Then Verizon experienced voice, text, and data service interruptions for several hours affecting states across the South and Midwest, while also stretching into the northeast. The outage appeared to last about three hours.

To cap it off, a “technology issue” temporarily grounded Delta aircraft nationwide in the latest airline outage. Tweets from the company said the “computer tracking system” was down, and that the issues were system-wide. The outage lasted for at least an hour.

For all three companies, customer complaints spread quickly on social media, reinforced by media coverage. As we evolve with various technologies in a super-fast technology world, we expect and demand zero interruption and 100 percent connectivity.



Making Your Workplace a Harassment-Free Zone

Too many workplaces have allowed sexual harassment to continue unchecked for years. In some cases, company executives have been the worst perpetrators of these toxic cultures. Companies that fail to take notice and make changes have set themselves up for internal upheaval and legal claims that may threaten the business. Fortunately, employers can turn toxic cultures around by following four simple steps.

What started in July 2016 with Gretchen Carlson’s sexual harassment lawsuit against Fox News CEO Roger Ailes has snowballed over the last two years into hundreds of similar allegations against politicians, entertainers, media personalities and corporate executives. Recent admissions have revealed that some organizations’ leaders knowingly allowed workplace harassment to continue unchecked for years. Only now – as terminations and calls for resignations reverberate – have these workplaces begun re-evaluating their policies and training to effect an overall change in workplace culture.

If your company has not already begun examining its organizational environment and policies to ensure a harassment-free workplace, the time is now.



Wednesday, 10 October 2018 15:01

Dump The Toxic Culture

One Identity has released new global research findings that uncover a widespread inability to implement basic best practices across identity and access management (IAM) and privileged access management (PAM) security disciplines. These failures expose organizations to data breaches and other significant security risks.

Conducted by Dimensional Research, One Identity’s ‘Assessment of Identity and Access Management in 2018’ study polled more than 1,000 IT security professionals from mid-size to large enterprises on their approaches, challenges, biggest fears and technology deployments related to IAM and PAM.

Among the survey’s most surprising findings are that nearly one-third of organizations are using manual methods or spreadsheets to manage privileged account credentials, and one in 20 IT security professionals admit they have no way of knowing if a user is fully deprovisioned when they leave the company or change their role. Additionally, a single password reset takes more than 30 minutes to complete in nearly 1 in 10 IT environments.

These and other findings paint a bleak picture of how many organizations approach IAM and PAM programs, indicating that critical and highly sensitive systems and data are not properly protected; user productivity is hindered; and potential threats from mismanaged access remain a major challenge.
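The deprovisioning gap the survey highlights is often addressed by periodically reconciling the HR roster of active employees against the accounts still enabled in each downstream system. A minimal sketch of that reconciliation, with all system names and accounts invented for illustration:

```python
# Hypothetical reconciliation: find accounts that remain enabled in
# downstream systems even though the user is no longer in HR's
# active-employee roster (i.e., was never fully deprovisioned).
hr_active = {"alice", "bob"}

system_accounts = {
    "vpn":   {"alice", "bob", "carol"},  # carol left last month
    "crm":   {"alice", "carol"},
    "email": {"alice", "bob"},
}

orphaned = {
    system: sorted(accounts - hr_active)
    for system, accounts in system_accounts.items()
    if accounts - hr_active
}

print(orphaned)  # accounts that should have been removed, by system
```

Run on a schedule, a report like this turns "no way of knowing" into a concrete list of accounts to disable.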



Backgrounders and fact sheets—background information and statistics on the insurance trends and conditions in selected hurricane-prone coastal states.

Via Insurance Information Institute ...


Wednesday, 10 October 2018 14:54

Hurricane Fact Files and Market Share by State

(TNS) — Both men died when they fell — one from a ladder, the other from a roof — while they were cleaning up after Hurricane Florence even as the storm was still causing rivers to rise in Eastern North Carolina.

Gov. Roy Cooper announced last Tuesday that they were the 38th and 39th people in North Carolina to lose their lives as a result of the storm. It had taken 10 days for the two men to officially be added to the storm’s death toll.

The lag time illustrates how difficult it can be to fully account for the number of deaths caused by a natural disaster as large and widespread as a hurricane. That they were added to the list at all shows how important that full accounting is.



If you live in the projected path of Hurricane Michael, you should be prepping your home and finalizing your emergency and evacuation plans. The storm has grown to Category 2 – and there are concerns that it’ll be a Category 3 by landfall. 

Here are some Dos and Don’ts to consider for prepping and riding out the storm. 



Wednesday, 10 October 2018 14:51


A Holistic Approach to Addressing Harassment


As cultural movements continue to raise awareness about misconduct, compliance and ethics programs are putting more power behind training their employees on how to identify and report harassment in the workplace. Despite this increased emphasis, less than half of employees who observed harassment reported it last year, sending a signal that there’s more organizations must do to reduce this risk.


While harassment has been one of the most commonly observed types of misconduct for employees over the past decade,[i] recent high-profile revelations of sexual harassment have increased the attention this type of misconduct receives. As a result, it has renewed public discourse on the topic and created greater urgency to address the issue at the CEO and board level.

To mitigate the risk harassment can present to the organization, and to stay in front of reputational failures, compliance and ethics programs are putting more power behind training their employees on how to identify and report harassment in the workplace, with over half of compliance and ethics programs already requiring most of their employees to complete anti-harassment training annually[ii]. However, despite this increased attention and training, only 46 percent of employees who observed harassment reported it in 2017[iii] – an indication that there is much more that compliance and ethics programs can do to reduce this risk to their organizations.



Wednesday, 10 October 2018 14:49

Beyond Anti-Harassment Training

Organizations often make a false assumption as they approach the start of a Business Impact Analysis (BIA) or recovery plan building: they assume that staff members can define the business processes they are engaged in as part of normal operations. The truth is that many people struggle to define the processes they regularly perform at the proper level, despite having been part of an organization for many years and in the same role for a long time. A process inventory is an essential prerequisite for a BIA or for plan building. Failing to define processes at the appropriate level will yield inaccurate BIA results and could result in the creation of ineffective recovery plans.

The most common error in defining processes is elevating the individual tasks involved in performing a process to the level of a process. If tasks are defined as processes, subject matter experts will struggle to identify impacts at such a micro level of activity. Conversely, when processes are defined at excessively high levels of operation, the impact of a disruption can be exaggerated, since all activity at such elevated levels is inflated.

Plan building is similarly problematic when processes are not properly defined. Plans scoped at the task level may fail to account for the complexity of operations and risk missing critical aspects of the recovery. Planning at upper levels of the organization can result in over-sized plans that are difficult to execute and impossible to exercise effectively.
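The process-versus-task distinction can be made concrete in the process inventory itself. A hypothetical sketch (all process and task names invented) in which BIA questions are scoped to processes, while tasks are retained only as supporting detail:

```python
from dataclasses import dataclass, field

# A process is the unit of BIA analysis; its tasks are captured as
# supporting detail and are never promoted to processes in their own right.
@dataclass
class Process:
    name: str
    department: str
    tasks: list = field(default_factory=list)  # steps, not BIA units

inventory = [
    Process("Accounts payable", "Finance",
            tasks=["Receive invoice", "Match to PO", "Schedule payment"]),
    Process("Payroll processing", "Finance",
            tasks=["Collect timesheets", "Run payroll", "Issue payslips"]),
]

# The BIA iterates over processes, not tasks:
bia_scope = [p.name for p in inventory]
print(bia_scope)
```

Asking impact questions about "Accounts payable" yields meaningful answers; asking them about "Match to PO" is the micro-level trap described above.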



Wednesday, 10 October 2018 14:47

Step One


In the wake of the recent Facebook and Cambridge Analytica scandal, data and personal privacy matters have come to the forefront of consumers’ minds. When an organization like Facebook falls into trouble, big data is often blamed, but is big data actually at fault? When tech companies utilize and contract with third-party data mining companies, aren’t these data collection firms doing exactly what they were designed to do?

IBM markets its Watson as a way to get closer to knowing about consumers; however, when it does just that, it is perceived as an infringement on privacy. In the face of data privacy and security violations, companies have become notorious for pointing the finger elsewhere. Like any other scapegoat, big data has become an easy way out: a chance for the company to appear to side with, and support, the consumer. Yet many are long overdue in making changes that actually protect and support the customer, and now find themselves needing to earn back lost consumer trust. Companies looking to please their customers publicly agree that big data is the issue, but behind the scenes may be doing little or nothing to change how they interact with these organizations. By pushing the blame to these data companies, they redirect the problem, casting both their company and consumers as victims of something beyond their control.

For years, data mining has been used to help companies better understand their customers and market environment. Data mining is a means to offer insights from business to buyer or potential buyer. Before companies and resources like Facebook, Google, and IBM’s Watson existed, customers knew very little about their personal data. More recently, the general public has begun to understand what data mining actually is, how it is used, and be aware of the data trail they leave through their online activities.

Hundreds of articles have been written about data privacy, additional regulations to protect individuals’ data rights have been proposed, and some have even been signed into law. With the passing of new data legislation, customers are going as far as filing lawsuits against companies that may have been storing personally identifiable information without their knowledge or proper consent.

State regulations have increasingly propelled interest in data privacy, calling for what some believe might develop into a national privacy law. Because of this, organizations are starting to take notice and have begun implementing policy changes to protect themselves from scrutiny. Businesses are taking a closer look at changing trends within the marketplace, as well as the public’s growing awareness of how their data is being used. Direct consumer-facing brands need to be most mindful of having appropriate security frameworks in place. Perhaps the issue among consumers is not the data collected, but how it is presented back to them or shared with others.

Generally speaking, consumers like content and products that are tailored to them. Many customers don’t mind data collection, marketing retargeting, or even promotional advertisements if they know they are benefiting from them. As consumers and online users, we often willingly give up our information in exchange for free access and convenience, but have we thoroughly considered how that information is being used, brokered, and shared? If we did, would we pay more attention to who and what we share online?

Many customers have expressed unease when their data is incorrectly interpreted and relayed. Understandably, they are irritated by irrelevant communications and become fearful when they lack trust in the organization behind the message. Is their sensitive information now sitting in a databank at heightened risk of breach? When a breach or alarming infraction occurs, customers, current and prospective alike, grow more concerned.

The general public has become acquainted with the positive aspects of big data, to the point where they expect retargeted ads and customized communications. On the other hand, even when they have agreed to the terms and conditions, consumers are quick to blame big data when something goes wrong, rather than the brand they chose to entrust with their information.

About Greg Sparrow:

Greg Sparrow, Senior Vice President and General Manager at CompliancePoint, has over 15 years of experience with information security, cyber security, and risk management. His knowledge spans multiple industries and entities including healthcare, government, card issuers, banks, ATMs, acquirers, merchants, hardware vendors, encryption technologies, and key management.


About CompliancePoint:

CompliancePoint is a leading provider of information security and risk management services focused on privacy, data security, compliance and vendor risk management. The company’s mission is to help clients interact responsibly with their customers and the marketplace. CompliancePoint provides a full suite of services across the entire life cycle of risk management using a FIND, FIX & MANAGE approach. CompliancePoint can help organizations prepare for critical needs such as GDPR with project initiation and buy-in, strategic consulting, data inventory and mapping, readiness assessments, PIMS & ISMS framework design and implementation, and ongoing program management and monitoring. The company’s history of dealing with both privacy and data security, its inside knowledge of regulatory actions, and its combination of services and technology solutions make CompliancePoint uniquely qualified to help clients achieve both a secure and compliant framework.

As cultural movements continue to raise awareness about misconduct, compliance and ethics programs are putting more power behind training their employees on how to identify and report harassment in the workplace. Despite this increased emphasis, less than half of employees who observed harassment reported it last year, sending a signal that there’s more organizations must do to reduce this risk.

While harassment has been one of the most commonly observed types of misconduct over the past decade, recent high-profile revelations of sexual harassment have increased the attention this type of misconduct receives. As a result, public discourse on the topic has been renewed, creating greater urgency to address the issue at the CEO and board level.

To mitigate the risk harassment can present to the organization, and to stay in front of reputational failures, compliance and ethics programs are putting more power behind training their employees on how to identify and report harassment in the workplace, with over half of compliance and ethics programs already requiring most of their employees to complete anti-harassment training annually. However, despite this increased attention and training, only 46 percent of employees who observed harassment reported it in 2017 – an indication that there is much more compliance and ethics programs can do to reduce this risk to their organizations.



Tuesday, 09 October 2018 14:24

A Holistic Approach to Addressing Harassment

Adesh Rampat explains why he believes that the definition of operational risk needs updating to take into account the development of cyber security related risks, and including aspects of internal controls and user awareness.

The definition of operational risk varies but generally covers the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. However, I want to take a fresh look at this general definition and present what I believe operational risk should reflect, taking into account all the cyber security related risks that are currently plaguing organizations.

We know that operational risk exists in every organization, regardless of size. What matters, however, are two critical areas that need to be included in the operational risk definition:

  • Internal controls
  • User awareness



(TNS) — Nearly a month after Hurricane Florence created unprecedented flooding, the Grand Strand in South Carolina is in another storm's path.

Hurricane Michael — upgraded from a tropical storm on Monday — is in the Gulf of Mexico and expected to hit the Florida Panhandle on Wednesday. The effects of the storm will reach Horry County, S.C., most likely on Wednesday or Thursday of this week.

Predictions from the National Weather Service say the area can expect 2 to 4 inches of rain and it has a good chance of seeing tropical storm wind speeds. There is also a threat of tornadoes, especially if the storm moves to the west of Myrtle Beach.

Once the storm moves onto land, it will start to slow down and will most likely be significantly weaker than it is now, Steve Pfaff, warning coordination meteorologist with the National Weather Service in Wilmington, said.



2018 marks the 100-year anniversary of the 1918 influenza pandemic, which killed ~50 million people worldwide. The severity of this pandemic resulted from a complex interplay between viral, host, and societal factors. Here, we review the viral, genetic and immune factors that contributed to the severity of the 1918 pandemic and discuss the implications for modern pandemic preparedness. We address unresolved questions of why the 1918 influenza H1N1 virus was more virulent than other influenza pandemics and why some people survived the 1918 pandemic and others succumbed to the infection. While current studies suggest that viral factors such as haemagglutinin and polymerase gene segments most likely contributed to a potent, dysregulated pro-inflammatory cytokine storm in victims of the pandemic, the shift in case fatality for the 1918 pandemic toward young adults was most likely associated with the host's immune status. Lack of pre-existing virus-specific and/or cross-reactive antibodies and cellular immunity in children and young adults likely contributed to the high attack rate and rapid spread of the 1918 H1N1 virus. In contrast, the lower mortality rate in the older (>30 years) adult population points toward the beneficial effects of pre-existing cross-reactive immunity. In addition to the role of humoral and cellular immunity, there is a growing body of evidence to suggest that individual genetic differences, especially involving single-nucleotide polymorphisms (SNPs), contribute to differences in the severity of influenza virus infections. Co-infections with bacterial pathogens (and possibly measles and malaria), co-morbidities, malnutrition and obesity are also known to affect the severity of influenza disease, and likely influenced 1918 H1N1 disease severity and outcomes.
Additionally, we also discuss the new challenges, such as changing population demographics, antibiotic resistance and climate change, which we will face in the context of any future influenza virus pandemic. In the last decade there has been a dramatic increase in the number of severe influenza virus strains entering the human population from animal reservoirs (including highly pathogenic H7N9 and H5N1 viruses). An understanding of past influenza virus pandemics and the lessons that we have learnt from them has therefore never been more pertinent.



What to Do When a Deal Falls Apart

What happens after a planned deal falls apart? In the process of seeking approval, a wealth of sensitive company information is transferred between entities – from financials to intellectual property. This article explores how a company can properly recover following the dissolution of a merger.

For a company closing an acquisition, it’s a heady time. Months of due diligence, back-and-forth negotiations and organizational strategy give way to the challenges of integration. But for every company celebrating the next chapter for their business, dozens more are sent back to the drawing board after a potential deal falls apart. This is more common than one might think – if 200 companies hit the deal pipeline, only about 40 will reach the letter of intent stage. Of those 40, just 15 might reach the deal finish line, leaving everyone else trying to put the genie back in the bottle.

Those who are back at the drawing board – whether the deal would’ve been industry-changing or one that simply furthered a company’s goals – all face the same problem. The former buyer – possibly a direct competitor – has just seen a lot of proprietary information. You can’t erase memories, but how can you ensure that they no longer have access to the spreadsheets, financial statements and internal knowledge that are all part of the due diligence process? Data security becomes critical for both the buyer and the seller. The risk of information leaks must be immediately mitigated, particularly if your deal has reached the letter of intent stage – a point at which vast amounts of sensitive information have been exchanged.

Of course, everyone has signed non-disclosure agreements, but the information is out there and it’s time to eliminate the exposure as quickly as feasible.



Tuesday, 09 October 2018 14:13

Mitigating Data Risk In M&A Transactions

Emergency notification systems (ENS) are not just for government. You most likely already know that organizations can implement systems to send alerts and notifications to their employees, both in emergencies and in the course of their day-to-day work.

In today’s post, I’ll discuss some of the types of electronic alert systems that are available to business, sketch out their benefits, and point out some of the things to be cautious about in using such platforms.



One of my favorite George Carlin quotes is, “I never worry that ALL hell will break loose. My concern is that a PART of hell will break loose. It’ll be much harder to detect.”

I have always loved that quote because it’s true in the lives of crisis management professionals.

Many times, we write our plans and develop our procedures for unmistakable crises (ALL hell breaking loose). But it’s been my experience that when only a PART of hell breaks loose, it can really challenge our overall readiness. Our plans and procedures tend to be binary – on or off, black or white. But what about those grey areas? Are you ready for half a crisis?





Successful companies understand they have to innovate to remain relevant in their industry. Few innovations are more buzzworthy than machine learning (ML).

The Accenture Institute for High Performance found that at least 40 percent of the companies surveyed were already employing ML to increase sales and marketing performance. Organizations are using ML to raise ecommerce conversion rates, improve patient diagnoses, boost data security, execute financial trades, detect fraud, increase manufacturing efficiency and more.

When asked which IT technology trends will define 2018, Alex Ough, CTO Architect at Sungard AS, noted that ML “will continue to be an area of focus for enterprises, and will start to dramatically change business processes in almost all industries.”

Of course, it’s important to remember that implementing ML in your business isn’t as simple as sticking an educator in front of a classroom of computers – particularly when companies are discovering they lack the skills to actually build machine learning systems that work at scale.

Machine learning, like many aspects of digital transformation, requires a shift in people, processes and technology to succeed. While that kind of change can be tough to stomach at some organizations, the alternative is getting left behind.

Check out more IT cartoons.


Big data analytics projects are ubiquitous. According to one survey, 37.2% of executives report their organizations have invested more than $100MM on big data analytics initiatives within the last several years, with 6.5% of organizations investing over $1B. While 81% of executives qualify these efforts as successful, there is less certainty about these projects achieving measurable business value and widespread adoption. The same survey finds that only 37% of respondents report success in creating data-driven cultures. Other findings are less optimistic – Gartner estimates the failure rate of similar initiatives is closer to 85%.

Why do big data analytics projects fail so often? Although there are likely many causes, perhaps the largest factor is a poor understanding of the business use case. All too often, the tendency, when ramping up big data analytics projects, is to sift through the data to uncover problems. This approach may yield interesting results, but rarely produces a robust business case.

Instead of starting with a technology project and trying to produce business results, a better approach starts with the business problem and uses technology to solve it. This type of customer-centric approach, where you define the business problem up-front, allows you to understand how users will interact with and use the data insights. Only then can your data scientists design and develop a targeted solution.



The first 24 hours is the critical period when it comes to responding effectively to a crisis at any organization.

In today’s post, we’ll lay out some of the things you can do, from the business continuity standpoint, to enable you to “win” this critical period, when and if disaster strikes the company you work for.

Today’s post is inspired by a presentation called “The Two-Minute Drill,” which MHA Consulting and BCMMETRICS CEO Michael Herrera gave at the Disaster Recovery Journal Conference in Phoenix last weekend.



It’s October – and that means it’s National Cybersecurity Awareness Month.

The National Cyber Security Alliance has dedicated the first week to making homes safe from hacking. And for good reason. Families are increasingly living connected lives: on social media, in video games, and through “smart” home technology like connected thermostats or burglar alarms.

So-called “smart tech” (otherwise known as the Internet of Things) is only getting more popular: three out of five Americans have connected technology in their homes, according to a recent Insurance Information Institute and J.D. Power 2018 Consumer Cyber Insurance and Security Spotlight Survey.

Smart tech is convenient and efficient. Why not buy a thermostat that can automatically adjust the temperature to save you money?



Wednesday, 03 October 2018 19:59


The Impact of BPM on Organizations

Regulations and compliance are some of the most important topics of discussion in the marketplace today, yet dozens of companies are hit with fines simply because they are unable to prove they are in compliance. Business process management (BPM) is enabling companies to more adequately manage their vendors, employees and C-suite, and it assists with cost avoidance, efficiency, risk management and compliance – all of which are topics that belong in the boardroom.

If you talk to any process management or BPM vendor, they are likely to tell you that one of their biggest challenges is access to the C-Suite. This is partly due to the way vendors historically pitch their solutions as technology and features, leading them to be seen as just another IT solution for management to consider. However, it is also because many executives still fail to understand that process management is actually a valuable tool to help them and their organization perform better and mitigate risk.

A number of media publications have reported that regulators across the globe are becoming more aggressive in their inquiries and are further increasing fines. In the United States alone, Wells Fargo was fined over $1 billion for failed compliance, and Bancorp was fined an additional $600 million for systemic deficiencies in its anti-money laundering monitoring systems, which resulted in gaps and “a significant amount of unreported suspicious activity.” Meanwhile, a U.K.-based utility company was fined simply because it was unable to prove that it was operating in a compliant manner. Of course, the automotive industry is also impacted, as many manufacturers are facing actions against them for emissions failings.



Wednesday, 03 October 2018 15:02

Why Process Management Is A Boardroom Issue
