
Summer Journal

Volume 32, Issue 2


Industry Hot News


A hidden weakness in many organizations is that business continuity data is kept in different places, which causes confusion and can hinder access during an emergency.

In today’s post, we’ll take a look at some of the issues surrounding these so-called data silos, including the problems they cause and the benefits of eliminating them.

Many organizations of all sizes function in silos, where each business unit has its own set of compliance regulations, vendors, and processes. 

This makes sense for many functions, but there is one area where it causes significant problems: business continuity (BC) and IT/disaster recovery.

This is due to the unique nature of BC and IT/DR data and the situations in which you need to access it. 



Cloud storage often represents the first way that companies adopt the cloud. They leverage it to archive data, to serve as a backup target, to share files, or for long-term data retention. These approaches offer a low-risk means for companies to get started in the cloud. However, with more cloud storage offerings available than ever, companies need to ask and answer more pointed questions to screen them.

50+ Cloud Storage Offerings

As recently as a few years ago, one could count on one hand the number of cloud storage offerings. Even now, companies may find themselves hard pressed to name more than five or six of them.

The truth of the matter is companies have more than 50 cloud storage offerings from which to choose. These offerings range from general-purpose cloud providers such as Amazon, Microsoft, and Google to specialized providers such as Degoo, hubiC, Jottacloud, and Wasabi.



With more than 90% of Millennials saying flexible working is important to them, firms that don’t trust their staff to work flexibly and responsibly risk alienating existing workers and deterring potential new ones.


The world of work has undergone a seismic shift in recent years.

Long gone are the days when pretty much everyone would get off the train or bus and pile into office blocks in formal clothes, ready to clock on at 9am and head for the exit door at 5pm sharp.

Changes in both technology and working culture have led to a boom in flexible working, with a massive 91% of Millennials surveyed saying they felt that flexible working was important to them.

It’s not just that particular age group, either. Three-quarters of employees surveyed in the UK said the option of flexible working would make a role more attractive, while almost a third even said that they’d prefer the option of flexible working over a pay rise.



No organization can prioritize and mitigate hundreds of risks effectively. The secret lies in carefully filtering out the risks, policies, and processes that waste precious time and resources.

In security, what we don't look at, don't listen to, don't evaluate, and don't act upon may actually be more important than what we do. This may sound counterintuitive at first, but I assure you that it is not. The truth is that too often the cybersecurity noise level — all the data points constantly bombarding us — creates a sensory overload that impedes our ability to think clearly and act. Here are 10 places where you can start to filter out the noise.



Healthcare organizations (HCOs), once cloud laggards, are embracing public and private cloud services to address a wide range of business problems. A recent survey revealed that 76 percent of HCOs anticipate using public cloud services within the year, putting public cloud vendors on pace to be the most popular cloud services providers in healthcare. Why? HCOs are turning to cloud vendors to address a wide range of business problems, but the three most prominent, highlighted below, are changing the game in healthcare.



The development follows speculation and concern among security experts that the attack group would expand its scope to the power grid.

The attackers behind the epic Triton/Trisis attack that in 2017 targeted and shut down a physical safety instrumentation system at a petrochemical plant in Saudi Arabia now have been discovered probing the networks of dozens of US and Asia-Pacific electric utilities.

Industrial-control system (ICS) security firm Dragos, which calls the attack group XENOTIME, says the attackers actually began scanning electric utility networks in the US and Asia-Pacific regions in late 2018 using similar tools and methods the attackers have used in targeting oil and gas companies in the Middle East and North America.

The findings follow speculation and concern among security experts that the Triton group would expand its scope into the power grid. To date, the only publicly known successful attack was that of the Saudi Arabian plant in 2017. In that attack, the Triton/Trisis malware was discovered embedded in a Schneider Electric customer's safety system controller. The attack could have been catastrophic, but an apparent misstep by the attackers inadvertently shut down the Schneider Triconex Emergency Shut Down (ESD) system.



Russ Berland considers disruptive risks – like the earthquake and resulting tsunami that caused the Fukushima disaster in 2011 – and how AI can help to identify and manage those risks before they mature into overwhelming obstacles.

In March 2011, an 8.9 magnitude earthquake and a 23-foot tsunami hit Japan. Thousands of Japanese people tragically lost their lives, and hundreds of thousands of people were forced to relocate away from the crippled Fukushima nuclear power station.

At the same time, the quake sent its own shockwaves throughout the supply chain of the auto industry. Japanese production of Honda and Toyota vehicles was suspended, and so was production of the electronic chips and boards that appeared in cars made throughout the world. General Motors was forced to suspend production of pickup trucks at its Shreveport, Louisiana assembly plant due to parts shortages. Other U.S. manufacturers, Ford and Chrysler, took weeks to stabilize their parts supplies and keep production lines going at normal rates. In retrospect, it was evident that these car manufacturers were too dependent on the region of Japan affected by the quake and tsunami. But a car has thousands of parts; how can a business know what will affect its supply chain and its ability to continue to do business when there are so many things in need of our attention?

In our supply chains, we are dependent on others to provide our businesses’ inputs: raw materials, parts, equipment, tools, supplies. We need everything from paper clips to toilet paper to make our businesses run, but we face risks in our supply chain.



New analysis shows widespread DNS protection could save organizations as much as $200 billion in losses every year.

DNS protection could prevent approximately one-third of the total losses due to cybercrime – which translates into billions of dollars potentially saved.

According to "The Economic Value of DNS Security," a new report published by the Global Cyber Alliance (GCA), DNS firewalls could annually prevent between $19 billion and $37 billion in losses in the US and between $150 billion and $200 billion in losses globally. GCA used data about cybercrime losses from the Council of Economic Advisers and the Center for Strategic and International Studies as the basis for its estimates of how much DNS protection, such as a DNS firewall, could save the economy.
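As a rough back-of-the-envelope check (an illustrative sketch, not part of the GCA report), the "one-third of losses" figure can be inverted to see what total cybercrime losses the quoted savings ranges imply:

```python
# Illustrative sketch: invert GCA's "DNS firewalls prevent ~1/3 of
# cybercrime losses" figure to estimate the implied total losses.
# The savings ranges come from the article; the fraction is approximate.

PREVENTABLE_FRACTION = 1 / 3

def implied_total_losses(savings_low, savings_high,
                         fraction=PREVENTABLE_FRACTION):
    """Given a savings range attributed to a fraction of losses,
    return the implied range of total losses (same units)."""
    return savings_low / fraction, savings_high / fraction

us_total = implied_total_losses(19e9, 37e9)        # US: $19B-$37B saved
global_total = implied_total_losses(150e9, 200e9)  # Global: $150B-$200B saved

print(f"Implied US losses:     ${us_total[0]/1e9:.0f}B-${us_total[1]/1e9:.0f}B")
print(f"Implied global losses: ${global_total[0]/1e9:.0f}B-${global_total[1]/1e9:.0f}B")
```

Under that assumption, the savings ranges correspond to roughly $57B-$111B in total US losses and $450B-$600B globally, which is broadly in line with the report's framing that the savings represent about a third of overall cybercrime costs.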

"The benefit from using a DNS firewall or protective DNS so exceeds the cost that it's something everyone should look at," says Philip Reitinger, GCA president and CEO. In many cases, he says, the DNS protection service or DNS firewall will be available at no cost to purchase or license.



(TNS) — An uptick in the frequency of severe weather is a sure sign that summer is near. And for the Southeast, that means hurricane season is quickly approaching.

Last year, two major hurricanes — Florence and Michael — caused significant damage in the Carolinas and Florida Panhandle. Forecasters are again calling for a slightly below-average hurricane season in 2019. However, as many of us already know, it only takes one bad storm to cause damage.

Hurricane season began June 1 and extends through November 30. We have already seen one named storm develop in May. Unfortunately, too many residents in the region still do not give the beginning of hurricane season enough thought — or take action to prepare their homes or their finances.

Floridians should take a few important steps now to give themselves peace of mind and the confidence that they're prepared financially for a major storm or emergency event.



Belgium's Asco has shut down manufacturing around the world, including the US, in response to a major cybersecurity event, but what happened isn't clear.

A cyberattack has shut down international production at Asco, a company in Zaventem, Belgium, that is a significant supplier of parts to airline manufacturers around the world. According to the company, it has shut down all manufacturing operations in Belgium, Germany, Canada, and the US, sending more than 1,000 of its 1,400 global workers home.

While many reports are calling the attack ransomware, Asco has not released information about the incident other than to confirm the effects and report that law enforcement has been notified.

Larry Trowell, principal security consultant at Synopsys, counsels patience before leaping to conclusions about the attack. "The short and sweet of it is that we don't yet know what has happened," he says. "Looking at the most recent articles, there's no proof that it was in fact a ransomware attack. Since the police and local officials were called, it may be malware or even a direct attack."



Recent research from Fenergo reveals that banks’ C-suite executives overwhelmingly agree: Underinvestment in technology negatively impacts client onboarding and retention – yet a large percentage have not invested in solutions that would help. Fenergo’s Greg Watson examines the disconnect.

The rise of fintech and regtech startups has opened up new possibilities for the finance industry, shaking up traditional ways of thinking and completely reshaping financial services as we know them. New technologies, such as big data analytics and artificial intelligence (AI), can completely transform financial institutions, increasing efficiencies and improving client life cycle management (CLM).

However, not all banks are getting on board. A recent survey showed that legacy infrastructure is preventing one in five banks from investing in new, disruptive technologies. Many banks are feeling left behind in the midst of technological innovation. The lack of investment in new technology, coupled with maturing infrastructures, creates barriers to digital transformation. Instead, banks end up stuck with old, manual processes, which negatively impacts operational efficiency, client experience and, perhaps most urgently, a bank’s regulatory compliance positioning. And it negatively impacts the bottom line too; banks can end up spending 80 percent of their budgets on maintaining and upgrading legacy technology solutions.

The same study also showed that 33 percent of those surveyed have not invested in any technology to improve client onboarding at all, despite almost every single respondent (99 percent) agreeing that underinvestment in technology directly impacts client onboarding and retention.



Cutting back on the number of security tools you're using can save money and leave you safer. Here's how to get started.

Industry reports vary, but experts estimate that the modern CISO uses somewhere between 55 and 75 discrete security products. Vendors are often guilty of overpromising and underdelivering — the reality rarely lives up to the marketing. This puts CISOs in an ironic situation: often, the tool they bought to make their lives easier ends up causing more headaches.

This is an endemic issue, but what do you do when you have too many tools that integrate poorly, require different expertise, and provide too much data but not an overall view to the security risk level? Consolidation sounds attractive. After all, what CISO wouldn't want to reduce clutter, cut costs, and simplify procedures — but where to start?



Friday, 14 June 2019 15:03

The CISO's Drive to Consolidation

Everbridge is the only U.S.-based emergency notification provider to obtain C5 accreditation for cloud operations

BURLINGTON, Mass.--Everbridge, Inc. (NASDAQ: EVBG), the global leader in critical event management and enterprise safety software applications to help keep people safe and businesses running, today announced that it has completed its assessment for the Cloud Computing Compliance Controls Catalogue (C5) set out by the Federal Office for Information Security in Germany, also known as Bundesamt für Sicherheit in der Informationstechnik (BSI). Everbridge is the first and only U.S.-based emergency notification provider to achieve BSI C5 attestation. This accreditation assures that the Everbridge Critical Event Management platform has undergone a rigorous third-party audit to ensure it complies with all security requirements defined by C5.

Everbridge’s commitment to applying the highest levels of compliance in controls and security is shown by meeting the C5 standard that serves not only as a benchmark for the German market, but also increasingly as a measure for institutions across Europe and Asia. Using Everbridge’s C5 audit report, customers can effortlessly evaluate how legal regulations (e.g. data privacy), their own policies, or the threat environment relate to their use of the Everbridge platform.

Javier Colado, Senior Vice President, International at Everbridge, commented, “Everbridge is fully committed to the highest standards in all aspects of our operations no matter where that might be in the world. We are proud to be the first U.S.-based critical event management provider to meet the stringent C5 compliance requirements – something that should increase the trust placed in us by our growing client base not only in Germany, but also around the world.”

C5 is intended primarily for professional cloud service providers, their auditors, and their customers. It sets out 17 distinct control requirements that cloud providers must either comply with or address by meeting defined minimum standards. It is a required assessment for working with the public sector in Germany and is increasingly being adopted by the private sector.

In addition to obtaining C5 compliance – which covers the Everbridge Critical Event Management platform both in the U.S. and EU – Everbridge is also ISO 27001 certified and SSAE18 SOC 2/SOC 3 compliant. C5 is a further seal of quality that underlines the commitment of Everbridge to growing its presence around the world and ensuring that it always operates at market-leading standards.

About Everbridge
Everbridge, Inc. (NASDAQ: EVBG) is a global software company that provides enterprise software applications that automate and accelerate organizations’ operational response to critical events in order to keep people safe and businesses running. During public safety threats such as active shooter situations, terrorist attacks or severe weather conditions, as well as critical business events including IT outages, cyber-attacks or other incidents such as product recalls or supply-chain interruptions, over 4,500 global customers rely on the company’s Critical Event Management Platform to quickly and reliably aggregate and assess threat data, locate people at risk and responders able to assist, automate the execution of pre-defined communications processes through the secure delivery to over 100 different communication devices, and track progress on executing response plans. The company’s platform sent over 2.8 billion messages in 2018 and offers the ability to reach over 500 million people in more than 200 countries and territories, including the entire mobile populations on a country-wide scale in Sweden, the Netherlands, the Bahamas, Singapore, Greece, and a number of the largest states in India. The company’s critical communications and enterprise safety applications include Mass Notification, Incident Management, Safety Connection™, IT Alerting, Visual Command Center®, Public Warning, Crisis Management, Community Engagement™ and Secure Messaging. Everbridge serves 9 of the 10 largest U.S. cities, 9 of the 10 largest U.S.-based investment banks, all 25 of the 25 busiest North American airports, six of the 10 largest global consulting firms, six of the 10 largest global auto makers, all four of the largest global accounting firms, four of the 10 largest U.S.-based health care providers and four of the 10 largest U.S.-based health insurers. 
Everbridge is based in Boston and Los Angeles with additional offices in Lansing, San Francisco, Beijing, Bangalore, Kolkata, London, Munich, Oslo, Stockholm and Tilburg. For more information, visit www.everbridge.com, read the company blog, and follow on Twitter and Facebook.

Cautionary Language Concerning Forward-Looking Statements
This press release contains “forward-looking statements” within the meaning of the “safe harbor” provisions of the Private Securities Litigation Reform Act of 1995, including but not limited to, statements regarding the anticipated opportunity and trends for growth in our critical communications and enterprise safety applications and our overall business, our market opportunity, our expectations regarding sales of our products, and our goal to maintain market leadership and extend the markets in which we compete for customers. These forward-looking statements are made as of the date of this press release and were based on current expectations, estimates, forecasts and projections as well as the beliefs and assumptions of management. Words such as “expect,” “anticipate,” “should,” “believe,” “target,” “project,” “goals,” “estimate,” “potential,” “predict,” “may,” “will,” “could,” “intend,” variations of these terms or the negative of these terms and similar expressions are intended to identify these forward-looking statements. Forward-looking statements are subject to a number of risks and uncertainties, many of which involve factors or circumstances that are beyond our control. 
Our actual results could differ materially from those stated or implied in forward-looking statements due to a number of factors, including but not limited to: the ability of our products and services to perform as intended and meet our customers’ expectations; our ability to attract new customers and retain and increase sales to existing customers; our ability to increase sales of our Mass Notification application and/or ability to increase sales of our other applications; developments in the market for targeted and contextually relevant critical communications or the associated regulatory environment; our estimates of market opportunity and forecasts of market growth may prove to be inaccurate; we have not been profitable on a consistent basis historically and may not achieve or maintain profitability in the future; the lengthy and unpredictable sales cycles for new customers; nature of our business exposes us to inherent liability risks; our ability to attract, integrate and retain qualified personnel; our ability to successfully integrate businesses and assets that we may acquire; our ability to maintain successful relationships with our channel partners and technology partners; our ability to manage our growth effectively; our ability to respond to competitive pressures; potential liability related to privacy and security of personally identifiable information; our ability to protect our intellectual property rights, and the other risks detailed in our risk factors discussed in filings with the U.S. Securities and Exchange Commission (“SEC”), including but not limited to our Annual Report on Form 10-K for the year ended December 31, 2018 filed with the SEC on March 1, 2019. The forward-looking statements included in this press release represent our views as of the date of this press release. We undertake no intention or obligation to update or revise any forward-looking statements, whether as a result of new information, future events or otherwise. 
These forward-looking statements should not be relied upon as representing our views as of any date subsequent to the date of this press release.

All Everbridge products are trademarks of Everbridge, Inc. in the USA and other countries. All other product or company names mentioned are the property of their respective owners.

There are hundreds of thousands of goods and passenger lifts in use at any given moment around the world, safely transporting us up and down buildings thanks to some pretty rigorous standards. But the national or regional rules and regulations that apply to them are reflected in different standards, making international trade a problem. For the first time, an ISO International Standard just published will harmonize them all, enabling safety to improve and the technology to grow.

They started thousands of years ago as manually operated pulleys, such as those operated by slaves in the Roman Coliseum. Now some are breathtaking feats of engineering, such as the Gateway Arch in Missouri. Most, however, are less glamorous and just aim to transport us from one floor to another.

There are three main standards in use around the world to outline the mechanical and operational characteristics of lifts, all arriving at a similar level of safety and quality. However, they all have different requirements, and are tied to the economic area in which they operate, meaning they are not always accepted in other parts of the world.



(TNS) — If there’s a safety threat at Fort Lauderdale-Hollywood International Airport, there are now multiple ways for officials to warn visitors of the danger and to tell them what to do.

Passengers and other visitors will also be able to reach out quickly to airport managers if they find themselves in the middle of an emergency.

The changes are meant to prevent the mass confusion and chaos that forced the airport to close after five people were killed in the baggage claim area in January 2017. The shooter was captured in minutes, but passengers were stranded on the tarmac for hours.

“Now we have a way of notifying everybody with a smart phone if there’s an emergency,” Mayor Mark Bogen said. “Now they’ve put communication devices throughout the airport.”

Here’s a look at what’s changed:



This article by Dave Bermingham provides some practical guidance to help system and database administrators tasked with creating business continuity and disaster recovery plans. 

For those administrators who hate to plan, General George Patton offers this advice: “A good plan today is better than a perfect plan tomorrow.” No business continuity or disaster recovery plan can possibly address every possible event or set of circumstances, which is why both the BC and DR plans should continually evolve as lessons learned inform various improvements.

Providing the guidance needed to create a solid business continuity plan would fill a book. But because the business continuity plan forms the foundation for the disaster recovery plan, at least some discussion is warranted here. What follows is a summary of seven steps that have proven to be useful when creating and enhancing business continuity plans.



Organizations can't just rely on diverse and cutting-edge technologies to fight adversaries. They will also need people with diverse expertise and backgrounds.


A number of converging factors are changing enterprise cybersecurity, and as a result, we must change the way we approach it.

First, cybercriminals are becoming much better at penetrating organizations using nontechnical means. With social engineering and phishing techniques, they can bypass organizations' increasingly advanced defenses by manipulating insiders to gain access. Research shows that phishing and social engineering were the most common methods of compromise in 2018, serving as the conduit to the initial point of entry in more than 60% of security breaches in both cloud and point-of-sale environments, as well as in 46% of corporate and internal network breaches.

Second, the volume of data in organizations is growing exponentially and is increasingly stored in a more decentralized manner, making it difficult to ensure it's being optimally protected. Research firm IDC predicts the volume of data worldwide will grow tenfold by 2025 to 163 zettabytes, with the majority being created and managed by enterprises. This growth is being driven by the proliferation of artificial intelligence, the Internet of Things, and other machine-to-machine technologies in enterprises across all industries. This increase in new technologies means a larger attack surface, new attack vectors, and more points of vulnerability for organizations to secure.
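To put IDC's projection in perspective, a tenfold increase implies a steep compound annual growth rate. The sketch below is illustrative only; the eight-year horizon is an assumption inferred from the article's framing, not a figure stated in it:

```python
# Illustrative sketch: what annual growth rate does a tenfold increase
# in worldwide data volume imply? The 8-year horizon is an assumption
# (growth to 163 ZB "by 2025"), not stated in the article.

growth_factor = 10   # tenfold growth
years = 8            # assumed horizon

# Compound annual growth rate: factor^(1/years) - 1
cagr = growth_factor ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~33.4%
```

In other words, under these assumptions the data volume would need to grow by roughly a third every year, which helps explain why decentralized storage and the expanding attack surface are such pressing concerns.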



When it comes to protecting the electrical grid from sophisticated and constantly evolving cyber and physical attacks, the government and private sector are converging to defend against emerging threats.

The effort to secure critical infrastructure has been ongoing since the late 1990s, but the threat of a major attack — either a cyberattack or a physical one — remains viable. The level of security realized over the last several years, as attention becomes more and more fixed on the possibilities of these attacks, is difficult to measure.

“It’s difficult to make a definitive response to that,” said David London, senior director of the Chertoff Group. “Based on our clients and interaction with power system operators as well as our time collectively in government, there are more calories being expended on building both more preparedness and resilience and more unified situational awareness between industry and government, as well as addressing resilience objectives earlier in the supply chain.”



Advances in data science are making it possible to shift vulnerability management from a reactive to a proactive discipline.

Keeping pace with the endless deluge of security vulnerabilities has become one of the truly Sisyphean tasks for enterprise IT and security teams. Every operating system, device, and application is a potential source of vulnerabilities. This can include the traditional laptops and servers that power an organization but also extends to virtual machines, cloud-based assets, Internet of Things, mobile devices, and the list goes on.

To make matters worse, the rate at which new vulnerabilities are being discovered has accelerated. A quick check of the National Vulnerability Database (NVD) shows that historically the industry would expect to see around 5,000 to 7,000 common vulnerabilities and exposures (CVEs) released each year. However, in 2017 that number spiked to 14,649, continued to climb to 16,515 in 2018, and shows no signs of slowing down. These numbers are likely underrepresenting the total number of vulnerabilities in the real world given that many platforms are not covered by CVE Numbering Authorities (typically, these are vendors or researchers that focus on specific products).
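The acceleration in the figures above can be quantified with a quick calculation (an illustrative sketch using only the CVE counts cited in this article):

```python
# Illustrative sketch using the NVD CVE counts quoted above:
# a historical norm of 5,000-7,000 CVEs/year, then 14,649 in 2017
# and 16,515 in 2018.

historical_avg = (5000 + 7000) / 2   # midpoint of the historical range
cves = {2017: 14649, 2018: 16515}

# How many times the historical norm each recent year represents
for year, count in cves.items():
    print(f"{year}: {count} CVEs, {count / historical_avg:.1f}x the historical norm")

# Year-over-year growth from 2017 to 2018
yoy = (cves[2018] - cves[2017]) / cves[2017]
print(f"2017 -> 2018 growth: {yoy:.1%}")
```

By this measure, 2017 and 2018 each produced well over twice the historical volume of CVEs, with disclosure still growing at a double-digit annual rate — a pace that manual triage struggles to match.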



Wednesday, 12 June 2019 15:17

Predicting Vulnerability Weaponization

As they relate to IT, the functions of business continuity and information security have one common goal. That is to minimize the losses and maximize the uptime of the organization’s information systems before, during, and after an emergency situation. Business continuity and information security are interdependent and the teams must work well together if this goal is to be met.

However, when it comes to the teams working together, there tend to be various barriers to effective communication. Based on my experience, the average information security professional often thinks 90 and sometimes 180 degrees out of sync with the average business continuity professional, which leads to communication breakdowns and hinders overall IT progress. The following are the best ways I’ve found to effectively work with us information security types so that everyone is on the same page, systems stay up and running, and customers are kept happy.



(TNS) — The era of available electricity whenever and wherever needed is officially over in wildfire-plagued California.

Pacific Gas & Electric served stark notice of that “new normal” this past weekend when it pre-emptively shut power to tens of thousands of customers in five Northern California counties. The utility warned that it could happen again, perhaps repeatedly, this summer and fall as it seeks to avoid triggering disastrous wildfires.

The dramatic act has prompted questions and concerns: What criteria did PG&E use? Did the shutdowns prevent any fires? And what can residents do to prepare for what could be days without electricity?

The managed outages were broad but brief, affecting 22,000 customers in Napa, Yolo, Solano, Butte and Yuba counties for a handful of hours. That included Paradise and Magalia, two towns that were devastated seven months ago during the Camp Fire, a massive blaze triggered by high winds hitting PG&E transmission lines.



Criminals are using TLS certificates to convince users that fraudulent sites are worthy of their trust


One of the most common mechanisms used to secure web browser sessions — and to assure consumers that their transactions are secure — is also being used by criminals looking to gain victims' trust in phishing campaigns. The FBI has issued a public service announcement defining the problem and urging individuals to go beyond simply trusting any "https" URL. 

Browser publishers and website owners have waged successful campaigns to convince consumers to look for lock icons and the "https:" prefix as indicators that a website is encrypted and, therefore, secure. The problem, according to the FBI and security experts, is that many individuals incorrectly assume that an encrypted site is secure from every sort of security issue.

Craig Young, computer security researcher for Tripwire’s VERT (Vulnerability and Exposure Research Team), recognizes the conflict between wanting consumers to feel secure and guarding against dangerous overconfidence. "Over the years, there has been a battle of words around how to communicate online security. Website security can be discussed at a number of levels with greatly different implications," he says.



Wednesday, 12 June 2019 15:13

FBI Warns of Dangers in 'Safe' Websites

A critical component to a holistic approach to cybersecurity is conducting a penetration test, or pen test, to evaluate computer system, network, or web application vulnerabilities that could be exploited by a hacker.

The first question to consider when you conduct a pen test is: what is the goal? Is it to satisfy a compliance mandate, or was there a data breach and you want to ensure all of the loopholes are closed? Maybe pen testing is a best-practices regimen conducted regularly in your organization. If your company is installing a new computer system or network, it makes sense to test it to find where any vulnerabilities or weaknesses may exist.

Internal, external, privileged or not…

Types of pen testing vary widely. Depending on your goal, options include internal, external, credentialed or uncredentialed, web application testing, network testing, phishing and social engineering. An external pen test will show you what your network or application looks like to an outsider. An internal test may be used to verify segmentation of different data sets.



Security should be a high priority for every organization. Unfortunately, there is a serious shortage of quality cybersecurity staffers on the market.


Who’s overseeing your organization’s security? Are they equipped to secure your data and prevent ransomware attacks, or are they more likely to be scanning for viruses with a metal detector and patching systems with tape and paper?

When (ISC)2 asked cybersecurity professionals about gaps in their workforce, 63% said there’s a short supply of cybersecurity-focused IT employees at their companies. And 60% believe their organizations are at “moderate-to-extreme” risk of attacks because of this shortage.    

Mitch Kavalsky, Director, Security Governance and Risk at Sungard Availability Services (Sungard AS), believes you can solve this problem by focusing less on hiring cybersecurity personnel with expertise in specific technologies, and more on bringing in employees with well-rounded security-focused skillsets capable of adapting as needed.

But as Bob Petersen, CTO Architect at Sungard AS, points out, a company’s overall security should not be limited to the security team; it needs to be a key component of everyone’s job. “There needs to be more of a push to drive cybersecurity fundamentals into different IT roles. The role of the security team should be to set standards, educate and monitor. They can’t do it all themselves.”

Invest in your company’s security. But invest in it the right way – with the right people. If not, you’re bound to have more problems than solutions.

On May 28, 2019, eighteen confirmed tornadoes pummeled Ohio. The Dayton area was devastated by fourteen of these tornadoes, ranging from EF0 to EF4.

To put it in perspective, the state’s annual average number of tornadoes is nineteen. In one night, Ohio saw more tornadoes than it sees in an entire year. This level of storm intensity and magnitude is rare for the area – leaving hundreds of thousands of people without power or running water, and thousands of families homeless.

As an OnSolve employee, I hear about weather emergencies constantly, but I never thought I’d see this level of destruction in my hometown.

OnSolve has an office located in the heart of Dayton that is home to almost 30 employees. We also have offices and remote employees all over the world. Our headquarters in Ormond Beach, Florida, is no stranger to inclement weather, as the area is prone to tropical storms and hurricanes. Having a geographically dispersed workforce is the standard for many of today’s organizations, and it can present challenges that most companies aren’t prepared for, such as these tornadoes, which caused more than 200 injuries and damage that will cost millions to repair.



The Benefits Of Design Thinking Are Quantifiable, And They’re Compelling

By Ryan Hart (principal analyst, CX) and Benjamin Brown (senior consultant, Total Economic Impact)

Design thinking has historically enjoyed “blind support” among executive leaders based on its perceived value. However, many of these same leaders now find themselves increasingly under pressure to show the return on their investments in the practice. Customer experience (CX) professionals and design thinking practitioners who have successfully introduced the methodology into their organizations may find that, after the initial enthusiasm fades, the struggle to fund and scale begins. Forrester created a Total Economic Impact™ (TEI) model to empower design thinking practitioners with the tools and vernacular needed to quantify their efforts and form a compelling business case for the practice.

The model examines design thinking’s financial impact. While the ROI for each organization will differ depending upon the efficiency of the practice, the project, and the specific use case, the model found that mature design thinking practices can generate substantial measurable returns and a broad range of auxiliary benefits.



Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are essential to the success of any business continuity program. With a wide range of potential business interruptions, RTO and RPO are two of the most critical factors of a disaster recovery or data protection plan. Along with the Business Impact Analysis (BIA), RTO and RPO are core inputs to Business Continuity Planning (BCP).
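The two metrics answer different questions: RPO bounds how much data you can afford to lose, measured backward from the incident to the last good backup, while RTO bounds how long restoration can take. A minimal sketch of the arithmetic, with illustrative timestamps:

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, incident: datetime, rpo: timedelta) -> bool:
    """The data-loss window is the gap between the last good backup and the incident."""
    return (incident - last_backup) <= rpo

def meets_rto(outage_start: datetime, restored: datetime, rto: timedelta) -> bool:
    """Downtime is the gap between the start of the outage and restoration of service."""
    return (restored - outage_start) <= rto

incident = datetime(2019, 6, 12, 14, 0)
last_backup = datetime(2019, 6, 12, 10, 0)   # 4 hours before the incident
restored = datetime(2019, 6, 12, 19, 0)      # 5 hours of downtime

print(meets_rpo(last_backup, incident, rpo=timedelta(hours=6)))  # True: 4h loss fits a 6h RPO
print(meets_rto(incident, restored, rto=timedelta(hours=4)))     # False: 5h down misses a 4h RTO
```

In practice the BIA supplies the RPO and RTO values per business process; this check is then applied against the actual backup schedule and recovery capability.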



Free and fair elections, supported by well-defined and well-managed electoral services, are at the heart of a democratic political system, and casting a vote is a basic political right. Having robust systems in place is essential for this to run smoothly. Newly revised international guidance for electoral organizations will help them do just that, by applying the principles of ISO’s most widely known standard for quality, ISO 9001.

The technical specification ISO/TS 54001, Quality management systems – Particular requirements for the application of ISO 9001:2015 for electoral organizations at all levels of government, creates the framework for a quality management system that helps electoral bodies provide more reliable and transparent electoral services. It is based on ISO 9001, Quality management systems, with specific sector requirements, and has recently been revised to reflect changes to ISO 9001 and keep it in line with market needs.

Katie Altoft, chair of the ISO technical committee responsible for its development, said it is an important tool for electoral organizations because it helps to build confidence in elections through enabling transparency, effective planning and management, and efficiency in electoral processes.



(TNS) - The Pennsylvania Statewide Radio Network (PA-STARNet) is the Commonwealth’s closed wireless voice and data network for local, state, and federal public safety entities.

"Since July 1, 2012, it has been the responsibility of the Pennsylvania State Police to develop, operate, regulate, manage, maintain, and monitor the Statewide Radio Network," said state police spokesman Ryan Tarkowski. "Twenty-one state agencies rely on PA-STARNet for communications. In 2017, PSP began to phase out the unreliable and issue-plagued OpenSky radio system in favor of an open radio technology called P25. The P25 radio system is an industry standard and will ensure first responders can seamlessly communicate with other agencies."

One of the main benefits of the P25 system is the fact that it is non-proprietary, which means that local entities (police departments, county 911 centers, etc.) can purchase radios and equipment at lower prices on the open market, said Tarkowski.



A new report sheds light on how human cognitive biases affect cybersecurity decisions and business outcomes.

It's a scenario commonly seen in today's businesses: executives read headlines of major breaches by foreign adversaries out to pilfer customers' social security numbers and passwords. They worry about the same happening to them and strategize accordingly. But further into the text, they learn the breach was in a different industry, at a company of a different size, and after different data.

This incident, irrelevant to the business, distracted leaders from threats that matter to them.

It's an example of availability bias, one of many cognitive biases influencing how security and business teams make choices that keep an organization running. Availability bias describes how the frequency with which people receive information affects decisions. As nation-state attacks make more headlines, they become a greater priority among people who read about them.



The General Data Protection Regulation (GDPR) has been in effect for more than a year now, and it has already yielded significant returns, but there are still key issues that need work. Fortinet’s Jonathan Nguyen-Duy discusses.

Abuse of individuals’ personal data has led to an outcry for stronger data privacy laws. Action toward such laws has tended to apply to one industry at a time – health care, financial services and so on. In the absence of a federal mandate in the U.S., states have created their own privacy regulations, such as the California Consumer Privacy Act. Many such specific regulations can engender a “check the box” approach to data security and privacy, which fails to provide true protection, because it falls short of doing everything possible and settles for “good enough.”

For example, the EU’s 1995 Data Protection Directive (which was replaced by the General Data Protection Regulation “GDPR”) allowed individual member nations to write and pass their own breach notification laws. Not only did these laws sometimes tend to be incomplete, but the enforcement and requirements were inconsistent across the EU. Multinational companies were especially challenged, because data gathered in a specific country had to be managed differently than data collected in a neighboring one.

Taking effect last May, GDPR streamlined these various regulations into one comprehensive mandate. The regulation requires organizations to report data breaches to affected individuals and appropriate regulatory authorities within 72 hours of discovery. Even better, it also established a common and broader definition of personal data, including things like IP addresses, biometric data, mobile device identifiers and other types of data that could potentially be used to identify an individual, determine their location or track their activities.
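Because the 72-hour clock starts when the organization becomes aware of the breach, not when the breach occurred, incident-response playbooks often compute the reporting deadline immediately. A minimal sketch (the timestamps are illustrative):

```python
from datetime import datetime, timedelta, timezone

def gdpr_notification_deadline(discovered_at: datetime) -> datetime:
    """GDPR requires notifying the supervisory authority within 72 hours
    of becoming aware of a personal data breach."""
    return discovered_at + timedelta(hours=72)

discovered = datetime(2019, 6, 10, 9, 30, tzinfo=timezone.utc)
deadline = gdpr_notification_deadline(discovered)
print(deadline.isoformat())  # 2019-06-13T09:30:00+00:00
```

Anchoring the deadline in UTC avoids ambiguity when the response team spans time zones, which is common for the multinational companies the regulation was designed to simplify things for.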



(TNS) - The storms across the Dallas area Sunday afternoon that killed one and injured at least five others also spread debris through damaged neighborhoods and left several hundred thousand residents without power.

Winds were strong enough — up to about 70 mph — to topple large trees onto cars and homes. Traffic signs as well as billboards were dislodged, and light poles and other small structures were strewn about.

But power outages were a more widespread problem. Close to 280,000 customers were without power in Dallas County alone, about a quarter of the Oncor customers there. By dawn Monday, almost 240,000 Dallas County customers were still affected, along with about 15,000 in Denton, Collin and Tarrant counties.



Data should never have been on subcontractor's servers, says Customs and Border Protection.

Photos used by US Customs and Border Protection (CBP) in an effort to protect travelers have been stolen in an attack on a federal subcontractor. Officials confirmed the compromise, which they described as part of a malicious cyberattack.

While the agency declined to give details about the photos accessed by the attacker, the subcontractor is known to maintain databases of photos that include passport and visa photos, license plate images, and images from facial recognition systems.

According to the agency, a subcontractor transferred CBP data to its network, which was subsequently hacked. "The issue with subcontractors is that you can't completely control how they secure their network," says Pierluigi Stella, CTO of Network Box USA. "You can ask for certifications, financials, controls, and attestations, but there is always a limit to how much you can demand."



In our evermore complex, interconnected world, with health systems undergoing new challenges and stresses, risk management in the healthcare industry has never been more important. Three ISO standards play a significant role in matching clinical quality with patient safety and best practice, helping not only to deal with risks but also to prevent them in the first place.

Only the lucky few get through life in continuous good health, free from the pains and aches of growing older. Not many of us escape painful and debilitating ailments, such as sore joints that eventually require artificial replacements, and most of us, at some time or other, have to resort to health professionals and the healthcare industry in search of cures.

And it is reasonable for us to expect that those healthcare solutions and treatments will return us to our lives as healthier people, feeling better and fit for daily tasks. We put our trust in health professionals when we are at our most vulnerable and the health professionals, for their part, try to ensure that patient safety is paramount and aspire to best practices to reduce medical errors.



Tuesday, 11 June 2019 14:47

ISO: The new dawn of disease control

(TNS) — With temperatures soaring and strong winds blowing through forests across Northern California over the weekend, rural areas in the Sierra Nevada foothills plunged into darkness after Pacific Gas & Electric Co. shut off high-voltage transmission lines to avoid sparking wildfires.

The first formal deployment of its new Public Safety Power Shutoff rules left more than 20,500 PG&E customers in portions of Butte and Yuba counties without power as 260 utility personnel conducted safety patrols, repaired electric infrastructure and inspected 800 miles of transmission and distribution lines, officials said.

The aggressive power shutoffs began at 9 p.m. Saturday and continued through Sunday. “We’re asking impacted customers to be prepared for a 24- to 48-hour outage,” Karly Hernandez, a spokeswoman for the utility, said in an interview Sunday.



(TNS) — Police and fire chiefs in Broward County are recommending the county give up control of its emergency radio communications system because they don’t think it’s capable of fixing the broken system that impeded the response to the Parkland school shooting.

Associations of fire and police chiefs took votes, which are not binding, to shift the system to the Broward Sheriff’s Office, according to presentations Wednesday at a meeting of the state commission investigating the Parkland massacre.

“We think that the system is in such dire straits that this is the best solution to the problem,” Sunrise Police Chief Tony Rosa told the MSD Public Safety Commission.

Members of the commission described the failure to fix the communications system as a grave threat to public safety and said they were frustrated at the inability of local officials to address a well-known and longstanding obstacle to the delivery of life-saving services.



If your company truly is a great place to work, make sure your help-wanted ads steer clear of these common job-listing cliches. 

If you've ever bought a house or participated in online dating, you may be familiar with the creative ways people euphemistically stretch the truth, or how they'll bore you to tears with dry lists of facts. It's not just dating and real estate where this is a problem; most job listings are every bit as bad, and cybersecurity is no exception.

We all know that when you see a listing for a "cozy" apartment, it will have roughly the same floor space as a coat closet. And if you see a dating profile with a list of "must haves" as long as your arm, you're likely dealing with someone who's nightmarishly impossible to please.

Typically, employers seem oblivious to the message their choice of wording sends to would-be employees. For example:



(TNS) — When Rodney Andreasen first emerged from the wreckage of Hurricane Michael last October, the Jackson County emergency management director realized nearly everything he knew was gone.

The log cabin home he had intended to retire to was lost. The roads in his inland, rural county were choked with fallen trees. The roofing from the county’s emergency management office had been partially shorn away.

State and federal officials descended upon the Panhandle, promising help. But eight months after Michael’s impact, Andreasen and other residents across Florida’s northwest are still digging out from the hurricane’s ruin. Millions of rural acres are still marked by swaths of toppled or twisted trees. Parts of the Panhandle remain dotted with piles of twisted metal and other detritus that have yet to be cleared.



The summer season doesn’t always promise carefree days filled with sunny afternoons and fun outdoor festivities. It’s easy to forget about triple-digit temperatures, whipping thunderstorms, and wildfires. During the summer months, Mother Nature can surprise your critical business systems by unleashing a myriad of threats between the end of the school year and Labor Day.

Although hurricane season is already underway and wildfires are burning across the country, there is hope we can catch our breath. Forecasters at Colorado State University anticipate the 2019 Atlantic hurricane season will show “slightly below normal activity.” And moist conditions from the recent winter in California could reduce the number of wildfires in that state.

Let’s take a look at summer’s greatest perils. 



In business continuity, there is a gap between perception and reality in terms of what kinds of problems cause most of the disruptions. In today’s post, we’ll look at BC’s Big Four: the four actual, real-life problems that happen most often and cause most of the disruptions to companies’ operations.

What are the most common problems leading to interruptions of organizations’ operations and the implementation of their business continuity plans?

It would be understandable, based on news coverage, if you gave such answers as tornadoes, hurricanes, terrorism, and incidents of workplace violence.

In fact, none of those is a common cause of business impacts.



An overview of three common organizational structures illustrates how NOT to pit chief security and IT execs against each other.

For certain critical IT deliverables, CIOs and CISOs embody the inherent tension between cybersecurity and operational requirements. Where the CIO is charged with delivering efficient IT infrastructure at low cost, the CISO is charged with ensuring that the same IT infrastructure operates within the risk tolerance parameters set by the board and CEO. Organizational structure has a lot of influence over how these functions operate and interact, and it can either exacerbate power struggles or facilitate alignment. Let's look at three common organizational structures and how CIOs and CISOs can work together to achieve their objectives.    

Most Challenging: CIO controls CISO budget and rates CISO performance
When the CISO reports to the CIO, the onus is on the CIO to decide whether to fund and support cybersecurity initiatives or the core deliverables that the CIO is charged with delivering. If a compromise has to be made, the CIO may be tempted to sacrifice security for functionality or infrastructure improvements.

This reporting structure can create an environment that discourages the CISO from fully disclosing risk to the CEO and board. In other words, CISOs who answer to CIOs are more likely to shape their message to please the boss.



Thursday, 06 June 2019 14:48

CISOs & CIOs: Better Together

(TNS) — When every second counted, school resource officer Scot Peterson, who was the closest person to the gunman during the Parkland school shooting and likely the only one who could have intervened, took cover instead of action, and for that he has been arrested on 11 criminal charges.

Peterson on Tuesday was arrested on charges of neglect of duty for the 48 minutes he spent hiding as children and staff died, Sheriff Gregory Tony announced.

The 56-year-old had been branded a coward, nationally heckled and vilified for failing to confront the former student who gunned down and killed 17 students and staff at the school on Feb. 14, 2018.

Ten of the charges stem from the killings and injuries that happened on the third floor of the freshman building.



Containers are a very big deal in enterprise computing right now. If your organization isn't already using them, then trends indicate it probably will soon. This application virtualization technology has had profound implications for some companies that have embraced DevOps, and there is plenty of potential for it to have a similar impact on security operations.

To understand why, it's good to start with an understanding of just what containers are. A modern application is a collection of pieces of code: the main application itself, configuration files, and custom files on which it depends. These tend to be unique to the application as it's configured to be deployed on a given server.

A container bundles all of these things up into an image that can be saved and deployed quickly, consistently, and automatically across multiple servers. If differences exist in the operating system details between the development and production servers, the container insulates the application from them, making application movement between development and operations very fast and straightforward.
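That bundling is usually described in a build file that pins the OS base and copies in the application with its configuration. As a minimal, hypothetical sketch (the file names and base image are illustrative, not from the article):

```dockerfile
# Pin the OS and runtime details the application was built against.
FROM python:3.7-slim
WORKDIR /app

# Bundle the application, its dependencies, and its configuration into one image.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py config.yaml ./

# The same image now runs identically on development and production servers.
CMD ["python", "app.py"]
```

Because the image carries its own dependencies and configuration, the development and production servers no longer need matching operating system details for the application to behave the same way.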



Nearly half of compliance and procurement professionals find it difficult to identify environmental, social and governance (ESG) risks within customer due diligence processes. Brian Alster discusses this and other findings from a recent Dun & Bradstreet survey.

There is little doubt amongst supply chain industry experts that knowing and actively managing potential risk is essential to the success of an entire supply chain system. Supply chains are highly complex structures, often spanning multiple countries and tiers of third parties that outsource and offshore work outside a company’s core operations, leaving them vulnerable to new and evolving risks every day.

Dun & Bradstreet’s third Compliance and Procurement Sentiment Report, which surveyed over 600 industry professionals from both the U.S. and U.K., found that environmental, social and governance (ESG) screening and monitoring was among the top concerns of industry professionals, a fast-rising third-party risk management issue.

While the tenets of ESG are generally understood as the measure of the sustainability and ethical impact of an organization, what remains to be clearly defined for organizations is how to monitor for, assess and comply with ESG standards within third-party risk management programs.



In the Fourth Annual Healthcare Ready national survey on the emergency preparedness of Americans, more respondents than last year indicated concern that they are vulnerable to a disaster, and yet 51 percent said they didn’t have a plan for a disaster.

Of the 1,245 adults surveyed, 54 percent said they are aware that they or their families could be affected by a disaster within the next five years. That was an increase from last year’s 51 percent. The survey results were released last week.

The survey also found that 37 percent of Americans can go a week or less without their medications or medical devices before facing a medical crisis. And just 40 percent of Americans could list all their prescription details, such as dosage, or prescribing doctor, if they were forced to evacuate their homes.

The slight uptick in people who are concerned about being vulnerable to a disaster could be good news, but still, the percentage of people who are unprepared is disappointing.



(TNS) — Three of the twisters that ripped through the region on Memorial Day night left 631 homes in Montgomery County communities unlivable, according to a preliminary Ohio Emergency Management Agency assessment released today.

Tornadoes destroyed 211 homes and 43 businesses in Montgomery County, according to emergency management officials. The tornadoes caused major damage to another 420 homes and 54 businesses.

Homes either destroyed or with major damage are deemed uninhabitable, according to county emergency management officials.

In all, 2,550 homes and 173 businesses were affected, according to the initial survey.

“Our community was devastated by this storm, and our preliminary assessment shows the extent of the damage,” said Montgomery County Commission President Debbie Lieberman.



Employers can solve the skills gap by first recognizing that there isn't an archetypal "cybersecurity job" in the same way that there isn't an archetypal "automotive job." Here's how.


It feels like every day, there's another article citing the "cybersecurity skills shortage" as an obstacle to filling needed security jobs for the next decade. I disagree. There isn't a significant skills gap. There is a market mismatch. Most employers aren't looking at the people who are actually available; they toss up their hands, credit the skills shortage, and move on. But what's really going on?

First off, the idea of cybersecurity skills is a pretty one-dimensional view of the landscape of what the modern worker needs to bring to the table. Sometimes, it evokes the image of a black-hoodied hacker who can break applications, or maybe the security operations center (SOC) analyst watching alerts from the application security tool that monitors that application.

Even these two workers have skills that aren't really parallel. A hacker could be seen as just a quality assurance engineer, testing the negative space of an application (what it shouldn't do), while the SOC analyst is an operator/incident manager, looking for anomalous operations and following time-tested investigative steps to understand what's happening. So, how did we get to a belief in an insurmountable skills gap?



Tuesday, 04 June 2019 19:10

What Cyber Skills Shortage?

In this fourth installment on D&O liability from Fox Rothschild’s Stephanie Resnick and John Fuller, the authors explore the importance of having a diverse board to address the challenges posed by various social issues.


Why Is Diversity Important?

As with the social issues discussed in prior articles, diversity is quickly evolving from an aspiration for equality to a corporate necessity with an increasingly discernable impact on the bottom line.

At the organizational level, a lack of diversity along gender, racial, ethnic, sexual orientation and/or age lines can open the company to complex and unsavory litigation. For instance, tech giants have been rocked by the publication of internal memoranda regarding diversity practices and numerous lawsuits resulting from the termination of certain individuals who attacked or defended the existing corporate processes. Increased diversity and improved practices for promoting diversity can help avoid such litigation.

At the board level, recent studies have shown that diversity – in particular gender diversity – can have a direct impact on corporate liability. For example, researcher Chelsea Liu studied 1,893 environmental lawsuits brought against 1,500 companies listed by Standard & Poor’s between 2000 and 2015. Liu found that for every additional woman on a company’s board, there was approximately a 1.5 percent reduction in the risk of litigation. Liu also determined that the average environmental lawsuit cost 2.26 percent of an organization’s market value, meaning that the addition of a female board member equated to an average savings of $3.1 million of organizational value each year.

These findings support the widely accepted understanding that adding diverse perspectives to a board improves decision-making by, among other things, refreshing core skill sets and challenging potentially complacent board culture.



(TNS) — You bought batteries, found the flashlights and stockpiled sandbags. You’re not ready yet, though.

The region’s top emergency preparedness experts shared their tips, tricks and a few of the more unusual items on their shopping lists this year to make sure your hurricane kit is ready for the worst of whatever weather comes our way.

Start with a thorough evaluation of any existing supply kits you’re relying on this hurricane season, said Tampa Bay Regional Planning Council director of resilience Brady Smith. Stocking a household hurricane kit is a lot like preparing for a long camping trip, he said. Stock up on enough supplies to keep every family member comfortable and safe for up to seven days — say, for a worst-case scenario storm.

Keep everything in a portable cooler or a durable, waterproof tote that’s easy to grab at a moment’s notice. And if your hurricane kit has survived several seasons untouched, it’s time to double-check expiration dates and identify what’s missing before store shelves empty.



(Adapted from A Manager’s Guide to Suicide Postvention in the Workplace: 10 Action Steps for Dealing with the Aftermath of a Suicide)

Death powerfully jars our concept of the way life is supposed to be. That dissonance is multiplied when the death is by suicide.

Following the tragedy of death by suicide, the workforce will include people whose personal struggles already leave them vulnerable and who now face increased risk for destructive behavior, including suicide. Tragedy can beget additional tragedies. Sometimes irrational blaming behavior includes violence. Sometimes suicide contagion or “Copycat Suicides” occur. How leaders respond (postvention) after death by suicide is critical to stopping that negative momentum.

Postvention can be prevention

Defined by the Suicide Prevention Resource Center as “The provision of crisis intervention and other support after a suicide has occurred to address and alleviate possible effects of suicide”, effective postvention has been found to stabilize community, prevent contagion, and facilitate return to a new normal.



Any business serious about building a successful future needs to meet its employees’ demands for flexibility


The fact that the work landscape is evolving at a rate of knots won’t have escaped your notice. The very idea that we should work 9 to 6, five days a week, in a traditional office, is being consigned to history.

The reason? Generation Flex, an ever-expanding pool of digital nomads who are challenging the fundamental assumptions about employment. Used to plugging in anywhere, at any time, this dynamic workforce is not only eschewing the constraints of working from one set place, it is also questioning the long hours culture.

Rather, a healthy work-life balance is the priority and flexible working – allowing employees to work in the best style and format to suit them – is seen to improve this balance.

In fact, according to a recent survey by IWG, more than 80% of workers worldwide say they would prioritise a role that offered flexible working options.



Monday, 03 June 2019 15:13

How to get the most from your team

Picture this scenario:

Your company is busier than it has ever been. Product is moving out the door, customers are happy, and your hiring rate is at its highest in company history.

Then – suddenly – the bottom falls out.

A tornado blowing through Texas levels your finance department’s office. The damage is devastating, employees are displaced, and work comes to a complete stop.

So now what?

You knew that office’s location in “Tornado Alley” was a business risk, but did you have a plan? The reality is, many organizations do not.



People need reminding more than they need instruction. Compliance training expert Ronnie Feldman stresses that instruction isn’t enough. Think like an ad man to influence workplace behavior and corporate culture.

A very wise and important person once said, “people forget stuff.” I can’t remember who said it first, but I know most recently… it was me!

“People Forget Stuff”

Most corporate training is rather long, boring and infrequent. This is particularly true of training on corporate risk – ethics, compliance, privacy, data security and so on. Long and boring is obviously a problem, because people don’t actually learn anything. But I’d like to focus this piece on the infrequency of compliance training. Even if your training is interactive and targeted and employees describe it as interesting and snazzy,* “infrequent” is still a problem. Infrequent training happens for practical reasons; you need your people to have time to do the thing they were hired to do, and taking them away from that thing – even for valuable, effective, snazzy training – is impractical. So we end up with training that happens infrequently.

But here’s the thing: The forgetting curve tells us that people forget 50 percent of content an hour after learning, 70 percent within a day and 90 percent within a week. Literary writer, poet, essayist and moralist Samuel Johnson famously said, “people need to be reminded more often than they need to be instructed.” He must be a smart fella, because he has a snazzy** wig!
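Those figures can be read as points on a retention curve, and interpolating between them gives a rough estimate of how much survives at any interval between training sessions, which is the whole argument for frequent reminders over infrequent instruction. A minimal sketch using only the percentages quoted above (the linear interpolation is an illustrative choice, not from the article):

```python
# Retention points implied by the article: 50% forgotten after an hour,
# 70% after a day, 90% after a week (times in hours, retention as fractions).
RETENTION_POINTS = [(0.0, 1.0), (1.0, 0.5), (24.0, 0.3), (168.0, 0.1)]

def estimated_retention(hours: float) -> float:
    """Linearly interpolate retention between the quoted data points."""
    if hours <= 0:
        return 1.0
    if hours >= RETENTION_POINTS[-1][0]:
        return RETENTION_POINTS[-1][1]
    for (t0, r0), (t1, r1) in zip(RETENTION_POINTS, RETENTION_POINTS[1:]):
        if t0 <= hours <= t1:
            frac = (hours - t0) / (t1 - t0)
            return r0 + frac * (r1 - r0)

print(estimated_retention(1))    # 0.5: half the content is gone after an hour
print(estimated_retention(72))   # roughly 0.23 three days after the training
```

By this rough model, an annual compliance course leaves almost nothing behind for most of the year, while short, frequent reminders reset the curve before it bottoms out.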



The 2019 edition of the American National Standard, “NFPA 1600 Standard on Continuity, Emergency, and Crisis Management,” has been published by the USA-based National Fire Protection Association. This standard, the most mature of its kind in the world, has been translated into many languages and adopted by numerous countries and companies in North America, South America, Asia, and the Middle East. Originally published in 1995, the 2019 edition is the 7th edition.

This important tool should be in every practitioner’s toolbox. It’s an overarching standard that defines the inter-connected elements of a preparedness program including program management, risk assessment, business impact analysis, loss prevention/hazard mitigation, emergency management, business continuity, crisis management, and crisis communications. It provides guidance for program development along with many informative annexes. It has been designated as the criteria for program certification under PS-Prep™. NFPA 1600 is also free to download.

The technical committee that writes NFPA 1600 comprises subject-matter experts from the public and private sectors, representing most major industries. The standard has benefited from a revision process that has considered more than a thousand comments and proposals from end users.



Cutting-Edge Baron Radars Reinforce Trust—If You Understand the Basic Mechanics of the Tool


By Dan Gallagher, Enterprise Product Manager, Baron

In late November of 2014, residents of the Buffalo, New York area busied themselves digging out of the heaviest winter snowfall event since the holiday season of 1945. Over five feet of snow blanketed some areas east of the city. This sort of extreme weather inflicts serious damage: thousands of motorists were stranded, hundreds of roofs and other structures collapsed, and, tragically, thirteen people lost their lives. Perhaps this high toll would have been even greater had diligent meteorologists not caught the telltale signs of lake effect snow days in advance by studying lake temperature and wind trajectories at various heights. Officials warned of 3-5 inches per hour of precipitation over a day before the event began. However, this extreme snowfall did not appear on the T.V. weather map in the way such a major event usually does. The casual weather-watcher might have looked to radar imaging to make sense of the experience, but that most conspicuous of weather instruments was essentially blind to the phenomenon. This sort of apparent failure could make it difficult for people to trust weather technology, or create doubt in the minds of officials charged with making tough decisions for public or institutional safety, and each of those outcomes could endanger communities. While Doppler radar is a critical tool in evaluating weather conditions, it is important for institutions to understand the mechanisms it employs, its strengths and weaknesses, and the available methods for analyzing raw radar data. Why does it seem like radar got it wrong for Buffalo?

Doppler RADAR Fundamentals

RADAR, or RAdio Detection And Ranging, was a new technology when Buffalo saw its last huge snow event. Developed during World War Two, radar was first used for military applications. The tool could detect the position and movement of an object, like an enemy airplane. So, when radar operators on battleships picked up signs of rain, it made their jobs more difficult by cluttering radar data with unwanted information. Then, after the War, that clutter became the target.

The first generation of weather radars were cutting edge at the time, but rudimentary compared to today’s systems. Radar works by sending out radio waves: short bursts of energy that travel at nearly the speed of light and bounce off objects. When waves bounce back in the direction of the radar dish, their direction reveals where an object is relative to the system. These waves are sent out in bands. The first weather radar system, installed in Miami in 1959, could only send waves along horizontal bands, and operators had to manually adjust the elevation angle. This meant that the information radar could provide about an object was limited to a single plane. A ball and a cylinder would look the same as each other on the radar screen because only one dimension of an object was accounted for.

The next generation of radar, embraced in the late 1980s and early ‘90s, offered more information than simply location with the introduction of Doppler technology. The major advance that Doppler radar provided was that it could measure an object’s velocity. These radars can detect what is moving toward or away from them by analyzing the shift in the frequency of returning radio waves. (Still, radar cannot ‘see’ what is moving orthogonally to the beam.) This allows atmospheric scientists and meteorologists to identify additional characteristics in a storm. For example, they can identify the presence of rotating winds in the atmosphere, which can be a strong indication of a tornado. Generally represented in today’s system as bright red juxtaposed to bright green, data on wind speed and location provides greater insight into such important weather events as tornadoes.
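The underlying relation is simple: radial velocity is proportional to the frequency shift, v = Δf·c / (2f). A sketch under assumed values (the 2.8 GHz S-band carrier is a typical figure for weather radar, not a number taken from the article):

```python
C = 3.0e8      # speed of light in air, m/s (approximate)
FREQ = 2.8e9   # assumed S-band carrier frequency, Hz

def doppler_shift_hz(radial_velocity_ms):
    """Frequency shift of the return from a target moving at the given
    radial speed: Δf = 2·v·f / c. The factor of two appears because the
    wave is shifted twice, once outbound and once on the return."""
    return 2 * radial_velocity_ms * FREQ / C

def radial_velocity_ms(shift_hz):
    """Inverse relation: v = Δf·c / (2·f)."""
    return shift_hz * C / (2 * FREQ)

# A 25 m/s (~56 mph) inbound wind shifts the return by only about
# 467 Hz out of 2.8 GHz, a tiny relative change the receiver resolves.
print(round(doppler_shift_hz(25.0), 1))  # -> 466.7
```

Note that only the component of motion along the beam produces a shift, which is why, as mentioned above, radar is blind to motion orthogonal to the beam.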

Dual Polarization: The Modern Radar

The radars the National Weather Service (NWS) uses today form a network of 171 radars located throughout the United States. In 2007, Baron Services, along with its partner L3 Stratus, was selected to work with the NWS to modernize the entire network of radars by adding Dual Polarization technology.

Early Doppler systems had not resolved a major limitation inherited from previous generations: the single plane of weather information. Embraced in the mid-2000s, dual-polarization technology changed that. By using both horizontal and vertical pulses, dual-pol radars offer another dimension of information on objects—a true cross-section of what is occurring in the atmosphere. With data on both the horizontal and vertical attributes of an object, forecasters can clearly identify rain, hail, snow, and other airborne objects such as insects, not to mention smoke from wildfires, dust, and military chaff. Each precipitation type registers a distinct shape. For example, hail tumbles as it falls, so it appears almost exactly round to dual-pol radars. This additional information allows meteorologists and those responsible for monitoring the weather to make more informed decisions about the presence of hail, the amount of rain that may fall, and any change from liquid to frozen precipitation.
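One way dual-pol systems quantify shape is differential reflectivity (ZDR), the logarithmic ratio of horizontally to vertically polarized returned power. A hedged sketch with made-up power values (the thresholds here are illustrative, not operational classification rules):

```python
import math

def zdr_db(power_h, power_v):
    """Differential reflectivity: ZDR = 10·log10(Z_h / Z_v).
    Falling raindrops flatten into oblate shapes, returning more
    horizontal than vertical power (positive ZDR); tumbling hail
    averages out to roughly spherical (ZDR near 0 dB)."""
    return 10 * math.log10(power_h / power_v)

print(round(zdr_db(200.0, 100.0), 1))  # rain-like: 3.0 dB
print(round(zdr_db(100.0, 100.0), 1))  # hail-like: 0.0 dB
```

In practice ZDR is combined with other dual-pol products to classify precipitation type, but the core idea is exactly this comparison of the two polarizations.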

Dual polarization technology is especially useful in observing winter weather. Because it can differentiate between types of precipitation, it can identify where the melting layer is with precision. This allows forecasters to better evaluate what type of precipitation to expect, as they can analyze information about the path that precipitation will face on the way to the ground. If a snowflake will pass through a warm layer, for example, it may become a raindrop, and that raindrop may turn into freezing rain when it hits a colder surface.

Radar Limitations – and How Baron Uses Technology to Fight Back

Still, radar technology has its undeniable limitations. Only so much of the vertical space is observed, since the atmosphere closest to the ground and directly above the radar is typically not scanned. This is best illustrated by the cone of silence phenomenon. Because radars do not transmit higher than a certain angle relative to the horizon, there is a cone-shaped blind spot above each radar.

Also, because radio waves are physical, clutter can make data hard to parse. Tall buildings, for example, give no useful information to meteorologists and can skew the data. However, Baron has introduced cutting-edge radar processing products that closely analyze the data returning to a radar to determine how the atmosphere has impacted the path of a beam. Importantly, though, computers are not the only way, or always the best way, to account for deviations in data. Thus Baron’s in-house weather experts monitor radar outputs daily.

Nevertheless, radar cannot achieve every goal some members of the public expect it to. Drizzle, for example, does not show up very well at times, because of the extremely small size of particulates and the fact that most drizzle occurs below the height of the radar beam. So, people expecting dry conditions might be puzzled by slight sprinkles on their way to work. Another example, as mentioned earlier, is the difficulty of lake effect snow.

A lake effect snowfall occurs when cold air passes over the warmer water of a lake. This phenomenon is common in the Great Lakes region. Why does radar have difficulty observing it? The culpable limitation lies with the angle of the radar beam. Just as radio waves are not typically sent straight up from the ground, they are also not sent directly horizontal. Buildings, small topographical changes, and the like would foil the effectiveness of low-level radar sweeps in many cases anyway, but the farther an area is from the source of the beam, the higher the blind spot. Imagine a triangle with a small angle at one corner: the further the lines travel, the further they part. Lake effect snow is a low-level phenomenon. So, when Buffalo shivered under feet of snow in 2014, the radar essentially missed the event because the lowest beams passed over the highest signs of snow.
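The widening blind spot can be estimated with the standard 4/3-effective-earth-radius approximation for beam height versus range. A sketch, assuming a 0.5° lowest scan angle (a typical value for operational weather radars, not a figure from the article):

```python
import math

# Standard refraction model: treat the earth as 4/3 its real radius.
EFFECTIVE_EARTH_RADIUS_M = (4.0 / 3.0) * 6_371_000

def beam_height_m(range_m, elevation_deg):
    """Approximate height of the beam centerline above the radar:
    h ≈ r·sin(θ) + r² / (2·R_eff). The second term is the extra
    height gained as the earth curves away beneath the beam."""
    theta = math.radians(elevation_deg)
    return range_m * math.sin(theta) + range_m ** 2 / (2 * EFFECTIVE_EARTH_RADIUS_M)

# At a 0.5° lowest scan angle, the beam is already about 1.5 km above
# the radar at 100 km range, high enough to overshoot shallow lake
# effect snow bands.
print(round(beam_height_m(100_000, 0.5)))  # -> 1461
```

The triangle analogy in the text is exactly the r·sin(θ) term; earth curvature adds the rest.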

Reading the Radar Display: What to Remember

Understanding the history of radar technology and knowing the source of the radar display can prepare anyone to better utilize radar information. Conscientious administrators and officials can recall these radar facts as part of their decision-making process when referencing radar data:

1. Radar cannot see everything.

As apparent in the Buffalo snowfall example, the physical limitations of radar mean that a radar does not ‘see’ what occurs close to the ground. Additionally, since what radar ‘sees’ in the atmosphere is impacted by distance as the beam of energy radiates away from the source, the precision of data can depend on the location of weather relative to the location of a radar. Some radar displays are compilations of data from multiple radars, pieced together to account for gaps in radar coverage.

2. No one image tells the whole story.

Weather evolves in stages, so it is important to watch storm motion, growth, and decay. When watching weather on radar, how it trends from one scan to the next should never be ignored. Evaluating trends allows watchers to better predict turbulent weather. Then, for the best results, avoid focusing on only one storm. People tend to watch a particular storm and not the entire picture, or what is downstream of the “worst” storm. This can distract from impactful conditions and trends.

3. Raw data can be misleading.

Even when radar ‘sees’ weather accurately, it can at times mislead watchers. Virga, for example, is precipitation that evaporates before reaching the ground due to drier air closer to the surface. Just because the radar shows precipitation does not mean precipitation will reach the ground. In some cases, there is no substitute for using other information to determine what is actually happening. That is why NEXRAD/Baron delivers more than just a radar image to give users a better understanding of what the weather is doing. It gives information on Severe Winds, Threats, Velocity, Hail Tracks, and other conditions.

Baron Radar Equipment Gives Decision-Makers Actionable Information

Baron Threat Net is used by stadiums, emergency management agencies, schools, racetracks, and other institutions because its radar technology makes it easier for users to identify the risk, removing extraneous information and processing inevitable problems in the data to create a simplified picture of the weather. Doing that takes computing power, scientific expertise, and other tools. Radar alone should not be relied upon to identify every weather phenomenon. It is important that decision-makers understand the limitations of the technologies they utilize in order not only to make the best decisions for their communities, but to reinforce community trust.

The best business continuity data in the world is useless if no one can make any sense out of it when planning for an emergency – or actually responding to one.

If you want to provide guidance when disaster strikes, you have to organize your information in a way that tells a clear, simple story.

We focus a lot in business continuity (BC) and IT/disaster recovery (IT/DR) on activities that result in the production of various kinds of documents.



Follow these best practices to strengthen endpoint management strategies and protect company data


Cyberattacks are increasing across industries, and cybercriminals are savvier than ever. While the total number of IT vulnerabilities is decreasing, the number considered to be critical is on the rise — and so is the number of actual security exploits. Businesses are at risk of disruption, incurring devastating financial and reputational damage. With increasingly complex endpoints, IT teams find themselves racing to keep up with securing the proliferation of smartphones, tablets, and other devices.

Bring-your-own-device (BYOD) programs and Internet of Things (IoT) technologies further complicate the process because each device that connects to your network increases your vulnerability to threats from malware and viruses. Many devices accessing corporate data are mobile, making remote, hands-off management a necessity. Having an endpoint security strategy that gives clear visibility into all devices at all times is absolutely essential and can be the difference between smooth IT sailing or a security breach that has a significant impact on the business.

Here are five steps that can strengthen endpoint management strategies and empower organizations to fight back against cybercrime and protect sensitive company data.



Financial reporting still involves pulling data manually out of an ERP/EPM system into Excel, then manipulating the data to create custom reports – a slow and error-prone process. insightsoftware’s Craig Nickerson outlines a better alternative.

A big part of financial reporting for businesses and organizations is related to regulatory compliance. This can vary widely depending on the organization and industry, such as the Comprehensive Annual Financial Report (CAFR) for a state, municipality or other government entity; FR 2900, FR Y-9C and other “call reports” for banks; 10-K, 10-Q and other quarterly and annual reports that publicly traded companies must submit to the SEC; or even NCAA compliance reporting for higher education institutions.

But perhaps the most well-known example is the Sarbanes-Oxley Act. Enacted in 2002 in the wake of numerous corporate scandals, it was intended to restore investor confidence by strengthening audit committees, improving internal controls and making directors and officers accountable for the accuracy of financial statements, among other things.

Unfortunately, Sarbanes-Oxley and other regulations require a significant amount of financial reporting effort to ensure compliance, which in turn sheds light on a huge problem: how manual most companies’ financial controls and reporting remain. This issue is further compounded by compliance rules and reporting requirements evolving based on a specific administration’s objectives, adjustments to underlying accounting rules, industry trends or other changes.



Conventional risk management tools are appropriate for managing known or anticipated risks; but threats outside these areas need a different approach. Dr Sandra Bell says that stress testing is one tool that provides the answer, helping identify and correct organizational vulnerabilities in a safe environment.


A resilient organization is one that can fulfil its strategic aims, such as economic growth, developing competitive advantage or increasing profits, regardless of any adverse issues it faces either internally or externally. Such an organization not only survives operational disruption or hostile market environments but succeeds in thriving despite them.

Conventional risk management tools, such as root cause analysis, SWOT, and probabilistic risk assessment (PRA), can be used to manage known or anticipated risks. But more sophisticated approaches to risk management are needed to cope with the following scenarios: risks that we suspect are more likely to occur than historical observation suggests due to some underlying cause; extreme events that we can imagine but have never actually occurred to our knowledge; or situations where we cannot rule out that the complexity and uncertainty of the environment in which we operate will result in unexpected impacts or conditions.

Stress testing offers a way to identify and correct organizational vulnerabilities in a safe environment, rather than learning the hard way through experience and suffering additional brand damage. Hindsight is always 20/20.



Recovering unstructured data after an outage can be a significant challenge, but one which can be made significantly easier through the use of a global cloud-based file system. Warren Arnold looks at the issue…

Business continuity and disaster recovery solutions have been effective in reducing downtime from days to hours; however, modern solutions are typically reliant upon backup files which are normally inactive and need to be tested and restored. That takes time.

To further complicate things, not all backup files are usable, and some can contain malware. As a result, IT teams often must go deep into their archives to find the best version to restore, and the deeper they go, the more data, time and productivity the organization stands to lose.

The good news? There is a better way to manage the recoverability of unstructured data after an outage, which isn’t through the backup application and files, but rather, how the file system itself operates. Today’s global cloud-based file systems do more than change how enterprises store, use and collaborate with data and files. They also provide IT with a powerful, fast and effective way to address backup and disaster recovery-related tasks.



First developed in 2010, the zero trust security model has recently grown significantly in popularity. Jan van Vliet explains why zero trust security offers several benefits over and above traditional network-based security approaches; and describes the fundamental aspects required for implementing it.

The zero trust model is based around a central concept which states that an organization should have no default trust options for anything/anyone either inside or outside its boundaries. Instead, everything should be properly authenticated every time, before access to the network is granted.

How does zero trust differ from traditional approaches?

Most traditional approaches to network security focus on building strong perimeter defences that make it difficult for anyone to gain access without permission. However, these approaches tend to fall down once someone is inside, because a default level of trust is assigned to everyone and everything. As such, if a hacker manages to gain access, there is often very little to stop them from moving freely around and accessing or exfiltrating anything they like.

Conversely, the zero trust model proposes that no access should be granted until the network has verified the user and authorised their reason for being on the network. Of course, achieving this requires an adaptable security strategy that leverages modern technology to go over and above traditional approaches.
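As an illustration of the principle only (a toy sketch, not any vendor's implementation; the key, policy table, and function names are all hypothetical), every request is authenticated, checked for freshness, and then authorized against an explicit allow-list, with deny as the default:

```python
import hashlib
import hmac
import time

# Hypothetical shared secret; a real deployment would use credentials
# issued by an identity provider, not a hard-coded key.
SHARED_KEY = b"demo-key"

def sign_request(user, resource, timestamp):
    """Authenticate a request by signing (user, resource, timestamp)."""
    msg = f"{user}|{resource}|{timestamp}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

# Explicit allow-list: anything not listed is denied by default.
POLICY = {("alice", "payroll-db"): True}

def authorize(user, resource, timestamp, signature, max_age_s=300):
    """Zero trust check: each request is authenticated, checked for
    freshness, and authorized; network location grants nothing."""
    if time.time() - timestamp > max_age_s:
        return False                                 # stale: re-authenticate
    expected = sign_request(user, resource, timestamp)
    if not hmac.compare_digest(expected, signature):
        return False                                 # failed authentication
    return POLICY.get((user, resource), False)       # default deny

ts = time.time()
print(authorize("alice", "payroll-db", ts, sign_request("alice", "payroll-db", ts)))  # True
print(authorize("alice", "hr-db", ts, sign_request("alice", "hr-db", ts)))            # False
```

In production the HMAC check would be replaced by tokens from an identity provider and the dictionary by a policy engine, but the default-deny shape of the check stays the same.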



Digital Shadows researchers scanned various online file-sharing services and concluded the number of exposed files is up 50% from March of 2018

More than 2.3 billion files are exposed across misconfigured online file storage technologies, marking an increase of 750 million files – or a 50% jump – from 1.5 billion in March 2018.

Researchers with Digital Shadows’ Photon Research Team thought last year’s 1.5 billion figure alone was “incredible,” they say in the aptly named “Too Much Information: The Sequel” report. Files with both sensitive and nonsensitive data were found via SMB file shares, misconfigured network-attached storage (NAS) devices, FTP and rsync servers, and Amazon S3 buckets.

The United States exposed the most data (over 326 million files), while France (151 million) and Japan (77 million) led their respective regions. The United Kingdom exposed 98 million files, and countries throughout Europe collectively exposed more than one billion.



From helping companies to develop and promote products and services, to analysing our behaviour as consumers, market research contributes to many aspects of modern life. But does it always? And is it global and consistent? The newly updated ISO 20252 will ensure it delivers on promise.

Market research helps reduce risk. Good quality research provides information and understanding, which allow users to more effectively evaluate alternatives and make better decisions. Market research analyses are the go-to solution for many professionals embarking on a new business venture, as they save time, provide new insights on the market in question, and help refine and polish strategy. So, when firms report market research results that aren’t based on sound research principles, they are not reducing risk; in fact, they may be inadvertently increasing it.

International Standard ISO 20252:2019, Market, opinion and social research, including insights and data analytics – Vocabulary and service requirements, sets out guidance and requirements relating to the way in which market research studies are planned, carried out, supervised, and reported to clients commissioning such projects. It will encourage consistency and transparency in the way surveys are carried out, and confidence in their results and in their providers.



Baltimore has so far refused to comply with a ransom demand. It's being forced to make a decision all such victims face: to act morally or practically.

Although ransomware took a backseat to other attack vectors in 2018, the threat has regained momentum this year. The most recent high-profile ransomware attack occurred 20 miles from my home, on the city government of Baltimore, on May 7. Baltimore was attacked by a ransomware strain known as RobinHood, and attackers demanded approximately $100,000 in exchange for the digital keys that would restore the city's systems and access to data. To date, Baltimore has refused to pay the ransom. We are now three weeks into the attack; significant disruptions continue to occur and are costing the city dearly in financial and reputational damages.

Ransomware sets the stage for a great debate on moral versus practical dilemmas. This recent surge of ransomware attacks raises the question: Is your local government next? And if you are in a position of power, will you pay the ransom?



(TNS) - Brace yourselves, New Yorkers: The long-shot possibility of a Kansas-style twister is in the forecast.

Meteorologists warned of an isolated tornado perhaps touching down in the five boroughs or its suburbs Wednesday as the strange and dangerous local weather stretched into a second day.

Golf ball-sized hail pounded Staten Island on Tuesday night as powerful thunderstorms dumped rain across the region. Things could get even worse Wednesday as the National Weather Service issued a hazardous weather outlook statement for the city, Long Island and northeast New Jersey.

“A chance of strong to severe thunderstorms developing late (Wednesday) afternoon into the evening,” read the NWS statement. “The main threat will be damaging winds and large hail. However, an isolated tornado is possible. Localized flooding is possible as well.”



(TNS) — Florida will start the 2019 hurricane season powerfully informed by Matthew’s scary miss of the east coast in 2016, the statewide crushing from Irma in 2017 and Michael’s brutal assault on the Panhandle last year.

Each of those monster storms was different in nature but none devastated a metropolitan area, something forecasters say will happen sooner or later.

Michael was particularly chilling for experts. It gained far more intensity than expected and, making landfall on Oct. 10, came as the June 1 to Nov. 30 hurricane season was winding down.

“Michael was more than a month later than the previous, strongest mainland U.S. landfalling hurricane we’ve ever seen,” said meteorologist Jeff Masters, a founder of the internet-based weather service, Weather Underground.



Data from the last half year shows devices worldwide infected with the self-propagating ransomware, putting organizations with poor patching initiatives at risk.

Two years after the WannaCry ransomware attack blitzed through major organizations and shut down manufacturing and operations, the malware still exists on an estimated 145,000 devices, continuing to attempt to spread to unpatched versions of Microsoft Windows.

Over a six-month period, Internet of Things security vendor Armis collected data from honeypots and DNS cache servers and found that roughly that number of devices remains infected with the WannaCry ransomware. Nearly 60% of manufacturing and 40% of healthcare organizations have had at least one device compromised by WannaCry, the company found.

The continued success of WannaCry is mainly due to unpatched versions of Microsoft Windows that are embedded in hard-to-update industrial and enterprise devices, says Ben Seri, vice president of research at Armis. 



As you know, Forrester is the market leader in customer experience (CX) analysis and guidance. Our latest CIO-oriented report shows you how the very strongest CX leaders dare to disrupt with technology-driven innovations.

Few firms empower their CIOs to be their digital transformation leaders, let alone have them drive their innovation efforts. And sadly, most firms’ innovations are simply incremental improvements to their existing products and values and/or “fast follower” responses to the digital disruptors that enter their market.

The better approach, which leading innovators are driving, is to be the disruptor yourself rather than waiting around until net-new customer value drops into your market. Doing so creates lasting competitive advantage that translates into growth and market leadership. Our latest survey, detailed in this report, shows that the leading tech-driven innovators outgrow their market three to four times faster than the industry average. This is a massive opportunity for you — if you can prioritize it and get it right.



A penetration tester shows how low-severity Web application bugs can have a greater effect than businesses realize.

Organizations could face big problems from seemingly small Web application vulnerabilities. The problem is, many of these bugs fly under the radar because they're not considered severe.

Shandon Lewis, senior Web application penetration tester at Backward Logic, discussed a few of these bugs in his presentation "Vulnerabilities in Web Applications That Are Often Overlooked" at last week's Interop conference. Lewis emphasized the importance of focusing on the bugs attackers are likely to use beyond the zero days that typically make headlines.

In his early days as a red team member, Lewis said he learned "zero days were not the way we get in." The media often focuses on zero-day and stack attacks, he explained, but the most credible threats against a business usually don't come from cybercriminals writing their own bugs. He cited three key ways to "virtually guarantee" success when breaking into a target: phishing attacks, physical intrusion (walking into a building and planting a device), and weak passwords.



(TNS) — All counties in Oklahoma are included in a state of emergency declared by the governor.

Following a week of severe storms that included flooding rains and tornadoes, Gov. Kevin Stitt amended an executive order he originally signed earlier this month to include all 77 counties. That includes Cleveland County, which has been impacted by some flooding and an EF-1 tornado in east Norman.

Originally, the state of emergency was established for 52 counties on May 1. Stitt added 14 counties a week later.

On Thursday night, tornadoes ravaged parts of Northwest Oklahoma, damaging at least two homes in the Laverne area. That followed tornadoes on Wednesday night near Mulhall and Crescent, north of Oklahoma City.



Regulators, consumers and investors/stakeholders are increasingly not willing to accept the prevailing “not if, but when” defeatist attitude regarding data breaches. For example, the commission set up to oversee implementation for the European Union’s General Data Protection Regulation (GDPR) unequivocally states, “As an organisation it is vital to implement appropriate technical and organisational measures to avoid possible data breaches.”

So, it is not a rhetorical question to ask, what if organizations could predict, with a great degree of confidence, where and when their data might be compromised?

One of the key areas to find answers is in the deep dark web (DDW), with its known havens for cybercriminals and other bad actors. However, the DDW is a huge environment; not only does it have a decades-long history of data, but it also continues to grow at a staggering pace across a multitude of protocols, forums and sources.



Fighting cybercrime requires visibility into much more than just the Dark Web. Here's where to look and a glimpse of what you'll find.

The now-shuttered DeepDotWeb, which was a uniquely centralized and trusted repository of Dark Web links and information, had long made it easier for threat actors — and consequently, law enforcement and other defenders — to keep track of which Dark Web sites are active, and where. The repository's takedown left a void that no comparable alternative seems to be able to fill, at least for the near future.

There are other sites, known as hidden wikis, that can appear to be comprehensive directories and are often referred to as such by defenders. In reality, they tend to be little more than human-assembled catalogs that harken back to the early days of the Internet. All this volatility is largely why threat actors who operate on the Dark Web also typically frequent a number of other channels.

It's also why fighting cybercrime requires visibility into much more than just the Dark Web. Contrary to popular belief, the Dark Web accounts for just a minor subset of the many online venues that facilitate cybercrime. Even if the Dark Web were somehow to be eliminated, its absence would simply cause threat actors to rely more heavily on the various other online venues in which many, if not most, already operate.



Wednesday, 29 May 2019 15:41

Cybercrime: Looking Beyond the Dark Web

(TNS) — The governors of Kansas and Missouri are calling for help as the two states recover from several rounds of severe weather that brought flooding and tornadoes.

The Missouri National Guard was activated Monday afternoon to help respond to flooded areas. Missouri Gov. Mike Parson posted on Twitter: “As our state continues to recover from severe storms & damaging flooding, and local resources deplete, I am confident in the Guard’s capabilities to make a difference at this critical time.”

In Jefferson City, cleanup efforts continued after an EF-3 tornado struck Wednesday night, destroying structures and leaving dozens of people injured. The same night, three people were killed near Golden City, Mo., when another EF-3 tornado hit Barton County.

Meanwhile, nearly half of Kansas’ 105 counties are part of a state of disaster declaration. Two counties were added Monday, bringing the total to 49, according to the Kansas Department of Emergency Management.



Sandra Erez presents a cautionary tale for compliance practitioners and boards of directors portraying how disarmingly charming “corporate story speak” can win over audiences (internal and external) – sometimes with devastating consequences. If the corporate story doesn’t fly, take it down.

“The proper magnitude of a story is comprised within such limits that the sequence of events, according to the laws of probability and necessity, will admit of a change from bad fortune to good or from good fortune to bad.”

– Aristotle

The Elevator Pitch and the Evolution of Corporate Storytelling

Corporations are forever leveraging the gullible human element to solidify their business case in a rapacious dog-eat-dog marketplace. And, like so many bees around a single pot of honey, they will try and out-buzz any internal opposition with an exotic business new -ism acquired in Silicon Valley boardrooms or randomly overheard in passing elevator pitches. Back in their native offices, treading softly in carpeted corporate settings as to not appear obstreperous, the intrepid executive will spin a tale peppered with funky jargon – either to explain away their recent strategic failure or garner support for their NEXT BIG move. Quick to spout the newly learned catch phrases, the executives are certain that the introduction of their new vernacular will bring fresh inspiration to their peers – like the unexpected arrival of the CEO to the dying office party. With studied carelessness they toss out words like “deep dive,” “moving the needle” and “unicorn corporations,” proudly exerting their prowess in the art of tech titan speak. They are important, they innovate, they just might be wearing a black turtleneck and they each have a story to tell.

Sometimes, if luck will have it, some of the buzzwords in the story might line up smoothly next to the word strategy – thus further impressing the suited-up crowds with the hint of an actual plan that might – OMG – develop into real action. Ultimately, if the gods of fate will allow it, the story (both internally and to the public) will end up doing what it was meant to do: calming the thundering stakeholders by generating effortless, endless recurring revenue for the corporation.




The concept of “resilience analytics” has recently been proposed as a means to leverage the promise of big data to improve the resilience of interdependent critical infrastructure systems and the communities supported by them. Given recent advances in machine learning and other data‐driven analytic techniques, as well as the prevalence of high‐profile natural and man‐made disasters, the temptation to pursue resilience analytics without question is almost overwhelming. Indeed, we find big data analytics capable of supporting resilience to rare, situational surprises captured in analytic models. Nonetheless, this article examines the efficacy of resilience analytics by answering a single motivating question: Can big data analytics help cyber–physical–social (CPS) systems adapt to surprise? This article explains the limitations of resilience analytics when critical infrastructure systems are challenged by fundamental surprises never conceived during model development. In these cases, adoption of resilience analytics may prove either useless for decision support or harmful by increasing dangers during unprecedented events. We demonstrate that these dangers are not limited to a single CPS context by highlighting the limits of analytic models during hurricanes, dam failures, blackouts, and stock market crashes. We conclude that resilience analytics alone are not able to adapt to the very events that motivate their use and may, ironically, make CPS systems more vulnerable. We present avenues for future research to address this deficiency, with emphasis on improvisation to adapt CPS systems to fundamental surprise.



Tuesday, 28 May 2019 15:03

Rethinking Resilience Analytics

With Avengers Endgame, we’ve likely seen our heroes together for the last time in their current form, but the franchise has much to teach us about implementing an effective data governance strategy. Adlib Software’s Fahad Muhammad explains.

Can the world be saved? Will the Avengers survive? What fate awaits the mighty God of Thunder? At Adlib, we’re huge fans of the Avengers and the latest sequel has us thinking – once again – about what the world’s greatest superhero team can teach us about data.

We’ve already covered how the epic battle between Thanos and Marvel superheroes ties into data privacy (be sure to give it a read), but the latest sequel has us homing in on a heroic data governance strategy for digital transformation. Read on for the four big lessons from our favorite band of heroes.



The talent gap is too large for any one sector, and cybersecurity vendors have a big role to play in helping to close it


Numerous reports show current unfilled cybersecurity jobs in the hundreds of thousands in the US alone, with (ISC)2 forecasting a shortfall of 1.8 million by 2020. As the dearth of cybersecurity skills continues, it is considered among the top cybersecurity risks for many organizations.

Filling this gap is imperative, and it is too big for any one sector or organization to do alone. Cybersecurity vendors have a role to play and a responsibility for closing the cybersecurity skills gap that goes well beyond providing training on products and solutions.



GRC professionals in particular know the importance of tone at the top. When a leader has an ethical lapse, the ramifications can be far-reaching. Michael Volkov discusses the potential fallout of managerial misdeeds.

Company managers are the linchpin of a corporate compliance program. Without belaboring the “Tinker to Evers to Chance” baseball analogy, a corporate culture of compliance requires an important information and accountability flow (or cascade) from leadership to senior managers to on-the-ground managers. It is at this level that the compliance message requires effective communications and conduct by managers directly to employees. This is where the rubber meets the road.

More companies are coming to this realization and affirmatively enhancing managers’ ability to communicate important ethics and compliance messages and demonstrate by day-to-day examples how to implement such ethical principles in their supervisory and work responsibilities. Managers are an important reflection of a company’s culture.

A company places an extraordinary amount of trust in its managers. They carry an important message and have direct responsibilities over their employees. It is no accident that employees, when surveyed, prefer to report their concerns to their immediate supervisor. In fact, many companies explicitly encourage such reporting in their compliance program policies and code of conduct. This reporting preference reflects a basic human desire – seeking approval from an immediate superior.



Friday, 24 May 2019 14:14

What Happens When Managers Misbehave?

Weather risks are some of the most common cause of disruption to businesses in all regions of the world; and like all risks, the actual impact is related to how well the risk is managed. Ann Pickren provides some useful advice…

When risk management professionals hear that the Polar Vortex is collapsing, they aren’t simply worried about how cold it will get; they are focused on the impact to business operations. The risk manager needs to engage the proper functional groups in the organization to make sure their strategies for dealing with operations across the business during severe weather conditions properly address the risk, without introducing new risks. Requiring employees to commute into work in unsafe conditions or failing to communicate with the workforce about impending extreme and severe weather in a timely fashion can elevate human and business risk.

The stakes are higher than ever for businesses that fail to account for weather risk: natural disasters and extreme weather resulted in approximately $160 billion worth of damage last year, and reinsurance company Munich RE forecasts this figure will be surpassed in 2019. What’s more, The World Economic Forum Global Risks Report 2019 identified ‘extreme weather’ as the top risk in terms of likelihood over a 10-year horizon, and it ranks third in potential impact over the next decade.

The business of weather is not lost on the C-Suite, which is beginning to view weather risks as being as impactful as traditional threats (cyber security, geopolitical, financial markets, etc.). As a result, risk management leaders can work with HR and management to develop a plan to protect people and property from growing weather risks and ensure the strategies and policies for the company avoid the introduction of incremental risks. Core to these efforts is putting in place the right strategy to rapidly communicate with workforces and other stakeholders before, during and after a significant weather event occurs. Below are a handful of strategy components to consider:



(TNS) — With El Nino conditions expected to persist into the fall, the Central Pacific Hurricane Center is predicting a 70% chance of above-normal tropical cyclone activity during the upcoming hurricane season.

National Weather Service forecasters on Wednesday said the islands could be threatened by five to eight storms from June through November. The season norm for tropical depressions, named storms and hurricanes in the Central Pacific hurricane basin is four or five.

“The time to prepare for hurricane season is now,” said Chris Brenchley, director of the Central Pacific Hurricane Center, at a press conference at the University of Hawaii at Manoa.

Forecasters said the 2019 outlook also offers a 20% chance of a near-normal season and a 10% chance of a below-normal season.



Essential for business success, innovation is about keeping up with the competition through new products, services or ways of doing things. A new series of International Standards helps organizations maximize their innovation management processes and get the best out of all their bright ideas.

Innovation doesn’t have to mean dazzling new technology or life-transforming inventions. It could mean finding a better way to do things or modifying products that over time lead to significant long-term improvements. But harnessing these ideas and improvements calls for sound strategies to ensure they work their best.

The new ISO 56000 series of International Standards is aimed at providing organizations with guidelines and processes that enable them to get the most out of their innovation projects. ISO 56003, Innovation management – Tools and methods for innovation partnership – Guidance, provides a structured approach for organizations looking to enter into an innovation partnership with another organization. It helps them not only decide if collaborating with other organizations on their innovation project is worthwhile, but how to select the right partners, align partners and agree on a common understanding. It also gives guidance on how best to assign roles and responsibilities as well as put effective governance procedures in place.



It is the second anniversary of the 27th May 2017 data centre outage that grounded thousands of British Airways customers at a cost to the company of over £50 million. In this article Caroline Seymour looks at the issue of IT resilience in the airline sector.

It would not be completely unfounded to say that no other industry is as heavily regulated or impacted by IT outages as the airline industry. Which would hardly be surprising, given that people’s lives, safety, security, money and time are at risk every second of every day, and dependent on near-perfect operations and uptime.

In spite of this enormous pressure to ensure safe, uninterrupted experiences, the airline business has been incredibly vulnerable to IT outages - and outages have severe business implications, ranging from frustrated customers, to damaged brand reputation, to not being able to execute revenue generating operations.

However, many airlines have come to accept disruptions as a norm. SunGard AS, which has tracked airline outages back to 2007, demonstrates that outages are growing over time. Perhaps most critically, its research, conducted by Qualtrics, reports that 65 percent of consumers will not travel with an airline after experiencing a technical problem that led to a flight delay. IT leaders in the airline industry should therefore be wide awake to the detrimental impact that an outage can have, and to the fact that the impact can ripple across the entire business model.



Sage legal advice about navigating a data breach from a troubleshooting cybersecurity outside counsel.


While a serious security incident may be a rare occurrence inside an organization, as a troubleshooting outside counsel, I witness a range of incidents that run the gamut from serious to strange and are often riddled with common pitfalls. It never fails that the event seems to occur at the most inopportune times, such as Christmas Eve or when I'm standing in the middle of the frozen food section of the grocery store (both real-life examples) — the phone rings, and on the other line a client is experiencing their worst day ever. My job is to jump into the mix and begin troubleshooting the legal risks. Here are three traps I frequently see security teams fall into, and how best to navigate them.

Trap 1: Failure to Have a True Incident Response Plan (or to Follow It)
When was the last time you dusted off the ancient incident response plan and actually read it? No matter how sophisticated your organization may be, or how many times you've conducted a tabletop exercise in the last few years, it is important to review the plan and refresh it based on what incidents your organization may face today.



State Emergency Management Agency official discusses effort to better capture costs


Federal, state, and local governments all play key roles in paying for disaster assistance. Local governments are the first line of defense, especially in smaller events, but if a disaster is more severe, states and the federal government can step in. The result is a complex and interconnected flow of funding.

State governments are central to this system, spending on their own relief programs for events that overwhelm locals but do not meet the criteria for federal assistance, and covering a portion of costs as a requirement to receive some types of federal aid. But as research by Pew has found, most states do not track spending across the many agencies involved with the stages of preparing for and responding to disasters—preparedness, mitigation, response, and recovery. Better tracking is critically important at a time when federal policymakers are looking to manage the rising costs of disasters in ways that could affect federal-state funding relationships.

To gather more comprehensive spending data, Laura Adcock, disaster recovery branch chief for Ohio’s Emergency Management Agency (EMA), and colleagues at the state Office of Budget and Management (OBM) developed a system to capture cross-agency disaster costs. Adcock oversees state and federal disaster recovery programs and has administered funds from the Federal Emergency Management Agency’s (FEMA’s) largest grant program—public assistance for state and local government infrastructure repair—for 27 presidential disaster declarations since 1992.

She recently responded to questions from Pew about the policy she helped create. This interview has been edited for clarity and length.



The Most Common Overt & Covert Disruptions Businesses Face Today

It’s imperative that companies understand the risks and potential impacts of business interruptions, regardless of the cause. In this report, we gathered industry data on what’s been prompting business interruptions (BI) in recent years, highlighting the extent of disruptions and their cost. Having mapped out today’s current landscape of BI, we wanted to once again demonstrate the integral role of a Business Continuity/Disaster Recovery plan in a company’s ecosystem.



Social media has become an integral part of our everyday lives.

As its influence has increased, so have the ways we interact with it. This includes businesses using social media to connect with customers, employees, and other key stakeholders.

Recently, WhatsApp, a cross-platform messaging and VoIP service, experienced a cybersecurity breach that “left users unknowingly vulnerable to malicious spyware installed on their smartphones.” The platform, owned by Facebook, is used by over 1.5 billion people to send messages, videos, pictures, and voice recordings using an internet connection instead of a mobile network. In 2017, the company announced they were developing solutions for small businesses and enterprises to more widely use the platform to connect with shareholders.

The implications of this breach shed light on a peril that many companies may not be thinking about—is it worth the risk to communicate through a social media platform that makes your connections vulnerable? More importantly, can you solely rely on social media communication for emergencies?



We often see companies devise excellent strategies for their business continuity and IT/disaster recovery planning but meet with failure in practice due to small lapses and gaps in their BC execution.

In today’s post, we’ll look at the importance of being meticulous in executing your BC and IT/DR plans and share some tips to help you avoid losing your kingdom for want of a nail.


Related on MHA Consulting: Beginner’s Guide to Recovery Exercises

Do you remember the old proverb about the nail and the kingdom?

  • For want of a nail the shoe was lost.
  • For want of a shoe the horse was lost.
  • For want of a horse the rider was lost.
  • For want of a rider the message was lost.
  • For want of a message the battle was lost.
  • For want of a battle the kingdom was lost.

We see this happen often in the world of BC and IT/DR. Companies fail in their efforts to recover their business processes and IT systems because of small oversights and mistakes.

Ironically, these companies often have perfect strategies for their BC and IT/DR recovery. Their planning is excellent at the big picture level.




As competition among enterprises intensifies and supply chains grow more complex, outbreaks of supply chain risk have a significant impact on enterprises. Research on supply chain risk propagation can not only support the development of the individual enterprise but also help the whole supply chain operate stably. Taking into account enterprises' differing attitudes toward risk and the herd mechanism between enterprises, a new model is established that focuses on the risk propagation mechanism in supply chains of risk-seeking and risk-averse enterprises. Based on mean-field theory, the threshold of supply chain risk and the scale of risk propagation can be obtained by numerical simulation, which reveals that the degree of the herd mechanism and risk preference affect the propagation of supply chain risk. The model therefore provides a good explanation of risk preference and the influence of the herd mechanism on supply chain risk, and further verifies the feasibility of the model for supply chain risk propagation problems.
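The kind of propagation mechanism the abstract describes can be illustrated with a toy simulation (a hypothetical sketch, not the paper's actual model; all names and parameters are illustrative): each firm carries a risk-preference factor that scales its chance of absorbing a disrupted neighbor's risk, and the final fraction of disrupted firms stands in for the propagation scale.

```python
import random

def simulate_risk_spread(n=200, k=4, steps=50, base_prob=0.2, seed=1):
    """Toy SI-style cascade on a random network. Each firm's
    risk-preference factor (<1 averse, >1 seeking) scales the chance
    it absorbs a disrupted neighbor's risk; returns the final fraction
    of disrupted firms as a stand-in for the propagation scale."""
    rng = random.Random(seed)
    # random neighbor lists (roughly k contacts per firm)
    neighbors = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    preference = [rng.uniform(0.5, 1.5) for _ in range(n)]
    at_risk = {0}  # a single initially disrupted firm
    for _ in range(steps):
        newly = set()
        for firm in range(n):
            if firm in at_risk:
                continue
            exposed = sum(nb in at_risk for nb in neighbors[firm])
            if exposed and rng.random() < min(1.0, base_prob * preference[firm] * exposed):
                newly.add(firm)
        if not newly:
            break  # cascade has died out
        at_risk |= newly
    return len(at_risk) / n
```

Sweeping `base_prob` in a sketch like this is one way to see a propagation threshold emerge: below it the cascade stays local, above it the disruption scale jumps sharply.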



What weather-related threat causes the most deaths in the United States each year?

If you thought floods, you were wrong. It’s heat.

Heat waves can occur just about anywhere in the United States and can even happen in different neighborhoods within cities. Definitions of a heat wave can vary. The World Meteorological Organization’s definition of a heat wave is five or more straight days of heat that exceeds the daily average temperature by 9 degrees Fahrenheit.
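Applied to a daily series of highs and climatological averages, the WMO-style rule above can be checked mechanically. A minimal sketch (function and parameter names are illustrative):

```python
def heat_wave_days(daily_highs, daily_norms, threshold_f=9.0, min_run=5):
    """Return (start, end) index pairs for runs of at least min_run
    consecutive days whose high exceeds the daily average temperature
    by threshold_f degrees Fahrenheit."""
    runs, start = [], None
    for i, (high, norm) in enumerate(zip(daily_highs, daily_norms)):
        if high - norm >= threshold_f:
            if start is None:
                start = i  # a hot run begins
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i - 1))  # run long enough to qualify
            start = None
    # handle a run that continues to the end of the series
    if start is not None and len(daily_highs) - start >= min_run:
        runs.append((start, len(daily_highs) - 1))
    return runs
```

For example, six straight days 15°F above average followed by normal days yields one qualifying run, while a four-day hot spell falls short of the five-day minimum.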

It’s difficult to determine exactly how many heat-related deaths occur each year in the United States because some are caused indirectly by heat or exacerbated by heat and not listed as having been caused by heat on the death certificate.

According to the Natural Resources Defense Council, heat waves kill an average of 1,300 Americans and send more than 65,000 to emergency rooms each year. Tarik Benmarhnia, a public health researcher at the University of California, San Diego, called that a vast underestimate.



Every day, keeping anything secure requires being smart about trust. The rules of trust will keep you and your data safer.

Do you trust me?

Why wouldn't you? I'm honest, have strong credentials in cybersecurity, and helped design security solutions for top technology companies and government entities.

But hold on — you don't know me from a hole in the ground. Am I fictitious? The advanced degrees in my office — are they real? My six patents — real or exaggerated? You have to trust your instincts on whether I'm trustworthy.

That's the central problem everyone faces in information security today. Casual hackers, organized crime, government-sponsored hackers, secret backdoors, malicious insiders, corrupt supply chains, and porous technological defenses all contribute to our predicament. You can listen to experts, hire security professionals, buy technologies from the right companies, and still lose the security battle. Bulletproof solutions would be prohibitively expensive, and standard off-the-shelf solutions may not keep you safe.



Thursday, 23 May 2019 14:13

The 3 Cybersecurity Rules of Trust

First forecast: ‘Don’t let a weak El Niño fool you’


By Brian Wooley & Paul Licata of Interstate Restoration

The first hurricane prediction for 2019 was less alarming than in many prior years, with only two major hurricanes forecast to hit the U.S. coast. Hurricane researchers at Colorado State University announced in April that they foresee a slightly below-average Atlantic hurricane season, citing a weak El Niño and a slightly cooler tropical Atlantic Ocean as major contributors. Their second, often more accurate forecast is due June 4, and NOAA announces its first forecast of the hurricane season on May 23.

But don’t be fooled. Early predictions in 2017 also pointed to a slightly below-average Atlantic hurricane season, but in that year hurricanes Harvey, Irma and Maria slammed into the Atlantic and Gulf coasts as well as Puerto Rico and became three of the five costliest hurricanes in U.S. history.

Each storm is different and unpredictable which means business and property owners shouldn’t become complacent; it’s extremely important to prepare in advance for major hurricanes. According to the Federal Emergency Management Agency (FEMA), 40 percent of businesses do not reopen after a disaster and another 25 percent fail within one year. Preplanning is important because it will streamline operations during and after a storm, lead to a quicker recovery and potentially lower insurance claims costs.

According to an April 10 webinar hosted by Dr. Phil Klotzbach of the Department of Atmospheric Science at CSU, researchers are predicting 13 named storms during the 2019 Atlantic hurricane season, with two becoming major hurricanes. (In comparison, in 2018, CSU predicted 14 named storms with three reaching major hurricane strength.) Historical data, combined with atmospheric research, gives the U.S. East Coast and Florida Peninsula about a 28 percent chance of getting hit by a major hurricane (the average for the last century is 31 percent). The Gulf Coast, from the Florida panhandle westward to Brownsville, Texas, is forecast to have a 28 percent chance (the average for the last century is 30 percent). The Caribbean has a 39 percent chance, down from the 42 percent average for the last century.

The states with the highest probability to receive sustained hurricane-force winds include Florida (47 percent), Texas (30 percent), Louisiana (28 percent), and North Carolina (26 percent), according to Klotzbach. But hurricanes can cut a wide swath, he says, as Hurricane Michael did in October 2018 as it moved into Georgia, causing high wind damage and gusts as high as 115 mph in the southwest part of the state.

Klotzbach outlined how total dollar losses from Atlantic hurricanes are increasing each year, driven primarily by the doubling of the U.S. population since the 1950s and by larger homes being built, now averaging more than 2,600 square feet. It’s shocking to consider that if a Category 4 storm similar to the one that leveled the city in 1926 struck Miami today, it’s estimated it would cost $200 billion to rebuild. That exceeds the $160 billion in damage caused by Hurricane Katrina in 2005.

Experienced recovery experts, like those at Interstate Restoration, are skilled at delivering a quick response and delegating teams to react as soon as a storm is named. Once they identify approximately where the storm will land on U.S. soil and assess its intensity, they allocate assets, resources, and equipment as needed and keep in close contact with all clients in the path of the storm.  Staging efforts to a safe area begin many days before the event.

Powerful hurricanes, such as Irma, can disrupt businesses for weeks or months, which is why pre-planning is so important. It starts with hiring a disaster response company in advance. By establishing a long term partnership before a disaster happens, business and property owners can ensure they are on the priority list for getting repairs done quickly. The restoration partner can also assist with performing a pre-loss property assessment, recovery planning and working closely with insurance.

Quick recovery is made more difficult when business and property owners neglect proper preparations. So despite the prediction calling for fewer storms expected in 2019, it’s always better to be prepared as Mother Nature can be destructive.


Brian Wooley is vice president of operations, and Paul Licata is national account manager, at Interstate Restoration, a national disaster-response company based in Ft. Worth, Texas.

National Preparedness Month isn’t until September, but there is no better time than now to evaluate current preparedness plans and make necessary improvements before a crisis occurs.

The CodeRED team at OnSolve works regularly with community leaders across the nation, promoting the mass notification tools available and the information residents need in the event of an emergency. In fact, the Federal Emergency Management Agency (FEMA) is leading the charge to encourage “prepareathons” year-round to increase emergency preparedness and resilience. A nation as large and geographically diverse as the United States will see many different disasters over the course of an average year – from hurricanes to tornadoes and from wildfires to floods – so emergency preparedness is something everyone should have on their to-do list.

A critical step in emergency preparedness is signing up for alerts in your community so disaster won’t take you by surprise. America’s PrepareAthon is the perfect time for residents to take charge of their own preparedness by visiting their city or county website and signing up for notifications. This simple action could make all the difference in the event of an emergency. OnSolve offers community notification and mobile alerting platforms that keep residents informed during the most critical times.



In 2018 there were 124 disaster declarations in the U.S., including 58 major disaster declarations and 14 emergency declarations.

Of these, the most common were flooding and fires, alongside 15 named tropical storms, eight of which were hurricanes. What does this tell us? With such a high number of disasters each year, Emergency Managers need to be prepared to communicate with residents, businesses and visitors should disaster strike. Without the ability to reliably send critical alerts and instructions to those impacted, the outcome of these disasters would likely be more devastating.

Fortunately, government agencies have many tools at their disposal, including the Integrated Public Alert and Warning System (IPAWS) and the software required to issue IPAWS alerts – CodeRED.



(TNS) — At least 20 tornadoes hammered six states on Monday as an intense storm system moved across the Southern Plains and other parts of the U.S.

Twisters were spotted in Oklahoma, Texas, Missouri, Arkansas, Kansas and Arizona, according to multiple media reports. An area stretching from Tulsa to Wichita Falls, Texas remained under tornado watch until 5 a.m. Tuesday morning, officials said.

Many of the sightings came in low-population areas, but Oklahoma residents were nervous Monday because it was the sixth anniversary of the Moore tornado that left 24 people dead.



Santa Fe Group’s Gary Roboff and Protiviti’s Paul Kooney discuss today’s increasingly fraught risk environment. Among the findings from a recent study: There’s a growing need for robust third-party risk management and greater board engagement.

Increasing risk and regulatory pressure pose severe challenges to vendor risk management programs and largely offset gains in program effectiveness and efficiency, according to the just-released 2019 Vendor Risk Management Benchmark Study. From The Shared Assessments Program and Protiviti, this fifth-year Benchmarking Study is based on the Shared Assessments Vendor Risk Management Maturity Model (VRMMM), the industry standard reference in determining third-party risk management (TPRM) practice maturity.

The 2019 VRMMM recognizes eight broad categories of performance and incorporates 211 detailed practice criteria, an increase of 81 criteria over the prior edition of the VRMMM. These additional criteria enable exploration of a range of important focus areas, including continuous monitoring, cybersecurity, fourth-party risk management, privacy, resource allocation and optimization and more. The 2019 study by Shared Assessments and Protiviti was conducted during the third quarter of 2018 and is aligned with the updated 2019 VRMMM.

Only four in 10 participating organizations in the 2019 study suggested their vendor risk management programs operate at an acceptable level of maturity. Furthermore, almost one-third have either no TPRM programs or field programs with only ad hoc practices. Maturity scores in the eight VRMMM practice categories were stagnant this year.



What Two Questions Do You Need to be Prepared to Answer After a Cyberattack?

In this day and age, it’s prudent to think of a cyberattack as a “when”, not “if” scenario.   The truth is, your network is under attack right now and the tools and techniques utilized by your IT and Cybersecurity teams are (hopefully) keeping the bad actors at bay.  But what happens in the event of a breach?  Is your Crisis Management Team ready to answer the most difficult questions?

Most of us are (at least theoretically) familiar with the technical aspects of cybersecurity.  They include network security, intrusion prevention and detection and cyber forensic tools to contain and eradicate the threat.  But it is equally important to have a clearly defined response plan for your Crisis Management Team.  While the techies are off working to resolve the breach, your leadership team will need to mitigate impact to your organization’s reputation, operations and financials.  And the best way to accomplish this is to a) have a pre-defined cyber response plan in place and b) consider several key decisions that your organization might face during the crisis.



With the advent of automated GRC tools, data-compliance professionals are shooting themselves in the foot by over-relying on old-fashioned spreadsheets. Joe Stanganelli and Alia Luria discuss a better way to manage GRC data.

Terry Ray, a senior vice president at cybersecurity-software firm Imperva, is fond of saying that even when organizations are able to identify where their data is, they still fall short when it comes to identifying where their data isn’t. This truism has become the state of compliance-tracking data.

Data-compliance teams still overwhelmingly rely upon rudimentary, locally stored spreadsheets for critical policy and tracking functions. This represents a significant security, privacy and compliance risk. Unlike with cloud-based compliance-tracking software, manually tracking governance, risk and compliance (GRC) with locally stored electronic spreadsheets may mean having several spreadsheet files representing countless typically undocumented or poorly documented versions floating around a large or midsize enterprise. And without compliance-management SaaS tools, this data hygiene problem compounds itself as people are left to share compliance-related spreadsheets via insecure channels like email (or, worse, USB drives).

More problematically for compliance, poor data hygiene may mean a failure in maintaining a single version of the truth – if one can even fairly call it “truth.”



The first anniversary of GDPR is rapidly approaching on May 25. Tech companies have used the past year to learn how to navigate the guidelines set in place by the law while ensuring compliance with similar laws globally. After all, companies that violate GDPR face fines amounting to the greater of four percent of worldwide annual revenue or €20 million (around $22.4 million).
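That maximum-fine formula is easy to mis-state, so here is a minimal sketch of it; the €20 million flat cap and the 4% rate come from the regulation, while the revenue figures and function name below are purely illustrative:

```python
# Sketch of the GDPR maximum-fine formula: the greater of 4% of worldwide
# annual revenue or a flat EUR 20 million (roughly $22.4 million).
FLAT_CAP_EUR = 20_000_000  # fixed ceiling in euros
REVENUE_RATE = 0.04        # 4% of worldwide annual revenue

def max_gdpr_fine(worldwide_revenue_eur: float) -> float:
    """Return the maximum possible GDPR fine for a given annual revenue."""
    return max(REVENUE_RATE * worldwide_revenue_eur, FLAT_CAP_EUR)

# A company with EUR 1B in revenue faces a ceiling of EUR 40M, while a
# small firm with EUR 10M in revenue is still exposed to the full EUR 20M.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
print(max_gdpr_fine(10_000_000))     # 20000000.0
```

Note that the flat cap dominates for any company with less than €500 million in worldwide revenue, which is why even small firms take the regulation seriously.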

Although the GDPR primarily applies to countries in the European Union, the law’s reach has extended beyond the continent, affecting tech companies stateside. As long as a US-based company has a web presence in the EU, that company must also follow GDPR guidelines. In an increasingly globalized world, that leaves few companies outside the mix.

GDPR acts as a model for tech companies looking to focus on consumer security, data protection, and compliance. A year into its existence, there is still work to be done on understanding and applying the GDPR’s requirements. For GDPR’s anniversary, we’ve gathered a few IT experts to shed some light on the GDPR, its global effects, and how to ensure data protection.

Alan Conboy, Office of the CTO, Scale Computing:

“With the one-year anniversary of GDPR approaching, the regulation has made an impact in data protection around the world this century. One year later with the high standards from GDPR, organizations are still actively working to manage and maintain data compliance, ensuring it’s made private and protected to comply with the regulation. With the fast pace of technology innovation, one way IT professionals have been meeting compliance is by designing solutions with data security in mind. Employing IT infrastructure that is stable and secure, with data simplicity and ease-of-use is vital for maintaining GDPR compliance now and in the future,” said Alan Conboy, Office of the CTO, Scale Computing.

Samantha Humphries, senior product marketing manager, Exabeam:

“As the GDPR celebrates its first birthday, there are some parallels to be drawn between the regulation and that of a human reaching a similar milestone. It’s cut some teeth: to the tune of over €55 million – mainly at the expense of Google, who received the largest fine to date. It is still finding its feet: the European Data Protection Board are regularly posting, and requesting public feedback on, new guidance. It’s created a lot of noise: for EU data subjects, our web experience has arguably taken a turn for the worse with some sites blocking all access to EU IP addresses and many more opting to bombard us with multiple questions before we can get anywhere near their content (although at least the barrage of emails requesting us to re-subscribe has died down). And it has definitely kept its parents busy: in the first nine months, over 200,000 cases were logged with supervisory authorities, of which ~65,000 were related to data breaches.

With the GDPR still very much in its infancy, many organisations are still getting to grips with exactly how to meet its requirements. The fundamentals remain true: know what personal data you have, know why you have it, limit access to a need-to-know basis, keep it safe, only keep it as long as you need it, and be transparent about what you’re going to do with it. The devil is in the detail, so keeping a close watch on developments from the EDPB will help provide clarity as the regulation continues to mature,” said Samantha Humphries, senior product marketing manager, Exabeam.

Rod Harrison, CTO, Nexsan, a StorCentric Company:

“Over the past 12 months, GDPR has provided the perfect opportunity for organisations to reassess whether their IT infrastructure can safeguard critical data, or if it needs to be upgraded to meet the new regulations. Coupled with the increasing threat of cyber attacks, one of the main challenges businesses have to contend with is the right to be forgotten – and this is where most have been falling short.

Any EU customers can request that companies delete all of the data that is held about them, permanently. The difficulty here lies in being able to comprehensively trace all of it, and this has given the storage industry an opportunity to expand its scope of influence within an IT infrastructure. Archive storage can not only support secure data storage in accordance with GDPR, but also enable businesses to accurately identify all of the data about a customer, allowing it to be quickly removed from all records. And when, not if, your business suffers a data breach, you can rest assured that customers who have asked you to delete data won’t suddenly discover that it has been compromised,” said Rod Harrison, CTO, Nexsan, a StorCentric Company.

Alex Fielding, iCEO and Founder, Ripcord:

“If your company handles any data of European Union residents, you’re subjected to the regulations, expectations and potential consequences of GDPR. Critical elements of the regulation like right to access, right to be forgotten, data portability and privacy by design all require a company’s data management to be nimble, accessible and—most importantly—digital.

Notably, GDPR grants EU residents rights to access, which means companies must have a documented understanding of whose data is being collected and processed, where that data is being housed and for what purpose it’s being obtained. The company must also be able to provide a digital report of that data management to any EU resident who requests it within a reasonable amount of time. This is a tall order for a company as is, but compliance becomes almost unimaginable if a company’s current and archival data is not available digitally.

My advice to anyone struggling to achieve and maintain GDPR compliance is to develop and implement a full compliance program, beginning with digitizing and cataloguing your customer data. When you unlock the data stored within your paper records, you set your company up for compliance success,” said Alex Fielding, iCEO and founder of Ripcord.

Wendy Foote, Senior Contracts Manager, WhiteHat Security:

“Last year, the California Consumer Privacy Act (CCPA) was signed into law, which aims to provide consumers with specific rights over their personal data held by companies. These rights are very similar to those given to EU-based individuals by GDPR one year ago. The CCPA, set for Jan. 1, 2020, is the first of its kind in the U.S., and while good for consumers, affected companies will have to make a significant effort to implement the cybersecurity requirements. Plus, it will add yet another variance in the patchwork of divergent US data protection laws that companies already struggle to reconcile.

If GDPR can be implemented to protect all of the EU, could the CCPA be indicative of the potential for a cohesive US federal privacy law? This idea has strong bipartisan congressional support, and several large companies have come out in favor of it. There are draft bills in circulation, and with a new class of representatives recently sworn into Congress and the CCPA effectively putting a deadline on the debate, there may finally be a national resolution to the US consumer data privacy problem. However, the likelihood of it passing in 2019 is slim.

A single privacy framework must include flexibility and scalability to accommodate differences in size, complexity, and data needs of companies that will be subject to the law. It will take several months of negotiation to agree on the approach. But we are excited to see what the future brings for data privacy in our country and have GDPR to look to as a strong example,” said Wendy Foote, Senior Contracts Manager, WhiteHat Security.

Scott Parker, Director, product marketing, Sinequa:

“Even before the EU’s GDPR regulation took effect in 2018, organizations had been investing heavily in related initiatives. Since last year, the law has effectively standardized the way many organizations report on data privacy breaches. However, one area where the regulation has proven less effective is allowing regulators to levy fines against companies that have mishandled customer data.

From this perspective, organizations perceiving the regulation as an opportunity versus a cost burden have experienced the greatest gains. For those that continue to struggle with GDPR compliance, we recommend looking at technologies that offer an automated approach for processing and sorting large volumes of content and data intelligently. This alleviates the cognitive burden on knowledge workers, allowing them to focus on more productive work, and ensures that the information they are using is contextual and directly aligned with their goals and the tasks at hand,” said Scott Parker, Director, product marketing, Sinequa.

Caroline Seymour, VP, product marketing, Zerto:

“Last May, the European Union implemented GDPR, but its implications reach far beyond the borders of the EU. Companies in the US that interact with data from the EU must also meet its compliance measures, or risk global repercussions.

Despite the gravity of these regulations and their mutually agreed upon need, many companies may remain in a compliance ‘no man’s land’– not fully confident in their compliance status. And as the number of consequential data breaches continue to climb globally, it is increasingly critical that companies meet GDPR requirements. My advice to those impacted companies still operating in a gray area is to ensure that their businesses are IT resilient by building an overall compliance program.

By developing and implementing a full compliance program with IT resilience at its core, companies can leverage backup via continuous data protection, making their data easily searchable over time and ultimately, preventing lasting damage from any data breach that may occur.

With a stable, unified and flexible IT infrastructure in place, companies can protect against modern threats, ensure regulation standards are met, and help provide peace of mind to both organizational leadership and customers,” said Caroline Seymour, VP, product marketing, Zerto.

Matt VanderZwaag, Director, product development, US Signal:

“With the one-year anniversary of GDPR compliance upcoming, meeting compliance standards can still be a somewhat daunting task for many organizations. A year later, data protection is a topic that all organizations should be constantly discussing and putting into practice to ensure that GDPR compliance remains a top priority.

Moving to an infrastructure provided by a managed service provider with expertise is one solution, not only for maintaining GDPR compliance, but also implementing future data protection compliance standards that are likely to emerge. Service providers can ensure organizations are remaining compliant, in addition to offering advice and education to ensure your business has the skills to manage and maintain future regulations,” said Matt VanderZwaag, Director, product development, US Signal.

Lex Boost, CEO, Leaseweb USA:

“GDPR has played an important role in shifting attitude toward data privacy all around the world, not just in the EU. Companies doing business in GDPR-regulated areas have had to seriously re-evaluate their data center strategies throughout the past year. In addition, countries outside of the GDPR regulated areas are seriously considering better legislation for protecting data.

From a hosting perspective, managing cloud infrastructures, particularly hybrid ones, can be challenging, especially when striving to meet compliance regulations. It is important to find a team of professionals who can guide how you manage your data and still stay within the law. Establishing the best solution does not have to be a task left solely to the IT team. Hosting providers can help provide knowledge and guidance to help you manage your data in a world shaped by increasingly stringent data protection legislation,” said Lex Boost, CEO, Leaseweb USA.

Neil Barton, CTO, WhereScape:

“Despite the warnings of high potential GDPR fines for companies in violation of the law, it was never clear how serious the repercussions would be. Since the GDPR’s implementation, authorities have made an example of Internet giants. These high-profile fines are meant to serve as a warning to all of us.

Whether your organization is currently impacted by the GDPR or not, now’s the time to prepare for future legislation that will undoubtedly spread worldwide given data privacy concerns. It’s a huge task to get your data house in order, but automation can lessen the burden. Data infrastructure automation software can help companies be ready for compliance by ensuring all data is easily identifiable, explainable and ready for extraction if needed. Using automation to easily discover data areas of concern, tag them and track data lineage throughout your environment provides organizations with greater visibility and a faster ability to act. In the event of an audit or a request to remove an individual’s data, automation software can provide the ready capabilities needed,” said Neil Barton, CTO, WhereScape.

By Brian Zawada
Director of Consulting Services, Avalution Consulting

Adaptive BC has done a great job of stirring up the business continuity profession with some new ideas. At Avalution, we love pushing the envelope and trying new things, so we were excited to learn more about the ideas in the Adaptive BC manifesto, as well as the accompanying book and training.

While Adaptive BC identified some real problems with the business continuity approaches taken by some organizations, their solutions aren’t for everyone (and not all organizations experience these problems). In fact, their focus is so narrow that we think it’s of little practical use for most organizations.

Business Continuity as Defined by Adaptive BC

From AdaptiveBCP.org: “Adaptive Business Continuity is an approach for continuously improving an organization’s recovery capabilities, with a focus on the continued delivery of services following an unexpected unavailability of people, locations, and/or resources” (emphasis on “recovery” added by Avalution).

As is clear from their definition and made explicit in the accompanying book (Adaptive Business Continuity: A New Approach - 2015), Adaptive BC is exclusively focused on improving recovery when faced with unavailability of people, locations, and other resources.

This approach – or focus – leaves out a long list of responsibilities that add considerable value to most business continuity management programs, such as (the quotes below are taken from Adaptive BC’s book):



Tuesday, 21 May 2019 19:33

Adaptive BC: Not for Most

Having a plan that helps keeps your company open for business during a crisis is vital. Here’s how this international standard can help risk managers deliver.

It’s one thing to have to deal with the immediate effects when disaster strikes your business. But it’s another to make up for the lost working hours, any dip in profit and potential damage to your company’s reputation. These are the knock-on effects when businesses cease operations so they can recover.

“In today’s world, there is an ever-increasing array of risks facing all businesses, including natural disasters, human error or intent to damage and technological failure,” says Brendan Seifried, Director of Workplace Recovery Solutions, EMEA for Regus. “The impact of these hazards can be catastrophic, whether directly affecting the organisation, or indirectly interrupting their supply chain, vendors or business partners.”

All is not lost if, as risk manager, you’re confident that your Business Continuity Management System (BCMS) empowers the company’s workforce to keep calm and carry on.



They may look familiar to you, and that isn't a coincidence. New threats are often just small twists on old ones.

Cyberattackers are often thought to be tech experts who understand security vulnerabilities and loopholes that most people don’t. In reality, most are not that specialized: they bypass security solutions through small adjustments to already well-known attacks. By simply leveraging an established attack sample that is available on the Web, hackers can and do consistently and efficiently modify attacks in order to stay one step ahead of their targets’ security solutions. In fact, some malware strains have been designed to automatically modify themselves to evade signature-based security offerings.

Even sandboxing security solutions — which involve opening suspect files in a controlled environment — are not deterring the ever-increasing rate of email attacks. Because sandboxing solutions have become popular among security practitioners, hackers have also developed sandbox-evasion techniques. Some of these techniques are quite straightforward, such as using the sleep mode to avoid scan detection. And some techniques involve more advanced tools such as sandbox presence detection, where malware runs "clean" code when a sandbox is detected.



Tuesday, 21 May 2019 17:03

Old Threats Are New Again

As I’m kicking off the next iteration of the Forrester Wave™ for vulnerability risk management in the coming weeks, I’ve been fielding a lot of questions about what I’m going to be focusing on and why. Traditional vulnerability management solutions date back 30 years and are a critical element of an infrastructure hardening process, but digital transformation has rendered them no longer sufficient on their own. Because of this, I’m focusing this upcoming Wave on vendors that are actively developing products to solve today’s problems and hope that, by sharing this vision, I can help drive the market a little closer to where we need to be.

Complexity Begets The Need For Vulnerability “Risk” Management

With our digital transformation has come complexity. There are simply too many devices and too many applications that we’re responsible for maintaining in our infrastructure for us to also maintain a meaningful asset inventory, much less keep everything patched and up to date. I’ve heard this problem described in reminiscences of past lives in which there was the one person responsible for keeping track of all the assets who was basically the crown jewel of the IT organization — and if that person ever left, it would be impossible to replace that knowledge. At a certain point, the “genius” of our IT organization was no longer able to keep track of everything, and we’ve been treading water ever since. This clearly isn’t the entire story, but to quote Tyrion Lannister, “people need a good story,” and this one is effective at helping people understand that complexity has outpaced our ability to manage our environments the way we used to.



A strong data protection strategy is essential as data moves across endpoints and in the cloud

 LAS VEGAS – Endpoint security is a common concern among organizations, but security teams should be thinking more broadly about protecting data wherever it resides.

"If you're just focusing on device protection and not data protection, you're missing a lot," said Shawn Anderson, executive security advisor for Microsoft's Cybersecurity Solutions Group, at the Interop conference held this week in Las Vegas. Rather than add multiple endpoint security products to corporate machines, he urged his audience of IT and security pros to think about protecting their data.

An estimated 60% of data is leaked electronically, Anderson said, and 40% is leaked physically. When an organization is breached, the incident costs an average of $240 per record. The average cost of a data breach was $4 million in 2017, a year when hackers stole more than 6 billion records.



(TNS) - Although they’re most common during the spring, tornadoes can happen any time of the year, and anywhere — not just in rural areas.

To stay safe during a tornado warning, it’s important to know what to do and where to go. The National Weather Service always recommends getting on the first floor of a building, away from windows.

But what if your office is in a skyscraper, or you live in a high-rise? What if you’re driving? Here’s what you need to know.

First, it’s important to distinguish the difference between a tornado watch and a tornado warning. When the weather service issues a tornado watch, it means conditions are favorable for tornadoes to develop, but it does not mean that any tornadoes have formed or been spotted. You don’t necessarily have to take shelter during a tornado watch, but you should be aware of the weather and know that the situation could change quickly.



The infamous Ryuk ransomware slammed a small company that makes heavy-duty vehicle alternators for government and emergency fleet. Here's what happened.

The tiny IT team at C.E. Niehoff & Co. had been working for two weeks to run down and clean up a malware infection that had infiltrated its network after an employee clicked on a URL in a phishing email. Unbeknownst to the company as it scrambled to quell the attack, the malware, which was later identified as Trickbot, was quietly spreading among its endpoints and servers, gathering intel about the manufacturing firm and stealing credentials from the compromised machines.

It wasn't until the morning of Sunday, Oct. 14, when C.E. Niehoff IT manager Kelvin Larrue logged into the company's network from home, that it became clear to the company that the attack was something much more serious than a bot infection. A stunned Larrue could see that an intruder was running a PowerShell session on one of the company's servers, moving from server to server with stolen credentials and disabling security tools.



Medical devices have been entering hospitals at light speed, which brings up the question of cybersecurity and how providers can maintain safe treatment. Vidya Murthy explores how regulatory actions have impacted medical device cybersecurity and, in turn, patient safety.

The health care industry is a complex web of payers, providers, medical device manufacturers, third-party vendors and, most importantly, patients. Over the last decade, technology has played a central role in advancing quality of care, creating new delivery mediums and changing access for patients, in large part due to the development of new medical devices. The less discussed shift has been viewing cybersecurity as a HIPAA compliance mitigation instead of a patient safety enabler.



In June 2017 Continuity Central published the results of a survey which looked at whether attitudes to the business impact analysis and risk assessment were changing. Two years on, we are repeating the survey to determine whether there has been any development in thinking across the business continuity profession. The survey closes on May 31st, and the interim results can now be viewed.

The original survey was carried out in response to calls by Adaptive BC for the removal of the business impact analysis and risk assessment from the business continuity process.

The interim results of this year’s survey are as follows:



I have been the Head of Thought Leadership at the Business Continuity Institute since September 2018. In the eight months since my start date, I have been quizzed about many topics by our members – Brexit preparedness, supply chain resilience, horizon scanning and cyber risk to name but a few. However, the topic which I get pressed to answer more than any other is “What is your view on Adaptive Business Continuity?”.

My research on the subject immediately took me to the Adaptive BC website and led me to purchase Adaptive BC – A New Approach in order to learn about the subject. Since then, interest in the self-styled Adaptive BC “revolution” has gained significant traction, with numerous articles from both the founders of Adaptive BC and those who are more sceptical about the subject. Articles such as David Lindstedt’s 2018: The BC Ship Continues to Sink and Mark Armour’s Adaptive Business Continuity: Clearly Different, Arguably Better are being met with writing such as Alberto Mattia’s Adaptive BC Reinvents the Wheel and the very recent article from Jean Rowe challenging Adaptive BC’s approach (Adaptive BC: the business continuity industry’s version of The Emperor’s New Clothes?).

The so-called “revolution” has certainly stirred the BC community – but are the ructions justified?



One of the biggest decisions companies face in conducting a Business Impact Analysis (BIA) is what use, if any, they will make of software in doing it. In today’s post we’ll look at the main software options available for doing BIAs, discuss which work best for which types of organizations, and share some tips that can help you succeed no matter what approach you take to using software.


In broad terms, there are five approaches companies can take in using software to do their BIAs.

As a reminder, a BIA is the analysis organizations conduct of their business units to determine which processes are the most critically time sensitive, so they can make well-informed decisions in doing their recovery planning.



CISOs must consider reputation, resiliency, and regulatory impact to establish their organization's guidelines around what data matters most.

Today's CIOs are the stewards of company data, responsible for its health and performance as well as maintenance of the availability, speed, and resiliency their stakeholders expect. CISOs, however, sometimes serve as emergency room doctors for their company's data. Their role is to think about worst-case scenarios, diagnose the severity of incidents, and jump in when incidents happen or are likely. Their first priority is to keep patients alive, but keeping them healthy is worth bonus points.

Like ER doctors, CISOs need rapid prioritization tied to the health of the business to effectively triage incidents. To establish each organization’s guidelines around what data matters most, every CISO must consider reputation, resiliency, and regulatory impact.
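The triage idea above can be sketched as a simple weighted score across the three impact dimensions the article names. The weights, ratings, and incident names below are hypothetical illustrations, not a published methodology:

```python
# Hypothetical triage sketch: score an incident by its reputation,
# resiliency, and regulatory impact (each rated 0-5) so the highest-impact
# incidents are handled first. Weights are illustrative assumptions.
WEIGHTS = {"reputation": 0.35, "resiliency": 0.35, "regulatory": 0.30}

def triage_score(impact: dict) -> float:
    """Weighted 0-5 severity score for an incident's impact ratings."""
    return sum(WEIGHTS[k] * impact[k] for k in WEIGHTS)

incidents = {
    "leaked customer PII":  {"reputation": 5, "resiliency": 2, "regulatory": 5},
    "internal wiki outage": {"reputation": 1, "resiliency": 3, "regulatory": 0},
}
# Work the queue from highest score to lowest.
for name, impact in sorted(incidents.items(), key=lambda kv: -triage_score(kv[1])):
    print(f"{triage_score(impact):.2f}  {name}")
```

The point of even a toy model like this is that the ranking is decided in advance, in calm conditions, rather than improvised mid-incident.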



Friday, 17 May 2019 16:24

The Data Problem in Security

The recent merger of CloudBees and Electric Cloud is a sign of the times in the world of DevOps as integrated DevOps solutions come back into vogue. Not too long ago, this would have been looked down upon as a step in the wrong direction when it comes to innovation and providing value to developers. But why is that? What’s wrong with an integrated toolchain, and why is it taking so long for vendors and users to come around to them, or for vendors to offer them? After all, DevOps as a term has been around for about 10 years, so what’s the sticking point?

The Dawn Of The Integrated Toolchain

To understand these questions, we need to set the Wayback Machine to the decade of the ’90s, when the full stack developer automation toolchain was being born. Source code repositories certainly existed, but the automation of continuous integration, unit test, and deployment did not. Those types of automation would come later in the early 2000s as teams from HP, IBM, Micro Focus, and Microsoft created full stack automation tools that managed source code integration, executed unit tests, automated functional tests, and packaged software in a manner that was ready for production. This all sounded great on paper, but it was expensive, not extensible, and captive — meaning once you bought into this toolchain, there was no easy way to get out. Proprietary standards and integration points made it difficult at best to deviate from the prescribed toolchain. These tools were also designed to be managed by IT administrators and not their users, the developers writing and building the code.



Shuffling resources, adding administrative process, and creating a competition and incentive system will do little to grow and mature the talent we need to meet the cybersecurity challenges we face.

The recent Executive Order on America’s Cybersecurity Workforce is intended to bolster public sector cybersecurity talent and improve our ability to hire, train, and retain a skilled workforce. Unfortunately, it ignores the real challenges we face in securing our public infrastructure: high turnover, outdated models, and an excess of administrative processes. Instead, the EO focuses on a series of relatively superficial initiatives seemingly designed to get people more excited about cybersecurity. These include:

• A cybersecurity rotational program
• A common skill set lexicon/taxonomy based on the NICE framework
• An annual cybersecurity competition with financial and other rewards for civilian and military participants 
• An annual cyber education award presented to elementary and secondary school educators
• A skills test to evaluate cyber aptitude in the public sector workforce

While it's great to see the continued focus on addressing our substantial national cyber challenges, this Executive Order is an attempt to address a severe talent shortage by shuffling resources, adding administrative process, and creating a competition and incentive system that will do little to grow and mature the cyber labor force. 



Back in November, Forrester outlined its 2019 predictions for a set of hot emerging technologies. We identified which markets were likely to command big investments in the new year and even predicted that GE would turn a corner this year. Let’s see how we did so far with a few of them.

Additive manufacturing will save General Electric. 2019 has delivered glimmers of hope for GE: Investors have started showing faith in the company’s new leadership. GE’s stock price is up 42% year-to-date, and optimism is building in GE’s aviation unit. With a pipeline of over 60 proprietary 3D-printed parts, they hope to literally “reinvent” the engine. Manufacturing these parts with additive methods allows GE to cut out some of its traditional suppliers and win contracts previously held by its competitors. To show off the technology, GE even produced a set of 3D-printed gowns at this year’s Met Gala, which resulted in the most positive press it received in quite some time. Things also look good for the additive manufacturing market, as it blew past its entire 2018 investment total in the first quarter of this year with $445M.



Companies promising the safe return of data sans ransom payment secretly pass Bitcoin to attackers and charge clients added fees.

A new report sheds light on the practices of two US data recovery firms, Proven Data Recovery and MonsterCloud, both of which paid ransomware attackers and charged victims extra fees.

ProPublica researchers were able to trace four payments from a Bitcoin wallet controlled by Proven Data to a wallet controlled by the operators of SamSam ransomware, which caused millions of dollars in damages to cities and businesses across the US. Payments to this wallet, and another connected to the attackers, were banned by the US Treasury Department due to sanctions on Iran, explained former Proven Data employee Jonathan Storfer to researchers.

Proven Data claims to unlock ransomware victims' data using its own technology. Storfer and an FBI affidavit say otherwise: The company instead paid ransom to obtain decryption tools. MonsterCloud, another data recovery firm that claims to employ its own recovery practices, also pays ransoms — without telling the victims, some of which are law enforcement offices.



Mobile apps have become the touchpoint of choice for millions of people to manage their finances, and Forrester regularly reviews those of leading banks. We just published our latest evaluations of the apps of the big five Canadian banks: BMO, CIBC, RBC, Scotiabank, and TD Canada Trust.

Overall, they’ve raised the bar, striking a good balance between delivering robust, high-value functionality and ensuring that it’s easy for customers to get that value with a strong user experience. The top two banks in our review, CIBC and RBC, both made significant improvements to their app user experience (UX) over the past year by focusing on streamlining navigation and workflows. But our analysis also revealed ways all banks can — and should — improve, such as:

Banks should give customers a better view of their financial health. Banks we reviewed don’t provide external account aggregation, and they put the burden on the user to stay on top of their monthly inflows and outflows. They don’t offer useful features such as an account history view that displays projected balances after scheduled transactions hit the account — something leading banks in other regions of the world (like Europe and the US) do offer.



Learn about some of the latest findings on the devastation from a hurricane, and how to prepare your business to withstand this natural catastrophe. Read this infographic by Agility Recovery.

Agility Hurricane Infographic

Thursday, 16 May 2019 16:06

The Biggest Hurricane Risk?

What will happen to the plastic bag you threw away with lunch today? Will it sit in a landfill, clog a municipal sanitation system, or end up in your seafood? Concern over this question has helped spur the rise of the new and rapidly growing cultural trend of people aiming to live ‘Zero Waste’. The momentum of this movement has been fueled in part by an international recycling crisis between the United States and China, as described in this slightly grim article, Is this the End of Recycling?

Seeing images of injured marine animals or aerial footage of the Great Pacific Garbage Patch shows us just how much damage this unsolved problem can cause. We can collect data from events that are occurring today to predict trends in consumption and waste reduction. We can track pilot programs of composting and trash reduction and honestly evaluate the results.

All of this sounds negative, but there is a lot of good news! More and more people are prepared to take drastic action to solve the waste and recycling problems our country will face in the future. Like the strategies used in business continuity and disaster recovery, the Zero Waste movement tries to anticipate a future problem and mitigate its effects before it happens. To do this, we must track real data as it occurs and test our solutions before they become critical to operations.



Adaptive Business Continuity (Adaptive BC) is an alternative approach to business continuity planning, ‘based on the belief that the practices of traditional BC planning have become increasingly ineffectual’. In this article, Jean Rowe challenges the Adaptive BC approach.

We all can appreciate the intent to innovate, but innovation, in the end, must meet the needs of the consumer.  With this in mind, the Adaptive BC approach (The Adaptive BC Manifesto 2016), uses ‘innovation’ as a key message.

However, I believe that, upon reflection, the Adaptive BC approach can be viewed as the business continuity industry’s version of The Emperor’s New Clothes.

The Emperor’s New Clothes is “a short tale by Hans Christian Andersen, about two weavers who promise an emperor a new suit of clothes that they say is invisible to those who are unfit for their positions, stupid, or incompetent.” As professional practitioners, we need to dispel the myth that the Adaptive BC approach is, metaphorically speaking, a set of finely stitched ‘innovative’ business continuity designer clothes draped on the Emperor (i.e., top management), whose beauty only those competent enough can see.



As in many areas of business continuity and life, myths abound. Crisis management has them as well. In today’s post, we’ll look at five of the most pervasive.

Crisis management (CM) planning is an area where many companies believe (or hope) they are in great shape, even though they should have doubts about their plans and hope those plans are never put to the test.

These myths have three things in common: 1) Believing in them makes people feel like they are off the hook, 2) they aren’t true, and 3) they are an obstacle to the company’s truly becoming prepared to deal with a crisis.

Here are five of the myths we encounter most frequently when out in the field:



(TNS) — People were evacuated from their homes and schools were closed or delayed Wednesday after Kansas was hit with back-to-back thunderstorms.

The Kansas Turnpike was also closed south of Wellington to the Oklahoma border Wednesday morning.

Emergency management officials began evacuating people from an area about 5 miles west of Manhattan about 5 a.m., according to a report by the Associated Press.

Evacuations started in the Wichita area early Wednesday. The Weather Channel reported on its Twitter account that evacuations were ongoing in parts of Peabody and Wellington before 6 a.m. Peabody is in Marion County, about 40 miles north of Wichita. Wellington is in Sumner County, about 35 miles south of Wichita.



(TNS) — As Florida enters hurricane season starting June 1, the public needs to prepare for hazardous weather and ensure disaster supply kits are complete, Sarasota County officials urged in a news release. Knowing the risk, getting prepared and staying informed are just a few steps people can take to get ready for hurricane season.

Area hurricane evacuation maps have been updated, officials noted. Residents are encouraged to check the updated maps online to know their evacuation level, previously known as a "zone."

According to Sarasota County Emergency Management officials, just because you can't see water from your home doesn't mean you're not at risk for storm surge. The updated hurricane evacuation levels and storm surge maps are available online by visiting scgov.net/beprepared.



When a critical event happens, preparedness is key.

The ever-growing threat of risks, whether natural disasters or man-made events, has put safety and resiliency top of mind for today’s organizations. Companies of all sizes have implemented mass notification systems to send alerts to employees for situations such as severe weather updates, IT alerts or organizational announcements.

Having a notification system is important, but being prepared to use it at a moment’s notice is critical. That’s why, when a crisis strikes, it is better to have prewritten message scenarios ready to send than to fumble with the message content. However, developing prewritten messages for all kinds of events may seem daunting. Where do you start? Better yet, what do you say?

To better aid notification system admins and users, OnSolve has created the white paper, Your Alert Arsenal for Customizing and Distributing Messages during Critical Events. This critical notification resource contains over 100 prewritten example alerts for emergency and routine events. It covers a range of events in both emergency and non-emergency domains, from natural disaster alerts to customer communication notifications.



A completely trusted stack lets the enterprise be confident that apps and data are treated and protected wherever they are.

With great power comes great responsibility. Just ask Spider-Man — or a 20-something system administrator running a multimillion-dollar IT environment. Enterprise IT infrastructures today are incredibly powerful tools. Highly dynamic and dangerously efficient, they enable what used to take weeks to now be accomplished — or destroyed — with a couple of mouse clicks.

In the hands of an attacker, abuse of this power can dent a company's profits, reputation, brand — even threaten its survival. But even good actors with good intentions can make mistakes, with calamitous results. Bottom line: The combination of great power with human fallibility is a recipe for disaster. So, what's an IT organization to do?

Answer: Trust the stack, not the people.



Monday, 06 May 2019 16:18

Trust the Stack, Not the People

Many business continuity professionals think of the cloud as a magical realm where nothing bad can happen. The reality is that things go wrong in the cloud all the time and as a result we must be sure to perform our due diligence in setting up our cloud-based IT/Disaster Recovery solutions.

In today’s post we’ll look at some of the common misconceptions people have about the cloud.

We’ll also talk about some things you can do to make sure this excellent “new” invention called the cloud doesn’t disappoint you when you need it most.



With hurricane season right around the corner, it’s never too early for businesses to start preparing for potential impact. The first line of defense in protecting your people and assets is understanding how a hurricane’s category level can help your business prepare for the worst.

But first, a quick history lesson:

In the 1970s, Miami engineer Herbert Saffir teamed up with Robert Simpson, the director of the National Hurricane Center. Their mission: develop a simple scale to measure hurricane intensity and the potential damage storms of varying strength could cause residential and business structures.

The result is the Saffir-Simpson Hurricane Wind Scale, which assigns a category level to storms based on their sustained wind speeds. The scale ranks every hurricane from 1-5, with 5 being the most intense—a storm of this magnitude will leave behind catastrophic damage in its wake.
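The scale’s wind-speed boundaries lend themselves to a simple lookup. Below is a minimal sketch in Python using the National Hurricane Center’s published thresholds in mph; the function name is our own:

```python
def saffir_simpson_category(sustained_wind_mph: float) -> int:
    """Return the Saffir-Simpson category (1-5) for a sustained wind
    speed in mph, or 0 for storms below hurricane strength."""
    # NHC thresholds: Cat 5 >= 157, Cat 4 >= 130, Cat 3 >= 111,
    # Cat 2 >= 96, Cat 1 >= 74 mph sustained winds.
    for floor, category in [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]:
        if sustained_wind_mph >= floor:
            return category
    return 0  # tropical storm or weaker

print(saffir_simpson_category(85))   # 1
print(saffir_simpson_category(160))  # 5
```

A planning tool built on such a lookup could, for example, trigger different notification templates as a forecast storm crosses each category boundary.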



Passwords are simply too vulnerable. On the dark web the underground market for passwords and other identity details is thriving. Every month at least one major hack or data leak takes place in which millions of records, including passwords, are exposed or stolen.

If a hacker gets a password and email address, they simply apply the information to online platforms such as Amazon, eBay, Facebook and others until they get a hit. It’s a common practice, known as credential stuffing. According to some estimates, many people will have upwards of 200 online accounts within a few years. How do you remember passwords for so many accounts? The savvy use password managers; however, many still reuse the same password across all their accounts despite warnings.

Every year BullGuard notes that surveys of the most common passwords reveal that '123456', 'password', '123456789' and 'qwerty' still make the top 10. Cyber criminals love it. They have great success using simple keyboard patterns to break into accounts online because they know so many people are using these easy-to-remember combinations.
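A basic defense against these easy-to-guess combinations is a deny-list check at account creation. The sketch below uses only the handful of passwords named above; a real deployment would check against a large breached-password corpus:

```python
# Deny-list of the most common passwords cited above -- a real system
# would load a much larger corpus (e.g., a breached-password list).
COMMON_PASSWORDS = {"123456", "password", "123456789", "qwerty"}

def is_too_common(candidate: str) -> bool:
    """Reject any password that appears on the known-common list."""
    return candidate.lower() in COMMON_PASSWORDS

print(is_too_common("qwerty"))       # True
print(is_too_common("tr0ub4dor&3"))  # False
```

The same check also blunts credential stuffing somewhat, since accounts protected by unique, uncommon passwords are far less likely to appear in the leaked lists attackers replay.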

Because of this inherent vulnerability, should we be seeing the slow decline of the password? If so, what will replace it, and what will we be using five years from now? This article provides some insight by looking at how today’s developments are evolving from their password roots and how they might shape the future.



In the last few years, biometric technologies from fingerprint to facial recognition are increasingly being leveraged by consumers for a wide range of use cases, ranging from payments to checking luggage at an airport or boarding a plane. While these technologies often simplify the user authentication experience, they also introduce new privacy challenges around the collection and storage of biometric data.

In the US, state regulators have reacted to these growing concerns around biometric data by enacting or proposing legislation. Illinois was the first state to enact such a law in 2008, the Biometric Information Privacy Act (BIPA). BIPA regulates how private organizations can collect, use, and store biometric data. BIPA also enabled individuals to sue individual organizations for damages based on misuse of biometric data.



With GDPR and the California Consumer Privacy Act dominating the data privacy conversation, Baker Tilly’s David Ross discusses the myriad benefits of maintaining compliance.

Recently, we saw Google fined $57 million by France under the sweeping General Data Protection Regulation (GDPR) legislation passed by the European Union. Fined for not properly disclosing or alerting consumers on how their data would be used, Google’s practices ran afoul of the new data privacy laws enacted in May 2018.

Consumers and corporations alike face unfortunate repercussions when cybersecurity precautions aren’t taken seriously. Gloomy statistics and stories of well-known corporations losing customer and vendor personal information to large-scale data breaches fill the news on a near daily basis. The frequency of data breaches has increased to an unprecedented rate, and the cost continues to rise each year. A study by the Ponemon Institute reports the average cost of a data breach is up 6.4 percent since 2017 to a whopping $3.86 million.

While there is significant press surrounding the fines organizations must pay for breaches and violations, the other less apparent and often difficult-to-quantify costs can be much greater, farther reaching and longer lasting. These may include reputational damage, loss of stock value, loss of current and future customers, class action lawsuits and remediation expenses from breaches such as notification costs or credit report monitoring for affected customers.



Exploits give attackers a way to create havoc in business-critical SAP ERP, CRM, SCM, and other environments, Onapsis says.

Exploits targeting a couple of long-known misconfiguration issues in SAP environments have become publicly available, putting close to 1 million systems running the company's software at risk of major compromise.

Risks include attackers being able to view, modify, or permanently delete business-critical data or taking SAP systems offline, according to application security vendor Onapsis.

The exploits, which Onapsis has collectively labeled 10KBLAZE, were publicly released April 23. They affect a wide range of SAP products, including SAP Business Suite, SAP S/4 HANA, SAP ERP, SAP CRM, and SAP Process Integration/Exchange Infrastructure.



And now that I have your attention… there really is a link between the two incongruous topics in the headline. Archive360’s Bill Tolson explains.

Perhaps you remember sitting through a class in high school billed as “sex education,” yet finding it dealt so indirectly with the topic that it was difficult, if not impossible, to discern the pertinent details that would help you understand what you really needed to know in this area. When faced with a real-life situation, many of us thus stumbled in blindly.

If you know anything about the General Data Protection Regulation (GDPR), then you’ll see the close analogy here. While the regulation has been in effect for almost a year now, many companies are still failing to grasp and act on the necessary details to stay compliant — the equivalent of closing their eyes and hoping for the best.



New study shows SMBs face greater security exposure, but large companies still support vulnerable systems as well.

Organizations with high-value external hosts are three times more likely to have severe security exposure to vulnerabilities such as outdated Windows software on their off-premise systems versus their on-premise ones.

While external hosts at SMBs face greater exposure than larger companies, as company revenues grow so do the number of hosts and security issues affecting them, according to a new study published yesterday by the Cyentia Institute and researched by RiskRecon. The study analyzed data from 18,000 organizations and more than 5 million hosts located in more than 200 countries.

The study, Internet Risk Surface Report: Exposure in a Hyper-Connected World, identified more than 32 million security vulnerabilities, such as outdated Magento e-commerce software and systems running outdated versions of OpenSSL that are vulnerable to exploits such as DROWN and Shellshock.

Wade Baker, founder of the Cyentia Institute, says the results have to be carefully analyzed. For example, 4.6% of companies with fewer than 10 employees had high or critical exposure to security vulnerabilities, versus 1.8% of companies with more than 100,000 employees. So while the 1.8% number sounds good percentage-wise, that's still many more hosts exposed.
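Baker’s point is simple arithmetic: a lower exposure rate applied to a much larger host footprint can still mean far more exposed hosts. The per-company host counts below are assumptions for illustration only; the study supplies only the percentages:

```python
# Exposure rates from the study; per-company host counts are assumed
# purely to illustrate the absolute-vs-relative point.
small_rate, small_hosts = 0.046, 10       # firms with <10 employees
large_rate, large_hosts = 0.018, 50_000   # firms with >100,000 employees

small_exposed = small_rate * small_hosts
large_exposed = large_rate * large_hosts

print(f"Exposed hosts per small firm: {small_exposed:.1f}")  # 0.5
print(f"Exposed hosts per large firm: {large_exposed:.1f}")  # 900.0
```

Under these assumed footprints, the “better” 1.8% rate still translates into hundreds of exposed hosts per large enterprise.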



Thursday, 02 May 2019 14:18

Study Exposes Breadth of Cyber Risk

(TNS) — Unlicensed handgun owners would be allowed to carry their weapons — openly or concealed — in public for up to a week in any area where a local, state or federal disaster is declared, under a bill that has been overwhelmingly approved by the Texas House, 102 to 29.

House Bill 1177 by Rep. Dade Phelan, R-Beaumont, now awaits its first hearing in the Texas Senate. Phelan said he wrote the bill so gun owners don’t have to leave their firearms behind when evacuating their homes. Existing laws allow gun owners to store them in their vehicles, with some conditions.

“I don’t want someone to feel like they have to leave their firearms back in an unsecured home for a week or longer, and we all know how looting occurs in storms,” Phelan said. “Entire neighborhoods are empty and these people can just go shopping, and one of the things they’re looking for is firearms.”

Opponents say Phelan’s bill could make a bad situation worse by adding firearms to an already volatile situation.



More than six months have passed since I wrote Forrester’s predictions 2019 report for distributed ledger technology (DLT, AKA blockchain). In the blockchain world, that’s ages ago.

As I keep being asked how those predictions are shaping up, and having just attended two excellent events in New York, now’s a good time to take stock. So how did we do?

Terminology shift from blockchain to DLT: I was mostly wrong but also a little bit right. What we’re seeing today is neatly reflected in the titles of the two conferences I referred to above: the EY Global Blockchain Summit and IMN’s Synchronize 2019: DLT And Crypto For Financial Institutions. In other words, in the financial services sector, the distributed ledger/DLT terminology has become predominant; there are even firms where the term “blockchain” is banned from the vocabulary altogether. Outside of this industry, though, it’s a different picture: Say “DLT” or “distributed ledger,” and you get blank stares; say “blockchain,” and eyes light up. For the same reason, many startups continue leading with “blockchain” in their marketing, even if their software lacks some of the characteristics typically associated with that descriptor. One said to me: “Blockchain is a recognized category; DLT isn’t.”



According to hurricane research scientists at Colorado State University, early predictions for the 2019 hurricane season show a slightly below average activity level.

While this could be good news, we can’t forget the destruction caused by hurricanes over the past several years. In 2018, fifteen named storms developed, with Hurricanes Michael and Florence making landfall and causing crises for both Florida and North Carolina. The 2017 season cost more than $282 billion and caused up to 4,770 fatalities. Whether we see two named storms or ten, preparation is your greatest ally against potential devastation. Start by using these automated message templates for your organization’s mass notification system.

Using Hurricane Notification Message Templates

When using message templates, there are a few basic guidelines to follow. Start by keeping the message length to a minimum. This ensures recipients can get the most information in the least amount of time. In addition, SMS messages cannot exceed 918 characters; longer messages are broken up into multiple messages that may create confusion.

By creating message templates prior to severe weather, you can generate detailed and informative alerts for every step in your emergency plan. Then in the wake of a hurricane, these messages are ready to be sent to the right audiences. Recipients receive only those messages that apply to them, which helps to eliminate confusion during a stressful time.
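A message template is just prewritten text with named placeholders filled in at send time. Here is a minimal sketch using Python’s standard-library string templating; the template wording, field names, and the single-message length guard are illustrative assumptions, not any vendor’s API:

```python
from string import Template

SMS_LIMIT = 918  # concatenated-SMS character ceiling noted above

# Hypothetical evacuation template -- wording and fields are examples.
evacuation_alert = Template(
    "HURRICANE ALERT: $storm is forecast to reach $area around $eta. "
    "Evacuation level $level is in effect. Follow route $route."
)

def render_alert(template: Template, **fields) -> str:
    """Fill in a template and enforce the SMS length limit."""
    message = template.substitute(**fields)
    if len(message) > SMS_LIMIT:
        raise ValueError("message exceeds the SMS character limit")
    return message

print(render_alert(evacuation_alert, storm="Michael", area="Sumner County",
                   eta="6 p.m.", level="A", route="US-81 North"))
```

Keeping the variable fields small (storm name, area, time, route) preserves the short, unambiguous wording the guidelines above call for while still letting each audience receive a message specific to them.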



Business Continuity Awareness Week 2019 is May 13‑17

This global event is a time to consider business continuity and the value an effective continuity management program can have for your organization.

An emergency notification system is a crucial tool in any business continuity plan. Every day, events like the following happen with no warning:

  • Hurricanes, tornadoes, and other natural disasters
  • Active shooter
  • Urban wildfire
  • Power outages
  • Cybercrime
  • Disease outbreaks
  • Workplace violence

One of the most frequent consequences of these events is limited or impaired communication, making it difficult to relay critical messages regarding safety and disaster response. Emergency notification systems have proven to be a vital tool for today’s organizations.



As corporate boards gather for annual shareholder meetings, the issues in the spotlight are defined by forces driving both business growth and risk. BDO’s Amy Rojik offers suggestions for how boards can be prepared to communicate with stakeholders this year.

For corporate boards, spring marks the arrival of annual shareholder meeting season. Every year, shareholders gather for board meetings armed with questions and concerns that, if not sufficiently addressed, may hamper their confidence in a business’s ability to manage risk and sustain long-term value creation.

In 2019, the list of issues on boards’ radars are defined by the forces driving both business growth and risk, in equal measure. This year’s key areas of shareholder concern can be grouped into four categories: digital transformation and data protection, people and culture, market movement and regulation and reporting. Here are some suggestions for how boards can address them.

Digital Transformation & Data Protection

With organizations facing increasing pressure to streamline and optimize every aspect of their business, digital transformation is at the crux of business innovation. As a result, it is nearly impossible to walk into a boardroom without hearing the phrase mentioned. And for good reason — having a digital transformation strategy is no longer optional; it is necessary for survival in today’s digital economy. Corporate boards should expect shareholders to question how much is being spent on digital transformation, who is leading the charge on strategy, what the return on investment is and how the organization compares to its peers. In communicating a digital strategy to stakeholders, linking it to clear key performance indicators (KPIs) and business objectives is critical.



When we walk into our homes, we can ask our voice assistants to turn the lights on, use our faces to unlock doors and monitor our home cameras on our phones. When we travel, the planes we take now include connected blockchain-based parts that regularly alert crews for vital maintenance. Brought on by the Fourth Industrial Revolution (4IR), smart, connected technologies are helping make life easier, faster and more convenient, because, blended together, they can significantly boost intelligence and reach beyond what any one digital technology could deliver alone. However, blended artificial intelligence (AI), internet of things (IoT), blockchain and other 4IR technologies also bring infinite entry points for risk.

Imagine, for instance, how many companies are using AI for analytics that improve with use. But data errors, or bias in software or models, can misinform decisions and bring unforeseen accidents. AI-related risks have ranged from public pushback on the use of AI-based surveillance cameras, to software glitches that led to self-driving car crashes. Add to this list evolving regulations in areas like data privacy, and missed risks can be costly. A 2018 report by the Ponemon Institute estimates noncompliance costs to be 2.7 times the cost of maintaining or meeting compliance requirements — up 45 percent since 2011.

While companies race to digitally transform themselves and realize the full potential of 4IR technologies, we should pause to consider how companies can best navigate the immense risk that these blended technologies bring.



Protiviti’s Jim DeLoach discusses one of the more pervasive issues falling within senior management’s and the board’s purview. Performance relates to virtually everything important: execution of the strategy, the customer experience, investor expectations, executive compensation and even senior management and the board itself. Accurately measuring it is critical.

Performance management is so integral to the functioning of executive management and to the oversight of the board of directors that it’s easy to forget that it, too, is a process. Like all processes, it can be effective or ineffective in delivering the desired value. Given the complexity of the global marketplace, the accelerating pace of disruptive change and ever-increasing stakeholder expectations, how should executive management direct and the board oversee the performance management process so that it is effective in driving execution of the strategy and incenting the desired behaviors across the organization?

As the ultimate champion for effective corporate governance, the board engages management with emphasis on four broad themes: strategy, policy, execution and transparency. Effective performance management touches each of these themes by focusing outwardly as well as inwardly and looking to the future as well as to the present and past. The message is that, in today’s environment, the focus on performance must be anticipatory and proactive as well as reactive and interactive in focusing company resources on the pursuit of its performance goals.

Many organizations use some variation of a balanced scorecard that integrates financial and non-financial measures to communicate what’s important, focus and align processes and people with strategic objectives and monitor progress in executing the strategy. With that as a context, we are observing in the marketplace six important areas of emphasis for measuring performance:



Financial services firms saw upticks in credential leaks and credit card compromise as cybercriminals go where the money is.

More than one-quarter of all malware attacks target the financial services sector, which has seen dramatic spikes in credential theft, compromised credit cards, and malicious mobile apps as cybercriminals seek new ways to generate illicit profits.

It's hardly surprising to learn attackers want money; what researchers highlight in IntSights' "Banking & Financial Services Cyber Threat Landscape Report" is what they look for and how they obtain it. The first quarter of 2019 saw a 212% year-over-year spike in compromised credit cards, 129% surge in credential leaks, and 102% growth in malicious financial mobile apps.

Banks and other financial services organizations were targeted in 25.7% of all malware attacks last year – more than any of the other 27 industries tracked. Researchers point to two key events that largely shaped the modern financial services threat landscape: the shutdown of cybercriminal forum Altenen and "Collections #1-5," a major global data leak earlier this year.



Cavirin‘s Anupam Sahai discusses the factors that determine whether the CCPA impacts an organization, what the requirements are if so and what action you can take to prepare for it.

Just when you thought you had a handle on GDPR, businesses have new legislation to worry about: the California Consumer Privacy Act (CCPA). The CCPA stipulates that California residents should have greater access to and control over personal information held by businesses. In particular, the law seems targeted at online social media firms (e.g., Facebook) that have been reckless with their users’ personal information over the past few years. With the number of data breaches to date, are we really that surprised that something like this is coming into effect?

CCPA will become effective on January 1, 2020, but will not be enforced until six months afterward. However, the new law enshrines a few fundamental rights for consumers to access the information that companies hold on them and to control what was collected, stored and shared within the previous 12 months. So, come July 1, 2020, if a company has collected personal information from January 1, 2019 onward, the consumer has the right to find out exactly what data the business has collected, the right to opt out of the company selling their data, and the right to ask for their data to be deleted – or, as GDPR puts it, the right to be forgotten.



Strategic Overview

Disasters disrupt preexisting networks of demand and supply. Quickly reestablishing flows of water, food, pharmaceuticals, medical goods, fuel, and other crucial commodities is almost always in the immediate interest of survivors and longer-term recovery.

When there has been catastrophic damage to critical infrastructure, such as the electrical grid and telecommunications systems, there will be an urgent need to resume—and possibly redirect— preexisting flows of life-preserving resources. In the case of densely populated places, when survivors number in the hundreds of thousands, only preexisting sources of supply have enough volume and potential flow to fulfill demand.

During the disasters in Japan (2011) and Hurricane Maria in Puerto Rico (2017), sources of supply remained sufficient to fulfill survivor needs. But the loss of critical infrastructure, the surge in demand, and limited distribution capabilities (e.g., trucks, truckers, loading locations, and more) seriously complicated existing distribution capacity. If emergency managers can develop an understanding of fundamental network behaviors, they can help avoid unintentionally suppressing supply chain resilience, with the ultimate goal of ensuring emergency managers “do no harm” to surviving capacity.

Delayed and uneven delivery can prompt consumer uncertainty that increases demand and further challenges delivery capabilities. On the worst days, involving large populations of survivors, emergency management can actively facilitate the maximum possible flow of preexisting sources of supply: public water systems; commercial water/beverage bottlers; food, pharmaceutical, and medical goods distributors; fuel providers; and others. To do this effectively requires a level of network understanding and a set of relationships that must be cultivated prior to the extreme event. Ideally, key private and public stakeholders will conceive, test, and refine strategic concepts and operational preparedness through recurring workshops and tabletop exercises. When possible, mitigation measures will be pre-loaded. In this way, private-public and private-private relationships are reinforced through practical problem solving.

Contemporary supply chains share important functional characteristics, but risk and resilience are generally anchored in local-to-regional conditions. What best advances supply chain resilience in Miami will probably share strategic similarities with Seattle, but will be highly differentiated in terms of operations and who is involved.

In recent years the Department of Homeland Security (DHS) and the Federal Emergency Management Agency (FEMA) have engaged with state, local, tribal and territorial partners, private sector, civic sector, and the academic community in a series of innovative interactions to enhance supply chain resilience. This guide reflects the issues explored and the lessons (still being) learned from this process. The guide is designed to help emergency managers at every level think through the challenge and opportunity presented by supply chain resilience. Specific suggestions are made related to research, outreach, and action.



Tuesday, 30 April 2019 14:48

FEMA Supply Chain Resilience Guide

vpnMentor’s research team discovered a hack affecting 80 million American households.

Known hacktivists Noam Rotem and Ran Locar discovered an unprotected database impacting up to 65% of US households.

Hosted on a Microsoft cloud server, the 24 GB database includes the number of people living in each household along with their full names, marital status, income bracket, age, and more.



(TNS) - Twenty-three men and women from Cambria, Somerset and Bedford counties graduated on Friday after a week of training by the Laurel Highlands Region Police Crisis Intervention Team.

The program, held at Pennsylvania Highlands Community College in Richland Township, included classes on suicide prevention, mental illness, strategies to de-escalate situations, dealing with juveniles and specialty courts.

“It’s critical that we give them the training,” said Kevin Gaudlip, Richland Township police detective and event coordinator. “Many of these situations are suicidal people that we encounter. In this course, officers are given the skills to effectively communicate with these people to prevent suicide.”

Police officers, 911 dispatchers, corrections officers, EMS personnel, probation officers, crisis intervention teams and others participated.



A firm’s people play essential roles in all stages of IT transformation. For companies at the beginner level of maturity, employees must come together to connect the organization. Once the organization is united, it must adopt customer-centric principles to become adaptable and reach intermediate maturity. To reach an advanced maturity level, the organization must again rely on its people to transition from being adaptable to adaptive. At each of these maturity levels, a company’s talent, culture, and structure look slightly different. The key differences in these three areas between beginner, intermediate, and advanced firms undergoing IT transformations are as follows:



In previous articles, we discussed how communicable diseases and pandemics are (or are not) addressed in personal and commercial insurance policies. Today, we’ll talk about pandemic catastrophe bonds.

The Ebola outbreak between 2014 and 2016 ultimately resulted in more than 28,000 cases and 11,000 deaths, most of them concentrated in the West African countries of Guinea, Liberia, and Sierra Leone.

The outbreak inspired the World Bank to develop a so-called “pandemic catastrophe bond,” an instrument designed to quickly provide financial support in the event of an outbreak. The World Bank reportedly estimated that if the West African countries affected by the Ebola outbreak had had quicker access to financial support, then only 10 percent of the total deaths would have occurred.

But wait, what are “catastrophe bonds” and what’s so special about a pandemic bond?



Monday, 29 April 2019 18:31


With a year of Europe's General Data Protection Regulation under our belt, what have we learned?

There is no denying the impact of the European Union General Data Protection Regulation (GDPR), which went into effect on May 25, 2018. We were all witness — or victim — to the flurry of updated privacy policy emails and cookie consent banners that descended upon us. It was such a zeitgeist moment that "we've updated our privacy policy" became a punchline.

Pragmatically, the GDPR will serve as a catalyst for a new wave of privacy regulations worldwide — as we have already seen with the California Consumer Privacy Act (CCPA) and an approaching wave of state-level regulation from Washington, Hawaii, Massachusetts, New Mexico, Rhode Island, and Maryland.

GDPR has been a boon for technology vendors and legal counsel: A PricewaterhouseCoopers survey indicates that GDPR budgets have topped $10 million for 40% of respondents. A majority of businesses are realizing that there are benefits to remediation beyond compliance, according to a survey by Deloitte. CSOs are happy to use privacy regulations as evidence in support of stronger data protection, CIOs can rethink the way they architect their data, and CMOs can build stronger bonds of trust with their customers.



Security is a top concern at all levels of the organization, but especially at the board level and C-suite. SoftwareONE’s Mike Fitzgerald champions a “security-first” mentality and discusses the implications of failing to meet industry standards and regulations.

Instances of lost intellectual property (IP) due to data breaches are gaining attention in the mainstream press and in board rooms across the globe. C-suite executives are taking note of these events; security and compliance are no longer just IT issues. They are very real and very urgent business issues. Breaches and noncompliance have a major impact on business. After all, in the U.S. alone, the average data breach could cost a company upward of $7.9 million.

Compliance concerns are receiving attention from existing C-suite executives and have caused enough of a stir to lead to the creation of new roles, such as the Chief Compliance Officer (CCO), who is tasked with understanding and managing the plethora of compliance requirements that organizations must address. The CCO and the Chief Information Security Officer (CISO) need to be aware of compliance requirements on the global level (think General Data Protection Regulation (GDPR)) and on the local level (Health Insurance Portability and Accountability Act (HIPAA) and Sarbanes-Oxley (SOX)), since most organizations store at least some of their data in the cloud. The fine for a breach or lapse in compliance with an industry standard or regulation like GDPR can equal as much as 4 percent of a company’s revenue; that is potentially enough to put a company out of business. This new compliance-driven market makes it imperative to have a security-first mentality when it comes to IT decisions and a thorough understanding of the greater business implications resulting from a lack of proper security practices.



More and more businesses are deploying applications, operations, and infrastructure to cloud environments – but many don't take the necessary steps to properly operate and secure them.

"It's not impossible to securely operate in a single-cloud or multicloud environment," says Robert LaMagna-Reiter, CISO at First National Technology Solutions (FNTS). But cloud deployment should be strategized with input from business and security executives. After all, the decision to operate in the cloud is largely driven by business trends and expectations.

One of these drivers is digital transformation. "There is a driving force, regardless of industry, to act faster, respond to customers quicker, improve internal and external user experience, and differentiate yourself from the competition," LaMagna-Reiter says. Flexibility is the biggest factor, he adds, as employees and consumers want access to robust solutions that can be updated quickly.



Monday, 29 April 2019 18:26

How to Build a Cloud Security Model

When Newman, Calif., police officer Ronil Singh was murdered in December 2018, a Blue Alert was issued to notify the public that a killer was on the loose and to help apprehend the suspect.

The Blue Alert, a brief message issued via FEMA’s Integrated Public Alert and Warning System (IPAWS), was sent by the California Highway Patrol (CHP) in the Fresno and Merced areas where the suspect was believed to be on the run. The alert’s embedded link, which led to a flyer with additional information on the suspect, was clicked by more than a million cellphones within 30 minutes.

Developed by OnSolve, Blue Alert is a new addition to IPAWS that gives law enforcement officials the ability to alert the public when an officer has been injured or killed. It is administered in California by the CHP, which acts on information provided by the local agency seeking to send an alert.



There’s a pervasive myth out there that the marijuana industry is an unregulated Wild West populated by desperadoes and mountebanks out to score a quick buck.

But even a passing familiarity with how the industry operates in states with legal recreational and medical marijuana should be enough to dispel that myth. Marijuana operations are subject to extremely strict licensing requirements and regulatory oversight. Every player in the marijuana supply chain is tightly controlled – from cultivators to retail stores to, yes, the buyers themselves.

In fact, a recent analysis from workers compensation insurer Pinnacol Assurance suggests that the industry’s strict regulatory oversight may also be the reason why it’s a safe industry to work in.



What does the future hold? This year on 28 April, the World Day for Safety and Health at Work draws attention to the future of work and reminds us of the importance of ISO solutions in combating work-related injuries, diseases and fatalities worldwide.

Health and safety at work likely isn’t an issue that’s top of mind on a daily basis. Yet, for millions of workers across the globe, their jobs can put them in some extremely high-risk environments where valuing safety can mean the difference between life and death.

Organized by the International Labour Organization (ILO), the World Day for Safety and Health at Work aims to raise awareness of the importance of occupational health and safety and build a culture of prevention in the workplace. This year’s theme looks to the future for continuing these efforts through major changes such as technology, demographics, sustainable development, and changes in work organization.



In the wake of a reported ransomware attack on global manufacturing firm Aebi Schmidt, Peter Groucutt outlines the steps companies should take to prepare for such incidents. A clear cyber incident response plan and maintaining frequent communication are critical.

The details of the attack on Aebi Schmidt remain light at this stage, but early reports suggest it was severe, with systems for manufacturing operations left inaccessible. The manufacturing sector has recently seen a number of targeted ransomware attacks using a new breed of ransomware known as LockerGoga. Norwegian aluminium producer Norsk Hydro and French engineering firm Altran have been hit in Europe. In the US, chemicals company Hexion was also attacked. The reasoning for these targets is clear – paralysing the IT systems for these businesses has an immediate effect on their production output. That means significant losses, potentially millions of dollars per day. Unlike mass ransomware attacks that might net the attacker a few hundred pounds, the ransom is correspondingly higher.

If you are hit by a ransomware attack, you have two options. You can either recover the information from a previous backup or pay the ransom. However, even if you pay the ransom, there is no guarantee you will actually get your data back, so the only way to be fully protected is to have historic backup copies of your data. When recovering from ransomware, your aims are to minimise both data loss and IT downtime. Defensive and preventative strategies are essential but outright prevention of ransomware is impossible. It is therefore vital to plan for how the organization will act when compromised to reduce the impact of attacks. Having an effective cyber incident response plan in place is critical to your recovery.



Friday, 26 April 2019 14:59

Lessons from a Ransomware Attack

Sea level rise, and its perils, is often associated with the East Coast. But California communities along the coast that don’t prepare for what’s ahead could be inviting disasters of the magnitude not yet seen in the state.

A report by the United States Geological Survey Climate Impacts and Coastal Processes Team suggests that future sea level rise, in combination with major storms like the ones the state is experiencing now, could cause more damage than wildfires and earthquakes.

This is the first study to assess the total risk to California’s coastal communities by looking not just at sea level rise on its own, but at sea level rise combined with a major storm.



Spam has given way to spear phishing, cryptojacking remains popular, and credential spraying is on the rise.

The time it takes to detect the average cyberattack has shortened, but cyberattackers are now using more subtle techniques to evade improved defenses, a new study of real incident response engagements shows.

Victim organizations detected attacks in 14 days on average last year, down from 26 days in 2017. Yet attackers seem to be adapting to evade the greater vigilance: spam, while up slightly in 2018, continues to account for a far smaller share of e-mail volume than in any other year of the past decade, and techniques such as hard-to-detect cryptojacking and low-volume credential spraying are becoming more popular, according to Trustwave’s newly published Global Security Report.

Other stealth tactics—such as code obfuscation and "living off the land," where attackers use system tools for their malicious aims—are also coming into greater use, showing that attackers are changing their strategies to avoid detection, says Karl Sigler, threat intelligence manager at Trustwave's SpiderLabs. 



(TNS) — Teenagers and adults lined up in the Jerome High School gym, ready to receive medication, while police stood guard outside.

The exercise was part of a four-day simulation, organized by the South Central Public Health District and Jerome County Office of Emergency Management, to prepare for a potential anthrax or other bioterrorism attack. The exercise coincided with similar exercises in Idaho’s six other public health districts.

The South Central Public Health District holds large-scale simulations every few years, district director Melody Bowyer said, and smaller exercises annually.

“One of our very important missions for public health is to protect and prepare the community for a real health threat,” such as a disease outbreak, natural disaster or bioterrorism attack, Bowyer said.



For 74 minutes, traffic destined for Google and Cloudflare services was routed through Russia and into the largest system of censorship in the world, China's Great Firewall.

On November 12, 2018, a small ISP in Nigeria made a mistake while updating its network infrastructure that highlights a critical flaw in the fabric of the Internet. The mistake effectively brought down Google — one of the largest tech companies in the world — for 74 minutes.

To understand what happened, we need to cover the basics of how Internet routing works. When I type, for example, HypotheticalDomain.com into my browser and hit enter, my computer creates a web request and sends it to HypotheticalDomain.com's servers. These servers likely reside in a different state or country than I do. Therefore, my Internet service provider (ISP) must determine how to route my web browser's request to the server across the Internet. To maintain their routing tables, ISPs and Internet backbone companies use a protocol called Border Gateway Protocol (BGP).
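To make this concrete, here is a minimal Python sketch of the longest-prefix-match rule BGP routers apply when advertisements overlap. The prefixes and AS numbers below are illustrative placeholders (AS64512 is a private-use AS number), not the actual routes involved in the November 2018 incident:

```python
import ipaddress

# A toy BGP-style routing table: advertised prefixes mapped to their
# origin AS. These entries are invented for illustration only.
ROUTES = {
    "8.8.8.0/24": "AS15169",   # the legitimate, broader advertisement
    "8.8.8.0/25": "AS64512",   # a leaked, more-specific advertisement
}

def select_route(dest_ip: str) -> str:
    """Pick the origin for a destination via longest-prefix match."""
    addr = ipaddress.ip_address(dest_ip)
    best = None
    for prefix, origin in ROUTES.items():
        net = ipaddress.ip_network(prefix)
        # A route is a candidate if it covers the address; among
        # candidates, the longest (most specific) prefix wins.
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, origin)
    return best[1] if best else "no route"

print(select_route("8.8.8.8"))    # AS64512 (the leaked /25 wins)
print(select_route("8.8.8.200"))  # AS15169 (only the /24 matches)
```

Because the more specific /25 wins wherever it applies, a single mistaken advertisement can silently redirect traffic that the broader, legitimate /24 would otherwise carry.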



(TNS) — Rather than let FEMA trailers sit empty at the Bay County Fairgrounds group site and the staging area in Marianna, Panama City, Fla., is asking to be given the opportunity to put people in them.

City Manager Mark McQueen said the city is negotiating with the Federal Emergency Management Agency to try to acquire the surplus trailers. As of last week, there were more than 50 empty trailers at the fairgrounds campsite, according to FEMA reports, in addition to the ones that were staged in Marianna and never rolled out for use.

"Those have gone unclaimed because FEMA has been unable to make contact with those survivors," McQueen said at the recent City Commission meeting. "Knowing that there are some already established in our group sites and that there are another 70 up in Marianna that are not yet placed, we are striving to get those donated to the city."

The hope, according to McQueen, is to get 100 trailers that city officials can offer as interim housing to people who have fallen through the cracks.



Most companies that underinvest in business continuity can give you a reason why they do so, but those reasons are almost always ill-founded. In today’s post, we’ll look at the most common rationales organizations give for skimping on BC—and show you the reality behind those same topics.

In working as a business continuity consultant, I’ve had the opportunity to become familiar with companies that come from across the spectrum in terms of the level of their BC planning. This includes many organizations with stellar programs and also many that do not fully implement their BC plan or have no BC program at all.

The companies that skimp on BC are almost always very articulate in explaining why they think it’s not worthwhile for them to develop a robust BCM program. However, the reasons they give are almost always based on false assumptions and incomplete information.



(TNS) - In Congress, battles are raging over disaster relief spending. Who should get the help? Puerto Rico, still seeking emergency reconstruction money in the wake of Hurricane Maria in 2017 (and yes, Puerto Rico is part of the United States and just as deserving of help as, say, North Carolina)? How about Hawaii, where volcanic eruptions have seen molten lava destroy homes, roads and other infrastructure? Nebraska and Iowa, which were inundated by some of the worst flooding in their history? California, trying to rebuild from the most widespread and deadly wildfires the state has ever seen? Or the Florida Panhandle and parts of Georgia, where homes and farms were wiped out by the violent Hurricane Michael last year?

All those disasters and more — they are a signature national wound of the 21st century, a growing roster of attacks by natural forces that are unprecedented in their power and frequency. The object of current congressional fisticuffs is a $13 billion disaster aid package that tries to address many of those violent and devastating acts of nature. And it's not nearly enough to repair what's been broken, let alone do what's needed to prepare for a future that's likely filled with more such fire, wind and water.

Government at every level should have seen it coming, two or three decades ago. That's when we first became aware that climate change had begun, with warmer air and water temperatures and changing weather patterns that were producing more and bigger storms, and droughts where the land was once verdant. As The Washington Post reported this week, taxpayer spending on federal disaster relief funds is almost 10 times greater than it was three decades ago — and that's adjusted for inflation.



Today's application programming interfaces are no longer simple or front-facing, creating new risks for both security and DevOps.

All APIs are different inside, even if they're using similar frameworks and architectures, such as REST. Under whatever architectural "roof," the data protocols are always different — even when the structure is the same.

You've likely heard of specific protocol formats, such as REST, JSON, XML, and gRPC. These are actually data formatting and transportation languages that act as APIs' spokes. Inside those formats is a lot of variation. These formatting languages are less "language" and more like airplanes that carry ticketed passengers that move through airports to get where they need to be. The languages passengers speak and their individual cultural details are highly different.

From a security perspective, the protocol itself does nothing. To be effective, security needs to translate the language and intention of each person coming through, not just let the passengers navigate freely.
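The airport metaphor can be sketched in a few lines: the same hypothetical payload serialized as JSON and as XML carries identical structure but very different wire syntax, which is why inspection tooling must parse each format on its own terms rather than trusting the transport. The record and field names here are invented for illustration:

```python
import json
import xml.etree.ElementTree as ET

# One hypothetical API payload, carried by two different formats.
record = {"user": "alice", "action": "transfer", "amount": "250"}

# The same data as JSON...
as_json = json.dumps(record)

# ...and as XML: same structure, entirely different syntax on the wire.
root = ET.Element("request")
for key, value in record.items():
    ET.SubElement(root, key).text = value
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)
print(as_xml)
```

A security layer that only recognizes one of these "languages" would wave the other straight through, like an airport screening only one airline's passengers.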



Thursday, 25 April 2019 14:11

5 Security Challenges to API Protection

(TNS) — With a lot of hard work by the Shoalwater Bay Tribe, a vertical tsunami evacuation tower near Tokeland should be ready for “the big one” by the end of October 2020.

Shoalwater Bay emergency management director Lee Shipman said none of it would have been possible without a core group of driven individuals, particularly previous emergency managers like Dave Nelson and George Crawford.

“We wouldn’t have gotten the (grant) application done without their expertise,” said Shipman. “We are all passionate; we’re kind of like a tsunami evacuation tower gang.”

Nelson and Crawford were instrumental in forming the tribe’s emergency management plans. There are two tsunami warning sirens on the reservation; the one on the north end is named George, after Crawford; the one at the south end — off Blackberry Lane, next to where the evacuation tower will stand — is named Dave, after Nelson.



The Committee on Foreign Investment in the United States (CFIUS) recently forced the Chinese owner of dating app Grindr to divest its ownership interest, citing national security concerns. Fox Rothschild’s Nevena Simidjiyska explains what the decision means for companies who carry personal data going forward.

A new law has expanded the oversight powers of the Committee on Foreign Investment in the United States (CFIUS), and businesses are quickly learning that the interagency committee won’t hesitate to block a deal or force the divestment of a prior acquisition, particularly one involving sensitive customer data or “critical technologies” in industries ranging from semiconductors to social media.

Within the past two years, CFIUS blocked the acquisition of U.S. money transfer company MoneyGram International Inc., as well as a deal in which Chinese investors aimed to acquire mobile marketing firm AppLovin.



Rising to the cyber challenge

Our third Hiscox Cyber Readiness Report provides you with an up-to-the-minute picture of the cyber readiness of organisations, as well as a blueprint for best practice in the fight to counter the ever-evolving cyber threat.

Barely a week goes by without news of a major cyber incident being reported, and the stakes have never been higher. Data theft has become commonplace; the scale of ransom demands has risen steadily; and cumulatively the environment in which businesses must operate is increasingly hostile. The cyber threat has become the unavoidable cost of doing business today.

This is our third Hiscox Cyber Readiness Report and, for the first time, a significant majority of firms surveyed said they experienced one or more cyber attacks in the last 12 months. Both the cost and frequency of attacks have increased markedly compared with a year ago, and where hackers formerly focused mainly on larger companies, small- and medium-sized firms are now equally vulnerable.



Wednesday, 24 April 2019 14:20

The Hiscox Cyber Readiness Report 2019

(TNS) - A warming Earth may add slightly more muscle to heat-hungry hurricanes, but also slash the number that form by 25 percent by the end of the century as drier air dominates the middle levels of the atmosphere.

According to a presentation given this week at the National Hurricane Conference in New Orleans, climate change is expected to intensify storms by about 3 percent, or a few miles per hour, by the year 2100.

Global warming likely added 1 percent to Hurricane Michael's Cat 5 power, or 1 to 2 mph, said Chris Landsea, tropical analysis forecast branch chief at the National Hurricane Center.

"That is a fairly small increase and most of the computer guidance by global warming models say maybe we could see 3 percent stronger by the end of the century," said Landsea, who spoke during a session on hurricane history. "That's really not very much."



Stopping malware the first time is an ideal that has remained tantalizingly out of reach. But automation, artificial intelligence, and deep learning are poised to change that.

The collective efforts of hackers have fundamentally changed the cyber defense game. Today, adversarial automation is being used to create and launch new attacks at such a rate and volume that every strain of malware must now be considered a zero day and every attack considered an advanced persistent threat.

That's not hyperbole. According to research by AV-Test, more than 121.6 million new malware samples were discovered in 2017. That is more than 333,000 new samples each day, more than 230 new samples each minute, nearly four new malware samples every second.
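Those per-day, per-minute, and per-second figures follow directly from the 121.6 million annual total, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the AV-Test figures quoted above.
samples_2017 = 121_600_000

per_day = samples_2017 / 365
per_minute = per_day / (24 * 60)
per_second = samples_2017 / (365 * 24 * 60 * 60)

print(f"{per_day:,.0f} new samples per day")        # ~333,151
print(f"{per_minute:,.0f} new samples per minute")  # ~231
print(f"{per_second:.2f} new samples per second")   # ~3.86
```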



Wednesday, 24 April 2019 14:16

When Every Attack Is a Zero Day

The NYDFS cybersecurity requirements, first enacted in 2017, are now fully in place and helping to address glaring shortcomings in data security. OneSpan’s Michael Magrath provides a quick recap of the fourth and final phase of mandates to help organizations ensure they’re up to speed.

New York’s reputation as the “financial capital of the world” is legendary. The New York State Department of Financial Services (NYDFS) regulates approximately 1,500 financial institutions and banks, as well as over 1,400 insurance companies, and the overwhelming majority of financial institutions conducting business in the U.S. fall under NYDFS regulation – including international organizations operating in New York.

The NYDFS Cybersecurity Requirements for Financial Services Companies (23 NYCRR 500), first enacted in 2017, are now fully in place, and all banks and financial services companies operating in the state must secure their assets and customer accounts against cyberattacks in compliance with its mandates.

The regulation requires financial institutions to implement specific policies and procedures to better protect user data and to implement effective third-party risk management programs with specific requirements – both digital and physical.



Even more are knowingly connecting to unsecure networks and sharing confidential information through collaboration platforms, according to Symphony Communication Services.

An alarming percentage of workers are consciously avoiding IT guidelines for security, according to a new report from Symphony Communication Services.

The report, released this morning, is based on a survey of 1,569 respondents from the US and UK who use collaboration tools at work. It found that 24% of those surveyed are aware of IT security guidelines yet are not following them. Another 27% knowingly connect to an unsecure network. And 25% share confidential information through collaboration platforms, including Skype, Slack, and Microsoft Teams.  

While the numbers may at first appear alarming, there's another way to look at them, says Frank Dickson, a research vice president at IDC who covers security.

"What I see is a large percentage of workers who view security as an impediment," Dickson says. "When security gets in the way of workers getting their jobs done, people will go around security. Companies need to provide better tools so people can be more effective."



(TNS) - After the apocalyptic Camp Fire reduced most of Paradise to ashes last November, a clear pattern emerged.

Fifty-one percent of the 350 houses built after 2008 escaped damage, according to an analysis by McClatchy. Yet only 18 percent of the 12,100 houses built before 2008 did.

What made the difference? Building codes.

The homes with the highest survival rate appear to have benefited from “a landmark 2008 building code designed for California’s fire-prone regions – requiring fire-resistant roofs, siding and other safeguards,” according to a story by The Sacramento Bee’s Dale Kasler and Phillip Reese.

When it comes to defending California’s homes against the threat of wildfires, regulation is protection. The fire-safe building code, known as the 7A code, worked as intended. Homes constructed in compliance with the 2008 standards were built to survive.
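As a rough sanity check, the survival percentages above translate into approximate house counts (approximations only, derived from the rounded percentages McClatchy reported):

```python
# Approximate house counts behind the Camp Fire survival percentages.
post_2008_total, post_2008_rate = 350, 0.51
pre_2008_total, pre_2008_rate = 12_100, 0.18

post_survived = post_2008_total * post_2008_rate  # roughly 178 houses
pre_survived = pre_2008_total * pre_2008_rate     # roughly 2,178 houses

print(f"Post-2008 survivors: ~{post_survived:.0f} of {post_2008_total}")
print(f"Pre-2008 survivors:  ~{pre_survived:.0f} of {pre_2008_total:,}")
```

In absolute terms far more pre-2008 houses survived, simply because there were so many more of them, but the survival rate of the newer, code-compliant stock was nearly three times higher.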



Grounded Boeing Angers A Whole Value Chain

Boeing’s having a tough run. The self-proclaimed world’s largest aerospace company is under “intense scrutiny” after two crashes involving its 737 MAX jets, with governments around the world grounding planes, massively affecting travel and airline operations. Boeing finds itself in the center of a terrible storm of angry consumers, buyers, and regulators.

Not The First Time . . . But The Worst Time

This isn’t the first time Boeing planes have crashed — but PR-wise, it’s the worst. What’s different serves as a caution for all leaders, regardless of industry. The zeitgeist has changed: no company is immune to the demands of empowered customers, not even B2B companies like Boeing. In Boeing’s case, the empowered customers are not just airlines but also the flying public. B2B companies never really had to worry about the volatile fury of public scrutiny. In an industry’s value chain, they played safely in the background, behind their B2C buyers. In this case, aircraft manufacturers historically didn’t interact with passengers after a crash but instead worked with regulators. A US presidential tweet hurled the issue into the public realm, a virtual court whose norms disregard protocol.



Don't let social media become the go-to platform for cybercriminals looking to steal sensitive corporate information or cause huge reputational damage.

Social media has become the No. 1 marketing tool for businesses, with 82% of organizations now using social media as a key communication and promotional tactic. It has become the window to a business, enabling companies to build a following, engage with clients and consumers, and share news and updates in a cost-effective way.

While social media can be a great tool, there are also a number of associated security threats. Just by having a presence on the platforms, organizations of all sizes put themselves at risk.



Sometimes problems result when the IT department does its own recovery planning, and then BC comes along and conducts an analysis that shows IT’s plans to be inadequate. In today’s post, we’ll look at why this gap in recovery strategies is dangerous and what you, as a business continuity professional, can do to narrow it.


The lack of alignment on key recovery objectives between IT and the business continuity team can lead to catastrophic impacts to customer service, operations, shareholder value, and other areas in the event of a critical disruption.

However, this is an area where the IT department deserves a good amount of sympathy and understanding from the BC team.

The problem starts when the IT team sets about working on its own to develop recovery plans for the organization’s systems and applications. Often they are told to do this by management, and they typically do the work in a silo, with minimal cooperation from other departments.

In devising its recovery plans, the IT department is usually flying blind because it has only a limited view of the larger needs of the organization.



Sixty-four percent of global security decision makers recognize that improving their threat intelligence capabilities is a high or critical priority. Nevertheless, companies across many industries fail to develop a strategy for achieving this. Among the many reasons why organizations struggle to develop a threat intelligence capability, two stand out: Developing a mature threat intelligence program is expensive, and it’s difficult to determine viable protections without a cohesive message of what works effectively. Fortunately, the digital risk protection (DRP) market provides a solution to the threat intelligence problem for both enterprises and small-to-medium businesses (SMBs) alike.

Digital risk protection services substantially improve an organization’s ability to mitigate risk by providing the organization with actionable and relevant intelligence. By simulating an outsider’s perspective of an organization’s digital presence, security professionals working for the organization can better determine which of their assets are most at risk and develop solutions to better protect those assets. Additionally, DRP services can be utilized to protect a company’s reputation by scouring the web for instances of data fraud, breaches, phishing attempts, and more.



Monday, 22 April 2019 16:40

Understanding The Evolving DRP Market

Compliance has yet to adopt a proper management system to substantiate the critical role they play. SEI’s Kevin Byrne discusses how, rather than continuing to raise compliance issues as they occur, CCOs should graduate to consistent, ongoing management-level reporting.

Compliance programs today are at an interesting crossroads. In 2004, the SEC adopted rule 206(4)-7, requiring all registered investment companies and investment advisers to adopt and implement written policies and procedures reasonably designed to prevent violation of the federal securities laws. Firms learned they had to review those policies and procedures annually for their adequacy and the effectiveness of their implementation and to designate a chief compliance officer (CCO) to administer the policies and procedures. Thus, the compliance program as we know it today was born.

Firms hired CCOs and tasked them with creating programs to protect investors and comply with federal securities laws. CCOs built their programs with the tools of the time – principally Microsoft Office – and while there is more experience to draw from, they largely continue to manage their programs the same way today. Policies and procedures are maintained in MS Word. Risk assessments are maintained in Excel. Communications are stored in Outlook. Documentation is maintained on shared drives or in SharePoint.



Recent studies show that before automation can reduce the burden on understaffed cybersecurity teams, those teams first need enough automation skills to run the tools.

Cybersecurity organizations face a chicken-and-egg conundrum when it comes to automation and the security skills gap. Automated systems stand to reduce many of the burdens weighing on understaffed security teams that struggle to recruit enough skilled workers. But at the same time, security teams find that a lack of automation expertise keeps them from getting the most out of cybersecurity automation. 

A new study out this week from Ponemon Institute on behalf of DomainTools shows that most organizations today are placing bets on security automation. Approximately 79% of respondents either use automation currently or plan to do so in the near-term future.

For many, automation investments are justified to management as a way to beat back the effects of the cybersecurity skills gap, which some industry pundits say has created a 3-million-person shortfall in the industry. Close to half of the respondents to Ponemon's study report that the inability to properly staff skilled security personnel has increased their organizations' investments in cybersecurity automation. 



Monday, 22 April 2019 16:38

The Cybersecurity Automation Paradox

Some folks see trees when they look up at clouds. For others, clouds may take the form of a rabbit. But when IT professionals stare at clouds, they can’t help but picture a hosted private cloud with micro-segmentation. And for good reason.

What IT professionals see when they look at clouds

An increasing number of organizations are moving to the cloud for its obvious benefits. But along with this transition comes a greater need for more advanced cloud security measures. Micro-segmentation is one of these measures.

Unlike traditional security defense strategies like firewalls and edge devices that protect the flow of north-south data by focusing on the perimeter, micro-segmentation focuses on the inside, isolating individual workloads to protect traffic that’s traveling east-west within a data center. So even if a bad actor manages to get past your perimeter security measures, micro-segmentation will prevent the attack from spreading.
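The default-deny, per-workload model described above can be illustrated with a minimal Python sketch. The workload names, ports, and policy table below are hypothetical, and a real implementation would live in the hypervisor, SDN controller, or host firewall rather than application code:

```python
# Minimal illustration of micro-segmentation: each workload gets an explicit
# allow-list for east-west (workload-to-workload) traffic, and anything not
# explicitly allowed is denied. A compromised workload therefore cannot
# reach arbitrary peers inside the data center.

# Hypothetical policy table: source workload -> set of (destination, port) pairs.
POLICY = {
    "web-frontend": {("app-server", 8443)},
    "app-server": {("db-primary", 5432)},
    "db-primary": set(),  # the database initiates no east-west connections
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny check for traffic between two internal workloads."""
    return (dst, port) in POLICY.get(src, set())

print(is_allowed("web-frontend", "app-server", 8443))  # True: allowed hop
# A compromised frontend trying to reach the database directly is blocked,
# even though it is "inside" the perimeter:
print(is_allowed("web-frontend", "db-primary", 5432))  # False
```

The key design point is that the lookup defaults to an empty set: a workload with no policy entry can reach nothing, which is what confines an attacker who gets past the perimeter.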

Failing to adapt security to meet the growing needs of increasingly complex IT environments can be catastrophic.

With cloud security top of mind for IT professionals, it’s no wonder they’re seeing it everywhere they look.


Recently, the department for Digital, Culture, Media & Sports in the United Kingdom released the Cyber Security Breaches Survey 2019.

The survey discusses statistics for cyberattacks, exposure to cyber risks, the awareness and attitudes of companies around cyber risk, and approaches to cybersecurity. Here are the four takeaways from the survey (all statistics included in this briefing are part of the survey).



Charlie Maclean Bristol discusses whether you should consider likelihood when conducting a risk assessment as part of the business continuity process. Do you need to know how likely it is that a threat will become an actuality; or is knowledge of the impact of the threat enough?

Business continuity has always had a slightly uneasy relationship with risk management. In the 2010 and 2013 BCI Good Practice Guidelines (GPGs) we looked at threat assessments, whereas in the more recent 2018 GPG, we cover a threat and risk assessment. This issue of conducting a threat assessment instead of a risk assessment was driven by a certain character in business continuity circles who was very anti-risk assessment, and hence pushed the idea of threat assessment in the two earlier GPGs.

Nowadays, risk assessment is coming of age and it seems to be everywhere. You need a risk assessment for climbing up a ladder and you also need one for running a massive multinational organization.

This article was inspired by a talk given by Tony Thornton, ARM Manager for ADNOC Refining, which I heard at The BCI UAE Forum in February. During his talk on risk assessment, he argued that there is no point in looking at likelihood when you are doing a business continuity risk assessment, and that a 3x3 or even a 5x5 scale is meaningless in terms of likelihood. His point was that if there is a possibility something could happen, that is enough; how likely it is to happen doesn’t really matter. Impact, he said, was worth looking at, along with differentiating between high, medium and low impacts.
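The difference between the two approaches is easy to show concretely. In this Python sketch the threats and their 1-5 scores are entirely made up; it simply contrasts the classic likelihood-times-impact ranking with an impact-only ranking:

```python
# Contrast a classic 5x5 matrix (likelihood x impact) with the impact-only
# view argued for above. Threats and scores are illustrative only.

threats = [
    # (name, likelihood 1-5, impact 1-5)
    ("Data center fire", 1, 5),
    ("Key supplier failure", 3, 4),
    ("Printer outage", 5, 1),
]

# Likelihood x impact: a rare catastrophe scores the same as a frequent nuisance.
matrix_ranking = sorted(threats, key=lambda t: t[1] * t[2], reverse=True)

# Impact only: anything plausible is kept, ranked purely by consequence.
impact_ranking = sorted(threats, key=lambda t: t[2], reverse=True)

print([t[0] for t in matrix_ranking])
print([t[0] for t in impact_ranking])
```

Note how the data center fire (score 1 x 5 = 5) ties with the printer outage (5 x 1 = 5) in the matrix view, while the impact-only view puts it first: this is exactly the distortion the impact-only camp objects to.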



'Sea Turtle' group has compromised at least 40 national security organizations in 13 countries so far, Cisco Talos says

A sophisticated state-sponsored hacking group is intercepting and redirecting Web and email traffic of targeted organizations in over a dozen countries in a brazen DNS hijacking campaign that has heightened fears over vulnerabilities in the Internet's core infrastructure.

Since 2017, the threat group has compromised at least 40 organizations in 13 countries concentrated in the Middle East and North Africa, researchers from Cisco Talos said Wednesday.

In each case, the attackers gained access to, and changed DNS (Domain Name System) records of, the victim organizations so their Internet traffic was routed through attacker-controlled servers. From there, it was inspected and manipulated before being sent to the legitimate destination.  
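One simple defensive control against this class of attack is to compare the DNS records observed for your own domains against a known-good baseline and alert on drift. The Python sketch below uses invented domains and RFC 5737 example addresses, and a real monitor would resolve records from several network vantage points rather than take a pre-built dictionary:

```python
# Detect DNS drift: compare observed A records for monitored hosts against
# a known-good baseline. Hosts and IPs are illustrative; in practice the
# "observed" dict would be populated by live lookups from multiple resolvers.

BASELINE = {
    "mail.example.com": {"203.0.113.10"},
    "vpn.example.com": {"203.0.113.20"},
}

def detect_drift(observed: dict) -> list:
    """Return alert messages for any host whose records left the baseline."""
    alerts = []
    for host, expected in BASELINE.items():
        seen = observed.get(host, set())
        if seen != expected:
            alerts.append(f"{host}: expected {sorted(expected)}, saw {sorted(seen)}")
    return alerts

# Simulated observation in which one record was redirected:
observed = {
    "mail.example.com": {"198.51.100.99"},  # attacker-controlled server
    "vpn.example.com": {"203.0.113.20"},
}
for alert in detect_drift(observed):
    print(alert)
```

Locking registrar accounts and enabling DNSSEC address the root cause; record monitoring like this only shortens the window during which redirected traffic goes unnoticed.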



Steve Blow explains that while businesses must remain consistently focussed on digital transformation in order to not fall to the back of the pack, digital transformation efforts could be futile if businesses don’t address and improve their IT resilience.

The market as we know it has been changing dramatically over the last decade, with each digital development outpacing the other at every turn in the track. Companies that are too stuck in their ways are being overtaken by contemporary companies, unencumbered by legacy and real estate, which are in line with the latest developments in IT.

This said, almost every single business must remain consistently focussed on digital transformation in order to keep up with developments; taking on new digital initiatives to drive efficiencies, create new experiences, and ultimately, beat the competition. According to recent research (1), 90 percent of businesses see data protection as important or critical for digital transformation projects. However, the same research revealed that the proper technological provisions are not yet in place for these businesses to deliver on the data protection assurance their transformations demand.

It has become increasingly clear that having the right foundations early on in any digital journey is a critical factor in the success of transformation initiatives. So, building data protection within a robustly resilient IT infrastructure will be of paramount importance for businesses. Not only will this be critical for businesses to succeed day-to-day, but also to ensure complete transformation, modernization and cohesion. From my experience, there are three recommendations that could be key to help businesses achieve this:



I occasionally find people mapping their SOC capabilities to the ATT&CK framework by checking off specific techniques that they have shown they are able to detect with the intent of measuring coverage within their SOC. In this blog post, I hope to clarify why this strategy may be misleading.

There Are No Bad Actions, Only Bad Behavior

It’s almost impossible to have a high-confidence indictment of a process based on a single behavior. Hypothetically, if there were such a thing as a purely malicious operation, the system would not have been designed with this capability, or it would have been patched out. While there are certainly exceptions (things you would absolutely want to know if they happen in your infrastructure), it’s important to understand ATT&CK techniques as the building blocks of a cyberattack and that they are not malicious in and of themselves.
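This is why a per-technique checklist overstates readiness. The Python sketch below makes the point with real ATT&CK technique IDs but an invented detection set and attack chain: a SOC can "cover" a majority of techniques and still miss an intrusion whose chain includes one undetected link:

```python
# Why "percent of techniques covered" can mislead: single techniques are
# rarely malicious on their own, so per-technique coverage says little about
# whether a chained attack would be flagged. Technique IDs are real ATT&CK
# IDs; the SOC's detections and the attack chain are invented for illustration.

detected_techniques = {"T1059", "T1021", "T1105"}        # what the SOC can see
all_techniques = {"T1059", "T1021", "T1105", "T1003", "T1055"}

naive_coverage = len(detected_techniques) / len(all_techniques)
print(f"naive technique coverage: {naive_coverage:.0%}")

# A realistic intrusion chains several techniques; missing one critical link
# (here, credential dumping, T1003) can mean the chain is never pieced together.
attack_chain = ["T1059", "T1003", "T1021"]
chain_fully_visible = all(t in detected_techniques for t in attack_chain)
print(f"full attack chain visible: {chain_fully_visible}")
```

Here the checklist reports 60% coverage, yet the full chain is not visible, which is the gap between counting techniques and detecting behavior.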



Executive coach and strategic advisor Amii Barnard-Bahn provides guidance on how executives can prepare for a board appointment: Start by following the 10 steps outlined here.

A lifelong diversity advocate, I testified in multiple legislative committees on the successful passage of California’s SB826, the first law in the U.S. requiring corporate boards to include women. This legislation was designed to create more access for diverse and qualified candidates for public boards. “More access” is important because the role of the board has become critical to the long-term health of a company and the protection of its shareholders and employees. Creating a larger pool of seasoned professionals to guide and govern our corporate institutions is paramount in a time of Tesla, Papa John’s, Theranos and CBS debacles.

A board search can take many years, so it’s never too early to evaluate and cultivate the skills and network you need to establish yourself as a viable candidate.



Wall Street loves a digital business. These technology-driven innovators, which put customer acquisition, retention, and experience at the center, have a different way of looking at the world. They are rewarded with growth and investment.

And it’s not just digital natives. Digitally advanced incumbents, firms such as Accenture, Capital One, Microsoft, and Philips, also see the world through a technology opportunity lens. They are also rewarded.

What do digitally advanced companies look like? How are they different from companies just starting their digital transformation? To find out, we analyzed the digital maturity of 793 enterprises in North America and Europe. We found digitally advanced firms in every industry, from retail and consumer products to manufacturing and financial services.



Archived data great for training and planning

By GLEN DENNY, Baron Services, Inc.

Historical weather conditions can be used for a variety of purposes, including simulation exercises for staff training; proactive emergency weather planning; and proving (or disproving) hazardous conditions for insurance claims. Baron Historical Weather Data, an optional collection of archived weather data for Baron Threat Net, lets users extract and view up to 8 years of archived radar, hail and tornado detection, and flooding data. Depending upon the user’s needs, the data can be configured with a window of either 30 or 365 days of historical access. Other available options for historical data have disadvantages, including difficulty in collecting the data, inability to display data or point query a static image, and issues with using the data to make a meteorological analysis.

Using data for simulation exercises for staff training

Historical weather data is a great tool to use for conducting realistic severe weather simulations during drills and training exercises. For example, using historical lightning information may assist in training school personnel on what conditions look like when it is time to enact their lightning safety plan.

Reenactments of severe weather and lightning events are beneficial for school staff to understand how and when actions should have been taken and what to do the next time a similar weather event happens. It takes time to move people to safety at sporting events and stadiums. Examining historical events helps decision makers formulate better plans for safer execution in live weather events.

Post-event analysis for training and better decision making is key to keeping people safe. A stadium filled with fans for a major sporting event with severe weather and lightning can be extremely deadly. Running a post-event exercise with school staff can be extremely beneficial to building plans that keep everyone safe for future events.

Historical data key to proactive emergency planning

School personnel can use historical data as part of advance proactive planning that would allow personnel to take precautionary measures. For example, if an event in the past year caused an issue, like flooding of an athletic field or facility, officials can look back to that day in the archive at the Baron Threat Net total accumulation product, and then compare that forecast precipitation accumulation from the Baron weather model to see if the upcoming weather is of comparable scale to the event that caused the issue. Similarly, users could look at historical road condition data and compare it to the road conditions forecast.

The data can also be used for making the difficult call to cancel school. The forecast road weather lets officials look at problem areas 24 hours before the weather happens. The historical road weather helps school and transportation officials examine problem areas after the event and make contingency plans based on forecast and actual conditions.
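The proactive-planning comparison described above (checking a forecast precipitation total against the accumulation recorded during a past event that caused flooding) can be sketched in a few lines of Python. All thresholds and values here are illustrative, not Baron product behavior:

```python
# Sketch of the planning workflow described above: flag an upcoming forecast
# whose precipitation accumulation nears or exceeds the total recorded during
# a past event that flooded a field or facility. Values are illustrative.

PAST_FLOOD_ACCUMULATION_IN = 4.2  # rainfall (inches) on the day the field flooded

def flood_risk(forecast_accumulation_in: float, caution_ratio: float = 0.8) -> str:
    """Compare a forecast total against the historical problem event."""
    if forecast_accumulation_in >= PAST_FLOOD_ACCUMULATION_IN:
        return "high"       # comparable to or worse than the past event
    if forecast_accumulation_in >= caution_ratio * PAST_FLOOD_ACCUMULATION_IN:
        return "elevated"   # within 80% of the past event: take precautions
    return "low"

print(flood_risk(4.5))  # high
print(flood_risk(3.5))  # elevated
print(flood_risk(1.0))  # low
```

The same pattern applies to the road-condition comparison: archive the conditions from a past problem day, then score each new forecast against that baseline before deciding on cancellations.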

Insurance claims process improved with use of historical data

Should a weather-related accident occur, viewing the historical conditions can be useful in supporting accurate claim validation for insurance and funding purposes. In addition, if an insurance claim needs to be made for damage to school property, school personnel can use the lightning, hail path, damaging wind path, or critical weather indicators to see precisely where and when the damage was likely to have occurred.

Similarly, if a claim is made against a school system due to a person falling on an icy sidewalk on school property, temperature from the Baron current conditions product and road condition data may be of assistance in verifying the claim.

Underneath the hood

The optional Baron Historical Weather Data addition to the standard Baron Threat Net subscription includes a wide variety of data products, including high-resolution radar, standard radar, infrared satellite, damaging wind, road conditions, and hail path, as well as 24-hour rainfall accumulation, current weather, and current threats.

With up to 8 years of data available, users can select a specific product and review up to 72 hours of data at one time, or review a specific time for a specific date. Information is available for any given area in the U.S., and historical products can be layered, for example, hail swath and radar data. Packages are available in 7-day, 30-day, or 1-year increments.

Other available options for historical weather data are lacking

There are several ways school and campus safety officials can gain access to historical data, but many have disadvantages, including difficulty in collecting the data, inability to display the data, and the inability to point query a static image. Also, officials may not have the knowledge needed to use the data for making a meteorological analysis. In some cases, including road conditions, there is no available archived data source.

For instance, radar data may be obtained from the National Centers for Environmental Information (NCEI), but the process is not straightforward, making it time-consuming. Users may have radar data but lack the knowledge base to interpret it. By contrast, with Baron Threat Net Historical Data, radar imagery can be displayed with critical weather indicators overlaid, taking the guesswork out of the equation.

There is no straightforward path to obtaining historical weather conditions for specific school districts. The local office of the National Weather Service may be of some help but their sources are limited. By contrast, Baron historical data brings together many sources of weather and lightning data for post-event analysis and validation. Baron Threat Net is the only online tool in the public safety space with a collection of live observations, forecast tools, and historical data access.

Flooding in large swaths of the Midwest has already claimed the lives of at least three people and has caused $3 billion in damages.

A combination of melting snow and rainstorms led to breaches in levees along the Missouri River and other bodies of water.

According to FEMA flood map data, 40 million people in the continental U.S. are at risk for a 100-year flood event; that’s three times more than previously estimated. Additionally, the amount of property in harm’s way is twice the current estimate.

With communities underwater and many more at risk, officials are asking themselves how response plans can be improved.



(TNS) — As the waves of runners left Hopkinton to run the 2019 Boston Marathon, a roomful of public safety officials watched their computers, monitored video screens and radios, and talked to one another as a rolling list of incidents appeared on a screen on a wall.

A runner fell and fractured an arm. A drone was detected. An unattended package was found and cleared.

On marathon day, as 30,000 runners and countless spectators take to the streets, the Massachusetts Emergency Management Agency runs a "unified coordination center" in MEMA's underground bunker in Framingham.

The goal, said MEMA spokesman Christopher Besse, is to bring together local, state and federal public safety officials in one place so they can coordinate their responses to whatever the day brings — from weather to terrorism.



Don Boxley looks at some important questions that need to be asked to ensure that business continuity and data security are considered during digital transformation projects.

Whole industries are transforming with the help of IT and workforce digitization, and as competition heats up across virtually every industry, the pressure to digitally transform escalates in step. 

Whether you are in IT or are a business professional who is responsible for digitization, business continuity and/or security strategies, you need to be able to think on your feet about your new priorities in a world of ongoing change.

While there are numerous variables that organizations must consider as they move towards digital transformation, perhaps the most essential considerations are business continuity and data security. With more business than ever being conducted in the cloud and more third-party partners needing digital access to that data, failing to keep business continuity and data security at the top of your business’s priority list could instantly become a fatal mistake – after all, they are often inextricably linked. 

In today’s cloud environments, one of the most important data security challenges relates to strategic partner data access and sharing. Your organization’s security safeguards are only as strong as the weakest link in your vendor and partner ecosystem. In other words, you may be inadvertently putting sensitive company data at risk every time you conduct digital business with a vendor that is granted access to your system.



(TNS) — As approximately 1,700 households and businesses remained without electricity in Mower and Freeborn counties Saturday morning, Minnesota Gov. Tim Walz said many in the state are likely unaware of the devastation caused by this week’s storm.

“If you have power at your house, the snow is going to be melted probably by tomorrow or whatever, so it appears like nothing really happened, but this was pretty catastrophic,” he said, noting power outages had wide-ranging impacts from personal medical needs to large-scale farming operations.

Walz was in Austin Saturday morning to meet with Minnesota National Guard members and sheriffs from Mower and Freeborn counties, as well as those tasked with returning power to the businesses and homes throughout the area.



The answer can lead to a scalable enterprise security solution for years to come

In early December 2018, several major corporate breaches were made public. As the news was shared and discussed around my company, one of my colleagues jokingly asked, "I wonder if I can gift some of this free credit monitoring to my future grandchildren." It was a telling comment.

Today, every organization – regardless of industry, size, or level of sophistication – faces one common challenge: security. Breaches grab headlines, and their effects extend well beyond the initial disclosure and clean-up. A breach can do lasting reputational harm to a business, and with the enactment of regulations such as GDPR, can have significant financial consequences.

But as many organizations have learned, there is no silver bullet – no firewall that will stop threats. They are pervasive, they can just as easily come from the inside as they can from outside, and unlike your security team, who must cover every nook and cranny of the attack surface, a malicious actor only has to find one vulnerability to exploit.



(TNS) - A repeat of the most powerful earthquake in San Francisco’s history would knock out phone communications, leave swaths of the city in the dark, cut off water to neighborhoods and kill up to 7,800 people, according to state and federal projections.

If a quake like that were to strike along the San Andreas Fault today, building damage would eclipse $98 billion and tens of thousands of residents would become homeless.

Thursday marks the anniversary of the 1906 quake, a 7.9-magnitude event that turned San Francisco streets into waves, flattening much of the skyline and igniting fires that raged for almost four days. The quake ruptured 296 miles of fault line — from Cape Mendocino to San Juan Bautista.

Since 1906, the fault has remained locked from Point Arena through the Peninsula. The 1989 Loma Prieta earthquake hit 50 miles south of San Francisco, on a remote segment of the San Andreas Fault, and ruptured only 25 miles.



With regulations domestically and abroad changing constantly, the risk of noncompliance is ever present. Fenergo’s Rachel Woolley discusses how this will impact functions beyond compliance.

Regulatory activity has been ramping up recently, and it doesn’t look to be slowing down in 2019. In an era of hyper-regulatory scrutiny, financial institutions find themselves in a constant battle between impending regulatory deadlines and the risk of noncompliance. Add to this the complexity of cross-jurisdictional regulations that vary across different countries even within the same region. The Asia-Pacific region is a prime example; with over 40 regulators in the same region, each with slightly varied rules and requirements, adhering to cross-border regulatory requirements is extremely challenging.

But it’s not just the compliance teams who are affected. As the challenge of regulatory change management increases, divisions and activities beyond the compliance function may potentially be impacted, including data management, operations, client-facing teams, client experience and time-to-revenue. The process needs to be managed and measured methodically in order to manage wide-ranging regulatory change in line with available budgets and resources.



In a previous article, we discussed how personal insurance policies address communicable diseases and epidemics. In this article, we’ll look at how commercial insurance policies handle these issues.

Between 1918 and 1919 the so-called Spanish influenza pandemic* killed at least 50 million people worldwide and infected about 500 million people – or about 1/3 of the entire world’s population at the time.

While the Spanish flu’s destructiveness has been an outlier over the last several decades, epidemics and pandemics on a smaller scale do still happen (avian flu, swine flu, Ebola, etc.).

How could disease outbreaks impact commercial property and general liability insurance?





Consider the following: Baseball is the only team sport where the defense has control of the ball. The side currently in offense does not handle the ball as they would in any other sport. A player does not score in baseball by bringing the ball to the finish line or passing it through a goal, but by trying to beat the ball to a goal. This sets it apart from games like basketball, soccer, football, and many others, and adds an interesting complexity. For me, the internal mechanics of baseball are the most interesting, similar to the work that a business does to set up a Business Continuity Plan.

Situational awareness in the game relies on a player reading signs and signals from other players, both on their own team and on the opposing team. A player might need to decipher the intent of the opposing player on 2nd base, and then relay back to the batter what the next pitch may be. A player might also need to relay signs on what the next pitch is from the middle infielders to the outfielders, so that they know where to position themselves or in what direction to take their first step.

My passion for baseball comes from a love of the strategy involved. The same type of strategy that makes a chess game so intriguing to watch also makes baseball continually exciting. You should know your opponent, their tendencies, strengths, and weaknesses, and then capitalize on that knowledge with the proper timing, all while continually learning from mistakes and honing your strategies for the next opponent.



Friday, 12 April 2019 15:13

Playing Hardball

The last couple weeks have been an exciting time for the customer data platform (CDP) category. At long last, major marketing technology vendors formally declared their intentions to get serious about managing and activating data for marketing. For the CDP community, the entry of marketing clouds is a big deal, carrying equal parts excitement over the implied market validation and concern (nay, fear?) as competition intensifies.

The concept of CDPs originated about three years ago in response to the very real challenges of collecting and leveraging data for marketing. Since then, a broad range of vendors offering an equally broad variety of solutions claimed the label and have been marketing themselves as such. At their core, CDPs promise to unify corporate and customer data and make it accessible to marketers for analytics and campaigns. But Forrester believes that standalone CDPs aren’t equipped to solve this problem for enterprise B2C marketers. For these reasons, Forrester welcomes continued progress from CDPs as well as new solutions entering the market. The question about CDPs was never whether there’s a business problem to address but rather who would ultimately solve it.

It was nearly inevitable that large martech vendors would join the fray. Forrester made the call in 2018 that marketing clouds would enter this market and have solutions in place by the end of 2019. In our October 2018 report, we stated that: “Ultimately, CDPs’ greatest competitive threat is the marketing clouds, such as Adobe, Oracle, and Salesforce, that are already ingrained in most enterprise martech stacks and are investing in capabilities far more sophisticated than CDPs’.”



As consumers increasingly rely on cashless spending, the PCI SSC has identified a process to secure cardholder data. Acceptto CEO Shahrokh Shahidzadeh discusses why it’s time to replace password-based credentials.

According to a recent study by the PEW Research Center, consumers in the U.S. are relying less on physical currency. The report found that “roughly three in 10 U.S. adults (29 percent) say they make no purchases using cash during a typical week.” In addition, a generational trend shows that “Americans under the age of 50 are more likely than those ages 50 and older to say they don’t really worry much about having cash on hand.”

As American consumers increasingly rely on cashless spending, it is no wonder that the Payment Card Industry Data Security Standard (PCI DSS) arose to develop a set of requirements applying to companies of any size that accept credit card payments.



The Federation of European Risk Management Associations (FERMA) has expressed concern about the ISO/IEC 27102 ‘Information Security Management Guidelines For Cyber Insurance’ standard, which is currently under development.

FERMA says that the proposed standard is “Premature and inappropriate in its current form given the fast pace of technological development” and also states that “No other insurance product is the subject of an ISO standard”.

FERMA members, the UK risk management association Airmic, French association AMRAE and Belgian association BELRIM, and insurance industry representatives have also expressed concerns about the project.

FERMA has urged other member associations to help ensure their national standardization body is aware of the concerns of the whole insurance market.



Donna Boehme, the “Lion of Compliance” shares that true compliance SME is the first and most foundational element of a strong compliance program. An experienced CCO with true compliance SME earned in the field and in the profession understands on many levels the multidisciplinary nature of the work, the optimal way to educate and facilitate collaboration and what can realistically be achieved through each phase or cycle of a strong, effective compliance program that supports and is driven by a culture of ethical leadership.

In 2016, two researchers from the University of Michigan’s Stephen M. Ross School of Business published a report on their study “Why Don’t General Counsels Stop Corporate Crime?” The simple answer: “Because it’s not their job!”

This is precisely why true compliance subject matter expertise, earned in the field and with the profession successfully designing and managing compliance programs (“Compliance SME”), is the first and foundational element of the modern Compliance 2.0 model. The modern 2.0 model recognizes compliance as an independent profession, distinct from Legal, with the subject matter expertise (SME) needed by senior management to lead and advise its approach to the modern and existential issues of compliance, ethics, culture and reputation.

The modern Compliance 2.0 model takes the place of the failed Compliance 1.0 model, which was based on a naïve and misinformed assumption by boards and CEOs that compliance should be structured as a captive subset of legal and thus driven solely by the legal mandate and mindset. That flawed model failed to accommodate the stark reality that compliance and ethics was emerging as a completely separate profession and SME from legal, with very different mandates, core competencies, practices and skill sets. At the same time, advocates for the in-house bar were sensing an opportunity to respond to the chaotic legal services market and claim the new role of Chief Compliance Officer for the legal field. Yet in their zeal to claim the CCO role as nothing more than a “legal lieutenant” and a “process integrator,” these voices drove compliance into a flawed model destined to fail because it lacked true compliance SME and the positioning to drive its distinct, independent mandate.



Friday, 12 April 2019 15:05

What is Compliance SME?

Although a crisis communications manual might look like a complex contraption to the untrained eye, what it needs to accomplish can be condensed to two things: putting processes in place for communicating with stakeholders during a crisis, and organizing the internal processes that allow that communication to happen smoothly.

The manual, to give just one example, should ensure both that journalists receive the information they need to report on the crisis, and that the person communicating with those journalists has the resources in place to provide timely and accurate information.

Over the years, I have audited a great many manuals, and I have found that the same mistakes are made again and again. Here is a look at what most often goes wrong.



Recent events in the news as well as trends in my own work have reminded me of how important it is for business continuity professionals to help protect their organizations against the impact of cyberattacks. In today’s post, I’ll list some ways BC teams can help their companies fend off this rising threat.


The news this week contained stories reporting a serious ransomware attack against the City of Albany, New York. Ransomware is a kind of computer extortion in which hackers encrypt an organization’s data and refuse to provide the decryption key unless a ransom is paid.

One of the most concerning aspects of the story was that hackers reportedly obtained the personal banking data of some city employees and used it to raid those employees’ bank accounts.

This reminded me of how important it is for BC professionals to help their organizations fend off and recover from cyberattacks.



It’s a scenario no business wants to think about: an active shooter or violent offender on the premises. From 2000 to 2017, there were 250 active shooter incidents in the United States. These horrific acts of violence took place across industries and geographic locations. According to the Bureau of Labor Statistics, 2016 alone saw 500 workplace homicides in the U.S.

We now face an unfortunate reality: no company is exempt from the potential threat of an act of violence occurring at their organization. As a result, businesses must be proactive in order to protect their people, minimize injury and loss of life, and safeguard their establishment.

Preparation, effectively communicating with staff, and maintaining protocol are critical measures every business should take when dealing with workplace violence. There’s no such thing as “too safe” when it comes to protecting human life.



An organization’s weakest link is most often human, not technological. Moss Adams’ Francis Tam explains why, when it comes to cybersecurity, anomalies in daily logins, user accounts, and infrastructure changes should be among an organization’s main concerns.

In today’s technology-driven world, information can be a company’s most valuable – yet most vulnerable – asset. Data breaches have grown more frequent and more costly in recent years, with high-profile cases like the 2017 Equifax breach making headlines. It’s crucial, then, for companies to make proper use of data monitoring and cybersecurity audits to avoid breaches and stolen information.

Breaches cost companies an average of $3.9 million, and an alarming 54 percent of companies will experience a cyberattack at some point. Full IT assessments can be time-consuming and costly, so companies often skip this crucial process or don’t make it a priority, leaving them vulnerable. Implementing data monitoring for your company’s cybersecurity can help prevent major breaches.
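The kind of login monitoring described above can be surprisingly simple to prototype. The sketch below is a minimal, hypothetical illustration (the user names, baseline hours, and event data are invented for the example, not drawn from any vendor tool): it flags logins that fall outside a user’s usual hours, or that come from accounts with no recorded baseline at all.

```python
from datetime import datetime

# Hypothetical baseline: the login hours typically observed per user.
BASELINE_HOURS = {
    "alice": range(8, 18),  # normally logs in during business hours
    "bob": range(9, 17),
}

def flag_anomalous_logins(events, baseline=BASELINE_HOURS):
    """Return login events outside a user's usual hours, plus any
    events from accounts that have no baseline on record."""
    anomalies = []
    for user, timestamp in events:
        hour = datetime.fromisoformat(timestamp).hour
        usual = baseline.get(user)
        if usual is None or hour not in usual:
            anomalies.append((user, timestamp))
    return anomalies

events = [
    ("alice", "2019-04-11T09:30:00"),    # within baseline hours
    ("alice", "2019-04-11T03:12:00"),    # 3 a.m. login -> flagged
    ("mallory", "2019-04-11T10:00:00"),  # unknown account -> flagged
]
print(flag_anomalous_logins(events))
```

A production system would of course learn baselines from historical logs and weigh many more signals (source IP, device, geography), but even a static rule like this catches the off-hours and unknown-account anomalies the article highlights.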



(TNS) — The National Park Service has awarded the territory a little over $10 million to assist in the restoration of hurricane-damaged historic sites.

The supplemental funding was granted to the Virgin Islands State Historic Preservation Office from the Historic Preservation Fund, which will allow for the repair of hurricane-damaged National Register-listed or eligible sites throughout the territory, according to a news release from the V.I. Department of Planning and Natural Resources.

The announcement comes 18 months after hurricanes Irma and Maria tore through the territory, causing serious damage to a number of historic sites and monuments.

All of the Virgin Islands’ historic resources were included on the 31st annual list of “America’s 11 Most Endangered Historic Places,” compiled by the National Trust for Historic Preservation in 2018.



Integrating cloud environments is anything but easy. Evaluating the security risks in doing so must be a starting component of an overall M&A strategy.

Mergers and acquisitions are an essential part of the enterprise business landscape. These deals foster innovation and create some of the biggest and most successful companies in the world.

But one of the largest potential pitfalls in any M&A transaction is mishandling IT integration and creating or failing to mitigate security risk. In the era of cloud computing, the cost of inheriting poor security can be massive and quickly destroy any value the transaction poses.

In addition, a common misconception is that if the two merging companies both operate in the cloud, integration will be easier. In reality it is harder, due to the added complexity: no two cloud environments are identical, and the rate of change is far faster than in traditional IT. Post-acquisition IT integration used to take five to ten years, but these days, given the nonstop pace of innovation, organizations don't have that luxury.



Thursday, 11 April 2019 15:00

Merging Companies, Merging Clouds

(TNS) - It’s tornado season in Oklahoma; that time every year when my neighbor shuffles the beloved baby portraits of her kids from the mantel to the storm shelter.

For businesses, the seasonal fear, of course, is that they’ll lose their most precious asset: data.

Oklahoma City-based Midcon Recovery Solutions has a precaution for that: two unmarked, double steel-reinforced, windowless concrete buildings in Oklahoma City and one in Broken Arrow, in which the company hosts the data of hundreds of organizations — from energy and telecommunications companies to insurance agencies and banks. For $100 a month up to several thousand dollars, companies rent spaces ranging from a 1¾-inch rack slot to 200 square feet.


