Industry Hot News

Applicants to three private colleges this week discovered just how steep the price of admission can run.

Hackers breached the system that stores applicant information for Oberlin College in Ohio, Grinnell College in Iowa and Hamilton College in New York and emailed applicants, offering them the chance to buy and view their admissions file. For a fee, the sender promised access to confidential information in the applicant’s file, including comments from admissions officers and a tentative decision. The emails demanded thousands of dollars in ransom from prospective students for personal information the hackers claimed to have stolen.

All three schools use Slate, a popular software system, to manage applicants’ information. Slate is used by more than 900 colleges and universities worldwide. The company is not aware of other affected colleges, said Alexander Clark, chief executive of Technolutions, Slate’s parent company. Officials from the affected schools declined to comment on the scope of the data breach.

...

https://www.washingtonpost.com/education/2019/03/08/hackers-breach-admissions-files-three-private-colleges/

There are a lot of ways that business continuity programs go off track. Here are some of the main ones, together with a list of what successful programs do to keep rolling along.

We are seeing an increase in the number of companies that recognize that a business continuity program is a must-have.

This is great, but it’s still the case that too many programs are floundering.

In our experience working as BC consultants for firms across a range of sizes and industries, we see the same problems come up again and again.

If you’re just starting a program, do yourself a favor: Try not to make any of the mistakes listed below.

...

https://bcmmetrics.com/bcm-offices-fail/

How do you create an insights-driven organization? One way is leadership. And we’d like to hear about yours.

Today, half of the respondents in Forrester’s Business Technographics® survey data report that their organizations have a chief data officer (CDO). A similar number report having a chief analytics officer (CAO). Many firms without these insights leaders report plans to appoint one in the near future. Advocates for data and analytics now have permanent voices at the table.

To better understand these leadership roles, Forrester fielded its inaugural survey on CDO/CAOs in the summer of 2017. Now we’re eager to learn how the mandates, responsibilities, and influence of data and analytics leaders and their teams have evolved in the past 18 months. Time for a new survey!

Take Forrester’s Data And Analytics Leadership Survey

Are you responsible for data and analytics initiatives at your firm? If so, we need your expertise and insights! Forrester is looking to understand:

  • Which factors drive the appointment of data and analytics leaders, as well as the creation of a dedicated team?
  • Which roles are part of a data and analytics function? How is the team organized?
  • What challenges do data and analytics functions encounter?
  • What is the working relationship between data and analytics teams and other departments?
  • What data and analytics use case, strategy, technology, people, and process support do these teams offer? How does the team prioritize data and analytics requests from stakeholders?
  • Which data providers do teams turn to for external data?
  • Which strategies do teams use to improve data and analytics literacy within the company?

Please complete our 20-minute (anonymous) Data and Analytics Leadership Survey. The results will fuel an update to the Forrester report, “Insights-Driven Businesses Appoint Data Leadership,” as well as other reports on the “data economy.”

For other research on data and analytics leadership, please also take a look at “Strategic CDOs Accelerate Insights-To-Action” and “Data Leaders Weave An Insights-Driven Corporate Fabric.”

As a thank-you, you’ll receive a courtesy copy of the initial report of the survey’s key findings.

Thanks in advance for your participation.

https://go.forrester.com/blogs/data-and-analytics-leaders-we-need-you/

Friday, 08 March 2019 16:27

Data And Analytics Leaders, We Need You!

As more enterprise work takes place on mobile devices, more companies are feeling insecure about the security of their mobile fleet, according to a new Verizon report.

SAN FRANCISCO – As more enterprise work takes place on mobile devices, more companies are feeling insecure about the security of their mobile fleet. That's one of the big takeaways from Verizon's "Mobile Security Index 2019," released here this week.

The report is based on responses from 671 enterprise IT professionals from a wide range of business sizes across a broad array of industries. The picture they paint in their responses is one where mobile security is a major concern that's getting worse, not better, as time goes on.

More than two-thirds (68%) say the risks of mobile devices have grown in the past year, with 83% now saying their organizations are at risk from mobile threats. Those risks have changed in the year since the first edition of the "Mobile Security Index."

"In the first iteration, organizations were more nervous about losing access to the device itself" through theft or accidental loss, said Matthew Montgomery, a director with responsibilities for business operations, sales, and marketing at Verizon, in an interview at the RSA Conference. This time, they are worried about " ... having a breach or losing access to the data, because the device became very centric to businesses in the way they work."

...

https://www.darkreading.com/mobile/companies-having-trouble-translating-security-to-mobile-devices/d/d-id/1334111

Comforte AG’s Jonathan Deveaux stresses that while compliance with the GDPR is a worthy goal, adhering to the regulation doesn’t necessarily mean your organization is safe. Consider both compliance and security a journey, not a destination.

The European General Data Protection Regulation (GDPR) came into effect on May 25, 2018, ushering in a new era of data compliance regulation across the world. GDPR-like regulations have emerged in Brazil, Australia, Japan and South Korea, as well as U.S. states such as New York and California.

The GDPR was introduced to protect EU individuals’ personal information, collected by organizations, by regulating how that data can be collected and used. Even though it is European law, the scope of the legislation affects organizations around the world.

Despite a two-year phase-in period (May 24, 2016 to May 25, 2018), many organizations around the globe remain noncompliant. A GDPR pulse survey by PwC in November 2017 revealed only 28 percent of U.S. companies had begun preparing for GDPR, and only 10 percent responded saying they were compliant.

...

https://www.corporatecomplianceinsights.com/is-data-compliance-equal-to-data-security/

Social engineering scams continued to be the preferred attack vector last year, but attackers were forced to adapt and change.

The growing sophistication of tools and techniques for protecting people against phishing scams is forcing attackers to adapt and evolve their methods.

A Microsoft analysis of data collected from users of its products and services between January 2018 and the end of December showed phishing was the top attack vector for yet another year. The proportion of inbound emails containing phishing messages surged 250% between the beginning and end of 2018. Phishing emails were used to distribute a wide variety of malware, including zero-day payloads.

However, the growing use of anti-phishing controls and advances in enterprise detection, investigation, and response capabilities are forcing attackers to change their strategies as well, Microsoft said.

For one thing, phishing attacks are becoming increasingly polymorphic. Rather than using a single URL, IP address, or domain to send phishing emails, attackers last year began using varied infrastructure to launch attacks, making them harder to filter out and stop.

...

https://www.darkreading.com/attacks-breaches/phishing-attacks-evolve-as-detection-and-response-capabilities-improve-/d/d-id/1334109

ERP Maestro’s CEO Jody Paterson discusses cybersecurity risk disclosure and compliance and how executives are being held more personally accountable for nondisclosure as outlined by the SEC.

Companies face a multitude of risks and threats. Reporting them to stakeholders and investors is a requirement, and serious consequences may ensue for a failure to do so – for the company and, increasingly, for business leaders. It’s a liability no company wants and a personal disaster no executive wishes to encounter. To prevent the latter, executives need to understand how they may be held accountable.

For public companies, disclosing business risks has long been mandatory on periodic reports, such as annual reports, 10-K forms, quarterly 10-Qs and 8-K current incident reports as needed.

As technology has become not only the primary offering of many companies, but also the norm for business operations and financial management, external risks, such as security breaches and cyberattacks, have been included in the Securities and Exchange Commission’s (SEC) risk reporting requirements.

...

https://www.corporatecomplianceinsights.com/executive-accountability-for-internal-cybersecurity-disclosure/

Over and over, clients tell us they just don’t get enough funding for the kind of privacy programs they want to create. In fact, many privacy budgets shrank in 2019, after firms were forced to spend more than they expected on GDPR compliance in 2018. But what if we told you that customer-centric privacy programs could actually drive a positive ROI — would your CFO find the budget then? We’re betting so.

That’s why we recently built a Total Economic Impact model on the ROI of privacy. We were convinced that there’s more to privacy investments than CYA, and we were right.

...

https://go.forrester.com/blogs/think-privacys-just-a-cost-center-think-again/

(TNS) — They started Alabama's way from Louisiana as soon as word went out about Sunday's deadly tornadoes in Lee County. It was the same when Hurricane Michael flattened Mexico Beach, Fla., last year. It's been the same since 2016. People were in trouble, and they went on the road.

They're called the Cajun Navy, but they're not one organization. The Louisiana Secretary of State's website lists 11 different organizations with "Cajun Navy" in their name. The best known, perhaps, is Cajun Navy 2016. It is named for the year it was founded by two friends in Baton Rouge after they had volunteered in the catastrophic flooding there.

"We're the ones that have been to the White House multiple times," Vice President Billy Brinegar said Tuesday. "We do things the right way. We try to get involved with the local EOCs (Emergency Operations Centers) or fire departments or whoever, just coordinate with them so they know we're on the scene and we work together."

...

http://www.govtech.com/em/disaster/Cajun-Navy-Came-Quickly-After-Tornadoes-but-What-Is-It-Exactly.html

Why Do Bots Fail to Scale Across the Enterprise?

The interest in RPA has skyrocketed, and company leaders are challenging their teams to find out more about the technology and its associated benefits.  With the increased interest in RPA, we have seen a significant uptick in teams testing the RPA waters by starting Bot development and implementation pilots.  What we have also found is that teams are struggling to move beyond the pilots due to some fundamental errors made during RPA Program Setup and Execution and Bot Development and Implementation.

RPA Program Setup and Execution

What we find is that there is a lack of an RPA enterprise strategy and foundation, and a lack of understanding about RPA, solution capabilities and where to focus efforts.

...

http://www.enaxisconsulting.com/blog/2019/03/06/why-do-bots-fail-to-scale-across-the-enterprise/

Whitefly has been exploiting DLL hijacking with considerable success against organizations since at least 2017, Symantec says.

Whitefly, a previously unknown threat group targeting organizations in Singapore, is the latest to demonstrate just how effective some long-standing attack techniques and tools continue to be for breaking into and maintaining persistence on enterprise networks.

In a report Wednesday, Symantec identified Whitefly as the group responsible for an attack on Singapore healthcare organization SingHealth last July that resulted in the theft of 1.5 million patient records. The attack is one of several that Whitefly has carried out in Singapore since at least 2017.

Whitefly's targets have included organizations in the telecommunications, healthcare, engineering, and media sectors. Most of the victims have been Singapore-based companies, but a handful of multinational firms with operations in the country have been affected as well.

...

https://www.darkreading.com/attacks-breaches/new-threat-group-using-old-technique-to-run-custom-malware/d/d-id/1334089

The failed Fyre Festival of 2017 serves as a cautionary tale to any who’d ignore warnings from trusted advisers and key stakeholders. Sandra Erez discusses how the Fyre Festival went so disastrously wrong – and the lesson compliance practitioners should take away.

The recent Netflix documentary “Fyre: The Greatest Party that Never Happened” revealed the 2017 fiasco to be a real “trip” – the kind that comes from bad LSD with lingering, long-term effects. Touted as a luxury music festival set on the balmy beaches of the Bahamas, this highly publicized would-be event tantalized millennials with the chance to live the elusive elite lifestyle for a weekend (and talk about it for the rest of their lives). Dangling ads of bikinied supermodels frolicking in the waves succeeded as the bait that would reel thousands of suckers in to this Titanic event – hook, line and sinker. Never mind that it all seemed to be too good to be true; everything is possible if you have the right app, the right hair, the right attitude and are in search of the perfect Instagram backdrop – real or not.

The Fyre Festival launch started off with a splash worthy of any jet ski – selling 95 percent of the costly tickets within 24 hours. Like moths to a flame, the target audience was enticed into the web spun with golden promises, thereby proving to founder Billy McFarland and his team that his idea was on fire. Now, totally pumped and egged on by their initial spectacular success, the staff and partners literally dug their heels (and unfortunately their heads) into the sand to get this show on the road.

...

https://www.corporatecomplianceinsights.com/liar-liar-pants-on-fyre/

Thursday, 07 March 2019 14:21

Liar, Liar, Pants on Fyre

Information travels more quickly than ever.

If a disaster occurs in your community, you will need to work quickly and decisively to ensure that the information that gets to the public is accurate, balanced and useful to the people who need it most. Good crisis communications is the result of a clear and well-developed media relations policy. If you want the headlines to reflect an accurate story, you will need to understand what drives them and how you can establish a beneficial and positive relationship with the press. 

Good Crisis Communication Starts Before the Crisis

A good crisis communications plan will ensure that your organization is prepared to get information out in a way that is helpful to all stakeholders. While you cannot anticipate every potential crisis, most well-constructed plans are flexible enough to address a range of needs.

Begin by considering what sorts of crises are most likely in your community. What will be the potential impact on the people and businesses within your community? For instance, a city in the Midwest can expect periodic severe snow storms. These may cause power outages and leave roads impassable for a period of time. Cities in the southeastern part of the US should be prepared for hurricanes in the warmer half of the year. Areas throughout the country should have plans for manmade disasters that include mass shooter events.

...

https://www.onsolve.com/blog/what-will-your-headline-be/

Wednesday, 06 March 2019 15:34

What will your headline be?

(TNS) — The city of Dayton, Ohio, received more than 12,000 phone calls during its nearly catastrophic water main break emergency that happened Feb. 13-15.

The cause of the break still isn’t known, since city crews still haven’t been able to inspect the line because of high river levels.

The city said it has been monitoring the river daily and would evaluate again Monday.

Though it wasn’t an especially long emergency, it was an intense time in Dayton.

In 72 hours, Dayton’s water dispatch center received 8,958 calls, or about 20 times the number it receives in a typical week at this time of year. Dispatch handled 393 calls in the last week of January and 463 in the first week of February.

“When this happened, our call centers were completely overwhelmed with phone calls,” said Dayton City Manager Shelley Dickstein.

...

http://www.govtech.com/em/disaster/Water-Outage-City-Receives-24K-Comments-12K-Calls-During-Emergency.html

The problem lies in the manner in which Word handles integer overflow errors in the OLE file format, Mimecast says.

The manner in which Microsoft Word handles integer overflow errors in the Object Linking and Embedding (OLE) file format has given attackers a way to sneak weaponized Word documents past enterprise sandboxes and other anti-malware controls.

Security vendor Mimecast, which discovered the issue, says its researchers have observed attackers taking advantage of the OLE error in recent months to hide exploits for an old bug in the Equation Editor component of Office that was disclosed and patched in 2017.

In one instance, an attacker dropped a new variant of a remote access backdoor called JACKSBOT on a vulnerable system by "chaining" or combining the Equation Editor exploit with the OLE file format error. 
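
The specifics of the OLE flaw are Mimecast's to detail, but the general class of bug is easy to illustrate. The hypothetical Python sketch below (a toy container format, not real OLE parsing and not Mimecast's finding) shows how a scanner that does 32-bit arithmetic on an attacker-controlled length field can be desynchronized from a more lenient application that still loads the embedded payload:

```python
# Illustrative sketch only -- a toy format, not OLE, and not Mimecast's finding.
# It shows the bug class: a scanner doing 32-bit math on an attacker-controlled
# length field can be tricked into skipping an object the target app still loads.
import struct

U32 = 0xFFFFFFFF

def build_container(payload: bytes) -> bytes:
    """Toy container: 4-byte little-endian 'declared length' + payload bytes.
    An attacker writes a bogus huge length instead of len(payload)."""
    bogus_length = 0xFFFFFFFE          # 4 + this wraps around in uint32 math
    return struct.pack("<I", bogus_length) + payload

def naive_scanner_sees_object(blob: bytes) -> bool:
    """Scanner using uint32 offset math: the wrapped end offset appears to sit
    before the object starts, so the object is treated as absent and not scanned."""
    declared = struct.unpack_from("<I", blob, 0)[0]
    end = (4 + declared) & U32          # integer overflow / wraparound
    return 4 < end <= len(blob)

def lenient_reader_extracts(blob: bytes) -> bytes:
    """An application that ignores the bad length and reads whatever follows
    the header -- so the payload still 'opens' for the victim."""
    return blob[4:]

if __name__ == "__main__":
    doc = build_container(b"EXPLOIT-PAYLOAD")
    print("scanner sees embedded object:", naive_scanner_sees_object(doc))   # False
    print("application still extracts:  ", lenient_reader_extracts(doc))     # b'EXPLOIT-PAYLOAD'
```

The defensive lesson is the usual one for untrusted length fields: validate them against the actual buffer size using width-safe arithmetic before deciding what to parse or skip.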

...

https://www.darkreading.com/attacks-breaches/word-bug-allows-attackers-to-sneak-exploits-past-anti-malware-defenses/d/d-id/1334070

When business continuity (BC) professionals hear that the Polar Vortex is collapsing, they aren’t simply worried about the inconvenience of cold temperatures — they are focused on the impact of severe weather to business operations and workforce safety.

Natural disasters and extreme weather resulted in approximately $160 billion worth of damage last year, and reinsurance company Munich RE forecasts this figure will be surpassed in 2019. Abnormal weather patterns — the type that can cause extended cold weather snaps as well as more frequent and intense winter storms — require that BC leaders properly plan for this new weather reality.

And it appears that organizations are acutely aware of the role workforce communications plays in winter weather response. In a survey conducted by research firm DRG last year, 47% of decision-makers said severe and extreme weather events are their leading concern when it comes to emergency communications and response — outpacing other events such as active shooters (23%), cybersecurity attacks (13%), IT outages (10%), and workplace violence (6%).

With extreme and severe winter weather raising the stakes for business continuity, it also raises the probability of mistakes: requiring that employees commute into work in unsafe conditions or failing to communicate with your workforce in a timely fashion can elevate human and business risk. Organizations can’t change the weather, but they can mitigate its impact through proper preparation and communication before, during and after adverse winter weather hits. This starts with eliminating six common winter weather mistakes.

...

https://www.missioncriticalmagazine.com/blogs/14-the-mission-critical-blog/post/92106-winter-weather-mistakes-that-can-disrupt-business-continuity

Last April, we outlined how the “Tech Titans” (Amazon, Google, and Microsoft) were poised to change the cybersecurity landscape by introducing a new model for enterprises to consume cybersecurity solutions. Security has long been delivered as siloed solutions located on-premises. These solutions were hard to buy, hard to use, and existed in silos. Security leaders were hampered by the technologies’ lack of connectedness, poor user interfaces, and difficulty of administration. Understaffed, stressed security teams struggled to balance the responsibilities of defending their enterprise while updating an ever-expanding toolset.

Cloud adoption by cybersecurity teams also lags behind other parts of the enterprise. Many of the security tools enterprises rely on are still deployed on-premises, even as more and more of IT shifts to the cloud. Running counter to other parts of the enterprise, most security teams incur the expense of pulling logs from cloud environments to then process and store them on-premises.

Security analytics platforms such as legacy security information management (SIM) systems struggled to keep pace with the increasing volume and variety of data they process. Unhappy users complained about the inability of their SIMs to scale and the volume of alerts they must investigate.

Enterprises struggling with the cost of data analysis and log storage turned to open source tools such as Elasticsearch, Logstash, and Kibana (ELK) or Hadoop to build their own on-premises data lakes. But then they were unable to glean useful insight from the data they had collected and realized that the expense of building and administering these “free” tools was just as great as the cost of commercial tools.

...

https://go.forrester.com/blogs/tech-titans-google-and-microsoft-are-transforming-cybersecurity/

There are nine enterprise risk management (ERM) activities that at least nine-in-ten of the North American chief risk officers (CROs) we surveyed said that they perform in one way or another over the course of a year. None of these activities is necessarily strategic, but a strategic CRO can put a strategic spin on any of them.

And the more times CROs are heard speaking strategically about their work, the more likely they will be invited to play a role in the future strategic activities of the firm.

In general, the way to make any of these activities more strategic is to shift orientation away from separating information by risk and toward presenting the information in the context of strategy. Easy to say, but the nuances of how to do that play out differently for each activity.

Let’s review these nine activities and see how the seemingly mundane can be strategic.

...

https://blog.willis.com/2019/02/how-chief-risk-officers-can-make-everyday-enterprise-risk-management-tasks-strategic/

Slack, the cloud-based set of collaborative tools for teams, is taking over, and changing the way we work for good. Here’s what co-founder Stewart Butterfield has to say about the workplace of the future

Haven’t you heard? Email is dead. At least, that’s what Stewart Butterfield would have you believe. Launched in 2014, his cloud-based ‘virtual assistant’ (which provides team collaboration tools) is doing away with the need for time-consuming and inefficient electronic communication – and changing the way we work altogether.

He might be on to something. Slack is one of the fastest-growing business applications in the last decade. According to its latest figures, there are now more than eight million daily active users across more than 500,000 organisations that use the platform. The company has more than three million paid users and 65 per cent of companies in the Fortune 100 are paid Slack users. More than 70,000 paid teams with thousands of active users connect in Slack channels across departments, borders and oceans.

So when Butterfield and his team share their opinions on the future of work, it’s worth paying attention. Here are five of their predictions.

...

https://www.regus.com/work-us/future-work-according-slack/

Monday, 04 March 2019 16:13

The Future of Work According to Slack

Charlie Maclean Bristol explains why developing a playbook for the main types of cyber attacks will help businesses respond effectively when an attack occurs. He also provides a checklist covering the areas that such a playbook should include.

When I first thought about cyber playbooks I envisaged the playbook helping senior management or the crisis team make a key decision in a cyber incident, such as, whether or not to unplug the organization from the internet and prevent any network traffic on the organization’s IT network. As this is a critical decision for the organization and the consequences of making the wrong decision are huge, this type of playbook would help the team understand, at short notice, what factors they should consider and the impact of the different decisions they could make.

I was running a cyber exercise a couple of weeks ago and suddenly thought that there was a need for another type of playbook, which is basically a plan for how to deal with different types of cyber attack. As we know, the more planning we do the better prepared we will be for managing an incident, and thinking through how we would respond throws up questions and issues which we can work to solve, without the cold sweat and pressure of the incident taking place.

Cyber response should be in two parts. Firstly, you need an incident management team to manage the consequences of the cyber-attack. This team is separate from the cyber incident response team, which deals with the technical response and concentrates on restoring the organization’s IT service. The organization’s incident management team can be the same as the crisis management team, as they are going to be dealing with the reputation and strategic impacts of the incident.

...

https://www.continuitycentral.com/index.php/news/technology/3784-what-should-a-cyber-incident-playbook-include

Oftentimes, responsibility for securing the cloud falls to IT instead of the security organization, researchers report.

Businesses are embracing the cloud at a rate that outpaces their ability to secure it. That's according to 60% of security experts surveyed for Firemon's first "State of Hybrid Cloud Security Survey," released this week.  

Researchers polled more than 400 information security professionals, from operations to C-level, about their approach to network security across hybrid cloud environments. They learned that not only are security pros worried – oftentimes they don't have jurisdiction over the cloud.

Most respondents say their businesses are already deployed in the cloud: Half have two or more different clouds deployed, while 40% are running in hybrid cloud environments. Nearly 25% have two or more different clouds in the proof-of-concept stage or are planning deployment within the next year.

...

https://www.darkreading.com/cloud/security-pros-agree-cloud-adoption-outpaces-security/d/d-id/1334013

Emergency Response? Crisis Management? Business Continuity? Disaster Recovery? How do you know which plan to use during an incident?

It’s often confusing which plan to activate, and who is in charge.

Each plan should clearly identify the scope and responsibilities for executing the plan and have distinct and disparate objectives. During the life-cycle of an incident, all of the plans may be activated – but often only some of them are.  Like many things “it depends”.

Let’s go into more detail on each of the plans and their purpose.

...

http://www.preparedex.com/which-plan-activate-during-incident/

With famous CEOs and big-name proponents of a shorter working week getting their voices heard, Ben Hammersley finds out whether more time out of the office – with the same amount of work to do – really can be achieved

On the face of it, it’s kind of a classic line for a billionaire who owns a tropical island paradise to say. The sort of statement that, when read on a rainy commute home from another 60-hour week would usually result in the newspaper being tossed aside. But, when Sir Richard Branson opined in a blog post that flexible working, with unlimited holiday time, is the way to achieve happiness and success at work, he wasn’t just talking about senior management. It was about everyone. Further still, according to CNBC, he’s recommending even longer weekends:

“Many people out there would love three-day or even four-day weekends,” he reportedly said. “Everyone would welcome more time to spend with their loved ones, more time to get fit and healthy and more time to explore the world.”

...

https://www.regus.com/work-us/three-day-work-week-really-work/

Over a billion people around the world have some form of disability. Empowerment and inclusiveness of this large section of the population are therefore essential for a sustainable society, and make up the theme of this year’s International Day of Persons with Disabilities. The Day also contributes to the goals outlined in the United Nations 2030 Agenda for Sustainable Development, which pledges to “leave no one behind”. Many of ISO’s International Standards are key tools to achieving these goals, and there are many more in the pipeline.

From signage in the street to the construction of buildings, ISO standards help manufacturers, service providers, designers and policy makers create products and services that meet the accessibility needs of every person. These include standards for assistive technology, mobility devices, inclusivity for aged persons and much more. In fact, the subject is so vast, we even have guidelines for standards developers to ensure they take accessibility issues into account when writing new standards.

Developed by ISO in collaboration with the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU), ISO/IEC Guide 71, Guide for addressing accessibility in standards, aims to help standards makers consider accessibility issues when developing or revising standards, especially if they have not been addressed before.

...

https://www.iso.org/news/ref2351.html

Summary FINRA is conducting a retrospective review of Rule 4370 (Business Continuity Plans and Emergency Contact Information), FINRA’s emergency preparedness rule, to assess its effectiveness and efficiency. This Notice outlines the general retrospective rule review process and seeks responses to several questions related to firms’ experiences with this specific rule.

...

http://www.finra.org/sites/default/files/notice_doc_file_ref/Regulatory-Notice-19-06.pdf

To effectively defend against today's risks and threats, organizations must examine their failings as well as their successes.

In life in general — and, of course, in security specifically — it is helpful to understand when I am the problem or when my organization is the problem. By that, I mean that it is important to discern when an approach to a problem is simply ineffective. When I understand that an approach doesn't work, I can try different things until I find the right solution. This is the definition of repetition.

Redundancy, on the other hand, is when I (or my organization) keeps trying the same approach and nothing changes. It makes no sense to expect different results without a different approach. This, of course, is the definition of redundancy. What can the difference between repetition and redundancy teach us about security? An awful lot.

...

https://www.darkreading.com/threat-intelligence/solving-security-repetition-or-redundancy--/a/d-id/1333983

(TNS) - This would be a first for California: state government buying insurance to protect itself against overspending its budget.

But before you start pelting the politicians and screaming fiscal irresponsibility, know that the budget-busting would be for fighting wildfires.

That puts it in an entirely different category from, say, controversial spending to help immigrants who are here illegally, or trying to register voters at the notoriously jammed DMV.

No sane person is going to gripe about overspending tax dollars to douse a deadly wildfire.

But it does amount to a sucker punch for state budgeters, who might be forced to grab money from other state programs to pay for the firefighting. Fortunately in recent years the robust California economy has been producing state revenue surpluses. So, little problem.

...

http://www.govtech.com/em/preparedness/Should-California-Insure-Against-Spending-too-Much-on-Fighting-Wildfires.html

The Watchlist, which contained the identities of government officials, politicians, and people of political interest, is used to identify risk when researching someone.

A data leak at Dow Jones exposed the financial firm's Watchlist database, which contains information on high-risk individuals and was left on a server sans password.

Watchlist is used by major global financial institutions to identify risk while researching individuals. It helps detect instances of crime, such as money laundering and illegal payments, by providing data on public figures. Watchlist has global coverage of senior political figures, national and international government sanction lists, people linked to or convicted of high-profile crime, and profile notes from Dow Jones citing federal agencies and law enforcement. 

The leak was discovered by security researcher Bob Diachenko, who found a copy of the Watchlist on a public Elasticsearch cluster. The database exposed 2.4 million records and was publicly available to anyone who knew where to find it – for example, with an Internet of Things (IoT) search engine, he explained in a blog post.

...

https://www.darkreading.com/cloud/dow-jones-leak-exposes-watchlist-database/d/d-id/1334006

(TNS) - One of the winter’s strongest storms brought flooding across Northern California’s wine country Wednesday, with no region hit harder than the town of Guerneville and the Russian River Valley, which has been inundated repeatedly over the decades.

Some 3,600 people in about two dozen communities near the river were evacuated Wednesday by the flooding, which prompted the Sonoma County Board of Supervisors to declare a local emergency. Authorities warned that those who chose to stay in their homes could be stuck there for days.

“We have waterfront property now,” said Dane Pitcher, 70, who watched from the third-story window of his bed and breakfast, the Raford Inn in Healdsburg, as rising water pooled to create a 100-acre lake in front of his property. “We’re marooned for all intents and purposes.”

The Russian River, which sat at about 10 feet Monday morning, rose an extraordinary 34 feet over two days, said Carolina Walbrun, meteorologist with the National Weather Service in the Bay Area. By Wednesday afternoon, the river had swollen to 44.3 feet — more than 12 feet above flood stage. One rain gauge near Guerneville reported receiving nearly 20.5 inches of rain in 48 hours by early Wednesday, turning the town into a Russian River island.

...

http://www.govtech.com/em/disaster/Heavy-Flooding-Turns-Sonoma-County-Towns-into-Islands.html

The Threat and Risk Assessment (TRA) is one aspect of business continuity that has come under criticism recently. In our opinion, this tool remains highly valuable, provided it is used correctly.

The complaints against the TRA are similar to those expressed about the Business Impact Analysis. People say it isn’t useful, that the information gathered tends to be of low quality, and that it’s too disruptive to the staff of other departments.

...

https://www.mha-it.com/2019/02/27/threat-and-risk-assessment/

(TNS) - Dozens of emergency responders rushed Tuesday around the Capitol Federal building in downtown Topeka during an exercise simulating an active assailant incident.

The training was organized by the Shawnee County, Kan., Department of Emergency Management and included several area agencies.

"We like to think when it happens here, versus if — that way we have that mindset and we're more prepared," said emergency management director Dusty Nichols.

The rescue task force concept stems from the 1999 Columbine High School shooting and other mass casualty events.

...

http://www.govtech.com/em/preparedness/Exercise-in-Downtown-Topeka-Prepares-Responders-for-Active-Assailant-Incident.html

Beware of These Risks to Build Resilience

Steve Durbin, Managing Director of the Information Security Forum (ISF), discusses some of the key risks to organizations today and provides guidance on how to steer clear of them while becoming more resilient.

Until recently, leading executives at organizations around the world received information and reports encouraging them to consider information and cybersecurity risk. Yet not all of them understood how to respond to those risks and the implications for their organizations. A thorough understanding of what happened (and why it is necessary to properly understand and respond to underlying risks) is needed by the C-suite, as well as all members of an organization’s board of directors in today’s global business climate. Without this understanding, risk analyses and resulting decisions may be flawed, leading organizations to take on greater risk than intended.

Cyberspace is an increasingly attractive hunting ground for criminals, activists and terrorists motivated to make money, get noticed, cause disruption or even bring down corporations and governments through online attacks. Over the past few years, we’ve seen cybercriminals demonstrating a higher degree of collaboration amongst themselves and a degree of technical competency that caught many large organizations unawares.

...

https://www.corporatecomplianceinsights.com/what-the-c-suite-needs-to-know-about-cybersecurity-and-compliance/

(TNS) - This weekend’s storm has meant long hours for emergency personnel as numerous stranded motorists were in need of rescuing.

According to Mower County, Minn., Sheriff Steve Sandvik, preliminary numbers indicate that over 150 vehicles were abandoned throughout Mower County during the storm.

Many of those vehicles contained people in need of rescue.

The severity of the storm became apparent after a deputy’s squad car and a snowplow sent to rescue a stranded woman and her grandchild both got stuck Saturday night six miles north of Austin. Sandvik had to call in a road grader to get them out.

...

http://www.govtech.com/em/disaster/Many-Helping-Hands-in-Storm-Rescue-Efforts.html

Six years ago, I noticed a pattern in the inquiry calls I was fielding from clients. At the time, many of them centered around things like BYOD, whether to take away local admin rights from PCs, and other decisions driven by escalating fears of security or compliance risks. If I was able to answer their questions in less than 30 minutes, it gave me an opportunity to ask a question or two of my own: “So you have responsibility for the productivity of 10,000 people, yes?” Their answer was usually some variation of “I guess you could say that.” To which I would then ask: “OK, tell me what you know about how your decisions will impact their motivation or willingness to engage.” After a few moments of uncomfortable silence, their answer was often “I don’t know.” An opportunity was born.

Fast-forward to today, and I’m proud to be sharing with you the results of six years’ worth of research to better understand what really drives employee experience (EX). Spoiler alert: It’s not what you think it is. Ask any group of managers to rank in order of importance the factors they think are most likely to create a positive employee experience. They will say things like recognition, pay-for-performance, important work, great colleagues, or flexibility. Of course these things are important, but they’re not the most important. Psychological research shows that the most important factor for employee experience is being able to make progress every day toward the work that they believe is most important. But when presented with this option, managers will consistently rank it dead last. Clearly, we have a gap.

...

https://go.forrester.com/blogs/the-employee-experience-index/

Thursday, 28 February 2019 15:06

The Employee Experience Index

In the cyber threat climate of the 21st century, sticking with DevOps is no longer an option

In 2016, about eight years following the birth of DevOps as the new software delivery paradigm, Hewlett Packard Enterprise released a survey of professionals working in this field. The goal of the report was to gauge application security sentiment, and it found nearly 100% of respondents agreed that DevOps offers opportunities to improve overall software security.

Something else that the HPE report revealed was a false sense of security among developers since only 20% of them actually conducted security testing during the DevOps process, and 17% admitted to not using any security strategies before the application delivery stage.

Another worrisome finding in the HPE report was that the ratio of security specialists to software developers in the DevOps world was 1:80. As can be expected, this low ratio had an impact among clients that rely on DevOps because security issues were detected during the configuration and monitoring stages, thereby calling into question the efficiency of DevOps as a methodology.

...

https://www.darkreading.com/cloud/embracing-devsecops-5-processes-to-improve-devops-security/a/d-id/1333947

When developing their business continuity plans, office managers, IT leads and risk teams now have a new weapon in their arsenal – flexible workspace

According to a recent global study by Regus, a staggering 73% of respondents claimed that flexible workspace solutions have helped mitigate risks that could threaten the flow of business operations.

As Joe Sullivan, Regus’ Managing Director of Workspace Recovery observes: “Flex space has become a preferred choice when companies establish or upgrade their business continuity plans.

“Today we no longer assume that all the bad stuff happens to someone else,” observes Sullivan. Indeed, according to the 2019 WEF Global Risks Report, “extreme weather events” were cited as the number one risk facing countries globally, followed closely by natural disasters, data fraud and cyber-attacks.

...

https://www.regus.com/work-us/risk-managers-suddenly-interested-flexible-working/

The UK Government has published a new document which highlights some of the expected impacts of a no-deal Brexit on businesses. It concludes that 'lack of preparation by businesses and individuals is likely to add to the disruption experienced in a no-deal scenario.'

Entitled 'Implications for business and trade of a no deal exit on 29 March 2019', the document summarises Government activity to prepare for no deal as a contingency plan, and provides an assessment of the implications of a no deal exit for trade and for businesses, given the preparations that have been made.

Some of the highlights from the document include:

...

https://www.continuitycentral.com/index.php/news/business-continuity-news/3779-uk-government-says-that-businesses-are-under-prepared-for-no-deal-brexit-impacts

Because many organizations tend to overlook or underestimate the threat, social media sites, including Facebook, Twitter, and Instagram, are a huge blind spot in enterprise defenses.

Social media platforms present far more than just a productivity drain for organizations.

New research from Bromium shows that Facebook, Twitter, Instagram, and other high-traffic social media sites have become massive centers for malware distribution and other kinds of criminal activity. Four of the top five websites currently hosting cryptocurrency mining tools are social media sites.

Bromium's study also finds one in five organizations have been infected with malware distributed via a social media platform, and more than 12% already have experienced a data breach as a result. Because many organizations tend to overlook or underestimate the threat, social media sites are a huge blind spot in enterprise defenses, the study found.

...

https://www.darkreading.com/vulnerabilities---threats/social-media-platforms-double-as-major-malware-distribution-centers/d/d-id/1333973

(TNS) - Aurora, Ill., police released audio of the 911 calls and emergency dispatch made during the Feb. 15 Aurora warehouse shooting at Henry Pratt Co. that left five employees dead.

Police communications detail the hour-long manhunt that injured six police officers, who were also identified by the department Monday.

The shooter — Gary Martin — began firing either during or shortly after a meeting where he was fired from the job he held for 15 years. He then retreated into the back of the 29,000-square-foot facility at 641 Archer Ave. and was eventually killed in a shootout with Aurora and Naperville police.

Five officers quickly responded to dispatch calls. One said “we are moving north through the warehouse. We haven’t heard anything yet,” when suddenly another officer screams that shots were fired outside in a bay area.

...

http://www.govtech.com/em/disaster/Aurora-Police-Release-Dispatch-911-Audio-From-Mass-Shooting-Do-not-Give-Him-a-Target.html

As more organizations move to the public cloud and to DevOps and DevSecOps processes, the open source alternative for host-based intrusion detection is finding new uses.

Used by more than 10,000 organizations around the world, OSSEC has provided an open source alternative for host-based intrusion detection for more than 10 years. From Fortune 10 enterprises to governments to small businesses, OSSEC has long been a standard part of the toolkit for both security and operations teams.

As more organizations move to the public cloud infrastructure and to DevOps and DevSecOps processes, OSSEC is finding new use cases and attracting new fans. Downloads of the project nearly quadrupled in 2018, ending the year at more than 500,000. Much of this new activity was driven by Amazon, Google, and Azure public cloud users.

While many security and operations engineers are familiar with OSSEC in the context of on-premise intrusion detection, this article will focus on the project's growing use and applicability to cloud and DevSecOps use cases for security and compliance.

...

https://www.darkreading.com/cloud/a-cloudy-future-for-ossec/a/d-id/1333927

Wednesday, 27 February 2019 14:43

A 'Cloudy' Future for OSSEC

At least 21 individuals died during the 2019 Polar Vortex—including two university students.

The University of Vermont and the University of Iowa both experienced deaths suspected to be due to exposure to sub-zero temperatures. These universities are no strangers to severe winter weather, but these extreme weather conditions are becoming more common, and campuses must prepare.

It’s impossible to reliably predict every emergency. But weather events are one crisis that can be anticipated, based on your region and common weather threats experienced. Universities and college campuses are also often in the unique position of coordinating with internal safety officials and campus police along with community safety officials. A weather preparedness plan puts processes in place to protect your students, faculty, and institution. By having a weather preparedness plan ready for deployment, your campus can react swiftly to threats—and substantially reduce the risk of injury or even death.

...

https://www.onsolve.com/blog/your-guide-for-creating-a-weather-preparedness-plan-for-your-campus/

Weather phenomena aren’t the only concern when considering an emergency plan.

OSHA defines workplace emergencies as “an unforeseen situation that threatens your employees, customers, or the public; disrupts or shuts down your operations; or causes physical or environmental damage” which can include:

  • Floods
  • Hurricanes
  • Tornadoes
  • Fires
  • Toxic gas releases
  • Chemical spills
  • Radiological accidents
  • Explosions
  • Civil disturbances
  • Workplace violence resulting in bodily harm and trauma

Keeping employees safe during a critical event is the top priority for any company, so consider these five steps to ensure trauma is kept at a minimum.

...

https://www.onsolve.com/blog/5-steps-to-ensuring-employee-safety-during-an-emergency/

The right to be forgotten is a fundamental aspect of both the GDPR and CCPA privacy laws; but its impact on personal information in data backups has yet to be tested. Bill Tolson explains the issue and provides some practical advice.

A great deal has been written about the GDPR and CCPA privacy laws, both of which include a ‘right to be forgotten’. The right to be forgotten is an idea that was put into practice in the European Union (EU) in May 2018 with the General Data Protection Regulation (GDPR).

The main trigger for this radical step came from the business practices of major Internet companies such as Google and Facebook (among others) around how they collect personal data, use it, and subsequently sell it to other companies for marketing and sales purposes. Additionally, as ‘fake news’ spread, those affected found it was almost impossible to get the Internet companies (including news publishers) to fix or remove the false data. Because of this, the GDPR and CCPA were established to ensure end-user rights to know what data is being collected on them, how it's being used, and if it's being sold and to whom. The right to be forgotten includes the right to have personal information (PI) fixed or removed, quickly.

There continues to be a debate about the practicality of establishing a right to be forgotten (which amounts to an international human right) due in part to the breadth of the regulations and the potential costs to implement. Additionally, there continues to be concern about its impact on the right to freedom of expression. However, most experts don’t foresee these new privacy rights disappearing, ever.
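
One mitigation that often comes up in the backup debate (an assumption here, not necessarily the advice in the full article) is to keep an append-only ledger of erasure requests and re-apply it whenever a backup is restored, so that "forgotten" records do not quietly reappear. A minimal Python sketch of the idea:

```python
# Minimal sketch of an 'erasure ledger' re-applied at restore time.
# The file name and record shape are hypothetical; adapt to your own data model.
import json
from pathlib import Path

LEDGER = Path("erasure_ledger.jsonl")   # append-only log of erasure requests

def record_erasure(subject_id: str) -> None:
    """Append a data subject's erasure request to the ledger."""
    with LEDGER.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"subject_id": subject_id}) + "\n")

def apply_ledger(restored_records: list) -> list:
    """Drop restored records belonging to subjects who asked to be forgotten."""
    if not LEDGER.exists():
        return restored_records
    erased = {
        json.loads(line)["subject_id"]
        for line in LEDGER.read_text(encoding="utf-8").splitlines()
        if line.strip()
    }
    return [r for r in restored_records if r.get("subject_id") not in erased]

if __name__ == "__main__":
    record_erasure("cust-1042")
    backup = [
        {"subject_id": "cust-1042", "email": "a@example.com"},
        {"subject_id": "cust-2001", "email": "b@example.com"},
    ]
    print(apply_ledger(backup))   # only cust-2001 survives the restore
```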

...

https://www.continuitycentral.com/index.php/news/business-continuity-news/3759-the-right-to-be-forgotten-versus-the-need-to-backup

(TNS) - Cambria County officials are making efforts to ensure that first responders can communicate effectively and consistently with each other when it matters most – during emergency calls.

An overhaul of the county’s 911 radio system got rolling last March, when the Cambria County commissioners approved a contract with Mission Critical Partners – tasked with analyzing the current 911 network, and tracking immediate fixes and future design enhancements.

The $201,870 contract covers network design services, along with a $16,500 equipment allowance.

Robbin Melnyk, county 911 coordinator, said the coverage area of the current radio system has been affected by tree growth and the use of analog radios instead of digital units.

That has created situations in which first responders can’t communicate with dispatchers at the 911 center or with each other on emergency scenes.

...

http://www.govtech.com/em/preparedness/Cambria-County-Pa-Preparing-for-911-Radio-System-Overhaul.html

Financial software company Intuit discovered that tax return info was accessed by an unauthorized party after an undisclosed number of TurboTax tax preparation software accounts were breached in a credential stuffing attack.

A credential stuffing attack is when attackers compile usernames and passwords that were leaked in previous security breaches and use those credentials to try to gain access to accounts at other sites. This type of attack works particularly well against users who use the same password at every site.
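
One common defense against this pattern (illustrative only, not a description of Intuit's actual controls) is to check submitted credentials against pairs known to have leaked elsewhere and force a password reset on a match. A minimal Python sketch, assuming you maintain such a corpus as keyed hashes rather than cleartext:

```python
# Hedged sketch: flag logins whose username/password pair appears in a corpus
# of credentials leaked from other sites' breaches (a common credential-stuffing
# mitigation). The key, corpus, and example accounts are hypothetical.
import hashlib
import hmac

SITE_KEY = b"example-site-pepper"      # hypothetical secret kept server-side

def fingerprint(username: str, password: str) -> str:
    """Keyed hash of a username:password pair so cleartext never sits on disk."""
    data = f"{username.lower()}:{password}".encode()
    return hmac.new(SITE_KEY, data, hashlib.sha256).hexdigest()

# Built once from public breach dumps; a toy in-memory set for this sketch.
LEAKED_PAIRS = {fingerprint("alice@example.com", "Winter2018!")}

def login_is_risky(username: str, password: str) -> bool:
    """True if this exact pair is known from another breach -> force a reset."""
    return fingerprint(username, password) in LEAKED_PAIRS

if __name__ == "__main__":
    print(login_is_risky("alice@example.com", "Winter2018!"))   # True
    print(login_is_risky("alice@example.com", "unique-pass"))   # False
```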

...

https://www.bleepingcomputer.com/news/security/tax-returns-exposed-in-turbotax-credential-stuffing-attacks/

Despite the openness of the Android platform, Google has managed to keep its Play store mainly free of malware and malicious apps. Outside of the marketplace is a different matter.

In 2018, Google saw more attacks on users' privacy, continued to fight against dishonest developers, and focused on detecting the more sophisticated tactics of mobile malware and adware developers, the Internet giant stated in a recent blog post. 

Google's efforts — and those of various security firms — highlight that, despite ongoing success against mobile malware, attackers continue to improve their techniques. Malware developers continue to find new ways to hide functionality in otherwise legitimate-seeming apps. Mobile applications with potentially unwanted functionality, so-called PUAs, and applications that eventually download additional functionality or drop malicious code, known as droppers, are both significant threats, according to security firm Kaspersky Lab.

For Google, the fight against malicious mobile app developers is an unrelenting war to keep bad code off its Google Play app store, the firm said. 

...

https://www.darkreading.com/mobile/lessons-from-the-war-on-malicious-mobile-apps/d/d-id/1333946

The reports of the death of the field of business continuity have been greatly overstated. But those of us who work in it do have to raise our performance in a few critical areas.

For some time, reports predicting the imminent demise of the field of business continuity have been a staple of industry publications and gatherings.

The most prominent of these have been the manifesto and book written by David Lindstedt and Mark Armour. For an interesting summary and review of their work, check out this article by Charlie Maclean Bristol on BC Training.

...

https://bcmmetrics.com/business-continuity-r-i-p/

Friday, 22 February 2019 15:25

Business Continuity, R.I.P.?

Recommended best practices are not effective against certain types of attacks, they say.

Automated online password-guessing attacks, where adversaries try numerous combinations of usernames and passwords to break into accounts, have emerged as a major threat to Web service providers in recent years.

Next week, two security researchers will present a paper at the Network and Distributed System Security Symposium (NDSS Symposium) in San Diego that proposes a new, more scalable approach to addressing the problem.

The approach — described in a paper titled "Distinguishing Attacks from Legitimate Authentication Traffic at Scale" — is designed specifically to address challenges posed by untargeted online password-guessing attacks. These are attacks where an adversary distributes password guesses across a very large range of accounts in an automated fashion.
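
The paper's actual technique is its own contribution; as a rough intuition for the problem space, the sketch below (an assumption-laden heuristic, not the researchers' method) flags a source that fails logins against an unusually large number of distinct accounts within a short window — the signature of an untargeted spray rather than a user mistyping their own password:

```python
# Coarse illustrative detector for untargeted password spraying.
# Thresholds, window size, and the in-memory store are hypothetical choices.
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 600                    # look at the last 10 minutes
DISTINCT_ACCOUNT_THRESHOLD = 25         # accounts touched before we flag a source

_failures = defaultdict(deque)          # source_ip -> deque of (timestamp, username)

def record_failed_login(source_ip: str, username: str,
                        now: Optional[float] = None) -> bool:
    """Record a failed login; return True if the source now looks like a sprayer."""
    now = time.time() if now is None else now
    events = _failures[source_ip]
    events.append((now, username))
    while events and now - events[0][0] > WINDOW_SECONDS:   # expire old events
        events.popleft()
    distinct_accounts = len({user for _, user in events})
    return distinct_accounts >= DISTINCT_ACCOUNT_THRESHOLD

if __name__ == "__main__":
    flagged = False
    for i in range(30):                 # simulate a spray across 30 accounts
        flagged = record_failed_login("203.0.113.7", f"user{i}@example.com", now=float(i))
    print("flagged:", flagged)          # True
```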

...

https://www.darkreading.com/attacks-breaches/researchers-propose-new-approach-to-address-online-password-guessing-attacks/d/d-id/1333939

Safe Web Use Practices for Investment Firms

Regulating employee web use via compliance handbooks and URL filters for blacklisted (bad) and whitelisted (good) online resources has failed to improve compliance. Authentic8’s John Klassen discusses how firms are increasingly turning to a centrally managed and monitored cloud browser to regain control, unobtrusively maximize visibility into employees’ web activities and ensure compliance without sacrificing productivity or risking an internal backlash.

Pressure from the SEC and state authorities has increased over the past two years to remediate areas of cybersecurity weakness. Yet regulators and compliance professionals agree that alarming gaps remain in how regulated financial services firms use the web. Many firms still struggle to effectively control, secure and monitor employee web activities.

So what’s the holdup?

Industry insiders point to the ubiquitous use of a tool that was conceived almost 30 years ago: the locally installed browser. Many firms still use a traditional “free” browser for all their web activities, its inherent architectural flaws and vulnerabilities notwithstanding. At the same time, CCOs and IT are also increasingly aware of the risks associated with local browser use:

...

https://www.corporatecomplianceinsights.com/compliance-and-the-blacklist-whitelist-fallacy/

UK businesses are most concerned about the susceptibility of 5G to cyber attacks according to EY’s latest Technology, Media and Telecommunications (TMT) research.

Forty percent of respondents are worried about 5G and cyber attacks, while a similar percentage (37 percent) are cautious about the security of Internet of Things (IoT) connectivity. The survey also found that while 5G investment is set to catch up with Internet of Things spend over the next two years, doubts surround its readiness and relevance. Just over one third of respondents fear that 5G is too immature, while 32 percent believe it lacks relevance to overall technology and business strategy.

The survey of 200 UK businesses looked at attitudes towards the adoption of 5G and IoT technology as well as organizations’ expectations from tech suppliers.

...

https://www.continuitycentral.com/index.php/news/erm-news/3753-risks-for-uk-businesses-adopting-5g-and-iot-assessed-by-ey

The constant stresses from advanced malware to zero-day vulnerabilities can easily turn into employee overload with potentially dangerous consequences. Here's how to turn down the pressure.

Cybersecurity is one of the few IT roles where there are people actively trying to ruin your day, 24/7. The pressure concerns are well documented. A 2018 global survey of 1,600 IT pros found that 26% of respondents cited advanced malware and zero-day vulnerabilities as the top cause of the operational pressure that security practitioners experience. Other top concerns include budget constraints (17%) and a lack of security skills (16%).

As a security practitioner, there is always the possibility of receiving a late-night phone call any day of the week alerting you that your environment has been breached and that customer data has been publicized across the web. Today, a data breach is no longer just a worst-case scenario; it's a matter of when, a consequence that weighs heavily on everyone — from threat analyst to CISO.

...

https://www.darkreading.com/threat-intelligence/why-cybersecurity-burnout-is-real-(and-what-to-do-about-it)/a/d-id/1333906

Preparing a business for the unknown requires a series of important steps to protect your employees and your operations. For many business owners, this foundation starts with an emergency plan and grows to include a business continuity plan, an inclement weather policy, and perhaps even a lone worker policy to keep employees safe.

So, you’ve made your emergency plans and identified the best people to lead your teams through each phase. Now, it’s time to practice with the low-cost but high-impact emergency planning event known as a tabletop exercise.

...

https://www.alertmedia.com/blog/tabletop-exercises/

What You Need to Know for 2019 – and Beyond

In the fast-moving world of cybersecurity, predicting the full threat landscape is near impossible. But it is possible to extrapolate major risks in the coming months based on trends and events of last year. Anthony J. Ferrante, Global Head of Cybersecurity at FTI Consulting, outlines what organizations must be aware of to be prepared.

In 2018, cyber-related data breaches cost affected organizations an average of $7.5 million per incident — up from $4.9 million in 2017, according to the U.S. Securities and Exchange Commission. The impact of that loss is great enough to put some companies out of business.

As remarkable as that figure is, associated monetary costs do not include the potentially catastrophic effects a cyberattack can have on an organization’s reputation. An international hotel chain, a prominent athletic apparel company and a national ticket distributor were just three of several organizations that experienced data breaches in 2018 affecting millions of their online users — incidents sure to cause public distrust. It’s no coincidence that these companies were targeted — all store valuable user data that is coveted by hackers for nefarious use.

These events and trends should serve as eye openers for what’s ahead this year, as malicious actors are becoming more sophisticated and focused with their attacks. Consider these 10 predictions over the next 10 months:

...

https://www.corporatecomplianceinsights.com/10-corporate-cybersecurity-predictions/

Thursday, 21 February 2019 17:01

10 Corporate Cybersecurity Predictions

Companies think their data is safer in the public cloud than in on-prem data centers, but the transition is driving security issues.

More business-critical data is finding a new home in the public cloud, which 72% of organizations believe is more secure than their on-prem data centers. But the cloud is fraught with security challenges: Shadow IT, shared responsibility, and poor visibility put data at risk.

These insights come from the second annual "Oracle and KPMG Cloud Threat Report 2019," a deep dive into enterprise cloud security trends. Between 2018 and 2020, researchers predict the number of organizations with more than half of their data in the cloud to increase by a factor of 3.5.

"We're seeing, by and large, respondents are having a high degree of trust in the cloud," says Greg Jensen, senior principal director of security at Oracle. "From last year to this year, we saw an increase in this trust."

...

https://www.darkreading.com/cloud/as-businesses-move-critical-data-to-cloud-security-risks-abound/d/d-id/1333924

ASSP TR-Z590.5-2019 provides guidance from safety experts on proactive steps businesses can take to reduce the risk of an active shooter, prepare employees and ensure a coordinated response should a hostile event occur. It also provides post-incident guidance and best practices for implementing a security plan audit.

Active shooter fatalities spiked to 729 deaths in 2017, more than three times our country’s previous high. A business must know where its threats and vulnerabilities exist. Our consensus-based document contains recommendations on how a business in any industry can better protect itself in advance of such an incident. Based on the collaborative work of more than 30 professionals experienced in law enforcement, industrial security and corporate safety compliance, the report aims to drive a higher level of preparedness against workplace violence.

...

https://www.assp.org/standards/standards-topics/active-shooter-technical-report

A new toolkit developed by the Global Cyber Alliance aims to give small businesses a cookbook for better cybersecurity.

Small and mid-sized businesses have most of the same cybersecurity concerns as larger enterprises. What they don't have are the resources to deal with them. A new initiative, the Cybersecurity Toolkit, is intended to bridge that gulf and give small companies the ability to keep themselves safer in an online environment that is increasingly dangerous.

The Toolkit, a joint initiative of the Global Cyber Alliance (GCA) and Mastercard, is intended to give small business owners basic, usable security controls and guidance. It's not, says Alexander Niejelow, senior vice president for cybersecurity coordination and advocacy at Mastercard, that there's no information available to small business owners. He points out that government agencies in the U.S. and the U.K. provide a lot of information on cybersecurity for businesses.

It's just that, "It's very hard for small businesses to consume that. What we wanted to do was remove the barriers to effective action," he says, and go beyond broad guidance to giving them very specific instructions presented, "…if at all possible in a video format and clear easy to use tools that they could use right now to go in and significantly reduce their cyber risk so they could be more secure and more economically stable in both the short and long term."

...

https://www.darkreading.com/threat-intelligence/mastercard-gca-create-small-business-cybersecurity-toolkit/d/d-id/1333914

Bankers around the world are rightly worried about the threats posed by digital disruptors getting in between them and their retail banking customers. But Forrester’s newest research reveals that executives should be just as worried — perhaps even more worried — about another market that is being upended: Small business banking.

Small and medium-sized businesses (also called small and medium-sized enterprises or SMEs) are crucial sources of revenues and profits at most banking providers, so the prospect of bank brands losing their relevance among SMEs should keep bankers awake at night.

Here are just a few of the insights you’ll find in our new research report:

...

https://go.forrester.com/blogs/small-business-banking-has-been-disrupted-and-theres-no-going-back/

New data from CrowdStrike's incident investigations in 2018 uncover just how quickly nation-state hackers from Russia, North Korea, China, and Iran pivot from patient zero in a target organization.

It takes Russian nation-state hackers just shy of 19 minutes to spread beyond their initial victims in an organization's network - yet another sign of how brazen Russia's nation-state hacking machine has become.

CrowdStrike gleaned this attack-escalation rate from some 30,000-plus cyberattack incidents it investigated in 2018. North Korea followed Russia at a distant second, taking around two hours and 20 minutes to move laterally, followed by China at around four hours and Iran at around five hours and nine minutes.

"This validated what we've seen and believed - that the Russians were better [at lateral movement]," says Dmitri Alperovitch, co-founder and CTO of CrowdStrike. "We really weren't sure how much better," and their dramatically rapid escalation rate came as a bit of a surprise, he says.

Cybercriminals overall are slowest at lateral movement, with an average of nine hours and 42 minutes to move from patient zero to another part of the victim organization. The overall average time for all attackers was more than four-and-a-half hours, CrowdStrike found.

...

https://www.darkreading.com/threat-intelligence/19-minutes-to-escalation-russian-hackers-move-the-fastest/d/d-id/1333907

Weather tools help Team Rubicon respond quicker and reduce risks

By Glen Denny, President, Enterprise Solutions, Baron Critical Weather Solutions

Team Rubicon is an international disaster response nonprofit with a mission of using the skills and experiences of military veterans and first responders to rapidly provide relief to communities in need. Headquartered in Los Angeles, California, Team Rubicon has more than 80,000 volunteers around the country ready to jump into action when needed to provide immediate relief to those affected by natural disasters.

More than 80 percent of the disasters Team Rubicon responds to are weather-related, including crippling winter storms, catastrophic hurricanes, and severe weather outbreaks – like tornadoes. While always ready to serve, the organization needed better weather intelligence to help them prepare and mitigate risks. After adopting professional weather forecasting and monitoring tools, operations teams were able to pinpoint weather hazards, track storms, view forecasts, and set up custom alerts. And the intelligence they gained made a huge difference in the organization’s response to Hurricanes Florence and Michael.

Team Rubicon relies on skills and experiences of military veterans and first responders

About 75 percent of Team Rubicon volunteers are military veterans, who find that their skills in emergency medicine, small-unit leadership, and logistics are a great fit with disaster response. It also helps with their ability to hunker down in challenging environments to get the job done. A further 20 percent of volunteers are trained first responders, while the rest are volunteers from all walks of life. The group is a member of National Voluntary Organizations Active in Disaster (National VOAD), an association of organizations that mitigate and alleviate the impact of disasters.

By focusing on underserved or economically-challenged communities, Team Rubicon seeks to make the largest impact possible. According to William (“TJ”) Porter, manager of operational planning, Team Rubicon’s core mission is to help those who are often forgotten or left behind; they place a special emphasis on helping under-insured and uninsured populations.

Porter, a 13-year Air Force veteran, law enforcement officer, world traveler, and former American Red Cross worker, proudly stands by Team Rubicon’s service principles, “Our actions are characterized by the constant pursuit to prevent or alleviate human suffering and restore human dignity – we help people on their worst day.”

Weather-related disasters pose special challenges

The help Team Rubicon provides for weather-related disasters runs the gamut: removing trees from roadways, clearing paths for service vehicles, bringing in supplies, conducting search and rescue missions (including boat rescues), dealing with flooded-out homes, mucking out after a flood, mold remediation, and just about anything else needed. While Team Rubicon had greatly expanded its equipment inventory in recent years to help do these tasks, the organization lacked the deep level of weather intelligence that could help it understand and mitigate risks – and keep its teams safe from danger.

That’s where Baron comes into the story. After learning at the Virginia Emergency Management Conference of the impressive work Team Rubicon is doing, a Baron team member struck up a conversation with Team Rubicon, asking if they had a need for detailed and accurate weather data to help them plan their efforts. Team Rubicon jumped at the opportunity, and Baron ultimately donated access to its Baron Threat Net product. Key features allow users to pinpoint weather hazards by location, track storms, view forecasts and set up custom alerts, including location-based pinpoint alerting and standard alerts from the National Weather Service (NWS). The web portal weather monitoring system provides street-level views and the ability to layer numerous data products. Threat Net also offers a mobile companion application that gives Team Rubicon access to real-time weather monitoring on the go.
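As a rough, generic illustration of how location-based proximity alerting works (this is not how Baron Threat Net is implemented, and the coordinates, radius, and function names below are invented), a monitoring loop can compare the great-circle distance between a team's position and a tracked hazard against an alert radius:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_alert(team_location, hazard_location, radius_km=40.0):
    """Alert when a tracked hazard (e.g., an approaching storm cell) is inside the radius."""
    return haversine_km(*team_location, *hazard_location) <= radius_km

# Hypothetical example: a strike team near Wilmington, NC and a storm cell roughly 25 km away.
print(should_alert((34.23, -77.95), (34.40, -77.80)))  # True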

This suited Team Rubicon down to the ground. “In years past, we didn’t have a good way to monitor weather,” explains Porter. “We went onto the NWS, but our folks are not meteorologists, and they don’t have that background to make crucial decisions. Baron Threat Net helped us understand risks and mitigate the risks of serious events. It plays a crucial role in getting teams in as quickly as possible so we can help the greatest number of people.”

New weather tools help with response to major hurricanes

The new weather intelligence tools have already had a huge impact on Team Rubicon’s operations. Take the example of how access to weather data helped Team Rubicon with its massive response to Hurricane Florence. A day or so before the hurricane was due to make landfall, Dan Gallagher, Enterprise Product Manager and meteorologist at Baron Services, received a call from Team Rubicon, requesting product and meteorological support. Individual staff had been using the new Baron Threat Net weather tools to a degree since gaining access to them, but the operations team wanted more training and support in the face of what looked like a major disaster barreling towards North Carolina, South Carolina, Virginia, and West Virginia.

Gallagher, a trained meteorologist with more than 18 years of experience in meteorological research and software development, quickly hopped on a plane, arriving at Team Rubicon’s National Operations Center in Dallas. His first task was to meet operational manager Porter’s request to help them guide reconnaissance teams entering the area. They wanted to place a reconnaissance team close to the storm – but not in mortal danger. Using the weather tools, Gallagher located a spot north of Wilmington, NC between the hurricane’s eyewall and outer rain bands that could serve as a safe spot for reconnaissance.

The next morning, Gallagher provided a weather briefing to ensure that operations staff had the latest weather intelligence. “I briefed them on where the storm was, where it was heading, the dangers that could be anticipated, areas likely to be most affected, and the hazards in these areas.”

Throughout the day, Gallagher conducted a number of briefings and kept the teams up to date as Hurricane Florence slowly moved overland. He also provided video weather briefings for the reconnaissance team in their car en route to their destination.

Another crew based in Charlotte was planning the safest route for trucking in supplies based on weather conditions. They wanted help in choosing whether to haul the trailer from Atlanta, GA or Alexandria, VA. “I was not there to make a recommendation on an action but rather to give them the weather information they need to make their decision,” explains Gallagher. “As a meteorologist, I know what the weather is, but they decide how it impacts their operation. As soon as I gave a weather update they could make a decision within seconds, making it possible for actions based on that decision.” Team Rubicon used the information Gallagher provided to select the Alexandria VA route; their crackerjack logistics team was then able to quickly make all the needed logistical arrangements.

In addition to weather briefings, Gallagher provided more detailed product training on Baron Threat Net, observed how the teams actually use the product, and learned how the real-time products were performing. He also got great feedback on other data products that might enhance Team Rubicon’s ability to respond to disasters.

Team Rubicon gave very high marks to the high-resolution weather/forecast model available in Baron Threat Net. They relied upon the predictive precipitation accumulation and wind speed information, as well as information on total precipitation accumulation (what has already fallen in the past 24 hours).

The wind damage product showing shear rate was very useful to Team Rubicon. In addition, the product did an excellent job of detecting rotation, including picking out the weak tornadoes spawned from the hurricane that were present in the outer rain bands of Hurricane Florence. These are typically very difficult to identify and warn people about, because they spin up quickly and are relatively shallow and weak (with tornado damage of EF0 or EF1 as measured on the Enhanced Fujita Scale). Gallagher had seen how well the wind damage product performed in larger tornado cases but was particularly gratified at how well it helped the team detect these smaller ones.

For example, Lauren Vatier of Team Rubicon’s National Incident Management Team commented that she had worked with Baron Threat Net before the Florence event, but using it so intensively made her more familiar with how to use the product and really helped cement her knowledge. “Before Florence I had not used Baron Threat Net for intel purposes. Today I am looking for information on rain accumulation and wind, and I’m looking ahead to help the team understand what the situation will look like in the future. It helps me understand and verify the actual information happening with the storm. I don’t like relying on news articles. Now I can look into the product and get accurate and reliable information.”

Vatier also really likes the ability to pinpoint information on a map showing colors and ranges. “You can click on a point and tell how much accumulation has occurred or what the wind speed is. The pinpointing is a valuable part of Baron Threat Net.” The patented Baron Pinpoint Alerting technology automatically sends notifications any time impactful weather approaches; alert types include severe storms and tornadoes; proximity alerts for approaching lightning, hail, snow and rain; and National Weather Service warnings. She concludes, “I feel empowered by the program. It ups my confidence in my ability to provide accurate information.”

TJ Porter concurred that Baron Threat Net helped Team Rubicon mobilize the large teams that deployed for Hurricane Florence. “It is crucial to put people on the ground and make sure they’re safe. Baron Threat Net helps us respond quicker to disasters. It also helps the strike teams ensure they are not caught up in other secondary or rapid onset weather events.”

Porter explains that the situation unit leaders actively monitor weather through the day using Baron Threat Net. “We are giving them all the tools at our disposal, because these are the folks who provide early warnings to keep our folks safe.”

Future-proofing weather data

Being on the ground with Team Rubicon during the Hurricane Florence disaster recovery response gave Baron’s Gallagher an unusual opportunity to discuss other ways Baron weather products could help respond to weather-related disasters. According to Porter, “We are looking to Baron to help us understand secondary events, like the extensive flooding resulting from Hurricane Florence, and to understand where these hazards are today, tomorrow, and the next day.”

In addition, Team Rubicon is committed to targeting those areas of greatest need, so they want to be able to layer weather information with other data sets, especially social vulnerability, including location of areas with uninsured or underinsured populations. Says Porter, “Getting into areas we know need help will shave minutes, hours, or even days off how long it takes to be there helping”.

In the storm’s aftermath

At the time this article was written, hundreds of Team Rubicon volunteers were deployed as part of Hurricane Florence response operations and later in response to Hurricane Michael. Their work has garnered them a tremendous amount of national appreciation, including a spotlight appearance during Game 1 of the World Series. T-Mobile used its commercial television spots to support the organization, also pledging to donate $5,000 per post-season home run plus $1 per Twitter or Instagram post using #HR4HR to Team Rubicon.

Baron’s Gallagher appreciated the opportunity to see in real time how customers use its products, saying “The experience helped me frame improvements we can develop that will positively affect our clients using Baron Threat Net.”

By Alex Winokur, founder of Axxana

 

Disaster recovery is now on the list of top concerns of every CIO. In this article we review the evolution of the disaster recovery landscape, from its inception until today. We look at the current understanding of how disasters behave and, consequently, how disaster recovery processes are designed. We also try to cautiously anticipate the future, outlining the main challenges associated with disaster recovery.

The Past

The computer industry is relatively young. The first commercial computers appeared sometime in the 1950s—not even seventy years ago. The history of disaster recovery (DR) is even younger. Table 1 outlines the appearance of the various technologies necessary to construct a modern DR solution.


Table 1 – Early history of DR technology development

 

From Magnetic Tapes to Data Networks

The first magnetic tapes for computers were used as input/output devices. That is, input was punched onto punch cards that were then stored offline to magnetic tapes. Later, UNIVAC I, one of the first commercial computers, was able to read these tapes and process their data. Later still, output was similarly directed to magnetic tapes that were connected offline to printers for printing purposes. Tapes began to be used as a backup medium only after 1954, with the introduction of the mass storage device (RAMAC).

Figure 1: First Storage System - RAMAC

Although modern wide-area communication networks date back to 1974, data has been transmitted over long-distance communication lines since 1837 via telegraphy systems. These telegraphy communications have since evolved to data transmission over telephone lines using modems.

Modems were first deployed at scale in 1958 to connect United States air defense systems; however, their throughput was very low compared to what we have today. The FAA clustered system relied on communication links originally designed for computers to communicate with their peripherals (e.g., tapes). Local area networks (LANs) as we now know them had not been invented yet.

Early Attempts at Disaster Recovery

It wasn’t until the 1970s that concerns about disaster recovery started to emerge. In that decade, the deployment of IBM 360 computers reached a critical mass, and they became a vital part of almost every organization. Until the mid-1970s, the perception was that if a computer failed, it would be possible to fall back to paper-based operation as was done in the 1960s. However, the widespread adoption of digital technologies in the 1970s led, on the one hand, to a corresponding increase in technological failures; on the other hand, theoretical calculations, backed by real-world evidence, showed that switching back to paper-based work was not practical.

The emergence of terrorist groups in Europe like the Red Brigades in Italy and the Baader-Meinhof Group in Germany further escalated concerns about the disruption of computer operations. These left-wing organizations specifically targeted financial institutions. The fear was that one of them would try to blow up a bank’s data centers.

At that time, communication networks were in their infancy, and replication between data centers was not practical.

Parallel workloads. IBM came up with the idea of using the FAA clustering technology to build two adjoining computer rooms that were separated by a steel wall, with one node of the cluster in each room. The idea was to run the same workload twice and to be able to immediately fail over from one system to the other in case one system was attacked. A closer analysis revealed that in the case of a terror attack, the only surviving object would be the steel wall, so the plan was abandoned.

Hot, warm, and cold sites. The inability of computer vendors (IBM was the main vendor at the time) to provide an adequate DR solution made way for dedicated DR firms like SunGard to provide hot, warm, or cold alternate sites. Hot sites, for example, were duplicates of the primary site; they independently ran the same workloads as the primary site, as communication between the two sites was not available at the time. Cold sites served as repositories for backup tapes. Following a disaster at the primary site, operations would resume at the cold site by allocating equipment, executing restore-from-backup operations, and restarting the applications. Warm sites were a compromise between a hot site and a cold site. These sites had hardware and connectivity already established; however, recovery was still done by restoring the data from backups before the applications could be restarted.

Backups and high availability. The major advances in the 1980s were around backups and high availability. On the backup side, regulations requiring banks to have a testable backup plan were enacted. These were probably the first DR regulations to be imposed on banks; many more followed through the years. On the high availability side, Digital Equipment Corporation (DEC) made the most significant advances in LAN communications (DECnet) and clustering (VAXcluster).

The Turning Point

On February 26, 1993, the first bombing of the World Trade Center (WTC) took place. This was probably the most significant event shaping the disaster recovery solution architectures of today. People realized that the existing disaster recovery solutions, which were mainly based on tape backups, were not sufficient. They understood that too much data would be lost in a real disaster event.

SRDF. By this time, communication networks had matured, and EMC became the first to introduce a storage-to-storage replication software called Symmetrix Remote Data Facility (SRDF).

 

Behind the Scenes at IBM

At the beginning of the nineties, I was with IBM’s research division. At the time, we were busy developing a very innovative solution to shorten the backup window, as backups were the foundation for all DR and the existing backup windows (dead hours during the night) started to be insufficient to complete the daily backup. The solution, called concurrent copy, was the ancestor of all snapshotting technologies, and it was the first intelligent function running within the storage subsystem. The WTC event in 1993 left IBM fighting the “yesterday battles” of developing a backup solution, while giving EMC the opportunity to introduce storage-based replication and become the leader in the storage industry.

 

The first few years of the 21st century will always be remembered for the events of September 11, 2001—the date of the complete annihilation of the World Trade Center. Government, industry, and technology leaders realized then that some disasters can affect the whole nation, and therefore DR had to be taken much more seriously. In particular, the attack demonstrated that existing DR plans were not adequate to cope with disasters of such magnitude. The notion of local, regional, and nationwide disasters crystalized, and it was realized that recovery methods that work for local disasters don’t necessarily work for regional ones.

SEC directives. In response, the Securities and Exchange Commission (SEC) issued a set of very specific directives in the form of the “Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System.” These regulations, still in effect today, bind all financial institutions. The DR practices that were codified in the SEC regulations quickly propagated to other sectors, and disaster recovery became a major area of activity for all organizations relying on IT infrastructure.

The essence of these regulations is as follows:

  1. The economic stance of the United States cannot be compromised under any circumstance.
  2. Relevant financial institutions are obliged to correctly, without any data loss, resume operations by the next business day following a disaster.
  3. Alternate disaster recovery sites must use different physical infrastructure (electricity, communication, water, transportation, and so on) than the primary site.

Note that Requirements 2 and 3 above are somewhat contradictory. Requirement 2 necessitates synchronous replication to facilitate zero data loss, while Requirement 3 basically dictates long distances between sites—thereby making the use of synchronous replication impossible. This contradiction is not addressed within the regulations and is left to each implementer to deal with at its own discretion.
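A quick back-of-envelope calculation shows why the two requirements pull in opposite directions. Assuming signal propagation in fiber of roughly 200 km per millisecond (about two-thirds the speed of light) and ignoring all switching, protocol, and storage overhead, every synchronously replicated write pays at least one round trip between the sites:

# Back-of-envelope: added write latency from synchronous replication over distance.
# Assumes roughly 200 km of fiber per millisecond of propagation (about 2/3 c)
# and ignores switching, protocol, and storage overhead.

FIBER_KM_PER_MS = 200.0

def sync_replication_rtt_ms(distance_km):
    # A synchronous write is not acknowledged until the remote copy confirms,
    # so each write pays at least one round trip.
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (10, 100, 1000):
    print(f"{km:>5} km apart -> at least {sync_replication_rtt_ms(km):.1f} ms added per write")
# 10 km: ~0.1 ms; 100 km: ~1 ms; 1000 km: ~10 ms per write.

At metro distances the penalty is negligible, but at the hundreds of kilometers implied by true infrastructure separation it adds milliseconds to every write, which is why synchronous replication is generally considered impractical at that range.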

The secret to resolving this contradiction lies in the ability to reconstruct missing data if or when data loss occurs. The nature of most critical data is such that there is always at least one other instance of this data somewhere in the universe. The trick is to locate it, determine how much of it is missing in the database, and augment the surviving instance of the database with this data. This process is called data reconciliation, and it has become a critical component of modern disaster recovery. [See The Data Reconciliation Process sidebar.]

 

The Data Reconciliation Process

If data is lost as a result of a disaster, the database becomes misaligned with the real world. The longer this misalignment exists, the greater the risk of application inconsistencies and operational disruptions. Therefore, following a disaster, it is very important to realign the databases with the real world as soon as possible. This process of alignment is called data reconciliation.

The reconciliation process has two important characteristics:

  1. It is based on the fact that the data lost in a disaster exists somewhere in the real world, and thus it can be reconstructed in the database.
  2. The duration and complexity of the reconciliation is proportional to the recovery point objective (RPO); that is, it’s proportional to the amount of data lost.

One of the most common misconceptions in disaster recovery is that RPO (for example, RPO = 5 minutes) refers to how many minutes of data the organization is willing to lose. What RPO really means is that the organization must be able to reconstruct and reconsolidate (i.e., reconcile) the last five minutes of missing data. Note that the higher the RPO (and therefore, the greater the data loss), the longer the RTO and the costlier the reconciliation process. Catastrophes typically occur when the RPO is compromised and the reconciliation process takes much longer than planned.

In most cases, the reconciliation process is quite complicated, consisting of time-consuming processes to identify the data gaps and then resubmitting the missing transactions to realign the databases with real-world status. This is a costly, mainly manual, error-prone process that greatly prolongs the recovery time of the systems and magnifies risks associated with downtime.
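To make the RPO point concrete, here is a small worked example of what an RPO of five minutes can imply for reconciliation effort; the transaction rate and per-transaction effort are purely hypothetical:

# Rough illustration: what an RPO of five minutes can mean in practice.
# The workload and per-transaction reconciliation effort below are hypothetical.

rpo_minutes = 5
transactions_per_second = 200           # hypothetical workload
reconcile_seconds_per_transaction = 30  # hypothetical effort to locate and resubmit one transaction

lost_transactions = rpo_minutes * 60 * transactions_per_second
reconcile_hours = lost_transactions * reconcile_seconds_per_transaction / 3600

print(f"Up to {lost_transactions:,} transactions to reconcile")          # 60,000
print(f"Roughly {reconcile_hours:,.0f} person-hours of reconciliation")  # 500

Even a modest data-loss window can translate into weeks of reconciliation work, which is exactly how a compromised RPO stretches the RTO.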

 

The Present

The second decade of the 21st century has been characterized by new types of disaster threats, including sophisticated cyberattacks and extreme weather hazards caused by global warming. It is also characterized by new DR paradigms, like DR automation, disaster recovery as a service (DRaaS), and active-active configurations.

These new technologies are for the most part still in their infancy. DR automation tools attempt to orchestrate a complete site recovery through invocation of one “site failover” command, but they are still very limited in scope. A typical tool in this category is the VMware Site Recovery Manager (SRM). DRaaS attempts to reduce the cost of DR-compliant installation by locating the secondary site in the cloud. The new active-active configurations try to reduce equipment costs and recovery time by utilizing techniques that are used in the context of high availability; that is, to recover from a component failure rather than a complete site failure.
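To give a sense of what a single “site failover” command has to orchestrate, the sketch below strings together the typical high-level steps. It is a generic, hypothetical illustration; it is not VMware SRM's interface or any vendor's actual API, and every step name below is invented.

# Generic sketch of a "site failover" runbook; all steps and names are hypothetical.

FAILOVER_STEPS = [
    ("fence primary site",            lambda: print("isolating primary storage and hosts")),
    ("promote replica volumes",       lambda: print("promoting replicated volumes to read/write")),
    ("power on protected VMs",        lambda: print("booting VMs in dependency order")),
    ("repoint DNS and load balancer", lambda: print("switching client traffic to the recovery site")),
    ("run application checks",        lambda: print("verifying services answer health probes")),
]

def site_failover():
    for name, action in FAILOVER_STEPS:
        print(f"[failover] {name}")
        try:
            action()
        except Exception as exc:
            # A real orchestration tool would pause here for an operator decision or roll back.
            print(f"[failover] step failed: {name}: {exc}")
            raise

site_failover()

The value of automation is that the ordering, dependencies, and error handling are encoded once and tested, rather than reconstructed by people in the middle of a disaster.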

Disasters vs. Catastrophes

The following definitions of disasters and disaster recovery have been refined over the years to make a clear distinction between the two main aspects of business continuity: high availability protection and disaster recovery. This distinction is important because it crystalizes the difference between disaster recovery and a single component failure recovery covered by highly available configurations, and in doing so also accounts for the limitations of using active-active solutions for DR.

A disaster in the context of IT is either a significant adverse event that causes an inability to continue operation of the data center or a data loss event where recovery cannot be based on equipment at the data center. In essence, disaster recovery is a set of procedures aimed to resume operations following a disaster by failing over to a secondary site.

From a DR procedures perspective, it is customary to classify disasters into 1) regional disasters like weather hazards, earthquakes, floods, and electricity blackouts and 2) local disasters like local fires, onsite electrical failures, and cooling system failures.

Over the years, I have also noticed a third, independent classification of disasters. Disasters can also be classified as catastrophes. In principle, a catastrophe is a disastrous event in which something very unexpected happens that causes the disaster recovery plans to dramatically miss their service level agreement (SLA); that is, they typically exceed their recovery time objective (RTO).

When DR procedures go as planned for regional and local disasters, organizations fail over to a secondary site and resume operations within pre-determined parameters for recovery time (i.e., RTO) and data loss (i.e., RPO). The organization’s SLAs, business continuity plans, and risk management goals align with these objectives, and the organization is prepared to accept the consequent outcomes. A catastrophe occurs when these SLAs are compromised.

Catastrophes can also result from simply failing to execute the DR procedures as specified, typically due to human errors. However, for the sake of this article, let’s be optimistic and assume that DR plans are always executed flawlessly. We shall concentrate only on unexpected events that are beyond human control.

Most of the disaster events that have been reported in the news recently (for example, the Amazon Prime Day outage in July 2018 and the British Airways bank holiday outage in 2017) have been catastrophes related to local disasters. If DR could have been properly applied to the disruptions at hand, nobody would have noticed that there had been a problem, as the DR procedures were designed to provide almost zero recovery time and hence zero down time.

The following two examples provide a closer look at how catastrophes occur.

9/11 – Following the September 11 attack, several banks experienced major outages. Most of them had a fully equipped alternate site in Jersey City—no more than five miles away from their primary site. However, the failover failed miserably because the banks’ DR plans called for critical personnel to travel from their primary site to their alternate site, but nobody could get out of Manhattan.

A data center power failure during a major snow storm in New England – Under normal DR operations at this organization, the data was synchronously replicated to an alternate site. However, 90 seconds prior to a power failure at the primary site, the central communication switch in the area lost power too, which cut all WAN communications. As a result, the primary site continued to produce data for 90 seconds without replication to the secondary site; that is, until it experienced the power failure. When it finally failed over to the alternate site, 90 seconds of transactions were missing; and because the DR procedures were not designed to address recovery where data loss has occurred, the organization experienced catastrophic down time.

The common theme of these two examples is that in addition to the disaster at the data center there was some additional—unrelated—malfunction that turned a “normal” disaster into a catastrophe. In the first case, it was a transportation failure; in the second case, it was a central switch failure. Interestingly, both failures occurred to infrastructure elements that were completely outside the control of the organizations that experienced the catastrophe. Failure of the surrounding infrastructure is indeed one of the major causes for catastrophes. This is also the reason why the SEC regulations put so much emphasis on infrastructure separation between the primary and secondary data center.

Current DR Configurations

In this section, I’ve included examples of two traditional DR configurations that separate the primary and secondary center, as stipulated by the SEC. These configurations have predominated in the past decade or so, but they cannot ensure zero data loss in rolling disasters and other disaster scenarios, and they are being challenged by new paradigms such as that introduced by Axxana’s Phoenix. While a detailed discussion would be outside the scope of this article, suffice it to say that Axxana’s Phoenix makes it possible to avoid catastrophes such as those just described—something that is not possible with traditional synchronous replication models.


Figure 2 – Typical DR configuration

 

Typical DR configuration. Figure 2 presents a typical disaster recovery configuration. It consists of a primary site, a remote site, and another set of equipment at the primary site, which serves as a local standby.

The main goal of the local standby installation is to provide redundancy to the production equipment at the primary site. The standby equipment is designed to provide nearly seamless failover capabilities in case of an equipment failure—not in a disaster scenario. The remote site is typically located at a distance that guarantees infrastructure independence (communication, power, water, transportation, etc.) to minimize the chances of a catastrophe. It should be noted that the typical DR configuration is very wasteful. Essentially, an organization has to triple the cost of equipment and software licenses—not to mention the increased personnel costs and the cost of high-bandwidth communications—to support the configuration of Figure 2.


Figure 3 – DR cost-saving configuration

 

Traditional ideal DR configuration. Figure 3 illustrates the traditional ideal DR configuration. Here, the remote site serves both for DR purposes and high availability purposes. Such configurations are sometimes realized in the form of extended clusters like Oracle RAC One Node on Extended Distance. Although traditionally considered the ideal, they are a trade-off between survivability, performance, and cost. The organization saves on the cost of one set of equipment and licenses, but it compromises survivability and performance. That’s because the two sites have to be in close proximity to share the same infrastructure, so they are more likely to both be affected by the same regional disasters; at the same time, performance is compromised due to the increased latency caused by separating the two cluster nodes from each other.


Figure 4 – Consolidation of DR and high availability configurations with Axxana’s Phoenix


True zero-data-loss configuration. Figure 4 represents a cost-saving solution with Axxana’s Phoenix. In case of a disaster, Axxana’s Phoenix provides zero-data-loss recovery at any distance. So, with the help of Oracle’s high availability support (fast start failover and transparent application failover), Phoenix provides functionality very similar to extended cluster functionality. With Phoenix, however, it can be implemented over much longer distances and with much lower latency, providing true cost savings over the typical configuration shown in Figure 2.

The Future

In my view, the future is going to be a constant race between new threats and new disaster recovery technologies.

New Threats and Challenges

In terms of threats, global warming creates new weather hazards that are fiercer, more frequent, and far more damaging than in the past—and in areas that have not previously experienced such events. Terror attacks are on the rise, thereby increasing threats to national infrastructures (potential regional disasters). Cyberattacks—in particular ransomware, which destroys data—are a new type of disaster. They are becoming more prolific, more sophisticated and targeted, and more damaging.

At the same time, data center operations are becoming more and more complex. Data is growing exponentially. Instead of getting simpler and more robust, infrastructures are getting more diversified and fragmented. In addition to legacy architectures that aren’t likely to be replaced for a number of years to come, new paradigms like public, hybrid, and private clouds; hyperconverged systems; and software-defined storage are being introduced. Adding to that are an increasing scarcity of qualified IT workers and economic pressures that limit IT spending. All combined, these factors contribute to data center vulnerabilities and to more frequent events requiring disaster recovery.

So, this is on the threat side. What is there for us on the technology side?

New Technologies

Of course, Axxana’s Phoenix is at the forefront of new technologies that guarantee zero data loss in any DR configuration (and therefore ensure rapid recovery), but I will leave the details of our solution to a different discussion.

AI and machine learning. Apart from Axxana’s Phoenix, the most promising technologies on the horizon revolve around artificial intelligence (AI) and machine learning. These technologies enable DR processes to become more “intelligent,” efficient, and predictive by using data from DR tests, real-world DR operations, and past disaster scenarios; in doing so, disaster recovery processes can be designed to better anticipate and respond to unexpected catastrophic events. These technologies, if correctly applied, can shorten RTO and significantly increase the success rate of disaster recovery operations. The following examples suggest only a few of their potential applications in various phases of disaster recovery:

  • They can be applied to improve the DR planning stage, resulting in more robust DR procedures.
  • When a disaster occurs, they can assist in the assessment phase to provide faster and better decision-making regarding failover operations.
  • They can significantly improve the failover process itself, monitoring its progress and automatically invoking corrective actions if something goes wrong.

When these technologies mature, the entire DR cycle from planning to execution can be fully automated. They carry the promise of much better outcomes than processes done by humans because they can process and better “comprehend” far more data in very complex environments with hundreds of components and thousands of different failure sequences and disaster scenarios.
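As a minimal, hypothetical example of the monitoring idea in the list above, the sketch below flags replication-lag samples that deviate sharply from recent history, the kind of signal an automated DR process might use to trigger corrective action. The data, window size, and threshold are all invented.

import statistics

def lag_anomalies(lag_seconds, window=20, z_threshold=4.0):
    """Flag samples whose z-score against the preceding window exceeds the threshold."""
    flagged = []
    for i in range(window, len(lag_seconds)):
        recent = lag_seconds[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.pstdev(recent) or 1e-9  # avoid division by zero on flat history
        z = (lag_seconds[i] - mean) / stdev
        if z > z_threshold:
            flagged.append((i, lag_seconds[i], round(z, 1)))
    return flagged

# Hypothetical replication-lag samples (seconds), with a sudden spike near the end.
samples = [1.0, 1.1, 0.9, 1.05, 0.95] * 8 + [9.5, 1.0]
print(lag_anomalies(samples))  # flags the 9.5-second spike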

New models of protection against cyberattacks. The second area where technology can greatly help with disaster recovery is cyberattacks. Right now, organizations are spending millions of dollars on various intrusion prevention, intrusion detection, and asset protection tools. The evolution should be from protecting individual organizations to protecting the global network. Instead of fragmented, per-organization defense measures, the global communication network should be “cleaned” of threats that can create data center disasters. So, for example, phishing attacks that would compromise a data center’s access control mechanisms should be filtered out in the network—or in the cloud—instead of reaching and being filtered at the endpoints.

Conclusion

Disaster recovery has come a long way—from naive tape backup operations to complex site recovery operations and data reconciliation techniques. The expenses associated with disaster protection don’t seem to go down over the years; on the contrary, they are only increasing.

The major challenge of DR readiness is in its return on investment (ROI) model. On one hand, a traditional zero-data-loss DR configuration requires organizations to implement and manage not only a primary site, but also a local standby and remote standby; doing so essentially triples the costs of critical infrastructure, even though only one third of it (the primary site) is utilized in normal operation.

On the other hand, if a disaster occurs and the proper measures are not in place, the financial losses, reputation damage, regulatory backlash, and other risks can be devastating. As organizations move into the future, they will need to address the increasing volumes and criticality of data. The right disaster recovery solution will no longer be an option; it will be essential for mitigating risk, and ultimately, for staying in business.

Thursday, 07 February 2019 18:15

Disaster Recovery: Past, Present, and Future

What Recent News Means for the Future

The compliance landscape is changing, necessitating changes from the compliance profession as well. A team of experts from CyberSaint discuss what compliance practitioners can expect in the year ahead.

Regardless of experience or background, 2019 will not be an easy year for information security. In fact, we realize it’s only going to get more complicated. However, what we are excited to see is the awareness that the breaches of 2018 have brought to information security – how more and more senior executives are realizing that information security needs to be treated as a true business function – and 2019 will only see more of that.

Regulatory Landscape

As constituents become more technology literate, we will start to see regulatory bodies ramping up security compliance enforcement for the public and private sectors. Along with the expansion of existing regulations, we will also see new cyber regulations come to fruition. While we may not see U.S. regulations similar to GDPR at the federal level in 2019, the conversations around privacy regulation will only become more notable. What we are seeing already is the expansion of the DFARS mandate to encompass all aspects of the federal government, going beyond the Department of Defense.

...

https://www.corporatecomplianceinsights.com/a-cybersecurity-compliance-crystal-ball-for-2019/

Today Forrester closed the deal to acquire SiriusDecisions.  

SiriusDecisions helps business-to-business companies align the functions of sales, marketing, and product management; Sirius clients grow 19% faster and are 15% more profitable than their peers. Leaders within these companies make more informed business decisions through access to industry analysts, research, benchmark data, peer networks, events, and continuous learning courses, while their companies run the “Sirius Way” based on proven, industry-leading models and frameworks.

Why Forrester and SiriusDecisions? Forrester provides the strategy needed to be successful in the age of the customer; SiriusDecisions provides the operational excellence. The combined unique value can be summarized in a simple statement:

We work with business and technology leaders to develop customer-obsessed strategies and operations that drive growth. 

...

https://go.forrester.com/blogs/forrester-siriusdecisions/

Thursday, 03 January 2019 15:49

Forrester + SiriusDecisions

By Alex Becker, vice president and general manager of Cloud Solutions, Arcserve

If you’re like most IT professionals, your worst nightmare is waking up to the harsh reality that one of your primary systems or applications has crashed and you’ve experienced data loss. Whether caused by fire, flood, earthquake, cyber attack, programming glitch, hardware failure, human error, whatever – this is generally the moment that panic sets in.

While most IT teams understand unplanned downtime is a question of when, not if, many wouldn’t be able to recover business-critical data in time to avoid a disruption to the business. According to new survey research of 759 global IT decision-makers commissioned by Arcserve, half revealed they have less than an hour to recover business-critical data before it starts impacting revenue, yet only a quarter are extremely confident in their ability to do so. The obvious question is why.

UNTANGLING THE KNOT OF 21ST CENTURY IT

Navigating modern IT can seem like stumbling through a maze. Infrastructures are rapidly transforming, spreading across different platforms, vendors and locations, but still often include non-x86 platforms to support legacy applications. With these multi-generational IT environments, businesses face increased risk of data loss and extended downtime caused by gaps in the labyrinth of primary and secondary data centers, cloud workloads, operating environments, disaster recovery (DR) plans and colocation facilities.

Yet, despite the complex nature of today’s environments, over half of companies resort to using two or more backup solutions, further adding to the complexity they’re attempting to solve. Never mind delivering on service level agreements (SLAs) or, in many cases, protecting data beyond mission-critical systems and applications.

It seems modern disaster recovery has become more about keeping the lights on than proactively avoiding the impacts of disaster. Because of this, many organizations develop DR plans to recover as quickly as possible during an outage. But, there’s just one problem: when was their most recent backup?  

WOULD YOU EAT DAY-OLD SUSHI?        

Day-old sushi is your backup. That’s right, if you’ve left your California Roll sitting out all night, chances are it’s the same age as your data if you do daily backups. One will cause a nasty bout of food poisoning and the other a massive loss of business data. Horrified or just extremely nauseated?

You may be thinking this is a bit dramatic, but if your last backup was yesterday, you’re essentially willing to accept more than 24 hours of lost business activity. For most companies, losing transactional information for this length of time would wreak havoc on their business. And, if those backups are corrupted, the ability to recover quickly becomes irrelevant.
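To put rough numbers on that exposure, here is a minimal sketch comparing backup intervals; the order rate and average order value are hypothetical:

# Rough sketch of the "day-old sushi" point: how much business activity a backup
# schedule can expose. The schedule, order rate, and order value are hypothetical.

def worst_case_exposure(backup_interval_hours, orders_per_hour, avg_order_value):
    """Business activity at risk if the most recent backup is one full interval old."""
    lost_orders = backup_interval_hours * orders_per_hour
    return lost_orders, lost_orders * avg_order_value

for interval in (24, 1, 0.25):  # daily, hourly, and 15-minute backups
    orders, revenue = worst_case_exposure(interval, orders_per_hour=120, avg_order_value=85.0)
    print(f"{interval:>5} h interval -> up to {orders:,.0f} orders / ${revenue:,.0f} at risk")

Shrinking the backup (or replication) interval is what shrinks the exposure; recovery speed alone doesn't change how much activity is lost.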

While the answer to this challenge may seem obvious (backup more frequently), it’s far from simple. We must remember that in the quest to architect a simple DR plan, many organizations make the one wrong move that becomes their downfall: they use too many solutions, often trying to overcompensate for capabilities offered in one but not the others.

The other, and arguably more alarming reason, is a general lack of understanding about what’s truly viable with any given vendor. While many solutions today can get your organization back online in minutes, the key is minimizing the amount of business activity lost during an unplanned outage. It’s this factor that can easily be overlooked, and one that most solutions cannot deliver.

WHEN A BLIP TURNS BRUTAL

Imagine, for a moment, you have a power failure that brings down your systems and one of two scenarios plays out. In the first, you’re confident you can recover quickly, spinning up your primary application in minutes only to realize the data you’re restoring is hours, or even days, old. Your manager is frantic and your sales team is furious as they stand by and watch every order from the past day go missing. In the second scenario, you’re confident you can recover quickly and spin up your primary application in minutes. This time, however, with data that was synced just a few seconds or minutes ago. This is the difference between a blip on the radar of your internal and external customers and potentially hundreds of thousands (or more) in lost revenue, not to mention damage to your and your organization’s reputation, which is right up there with the financial loss.

For a variety of reasons ranging from perceived cost and complexity to limited network bandwidth and resistance to change, many shy away from deploying DR solutions that could very well enable them to avoid IT disasters. However, leveraging a solution that can keep your “blip” from turning brutal is easily the best kept secret of a DR strategy that works, and one that simply doesn’t.

ASK THESE 10 QUESTIONS TO MAKE SURE YOUR DR SOLUTION ISN’T TRICKING YOU

Many IT leaders agree that the volume of data lost during downtime (your recovery point objective, or RPO) is equally, if not more important than the time it takes to restore (your recovery time objective, or RTO). The trick is wading through the countless solutions that promise 100 percent uptime, but fall short in supporting stringent RPOs for critical systems and applications. These questions can help you evaluate whether your solution will make the cut or leave you in the cold:

  1. Does the solution include on-premises (for quick recovery of one or a few systems), remote (for critical systems at remote locations), private cloud you have already invested in, public cloud (Amazon/Azure) and purpose-built vendor cloud options? Your needs may vary and the solution should offer broad options to fit your infrastructure and business requirements.
  2. How many vendors would be involved in your end-to-end DR solution, including software, hardware, networking, cloud services, DR hypervisors and high availability? How many user interfaces would that entail? A patchwork solution from numerous vendors may increase complexity, management time and internal costs – and, more importantly, it increases the risk of bouncing between vendors if something goes wrong.
  3. Does the solution provide support and recovery for all generations of IT platforms, including non-x86, x86, physical, virtual and cloud instances running Windows and/or Linux?
  4. Does the solution offer both direct-to-cloud and hybrid cloud options? This ensures you can address any business requirement and truly safeguard your IT transformation.
  5. Does the solution deliver sub five-minute, rapid push-button failover? This allows you to continue accessing business-critical applications during a downtime event, as well as power on / run your environment with the click of a button.
  6. Does it support both rapid failover (RTOs) and RPOs of minutes, regardless of network complexity? When interruption happens, it’s vital that you can access business-critical applications with minimal disruption and effectively protect these systems by supporting RPOs of minutes.
  7. Does the solution provide automated incremental failback to bring back all applications and databases in their most current state to your on-premises environment?
  8. Does your solution leverage image-based technology to ensure no important data or configuration is left behind?
  9. Is your solution optimized for low bandwidth locations, being capable of moving large volumes of data to and from the cloud without draining bandwidth?
  10. In the event of a disaster, does the solution give you options for network connectivity, such as point to site VPN, site to site VPN and site to site VPN with IP takeover?

The true value you provide your organization and your customers is the peace of mind and viability of their business when a disaster or downtime event occurs. And even when it’s business as usual, you’ll be able to support a range of needs - such as migrating workloads to a public or private cloud, advanced hypervisor protection, and support of sub-minute RTOs and RPOs - across every IT platform, from UNIX and x86 to public and private clouds.

By keeping these questions in mind, you’ll be better prepared to challenge vendor promises that often cannot be delivered and to select the right solution to safeguard your entire IT infrastructure - when disaster strikes and when it doesn’t. No more day old sushi. No more secrets.

About the Author

As VP and GM of Arcserve Cloud Solutions, Alex Becker leads the company’s cloud and North American sales teams. Before joining Arcserve in April 2018, Alex served in various sales and leadership positions at ClickSoftware, Digital River, Fujitsu Consulting, and PTC.

Ah, Florida. Home to sun-washed beaches, Kennedy Space Center, the woeful Marlins – and one of the most costly tort systems in the country.

A significant driver of these costs is Florida’s “assignment of benefits crisis.”

Today the I.I.I. published a report documenting what the crisis is, how it’s spreading and how it’s costing Florida consumers billions of dollars. You can download and read the full report, “Florida’s assignment of benefits crisis: runaway litigation is spreading, and consumers are paying the price,” here.

An assignment of benefits (AOB) is a contract that allows a third party – a contractor, a medical provider, an auto repair shop – to bill an insurance company directly for repairs or other services done for the policyholder.

...

http://www.iii.org/insuranceindustryblog/study-florida-assignment-benefits-crisis-is-spreading-and-is-costing-consumers-billions-dollars/

Supply chain cartoon

It’s in your company’s best interest not to overlook disaster recovery (DR). If you’re hit with a cyberattack, natural disaster, power outage or any other sort of unplanned disturbance that could potentially threaten your business, you’ll be happy you had a DR plan in place.

It’s important to remember that your business is made up of a lot of moving parts, some of which may reside outside your building and under the control of others. And just because you have the foresight to prepare for the worst doesn’t mean the companies in your supply chain will also take the same precautions.

Verify that all participants within your supply chain have DR and business continuity plans in place, and that these plans are routinely tested and communicated to employees to ensure they can hold up their end of the supply chain in the event of a disaster. If you don’t, the wheels might just fall off your DR plan.

Check out more IT cartoons.

6 Best Free Cloud Storage Providers

Free cloud storage is one of the best online storage deals – the price is right.

Free cloud backup provides a convenient way to share content with friends, family and colleagues. Small businesses and individuals can take advantage of free online file storage to access extra space, for backup and recovery purposes or just store files temporarily.

Free cloud storage also tends to have paid options that are priced for individuals, small businesses, and large enterprises – so they will grow with you. The cloud storage pricing can vary considerably for these options.

The following are the best free cloud backup services, along with their associated advanced cloud storage options:

(Hint: some businesses have discovered that the way to get the most free cloud storage is to combine free cloud services.)

...

http://www.enterprisestorageforum.com/cloud-storage/best-free-cloud-storage-providers.html


These are the five major developments that Jerry Melnick, president and CEO of SIOS Technology, sees in cloud, high availability and IT service management, DevOps, and IT operations analytics and AI in 2019:

 

1. Advances in Technology Will Make the Cloud Substantially More Suitable for Critical Applications

Advances in technology will make the cloud substantially more suitable for critical applications. With IT staff now becoming more comfortable in the cloud, their concerns about security and reliability, especially for five-9’s of uptime, have diminished substantially. Initially, organizations will prefer to use whatever failover clustering technology they currently use in their datacenters to protect the critical applications being migrated to the cloud. This clustering technology will also be adapted and optimized for enhanced operations in the cloud. At the same time, cloud service providers will continue to advance their service levels, leading to the cloud ultimately becoming the preferred platform for all enterprise applications.

2. Dynamic Utilization Will Make HA and DR More Cost-effective for More Applications, Further Driving Migration to the Cloud

Dynamic utilization of the cloud’s vast resources will enable IT to more effectively manage and orchestrate the services needed to support mission-critical applications. With its virtually unlimited resources spread around the globe, the cloud is the ideal platform for delivering high uptime. But provisioning standby resources that sit idle most of the time has been cost-prohibitive for many applications. The increasing sophistication of fluid cloud resources deployed across multiple zones and regions, all connected via high-quality internetworking, now enables standby resources to be allocated dynamically only when needed, which will dramatically lower the cost of provisioning high availability and disaster recovery protections.

3. The Cloud Will Become a Preferred Platform for SAP Deployments

Given the mission-critical nature of SAP and SAP S/4HANA, IT departments have historically implemented them in enterprise datacenters, where the staff enjoys full control over the environment. As the platforms offered by cloud service providers continue to mature, their ability to host SAP applications will become commercially viable and, therefore, strategically important. For CSPs, SAP hosting will be a way to secure long-term engagements with enterprise customers. For the enterprise, “SAP-as-a-Service” will be a way to take full advantage of the enormous economies of scale in the cloud without sacrificing performance or availability.

4. Cloud “Quick-start” Templates Will Become the Standard for Complex Software and Service Deployments

Quick-start templates will become the standard for complex software and service deployments in private, public and hybrid clouds. These templates are wizard-based interfaces that employ automated scripts to dynamically provision, configure and orchestrate the resources and services needed to run specific applications. Among their key benefits are reduced training requirements, improved speed and accuracy, and the ability to minimize or even eliminate human error as a major source of problems. By making deployments more turnkey, quick-start templates will substantially decrease the time and effort it takes for DevOps staff to set up, test and roll out dependable configurations.
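Stripped to its essentials, a quick-start template is a declarative list of resources plus an automation loop that provisions them in order. The Python sketch below is a hypothetical illustration of that idea; the resource entries and the provision() helper stand in for whatever API a real template engine would call.

    # Hypothetical sketch of a quick-start template: declared resources plus
    # an automated provisioning loop. Not any vendor's template format or API.

    TEMPLATE = [
        {"type": "network", "name": "app-vnet", "cidr": "10.0.0.0/16"},
        {"type": "vm", "name": "db-primary", "size": "large", "zone": "1"},
        {"type": "vm", "name": "db-standby", "size": "large", "zone": "2"},
        {"type": "storage", "name": "replica-disk", "gb": 512},
    ]

    def provision(resource):
        """Stand-in for the cloud API call a real template engine would make."""
        print(f"provisioning {resource['type']}: {resource['name']}")

    def deploy(template):
        # Deploying in declared order keeps dependencies (network before VMs)
        # satisfied and removes the manual steps where errors typically creep in.
        for resource in template:
            provision(resource)

    deploy(TEMPLATE)

Because the same template runs identically every time, the error-prone manual steps are taken out of the deployment path entirely.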

5. Advanced Analytics and Artificial Intelligence Will Be Everywhere and in Everything, Including Infrastructure Operations

Advanced analytics and artificial intelligence will continue becoming more highly focused and purpose-built for specific needs, and these capabilities will increasingly be embedded in management tools. This much-anticipated capability will simplify IT operations, improve infrastructure and application robustness, and lower overall costs. Along with this trend, AI and analytics will become embedded in high availability and disaster recovery solutions, as well as cloud service provider offerings to improve service levels. With the ability to quickly, automatically and accurately understand issues and diagnose problems across complex configurations, the reliability, and thus the availability, of critical services delivered from the cloud will vastly improve. 

A COMSAT Perspective

We’ve seen it happen all too often: large populations devastated by natural disasters such as earthquakes, tsunamis, fires and extreme weather. As we’ve witnessed in the past, devastation isn’t limited to natural occurrences; it can also be man-made. Whatever the event may be, natural or man-made, first responders and relief teams depend on reliable communication to provide those most affected the help they need. Dependable satellite communication (SATCOM) technology can be the difference between life and death, between expedient care and delay.

Devastation can occur in the business community as well. Businesses and government entities that depend on the Internet of Things (IoT), as most now do, can face tremendous loss without a communication, or continuity, plan.

How do we stay constantly connected by land, sea or air in vulnerable situations? Today’s teleport SATCOM technology provides reliable, scalable and cost-effective operational resiliency for anyone who depends on connectivity, including IoT applications.

Independent of the vulnerabilities of terrestrial land lines, today’s modern teleports provide a variety of voice and data options that include offsite data warehousing, machine-to-machine (M2M) access, and a secure, reliable connection to private networks and the World Wide Web.

Manufacturing, energy, transportation, retail, healthcare, financial services, smart cities, government and education are all closing the digital divide and becoming more and more dependent on connectivity to conduct business. They all require disaster recovery systems and reliable communications that only satellite communications can provide when land circuits are disrupted.

COMSAT, a Satcom Direct (SD) company, with the SD Data Center, has been working to provide secure, comprehensive, integrated connectivity solutions to help organizations stay connected, no matter the environment or circumstances. COMSAT’s teleports, a critical component in this process, have evolved to keep pace with changing communication needs in any situation.

“In the past, customers would come to COMSAT to connect equipment at multiple locations via satellite using our teleports. Today, the teleports do so much more. They act as a network node, data center, meet-me point and customer support center. They are no longer a place where satellite engineers focus on antennas, RF, baseband and facilities. Today’s teleports are now an extension of the customer’s business ensuring they are securely connected when needed,” said Chris Faletra, director of teleport sales.

COMSAT owns and operates two commercial teleport facilities in the United States. The Southbury teleport is located on the east coast, about 60 miles north of New York City. The Santa Paula teleport is located 90 miles north of Los Angeles on the west coast.

Each teleport has operated continuously for more than 40 years, since 1976. The teleports were built to high standards for providing life and safety services, along with a host of satellite system platforms from meteorological data gathering to advanced navigation systems. As such, they are secure facilities connected to multiple terrestrial fiber networks and act as backup for each other through both terrestrial and satellite transmission pathways.

Both facilities are data centers equipped with advanced satellite antennas and equipment backed up with automated and redundant electrical power sources, redundant HVAC systems, automatic fire detection and suppression systems, security systems and 24/7/365 network operations centers. The teleports are critical links in delivering the complete connectivity chain.

“Our teleport facilities allow us to deliver global satellite connectivity. The teleports provide the link between the satellite constellation and terrestrial networks for reliable end-to-end connectivity at the highest service levels,” said Kevin West, chief commercial officer.

COMSAT was originally created by the Communications Satellite Act of 1962 and incorporated as a publicly traded company in 1963, with the initial purpose of serving as a public, federally funded corporation to develop a commercial and international satellite communications system.

For the past five decades, COMSAT has played an integral role in the growth and advancement of the industry, including being a founding member of Intelsat, operating the Marisat fleet network, and founding the initial operating system of Inmarsat from its two Earth stations.

While the teleports have been in operation for more than 40 years, the technology is continuously upgraded and enhanced to proactively support communication needs. For many years, the teleports provided point-to-point connectivity for voice and low-rate data.

Now data rates are being pushed to 0.5 Gbps with thousands of remotes on the network. The teleports also often serve as the Internet service provider (ISP). They have their own diverse fiber infrastructure to deliver gigabit-class connectivity versus the megabit-class connectivity that was required not so long ago.

All in the Family

In addition to growing the teleport’s capabilities through technological advancements, COMSAT is now a part of the SD family of companies, which further expands its offerings.

SD Land and Mobile, a division of Satcom Direct, offers a wide variety of satellite phone, mobile satellite Internet units and fixed satellite Internet units. SD Land and Mobile ensures SATCOM connectivity is available no matter how remote the location or how limited the cellular and data network coverage may be.

Data security is a critically important subject today. The SD Data Center, a data center wholly owned by Satcom Direct, brings enterprise-level security capabilities to data transmissions in the air, on the ground and over water. The SD Data Center also provides industry-compliant data center solutions and business continuity planning for numerous industries, including healthcare, education, financial, military, government and technology.

“Together, we deliver the infrastructure, products and data security necessary to keep you connected under any circumstance. We have a complete suite of solutions and capabilities for our clients,” said Rob Hill, business development.

Keeping Up with Market Needs and Trends

COMSAT’s pioneering spirit is reflected in the company’s ongoing analysis of, and adjustment to, current market needs and trends. The aero market is currently the fastest-growing market, with new services and higher data rates being offered almost daily. The maritime, mobility and government markets are thriving as well.

No matter what direction the market is headed, COMSAT’s teleports and the SD family of companies will be ready to help clients weather the storm. comsat.com

To learn about SD Land & Mobile, head over to satcomstore.com

For additional information regarding the SD Data Center, access sddatacenter.com

COMSAT’s teleport services are managed by Guy White, director of US teleports. As station director for COMSAT’s Southbury, Connecticut, and Santa Paula, California, teleports, Mr. White is responsible for the day-to-day operations and engineering of both facilities, including program planning, budget control, task scheduling, priority management, personnel matters, maintenance contract control, and other tasks related to teleport operations.

Mr. White began his career in the SATCOM industry in 1980 as a technician at the Southbury facility. Since then, he successively held the positions of senior technician, lead technician, maintenance technician and customer service engineer at Southbury, until he assumed the position of operations manager in 1992 at COMSAT’s global headquarters in Washington D.C. He returned to Southbury as station engineer in 1995 and has served as station director of the Southbury teleport since May of 2000. Mr. White’s responsibilities expanded to include the Santa Paula teleport in May of 2008.

Increase your business continuity (BC) knowledge and expertise by checking out this list of an even dozen top BC resources.

Business continuity is a sprawling, fast-changing, and challenging field. Fortunately, there are a lot of great resources out there that can help you in your drive to improve your knowledge and protect your organization.

In today’s post, I round up a “dynamic dozen” resources that you should be aware of in your role as a business continuity professional.

Some of these might be old friends and others might be new to you. In any case, you might find it beneficial to review the websites and other resources on this list as you update your strategies, perform risk assessments, and identify where to focus your future efforts.

Read on to become a master of disaster. And remember that the most important resource in any BC program is capable, knowledgeable, and well-educated people.

...

https://bcmmetrics.com/key-bc-resources/

By Cassius Rhue, Director of Engineering at SIOS Technology

All public cloud service providers offer some form of guarantee regarding availability, and these may or may not be sufficient, depending on each application’s requirement for uptime. These guarantees typically range from 95.00% to 99.99% of uptime during the month, and most impose some type of “penalty” on the service provider for falling short of those thresholds.

Most cloud service providers offer a 99.00% uptime threshold, which equates to about seven hours of downtime per month. And for many applications, those two-9’s might be enough. But for mission-critical applications, more 9’s are needed, especially given the fact that many common causes of downtime are excluded from the guarantee.
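To put those percentages in perspective, the arithmetic is simple. The short Python sketch below converts an uptime guarantee into the downtime it actually tolerates each month; the 30-day month and the SLA tiers shown are illustrative assumptions rather than any particular provider’s terms.

    # Convert an SLA uptime percentage into the downtime it allows each month.
    # Assumes a 30-day month (43,200 minutes); real SLA accounting varies by provider.

    def allowed_downtime_minutes(uptime_percent, minutes_per_month=30 * 24 * 60):
        """Return the minutes of downtime an SLA tolerates per month."""
        return minutes_per_month * (1.0 - uptime_percent / 100.0)

    for sla in (95.0, 99.0, 99.9, 99.99, 99.999):
        minutes = allowed_downtime_minutes(sla)
        print(f"{sla:7.3f}% uptime -> {minutes:8.1f} min/month (~{minutes / 60:.1f} hours)")

Run as written, the sketch shows that two-9’s permits roughly 7.2 hours of downtime per month, while five-9’s permits well under a minute.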

There are, of course, cost-effective ways to achieve five-9’s high availability and robust disaster recovery protection in configurations using public cloud services, either exclusively or as part of a hybrid arrangement. This article highlights limitations involving HA and DR provisions in the public cloud, explores three options for overcoming these limitations, and describes two common configurations for failover clusters.

Caveat Emptor in the Cloud

While all cloud service providers (CSPs) define “downtime” or “unavailable” somewhat differently, these definitions include only a limited set of all possible causes of failures at the application level. Generally included are failures affecting a zone or region, or external connectivity. All CSPs also offer credits ranging from 10% for failing to meet four-9’s of uptime to around 25% for failing to meet two-9’s of uptime.

Redundant resources can be configured to span the zones and/or regions within the CSP’s infrastructure, and that will help to improve application-level availability. But even with such redundancy, there remain some limitations that are often unacceptable for mission-critical applications, especially those requiring high transactional throughput performance. These limitations include each master being able to create only a single failover replica, requiring the use of the master dataset for backups, and using event logs to replicate data. These and other limitations can increase recovery time during a failure and make it necessary to schedule at least some planned downtime.

The more significant limitations involve the many exclusions to what constitutes downtime. Here are just a few examples, drawn from actual CSP service level agreements, of what is excluded from “downtime” or “unavailability.” Typically not covered are application-level failures resulting from:

  • factors beyond the CSP’s reasonable control (in other words, some of the stuff that happens regularly, such as carrier network outages and natural disasters)
  • the customer’s software, or third-party software or technology, including application software
  • faulty input or instructions, or any lack of action when required (in other words, the inevitable mistakes caused by human fallibility)
  • problems with individual instances or volumes not attributable to specific circumstances of “unavailability”
  • any hardware or software maintenance as provided for pursuant to the agreement

 

To be sure, it is reasonable for CSPs to exclude certain causes of failure. But it would be irresponsible for system administrators to use these exclusions as excuses; the exclusions make it necessary to ensure application-level availability by some other means.

Three Options for Improving Application-level Availability

Provisioning resources for high availability in a way that does not sacrifice security or performance has never been a trivial endeavor. The challenge is especially difficult in a hybrid cloud environment where the private and public cloud infrastructures can differ significantly, which makes configurations difficult to test and maintain, and can result in failover provisions failing when actually needed.

For applications where the service levels offered by the CSP fall short, there are three additional options available based on the application itself, features in the operating system, or through the use of purpose-built failover clustering software.

The HA/DR options that might appear to be the easiest to implement are those specifically designed for each application. A good example is Microsoft’s SQL Server database with its carrier-class Always On Availability Groups feature. There are two disadvantages to this approach, however. The higher licensing fees, in this case for the Enterprise Edition, can make it prohibitively expensive for many needs. The more troubling disadvantage is the need for different HA/DR provisions for different applications, which makes ongoing management a constant (and costly) struggle.

The second option involves using uptime-related features integrated into the operating system. Windows Server Failover Clustering, for example, is a powerful and proven feature that is built into the OS. But on its own, WSFC might not provide a complete HA/DR solution because it lacks a data replication feature. In a private cloud, data replication can be provided using some form of shared storage, such as a storage area network. But because shared storage is not available in public clouds, implementing robust data replication requires using separate commercial or custom-developed software.

For Linux, which lacks a feature like WSFC, the need for additional HA/DR provisions and/or custom development is considerably greater. Using open source software like Pacemaker and Corosync requires creating (and testing) custom scripts for each application, and these scripts often need to be updated and retested after even minor changes are made to any of the software or hardware being used. But because getting the full HA stack to work well for every application can be extraordinarily difficult, only very large organizations have the wherewithal needed to even consider taking on the effort.

Ideally there would be a “universal” approach to HA/DR capable of working cost-effectively for all applications running on either Windows or Linux across public, private and hybrid clouds. Among the most versatile and affordable of such solutions is the third option: the purpose-built failover cluster. These HA/DR solutions are implemented entirely in software that is designed specifically to create, as their designation implies, a cluster of virtual or physical servers and data storage with failover from the active or primary instance to a standby to assure high availability at the application level.

These solutions provide, at a minimum, a combination of real-time data replication, continuous application monitoring and configurable failover/failback recovery policies. Some of the more robust ones offer additional advanced capabilities, such as a choice of block-level synchronous or asynchronous replication, support for Failover Cluster Instances (FCIs) in the less expensive Standard Edition of SQL Server, WAN optimization for enhanced performance and minimal bandwidth utilization, and manual switchover of primary and secondary server assignments to facilitate planned maintenance.

Although these general-purpose solutions are generally storage-agnostic, enabling them to work with storage area networks, shared-nothing SANless failover clusters are normally preferred based on their ability to eliminate potential single points of failure.

Two Common Failover Clustering Configurations

Every failover cluster consists of two or more nodes, and locating at least one of the nodes in a different datacenter is necessary to protect against local disasters. Presented here are two popular configurations: one for disaster recovery purposes; the other for providing both mission-critical high availability and disaster recovery. Because high transactional performance is often a requirement for highly available configurations, the example application is a database.

The basic SANless failover cluster for disaster recovery has two nodes with one primary and one secondary or standby server or server instance. This minimal configuration also requires a third node or instance to function as a witness, which is needed to achieve a quorum for determining assignment of the primary. For database applications, replication to the standby instance across the WAN is asynchronous to maintain high performance in the primary instance.
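The role of the witness is easiest to see in a toy model of quorum voting. The Python sketch below is a minimal illustration under simplified assumptions, not any vendor’s clustering logic, and the node names are hypothetical. It shows why a strict majority is required before any node may act as primary, and why a clean two-node split leaves neither side eligible.

    # Minimal sketch of majority-quorum voting in a failover cluster.
    # Node names are illustrative; each node gets one vote.

    def has_quorum(reachable_nodes, all_nodes):
        """A partition may host the primary only if it sees a strict majority of nodes."""
        return len(reachable_nodes) > len(all_nodes) / 2

    cluster = {"primary", "standby", "witness"}

    # Network partition: the standby can still reach the witness; the primary is isolated.
    print(has_quorum({"standby", "witness"}, cluster))  # True  -> standby may be promoted
    print(has_quorum({"primary"}, cluster))             # False -> isolated primary steps down

    # With only two nodes and no witness, a clean split leaves neither side with a majority.
    two_node = {"primary", "standby"}
    print(has_quorum({"primary"}, two_node))            # False
    print(has_quorum({"standby"}, two_node))            # False

This is the classic guard against a “split-brain” condition, in which two isolated nodes each believe they are the primary and their data diverges.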

The SANless failover cluster affords a rapid recovery in the event of a failure in the primary, making this basic DR configuration suitable for many applications. And because it is capable of detecting virtually all possible failures, including those not counted as downtime in public cloud services, it will work in a private, public or hybrid cloud environment.

For example, the primary could be in the enterprise datacenter with the secondary deployed in the public cloud. Because the public cloud instance would be needed only during planned maintenance of the primary or in the event of its failure—conditions that can be fairly quickly remedied—the service limitations and exclusions cited above may well be acceptable for all but the most mission-critical of applications.

[Figure: This three-node SANless failover cluster has one active and two standby server instances, making it capable of handling two concurrent failures with minimal downtime and no data loss.]

The figure shows an enhanced three-node SANless failover cluster that affords both five-9’s high availability and robust disaster recovery protection. As with the two-node cluster, this configuration will also work in a private, public or hybrid cloud environment. In this example, servers #1 and #2 are located in an enterprise datacenter with server #3 in the public cloud. Within the datacenter, replication across the LAN can be fully synchronous to minimize the time it takes to complete a failover and, therefore, maximize availability.

When properly configured, three-node SANless failover clusters afford truly carrier-class HA and DR. The basic operation is application-agnostic and works the same for Windows or Linux. Server #1 is initially the primary or active instance that replicates data continuously to both servers #2 and #3. If it experiences a failure, the application would automatically failover to server #2, which would then become the primary replicating data to server #3.

Immediately after a failure in server #1, the IT staff would begin diagnosing and repairing whatever caused the problem. Once fixed, server #1 could be restored as the primary with a manual failback, or server #2 could continue functioning as the primary replicating data to servers #1 and #3. Should server #2 fail before server #1 is returned to operation, as shown, server #3 would become the primary. Because server #3 is across the WAN in the public cloud, data replication is asynchronous and the failover is manual to prevent “replication lag” from causing the loss of any data.
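For readers who prefer to see that sequence spelled out, the following Python sketch models the failover chain just described. The node roles, the synchronous-versus-asynchronous replication modes, and the manual-failover rule for the WAN replica are taken from the scenario above; the data structures and function are purely illustrative.

    # Illustrative model of the three-node failover chain described above.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        location: str       # "datacenter" or "public_cloud"
        replication: str    # "sync" over the LAN or "async" over the WAN
        healthy: bool = True

    nodes = [
        Node("server1", "datacenter", "sync"),
        Node("server2", "datacenter", "sync"),
        Node("server3", "public_cloud", "async"),
    ]

    def next_primary(candidates):
        """Pick the first healthy standby; an async (WAN) replica requires manual
        failover so replication lag cannot silently discard committed data."""
        for node in candidates:
            if not node.healthy:
                continue
            if node.replication == "async":
                print(f"{node.name}: async replica -- manual failover required")
            return node
        return None

    nodes[0].healthy = False                             # server #1 fails
    print("new primary:", next_primary(nodes[1:]).name)  # server2, automatic

    nodes[1].healthy = False                             # server #2 fails before repair
    print("new primary:", next_primary(nodes[2:]).name)  # server3, manual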

With SANless failover clustering software able to detect all possible failures at the application level, it readily overcomes the CSP limitations and exclusions mentioned above, and makes it possible for this three-node configuration to be deployed entirely within the public cloud. To afford the same five-9’s high availability based on immediate and automatic failovers, servers #1 and #2 would need to be located within a single zone or region where the LAN facilitates synchronous replication.

For appropriate DR protection, server #3 should be located in a different datacenter or region, where the use of asynchronous replication and manual failover/failback would be needed for applications requiring high transactional throughput. Three-node clusters can also facilitate planned hardware and software maintenance for all three servers while providing continuous DR protection for the application and its data.

By offering multiple, geographically-dispersed datacenters, public clouds afford numerous opportunities to improve availability and enhance DR provisions. And because SANless failover clustering software makes effective and efficient use of all compute, storage and network resources, while also being easy to implement and operate, these purpose-built solutions minimize all capital and operational expenditures, resulting in high availability being more robust and more affordable than ever before.

# # #

About the Author

Cassius Rhue is Director of Engineering at SIOS Technology, where he leads the software product development and engineering team in Lexington, SC. Cassius has over 17 years of software engineering, development and testing experience, and a BS in Computer Engineering from the University of South Carolina. 

Speed up recovery process, improve quality and add to contractor credibility

 

By John Anderson, FLIR

Thermal imaging tools integrated with moisture meters can speed up the post-hurricane recovery process, improve repair quality, and add to contractor credibility. A thermal imaging camera can help you identify moisture areas faster and can lead to more accurate inspections with fewer call backs for verification by insurance companies. Many times, a good thermal image sent via email may be sufficient documentation to authorize additional work, leading to improved efficiency in the repair process.

Post-event process

Contractors need to be able to evaluate water damage quickly and accurately after a hurricane or other storm event. This can be a challenge using traditional tools, especially pinless (non-invasive) moisture meters that offer a nondestructive measurement of moisture in wood, concrete and gypsum. Operating on the principle of electrical impedance, pinless moisture meters read wood using a scale of 5 to 30 percent moisture content (MC); they read non-wood materials on a relative scale of 0 to 100 percent MC. [1] While simple to use, identifying damage with any traditional moisture meter alone is a tedious process, often requiring at least 30 to 40 readings. And the accuracy of the readings is only as good as the user’s ability to find and measure all the damaged locations.

Using a thermal imaging camera along with a moisture meter is much more accurate. These cameras work by detecting the infrared radiation emitted by objects in the scene. The sensor captures that energy and translates it into a visible image. The viewer sees temperatures in the image as a range of colors: red, orange and yellow indicate heat, while dark blue, black or purple signify colder temperatures associated with evaporation or water leaks and damage. Using this type of equipment speeds up the process and tracks the source of the leak—providing contractors with a visual to guide them and confirm where the damage is located. Even a basic thermal imaging camera, one that is used in conjunction with a smart phone, is far quicker and more accurate at locating moisture damage than a typical noninvasive spot meter.
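As a simplified picture of that color mapping, the toy Python sketch below normalizes a handful of surface temperatures and assigns them coarse palette bands, flagging the coldest band as a candidate moisture area. The thresholds and readings are invented for illustration and do not reflect how any particular camera’s palette is calibrated.

    # Toy illustration of mapping surface temperatures onto a coarse thermal palette.
    # All thresholds and readings are invented for illustration.

    def palette_color(temp_f, scene_min, scene_max):
        t = (temp_f - scene_min) / (scene_max - scene_min)  # normalize to 0..1
        if t < 0.25:
            return "dark blue / purple (possible moisture)"  # coldest band
        if t < 0.5:
            return "blue"
        if t < 0.75:
            return "orange"
        return "yellow / red (warmest)"

    readings = [68.0, 66.5, 61.2, 70.4]   # hypothetical wall temperatures, deg F
    lo, hi = min(readings), max(readings)
    for temp in readings:
        print(f"{temp:5.1f} F -> {palette_color(temp, lo, hi)}")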

Infrared Guided Measurement (IGM)

An infrared (IR) thermal imaging camera paired with a moisture meter is a great combination. The user can find the cold spots with the thermal camera and then confirm moisture is present with the moisture meter. This combination is widely used today, prompting FLIR to develop the MR176 infrared guided measurement (IGM™) moisture meter. This all-in-one moisture meter and thermal imager allows contractors to use thermal imaging and take moisture meter readings for a variety of post-storm cleanup tasks. These include inspecting the property, preparing for remediation, and—during remediation— assessing the effectiveness of dehumidifying equipment. The tool can also be used down the road after remediation to identify leaks that may—or may not—be related to the hurricane.

During the initial property inspection, the thermal imaging camera visually identifies cold spots, which are usually associated with moisture evaporation. Without infrared imaging, the user is left to blindly test for moisture—and may miss areas of concern altogether.

While preparing for remediation, a tool that combines a thermal imaging camera with a relative humidity and temperature (RH&T) sensor can provide contractors with an easy way to calculate the equipment they will need for the project. This type of tool measures the weight of the water vapor in the air in grains per pound (GPP), relative humidity, and dew point values. Restoration contractors know how many gallons of water per day each piece of equipment can remove and, using the data provided by the meter, can determine the number of dehumidifiers needed in a given space to dry out the area.
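As a rough illustration of that sizing exercise, the Python sketch below estimates a dehumidifier count from a daily moisture load and a per-unit removal capacity. Every figure in it is a placeholder rather than an industry chart value or a FLIR specification; real sizing follows the restoration standards and the psychrometric readings the crew works from.

    # Back-of-the-envelope dehumidifier count. All figures are placeholders.
    import math

    room_volume_cubic_feet = 20_000        # affected area to dry
    load_pints_per_1000_cf = 8             # assumed daily moisture load per 1,000 cu ft
    unit_capacity_pints_per_day = 70       # assumed rated removal capacity per unit

    daily_load_pints = room_volume_cubic_feet / 1000 * load_pints_per_1000_cf
    units_needed = math.ceil(daily_load_pints / unit_capacity_pints_per_day)

    print(f"Estimated load: {daily_load_pints:.0f} pints/day -> {units_needed} dehumidifier(s)")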

The dehumidifiers reduce moisture and restore proper humidity levels, preventing the build-up of air toxins and neutralizing odors from hurricane water damage. Since the equipment is billed back to the customer or insurance company on a per-hour basis, contractors must balance the costs with the need for full area coverage.

During remediation, moisture meters with built-in thermal imaging cameras provide key data that contractors can use to spot check the drying process and equipment effectiveness over time. In addition, thermal imaging can be used to identify areas that may not be drying as efficiently as others and can guide the placement of drying equipment.

The equipment is also useful after the fact, if, for example, contractors are looking to identify the source of small leaks that may or may not be related to the damage from the hurricane. Using a moisture meter/thermal camera combination can help them track the location and source of the moisture, as well as determine how much is remaining.

Remodeling contractors who need to collect general moisture data can benefit from thermal imaging moisture meters, as well. For example, tracing a leak back to its source can be a challenge. A leak in an attic may originate in one area of the roof and then run down into different parts of the structure. A moisture meter equipped with a thermal imager can help them determine where the leak actually started by tracing a water trail up the roof rafter to the entrance spot.

Choosing the right technology

A variety of thermal imaging tools are available, depending upon whether the contractor is looking for general moisture information, or needs more precise information on temperature and relative humidity levels.

For example, the FLIR MR176 IGM™ moisture meter with replaceable hygrometer is an all-in-one tool equipped with a built-in thermal camera that can visually guide contractors to the precise spot where they need to measure moisture. An integrated laser and crosshair helps pinpoint the surface location of the issue found with the thermal camera. The meter comes with an integrated pinless sensor and an external pin probe, which gives contractors the flexibility to take either non-intrusive or intrusive measurements.

Coupled with a field-replaceable temperature and relative humidity sensor, and automatically calculated environmental readings, the MR176 can quickly and easily produce the right measurements during the hurricane restoration and remediation process. Users can customize thermal images by selecting which measurements to integrate, including moisture, temperature, relative humidity, dew point, vapor pressure and mixing ratio. They can also choose from several color palettes, and use a lock-image setting to prevent extreme hot and cold temperatures from skewing images during scanning.

Also available is the FLIR MR160, a good tool for remodeling contractors looking for general moisture information, such as pinpointing drywall damage from a washing machine, finding the source of a roof leak that shows up in flooring or drywall, or locating ice dams. It has many of the features of the MR176 but does not include the integrated RH&T sensor.

Capturing images with a thermal camera builds contractor trust and credibility

Capturing images of hurricane-related damage with a thermal camera provides the type of documentation that builds contractor credibility and increases trust with customers. These images help customers understand and accept contractor recommendations. Credibility increases when customers are shown images demonstrating conclusively why an entire wall must be removed and replaced.

When customers experience a water event, proper photo documentation can bolster their insurance claims. Including thermal images can improve insurance payout outcomes and speed up the process.

Post-storm cleanup tool for the crew

By providing basic infrared imaging functions, in combination with multiple moisture sensing technologies and the calculations made possible by the RH&T sensor, an imaging moisture meter such as the MR176 is a tool the entire remediation crew can carry during post-storm cleanup.

References

[1] Types of Moisture Meters, https://www.grainger.com/content/qt-types-of-moisture-meters-346, retrieved 5/29/18

Expert service providers update aging technology with minimal disruption

 

By Steve Dunn, Aftermarket Product Line Manager, Russelectric Inc.

Aging power control and automation systems carry risk of downtime for mission-critical power systems, both through the reduced availability of replacement components and through the loss of the know-how needed to service the devices within them. Of course, as components age, their risk of failure increases. Additionally, as technology advances, these same components are discontinued and become unavailable, and over time, service personnel lose the know-how to support the older generation of products. At the same time, though, complete replacement of these aging systems can be extremely expensive, and may also require far more downtime or additional space than these facilities can sustain.

The solution, of course, is the careful maintenance and timely replacement of power control and automation system components. By replacing only some components of the system at any given time, customers can benefit from the new capabilities and increased reliability of current technology, all while uptime is maintained. In particular, expert service providers can provide in-house wiring, testing, and vetting of system upgrades before components even ship to customers, ensuring minimal downtime. These services are particularly useful in healthcare facilities and datacenter applications, where power control is mission-critical and downtime is costly.

Automatic Transfer Switch (ATS) controllers and switchgear systems require some different types of maintenance and upgrades due to the differences in their components; however, the cost savings and improved uptime that maintenance and upgrades can provide are available to customers with either of these types of systems. The following maintenance programs and system upgrades can extend the lifetime of a power control system, minimize downtime in mission-critical power systems, and save costs.

Audits and Preventative Maintenance

Before creating a maintenance schedule or beginning upgrades, bringing an expert technician into a facility to audit the existing system provides long-term benefits and the ability to prioritize. With a full equipment audit, a technician or application engineer who specializes in upgrading existing systems can review the system and provide customers with a detailed migration plan for upgrading it, in order of priority, as well as a plan for preventative maintenance.

Whenever possible, scheduled preventative maintenance should be performed by factory-trained service employees of the power control system OEM, rather than by a third party. In addition to having the most detailed knowledge of the equipment, factory-trained service employees can typically provide the widest range of maintenance services. While third-party testing companies may only maintain power breakers and protective relay devices, OEM service providers will also maintain the controls within the system.

Through these system audits and regular maintenance plans, technicians can ensure that all equipment is and remains operational, and they can identify components that are likely to become problematic before they actually fail and cause downtime in a mission-critical system.

Upgrades for ATS Control Systems with Minimal System Disruption

In ATS controller systems, control upgrades can provide customers with greater power monitoring and metering. In addition, replacing the controls for aging ATS systems ensures that all components of the system controls are still in production, and therefore will be available for replacement at a reasonable cost and turnaround time. In comparison, trying to locate out-of-production components for an old control package can lead to high costs and a long turnaround time for repairs.

The most advanced service providers minimize downtime during ATS control upgrades by pre-wiring the control and fully testing it within their own production facilities. When Russelectric performs ATS control upgrades, a pre-wired, fully tested control package is shipped to the customer in one piece. The ATS is shut down only for as long as it takes to install the new controls retrofit, minimizing disruption.

In addition, new technology also improves system usability, similar to making the switch from a flip phone to a smartphone. New ATS controls from Russelectric, for example, feature a sizeable color screen with historical data and alarm reporting. All of the alerts, details and information on the switch are easily accessible, providing the operator with greater information when it matters most. This upgrade also paves the way for optional remote monitoring through a SCADA or HMI system, further improving usability and ease of system monitoring.

Switchgear System upgrades

For switchgear systems, four main upgrades are possible in order to improve system operations and reliability without requiring a full system replacement: operator interface upgrades, PLC upgrades, breaker upgrades, and controls retrofits. Though each may be necessary at different times for different power control systems, all four upgrades are cost-effective, extend system lifespans, and minimize downtime.

Operator Interface Upgrades for Switchgear Systems

Similar to the ATS control upgrade, an operator interface (OI) or HMI upgrade for a switchgear power control system can greatly improve system usability, making monitoring easier and more effective for operators. This upgrade enables operators to see the system power flow, as well as to view alarms and system events in real time.

Also similar to ATS control upgrades, upgrading the OI also ensures that components will be in production and easily available for repairs. The greatest benefit, though, is providing operators real-time vision into system alerts without requiring them to walk through the system itself and search for indicator lights and alarms. Though upgrading this interface does not impact the actual system control, it provides numerous day-to-day benefits, enabling faster and easier troubleshooting and more timely maintenance.

Upgrades to PLC and Communication Hardware without Disrupting Operations

Many existing systems utilize legacy PLC architecture that is at or approaching end of life. PLC upgrades allow a switchgear control system to be brought up to the newest technology with minimal program changes. Relying on expert OEM service providers for this process can also simplify the upgrade of PLC and communications hardware, protecting customers’ investments in power control systems while extending noticeable system benefits.

A PLC upgrade by Russelectric includes all new PLC and communication hardware for the controls of the existing system, but maintains the existing logic and converts it for the latest technology. Upgrading the technology does not require new logic or operational sequences. As a result, the operations of the system remain unchanged and existing wiring is maintained. This greatly reduces the likelihood that the system will need to be fully recommissioned and minimizes the downtime necessary for testing. Russelectric’s process of converting existing logic and, as previously mentioned, testing components in its own production facility before shipping them for installation keeps a system operational through the entire upgrade process. In addition, Russelectric has developed installation processes that systematically replace the PLCs one at a time, converting the communications from PLC to PLC as components are replaced. This keeps systems operational throughout the process and minimizes the risk of mission-critical power system downtime.

Breaker & Protective Relay Upgrades for Added Reliability and Protection

Breaker upgrades may often be necessary to ensure system protection and reliability, even after many years of normal use. Two types of breaker modifications or upgrades are available for switchgear power control systems: breaker retrofill and breaker retrofit. A retrofill upgrade installs an entirely new device in place of an existing breaker. Retrofill upgrades maintain existing protections, lengthen service life, and provide added benefits such as power metering and other add-on protections, like arc flash protection and maintenance of UL approvals.

Breaker retrofits can provide these same benefits, but they do so through a process of reengineering an existing breaker configuration. This upgrade requires a somewhat more labor-intensive installation, but provides generally the same end result. Whether a system requires a retrofit or retrofill upgrade is largely determined by the existing power breakers in a system.

For medium voltage systems, upgrading protective relays from single-function solid state or mechanical devices to multifunction protective devices improves both the protection and the reliability of a system. Multifunction protective relays provide enhanced protection, lengthen the service life of a system, and add benefits such as power metering, communications and other add-on protections, like arc flash protection.

Russelectric prewires and tests new doors with the new protective devices already installed. This allows for minimal disruption to a system and easy replacement.

Controls Retrofits Revive Aging Systems

For older switchgear systems that predate PLC controls, one of the most effective upgrades for extending system life and serviceability is a controls retrofit. This process includes a fully new control interior, interior control panels, and doors. It enables customers to replace end-of-life components, update to the latest control equipment and sequence standards, and gain the visibility benefits described above for OI upgrades.

The major consideration and requirement is to maintain the switchgear control wiring interconnect location, which eliminates the need for new control wiring between other switchgear, ATSs, and generators. Retrofitting the controls rather than replacing them allows the existing wiring to be maintained and provides a major cost savings for the system upgrade.

Just as with ATS controls retrofits, Russelectric builds the control panels and doors within its own facilities and simulates the non-controls components of the customer’s system that are not being replaced. In doing so, technicians can fully test the retrofit before replacing the existing controls. What’s more, Russelectric can provide customers with temporary generators and temporary control panels so that the existing system can be strategically upgraded, one cubicle at a time, while maintaining a fully operational system.

Benefits of an Expert Service Provider

As described throughout this article, relying on expert OEM service providers like Russelectric amplifies the benefits of power control system upgrades. With the right service provided at the right time by industry experts, mission-critical power control systems, like those in healthcare facilities and datacenters, can be upgraded with a minimum of downtime and costs. OEMs are often the greatest experts on their own products, with access to all of the drawings and documentation for each product, and are therefore most able to perform maintenance and upgrades in the most effective and efficient manner.

Some of the most important cost-saving measures for power control system upgrades can only be achieved by OEM service providers. For example, maintaining the existing interconnect control wiring between power equipment and external equipment provides key cost savings, as it eliminates the need for electrical contractors when installing the new system. Given that steel and copper substructure hardware can greatly outlast control components, retaining these existing components can also provide major cost savings. Finally, having access to temporary controls or power sources, pre-tested components, and the manufacturer’s component knowledge all helps to practically eliminate downtime, saving costs and removing barriers to upgrades. By upgrading a power control system with an OEM service provider, customers with mission-critical power systems gain the latest technology without the worry of downtime and the huge costs associated with full system replacement.

This document gives guidelines for monitoring hazards within a facility as a part of an overall emergency management and continuity programme by establishing the process for hazard monitoring at facilities with identified hazards.

It includes recommendations on how to develop and operate systems for the purpose of monitoring facilities with identified hazards. It covers the entire process of monitoring facilities.

This document is generic and applicable to any organization. The application depends on the operating environment, the complexity of the organization and the type of identified hazards.

...

https://www.iso.org/standard/67159.html

By GREG SPARROW

In the wake of the recent Facebook and Cambridge Analytica scandal, data and personal privacy matters have come to the forefront of consumers’ minds. When an organization like Facebook falls into trouble, big data is often blamed, but is big data actually at fault? When tech companies utilize and contract with third-party data mining companies, aren’t these data collection firms doing exactly what they were designed to do?

IBM markets its Watson as a way to get closer to knowing about consumers; however, when it does just that, it is perceived as an infringement on privacy. Following data privacy and security violations, companies have become notorious for pointing the finger elsewhere. Like any other scapegoat, big data has become an easy way out: a chance for the company to appear to side with, and support, the consumer. Yet many are long overdue in making changes that actually do protect and support the customer, and now find themselves trying to earn back lost consumer trust. Companies looking to please their customers publicly agree that big data is the issue but behind the scenes may be doing little or nothing to change how they interact with these organizations. By pushing the blame onto these data companies, they redirect the problem, casting both their company and consumers as victims of something beyond their control.

For years, data mining has been used to help companies better understand their customers and market environment. Data mining is a means to offer insights from business to buyer or potential buyer. Before companies and resources like Facebook, Google, and IBM’s Watson existed, customers knew very little about their personal data. More recently, the general public has begun to understand what data mining actually is, how it is used, and be aware of the data trail they leave through their online activities.

Hundreds of articles have been written surrounding data privacy, additional regulations to protect individuals’ data rights have been proposed, and some have even been signed into law. With the passing of new legislation pertaining to data, customers are going as far as to file lawsuits against companies that may have been storing personally identifiable information without their knowledge or without proper consent.

State regulations have increasingly propelled the data privacy interest, calling for what some believe might develop into national privacy law. Because of this, organizations are starting to take notice and thus have begun implementing policy changes to protect their organization from scrutiny. Businesses are taking a closer look at the changing trends within the marketplace, as well as the growing awareness from the public around how their data is being used. Direct consumer-facing brands need to be most mindful of the fact that they need to have appropriate security frameworks in place. Perhaps the issue amongst consumers is not the data collected, but how it is presented back to them or shared with others.

Generally speaking, consumers like content and products that are tailored to them. Many customers don’t mind data collection, marketing retargeting, or even promotional advertisements if they know that they are benefiting from them. We as consumers and online users often willingly give up our information in exchange for free access and convenience, but have we thoroughly considered how that information is being used, brokered and shared? If we did, would we pay more attention to who and what we share online?

Many customers have expressed their unease when their data is incorrectly interpreted and relayed. Understandably, they are irritated by irrelevant communications and become fearful when they lack trust in the organization behind the message. Is their sensitive information now in a databank with heightened risk of breach? When a breach or alarming infraction occurs, customers, current and prospective alike, grow even more concerned.

The general public has become acquainted with the positive aspects of big data, to the point where they expect retargeted ads and customized communications. On the other hand, even when consumers have agreed to the terms and conditions, they are quick to blame big data for a negative occurrence rather than the core brand they chose to trust with their information.

About Greg Sparrow:

Greg Sparrow, Senior Vice President and General Manager at CompliancePoint, has over 15 years of experience with information security, cyber security, and risk management. His knowledge spans multiple industries and entities, including healthcare, government, card issuers, banks, ATMs, acquirers, merchants, hardware vendors, encryption technologies, and key management.

 

About CompliancePoint:

CompliancePoint is a leading provider of information security and risk management services focused on privacy, data security, compliance and vendor risk management. The company’s mission is to help clients interact responsibly with their customers and the marketplace. CompliancePoint provides a full suite of services across the entire life cycle of risk management using a FIND, FIX & MANAGE approach. CompliancePoint can help organizations prepare for critical needs such as GDPR with project initiation and buy-in, strategic consulting, data inventory and mapping, readiness assessments, PIMS & ISMS framework design and implementation, and ongoing program management and monitoring. The company’s history of dealing with both privacy and data security, inside knowledge of regulatory actions, and combination of services and technology solutions make CompliancePoint uniquely qualified to help its clients achieve both a secure and compliant framework.

https://blog.sungardas.com/2018/10/machine-learning-cartoon-its-time-to-study-up-for-the-next-wave-of-innovation/

IT cartoon, machine learning

Successful companies understand they have to innovate to remain relevant in their industry. Few innovations are more buzzworthy than machine learning (ML).

The Accenture Institute for High Performance found that at least 40 percent of the companies surveyed were already employing ML to increase sales and marketing performance. Organizations are using ML to raise ecommerce conversion rates, improve patient diagnoses, boost data security, execute financial trades, detect fraud, increase manufacturing efficiency and more.

When asked which IT technology trends will define 2018, Alex Ough, CTO Architect at Sungard AS, noted that ML “will continue to be an area of focus for enterprises, and will start to dramatically change business processes in almost all industries.”

Of course, it’s important to remember that implementing ML in your business isn’t as simple as sticking an educator in front of a classroom of computers – particularly when companies are discovering they lack the skills to actually build machine learning systems that work at scale.

Machine learning, like many aspects of digital transformation, requires a shift in people, processes and technology to succeed. While that kind of change can be tough to stomach at some organizations, the alternative is getting left behind.

Check out more IT cartoons.

 

IT security cartoon

What is the price of network security? If your company understands we live in an interconnected world where cyber threats are continuously growing and developing, no cost is too great to ensure the protection of your crown jewels.

However, no matter how many resources you put into safeguarding your most prized “passwords,” the biggest threat to your company’s security is often the toughest to control – the human element.

It’s not that your employees are intentionally trying to sabotage the company. But, even if you’ve locked away critical information that can only be accessed by passing security measures in the vein of “Mission Impossible,” mistakes happen. After all, humans are only human.

The best course of action is to educate employees on the importance of having good cybersecurity hygiene. Inform them of the potential impacts of a cybersecurity incident, train them with mock phishing emails and other security scenarios, and hold employees accountable.

Retina scanners, complex laser grids and passwords stored in secure glass displays seem like adequate enough security measures. Unfortunately, employees don’t always get the memo that sensitive information shouldn’t be shouted across the office. Then again, they’re only human.

Check out more IT cartoons.

https://blog.sungardas.com/2018/09/it-security-cartoon-why-humans-are-cybersecuritys-biggest-adversary/

Complex system provided by Russelectric pioneers microgrid concept

By Steve Dunn, Aftermarket Product Line Manager, Russelectric Inc.

A unique power control system for Quinnipiac University’s York Hill Campus, located in Hamden, Connecticut, ties together a range of green energy power generation sources with utility and emergency power sources. The powerful supervisory control and data acquisition (SCADA) system gives campus facilities personnel complete information on every aspect of the complex system. Initially constructed when the term microgrid had barely entered our consciousness, the system continues to grow as the master plan’s vision of sustainability comes to fruition.

Hilltop campus focuses on energy efficiency and sustainability

In 2006, Quinnipiac University began construction on its new York Hill Campus, perched high on a hilltop with stunning views of Long Island Sound. Of course, the campus master plan included signature athletic, residence, parking, and activity buildings that take maximum advantage of the site. But of equal importance, it incorporated innovative electrical and thermal distribution systems designed to make the new campus energy efficient, easy to maintain, and sustainable. Electrical distribution requirements, including primary electrical distribution, emergency power distribution, campus-wide load shedding, and cogeneration were considered, along with the thermal energy components of heating, hot water, and chilled water.

The final design includes a central high-efficiency boiler plant, a high-efficiency chiller plant, and a campus-wide primary electric distribution system with automatic load shed and backup power. The design also incorporates a microturbine trigeneration system to provide electrical power while recovering waste heat to help heat and cool the campus. Solar and wind power sources are integrated into the design. The York Hill campus design engineer was BVH Integrated Services, PC, and Centerbrook Architects & Planners served as the architect. The overall campus project won an award for Best Sustainable Design from The Real Estate Exchange in 2011.

Implementation challenges for the complex system

The ambitious project includes numerous energy components and systems. In effect, it was a microgrid before the term was widely used. Some years after initial construction began, Horton Electric, the electrical contractor, brought in Russelectric to provide assistance and recommendations for all aspects of protection, coordination of control, and utility integration – especially protection and control of the solar, wind and combined heating and power (CHP) components. Russelectric also provided project engineering for the actual equipment and coordination between its system and equipment, the utility service, the emergency power sources, and the renewable sources. Alan Vangas, current VP at BVH Integrated Services, said that “Russelectric was critical to the project as they served as the integrator and bridge for communications between building systems and the equipment.”

Startup and implementation were a complex process. The power system infrastructure, including the underground utilities, had been installed before all the energy system components had been fully developed. This made the development of an effective control system more challenging. Some of the challenges arose from utility integration with existing on-site equipment, in particular the utility entrance medium voltage (MV) equipment that had been installed with the first buildings. Because it was motor-operated, rather than breaker-operated, paralleling of generator sets with the utility (upon return of the utility source after a power interruption) was not possible in one direction. They could parallel the natural gas generator to the utility, but because the generator was also used for emergency power, they could not parallel from the utility back to their microgrid.

Unique system controls all power distribution throughout the campus

In response to the unique challenges, Russelectric designed, delivered, and provided startup for a unique power control system, and has continued to service the system since startup. The system controls all power distribution throughout the campus, including all source breakers – utility (15kV and CHP), wind, solar, generators, MV loop bus substations, automatic transfer switches (ATSs), and load controls.

As might be expected, this complex system requires a very complex load control system. For example, it has to allow the hockey rink chillers to run in the summer during an outage but maintain power to the campus. 

Here is the complete power control system lineup:

  • 15 kilovolt (kV) utility source that feeds a ring bus with 8 medium voltage/low voltage (MV/LV) loop switching substations for each building. Russelectric controls the opening and closing of the utility main switch and monitors the utility main’s health and protection.
  • 15kV natural gas 2 megawatt (MW) Caterpillar (CAT) generator with switchgear for continuous parallel to the 15kV loop bus. Russelectric supplied the switchgear for full engine control and breaker operations to parallel with the utility and for emergency island operations.
  • One natural gas 750kW Caterpillar generator used for emergency backup only.
  • One gas-fired FlexEnergy micro turbine (Ingersoll Rand MT250 microturbine) for CHP distributed energy and utility tie to the LV substations. 
  • Control and distribution switchgear that controls the emergency, CHP, and utility. 
  • 12 ATSs for emergency power of 4 natural gas engines in each building. 
  • 25 vertical-axis wind turbines that generate 32,000 kilowatt-hours of renewable electricity annually. The wind turbines are connected to each of the LV substations. Russelectric controls the breaker output of the wind turbines and instructs the wind turbines when to come on or go off.
  • 721 rooftop photovoltaic panels gathering power from the sun, saving another 235,000 kilowatt-hours (kWh) per year. These are connected to each of the 3 dormitory LV substations. Russelectric controls the solar arrays’ breaker output and instructs the solar arrays when to come on or go off.

The system officially only parallels the onsite green energy generation components (solar, wind and micro turbine) with the utility, although they have run the natural gas engines in parallel with the solar in island mode for limited periods.

Since the initial installation, the system has been expanded to include additional equipment, including another natural gas generator, additional load controls, and several more ATSs.

SCADA displays complexity and detail of all the systems

Another feature of the Russelectric system for this project was the development of the Russelectric SCADA system, which takes the complexity and detail of all the systems and displays it for customer use. Other standard SCADA systems would not have been able to tie everything together with one-line diagrams and front views of equipment that let operators visually see the entire system.

While the Russelectric products used are known for their quality and superior construction, what really made this project stand out is Russelectric’s ability to handle such an incredibly wide variety of equipment and sources without standardizing on the type of generator or power source used. Rather than requiring use of specific players in the market, the company supports any equipment the customer wishes to use – signing on to working through the challenges to make the microgrid work. This is critical to success when the task is controlling multiple traditional and renewable sources.

By HANK YEE

https://www.anexinet.com/blog/disaster-recovery-components-within-policies-and-procedures/

While many would consider a discussion about disaster recovery policies and procedures boring (I certainly don’t), in reality, policies and procedures are 110% vital to a successful DR. Your organization could have the greatest technology in the world, but without a solid plan and policy guide in place, your disaster recovery efforts are doomed to fail.

A tad hyperbolic, perhaps. But the lack of properly updated documentation is one of the biggest flaws I see in most companies’ DR plans.

A disaster recovery plan is a master plan of a company’s approach to disaster recovery. It includes or references items like runbooks, test plans, communications plan, and more. These plans detail the steps an organization will take before, during, and after a disaster, and are usually related specifically to technology or information. Having it all written down ahead of time helps streamline complex scenarios, ensures no steps are missing from each process, and provides guidance around all elements associated with the DR plan (e.g. runbooks and test plans).

Creating a plan also provides the opportunity for discussion around topics that have likely not been considered before or are assumed to be generally understood.

*Which applications or hardware should be protected?
*When, specifically, should a disaster be declared, who can make that declaration, and who needs to be notified?
*Have response-tiers been identified depending on the type of disaster?
*Which applications correspond to each tier?

The most critical condition of a successful DR plan is that it be kept updated and current—frequently. An outdated DR plan is a weak DR plan. Applications change. Hardware changes. And organizations change, both in terms of people and locations. Dealing with a disaster is hard enough, but no one needs the added pressure of trying to correlate an outdated organization chart with a current one. Or trying to map old server names and locations to existing ones. Pick a time-metric and a change-metric for when your DR plan will be updated (e.g. every six months, every year, upon a major application update to a mission-critical system). Pick some conditions and stick to them.

1) Runbooks
Runbooks are step-by-step procedure guides for select tasks within an IT organization. These reference guides are tailored to describe how your organization configured and implemented a specific technology or software, and they focus on the tasks the relevant teams would need to perform in the event of a disaster.

Examples:
*How to start up or shut down an application/database/server.
*How to fail over a server/database/storage array to another site.
*How to check whether an application/database has started up correctly.

The goal is to make your runbooks detailed enough that any proficient IT professional could successfully execute the instructions, regardless of their familiarity with your organization. A runbook can consist of one big book or several smaller ones. They can be physical or electronic (or both). Ideally, they are stored in multiple locations.
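To make that concrete, here is a minimal sketch (in Python, with purely hypothetical hosts, commands, and owners) of how runbook steps can be captured so each one carries its command, its verification, and its owner:

```python
# Minimal sketch of a runbook captured as structured data.
# All hosts, commands, and owners are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class RunbookStep:
    description: str      # what the step does, in plain language
    command: str          # exact command an unfamiliar admin would run
    expected_result: str  # how to confirm the step succeeded
    owner: str            # team or role responsible for the step

db_failover_runbook = [
    RunbookStep(
        description="Stop the application tier so no writes hit the primary DB",
        command="systemctl stop example-app.service   # run on app01, app02",
        expected_result="'systemctl status example-app' reports inactive on both hosts",
        owner="Application Support",
    ),
    RunbookStep(
        description="Promote the standby database at the DR site",
        command="pg_ctl promote -D /var/lib/pgsql/data   # run on dr-db01",
        expected_result="Database accepts read/write connections on dr-db01",
        owner="Database Administration",
    ),
]

def print_runbook(steps):
    """Render the runbook so it can be printed and stored offline as well."""
    for i, step in enumerate(steps, start=1):
        print(f"Step {i}: {step.description}")
        print(f"  Command : {step.command}")
        print(f"  Verify  : {step.expected_result}")
        print(f"  Owner   : {step.owner}\n")

if __name__ == "__main__":
    print_runbook(db_failover_runbook)
```

Whatever the format—script, spreadsheet, or wiki page—the point is that the command, the expected result, and the owner are written down, not remembered.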

Nobody likes documentation. But in a disaster, emotions and stress can run very high. So why leave it all up to memory? Having it all documented gives you a reliable back-up option.

Depending on the type of disaster, it’s possible the necessary staff members won’t be able to get online, specifically the person who specializes in Server X, Y, or Z. Perhaps the entire regional team is offline and a server/application has failed. A Linux admin is available, but he doesn’t support this server day in and day out. Now, suddenly, he’s tasked with starting up the server and applications. Providing this admin with a guide on what to do, what scripts to call, and in what order might just be the thing that saves your company.

And if your startup is automated—first off, great. But how do you check to be sure everything started up correctly? Which processes should be running? What log should you check for errors? Is there a status code that can be referenced? Maybe this is a failover scenario: the server is no longer located in Philadelphia, and certain configuration values need to be changed. Which values are they, and what should they be changed to?
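As a rough sketch of what such a post-startup check might look like—the process names, log path, and per-site values below are invented for illustration—the checks themselves live in the runbook rather than in anyone’s memory:

```python
# Rough sketch of a post-startup verification check.
# Process names, log paths, and per-site values are hypothetical examples.
import subprocess
from pathlib import Path

REQUIRED_PROCESSES = ["postgres", "example-app"]     # processes that must be running
LOG_FILE = Path("/var/log/example-app/startup.log")  # log to scan for errors
SITE_CONFIG = {                                      # values that change on failover
    "philadelphia": {"db_host": "db01.phl.example.com"},
    "dr-site":      {"db_host": "db01.dr.example.com"},
}

def process_running(name: str) -> bool:
    """Return True if at least one process matches the given name."""
    result = subprocess.run(["pgrep", "-f", name], capture_output=True)
    return result.returncode == 0

def log_has_errors(log_file: Path) -> bool:
    """Scan the startup log for error lines, if the log exists."""
    if not log_file.exists():
        return True  # a missing log is itself a failure worth flagging
    return any("ERROR" in line for line in log_file.read_text().splitlines())

def verify_startup(site: str) -> bool:
    ok = True
    for proc in REQUIRED_PROCESSES:
        if not process_running(proc):
            print(f"FAIL: required process '{proc}' is not running")
            ok = False
    if log_has_errors(LOG_FILE):
        print(f"FAIL: errors found (or log missing) in {LOG_FILE}")
        ok = False
    print(f"INFO: expected db_host for {site}: {SITE_CONFIG[site]['db_host']}")
    return ok

if __name__ == "__main__":
    print("Startup verified" if verify_startup("dr-site") else "Startup checks failed")
```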

Runbooks leave nothing to memory or chance. They are the ultimate reference guide and, as such, should capture every detail of your organization’s DR plan.

2) Test Plans
Test Plans are documents that detail the objects, resources, and processes necessary to test a specific piece of software or hardware. Like runbooks, they serve as a framework or guideline to aid in testing and can help eliminate the unreliable memory factor from the disaster equation. Usually, test plans are synonymous with Quality Assurance departments. But in a disaster, they can be a massive help with organization and accuracy.

Test Plans catalog the test’s objectives, and the steps needed to test those objectives. They also define acceptable pass/fail criteria, and provide a means of documenting any deviations or issues encountered during testing. They are generally not as detailed as runbooks, and in many cases will reference the runbooks required for a specific step. 
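For illustration only, a test plan entry might be captured along these lines (the objective and thresholds are made up), with pass/fail criteria and any deviations recorded alongside the steps:

```python
# Illustrative sketch of a DR test plan entry with pass/fail criteria
# and a record of deviations. Objectives and thresholds are made up.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    objective: str           # what this test is meant to prove
    steps: List[str]         # high-level steps; detail lives in the runbooks
    pass_criteria: str       # measurable condition for a pass
    result: str = "not run"  # "pass", "fail", or "not run"
    deviations: List[str] = field(default_factory=list)  # issues observed during the test

failover_test = TestCase(
    objective="Database fails over to the DR site within the agreed recovery time",
    steps=[
        "Declare a simulated outage of the primary site",
        "Execute the database failover runbook",
        "Run the post-startup verification checks",
    ],
    pass_criteria="Application reads and writes succeed at the DR site within 60 minutes",
)

# During the exercise, testers record the outcome and anything unexpected.
failover_test.result = "pass"
failover_test.deviations.append("DNS change took 12 minutes longer than documented")

print(failover_test.objective, "->", failover_test.result)
```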

3) Crisis Communication Plan
A Crisis Communication Plan outlines the basics of who, what, where, when, and how information gets communicated in a crisis. As with the above, the goal of a Crisis Communication Plan is to get many items sorted out beforehand, so they don’t need to be made up and/or decided upon in the midst of a trying situation. Information should be communicated accurately and consistently, and made available to everyone who needs it. This includes not only technical engineers but also your Marketing or Public Relations teams.

Pre-defined roles and responsibilities help alleviate the pressure on engineers to work in many different directions at once and can allow them to focus on fixing the problems while providing a nexus for higher-level managers to gather information and make decisions.
 
Remember, the best DR plans prepare your organization before, during, and after a disaster, focus as much on people as on data and computers, and are backed by creators who have taken the time and money to test, implement, and update them over time – engaging the entire company for a holistic approach.

As an Anexinet Delivery Manager in Hybrid IT & Cloud Services, Hank Yee helps design, implement and deliver quality solutions to clients. Hank has over a decade of experience with Oracle database technologies and Data Center Operations for the Pharmaceutical industry, with a focus on disaster recovery, and datacenter and enterprise data migrations.

BY BOB GRIES

https://www.anexinet.com/blog/3-key-disaster-recovery-components-infrastructure/

Disaster Recovery (DR) is a simple concept that unfortunately gets complex very quickly. At a high level, disaster recovery ensures the persistence of critical aspects of your business during or following a disaster, whether natural or man-made. How does one achieve this persistence? That’s where things can become very complex.

With regard to DR infrastructure, when most people talk DR they want to get right into the nitty-gritty: What are the optimal data protection parameters? What’s the ideal configuration for a database management/monitoring solution? And that’s all well and good, but let’s worry about the cake first, and the frosting later.

So, let’s take it up a few levels. Within Infrastructure, you have the systems, the connectivity, and the data.

1. Systems
Production Systems
These include servers, compute power, and non-human workhorses. You use these to process your orders, make decisions, and process your business-critical data. They may be physical, virtual, or in the cloud, but you know each one by name. You start your day by logging into one and every step of your workday involves some server doing something to assist you. Without it, you lose your customer-facing website, you lose your applications, and you lose everything else that makes your business an efficient organization.

Disaster Recovery Systems
If your production systems had a twin, the DR Systems would be it. These are duplicate, regularly tested, and fully capable systems, able to take over all the work you depend on your production systems for, the moment a failure occurs. Ideally, your DR Systems are housed in a different facility than the production system and are able to run at full capacity with no assistance from the production systems.

2. Connectivity
This is how everything talks to one another. Your production systems are connected by at least two separate network switches. If you use a SAN, you will have two separate fabrics. If you use the cloud, your connection to the cloud will also be redundant. Any secondary offices, remote data centers, or remote locations also use redundant network connections. Any replication flows over these lines. Your network provides connectivity to all your production and DR systems, such that end users can access their data, systems, and applications seamlessly, regardless of the state of your environment.

3. Data
The Hot Copy
This is the data your business depends on: the active dataset that your applications, users, and databases read and write to each day. Typically, this data is RAID-protected, but further protections are necessary to ensure the data is safe.

The Backup Copy
This data set can exist in many forms, including a backup storage array, replicated storage, checkpoints, journaled file systems, etc. It is meant as a low Recovery Point Objective (RPO) option you can quickly use to restore data and handle non-catastrophic recoveries.

The Offsite Copy
This data is for long-term storage and is usually kept on a different medium than the Hot Copy and Backup Copy, including on tape, on removable media, in the cloud, or on a dedicated backup array. This data should be stored offsite and tested regularly. Additionally, this copy should be able to restore the data independent of any existing infrastructure and can be used to recover from a full disaster.
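To make the recovery-point distinction between these copies concrete, here is a small, purely illustrative sketch that flags any copy whose age exceeds an assumed target RPO for its tier (the targets are examples, not recommendations):

```python
# Purely illustrative sketch: flag copies whose age exceeds an assumed target
# Recovery Point Objective (RPO) for their tier. The targets are examples only.
from datetime import datetime, timedelta

RPO_TARGETS = {
    "backup":  timedelta(hours=1),    # low-RPO copy for non-catastrophic recoveries
    "offsite": timedelta(hours=24),   # long-term copy for full-disaster recovery
}

# Hypothetical timestamps of the most recent successful copy in each tier.
# (The hot copy is the live dataset itself, so no RPO applies to it.)
last_copy = {
    "backup":  datetime.now() - timedelta(minutes=45),
    "offsite": datetime.now() - timedelta(hours=30),
}

for tier, target in RPO_TARGETS.items():
    age = datetime.now() - last_copy[tier]
    status = "OK" if age <= target else "EXCEEDS TARGET RPO"
    print(f"{tier:7s} copy is {age} old (target {target}): {status}")
```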

With those three areas identified, your business may begin holding strategic planning sessions to determine exactly which technologies and DR path are most appropriate for your organization and applications.

Bob Gries is a Senior Storage Consultant at Anexinet. Bob has specialized in Enterprise Storage/Backup Design and Implementation Services for over 13 years, utilizing technologies from Dell EMC, HPE, CommVault, Veeam and more.

By SARAH YOUNG

https://www.anexinet.com/blog/is-your-disaster-recovery-plan-still-sufficient-to-handle-unexpected-disasters/

A “disaster” is defined as a sudden, unexpected event that causes great damage and disruption to the functioning of a community and results in material, economic and environmental loss that strains that community’s resources. Disasters may occur naturally or be man-made. They may range in damage, from trivial—causing only brief delays and minimal loss—to catastrophic, costing hundreds of thousands to fully recover from.

The Insurance Information Institute asserts that in 2016 alone, natural catastrophes accounted for $46 billion in insured losses worldwide, while man-made disasters resulted in additional losses of approximately $8 billion.

At some point, we all experience a disaster: car accidents, fires, floods, tornados, job loss, etc. When it comes to routine or common disasters, we generally have a good idea what our recovery plan should be. If a pipe breaks, causing a flood, you call a plumber to fix the pipe and maybe a cleaning service to mop up. When disaster strikes a business, standard plans should be in place to quickly recover critical assets so as not to interrupt essential computer systems and production.

Meanwhile, the typical enterprise IT team is constantly on guard for the standard sources of disaster: power outages, electrical surges, and water damage, each with the potential to cripple data centers, destroy records, halt revenue-generating apps, and freeze business activities. Since these types of disasters are so common, we’ve developed ways to recover from them quickly; we’ve developed plans of action. But what about disasters we’ve never encountered or haven’t prepared for? Are we sure our recovery plans will save us from incurring huge costs, especially in the case of disasters we can’t predict?

In the last two decades, unforeseen disasters have hit 29 states, causing catastrophic problems for companies. Two planes crash into buildings in lower Manhattan, wiping out major data centers. Multi-day city-wide blackouts result in massive data loss. Hurricanes force cities to impose a mandatory closure of all non-essential work. These disasters not only created IT nightmares, they also exposed a whole host of DR-related issues companies had not yet even considered.

Business leaders forget how hard it is to think clearly under the stress and pressure of a sudden and unexpected event. Often, a sense of immunity or indifference to disasters prevails, specifically toward catastrophic events. Since these types of disasters tend to be rare or unpredictable, there seems to be no sense in pouring money into a one-off DR plan for a disaster that has a slim chance of ever occurring, right? Wrong.

A standard DR plan provides for what happens after a disaster has occurred. The best disaster recovery plan takes a holistic approach, preparing your company before, during, and after disaster strikes. Disaster recovery is as much about your people as it is about your data and computers. It’s about having a crisis communication plan (and about having plans, period). It’s about taking the time, and spending the money, to test and implement your DR plans. From dedicated DR personnel and DR checks to plan updates and documentation, an effective DR plan needs to engage the entire company.

So what should your DR plan look like? How will you know when it’s ready? How do you keep your DR plan from failing? Proper planning, design, and implementation of a solid DR plan can mean the difference between downtime that lasts for days and an outage that’s resolved in under an hour.

As an Anexinet Project Manager in Cloud & Hybrid IT Services, Sarah Young partners with clients and engineers to ensure projects are delivered effectively and efficiently while meeting all stakeholder expectations. Having deftly handled complex client issues for over a decade, Sarah excels at translating technical requirements for audiences who may not be as technically fluent.

By STEVE SILVESTRI

https://www.anexinet.com/blog/6-best-practices-for-business-continuity-and-disaster-recovery-planning/


These days, organizations must be prepared for everything and anything: from cyber-threats to natural disasters. A BC/DR plan is your detailed process foundation, focused on resuming critical business functionality while minimizing losses in revenue (or other business operations).
Business leaders forget how hard it is to think clearly under the intense pressure of a sudden and unexpected disaster event, especially one that has the potential to severely impact the success of an organization. With the number of threat vectors looming today, it’s critical to protect your organization against future threats and prepare for recovery from the worst. Below are six best practice tips for creating a BC/DR plan that encompasses all areas of your business.

1. Devise a consistent plan, and ensure all plan components are fully accessible in the event of a major disaster.
You may prepare for weeks or even months, creating the best documentation and establishing resources to run to in a time of crisis. However, those resources are useless if they’re unavailable when most needed. Many companies document their BC/DR plan in Excel, Visio, Word, or as PDFs. And while this isn’t a bad approach, the files need to be stored in a consistently available location—whether that’s in the cloud, on physical paper, or in a DR planning system. Ensuring unhindered access should be a top priority; an inaccessible BC/DR plan is just as bad as not having a plan at all.

2. Maintain full copies of critical data OUTSIDE your production region.
If your organization keeps its primary data center in Houston, don’t build a secondary backup data center 30 miles down the road. Recent events have taught us that closely located data centers can all be severely impacted by the same disaster, hindering business services and data availability across nearby locations.
A general rule for maintaining a full copy of critical data and services is to keep it at least 150 miles from the primary data center. Of course, cases may exist where keeping a secondary data center close to its primary is recommended; however, these cases should be assessed by an expert consultant before pursuing this approach.
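As a back-of-the-envelope illustration of that 150-mile guideline, the sketch below computes the great-circle distance between two candidate sites (the coordinates are hypothetical) and flags a secondary site that sits too close to the primary:

```python
# Back-of-the-envelope check of the "at least 150 miles apart" guideline
# using the haversine great-circle distance. Coordinates are hypothetical.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0
MIN_SEPARATION_MILES = 150.0

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

primary = (29.76, -95.37)    # e.g. a primary data center in Houston
secondary = (29.56, -95.10)  # a candidate site roughly 20 miles away

d = distance_miles(*primary, *secondary)
print(f"Separation: {d:.0f} miles ->",
      "OK" if d >= MIN_SEPARATION_MILES else "too close for regional disasters")
```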

3. Keep your BC/DR plan up to date and ensure any production changes are reflected.
A lot may change between the inception of your BC/DR plan and the moment disaster strikes. For this reason, it should be a priority for your organization to maintain an up-to-date plan as production changes come into play.

Consider: your organization has successfully implemented a new plan, with recovery points and times all proven to work. Six months later, you’ve deployed a new application system that runs in the cloud instead of on-premises. Without an updated BC/DR plan, all your hard work would be for nothing, since you wouldn’t be able to quickly recover the new system. Keeping your plan in alignment with the production environment and practicing change management are important methods for staying on top of your latest additions.

4. Test your plan in a realistic way to make sure it works.
Without testing, a plan will never have successful execution to back itself up. In the chaos of a crisis, your untested plan will likely fail since people won’t know which parts of the plan work and which don’t. Your testing should encompass all possibilities—from a small process failing, to the entire facility being wiped out by a tornado. Included with these tests should be detailed explanations describing what’s working in the plan and what isn’t. These will develop and mature your plan over time, until business continuity is maintained even if something small is failing, and your organization doesn’t suffer any losses in revenue or customer trust. Testing also allows for recovery practice training, which will also reduce recovery time when real chaos occurs.

5. Leverage the use of virtualization
Load-balancing and failover systems are becoming more popular in the technology sector as cyber threats and natural disasters continue to affect business operations. Ensuring users are seamlessly transferred to a secondary environment creates the illusion that nothing is actually happening to your environment, allowing users to continue enjoying your services without disruption.

6. Create your plan with the mentality that anything can happen.
Regardless of how many times you test your plan, review each recovery process, or go over the points of failure, something may still go awry when the real thing happens. Always have a trusted team or experienced partner who can assist you in covering any gaps, and swiftly pull your organization out of a jam. Be sure to compose a list of priorities and, for each one, ask yourself: if this fails, what will we need to do to recover? Assume necessary personnel are not available and even make your team trade roles during the recovery period in order to spread awareness. Keep your team innovative and sharp for when something goes wrong so at least one person is aware of the right steps to take in each specific area.

Steve Silvestri is a consultant on Anexinet's ISG team, focusing on Cyber Security issues, including Data Loss Prevention, Digital Forensics, Penetration Testing, and Incident Response.

With the ever-growing number of social media platforms, it’s inevitable that you find yourself using at least one form of social media throughout the day. As of 2017, 77% of US adults are on social media; odds are, you are using one of them. In the professional world, social media is a great way to network, build B2B partner relationships, and open avenues of communication with other individuals in your industry. Here are some interesting facts about the platform that may boost your professionalism the most: LinkedIn.

As of 2018, LinkedIn has over 500 million members. Of those members, 260 million log in monthly, and of those monthly users, 40% are daily users. That makes for a great tool to use in building beneficial business relationships with others in the business continuity and disaster recovery industry. In fact, among Fortune 500 companies, LinkedIn is the most used social media platform. Most users of LinkedIn are high-level decision makers who leverage the platform to accomplish a variety of business tasks. Whether it’s gathering news, marketing, networking, or hiring, the opportunities are endless. Ninety-one percent of executives rated LinkedIn as their number one choice for professionally relevant content. Content consumption has jumped tremendously in recent years, so LinkedIn is no longer just person-to-person interaction; it is also useful for reading and sharing business content among a large set of people, across many different industries, including business continuity and disaster recovery.

...

https://www.bcinthecloud.com/2018/09/the-power-of-linkedin-for-bc-dr/

Wednesday, 19 September 2018 16:31

The Power of LinkedIn for BC/DR

Combining business continuity and risk management into a single operational process is the most effective way to prepare for the worst.

By ROBERT SIBIK

Combining two seemingly unrelated entities to make a better, more useful creation is a keystone of innovation. Think of products like the clock radio and the wheeled suitcase, or putting meat between two slices of bread to make a sandwich, and you can see how effective it can be to combine two outwardly disparate things.

This viewpoint is useful in many scenarios, including in the business realm, especially when it comes to protecting a business from risk. Many companies treat risk management and business continuity as different entities under the same workflows, and that is a mistake; to be optimally effective, the two must be combined and aligned.

Mistaken Approaches

Business continuity traditionally starts with a business impact assessment, but many companies don’t go beyond that, making no tactical plan or strategic decisions on how to reduce impact once they have identified what could go wrong. The risk management process has been more mature, identifying various ways to treat problems, assigning them to someone, and trying to reduce the likelihood of the event occurring, but not doing much to reduce the impact of the event.

Organizations must move beyond simplistic goals of creating a business continuity plan using legacy business continuity/disaster recovery tools, or demonstrating compliance to a standard or policy using legacy governance, risk management and compliance software tools. Those approaches incorrectly move the focus to, “do we have our plans done?” or create a checklist mentality of, “did we pass the audit?” 

In addition to legacy approaches, benchmarking must be avoided, because it can provide misleading conclusions about acceptable risk and appropriate investment, and create a false sense of having a competitive advantage over others in the industry. Even companies in the same industry should have their own ideas about what constitutes risk, because risks are driven by business strategy, process, how they support customers, what they do, and how they do it.

Take the retail industry. Two organizations may sell the same basic product – clothing – but one sells luxury brands and the other sells value brands. The latter store’s business processes and strategies will focus on discounts and sales as well as efficiencies in stocking and logistics. The former will focus on personalized service and in-store amenities for shoppers. These two stores may exist in the same industry and sell the same thing, but they have vastly different types of merchandise, prices and clientele, which means their shareholder value and business risks will look very different from each other.

Businesses need to understand levels of acceptable risk in their individual organization and map those risks to their business processes, measuring them based on how much the business is impacted if a process is disrupted. By determining what risks are acceptable, and what processes create a risk by being aligned too closely to an important strategy or resource, leadership can make rational decisions at the executive level on what extent they invest in resilience – based not on theory, but on reality.

Creating an Integrated Approach with the Bowtie Model

Using the bowtie model, organizations can appropriately marry business continuity and risk management practices.

The bowtie model – based on the preferred neckwear of high school science teachers and Winston Churchill – uses one half of the bow to represent the likelihood of risk events and the other half to represent mitigation measures. The middle – the knot – represents a disaster event, which may comprise disruptions like IT services going down, a warehouse fire, a workforce shortage or a supplier going out of business.

To use this model, first, determine every possible disruption to your organization through painstaking analysis of your businesses processes. Then determine the likelihood of each disruption (the left part of the bow), as well as mitigating measures one can take to reduce the impact of the disruption should it occur (the right part of the bowtie).

Consider as an example the disruptive event of a building fire – the “knot” in this case. How likely is it? Was the building built in the 1800s and made of flammable materials like wood, or is it newer steel construction? Are there other businesses in the same building that would create a higher risk of fire, such as a restaurant? Do employees who smoke appropriately dispose of cigarettes in the right receptacle?

On the other half of the bowtie are the measures that could reduce the impact of a building fire, such as ensuring water sources and fire extinguishers throughout the building, testing sprinkler systems, having an alternate workspace to move to if part or all of the office is damaged during a fire, and so on.
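A minimal sketch of the bowtie idea in code, using made-up likelihoods and impact reductions: the left half scores how likely the event is, the right half scores how much the prepared mitigations shrink its impact, and the “knot” is the event itself.

```python
# Minimal sketch of the bowtie model with made-up numbers.
# Left side: likelihood drivers of the event. Right side: mitigations that
# reduce the impact should the event (the "knot") occur.

event = "building fire"

# Likelihood factors (left half of the bow), scored 0..1.
likelihood_factors = {
    "1800s wooden construction": 0.15,
    "restaurant tenant in building": 0.10,
    "smoking materials poorly disposed": 0.05,
}

# Mitigations (right half of the bow) and the fraction of impact each removes.
mitigations = {
    "tested sprinkler system": 0.40,
    "fire extinguishers and water sources throughout": 0.15,
    "alternate workspace ready": 0.30,
}

baseline_impact = 1_000_000  # assumed unmitigated loss, in dollars

likelihood = min(1.0, sum(likelihood_factors.values()))
impact_remaining = baseline_impact
for fraction_removed in mitigations.values():
    impact_remaining *= (1 - fraction_removed)

print(f"Event: {event}")
print(f"Estimated likelihood: {likelihood:.0%}")
print(f"Unmitigated impact : ${baseline_impact:,.0f}")
print(f"Mitigated impact   : ${impact_remaining:,.0f}")
print(f"Expected annual loss (rough): ${likelihood * impact_remaining:,.0f}")
```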

The mitigating measures are especially key here, as they aren’t always captured in traditional insurance- and compliance-minded risk assessments. Understanding mitigation measures as well as the likelihood of risk events can change perspectives on how much risk an organization can take, because the organization then will understand what its business continuity and response capabilities are. Mitigation methods like being ready to move to an alternate workspace are more realistic than trying to prevent events entirely; at some point, you can accept the risk because you know how to address the impact.

A Winning Combination

Where risk management struggles is where business continuity can shine: understanding what creates shareholder value, what makes an organization unique in its industry among its competitors, and how it distinguishes itself. Alternately, risk management brings a new perspective to the idea of business continuity by focusing on types of disruptions, their likelihoods, and how to prevent them.

To create a panoramic view of where an organization can be harmed if something bad happens, businesses must merge the concepts of business resilience (dependencies, impacts, incident management, and recovery) and risk management (assessment, controls, and effectiveness) and optimize them.

Bringing the two views together and performing holistic dependency mapping of the entire ecosystem allows an organization to treat both as a single operational process, bringing data together to create actionable information (based on the “information foundation” the company has created about impacts to business operations that can result from a wide variety of disruptions and risks) to empower decisive actions and positive results.

Using the bowtie method to create this holistic view, companies get the best of both worlds and ensure they understand the possibilities of various disruptions, are taking steps to mitigate the possibilities of disasters, and have prepared their responses to disasters should they strike. This approach to risk management will help keep a business up and running and ensure greater value for shareholders – this year and in years to come.

♦♦♦

Robert Sibik is senior vice president at Fusion Risk Management.

 
 
Technology Modeling – the eBRP Way
Definition:

Technology modeling is a point-in-time snapshot of an Enterprise’s IT Services – including their dependencies on infrastructure – and their interfaces to other services and to the Business Processes that depend on them. This organizational Technology Model provides executives the critical decision support they need to understand the impacts of a service disruption.
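One way to picture such a snapshot – the services, infrastructure, and processes below are invented for illustration – is a simple dependency map that can answer which Business Processes are impacted when a given component is disrupted:

```python
# Illustrative technology-model snapshot: IT services, the infrastructure they
# depend on, and the business processes that depend on them. All names invented.

# service -> infrastructure components it runs on
service_dependencies = {
    "Email":       ["mail-server-01", "san-array-A"],
    "Order Entry": ["app-server-01", "db-server-01", "san-array-A"],
    "Payroll":     ["app-server-02", "db-server-02"],
}

# business process -> IT services it depends on
process_dependencies = {
    "Customer Orders": ["Order Entry", "Email"],
    "Monthly Payroll": ["Payroll"],
}

def impacted_processes(failed_component: str):
    """Return the business processes affected when one component is disrupted."""
    downed_services = {svc for svc, infra in service_dependencies.items()
                       if failed_component in infra}
    return sorted(proc for proc, services in process_dependencies.items()
                  if downed_services.intersection(services))

print(impacted_processes("san-array-A"))   # ['Customer Orders']
print(impacted_processes("db-server-02"))  # ['Monthly Payroll']
```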

...

https://ebrp.net/wp-content/uploads/2018/04/Technology-Modeling-the-eBRP-Way.pdf

Tuesday, 11 September 2018 14:48

Technology Modeling – the eBRP Way

Do you know the book “Don’t Sweat the Small Stuff”? Today’s post is about sweating the big stuff.

It lays out the five things that matter most for the success of your organization’s business continuity management (BCM) program.

DETAILS, DETAILS

Most business continuity managers are extremely detail-oriented. They have to be to do their job. If BCM teams don’t sweat the details of what they do, then their work is probably not very good and whatever plans they have made can probably not be relied upon.

However, everyone has the defects of their good points. Sometimes, people who are very detail-oriented can become focused on the wrong or less impactful items.

Imagine that you have been in a fender bender caused by another driver. The detail-oriented person gets out and carefully takes pictures of all of the scrapes on their car caused by the collision. The overly detail-oriented person does the same thing while not realizing that the front half of their car is hanging off a cliff.

By this definition, there are a lot of overly detail-oriented people on BCM teams!

We at MHA have found over the years that many BCM programs are obsessing over minor dents and scrapes at the same time as their programs are hanging off a cliff, so to speak.

With all that in mind, we thought it would be worthwhile to remind you about what really matters when it comes to business continuity management.

...

https://www.mha-it.com/2018/09/business-continuity-management-program/

The hiring process would be so much easier if finding IT personnel was like matching on a dating website. Unfortunately, many candidates and employees lack the technical skills needed to make them “Mr. Right.”

Thanks to shifts in technology, including the implementation of machine learning, new cybersecurity challenges and more, IT decision-makers are realizing the biggest roadblock to achieving digital transformation is the lack of qualified candidates with the right skills to do the job. Luckily, organizations have found several ways to address this dilemma.

Nurturing and developing the skills of your existing employees is one way to deal with the shortage of qualified candidates. By creating a positive work environment that empowers employees to test new technologies and learn new skillsets, organizations are crafting opportunities from within, developing the skills they need and retaining talent through a commitment to education.

Finding a partner for consulting or to fully manage aspects of your IT also has its advantages. Instead of struggling to find candidates that can do the job, you can save time and resources by working with an organization that already possesses the talents you’re searching for. That frees up time for your IT team to focus on more strategic projects.

Whether it’s molding talent from within or cultivating a relationship with a partner, that perfect IT “match” may be closer than you think.

https://blog.sungardas.com/2018/09/mind-the-skills-gap-match-made-in-it-heaven/

Tuesday, 04 September 2018 14:38

Mind the Skills Gap: Match Made in IT Heaven?

Definition:

A Business Impact Analysis (BIA) is the cornerstone of creating a BCM program. Basically, a BIA helps prioritize restoration efforts in the initial response activities following an operational disruption. A secondary objective of a BIA is identification of all operational dependencies to enable successful business restoration.

...

https://ebrp.net/wp-content/uploads/2018/04/eBIA-the-eBRP-Way-1.pdf

Wednesday, 29 August 2018 15:10

eBIA – The eBRP Way

It must be human nature to worry more about serious dangers that are unlikely to happen than moderate ones whose likelihood of happening is high.

This would explain why the term “shark attack” brings up 98 million results on Google and the word “sunburn” brings up only 22 million results, even though the odds of a beachgoer getting attacked by a shark are one in 11.5 million, according to Wikipedia, while the Centers for Disease Control says that half of all people under thirty report having gotten a sunburn in the past year.

The chances of a beachgoer’s getting bitten by a shark are less than one in ten million, while the chances of someone getting a sunburn are one out of two, yet judging by those search results we’re more than four times as likely to write and post—and presumably talk, think, and worry—about shark attacks.

Sunburn is no joke since serious cases are associated with an increase in skin cancer later in life.

On the other hand, shark attacks are not only potentially catastrophic, they’re also perversely entertaining to think about. Sunburn, not so much.

“SUNBURN PROBLEMS” AND BUSINESS CONTINUITY

We at MHA Consulting have noticed that a similar pattern prevails in business continuity management (BCM).

The BC community focuses a great deal of attention on such high-drama but low-probability scenarios as a hurricane wiping out a data center, a plane crashing into a facility, or an active shooter entering the workplace.

Obviously, all of these do happen, and they are very serious and potentially catastrophic. The responsible BCM program includes plans to handle all of these types of incidents. (Of course, they should focus on the type of impact rather than the particular scenario, as we’ve discussed before.)

But there are many BC problems that are more like sunburns than shark attacks: they aren’t especially dramatic, but they do bring pain and discomfort and sometimes worse, and they happen almost constantly.

In today’s post, we’ll set forth some of the most common “sunburn problems.”

It’s essential to conduct enterprise risk assessments that look at the most serious potential impacts to the organization. But don’t forget to also consider these more modest but highly likely problems.

...

https://www.mha-it.com/2018/08/sunburn-problems/

Daniel Perrin, Global Solutions Director, Workplace Recovery, IWG

With hurricanes and other natural disasters impacting the U.S., now, more than ever, companies are re-examining their business continuity plans. Traditional workplace recovery strategies haven’t kept pace with modern business needs, though. Historically, companies built their strategy around IT. This meant that when disaster struck, to keep critical staff working, businesses needed access to their data.

The solution was to keep offices near a recovery server, ready for when a problem shut the office down. If that happened, businesses would send the 20 or so needed staff to work from space next to the server. That’s the model the industry has followed, but it is a model that has become redundant.

Why? There are three main reasons:
  1. Technology has evolved dramatically since most large businesses first developed a workplace recovery strategy. The rise in cloud computing means that data is not housed in one particular place. It can be accessed from anywhere. This means a recovery plan no longer needs to be based entirely on the location of servers. It can be based on what works best for your business at a particular time.
  2. Recovering to one fixed location can be a logistical nightmare – if not ill-advised. Of course, if a small leak in an office has rendered it unusable, you can move staff to a specific, identified back-up office. But what if your city is flooded or facing another equally significant impact event? Chances are, one of two things will occur if you are dependent on one specific location for recovery: either your back-up location will also be hit, or your people won’t be able to get there. In today’s world, a smart business needs to develop a workplace recovery strategy that is responsive and dynamic – one that can adapt to a live situation.
  3. The traditional financial model of making workplace recovery centers profitable revolves around oversubscribing each one – essentially selling the same “seat” to 10 or so different businesses. This makes sense based on the assumption that different businesses will not need recovery at the same time. But, in the example above – a major incident affecting large swathes of a city – chances are multiple companies will be impacted. Businesses therefore run the risk that at the one time they need the recovery seat they’ve been paying for, someone else may be sitting in it.

 

What makes a dynamic workplace recovery provider?

Primarily, one that offers a network of locations to choose from and offers flexibility to meet customers’ needs. And, a provider that will guarantee you space in any type of emergency, especially ones that impact entire cities.

For example, when Hurricane Harvey hit Texas in 2017, Regus, which provides flexible workspace and is owned by IWG, had the capacity to ensure that customers could continue working because it had 70 locations in the area. One of our customers wanted to recover to one of our offices in The Woodlands, outside of Houston. This seemed sensible, but as the storm approached it became clear that this client’s employees would not be able to reach the site. We were able, proactively, to contact the customer and adapt their plan in real time, by the minute, recovering them to another location that would not be affected.

Businesses are realizing that workplace recovery plans are critical and that their current plans may not be fit for purpose. It’s a good time for companies to evaluate their plans and ensure that they are working with dynamic partners that have the flexibility to meet their needs.

For more information, visit http://www.iwgplc.com/.

Albert Einstein once said, “The important thing is not to stop questioning. Curiosity has its own reason for existing.” As a recent college graduate, this quote has helped influence my decisions in college and in starting a career. I was always very quiet and did not like being outside of my comfort zone…until recently, when my curiosity helped me step out of it. Being curious and confident is the reason why I graduated in the field of Information Science Technology (IST), and why I chose to intern at BC in the Cloud.

While in college, I faced many important decisions. When I started my freshman year at Penn State, I wanted to be a computer scientist and develop software. I’ve always had a great passion for technology and thought this field would be great. It took me about two years to realize that I was losing interest in computer science; the course materials were overly complicated and lacked excitement. But I didn’t want to stop pursuing my passion for technology. I felt stuck, wondering if I should stay in this field. Then a friend told me about a major called Information Science Technology (IST). What he told me blew my mind, because I could learn and enjoy development without taking excessively complex engineering courses. IST breaks up into two sections: Integrations and Application, and Design and Development. Also, this major does not just provide development courses, but courses in networking, telecommunication, cyber security, and project management. After learning about this, I became curious, but also afraid. I was afraid that if I decided to change my major, people would think of me as a person who just works on computers (the IT guy stereotype). I ended up following my curiosity and studied IST. And I don’t regret it at all.

...

https://www.bcinthecloud.com/2018/07/keys_to_growing/

In today’s constantly moving and changing world your community needs a mass notification system.

How else will you quickly reach your residents with warnings or instructions during a pending storm? An active shooter scenario? A flash flood across a major highway?

Once a system is purchased, the key to success?  Implementation.  A well-considered implementation will lay the foundation for how effectively the system will operate during a crisis.  Check out these five pro tips for a smooth and stellar implementation.

...

https://www.onsolve.com/blog/top-5-keys-to-implementing-a-great-mass-notification-solution/

According to ISO, risk is defined as the effect of uncertainty on objectives, focusing on the effect of incomplete knowledge of events or circumstances on an organization’s decision-making. For companies that have accepted this definition and are looking to mature their risk programs and enable a risk culture, ISO 31000’s risk management framework is a great place to start. The ISO 31000 principles can help these organizations score the maturity of their risk processes and culture.

Technology is a critical element of implementing effective risk and decision-making practices because it bridges the communication gap between teams, breaks down departmental silos, facilitates collaboration and information access, and automates tedious tasks. Great technology can’t make up for bad practice, but without it, no program will meet the ISO 31000 principles.

ISO 31000 delivers a clearer, shorter and more concise guide that will help organizations use risk management principles to improve planning and make better decisions.

To explain how Resolver believes risk technology can help organizations match ISO’s vision, we break down the 11 principles into groups and share our insight:

...

https://www.resolver.com/blog/iso-31000-principles-technology/

Food is a passion of mine. I will eat pretty much anything – from cricket fried rice to a Bloody Mary topped with two Cornish hens, an avocado and fried okra to anything and everything with bacon. I love food! When I travel to industry conferences, I make it a point to hunt out at least one unique restaurant for one of my meals. My latest obsession – José Andrés restaurants. José Andrés is a James Beard Award-winning chef with 29 restaurants. My current favorite restaurant of his is called Bazaar, where you can enjoy an amazing 18-course culinary experience. OMG it’s the absolute best, if you ever get a chance to try it…TRY IT!

So, what does all this amazing food talk have to do with business continuity or disaster recovery? Recently I found out it has a lot to do with disaster recovery. Let me explain… We always look immediately to the big organizations like FEMA and the Red Cross to help support relief efforts. But, in reality, they can’t do it all. There is only so much food, and only so many supplies and volunteers, to go around. That is why there are many other organizations (probably lesser known, but still very effective) that assist in supporting disaster relief. One of those organizations is World Central Kitchen (WCK), which the chef I speak so highly of, José Andrés, is a major contributor to.

...

https://www.bcinthecloud.com/2018/07/what-does-food-have-to-do-with-it/

Tuesday, 17 July 2018 05:29

What does food have to do with it…

Early in the morning of 5th July 2018 the BCI became aware that we had become the subject of a targeted cyber-attack.

An attacker compromised account credentials and ultimately gained access to a single BCI email account. On discovering unauthorized access to the email account, we initiated our standard incident response process. We engaged outside specialists to assure ourselves, clients, and other stakeholders that the review was thorough and objective. The BCI took a variety of actions:

  • Immediately executed steps to stop and contain the attack.
  • Ascertained the size and scope of the attack. The team reviewed logs from the incident to understand what the attacker did in the email platform, and it used this information to guide its response to the attack.
  • Determined what the attacker targeted. The attacker targeted an email platform. This system is distinct and separate from other BCI platforms, including those that host client data, collaborative work among BCI professionals, engagement systems and other non-cloud based email systems. None of these were impacted. We know from the forensic review conducted by our own cyber professionals that the attacker was specifically focused on obtaining details of one particular client.
  • Reviewed materials targeted by the hacker. This incident involved unstructured data; namely, email. Through a detailed review of logs, the BCI was able to determine what the attacker actually did and that the number of email messages targeted by the attacker was a small fraction of those stored. We looked at all of the targeted email messages in a manual document-by-document review process, with careful assessment of the nature of the information contained in each email. By conducting this eyes-on review, we were able to determine the very few instances where there may have been active credentials, personal information, or other sensitive information that had an impact on clients.
  • Contacted impacted clients. The BCI contacted the single client impacted.
  • Alerted authorities. The BCI began contacting governmental authorities immediately.

The team determined that:

  • The attacker is no longer in the BCI’s system. The BCI has seen no signs of any subsequent activities. We have taken a number of important steps to remove the attacker’s access to our environment, including the blocking of IP addresses, disabling accounts, resetting passwords, and implementing enhanced monitoring.
  • No disruption occurred to client businesses, to the BCI's ability to serve clients, or to consumers.

The BCI remains deeply committed to ensuring that its cyber-security defences provide a high standard of protection, to investing heavily in protecting confidential information and to continually reviewing and enhancing cyber security.

Business continuity has a defined role with cyber resilience strategies, and it has become intertwined with cyber security for threats requiring coordinated responses across organizations’ departments.

This is one of the key findings of the 2018 Cyber Resilience Report, published today by the Business Continuity Institute, in collaboration with Sungard Availability Services.

Since the first publication of this report, we have witnessed an increase in the number of cyber-attacks and the development of new cyber threats with the potential to cause major damage to organizations, including severe financial and reputational impacts at a scale that threatens their very existence.

The financial cost of cyber-attacks is growing. This is not a surprising result after the events of last year, when large-scale cyber-attacks cost organizations worldwide millions of euros. Reputational damage is also a major concern: 66% of respondents consider it the most concerning trend when it comes to cyber security incidents.

Moreover, cyber security incidents can no longer be considered exclusively non-physical: 46% of respondents cite cyber-attacks with physical security consequences as one of the most concerning trends.

The cyber threat landscape today is highly complex and rapidly changing, and it has become clear that business continuity plays a key role in responding to an incident, ensuring that the organization is able to manage any disruption and prevent it from becoming a crisis.

According to this year’s results, business continuity remains key to building cyber resilience and there is the need for it to collaborate with cyber/information security departments to improve the way organizations deal with disruptions caused by cyber security incidents.

David Thorp, Executive Director at the BCI, commented: “The best way to protect organizations from one of the greatest threats of our times is to invest in people and preparedness. Investing in training and collaborative strategies should be at the heart of any plans aimed at mitigating cyber-attacks and ensuring a fast recovery.”

The 2018 BCI Cyber Resilience Report is now available for download. Log in to your profile and visit the knowledge library.

https://www.thebci.org/news/business-continuity-has-a-defined-role-with-cyber-resilience-strategies.html

In 2018, MetricStream Research surveyed 120 respondents from 20 different industries to understand the level of GDPR awareness and preparedness across enterprises. A majority (53%) of the respondents who have implemented governance, risk, and compliance (GRC) solutions reported that they would be GDPR compliant by the May 25 deadline.

Download this report to learn more about the survey findings, including:

• The state of GDPR awareness and engagement
• The state of GDPR readiness
• GDPR compliance challenges, benefits, and spend

Access the complimentary copy of the report today.

By URI SHAY

Among all disaster recovery concerns, assurance of recovery is the most important one for businesses. Data movers focus only on the test failover procedure. To achieve resilient recovery, organizations must run disaster recovery simulations on a weekly or monthly basis. Frequent, short DR tests give the organization the confidence and experience necessary to respond to a real emergency. Practice makes perfect.

Organizations should be able to identify failures in the recovery plan before an actual disaster occurs. The journey from an unknown, risky position to 100% recovery assurance is a challenging one, and the demand for a thorough, frequent, automated DR testing tool has become urgent. Organizations want to be ready for any disaster situation and to have recovery guaranteed.

During a real disaster, many unexpected problems will pop up; at a minimum, you must know that you are DR ready. Reliable disaster recovery is critical for business survival, and organizations don’t get a second chance when disaster strikes.

Without periodic testing, time has a way of eroding a disaster recovery plan’s effectiveness. Most organizations cannot tell whether they are really DR ready.

Environmental changes can prevent servers from starting properly, as can network problems involving MAC addresses, IP addresses, DHCP, and dissimilar infrastructure. Applications may fail to run or databases may be left inconsistent: we have seen customers change the number of servers running a certain application without realizing they had never updated the secondary site. A domain controller that cannot recover can shut down an entire site. There is also personnel dependency: turnover, missing knowledge, and availability (is the right person onsite or away?). And in the end, all you get is a yearly test, which is far from enough.

An intelligent DR test should include:

  • Automated testing that cuts resource use and saves money
  • Determining the feasibility of the recovery process
  • Identifying areas of the plan that need modification or enhancement
  • Demonstrating the ability of the business to recover
  • Identifying deficiencies in existing procedures
  • Increasing the quality and knowledge of the people who execute the disaster recovery

When disaster occurs, the organization gets one chance to recover. DR readiness is critical for business survival, and only frequent, short DR tests can address that need.
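As a rough illustration of what “short, frequent” testing can look like in practice, the sketch below (Python, with hypothetical host names, ports, and services) probes a set of recovered services at the secondary site and reports whether they are reachable. It is a minimal example of the idea, not a substitute for a full DR test tool.

    # Minimal sketch of an automated DR readiness probe (Python 3).
    # Host names, ports, and the service list are hypothetical placeholders.
    import socket
    import time

    DR_SERVICES = {
        "dr-web-01.example.local": 443,   # recovered web front end
        "dr-db-01.example.local": 1433,   # recovered database instance
        "dr-file-01.example.local": 445,  # recovered file share
    }

    def service_up(host, port, timeout=5):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def run_dr_check():
        start = time.time()
        failures = [host for host, port in DR_SERVICES.items() if not service_up(host, port)]
        elapsed = time.time() - start
        if failures:
            print(f"NOT DR READY - unreachable services: {failures}")
        else:
            print(f"DR READY - all services reachable in {elapsed:.1f}s")

    if __name__ == "__main__":
        run_dr_check()

Run on a weekly or monthly schedule, even a check this small surfaces stale host records, missed configuration changes, and dead services long before a real disaster does.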

Uri Shay is the chief executive officer of EnsureDR Ltd., maker of software that simulates a disaster recovery process automatically and frequently.

Wednesday, 13 June 2018 14:19

Are You Disaster Ready?

By TIFFANY BLOOMER, President, Aventis Systems

There’s screaming in the background. A window breaks. A peek around the cubicle reveals coworkers fleeing in terror while others hide hopelessly under their desks.

No, it’s not the end of the world. … It’s your network. Your systems failed, and critical, sensitive business data is lost permanently. It’s a data apocalypse, and your company is infected.

For any business to survive, it has to have availability. It must be up and running at all times for its customers, as well as its employees. Connections to business information must be reliable and continuous. This means backing up workstations and laptops, but also server and storage data, which is equally important.

With the exception of its employees, a business’s data is its most important asset, and a major loss can be fatal. Some 60% of small businesses that lose their data will shut down completely within just six months, yet the majority of small businesses still don’t back up their data. Why?

The good news is that downtime and lost data, productivity and revenue can be avoided if you are adequately prepared. Here are some top data backup survival tools every small business needs to avoid a data apocalypse:

 

Data Backup: Easy as Pi

To create a safe zone around your data, back up following this simple rule: Keep your data in three different places, on two different forms of media, with one stored offsite.

A single data center leaves you much more vulnerable than if your data is backed up in multiple places. IT best practices dictate redundancy — which includes the physical space. When the grid goes down and the zombies advance, it won’t help to have all your backup data stored in your office building.

To be safe, keep your original data plus multiple backups current at all times and store one offsite — as far away as possible! For added protection, store it in a weather-proof and fireproof safe at another geographic location.
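To make the rule concrete, here is a minimal sketch (Python, with a hypothetical copy inventory) that checks whether a set of backup copies satisfies the 3-2-1 rule: at least three copies, on at least two media types, with at least one offsite.

    # Minimal 3-2-1 rule check (Python 3).
    # The copy inventory below is a hypothetical example, not a real configuration.
    copies = [
        {"location": "office-nas", "media": "disk", "offsite": False},  # primary copy
        {"location": "usb-drive",  "media": "disk", "offsite": False},  # local backup
        {"location": "tape-vault", "media": "tape", "offsite": True},   # offsite backup
    ]

    total = len(copies)
    media_types = {c["media"] for c in copies}
    offsite = sum(1 for c in copies if c["offsite"])

    print(f"{total} copies, {len(media_types)} media types, {offsite} offsite")
    if total >= 3 and len(media_types) >= 2 and offsite >= 1:
        print("3-2-1 rule satisfied")
    else:
        print("3-2-1 rule NOT satisfied")

Swap in your own inventory and the same three checks apply, whatever mix of disk, tape, and cloud you use.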

 

Survival Tool #1: Backup Hardware

The first thing you need in your survival kit is the right storage device for your business environment and budget. There are four main types of backup hardware:

  • NAS — Network Attached Storage (NAS) is most often used for shared file systems joined by an ethernet network connection. It also works well for advanced applications such as file shares. Any server with attached storage can be used as NAS, allowing multiple servers or workstations to access data from a single network. The most scalable storage solution for SMBs, NAS storage equipment comes in a variety of configurable drive options and interfaces, is very versatile and includes a management interface.
  • SAN — A Storage Area Network (SAN) is a dedicated storage network for those requiring high-end storage capabilities. It provides block-level access to data at high speeds. Making large amounts of data more manageable, block-level storage allows you to control each block, or group, of data as an individual hard drive. SAN solutions are ideal for enterprise organizations because of their ability to transfer large data blocks between servers and storage.
  • DAS — Direct Attached Storage (DAS) is used to expand existing server storage with additional disks. It’s compatible with any server and is favored for its cost-saving benefits. It allows you to extend the size of your current box without an additional operating system. When used with a file server, DAS still allows user and application sharing.
  • Tape — Tape backup might be more “old school,” but it’s making a comeback in some SMB environments — primarily because it is offline. With tape, data is periodically copied from a primary storage device to tape cartridges, so you can recover it in case of a failure or hard disk crash. You can do manual backups or program them to be automatic. Tape is the least expensive way to store your data offsite because it’s light and compact, allowing you to take it with you or ship it to a holding space.
 
Survival Tool #2: Backup Software

If you have the right backup hardware in place, you need backup software you can trust to recover your data without compromising security.

Veeam Availability Suite is an excellent backup option for virtual machines (VMs) and physical servers, with both managed through the same interface as your virtual backups. When disaster strikes, Veeam has your back with:

  • Guaranteed Availability — Get recovery time and recovery point objectives of less than 15 minutes for all applications and data across all VM systems.
  • Absolute Privacy — With licensing, your backup data is always secure with unique end-to-end encryption.
  • Long-Term Retention — Data is retained for as long as you need it with advanced native-tape support and direct-storage integrations with industry-leading storage providers like EMC, Hewlett Packard Enterprise and NetApp.
  • Built-In Disaster Recovery — With the high-level license, disaster recovery testing is built-in, and Veeam guarantees recovery point objectives of less than 15 minutes for all applications and data, as well as simplified proof of compliance with automated reporting.
 
Survival Tool #3: Cloud Services

When zombies, floods, hurricanes or other catastrophes wipe out the office, you’ll be glad you backed up your data offsite. Backing up everything in the cloud ensures it is always safe — no matter what happens.

What is cloud disaster recovery?

Simply put, cloud disaster recovery is a way to store and maintain copies of electronic data in a cloud storage environment to keep it safe. This way, if your system goes down, you can easily recover your company’s mission-critical data.
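As a minimal sketch of the offsite half of that picture, the snippet below (Python with the boto3 library; the bucket name and file path are hypothetical, and credentials are assumed to come from the environment) copies a nightly backup archive to cloud object storage with server-side encryption enabled.

    # Minimal sketch of copying a backup archive to cloud object storage (Python 3, boto3).
    # Bucket name and file path are hypothetical placeholders.
    import boto3
    from datetime import date

    s3 = boto3.client("s3")
    backup_file = "/backups/nightly.tar.gz"                      # hypothetical local backup
    key = f"offsite/{date.today().isoformat()}/nightly.tar.gz"   # object key in the bucket

    s3.upload_file(
        backup_file,
        "example-dr-backups",                          # hypothetical bucket name
        key,
        ExtraArgs={"ServerSideEncryption": "AES256"},  # encrypt at rest on the provider side
    )
    print(f"Uploaded {backup_file} to s3://example-dr-backups/{key}")

The same pattern works for any provider that exposes an object storage API; the important part is that the copy lands outside your building and is encrypted at rest.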

Why trust the cloud?

Some major benefits to managed services in the cloud include:

Business Continuity

While you’re recovering from an on-premise failure, cloud storage options will allow you to access mission-critical data and applications. As a result, your business can continue to function.

Lower Upfront Costs

Upfront costs are low, and ongoing costs are predictable, so you can more accurately budget your IT dollars.

More Time to Prep

By outsourcing data protection duties, your IT team can focus on more strategic issues.

Be Prepared

A system failure or loss of data can have catastrophic consequences on your business. To ensure you’re not left in the dark, learn more about the other tools you need and the steps you should take with this free e-book.

Choose backup hardware, software, a managed service provider and cloud storage to make sure your data is protected — no matter what or where disaster strikes. Also, don’t forget to test the local and remote backups to ensure the data you’re storing is usable.

You may not be able to predict the next tornado or save the world from walkers, but you can make sure your data survives!

About the Author

Tiffany Bloomer is president of Aventis Systems. Aventis Systems provides IT services and equipment to small and medium businesses around the world.

Security and resilience – Business continuity management systems – Guidelines for people aspects of business continuity

This document gives guidelines for the planning and development of policies, strategies and procedures for the preparation and management of people affected by an incident.

This includes:

  • preparation through awareness, analysis of needs, and learning and development;
  • coping with the immediate effects of the incident (respond);
  • managing people during the period of disruption (recover);
  • continuing to support the workforce after returning to business as usual (restore).

The management of people relating to civil emergencies or other societal disruption is out of the scope of this document.

...

https://www.iso.org/standard/50067.html

Wednesday, 06 June 2018 15:09

ISO/TS 22330:2018

The founder and president of Safety Projects International Inc. has a mission – to help clean up the U.S., Canada, and several other countries. However, rather than doing it himself, Dr. Bill Pomfret, aka Dr. Clean, is getting the workers themselves to do it – which is simple in its logic but offers a huge challenge in its execution.

"The state of cleanliness affects us in every aspect of our everyday lives, whether we're a patient in a hospital, a pupil in school, a customer in a restaurant or an employee in the workplace," Dr. Bill says.

"But most people fail to realize that cleaning is a science." Treatment of the cause, not the symptoms, coupled with a healthy dose of preventive medicine, is his prescription for the endemic problem faced by most countries that he visits. First, that means completely breaking down the tolerance for filth and replacing it with a culture of cleanliness.

And second, people will have to be educated on the best ways to clean up and to stay clean. Dr. Bill is well aware of the big, big job that is cut out for him, and that it involves more than just trying to change people's attitude or mindset. That is but a starting point, even though it is a massive challenge in itself, as evidenced by the limited success of the numerous public cleanliness campaigns undertaken in many countries so far, including South Africa, the Philippines and Malaysia to name a few.

There is no question that 72-year-old Dr. Bill is committed to his cause. He has, after all, a lucrative 40-year-old business. But to him, raising most countries' standards of cleanliness is part and parcel of occupational health and safety, both curative and preventive. Five years ago, he set up the Center for Cleaning Science and Technology (CCST), an education and training center in the Philippines and the country's first such facility.

Located in San Isidro, Nueva Ecija, the center conducts, inter alia, training programs for the cleaning service industry, as well as for local councils, building owners, and property managers, with the primary objective of raising the status and standards of the Philippines' cleaning industry. After all, like Puerto Rico for the U.S.A., the number one export from the Philippines is its people, many of whom work abroad as live-in caregivers. The Open University’s Institute of Professional Development accredits the center’s cleaning proficiency program. Before setting up the facility, Dr. Pomfret had personally audited and surveyed the way cleaning operators normally worked. Some of his findings proved to be shocking. For example, the same mop was used to clean the toilet and the kitchen; the same rag to clean the bathroom and to wipe tables in eateries; and the same pail of filthy water to mop corridor after corridor.

His conclusion was that many contract cleaners, not only in the Philippines but internationally, were simply clueless about cleaning.

Mostly, the exercise seemed to be aimed not at actually cleaning but at creating the impression that cleaning had been done, that is, not to sanitize but to look clean.

"The thing is you have to clean right," Dr Pomfret stresses. "You may not be able to control the public entirely but you can control the cleaners and the quality of cleaning." During his travels, he had also visited Singapore's Institute of Cleaning Sciences, a franchise of the British Institute of Cleaning Sciences. Graduates, and professional cleaners are required to sit a proficiency test, both theory and practical.

In most countries, it is important that building owners, property managers and local councils send their staff for formal, practical training, Dr. Pomfret adds. This is because there are today very wide ranges of cleaning machines designed for all kinds of functions. Then there are the chemicals, which must be handled properly. In addition, cleaning processes can be quite job-specific, be it the cleaning of air ducts, treatment and prevention of graffiti, maintenance of various types of surfaces or basics like chewing gum removal.

For cleaning companies, such training makes economic sense, too. For instance, without this knowledge, they will not be able to realistically devise a price structure upon which to negotiate a cleaning contract. As for prospective clients, most will recognize that it is best to go with a professional outfit to minimize the risk of ending up with a whopping bill for restoration work after a botched job.

"Lack of know-how among property managers is the primary cause of poor maintenance of buildings," says Dr Pomfret. "They get incompetent cleaners and these people destroy the properties.

So the management has to cough up money to do yearly restoration and refurbishing." Business owner Bill Thompson agrees. "The notion that a mop and bucket is all you need to clean is archaic.”

In most developed countries, cleaning has become a highly professional field. In fact, the 'First World Facility, Third World Mentality' complaint from visitors regarding the U.A.E. amenities can be attributed to the fact that cleaning as a process has been hugely neglected.

"The industry must become professional in the shortest time possible. As a matter of urgency, a body comprising the Government, local and city councils, training schools, suppliers, contractors and other stakeholders should be set up to draw up minimum standards," Pomfret says, some 20 years ago, I helped develop the 5 Star Health and Safety Management SystemÔ the first part I concentrated on, was housekeeping “Cleanliness and Order” this gives the employer, the biggest bang for the buck.

Arguing that Governments should be more receptive and exposed to the cleaning service industry, Pomfret - whose company has been in the health and safety business for over 50 years - says: "Right now, it's a free-for-all. Unless standards are imposed and cleaning contractors are certified and classified, many countries will continue to be plagued by poor maintenance and dirty surroundings." Dr Pomfret may remind one of a young Don Aslett, the author of numerous books on cleaning techniques and self-styled No. 1 cleaner in America, but all he dreams of is a day when no person would fear to walk into a public toilet in any country he has trained.

Meanwhile, Dr. Clean as he is known has trained staff from many companies in the Philippines and the U.A.E. South Africa and elsewhere. The going has been tough, still is, principally because of the need for him to relentlessly prod and irritate people into action, even just to see the urgency of the matter. On the positive side, he can be likened to a grain of sand in an oyster, which will one day become a pearl – and be appreciated.

DR CLEAN'S DIAGNOSIS INDUSTRY MUST BE RATIONALISED: Nobody can tell for sure about something as basic as the size of the industry. There are so many players but numbers don't guarantee quality. And there are no proper guidelines to qualify cleaning enterprises for bids to undertake a cleaning and building maintenance job.

Without guidelines on such things as a company's manpower, technological and management capacity as well as know-how, anyone with minimal or zero knowledge can bid for contracts. Unlike in the construction industry where contractors are graded, there is no classification of cleaners based on professional competence.

THE CLEANERS, THEY MUST BUCK UP: Cleaning know-how and cleaning product knowledge are not fully pursued by cleaners. Unlike the UK and Singapore, which impose practical and theory tests on would-be cleaning operatives (questions range from which chemical to use on which type of surface to which color pad to use on which scrubber machine for which function), most Western countries' cleaning service industries operate on the basis of "even my grandmother can do that job".

WHAT STANDARD? There are no established standards for cleanliness.

Lack of education on the part of the authorities (such as local councils), building owners, property managers and employers, as well as the cleaners themselves, is a major obstacle to the much-needed professionalisation of the industry. "Our architectural and engineering ability has reached the point where we can build the world's tallest buildings, but our cleaning and maintenance ability has lagged far behind." WHAT BENCHMARK? There is no benchmark for players to strive to match and perhaps exceed, with a view to developing the international cleaning service industry to the level where it can compete in the international market and export cleaning services. "The Government should nurture the industry so that it will reach that level."

Dr. Bill Pomfret, MSc, FIOSH, RSP, can be contacted at 26 Drysdale Street, Kanata, Ontario, K2K 3L3. Tel: 613-2549233; website: www.spi5star.com.

Thursday, 24 May 2018 20:14

The Importance of Professional Cleaning

Community bank strengthens enterprise-wide business continuity program and vendor risk management capabilities

With 53 branches, multiple ATMs, and banking seven days a week at two locations, TBK Bank strives to do the right thing to make customers’ lives better and easier.

Now, the bank has done the right thing for its customers by doing the right thing for its business continuity program, moving in just six months from a legacy planning tool to a data-centric business continuity management program built on the Fusion Framework® System™.  

The power of the solution creates synergies that allow the business continuity program to continue to grow and mature, taking on high priorities that were previously out of scope such as vendor risk management. This has significantly improved TBK Bank’s risk profile, with the end result being a greater ability to deliver great customer service at all times under any circumstances.

TBK Bank’s ongoing success has been accelerated with a regular infusion of Fusion’s creative Fuel offering and by connecting with the Fusion Community where best practices and new ideas are openly shared.

Making Business Continuity Holistic and Actionable

TBK Bank recognizes the criticality of being always available for its customers. When the time came to move away from the lightweight legacy product the bank used for its business continuity program, Deb Wagamon, Business Continuity Manager at TBK Bank, examined the options in the marketplace. One of the vendors she contacted was Fusion Risk Management.

Wagamon explained why Fusion piqued her interest: “The first thing that impressed me was the fact that they were extremely interested in what I was doing and what my hindrances were and how they could help us. They didn’t start out like a normal vendor with ‘I can sell you this. This is what we can do for you.’ That told me I had a partner, rather than just a vendor trying to get money out of my company.”

Fusion rose to the top of the potential vendors because of the opportunity Wagamon had to try out the system. “They gave me a month trial period where I could enter my program’s data into the system and test it,” stated Wagamon. “Other vendors were offering much shorter trial periods – only a few days to a week. Plus, not only did Fusion allow me the sandbox to test in, but I was able to bounce questions off of Fusion personnel while I was doing it. Even before I was a customer, it was like I had a whole team helping bring my vision to life using the Fusion Framework System.”

Recognizing that Fusion would make TBK Bank’s future business continuity goals possible in ways other vendors could not match, Wagamon committed to the Fusion Framework System.

The system brought together all of TBK Bank’s business continuity plans into one accessible and actionable location. Vulnerabilities and gaps were identified and remediated. Such a transformation would typically take years via a traditional approach; however, the Fusion Framework, with its flexible, information-based approach and robust plan management infrastructure, enabled the TBK Bank business continuity team to instill best practices in the program without starting from scratch. Wagamon affirmed, “It took me just six months to take my plan from ‘basic’ to ‘robust.’”

Managing Vendor Risk

TBK Bank worked with Fusion not only to leverage the Fusion Framework System for business continuity, but also to improve vendor risk management. Previously, Wagamon had vendor information in multiple places, so it was hard to manage, keep up to date, and pull together in the event of an audit. With over 350 vendors in play, she knew it was only a matter of time before something crucial was missed, with significant ramifications. “Trying to manage all the due diligence, contracts, and everything was becoming a nightmare. I had to get the vendor data into some kind of an automated tool,” explained Wagamon.

TBK Bank leveraged the flexibility and configurability of the Fusion Framework System to create a vendor management solution aligned with its specific needs. “I truly feel confident, because the Fusion Framework System handles everything. Processes are automated to eliminate human error. The system sends me an e-mail whenever I have to update insurance. If I’ve got a contract that’s coming up in 90 days, the business owner gets an e-mail saying, ‘Do you want to renew this or do you want to terminate?’ All I do now is manage.”

Plus, because the information foundation created by the Fusion Framework now contains comprehensive vendor data, the vendor risk management program is fully integrated with the business continuity program. This results in greater engagement of users and stronger end-to-end business continuity plans.

Fueling Further Success

To further the success of its business continuity program, TBK Bank took advantage of Fusion’s unique offering known as Fuel which pairs Wagamon’s group with an industry expert and a team of Fusion product experts. The team keeps TBK Bank’s program focused on the right priorities and provides expertise impossible to get from an internal resource. Wagamon noted, “This has been wonderful for me. I meet with an expert on a monthly basis and talk about my objectives for the next budget year, get help to resolve any issues I might have, and learn how to use the system to its fullest advantage.”

Additionally, Wagamon has benefited greatly from the knowledge-sharing opportunities that are regularly available as a member of the Fusion community. Wagamon attends Fusion industry user groups, where she learns from her peers. She affirmed, “There’s always more to Fusion – it doesn’t matter how much you’re learning or how far you’ve come in the last two or three years, there’s just so much depth. The user groups are wonderful for allowing you to connect with the Fusion community, learn from fellow peers, and understand all the areas where Fusion can assist you.”

Wagamon has been thrilled to share her experience with others. “I’ve been able to sit down with someone who is as frustrated as I used to be and tell them my story,” she stated. “Normally, I don’t make a stand and speak out in public about vendors, but with Fusion, I do.”

Thursday, 24 May 2018 17:45

Business Continuity You Can Bank On

Many organizations use templates to help them craft their business continuity plans.

In our opinion, this is an excellent way to go about it.

The “good” of using templates is significant and will be sketched out below.

If there is an “ugly” part about using templates, it’s what happens when organizations mistake filling out a template for the thought and analysis that comes with actual planning.

That being said, we commonly see more problems when organizations don’t use templates as a guide or standard for their planning efforts.

A surprisingly large number of organizations forgo the convenience and support of templates for a cooking-from-scratch approach. Moreover, they frequently have lots of different cooks.

Such organizations commonly task different individuals from across the company with writing the recovery plans for their respective departments. You can imagine the results: A large collection of mismatched plans varying widely in quality, comprehensiveness, level of detail, organization, and formatting. Some of these plans are liable to be excellent and some barely adequate. Many will have significant gaps, and since there’s no companywide documentation standard, they will probably all be confusing to anyone from outside the department who has to use them in an emergency. Talk about ugly.

In terms of the “bad” aspects of using templates, there really aren’t many. However, there are some precautions you should keep in mind when using them, which we’ll spell out in a moment.

...

https://www.mha-it.com/2018/05/using-business-continuity-templates/

By CONNOR COX, Director of Business Development, DH2i (http://dh2i.com)

In 2017, many major organizations—including Delta Airlines and Amazon Web Services (AWS)—experienced massive IT outages. Despite the reality of a growing number of internationally publicized outages like these, an Uptime Institute survey collected by 451 Research had some interesting findings. While the survey found that a quarter of participating companies experienced an unplanned data center outage in the last 12 months, close to one-third of companies (32 percent) still lack the confidence that they are totally prepared in their resiliency strategy should a disaster such as a site-wide outage occur in their IT environments. 

Much of this failure to prepare for the unthinkable can be attributed to three points of conventional wisdom when it comes to disaster recovery (DR): 

  • Comprehensive, bulletproof DR is expensive

  • Implementation of true high availability (HA)/DR is extremely complex, with database, infrastructure, and app teams involved

  • It’s very difficult to configure a resiliency strategy that adequately protects both new and legacy applications 

Latency is also an issue, and there’s often a trade-off between cost and availability for most solutions. These assumptions can be true when you are talking about traditional DR approaches for SQL Server. One of the more predominant approaches is the use of Always On Availability Groups, which provides management at the database level as well as replication for critical databases. Another traditional solution is Failover Cluster Instances, and you can also use virtualization in combination with one of the other strategies or on its own.

There are challenges to each of these common solutions, however, starting with the cost and availability tradeoff. Getting higher availability for SQL Server often means much higher costs. Licensing restrictions can also come into play: in order to use Availability Groups with more than a single database, you need the Enterprise Edition of SQL Server, which can cause costs to rise rapidly. There are also complexities surrounding these approaches, including the fact that everything needs to be the same, or “like for like,” for any Microsoft clustering approach. This can make things difficult if you have a heterogeneous environment or if you need to do updates or upgrades, which can incur lengthy outages.

But does this have to be so? Is it possible to flip this paradigm to enable easy, cost-effective DR for heavy-duty applications like SQL Server, as well as containerized applications? Fortunately, the answer is yes: by using an all-inclusive software-based approach, DR can become relatively simple for an organization. Let’s examine how and why this is true.

Simplifying HA/DR

The best modern approach to HA/DR is one that encapsulates instances and allows you to move them between hosts, with almost no downtime. This is achieved using a lightweight Vhost—really just a name and IP address—in order to abstract and encapsulate those instances. This strategy provides a consistent connection string.

Crucial to this concept is built-in HA—which gives automated fault protection at the SQL Server instance level—that can be used from host to host locally, as well as DR from site to site. This can then be very easily extended to disaster recovery, creating in essence an “HA/DR” solution. The solution relies on a means of being able to replicate the data from site A to site B, while the tool manages the failover component of rehosting the instances themselves to the other site. This gives you many choices around data replication, affording the ability to select the most common array replication, as well as vSAN technology or Storage Replica.
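To illustrate the consistent-connection-string idea (this is a generic sketch, not any vendor’s product: the virtual host name “sqlvhost01”, the database, and the retry values are all hypothetical), an application can connect through the Vhost name and simply retry briefly while an instance is rehosted to another node or site.

    # Minimal sketch of connecting through a virtual host name (Python 3, pyodbc).
    # "sqlvhost01" is a hypothetical Vhost that follows the SQL Server instance wherever
    # it is rehosted; the application never references a physical server name.
    import time
    import pyodbc

    CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sqlvhost01;"        # virtual host name, not a physical host
        "DATABASE=OrdersDB;"        # hypothetical database
        "Trusted_Connection=yes;"
    )

    def query_with_retry(sql, attempts=5, wait=10):
        """Retry briefly so a failover (the Vhost moving to another node) looks like a blip."""
        for attempt in range(1, attempts + 1):
            try:
                with pyodbc.connect(CONN_STR, timeout=5) as conn:
                    return conn.cursor().execute(sql).fetchall()
            except pyodbc.Error as exc:
                print(f"Attempt {attempt} failed ({exc}); retrying in {wait}s")
                time.sleep(wait)
        raise RuntimeError("Database unavailable after failover window")

    if __name__ == "__main__":
        print(query_with_retry("SELECT COUNT(*) FROM Orders"))

Because the connection string never changes, the failover logic lives entirely in the clustering layer rather than in every application.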

So with HA plus DR built in, a software solution like this is set apart from the traditional DR approaches for SQL Server. First, it can manage any infrastructure, as it is completely agnostic to the underlying infrastructure, from bare metal to virtual machines or even a combination. It can also be run in the cloud, so if you have a cloud-based workload that you want to provide DR for, it’s simple to layer this onto that deployment and get DR capabilities within the same cloud or even to a different cloud. Since it isn’t restricted to being “like for like,” this can be done for Windows Server all the way back to 2008R2, or even for your SQL Server on Linux deployments, Docker containers, or SQL Server from 2005 on up. You can mix versions of SQL Server or even the operating system within the same environment.

As far as implications for upgrades and updates, because you can mix and match, updates require the least amount of downtime. And when you think about the cost and complexity tradeoff that we see with the traditional solutions, this software-based tool breaks that because it facilitates high levels of consolidation. Since you can move instances around, users of this solution on average stack anywhere from 5 to 15 SQL Server instances per server with no additional licensing in order to do so. This understandably results in a massive consolidation of the footprint for management and licensing benefits, enabling a licensing savings of 25 to 60 percent on average.

There is also no restriction around the edition of SQL Server that you must use to do this type of clustering. So, you can do HA/DR with many nodes all on Standard Edition of SQL Server, which can create huge savings compared to having to buy premium software editions. If you’ve already purchased these licenses, you can use them later, reclaiming the licenses for future use.

Redefining DB Availability

How does this look in practice? You can, for example, install this tool on two existing servers, add a SQL Server instance under management, and very simply fail that instance over for local HA. You can add a third node that can be in a different subnet and any distance away from the first two nodes, and then move that instance over to the other site—either manually or as the result of an outage.

By leveraging standalone instances for fewer requirements and greater clustering ability, this software-based solution decouples application workloads, file shares, services, and Docker containers from the underlying infrastructure. All of this requires no standardization of the entire database environment on one version or edition of the OS and database, enabling complete instance mobility from any host to any host. In addition to instance-level HA and near-zero planned and unplanned downtime, other benefits include management simplicity, peak utilization and consolidation, and significant cost savings.

It all comes down to redefining database availability. Traditional solutions mean that there is a positive correlation between cost and availability, and that you’ll have to pay up if you want peak availability for your environment. These solutions are also going to be difficult to manage due to their inherent complexity. But you don’t need to just accept these facts as your only option and have your IT team work ridiculous hours to keep your IT environment running smoothly. You do have options, if you consider turning to an all-inclusive approach for the total optimization of your environment.

In short, the right software solution can help unlock huge cost savings and consolidation as well as management simplification in your datacenter. Unlike traditional DR approaches for SQL Server, this one allows you to use any infrastructure in any mix and be assured of HA and portability. There’s really no other way to unify HA/DR management for SQL Server, Windows, Linux, and Docker to enable sizeable licensing savings, while also unifying disparate infrastructure across subnets for quick and easy failover.

 
Connor Cox is a technical business development executive with extensive experience assisting customers transform their IT capabilities to maximize business value. As an enterprise IT strategist, Connor helps organizations achieve the highest overall IT service availability, improve agility, and minimize TCO. He has worked in the enterprise tech startup field for the past 5 years. Connor earned a Bachelor of Science in Business Administration from Colorado State University and was recently named a 2017 CRN Channel Chief.

 

    

As a business continuity practitioner with more than 20 years of experience, I have had the opportunity to see, review and create many continuity and disaster recovery plans. I have seen them in various shapes and sizes, from the meager 35-row spreadsheet to 1,000-plus pages in 3-ring binders. Reading these plans, in most cases the planners’ intent is very evident – check the “DR Plans done” box.

There are many different types of plans that are called into play when a disruption occurs; these could include Emergency Health & Safety, Crisis Management, Business Continuity, Disaster Recovery, Pandemic Response, Cyber Security Incident Response, and Continuity of Operations (COOP) plans, among others.

The essence of all these plans is to define “what” action is to be done, “when” it has to be performed and “who” is assigned the responsibility.

The plans are the definitive guide to respond to a disruption and have to be unambiguous and concise, while at the same time providing all the data needed for informed decision making.

...

https://www.ebrp.net/dr-plans-the-what-when-who/

Wednesday, 02 May 2018 14:15

DR Plans – The What, When & Who

By Tim Crosby

PREFACE: This article was written before ‘Meltdown’ and ‘Spectre’ were announced – two new critical “Day Zero” vulnerabilities that affect nearly every organization in the world. Given the sheer number of vulnerabilities identified in the last 12 months, one would think patch management would be a top priority for most organizations, but that is not the case. If the “EternalBlue” (MS17-010) and “Conficker” (MS08-067) vulnerabilities are any indication, I have little doubt that I will be finding the “Meltdown” and “Spectre” exploits in my audit initiatives for the next 18 months or longer. This article is intended to emphasize the importance of timely software updates.

“It Only Takes One” – One exploitable vulnerability, one easily guessable password, one careless click, one is all it takes. So, is all this focus on cyber security just a big waste of time? The answer is NO. A few simple steps or actions can make an enormous difference for when that “One” action occurs.

The key step everyone knows, but most seem to forget, is keeping your software and firmware updated. Outdated software provides hackers the footholds they need to break into your network, as well as opportunities for privilege escalation and lateral movement. During a recent engagement, 2% of the targeted users clicked on a link with an embedded payload that provided us shell access into their network. A quick scan identified a system with an easily exploitable Solaris Telnet vulnerability that allowed us to establish a more secure position. The vulnerable Solaris system was a video projector to which no one gave a second thought, even though the firmware update had existed for years. Our scan through this projector showed SMBv1 traffic, so we scanned for “EternalBlue,” targeting 2008 servers due to the likelihood that they would have exceptions to the “Auto Logoff” policy and would be a great place to gather clear-text credentials for administrators or helpdesk/privileged accounts. Several of these servers were older HP servers with HP System Management Homepages, some were running Apache Tomcat with default credentials (should ring a bell – the Equifax Argentina hack), a few were running JBoss/JMX, and there was even a system vulnerable to MS09-050.

The vulnerabilities that make the above scenario possible have published exploits readily available in the form of free, open-source software designed for penetration testing. We used the Metasploit Framework to exploit a few of the “EternalBlue”-vulnerable systems, followed the NotPetya script, and downloaded clear-text credentials with Mimikatz. Before our scans completed, we were on a domain controller with “System” privileges. The total time from “one careless click” to Enterprise Admin: less than two hours.

The key to our success? Not our keen code-writing ability, not a new “Day 0” vulnerability, not a network of supercomputers, not thousands of IoT devices working in unison; it wasn’t even a trove of payloads we purchased with Bitcoin on the Dark Web. The key was systems vulnerable to widely publicized exploits with widely available fixes in the form of updated software and/or patches. In short, outdated software. We used standard laptops running the Kali or Parrot Linux operating systems with widely available free and/or open-source software, most of which comes preloaded on those Linux distributions.

The projector running Solaris is not uncommon; many office devices, including printers and copiers, have full Unix or Linux operating systems with internal hard drives. Most of these devices go unpatched and therefore make great pivoting opportunities. These devices also provide an opportunity to gather data (printed or scanned documents) and forward it to an external FTP site after hours; this is known as a store-and-forward platform. The patch/update for the system we referenced above has been available since 2014. Many of these devices also come with WiFi and/or Bluetooth interfaces enabled even when connected directly to the network via Ethernet, making them a target for bypassing your firewalls and WPA2 Enterprise security. Any device that connects to your network, no matter how small or innocuous, needs to be patched and/or have software updates applied on a regular basis, as well as undergo rigorous system hardening procedures, including disabling unused interfaces and changing default access settings. This device with outdated software extended our attack long enough to identify other soft targets. Had it been updated/patched, our initial foothold could have vanished the first time auto logoff occurred.

Before you scoff or get judgmental, believing only incompetent or lazy network administrators or managers could allow this to happen, slow down and think. Where do the patch management statistics for your organization come from? What data do you rely on? Most organizations gather and report patching statistics based on data directly from their patch management platform. Fact: systems fall out of patch management systems, or are never added, for many reasons, such as a failed GPO push, a switch outage during the process, or systems that fall outside of the patch manager’s responsibility or knowledge (printers, network devices, video projectors, VOIP systems). Fact: your spam filter may be filtering critical patch-failure reports; this happens far more often than you might imagine.

A process outside of the patching system needs to verify that every device is in the patch management system and that the system is capable of pushing all patches to all devices. This process can be as simple and cost-effective as running and reviewing NMAP scripts, or as complex and automated as commercial products such as Tenable’s Security Center or BeyondTrust’s Retina, which can be scheduled to run and report immediately following the scheduled patch updates (a simple scanning sketch appears after the checklist below). THIS IS CRITICAL! Unless you know every device connected to your network – wired, wireless or virtual – and what its patch/version health status is, there are going to be holes in your security. At the end of this process, no matter what it looks like internally, the CISO/CIO/ISO should be able to answer the following:

  • Did the patches actually get applied?

  • Did the patches undo a previous workaround or code fix?

  • Did ALL systems get patched?

  • Are there any NEW critical or high-risk vulnerabilities that need to be addressed?
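As one example of the kind of out-of-band check referenced above (a sketch only: it assumes nmap is installed, and the subnet shown is a placeholder), the script below sweeps a network range with nmap’s smb-vuln-ms17-010 NSE script and flags any host still exposed to “EternalBlue” after a patch cycle.

    # Minimal sketch of an out-of-band patch verification scan (Python 3 wrapping nmap).
    # Requires nmap to be installed; the subnet below is a hypothetical example range.
    import subprocess

    SUBNET = "10.0.0.0/24"  # hypothetical network to audit

    # Check SMB hosts for MS17-010 ("EternalBlue") using nmap's NSE vulnerability script.
    result = subprocess.run(
        ["nmap", "-p445", "--script", "smb-vuln-ms17-010", SUBNET],
        capture_output=True, text=True, check=True,
    )

    report = result.stdout
    if "VULNERABLE" in report:
        print("Unpatched MS17-010 hosts found - review the report below:")
    else:
        print("No MS17-010-vulnerable hosts detected in this scan:")
    print(report)

Comparing a sweep like this against the patch management console’s own numbers is exactly how devices that silently fell out of the patching system get caught.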

There are probably going to be devices that need to be manually patched, and there is a very strong likelihood that some software applications are locked into vulnerable versions of Java, Flash or even Windows XP/2003/2000. So, there are devices that will be patched less frequently or not at all. Many organizations simply say, “That’s just how it is until manpower or technology changes – we just accept the risk.”

That may be a reasonable response for your organization; it all depends on your risk tolerance. But if you have a lower risk appetite, what about firewalls or VLANs with ACL restrictions for devices that can’t be patched or upgraded? Why not leverage virtualization to reduce the security surface area of that business-critical application that needs to run on an old version of Java or only works on 2003 or XP? Published-application technologies from Citrix, Microsoft, VMware or Phantosys fence the vulnerabilities into a small, isolated window that can’t be accessed by the workstation OS. Properly implemented, the combination of VLANs/DMZs and application virtualization reduces the actual probability of exploit to nearly zero and creates an easy way to identify and log any attempts to access or compromise these vulnerable systems. Once again, these are mitigating countermeasures when patching isn’t an option.

We will be making many recommendations to our clients, including multi-factor authentication for VLAN access, changes to password length and complexity, and additional VLANs. However, topping the list of suggestions will be patch management and regular internal vulnerability scanning, preferably as the verification step for the full patch management cycle. Keeping your systems patched ensures that when someone makes a mistake and lets the bad guy or malware in, they have nowhere to go and a limited time to get there.

As an ethical hacker or penetration tester, one of the most frustrating things I encounter is spending weeks of effort to identify and secure a foothold on a network only to find myself stuck; I can’t escalate privileges, I can’t make the session persistent, I can’t move laterally, ultimately rendering my attempts unsuccessful. Though frustrating for me, this is the optimal outcome for our clients as it means they are being proactive about their security controls.

Frequently, hackers are looking for soft targets and follow the path of least resistance. To protect yourself, patch your systems and isolate those you can’t patch. By doing so, you will increase the level of difficulty, effort and time required, leaving a pretty good chance the attackers will move on to someone else. There is an old joke about two guys running from a bear; the punch line applies here as well: “I don’t need to be faster than the bear, just faster than you…”

Make sure ALL of your systems are patched, upgraded or isolated with mitigating countermeasures, thus making you faster than the other guy who can’t outrun the bear.

About Tim Crosby:

Timothy Crosby is Senior Security Consultant for Spohn Security Solutions. He has over 30 years of experience in the areas of data and network security. His career began in the early 80s securing data communications as a teletype and cryptographic support technician/engineer for the United States Military, including numerous overseas deployments. Building on the skillsets he developed in these roles, he transitioned into network engineering, administration, and security for a combination of public and private sector organizations throughout the world, many of which required maintaining a security clearance. He holds industry leading certifications in his field, and has been involved with designing the requirements and testing protocols for other industry certifications. When not spending time in the world of cybersecurity, he is most likely found in the great outdoors with his wife, children, and grandchildren.

Migrating and managing your data storage in the cloud can offer significant value to the business. Start by making good strategic decisions about moving data to the cloud, and which cloud storage management toolsets to invest in.

Your cloud storage vendor will provide some security, availability, and reporting. But the more important your data is, the more you want to invest in specialized tools that will help you to manage and optimize it.

Cloud Storage Migration and Management Overview

First, know whether you are moving data into an application computing environment or moving backup/archival data for long-term storage in the cloud. Many companies start off by storing long-term backup data in the cloud; others start with Office 365. Still others work with application providers, like Oracle or SAP, who extend the application environment to the vendor-owned cloud. In all cases you need to understand storage costs and information security measures such as encryption. You will also need to decide how to migrate the data to the cloud.
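For the cost side of that decision, even a back-of-the-envelope estimate helps frame the conversation. The sketch below (Python; every price and volume is a placeholder to be replaced with your provider’s actual rate card) compares monthly storage and restore-egress costs for a long-term backup data set.

    # Back-of-the-envelope monthly cost sketch for cloud backup storage (Python 3).
    # All prices and volumes are hypothetical placeholders.
    data_gb = 5_000                # backup/archival data kept in the cloud
    storage_per_gb = 0.01          # $/GB-month for an archival tier (placeholder)
    egress_per_gb = 0.09           # $/GB for data restored back out (placeholder)
    expected_restore_gb = 200      # data you expect to restore in a typical month

    monthly_storage = data_gb * storage_per_gb
    monthly_egress = expected_restore_gb * egress_per_gb
    print(f"Storage: ${monthly_storage:,.2f}/month, restores: ${monthly_egress:,.2f}/month")
    print(f"Estimated total: ${monthly_storage + monthly_egress:,.2f}/month")

Rerunning the numbers with your real volumes quickly shows whether archival tiers, egress charges, or migration bandwidth will dominate the bill.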

...

http://www.enterprisestorageforum.com/storage-management/managing-cloud-storage-migration.html

Tuesday, 27 March 2018 05:11

Managing Cloud Storage Migration

Leveraging Compliance to Build Regulator and Customer Trust

Bitcoin and other cryptocurrencies continue to gain ground as investors buy in, looking for high returns, and as acceptance of it as payment takes hold. However, with such growth come risks and challenges that fall firmly under the compliance umbrella and must be addressed in a proactive, rather than reactive, manner.

Cryptocurrency Challenges

One of the greatest challenges faced by the cryptocurrency industry is its volatility and the fact that the cryptocurrency markets are, unlike mainstream currency markets, a social construct. Just as significantly, all cryptocurrency business is conducted via the internet, placing certain obstacles in the path of documentation. The online nature of cryptocurrency leads many, especially regulators, to remain dubious of its legitimacy and suspicious that it is used primarily for nefarious purposes, such as money-laundering and drug trafficking, to name a few.

This leaves companies that have delved into cryptocurrency with an onerous task: building trust among regulators and customers alike, with the ultimate goal of fostering cryptocurrency’s survival. From a regulatory standpoint, building trust involves not only setting policies and procedures pertaining to the vetting of customers and the handling of cryptocurrency transactions and trades, but also leveraging technology to document and communicate them to the appropriate parties. Earning regulators’ trust also means keeping meticulous records rendered legally defensible by technology. Such records should detail which procedures for vetting customers were followed; when, by whom and in what jurisdiction the vetting took place; and what information was shared with customers at every step of their journey.

On the customer side, records must document the terms of all transactions and the messages conveyed to customers throughout their journey. Records of what customers were told regarding how a company handles its cryptocurrency transactions and any measures it takes to ensure the legitimacy of activities connected with transactions should be maintained as well.

...

http://www.corporatecomplianceinsights.com/cryptocurrency-challenges-opportunities/

How to help your organization plan for and respond to weather emergencies

By Glen Denny, Baron Services, Inc.

Hospitals, campuses, and emergency management offices should all be actively preparing for winter weather so they can be ready to respond to emergencies. Weather across the country is varied and ever-changing, but each region has specific weather threats that are common to their area. Understanding these common weather patterns and preparing for them in advance is an essential element of an emergency preparedness plan. For each weather event, those responsible for organizational safety should know and understand these four important factors: location, topography, timing, and pacing.

In addition, be sure to understand the important terms the National Weather Service (NWS) uses to describe changing weather conditions. Finally, develop and communicate a plan for preparing for and responding to winter weather emergencies. Following the simple steps in the sample planning tool provided will aid you in building an action plan for specific weather emergency types.

Location determines the type, frequency and severity of winter weather

The type of winter weather experienced by a region depends in great part on its location, including proximity to the equator, bodies of water, mountains, and forests. These factors can shape the behavior of winter weather in a region, determining its type, frequency, and severity. Knowing how weather affects a region can make the difference between lives saved and lives lost.

Winter weather can have a huge impact on a region’s economy. For example, in the first quarter of 2015, insurance claims for winter storm damage totaled $2.3 billion, according to the Insurance Information Institute, a New York-based industry association. One Boston-area insurance executive called it the worst first quarter of winter weather claims experience he’d ever seen. The statistics, quoted in a Boston Globe article titled “Mounting insurance claims are remnants of a savage winter,” noted that most claims were concentrated in the Northeast, where winter storms had dumped 9 feet of snow on Greater Boston. According to the article, “That volume of claims was above longtime historic averages, and coupled with the recent more severe winters could prompt many insurance companies to eventually pass the costs on to consumers through higher rates.”

Every region has unique winter weather and different ways to mitigate the damage. Northern regions will usually have some form of winter precipitation – but they also have the infrastructure to handle it. In these areas, the greater risk is that mild events become dangerous because people are somewhat desensitized to winter weather. Sometimes they ignore warnings and travel on the roads anyway. Planners should remember to issue continual reminders of just how dangerous winter conditions can be.

Areas of the Southwest are susceptible to mountain snows and extreme cold temperatures. These areas need warming shelters and road crews to deal with snow and ice events when they occur.

Any winter event in the Southeast can potentially become an extreme event, because organizations in this area do not typically have many resources to deal with it. It takes more time to put road crews in place, close schools, and shut down travel. There is also an increased risk for hypothermia, because people are not as aware of the potential dangers cold temperatures can bring. Severe storms and tornadoes can also happen during the winter season in the Southeast.

Figure 1 is a regional map of the United States. Table 1 outlines the major winter weather issues each region should consider and plan for.

Topography influences winter weather

Topography includes cities, rivers, and mountains. Topographical features influence winter weather because they help direct air flow, causing air to rise, fall, and change temperature. Wide open spaces – like those found in the Central U.S. – will increase wind issues.

Timing has a major effect on winter weather safety

Knowing when a winter event will strike is one of the safety official’s greatest assets because it enables a degree of advance warning and planning. But even with early notification, dangerous road conditions that strike during rush hour traffic can be a nightmare. Snowstorms that struck Atlanta, GA, and Birmingham, AL, a few years ago occurred in the middle of the day without adequate warning or preparation and caused travel-related problems.

Pacing of an event is important – the speed with which it occurs can have adverse impacts

Storms that develop in just a few hours can catch people off guard, without appropriate preparation or advance planning. In some regions, like the Northeast, people are so accustomed to winter weather that they ignore the slower, milder events. Many people think it is fine to be out on the roads with a little snowfall, but snow accumulates over time, and it is not long before they are stranded on snowy or icy roads.

As part of considering winter event pacing, emergency planners should become familiar with the terms the National Weather Service (NWS) currently uses to describe winter weather phenomena (snow, sleet, ice, wind chill) that affect public safety, transportation, and/or commerce. Note that for all advisories designated as a “warning,” travel will become difficult or impossible in some situations. In those circumstances, planners should urge people to delay travel plans until conditions improve.

A brief overview of NWS definitions appears on Table 2. For more detailed information, go to https://www.weather.gov/lwx/WarningsDefined.

Planning for winter storms

After hurricanes and tornadoes, severe winter storms are the “third-largest cause of insured catastrophic losses,” according to Dr. Robert Hartwig, immediate past president of the Insurance Information Institute (III), quoted in the online publication Property Casualty 360°. “Most winters, losses from snow, ice and other freezing hazards total approximately $1.2 billion, but some storms can easily exceed that average.”

Given these figures, organizations should take every opportunity to plan proactively. Prepare your organization for winter weather: have a defined plan and communicate it to all staff. The plan should specify who is responsible for monitoring the weather, what information is shared, and how it is shared. Identify the impact on the organization and show how you will maintain your facility, support your customers, and protect your staff.

Once you have a plan, be sure to practice it just as you would any other crisis plan. Communicate the plan to others in the supply chain and to transportation partners. Make sure your generator tank is filled and ready for service.

Implement your plan and be sure to review and revise it based on how events unfold and feedback from those involved.

A variety of tools are available to help prepare action plans for weather events. The following three figures are tools Baron developed for building action plans for various winter weather events.

Use these tools to determine the situation’s threat level, then adopt actions suggested for moderate and severe threats – and develop additional actions based on your own situation.
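The planning figures themselves are not reproduced here, but the underlying pattern – map each threat level to a concrete set of actions and escalate as conditions worsen – can be captured in a small data structure. A minimal sketch, with purely illustrative levels and actions rather than Baron’s actual planning tool:

    from enum import IntEnum

    class ThreatLevel(IntEnum):
        LOW = 1
        MODERATE = 2
        SEVERE = 3

    # Illustrative winter-storm action plan; each level inherits the actions below it.
    ACTION_PLAN = {
        ThreatLevel.LOW: [
            "Assign a staff member to monitor NWS advisories and local forecasts",
            "Confirm contact lists for staff, supply chain, and transportation partners",
        ],
        ThreatLevel.MODERATE: [
            "Top off generator fuel and test backup power",
            "Brief staff on travel guidance and remote-work arrangements",
        ],
        ThreatLevel.SEVERE: [
            "Activate the emergency operations plan and open warming shelters if needed",
            "Urge staff to delay all non-essential travel until conditions improve",
        ],
    }

    def actions_for(level):
        """Return every action at or below the given threat level."""
        return [a for lvl in ThreatLevel if lvl <= level for a in ACTION_PLAN[lvl]]

    for step in actions_for(ThreatLevel.MODERATE):
        print("-", step)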

Weather technology assists in planning for winter events

A crucial part of planning for winter weather is having reliable, detailed weather information to understand how the four factors cited above affect a particular event. For example, Baron Threat Net provides mapping that includes local bodies of water and rivers along with street-level detail. Threat Net also provides weather pattern trends and expected arrival times, along with their expected impact on specific areas, including 48-hour models of temperature, wind speed, accumulated snow, and accumulated precipitation. In addition to Threat Net, the Baron API weather solution can be used by organizations that need weather data integrated into their own products and services.

To assist with the pacing evaluation, proximity alerts can forecast an approaching wintry mix and snow, and can be used along with NWS advisories. While these advisories are critical, a storm or event has to reach the NWS threshold before one is issued. Technology like proximity alerting is helpful precisely because an event that does not reach an NWS-defined threshold can still be dangerous. Pinpoint alerting capabilities can notify organizations when dangerous storms are approaching. Current-conditions road weather information covers flooded, slippery, icy, and snow-covered roads. The information can be viewed on multiple fixed and mobile devices at one time, including an operations center display, desktop display, mobile phone, and tablet.
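As a rough illustration of the proximity-alerting idea, the sketch below checks whether reported storm cells fall within a set radius of a facility and raises an alert if so. The feed format is hypothetical – a real integration would pull cell positions from a provider such as the Baron API – and the 40 km radius is an arbitrary assumption:

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometers between two latitude/longitude points."""
        earth_radius_km = 6371.0
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * earth_radius_km * math.asin(math.sqrt(a))

    def proximity_alerts(facility, storm_cells, radius_km=40.0):
        """Flag any storm cell within radius_km of the facility, NWS advisory or not."""
        alerts = []
        for cell in storm_cells:
            dist = haversine_km(facility["lat"], facility["lon"], cell["lat"], cell["lon"])
            if dist <= radius_km:
                alerts.append(f"{cell['hazard']} about {dist:.0f} km from {facility['name']}")
        return alerts

    # Hypothetical feed data; a real integration would pull cell positions from a weather API.
    facility = {"name": "County EOC", "lat": 41.10, "lon": -80.65}
    cells = [
        {"hazard": "Wintry mix", "lat": 41.30, "lon": -80.90},
        {"hazard": "Heavy snow band", "lat": 42.50, "lon": -82.00},
    ]
    for message in proximity_alerts(facility, cells):
        print(message)

Because the check runs against the raw feed rather than official products, it fires even when an event never crosses an NWS warning threshold – which is the point made above.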

An example is a Nor’easter that occurred in February 2017 along the East Coast. The Baron forecasting model was accurate and consistent in the placement of the heavy precipitation, including the rain/snow line, leading up to the event and throughout the storm. Models also accurately predicted the heaviest bands of snow, snow accumulation, and wind speed. As radar showed the rain-to-snow line slowly moving east, the road conditions product displayed a brief spatial window in which, once the snow fell, roads stayed wet for a short time before becoming snow-covered – evident in central and southern NJ and southeastern RI.

Final thoughts on planning for winter weather

Every region within the United States will experience winter weather differently. The key is knowing what you are up against and how you can best respond. Considering the four key factors – location, topography, timing, and pacing – will help your organization plan and respond proactively.

By Ed Beadenkopf, PE

As we view with horror the devastation wrought by recent hurricanes in Florida, South Texas, and the Caribbean, questions are rightly being asked about what city planners and government agencies can do to better prepare communities for natural disasters. The ability to plan and design infrastructure that provides protection against natural disasters is obviously a primary concern of states and municipalities. Likewise, federal agencies such as the Federal Emergency Management Agency (FEMA), the U.S. Army Corps of Engineers (USACE), and the U.S. Bureau of Reclamation cite upgrading aging water infrastructure as a critical priority.

Funding poses a challenge

Addressing water infrastructure assets is a major challenge for all levels of government. While cities and municipalities are best suited to plan individual projects in their communities, they do not have the funding and resources to address infrastructure issues on their own. Meanwhile, FEMA, USACE and other federal agencies are tasked with broad, complex missions, of which flood management and resiliency is one component.

Federal funding for resiliency projects is provided in segments, which inadvertently prevents communities from addressing a problem in its entirety. Instead, funding must be divided among smaller projects that never address the whole issue. To make matters even more challenging, recent reports indicate that the White House plan for infrastructure investment will require leveraging a major percentage of funding from state and local governments and the private sector.

Virtual, long-term planning is the solution

So, what’s the answer? How can we piece together an integrated approach between federal and local governments with segmented funding? Put simply, we need effective, long-term planning.

Cities can begin by planning smaller projects that can be integrated into the larger, federal resilience plan. Local governments can address funding as a parallel activity to their master planning. Comprehensive planning tools, such as the Atkins-designed City Simulator, can be used to stress test proposed resilience-focused master plans.

A master plan developed using the City Simulator technology is a smart document that addresses the impact of growth on job creation, water conservation, habitat preservation, transportation improvements, and waterway maintenance. It enables local governments to be the catalyst for high-impact planning on a smaller scale.

By simulating a virtual version of a city growing and being hit by climate change-influenced disasters, City Simulator measures the real impacts and effectiveness of proposed solutions and can help lead the way in selecting the improvement projects with the highest return on investment (ROI). The resulting forecasts of ROIs greatly improve a community’s chance of receiving federal funds.
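As a simplified illustration of that approach (not the City Simulator itself), the following sketch runs a small Monte Carlo simulation of storm losses over a 30-year horizon, with and without each candidate project, and compares the resulting ROI. All probabilities, losses, and mitigation factors are made-up assumptions:

    import random

    def simulated_losses(annual_storm_prob, loss_per_storm, loss_factor, years=30, runs=2000):
        """Average cumulative flood loss (in $M) over the planning horizon, Monte Carlo style.

        loss_factor scales the loss once a resilience project is in place (1.0 = no project).
        Every input here is an illustrative assumption.
        """
        total = 0.0
        for _ in range(runs):
            for _ in range(years):
                if random.random() < annual_storm_prob:
                    total += loss_per_storm * loss_factor
        return total / runs

    projects = [
        # (name, capital cost in $M, fraction of storm losses remaining once built)
        ("Levee upgrade", 60.0, 0.35),
        ("Stormwater retention park", 25.0, 0.70),
    ]

    baseline = simulated_losses(annual_storm_prob=0.2, loss_per_storm=80.0, loss_factor=1.0)
    for name, cost, remaining in projects:
        avoided = baseline - simulated_losses(0.2, 80.0, remaining)
        roi = (avoided - cost) / cost
        print(f"{name}: avoided losses ${avoided:.0f}M, ROI {roi:.1f}x")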

Setting priorities helps with budgeting

While understanding the effectiveness of resiliency projects is critical, communities must also know how much resiliency they can afford. For cities and localities prone to flooding, a single resiliency asset can cost tens of millions of dollars, the maintenance of which could exhaust an entire capital improvement budget if planners let it. Using effective cost forecasting and schedule optimization tools that look at the long-term condition of existing assets can help planners prioritize critical projects that require maintenance or replacement, while knowing exactly what impact these projects will have on local budgets and whether additional funding will be necessary.
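One simple way to express that prioritization problem is a greedy selection: rank candidate projects by forecast benefit per dollar and fund them in order until the capital budget is exhausted. The sketch below uses invented figures and ignores scheduling and asset-condition curves, which real forecasting tools would also weigh:

    def prioritize(projects, budget):
        """Greedy selection: fund the highest benefit-per-dollar projects until the budget runs out.

        projects is a list of (name, cost, expected_benefit) tuples in $M; the figures
        below are invented, not output from any particular forecasting tool.
        """
        ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)
        selected, remaining = [], budget
        for name, cost, benefit in ranked:
            if cost <= remaining:
                selected.append(name)
                remaining -= cost
        return selected, budget - remaining

    candidates = [
        ("Pump station rehab", 12.0, 48.0),
        ("Culvert replacement", 4.0, 10.0),
        ("Floodwall maintenance", 20.0, 55.0),
    ]
    chosen, committed = prioritize(candidates, budget=30.0)
    print(f"Fund now: {chosen} (committed ${committed}M of $30M)")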

It is imperative to structure a funding solution that can address these critical projects before they become recovery issues. Determining which communities are affected by the project is key to planning how to distribute equitable responsibility for the necessary funds to initiate the project. Once the beneficiaries of the project are identified, local governments can propose tailored funding options such as Special Purpose Local Option Sales Tax, impact fees, grants, and enterprise funds. The local funding can be used to leverage additional funds through bond financing, or to entice public-private partnership solutions, potentially with federal involvement.

Including flood resiliency in long-term infrastructure planning creates benefits for the community that go beyond flood prevention, while embracing master planning has the potential to impact all aspects of a community’s growth. Local efforts of this kind become part of a larger national resiliency strategy that goes beyond a single community, resulting in better prepared cities and a better prepared nation.

Ed Beadenkopf, PE, is a senior project director in SNC-Lavalin’s Atkins business with more than 40 years of engineering experience in water resources program development and project management. He has served as a subject matter expert for the Federal Emergency Management Agency, supporting dam and levee safety programs.

Extreme Science: The San Andreas Fault

There’s a crack in California. It stretches for 800 miles, from the Salton Sea in the south, to Cape Mendocino in the north. It runs through vineyards and subway stations, power lines and water mains. Millions live and work alongside the crack, many passing over it (966 roads cross the line) every day. For most, it warrants hardly a thought. Yet in an instant, that crack, the San Andreas fault line, could ruin lives and cripple the national economy.

In one scenario produced by the United States Geological Survey, researchers found that a big quake along the San Andreas could kill 1,800 people, injure 55,000 and wreak $200 billion in damage. It could take years, nearly a decade, for California to recover.

On the bright side, the process of building and maintaining all the infrastructure that crosses the fault has given geologists an up-close and personal look at it over the past several decades, contributing to a growing and extensive body of work. While the future remains uncertain (no one can predict when an earthquake will strike), people living near the fault are better prepared than they have ever been.

...

https://www.popsci.com/extreme-science-san-andreas

The new ISO 31000 keeps risk management simple

Damage to reputation or brand, cyber crime, political risk and terrorism are some of the risks that private and public organizations of all types and sizes around the world must face with increasing frequency. The latest version of ISO 31000 has just been unveiled to help manage the uncertainty.

Risk enters every decision in life, but clearly some decisions need a structured approach. For example, a senior executive or government official may need to make risk judgements associated with very complex situations. Dealing with risk is part of governance and leadership, and is fundamental to how an organization is managed at all levels.

Yesterday’s risk management practices are no longer adequate to deal with today’s threats and they need to evolve. These considerations were at the heart of the revision of ISO 31000, Risk management – Guidelines, whose latest version has just been published. ISO 31000:2018 delivers a clearer, shorter and more concise guide that will help organizations use risk management principles to improve planning and make better decisions. Following are the main changes since the previous edition:

...

https://www.iso.org/news/ref2263.html

Some things are hard to predict. And others are unlikely. In business, as in life, both can happen at the same time, catching us off guard. The consequences can cause major disruption, which makes proper planning, through business continuity management, an essential tool for businesses that want to go the distance.

The Millennium brought two nice examples, both of the unpredictable and the improbable. For a start, it was a century leap year. This was entirely predictable (it occurs any time the year is cleanly divisible by 400). But it’s also very unlikely, from a probability perspective: in fact, it’s only happened once before (in 1600, less than 20 years after the Gregorian calendar was introduced).
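The calendar rule alluded to here is easy to state in code; 1600 and 2000 are so far the only century years that pass the final divisible-by-400 test:

    def is_leap_year(year):
        """Gregorian rule: divisible by 4, except century years, unless divisible by 400."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    print([y for y in (1600, 1700, 1800, 1900, 2000, 2100) if is_leap_year(y)])  # [1600, 2000]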

A much less predictable event in 2000 happened in a second-hand bookstore in the far north of rural England. When the owner of Barter Books discovered an obscure war-time public-information poster, it triggered a global phenomenon. Although it took more than a decade to peak, just five words spawned one of the most copied cultural memes ever: Keep Calm and Carry On.

...

https://www.iso.org/news/ref2240.html

Mahoning County is located on the eastern edge of Ohio at the border with Pennsylvania. It has a total area of 425 square miles, and as of the 2010 census, its population was 238,823. The county seat is Youngstown.

Challenges

  • Eliminate application slowdowns caused by backups spilling over into the workday
  • Automate remaining county offices that were still paper-based
  • Extend use of data-intensive line-of-business applications such as GIS

...

https://www.riverbed.com/customer-stories/mahoning-county-ohio.html

Anyone following enterprise data storage news couldn’t help but notice that parts of the backup market are struggling badly. From its glory days a couple of years back, the purpose-built backup appliance (PBBA), for example, has been trending downward in revenue, according to IDC.

"The PBBA market remains in a state of transition, posting a 16.2% decline in the second quarter of 2017," said Liz Conner, an analyst at IDC. "Following a similar trend to the enterprise storage systems market, the traditional backup market is declining as end users and vendors alike explore new technology."

She’s talking about alternatives such as the cloud, replication and snapshots. But can these really replace backup?

...

http://www.enterprisestorageforum.com/backup-recovery/data-storage-backups-vs-snapshots-and-replication.html
