Introducing Frontline Live 5: where continuity and compliance converge

    Continuity Logic’s Frontline Live 5™ is the first leader in Gartner’s Business Continuity Management Planning (BCMP) software Magic Quadrant to effectively converge continuity, risk and compliance in one easy-to-use, cloud-based solution.

Webinar: Incident Readiness is the New BCM
Wednesday, June 17, 2015, 2:00 PM - 3:00 PM EST

Space is limited. Reserve your webinar seat now at:
https://attendee.gotowebinar.com/register/5318730635963684865

Incident Readiness is the New BCM – Jim Mitchell, Director, eBRP

Through its experience implementing BCM programs for many global enterprises and state and federal agencies, eBRP has a bird’s-eye view of trends in BCM programs.

As organizations’ BCM programs mature, the program objectives constantly evolve and change. Some of these changes are driven by regulators and auditors who are looking for assurance that your BCM program is viable, can withstand stress and is sustainable.

During this informational session, eBRP will showcase trends and approaches in Planning, Exercising, Incident Response and Incident Management.

After registering, you will receive a confirmation email containing information about joining the webinar.


You’re ready for advancement, you want to learn, and you’re looking for an educational programme that encompasses the needs of your current or planned role in protecting and preserving your organisation’s functionality, viability and profitability. The MSc Organisational Resilience at Buckinghamshire New University will be good for you – here’s why:

You will become confident, capable and thorough in your knowledge and understanding of organisational resilience

You will understand how resilience needs to match the context of a changing global operating and threat landscape

You will develop the important skill of not just talking about resilience, but taking an analytical approach that allows you to offer balanced, evaluated solutions to real problems and issues

...

https://buckssecurity.wordpress.com/2015/05/26/why-the-msc-organisational-resilience-will-be-good-for-you-2/

To ensure the availability of high-performance, mission-critical IT services, IT departments need both solid monitoring capabilities and dedicated IT resources to resolve issues as they occur. But even with the right tools in place, when an abundance of alerts and alarms starts streaming in, it can quickly become overwhelming, particularly when IT staff have been asked to focus their time and attention on activities that both support the organization’s end users and add to the company’s bottom line.

Logicalis US suggests that organizations need to ask the following five key questions to help ensure that enterprise IT monitoring is fit for purpose:

1. Is your monitoring tool configured properly? Most organizations have off-the-shelf monitoring tools that gather information from all of the devices on their network. The information coming from these tools can be overwhelming, and while it may be helpful to have access to all of that data, weeding through it in crunch-time can be cumbersome. To limit alerts to those that are most important takes training, knowledge and expertise, which leads many organizations that want to manage IT monitoring in house to employ full-time experts just to configure and manage their monitoring tools.

2. Do you update regularly? Since rules are continually being added to monitoring tools, monitoring isn’t an ‘implement and forget it’ situation, which means IT departments spend a considerable amount of time making sure the tools they depend on for alerts are as current and up-to-date as possible.

3. Can your tool provide event correlation? A single network error can have a ripple effect, impacting applications that would otherwise appear completely unrelated. As a result, it’s critical that an IT monitoring tool provide event correlation to speed diagnosis and remediation in all affected areas (the first sketch after this list illustrates the idea).

4. Does your monitoring tool offer historical trending data? When managing an enterprise environment, IT pros need to analyze historical trend data to identify recurring issues and to do capacity planning which, in many cases, can help prevent issues before they arise. Some of today’s popular monitoring tools, however, either operate only in real time or store historical data for 30 days or less. Knowing what your tool offers is important, since the ability to intelligently analyze and manage an organization’s IT environment can depend on long-term access to this historical data (the second sketch after this list shows simple trend-based capacity planning).

5. Do you have the right expertise in house? In an enterprise IT environment, it’s important to consider internal staffing needs and the expertise required to manage the monitoring tools and process in house. Keeping an enterprise environment up and running is no longer IT’s value-add; it’s an expectation. Today, most organizations want their IT staff delivering business results, which is why it may make sense to consider outsourcing monitoring to a third party skilled in assessing and limiting incident reports to only the handful that a busy internal staff actually needs to address.
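To make the event correlation point (question 3) concrete, here is a minimal, hypothetical sketch, not drawn from Logicalis or any particular product, that groups alerts sharing an upstream network segment within a short time window, so a single network error surfaces as one correlated incident rather than a stream of seemingly unrelated application alerts. The topology map, field names and five-minute window are all assumptions for illustration.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hypothetical topology: which upstream network segment each host depends on.
    SEGMENT = {"db-01": "segment-A", "crm-web": "segment-A", "mail-01": "segment-B"}

    def correlate(alerts, window_minutes=5):
        """Group alerts by shared upstream segment and time bucket, so one
        network fault shows up as a single incident instead of many alerts."""
        incidents = defaultdict(list)
        for ts, host, message in sorted(alerts):
            bucket = ts.replace(second=0, microsecond=0) - timedelta(minutes=ts.minute % window_minutes)
            incidents[(SEGMENT.get(host, "unknown"), bucket)].append((host, message))
        return incidents

    alerts = [
        (datetime(2015, 6, 17, 14, 1), "db-01", "connection refused"),
        (datetime(2015, 6, 17, 14, 2), "crm-web", "upstream timeout"),
        (datetime(2015, 6, 17, 14, 30), "mail-01", "queue backlog"),
    ]
    for (segment, start), related in correlate(alerts).items():
        print(f"{segment} starting {start}: {len(related)} related alert(s)")

In practice the grouping key would come from real topology or dependency data, but the principle is the same: correlate symptoms back to a probable common cause before paging anyone.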
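On question 4, a rough sketch of why long-term history matters: even a simple linear fit over months of utilization samples lets you project when a resource will run out of headroom, something 30 days of real-time data cannot reveal. The figures below are invented purely for illustration.

    # Hypothetical monthly storage-utilization samples (fraction of capacity).
    samples = [0.52, 0.55, 0.59, 0.61, 0.66, 0.70]  # six months of history

    n = len(samples)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n

    # Least-squares slope: average growth in utilization per month.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))

    # Project how many months remain before the trend crosses 90% utilization.
    months_left = (0.90 - samples[-1]) / slope
    print(f"Growing ~{slope:.1%} per month; about {months_left:.0f} months until 90% utilization")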

www.us.logicalis.com

On 12th December 2014 NATS, the UK's leading provider of air traffic control services, experienced a failure in its Swanwick flight data system. The outage resulted in widespread flight delays and cancellations. A report has now been published which details the events behind the outage and subsequent business continuity response.

Written by an enquiry panel led by Sir Robert Walmsley, the report finds that:

  • The failure occurred on 12th December because of a latent software fault that had been present since the 1990s. The fault lay in the software’s performance of a check on the maximum permitted number of Controller and Supervisor roles (a simplified, hypothetical sketch follows this list).
  • The system error was triggered by a number of new Controller roles that had been added to the system the day before.
  • The standard practice in NATS is that engineering recovery is coordinated through a group of designated engineers, known as the Engineering Technical Incident Cell (ETIC) and drawn from those available in the Systems Control Centre adjacent to the Operations Room. While some recovery actions are automated, ETIC manually control all key recovery actions, e.g. the restoration of data, to ensure that decisions are made with due and careful deliberation; this is important, as the wrong decisions could have further downgraded performance.
  • Identifying a software fault in such a large system (the total application exceeds 2 million lines of code), within only a few hours, is a surprising and impressive achievement. This was made possible because system logs contain details of the interactions at the workstations.
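The first two findings describe a classic pattern: a limit check written against an assumption baked into the original design stays dormant for years, then fails the day after a configuration change crosses the old ceiling. The sketch below is purely hypothetical and is not taken from the report or from NATS code; the constant, names and structure are invented only to illustrate the pattern.

    # Hypothetical illustration of a latent limit check (not NATS code).
    MAX_ROLES = 151  # invented ceiling, fixed when the system was designed

    def validate_configuration(controller_roles, supervisor_roles):
        """Reject any configuration whose combined role count exceeds the
        hard-coded ceiling. The check is harmless until the live configuration
        finally grows past the stale limit."""
        total = len(controller_roles) + len(supervisor_roles)
        if total > MAX_ROLES:
            raise RuntimeError(f"{total} roles exceeds permitted maximum of {MAX_ROLES}")
        return total

    # Adding a handful of new Controller roles the day before pushes the total
    # over the old ceiling, and a fault present for two decades finally surfaces.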

The detailed 93-page report is available here as a PDF and should be of interest to business continuity managers whatever their sector. It shows how legacy systems can have unexpected and unanticipated impacts, as well as giving useful details about the business continuity plans and strategies that were in place at the time of the incident.

The report makes clear that although this was a high profile incident which caused difficulties for NATS' direct customers and the supply chain, it was undoubtedly a business continuity success. Without a strong recovery team response and the pre-planned procedures that were in place, the incident and disruption would have been much worse.

According to a new market research report published by MarketsandMarkets the mass notification market is estimated to grow from $3.81 billion in 2015 to $8.57 billion in 2020. This represents a compound annual growth rate (CAGR) of 17.6 percent from 2015 to 2020.
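For context, the quoted growth rate is simply the compound annual rate implied by the two forecast endpoints:

\[
\text{CAGR} = \left(\frac{8.57}{3.81}\right)^{1/5} - 1 \approx 0.176 = 17.6\%
\]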

The major forces driving this market are the growing need for public safety, increasing awareness of emergency communication solutions, the requirement for mass notification for business continuity, and the trend towards mobility.

The report says that business continuity, disaster recovery and public safety compliance standards are boosting sales of mass notification solutions.

Mass notification solutions providers are expected to collaborate and provide better competitive services to take advantage of the emerging mass notification market and to meet the need for complete crisis communication solutions.

Obtain the ‘Mass Notification Market by Solution (In-Building, Wide-Area, Distributed Recipient), by Application (Interoperable Emergency Communications, Business Continuity & Disaster Recovery, Integrated Public Alert & Warning, Business Operations), by Deployment, by Vertical & by Region - Global Forecast to 2020’ report from here.

Most people are visually oriented when it comes to taking in information. They also prefer analogue displays to digital ones. In other words, when it comes to understanding risk as part of business continuity, they like colours and graphics rather than numbers in a spreadsheet. That makes the risk heat map a popular choice for presenting summary risk information to non-risk experts or senior management. Typically, areas in red on the heat map indicate the biggest risks and areas in green the smallest or most acceptable risks. But is this approach in fact too limited?
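As a trivial illustration (not from the linked article) of how a heat map compresses risk information, here is a minimal sketch that reduces likelihood-times-impact scores to the familiar red/amber/green bands; every risk inside a band looks identical, which is precisely the limitation being questioned. The 1-5 scale, thresholds and example risks are invented.

    # Hypothetical risks scored on 1-5 scales for (likelihood, impact).
    risks = {
        "data centre outage": (2, 5),
        "key supplier failure": (4, 4),
        "payroll processing error": (4, 2),
    }

    def band(likelihood, impact):
        """Collapse a likelihood x impact score into a heat-map colour band."""
        score = likelihood * impact
        if score >= 15:
            return "red"
        if score >= 8:
            return "amber"
        return "green"

    for name, (likelihood, impact) in risks.items():
        print(f"{name}: {band(likelihood, impact)} (score {likelihood * impact})")

Note how the amber band lumps together a rare but severe outage and a frequent but modest processing error, one reason a heat map alone can be too blunt an instrument.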

...

http://www.opscentre.com.au/blog/the-colour-of-continuity-and-the-risks-of-red-and-green/


New Approaches to IT Efficiency

Virtually everyone is in favor of an energy-efficient data center. But if that is the case, why has the industry struggled so mightily to reduce power consumption?

Even with the remarkable gains in virtualization and other advanced architectures, the data center remains one of the primary energy consumers on the planet, and even worse, a top cost-center for the business.

But the options for driving greater efficiency in the data center are multiplying by the day – from low-power, scale-out hardware to advanced infrastructure and facilities management software to new forms of power generation and storage. As well, there is the option to offload infrastructure completely to the cloud and refocus IT around service and application delivery, in which case things like power consumption and efficiency become someone else’s problem.

...

http://www.itbusinessedge.com/blogs/infrastructure/new-approaches-to-it-efficiency.html

Editor’s Note: This is part of a series on the factors changing data analytics and integration. The first post covered cloud infrastructure; the second discussed new data types; and the third focused on data services.

Data keeps expanding, but only recently have organizations been able to store the data in useful ways. Now, organizations can theoretically keep data at the ready, whether it’s in the cloud, a data lake or an in-memory appliance.

Hopefully, it will soon be archaic to hear my doctor say, “Oh, we sent that x-ray to tape. We could get it — but it’s a huge hassle.”

The ability to store mass data is one of the five data evolutions that David Linthicum cited in his thesis on “The Death of Traditional Data Integration.” The ability to pool Big Data sets would not be disruptive, though, if it weren’t coupled with the ability to access them easily and as needed for analytics. As Informatica CEO Sohaib Abbasi points out, this “richness of big data is disrupting the analytics infrastructure.”

...

http://www.itbusinessedge.com/blogs/integration/four-questions-to-ask-before-building-the-data-infrastructure-of-tomorrow.html