We think of the Internet as a borderless entity, but that could all change, according to an annual emerging risk report from Swiss Re.
The publication is based on the SONAR process, an internal crowdsourcing tool that collects inputs and feedback from underwriters, client managers, risk experts and others to identify, assess and manage emerging risks.
Increased localization of internet networks within country borders is one of the key emerging risks that industry players should prepare for, the report suggests.
How have IT departments been doing so much more with so much less? Cloud service providers have done what so many CIOs and IT managers have only dreamed of.
They have packaged virtualisation, automation, replication and innovation together, and put cost reduction in as part of the deal too.
Never before have enterprises and organisations had so much power at their fingertips for so few dollars (well, thousands of dollars). However, there’s just one big drawback.
The drawback isn’t really due to cloud computing. After all, much of cloud computing fulfils its function marvellously well. That includes providing resources for business continuity and disaster recovery, as well as for data archiving.
For many organizations, meeting the current year's goals and objectives for the business continuity management (BCM) program is a constant challenge. The causes and symptoms are many, including:
- Exercises continually fail to meet recovery time objective (RTO) targets.
- Internal and/or external auditors have open findings that have not been remediated.
- The board, interested parties, customers and other stakeholders are making more demands.
- The competition now has certified BCM programs and is winning more business.
- A lack of confidence in consistently meeting contractual and regulatory obligations.
- A need to expand the BCM program scope, e.g., to additional departments, regions, or community responders.
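As a concrete illustration of the first bullet, the gap between exercise results and RTO targets can be checked mechanically. The systems and times below are hypothetical, not drawn from any real program:

```python
# Hypothetical exercise results: actual recovery times (minutes) per system,
# compared against each system's RTO target. All names and numbers are illustrative.
rto_targets = {"email": 60, "erp": 240, "web_store": 30}
exercise_results = {"email": 75, "erp": 180, "web_store": 45}

def rto_gaps(targets, results):
    """Return the systems whose measured recovery time exceeded the RTO target,
    mapped to how many minutes over target they were."""
    return {
        system: results[system] - target
        for system, target in targets.items()
        if results.get(system, float("inf")) > target
    }

print(rto_gaps(rto_targets, exercise_results))
# → {'email': 15, 'web_store': 15}
```

A report like this, produced after every exercise, turns the vague "exercises continually fail" symptom into a per-system backlog that can be prioritized.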
But there is hope. A set of fresh eyes to perform a gap analysis of your BCM program can highlight non-conformities and provide direction on how to reasonably move forward to meet your goals.
Turmoil in emerging markets, increased localisation of Internet networks within country borders and financial repression are some of the key risks identified in this year's Swiss Re SONAR report, published recently. Although aimed at the insurance sector, the report contains useful information for all enterprise risk managers. The publication is based on the SONAR process, a crowdsourcing tool drawing on Swiss Re's internal risk management expertise to pick up early signals of what lies beyond the horizon.
The report offers insights into emerging risks, those newly developing or evolving risks whose potential impact and scope are not yet sufficiently taken into account. Among these, the report also highlights a ‘crisis of trust’ in institutions, the ‘legal and pricing risks of the sharing economy’ and technology-related topics, such as the rise of ‘precision medicine’ and ‘distributed energy generation’.
"Risk management is not just about managing risks in the present. It is about anticipating future ones to make sure we will be in a position to deal with them," says Patrick Raaflaub, Swiss Re's Group Chief Risk Officer. "These risks may only fully reveal themselves to future generations. That doesn't mean that we shouldn't act today to reduce uncertainty and alleviate their burden."
The identified risks are relevant to life and non-life insurance areas and are presented with the goal of helping industry players prepare for new scenarios by adapting their behaviours, market conduct and product portfolios.
Detecting early signals of looming threats allows for a proactive approach to risk mitigation and is an important step to help society as a whole to become more resilient.
The report identifies three risks with the highest potential impact:
Emerging markets crisis 2.0: turmoil in emerging countries could hinder the market entry and the penetration strategies of global insurance companies and even result in higher underwriting losses, especially in property, personal and commercial lines, for example in the case of riots.
The great monetary experiment: the long-term costs of negative interest rates and unconventional monetary policies are still unknown, yet they might lead to a broader loss of confidence in the monetary system. Short-term benefits are limited as the policies are unlikely to boost economic growth.
Internet fragmentation: firewalls, special software to filter out unwanted information and isolated IT infrastructure detached from global networks mean that disconnected nets could soon become a reality. Their potential impact includes increased costs and disrupted business models for insurance companies and other businesses operating across borders.
Unplanned system downtime is a reality that IT departments need to deal with every day. Some even see downtime as the worst thing that can happen to their IT systems. As almost everything we know has gone through a digital transformation, businesses rely more and more on IT; an IT issue is therefore a business issue. When critical incidents occur, business operations can quickly suffer:
- Loss of online revenue for e-retailers
- A drop-off in employee productivity in manufacturing
- Frustrated clinicians, increased patient-safety risk and reduced hospital bed turnover rates
- Impact on brand, company image and patient satisfaction
Not long ago, CloudEndure published a survey that put system downtime, and more specifically the cost of system downtime, into perspective. The online survey was conducted in January 2016 and collected responses from 141 IT professionals around the world who were using or looking to implement disaster recovery.
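The survey's findings aside, the cost side of downtime can be sketched with simple arithmetic. All figures below are illustrative assumptions, not numbers from the CloudEndure survey:

```python
# Back-of-the-envelope downtime cost model. Every input here is an
# illustrative assumption; a real estimate needs the organization's own data.
def downtime_cost(hourly_revenue, hourly_labor, employees_idled,
                  hours_down, recovery_cost=0.0):
    """Estimate the direct cost of an outage: lost revenue, idle labor,
    plus any one-off recovery effort."""
    lost_revenue = hourly_revenue * hours_down
    idle_labor = hourly_labor * employees_idled * hours_down
    return lost_revenue + idle_labor + recovery_cost

# e.g. an e-retailer doing $5,000/hour online, 20 staff idled at $40/hour,
# down for 3 hours, plus $2,000 of recovery work:
print(downtime_cost(5000, 40, 20, 3, 2000))  # → 19400
```

Even this crude model makes the earlier point concrete: a single afternoon of downtime can dwarf the cost of the disaster recovery tooling meant to prevent it.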
Springtime is a time for flowers, leaves on trees and new grass – a manifestation of nature’s own recycling program – but it also marks the beginning of weather patterns that can create less-inviting scenarios. Between the tornado season, the hurricane season kickoff and what traditionally has been the start of a fire season, springtime lights up a veritable cauldron of natural disasters just waiting to boil over.
That’s why MSPs at this time of year should be talking to their clients about data backup and disaster recovery (BDR) strategies. With those clients who already have a strategy in place, this is a good time to review their plans to assess whether they still meet all of the clients’ requirements.
Are all new users included in the backup process? Are they aware of recovery procedures in the event of a disaster? Have any systems been installed recently that require some kind of upgrade to the BDR?
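The first of those questions lends itself to a mechanical check. A minimal sketch, assuming user lists can be pulled from a directory service and from the backup tool's inventory (both sets here are made up):

```python
# Sketch of one BDR review check: are all current users covered by backup?
# Both sets are illustrative; in practice they would come from a directory
# service export and the backup product's coverage report.
current_users = {"alice", "bob", "carol", "dave"}
backed_up_users = {"alice", "bob", "carol"}

# Set difference finds anyone in the directory but not in backup scope.
uncovered = current_users - backed_up_users
if uncovered:
    print(f"Not in backup scope: {sorted(uncovered)}")
```

Running a diff like this on a schedule catches the common failure mode where onboarding adds users faster than the backup scope is updated.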
So, you and your family have survived a tornado; it’s awesome that you were prepared, and you ended up coming out of it in good shape. Unfortunately, after a tornado, it’s very common for homeowners to see significant property damage. When you’re dealing with structural damage to your home, you need to consider the safety of your family, and what you do after a tornado can be just as important as what you did in preparation for it.
A study of tornado damage in Marion, Illinois, showed that 50 percent of tornado-related injuries occurred after the storm had passed. It's common for injuries to occur during cleanup and other post-tornado activities; almost a third of these injuries occurred after a person stepped on a nail. A tornado damages power lines and gas lines, and when you combine that with storm debris, the risk of injury rises sharply.
Object storage delivers an underlying agility that lets many different users access and use data, through many different applications, across a wide range of locations.
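That flat, key-based access model can be illustrated with a toy in-memory store. This is purely a sketch of the concept, not any real object storage API:

```python
# Toy in-memory illustration of object storage's flat key/value model:
# any client that knows a key can fetch the object, regardless of which
# application wrote it or where it runs. Not a real store — just the concept.
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data, metadata=None):
        # Keys live in one flat namespace; "reports/2016/q1.csv" is a single
        # key, not a directory path.
        self._objects[key] = {"data": data, "metadata": metadata or {}}

    def get(self, key):
        return self._objects[key]["data"]

# One application writes an object...
store = ObjectStore()
store.put("reports/2016/q1.csv", b"revenue,region\n...", {"app": "finance"})

# ...and a completely different consumer reads it by key alone.
print(store.get("reports/2016/q1.csv"))
```

The design point is that access depends only on the key and the data's attached metadata, which is what lets varied applications in varied locations share the same objects.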
Have you ever met a senior corporate executive who was asking for data?
Not likely. Answers are what most senior execs are seeking. Actionable answers. Answers that can help them more quickly make more highly effective decisions that drive truly impactful action.
SALEM, Ore. — When Target’s systems were breached in 2013, it was rumored that the cyber side of the house had the information it needed but didn’t know it was looking at an attack that compromised its customers’ credit card information.
In just the last decade, threat vectors have evolved from the standard “known” perils of the cyber realm to attacks that change form between initial compromise and detection within systems, and the ever-changing threats are not just a problem for the private sector.
During the Oregon Digital Government Summit held May 24, Bob Pelletier with Palo Alto Networks discussed the issues facing IT teams everywhere and how they could better defend their networks from bad actors.
In speaking with enterprise CIOs and IT managers, I hear a lot of the same stories about successful technology deployments and complicated mistakes. As companies scale, they tend to take separate paths to similar ends, eventually running into the same obstacles and undertakings.
One of the most interesting, but not infrequent, stories I’ve heard comes from enterprises that recently built primary or secondary data centers – without considering that in the modern cloud era, there are no circumstances under which a company should build a data center.
A company telling this story likely bought land and constructed its new data center in a remote part of the country, where real estate and utilities were cheap. It signed a contract with the single network carrier that served the area. Then, as the organization grew and sought to work with new service providers, the team was surprised to learn that the site's supposedly advantageous location cut the data center off from certain services, ultimately putting a cap on the company's growth.