Industry Hot News (6684)
More than 500 Red Cross volunteers are helping people affected by Hurricane Odile in the Mexican state of Baja California Sur. The volunteers, 120 of whom are paramedics, are providing basic medical check-ups and delivering food to people housed in shelters. The Red Cross has sent 2,000 food parcels to the city of Los Cabos. In addition, volunteers are carrying out damage assessments in Baja California Sur to determine the most urgent needs.
The storm has left roughly 82% of the population in Los Cabos and La Paz without electrical power, damaged roadways, and caused ports to close. People affected by the storm have evacuated to 164 shelters in Baja California Sur.
Mexican Red Cross volunteers participating in the response are specialists in collapsed structures, damage evaluations, pre-hospital care, and logistics support in shelters and collection centres. The Mexican Red Cross is working closely with federal authorities, including Civil Protection, the Governor's Secretariat, and the Mexican Marines and Army, to deliver aid to the people affected as quickly as possible.
Another storm—Hurricane Polo—is threatening the Mexican state of Guerrero, where at least 120 Mexican Red Cross volunteers are prepositioned to act if needed.
(MCT) — Among the many things the Bay Area learned from the recent shaker near Napa is that the University of California, Berkeley’s earthquake warning system does indeed work for the handful of people who receive its messages, but most folks find out about a tremor only after it knocks them out of bed.
Silicon Valley has made apps that tell people when their Uber ride is approaching, when their air conditioning has broken or when a thunderstorm is brewing. Yet despite being home to some of the most devastating earthquakes in the country, the region does not have a high-tech earthquake alert system for the public.
But since last month’s temblor, more tech companies are trying to solve that problem. A handful of startups are developing apps that would quickly broadcast warnings of upcoming quakes to users on their smartphones, tablets or other gadgets. Already, the much-joked-about messaging app Yo has rolled out “Earthquake Yo” to hundreds of users.
What is the scarcest IT resource today? Processor power, main memory and disk space all seem to grow unabated, but network bandwidth is still comparatively expensive. Consequently, enterprises tend to have less of it, which in turn leaves them more exposed to possible outages. Luckily, other technology means that bandwidth can be made to do more, even if it's not reasonable to have more of it. Routing voice and data over the same links is a prime example: it simplifies recovery and can also minimize outages. What's missing in the equation is a simple explanation of the terms involved. Here are a few to help you mix and match for the configuration that suits you.
After reading several blogs and articles this week, I’ve learned that many small to midsize businesses (SMBs) tend to learn as they go—especially when it comes to technology. And often, those lessons can be costly.
In a LinkedIn blog post written by Boost IT CEO Russell Shulin, I found a list of six major technology issues often overlooked by SMBs that can bust budgets and deeply affect business. Shulin explains that each is a lesson he has experienced firsthand or seen others experience. Tips SMBs should consider include:
On the morning of Nov. 16, 2013, emergency responders in rural Ouray County, Colo., were called to help miners at a nearby mine. Two were unconscious and 20 were suffering from oxygen deficiency. The two miners tragically died of carbon monoxide poisoning, but a swift, multiagency regional response got the other 20 to safety.
The timing was uncanny. The coordinated response that ensued was practiced in a Mass Casualty Incident Command System (MCICS) training just the day prior to the incident, when those same responders were educated using an active shooter model. The training was applied to the mine incident in a structure that can be generalized to almost any mass casualty incident.
At the Revenue-Virginius mine, the county established a transportation unit leader and group for the first time to accurately track who was coming and going during the emergency.
In total, 30 responders navigated snowy, narrow terrain to reach miners exposed to high levels of toxic carbon monoxide gas. The transportation leader and group were especially helpful in tracking and triaging the miners and ensuring quick treatment at three regional hospitals.
WINNIPEG, MANITOBA, Canada – After decades of working undercover for the Royal Canadian Mounted Police, the U.S. Drug Enforcement Administration and U.S. Customs Service, crime and risk expert Chris Mathers knows where companies are vulnerable and what it takes to protect them.
“In a world where popular culture tells us that the ends justify the means, crime is all about perception,” he said in a keynote address at the 2014 RIMS Canada Conference. “Young people are bombarded with it all the time, but we are in business, too. So the question is, how vulnerable is your business?”
Mathers, who joined the forensic division of KPMG and was later named president of corporate intelligence, shared his insight into how companies can best guard against “the business of crime, and crime in business.”
(MCT) — The San Antonio River Authority has announced the nation's first implementation of software designed to help emergency responders react to dangerous floods.
SARA and the San Antonio Fire Department will hold a news conference Wednesday to discuss the FloodWorks system. It was developed in the United Kingdom and is operational via a “user-friendly, interactive website” at the San Antonio Emergency Operations Center at Brooks City-Base, officials said.
“We're doing the technology development; their role is the response,” Russell Persyn, SARA's watershed engineering manager, said of the joint project with the fire department.
The system, installed late last year and run through tests in the spring, uses historical flood data and weather forecasts to plan a day before a potential flood, with real-time radar updates from the National Weather Service helping responders track developments during a storm.
Reports are published almost daily about the gender pay gap in the UK. In 2013, women earned 19.7 percent less than men doing the same job. While in professional occupations, the pay gap is smaller (around 9 percent), at a senior level, the gender pay gap has not really decreased since 2005. Senior women earn 20.2 percent less than men in a similar role.
When examining the salaries for women in the resilience and governance sectors, recruitment agency BeecherMadden expected to see a similar trend.
However, surprisingly, salaries for women in resilience and governance roles buck the trend of women being paid less. Comparing appointments made in the past year, women have been paid up to 30 percent more than men with comparable experience who were appointed at a similar time into similar organizations.
BeecherMadden also found several examples of women with less experience in their role than men who were earning around 10 percent more for a similar role. The difference is most notable for those going into their second jobs; candidates with three to five years' experience are the most in demand and show the biggest pay difference. At senior levels, the experience gap closes when looking at comparable commercial experience.
To address critical gaps in knowledge about data center fire prevention, the US Fire Protection Research Foundation, an affiliate of the National Fire Protection Association (NFPA), has announced the release of a new report, ‘Validation of Modeling Tools for Detection Design in High Air Flow Environments,’ the result of a project in partnership with Hughes Associates and FM Global.
The report validates a model that provides reliable analysis of smoke detection in data centers, offering guidance to the technical committees for NFPA 75, Fire Protection of Information Technology Equipment, and NFPA 76, Fire Protection of Telecommunications Facilities.
Fire prevention and detection is critical to safeguarding data centers, which hold critical business and organizational information around the world. Globally, spending on these facilities will reach an estimated $149 billion this year, according to Gartner Group.
In the past few years, the equipment in data centers has changed significantly, which has placed increased demands on HVAC systems. As a result, airflow containment solutions are being introduced to increase energy efficiency. From a fire safety design perspective, the use of airflow containment creates a high airflow environment that dilutes smoke, which poses challenges for adequate smoke detection, and affects the dispersion of fire suppression agents.
“While data centers have become increasingly important in housing digital information, sufficient smoke detection is a challenge with data center cooling systems,” says Amanda Kimball, a research project manager for the Foundation. “This research included a series of simulations with various smoke detector spacing, types of fires, and air flows which gave us important guidance on smoke detection placement and installation.”
(MCT) — Cities across California are struggling with how to convince property owners to retrofit buildings at risk of collapse during a major earthquake.
San Francisco this week is using an unusual tactic: trying to publicly shame building owners into shoring up their structures to better withstand shaking.
The city will slap large signs — in multiple languages, with red letters and a drawing of a destroyed building — on hundreds of apartment complexes that violate San Francisco's seismic safety laws.
No California city has gone so far to inform the public about potentially dangerous buildings and pressure property owners to make fixes.
Los Angeles is considering a similar approach. Mayor Eric Garcetti has proposed what would be the nation's first letter grading system to alert the public about the seismic safety of buildings. He has also said he wants to require owners to retrofit buildings that are at risk but is still working out the details of his plan.
It seems that small to midsize businesses (SMBs) around the world should begin beefing up their cybersecurity initiatives. In August, Cybertinel, an Israeli security company, identified the enigmatic Harkonnen Trojan on the network of one of its German clients, where attackers had taken full advantage of the lax or absent network security common at many SMBs.
According to TechWorld, around 300 SMBs in Europe may have been used as “fronts” for stealing data for as long as a decade. TechWorld’s John E. Dunn reported:
From the details released to the press, this looks like a rare example of a professional hacking-for-hire attack of long standing that possibly also targeted firms beyond the known target list, including in the UK.
As if crisis and emergency communicators don’t have enough to worry about. In today’s instant news world, without the care journalists once showed to get it right, it’s becoming increasingly common for fake spokespersons to prank the media.
Imagine the nightmare: your organization is in the middle of a major news crisis. While you are working hard to prepare your authorized spokesperson to go live on national or regional TV, your TV monitor shows a live report with someone posing as a spokesperson for your organization.
Think it won’t happen?
Nags Head, N.C., barely skims the ocean surface, a town of about 3,000 people built on sand just 10 feet above sea level. Over the decades, hurricanes have cut a rough path here, taking down homes, roads and piers.
As city planners look toward the inevitable next big blow, they’re thinking about infrastructure. What happens when emergency phone lines no longer function or when the data center goes down? To meet that challenge, Nags Head is teaming up with other municipalities to create inter-city backup arrangements.
“[If] we should have a storm and the area has to be evacuated, essential personnel generally would be required to stay here. But [if] we have a very severe storm, essential personnel would be evacuated, and this arrangement gives us a place to set up shop,” said Allen Massey, IT coordinator of Nags Head.
The arrangement he refers to involves Cary, a city of 146,000 people that’s much farther inland. For call services in particular, Cary is Nags Head’s fallback position.
(MCT) TOKYO — In a nondescript government building near the Imperial Palace, a team of Japanese seismologists stands ready to predict an earthquake.
All day, every day, they monitor data from dozens of tiltmeters, strain gauges and other instruments deployed along a stretch of coastline southwest of Tokyo. The region, called Tokai, was last rocked by a major quake in 1854. Scientists fear it’s overdue for a repeat.
Since 1979, government scientists have been watching for ground motion that might herald an impending rupture on the fault zone. If their instruments ever detect an ominous bulge, Japanese law requires the prime minister to issue warnings that will shut down schools, hospitals, factories, roads and trains across one of the country’s most populous areas.
The Pacific Northwest is subject to the same type of seismic disaster that Japan hopes to predict, but neither the U.S. nor any other nation has such an ambitious program to nail down an earthquake before it happens. That’s because most experts are convinced it can’t be done.
(MCT) — As Clark County, Wash., families get ready to settle back into the routine of the school year, local officials are hoping residents are also preparing for something less expected: a disaster.
September is National Preparedness Month, and on Monday the Clark Regional Emergency Services Agency kicked off its annual disaster preparedness game, called the "30 Days, 30 Ways Preparedness Challenge."
The game, played over social media, assigns one readiness task for each day of the month.
After participants have completed the task, they are asked to post their results to Twitter, Facebook, Instagram or the game's blog, or to send them in by email. More details can be found at the game's website, www.30days30ways.com.
A brutal snowstorm strikes at mid-day. Roads grow increasingly congested as commuters across the city scramble to get home before conditions worsen. Ice begins to jam roads, and resulting accidents turn interstates into parking lots and neighborhood roads into skating rinks. Some parents grow increasingly desperate to reach their children as roads become impassable, leaving students stranded on buses and at school. Other parents pick up their children only to become stuck in their cars.
Once safely reunited, families remain stuck indoors for days. Childhood excitement at the sight of snow quickly turns to cabin fever. Parents’ relief to have the family reunited turns to hope for the power to remain on and schools to reopen soon.
This scenario became reality for cities across the southeastern U.S. in January 2014, highlighting the importance of preparedness, especially for families. Natural disasters affect about 66 million children each year. Keeping children safe in emergency situations starts in the home, whatever the emergency may be.
Get a Kit
“If you could take one thing with you on a desert island, what would it be?” This popular children’s question game is not too far off the mark for putting together an emergency kit for your family. Maintaining a routine in an emergency will help your children cope.
Putting together a good kit is the first step in helping you do that. Let your children pick things that make them feel secure, such as a favorite book or food, and include them in the process of gathering the items they are sure they could not live without in an emergency. Make it a game, and they will find it fun!
Some basic items to include in your kit:
- Radio (hand-crank or battery-powered with extra batteries)
- First-aid kit
- Can opener
- Canned goods
You should also know your child’s medications and keep a small supply in case of emergency. Consider a small identification card with information on key medications and emergency contacts for your child to keep at all times.
Think of your family’s specific needs. For example, if you have an infant, keep any special foods or extra diapers on hand.
Keep a similar kit in each car, along with a blanket, nonperishable food, and a charger for your phone or other essential electronics.
Make a Plan
Knowing what to do in an emergency is just as important as having a kit. Most important is ensuring you have a way to reunite your family if they are separated at the time of the emergency. Children do better in these situations when they are with their families. As a start, teach your children important names, phone numbers and addresses. Most children can memorize a phone number by age four or five. Make it a game—it could help keep your children safe.
Protecting your family will involve others as well. Pick an out-of-town family member to be a common contact for everyone to call or text; local telephone networks can sometimes be jammed. If someone else cares for your children during part of the day, make sure they also know what to do and whom to contact in an emergency. Lastly, have a plan for your pets. They are part of the family, too!
Being informed of your family’s situation when everyone is separated during the day is important. Know the emergency plan at your children’s schools and keep your emergency contact information up to date. Designate a close family friend as an alternate contact who could pick your children up if you or your spouse is unable to do so. Consider a code word that only you and your children know, anything memorable such as a favorite book character, and make sure your children know to leave only with someone who can tell them the word.
In an emergency, talk to your children about what is happening. Be honest and explain the situation; it’s better to learn about it from you than from the media, since information from the media may not be age-appropriate. Set an example with your own actions by maintaining a sense of calm, even when you are distressed. This will help your family cope in any emergency.
Events and information can change quickly in an emergency. Pay attention to local leaders, like your town’s mayor or police department, so you can make the best, most informed decisions for you and your family.
Earthquake exposure is one of the biggest risks to workers compensation insurers, so it’s interesting to read that the California State Compensation Insurance Fund (SCIF) is once again looking to the capital markets to provide reinsurance protection for workers comp losses resulting from earthquakes.
This repeats the first catastrophe bond sponsored by the SCIF in 2011, the $200 million Golden State Re Ltd, which is due to expire in January 2015.
The Artemis blog says:
“The unique transaction, which has not been repeated by anyone else until now, links earthquake severity to workers compensation loss amounts, demonstrating a new use of the catastrophe bond structure.”
The Golden State Re II catastrophe bond issuance is expected to be sized at $150 million or more, and will cover the SCIF until January 2019.
The ongoing shortage of Big Data talent is a serious problem for companies whose business increasingly relies on data analytics to remain competitive. You can imagine how difficult it must be for IT staffing firms whose clients are clamoring for Big Data skills when this country’s colleges and universities simply aren’t churning out enough graduates to meet the demand. Where do you look to find those highly skilled people? Overseas? Perhaps. But what if you looked at the existing pool of IT workers who are already inside those companies?
That’s one of the approaches being taken by Collabera, an IT staffing firm based in Morristown, N.J. I discussed the shortage of Big Data talent in an interview earlier this week with Nixon Patel, senior vice president and head of the technology competency units at Collabera. When I asked him about the extent to which Collabera relies on foreign talent, like individuals here on H-1B visas, to fill these roles for its clients, I was blown away when Patel said Collabera has taken a different approach:
We are less than three-quarters of the way through 2014, and we have already seen a slew of regulatory changes and increased audit demands. First, the Supreme Court significantly extended whistleblower provisions to cover private companies. Then Walmart was hit with $439 million in compliance enhancements and investigation costs due to its recent FCPA probe.
Needless to say, compliance officers have been dealt a tough hand – something that’s not expected to lighten up throughout the remaining months of 2014. Here are five challenges compliance officers can expect to face throughout the remainder of this year:
A new study relies on a complex systems modelling approach to analyse inter-dependent networks and improve their reliability in the event of failure.
Energy production systems are good examples of complex systems. Their infrastructure equipment requires ancillary sub-systems structured like a network, including water for cooling, transport to supply fuel, and ICT systems for control and management. Every step in the network chain is interconnected with a wider network, and all are mutually dependent.
A team of UK-based scientists has studied previously unexplored aspects of inter-network dependencies. The findings, published in The European Physical Journal B by Gaihua Fu from Newcastle University, UK, and colleagues, could have implications for maximising the reliability of such networks when facing natural and man-made hazards.
Previous research has focused on single, isolated systems rather than interconnected ones. However, understanding interconnectedness is key, since the failure of a component in one network can cause problems across the entire system and result in a cascading failure across multiple sectors, as in the energy infrastructure example cited above.
In this study, interdependent systems are modelled as a network of networks. The model characterises interdependencies in terms of direction, redundancy, and extent of inter-network connectivity.
Fu and colleagues found that the severity of cascading failure increases significantly when inter-network connections are one-directional. They also found that the degree of redundancy, which is linked to the number of connections, in inter-network connections can have a significant effect on the robustness of systems, depending on the direction of inter-network connections.
The authors observed that the interdependencies between many real-world systems have characteristics that are consistent with the less reliable systems they tested, and therefore they are likely to operate near their critical thresholds. Finally, ways of cost-effectively reducing the vulnerability of inter-dependent networks are suggested.
Reference: Fu, G. et al. (2014). Interdependent networks: Vulnerability analysis and strategies to limit cascading failure. European Physical Journal B.
Read the paper (PDF).
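The threshold-style cascade the authors analyse can be illustrated with a toy simulation. This is a minimal sketch, not the paper's actual model: it assumes one-directional dependency links with no redundancy, so a node fails as soon as anything it depends on fails, and all node names are made up.

```python
# Toy cascading-failure sketch over interdependent networks.
# Assumption: a node fails if any node it depends on has failed
# (one-directional links, no redundant suppliers).

def cascade(deps, initial_failures):
    """deps: dict mapping node -> set of nodes it depends on.
    Returns the set of all failed nodes once the cascade settles."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, requires in deps.items():
            if node not in failed and requires & failed:
                failed.add(node)
                changed = True
    return failed

# Power nodes feed ICT nodes, which in turn control power nodes;
# mutual dependence makes the loop fragile.
deps = {
    "power1": {"ict1"}, "power2": {"ict1"},
    "ict1": {"power1"},
    "ict2": {"power2"},
    "water": {"power2"},
}
print(sorted(cascade(deps, {"power1"})))
```

A single failed power node takes out the entire toy system, which is the qualitative point the study makes about one-directional inter-network connections.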
The World Health Organization (WHO) has identified six countries as being at high risk for the spread of the Ebola virus disease. It is working with these countries to ensure that full surveillance, preparedness and response plans are in place.
“The following countries share land borders or major transportation connections with the affected countries and are therefore at risk for spread of the Ebola outbreak: Benin, Burkina Faso, Côte d’Ivoire, Guinea-Bissau, Mali, and Senegal,” the agency said in the first in a series of regular updates on the Ebola response roadmap.
WHO’s Ebola Response Roadmap Situation Report 1 features up-to-date maps containing hotspots and hot zones, as well as epidemiological data showing how the outbreak is evolving over time. It also communicates what is known about the location of treatment facilities and laboratories.
It follows the release of an Ebola response roadmap that aims to stop the transmission of Ebola virus disease (EVD) within six to nine months.
The update noted that although the numbers of new cases reported in Guinea and Sierra Leone had been relatively stable, last week saw the highest weekly increase yet in Guinea, Sierra Leone and Liberia, highlighting ‘the urgent need to reinforce control measures and increase capacity for case management.’
Disaster recovery planners are often advised to take a holistic view of their IT organisation, working to deal with potential outcomes rather than possible causes. That certainly helps businesses toward greater overall DR effectiveness and cost-efficiency. However, there's no denying that a number of practical details must also be respected; otherwise, the best-aligned DR plan may never get off the ground. As the old rhyme has it, 'for want of a nail, the shoe was lost', and finally the whole kingdom too. Here are a few such 'nails' that disaster recovery planning can take into account to get those mission-critical apps up and running again after an incident.
What is the BCI Diploma?
The BCI Diploma enables individuals to achieve a formal, internationally recognised academic qualification in business continuity and is delivered in partnership with Buckinghamshire New University as a distance learning programme.
This course has been developed in response to industry demand and is designed to meet the current and future needs of business continuity professionals working in the industry worldwide.
Students will be entitled to FREE Student membership for the duration of their studies, giving them full access to a wide range of high-quality business continuity resources through the BCI Members’ Area to support their learning as well as a wide range of other value-add benefits, including Member discounts on BCI products and services.
Successful completion of the Diploma leads to the post-nominal designation DBCI (Diploma of the Business Continuity Institute). Holders of the DBCI can apply via the Alternative Route to Membership for Statutory membership of the BCI (AMBCI or MBCI dependent on experience).
This course is delivered in an interactive eLearning environment and is delivered over a period of eight weeks. Each session lasts two hours with two sessions scheduled for each of the eight weeks, giving you a total of 32 hours of training.
The BCI Good Practice Guidelines Live Online Training Course has been revised for 2014 and is fully aligned to the Good Practice Guidelines (GPG) 2013 and to ISO 22301:2012, the international standard for BCM.
This course offers a solid description of the methods, techniques and approaches used by BC professionals worldwide to develop, implement and maintain an effective BCM programme, as described in GPG 2013 and takes the student step by step through the BCM Lifecycle, which sits at the heart of good BC practice.
Infrastructure virtualization is a proven means of streamlining hardware footprints and increasing resource agility in order to better handle the demands of burgeoning data loads and wildly divergent user requirements.
But it turns out that what is good for infrastructure is also good for data itself, which is why many organizations are looking to augment existing virtual plans with data virtualization, particularly when it comes to massive volumes found in archiving and data warehousing environments.
The Data Warehousing Institute’s David Wells offers a good overview of data virtualization and how it can drive greater enterprise flexibility. In essence, the goal is to enable access to single copies of data across disparate entities, preferably in ways that make details like location, structure and even access language irrelevant to the user. For warehousing and analytics, then, this eliminates the need to move all related data to a newly created database, which gives infrastructure and particularly networking a break because data no longer has to move from site to site in order to reach the user. Couple this with semantic optimization and in-memory caching and suddenly Big Data starts to look a lot less menacing.
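The core idea, a single logical view over multiple physical sources with no data movement, can be sketched in a few lines. This is illustrative only, not any vendor's API; the class and field names are all hypothetical.

```python
# Minimal sketch of the data-virtualization idea: one access layer
# exposes a unified logical view over several physical sources, so
# consumers never learn where rows actually live and no data is
# copied into a new database.

class VirtualView:
    def __init__(self, sources):
        # Sources could be warehouses, archives, files, etc.;
        # their location and structure stay hidden from callers.
        self.sources = sources

    def query(self, predicate):
        # Federate the query across sources instead of moving data.
        for source in self.sources:
            for row in source:
                if predicate(row):
                    yield row

warehouse = [{"id": 1, "region": "EMEA"}]
archive = [{"id": 2, "region": "APAC"}, {"id": 3, "region": "EMEA"}]

view = VirtualView([warehouse, archive])
emea = [r["id"] for r in view.query(lambda r: r["region"] == "EMEA")]
print(emea)
```

Real data-virtualization platforms add the semantic optimization and caching the article mentions, but the consumer-facing contract is the same: one query surface, many hidden sources.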
The big change has finally started to take effect, driven by our historic perceptions of terrorism, the consequences of decades of mismanagement of the Middle East, intervention where it was not necessary and the lack of it where it was needed, the lack of political and public will to engage with the idea of ‘home-grown’ terrorism, and the enthusiasm of disaffected youth to belong to something that allows them to ‘matter’.
In the UK, we have raised our threat level from International Terrorism to ‘Severe’. This is in recognition of the fact that there is stated intent to attack the UK ‘homeland’ and its people. There is known capability and the potential adversaries are motivated and perhaps preparing their plans now – raising the threat level is a sensible caution and allows some focus and thinking about what needs to be done to improve our protective and response capabilities. The result amongst our population varies from fear about a threat we don’t understand to perhaps understandable scepticism about the motives of the Government and the wish to impose a ‘police state’ regime.
Today, I conclude a three-part series on risk assessments in your Foreign Corrupt Practices Act (FCPA) or UK Bribery Act anti-corruption compliance program. I previously reviewed some of the risks that you need to assess and how you might go about assessing them. Today I want to consider some thoughts on how to use your risk assessment going forward.
Mike Volkov has advised that you should prepare a risk matrix detailing the specific risks you have identified and the relevant mitigating controls. From this you can create a new control, or enhance an existing one, to remediate the gap between a specific risk and its control. Finally, through this risk matrix you should be able to assess relative remediation requirements.
A way to put some of Volkov’s suggestions into practice was explored by Tammy Whitehouse in an article entitled “Improving Risk Assessments and Audit Operations”. Her article focused on how the Timken Company assesses and then evaluates its risks. Once risks are identified, they are rated according to their significance and likelihood of occurring, and then plotted on a heat map to determine their priority. The most significant risks with the greatest likelihood of occurring are deemed the priority risks, which become the focus of the audit/monitoring plan, she said. A variety of solutions and tools can be used to manage these risks going forward, but the key step is to evaluate and rate them.
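The rate-and-prioritize step described above reduces to a small calculation. This sketch is hypothetical, not Timken's actual methodology: each risk gets a 1-5 significance and likelihood score, their product places it on the heat map, and the highest scores become the priority risks; the risk names are invented examples.

```python
# Hypothetical heat-map scoring: significance x likelihood (each 1-5)
# gives each identified risk a priority score; the top scorers become
# the focus of the audit/monitoring plan.

risks = [
    {"name": "third-party payments",   "significance": 5, "likelihood": 4},
    {"name": "gifts and hospitality",  "significance": 3, "likelihood": 5},
    {"name": "petty cash",             "significance": 2, "likelihood": 2},
]

for r in risks:
    r["score"] = r["significance"] * r["likelihood"]

# Highest score first: these are the priority risks.
priority = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in priority:
    print(f'{r["name"]}: {r["score"]}')
```

In practice the scoring scales and thresholds vary by organization; the design point is simply that rating precedes remediation, so effort flows to the cell of the heat map where both axes are high.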
There’s no doubt that virtualisation has been a boon to many enterprises. Being able to rationalise the use of servers by spreading storage and applications evenly over a total pool of hardware resources leads to higher cost-efficiency, as well as improved disaster recovery and business continuity. Yet in practical terms, businesses are often still tied to one vendor for any effective storage strategy. To break free of that constraint, software-defined storage (SDS) lets IT departments mix and match the physical storage devices as they want. And there are further benefits too.
I was recently talking with a friend about—what else—Facebook and her thoughts on whether that would be too private to share.
“Oh, I don’t believe in privacy,” she said with a dismissive hand wave.
That stumped me, in large part because she’s a defense attorney.
“You don’t believe in privacy as a fact or you don’t believe in privacy as a law?” I asked.
“Oh - legal privacy is very important,” she said. “But privacy as a fact—I don’t believe in it. It doesn’t exist.”
It sounds like a distinction only a lawyer could make. Yet as Big Data becomes commonplace, CIOs must educate themselves about the legal risks and responsibilities of gathering and using data, advises Larry Cohen, global CTO of Capgemini.
"I think the CIO is already kind of taking on more of a role of a risk broker and risk orchestrator in the enterprise," Cohen told CIO.com. "I think this is a perfect example of how a role like that arises in a topic like Big Data."
A Case In Point
Not long ago I was talking with the long-time CIO of a large organization about disaster recovery. He told me they were all set, as their tapes were stored offsite. To him, that was all he needed to be concerned about when it came to DR.
When a fire broke out in the office next to their data center, I am certain that offsite tapes were the last thing on his mind. He learned a hard lesson about relying on backups, though. It turns out that after the fire, they were able to physically relocate their entire office before IT was able to restore all their applications. Even more disturbing was the discovery that they had lost more than ten days’ worth of data due to old/bad tapes, skipped files and incomplete backups. I would not have wanted to be him when he met with the COO in the aftermath and had to explain the situation and his lack of preparedness.
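The failure modes in that story, skipped files and silently bad media, are exactly what periodic restore testing catches. A minimal sketch of one such check, comparing checksums of a test restore against a manifest recorded at backup time (the function names and manifest format here are illustrative, not any particular backup product’s API):

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large backup files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_restore(manifest: dict[str, str], restore_dir: Path) -> list[str]:
    """Return the files that are missing or corrupt in a test restore.

    `manifest` maps relative paths to SHA-256 digests recorded at backup
    time; an empty result means the restore matched the manifest.
    """
    problems = []
    for rel_path, expected in manifest.items():
        restored = restore_dir / rel_path
        if not restored.exists():
            problems.append(f"MISSING: {rel_path}")
        elif sha256_of(restored) != expected:
            problems.append(f"CORRUPT: {rel_path}")
    return problems
```

Run periodically against a scratch restore, a non-empty result is the early warning the CIO in the story never had.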
The Napa County earthquake will have political aftershocks on Capitol Hill. The big question is how long they’ll last.
Prompted by California’s weekend temblor, lawmakers are renewing their push for earthquake warning programs. The most recent quake could spur support for a long-debated early warning system. It also could reveal some partisan fault lines.
“What we need is the political resolve to deploy such a system,” Sen. Dianne Feinstein, D-Calif., said this week.
In April, underscoring the role of politics in earthquake matters, 25 House Democrats from California, Oregon and Washington endorsed a proposal to provide $16.1 million for an earthquake early warning system. No Republican signed the letter requesting the funds.
As government leaders in California wend their way through the management of the state's historic drought, real discussions about how the state should adapt to water scarcity are taking place. And if history is a guide, the decisions made in the Golden State will have their impact in other places where water scarcity is becoming the norm.
Make no mistake: California is moving forward into uncharted territory. Traditional engineered solutions, such as the California Aqueduct that channels water from the wetter regions in the north to the arid south, are being challenged by a host of factors beyond the drought, including environmental regulations and the capacity of the systems themselves. Such water-transfer projects made it possible for the drier Southland to grow and become the most populous region of the state. But government and private-sector leaders are rapidly realizing that other approaches will be needed to fulfill future statewide agriculture, business and residential water needs.
Natural catastrophe events in the United States accounted for three of the five most costly insured catastrophe losses in the first half of 2014, according to just-released Swiss Re sigma estimates.
In mid-May, a spate of severe storms and hail hit many parts of the U.S. over a five-day period, generating insured losses of $2.6 billion. Harsh spring weather also triggered thunderstorms and tornadoes, some of which caused insured claims of $1.1 billion.
The Polar Vortex in the U.S. in January also led to a long period of heavy snowfall and very cold temperatures in the east and southern states such as Mississippi and Georgia, resulting in combined insured losses of $1.7 billion.
Ed. Note: Today I continue my three-part series on risk assessments, with a look at some different ideas on how you might go about assessing your risks.
One of the questions that I hear most often is how does one actually perform a risk assessment? Mike Volkov has suggested a couple of different approaches in his article “Practical Suggestions for Conducting Risk Assessments.” In it Volkov differentiates between smaller companies which might use some basic tools such as “personal or telephone interviews of key employees; surveys and questionnaires of employees; and review of historical compliance information such as due diligence files for third parties and mergers and acquisitions, as well as internal audits of key offices” from larger companies. Such larger companies may use these basic techniques but may also include a deeper dive into high risk countries or high risk business areas. If your company’s sales model uses third party representatives, you may also wish to visit with those parties or persons to help evaluate their risks for bribery and corruption that might well be attributed to your company.
Another noted compliance practitioner, William Athanas, in an article entitled “Rethinking FCPA Compliance Strategies in a New Era of Enforcement”, took a different look at risk assessments when he posited that companies assume that FCPA violations follow a “bell-curve distribution, where the majority of employees are responsible for the majority of violations.” However, Athanas believed that the distribution pattern more closely follows a “hockey-stick distribution, where a select few…commit virtually all violations.” Athanas suggests that those individuals with the opportunity to interact with foreign officials have the greatest chance to commit FCPA violations. Drilling down from that group, certain individuals also possess the necessary inclination, whether a personal financial incentive linked to the transaction or an inability to recognize the significant risks attendant to bribery.
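Athanas’s hockey-stick logic can be expressed as a simple filter: rather than spreading assessment effort evenly across the workforce, flag each role for opportunity and inclination and concentrate on the small intersection. The roles and fields below are made-up examples, not anything from Athanas’s article:

```python
# Illustrative triage per the "hockey-stick" view: virtually all exposure
# sits with the few who have both opportunity (contact with foreign
# officials) and inclination (e.g. transaction-linked compensation).
employees = [
    {"role": "Regional sales director",  "official_contact": True,  "commission_pay": True},
    {"role": "In-country agent liaison", "official_contact": True,  "commission_pay": False},
    {"role": "HQ payroll clerk",         "official_contact": False, "commission_pay": False},
]

# The narrow population deserving the deepest assessment effort.
high_risk = [
    e["role"]
    for e in employees
    if e["official_contact"] and e["commission_pay"]
]

print(high_risk)
```

A real assessment would weight many more signals, but the design point stands: the filter shrinks the review population instead of rating everyone equally.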
There’s bad news for SAP’s HANA: the majority of the Americas’ SAP Users’ Group (ASUG) is skeptical that the Big Data platform is worth the cost.
ASUG recently surveyed its members on SAP HANA adoption. The survey drew more than 500 respondents, 93 percent of whom identified themselves as ASUG members.
Three-fourths of SAP customers said they have not purchased any SAP HANA products because they can’t identify a business case that will justify its costs. Ranked well below this concern (at 40 percent) were concerns about skill set, a roadmap and upgrade issues.
ASUG membership can also include SAP partners, whose responses were separated out from customer survey results. Still, partner results share a similar concern. The top factor partners say could lead to more HANA purchases would be “better business case guidance.” (As one reader pointed out in the comments, the SAP Innovation Awards might help here, since the list provides nearly 30 use cases.)
WASHINGTON – The Federal Emergency Management Agency (FEMA), through its Regional Office in Oakland, California, is monitoring the situation following the U.S. Geological Survey report of a 6.0 magnitude earthquake that occurred this morning six miles south southwest of Napa, California. FEMA remains in close coordination with California officials, and its Regional Watch Center is at an enhanced watch to provide additional reporting and monitoring of the situation, including impacts of any additional aftershocks.
FEMA deployed liaison officers to the state emergency operations center in California and to the California coastal region emergency operations center to help coordinate any requests for federal assistance. FEMA also deployed a National Incident Management Assistance Team (IMAT West) to California to support response activities and ensure there are no unmet needs.
“I urge residents and visitors to follow the direction of state, tribal and local officials,” FEMA Administrator Craig Fugate said. “Aftershocks can be strong enough to cause additional damage to weakened structures and can occur in the first hours, days, weeks or even months after the quake.”
When disasters occur, the first responders are local emergency and public works personnel, volunteers, humanitarian organizations and numerous private interest groups who provide emergency assistance required to protect the public's health and safety and to meet immediate human needs.
Safety and Preparedness Tips
- Expect aftershocks. These secondary shockwaves are usually less violent than the main quake but can be strong enough to do additional damage to weakened structures and can occur in the first hours, days, weeks or even months after the quake.
- During an earthquake, drop, cover and hold on. Minimize movements to a few steps to a nearby safe place. If indoors, stay there until the shaking has stopped and exiting is safe.
- If it is safe to do so, check on neighbors who may require assistance.
- Use the telephone only for emergency calls. Cellular and land line phone systems may not be functioning properly. The use of text messages to contact family is the best option, when it is available.
- Check for gas leaks. If you know how to turn the gas off, do so and report the leak to your local fire department and gas company.
The enterprise must change if it is to take advantage of all the benefits that cloud and mobile technologies have to offer. This is nothing new, of course, as the enterprise has been changing to meet new challenges and opportunities since its inception.
But confronting challenges is always easier in hindsight, which leaves us non-time travelers in a quandary: What does the cloud future hold, and how can we best prepare for it?
According to the rising cadre of startups looking to capitalize on burgeoning cloud infrastructure, the biggest thing holding the enterprise back is its legacy infrastructure and its continued reliance on the old-guard vendors who created it. SolidFire’s Jeremiah Dooley, for example, claims leading platform providers are trying to delay the inevitable switch to the cloud as much as possible in order to prevent others from encroaching upon their territory. This may benefit their revenue streams, but it keeps the enterprise in the slow lane when it comes to provisioning services and driving operational efficiency. The message here is simple: The cloud is not the problem; static legacy infrastructure is.
Social media is now a standard communications tool for businesses, with many companies regularly using Facebook, Twitter and other social networks to engage with the public. More and more businesses are hiring social media specialists whose sole responsibility is to be the company’s “voice” on these platforms. But this activity comes with risk for both the organization and the individual. The potential for any posting to be retweeted, shared or even go viral underscores the need to be aware of the rising legal risks associated with your business’s social media accounts.
Potential Defamation Lawsuits
The first tip for anyone engaged in social media on behalf of their business or employer is obvious, but not always followed—think before you post. Even if the tweet or post contains an unintended error and is deleted immediately, postings can still be pulled and reposted or retweeted by others. Once something is out there on social media, however, you’ll need to deal with the consequences. Although the laws surrounding social media are still developing, it is possible for a business to be hit with an expensive defamation suit based on a single posting or comment.
The Business Continuity Institute is pleased to announce that the keynote speaker for the BCI World Conference and Exhibition will be Prof Steve Peters – consultant psychiatrist, bestselling author and Head of Sports Psychology at UK Athletics. In addition to his extraordinary success with British cycling, he has also worked on twelve other Olympic disciplines as well as English Premier League football and the English rugby and football teams.
Beginning his career as a maths teacher, Prof Peters then switched to medicine and specialised in patients with severe and dangerous personality disorders. His focus is now on how the mind can enable people to reach optimum performance in all walks of life. Working with sportspeople at the top of their game, he gives them the confidence to come back from defeat and out-perform the opposition.
Prof Peters has been described as a "genius" by Team GB cycling coach Dave Brailsford and many decorated Olympians such as Chris Hoy, Victoria Pendleton and Bradley Wiggins have all attributed their success to him.
In his keynote speech, Prof Peters will explain his method to help us understand and control what he describes as our 'inner chimp' – the irrational, impulsive, seemingly impossible part of our mind that often holds us back. Examining motivation, confidence and communication, he will show that competition is as much in the mind as it is in the field or on the track – or in the office.
Find out more about the BCI World Conference and Exhibition on the 5th and 6th November at the London Olympia by visiting the BCI website.
Yesterday, I blogged about the Desktop Risk Assessment. I received so many comments and views about the post, I was inspired to put together a longer post on the topic of risk assessments more generally. Of course I got carried away so today, I will begin a three-part series on risk assessments. In today’s post I will review the legal and conceptual underpinnings of a risk assessment. Over the next couple of days, I will review the techniques you can use to perform a risk assessment and end with a discussion of what to do with the information that you have gleaned in a risk assessment for your compliance program going forward.
One cannot really say enough about risk assessments in the context of anti-corruption programs. Since at least 1999, in the Metcalf & Eddy enforcement action, the US Department of Justice (DOJ) has said that risk assessments which measure the likelihood and severity of possible Foreign Corrupt Practices Act (FCPA) violations identify how you should direct your resources to manage those risks. The FCPA Guidance stated it succinctly: “Assessment of risk is fundamental to developing a strong compliance program, and is another factor DOJ and SEC evaluate when assessing a company’s compliance program.” The UK Bribery Act takes a similar view. Principle 1 of the Six Principles of an adequate compliance program states, “The commercial organisation regularly and comprehensively assesses the nature and extent of the risks relating to bribery to which it is exposed.” In other words, risk assessments have been around, and even mandated, for a long time, and their importance has not lessened. The British have a way with words, even when discussing compliance, and Principle 1 makes clear that your risk assessment should inform your compliance program.
Your data backups are there to help you recover information, applications and files if required, hopefully both effectively and efficiently. But they and any archiving you do may also be there for external parties to use as a result of e-discovery. That’s the retrieval of electronically stored information (ESI) for use in legal proceedings involving your organisation. The US has led the way in this field, defining ESI as any information that is “created, stored, or best used with any kind of computer technology”. Now in Australia, all court dealings above a certain size must be conducted completely digitally. But is e-discovery good news or bad news for legal rulings and ultimately business continuity?
In our haste to cover all the high-level strategies that may be needed to respond to a business disruption, we often leave Business Continuity Plans missing critical details that can mean the difference between success and failure – especially when time is a major factor.
Many BCPs have a strategy for “Loss of Building”. That strategy may include moving critical employees from the most crucial business processes to alternate sites – either internal (another of the organization’s facilities in a different geographical location) or external (a third-party “workspace” that can be made ready to accommodate those employees’ technology requirements).
All good; and logical – but perhaps missing some critical information.
A state of emergency was declared in California yesterday by Gov. Edmund G. Brown due to the effects of a 6.1 magnitude earthquake that rocked the Napa Valley area in northern California. The U.S. Geological Survey estimates that economic losses from the quake could top $1 billion and said there is a 54% likelihood of another large quake, magnitude 5 or higher, within the next week.
As of 4:15 p.m. Sunday, six aftershocks had been reported, four of them centered near Napa and ranging from 2.5 to 3.6 in magnitude. Two others, a 2.8 and a 2.6, were reported near American Canyon, according to the USGS.
The Napa quake is the largest in the Bay Area since the 1989 Loma Prieta quake, which was magnitude 6.9. That quake resulted in $1.8 billion in insured claims (in 2013 dollars) being paid to policyholders, said Robert Hartwig, Ph.D., president of the Insurance Information Institute.
(MCT) — Ten seconds before the earth rumbled in a UC Berkeley lab early Sunday morning, an alarm started blaring — and an ominous countdown warned that a temblor centered near Napa was moments away.
"Earthquake! Earthquake!" it cautioned, after a quick series of alarms. "Light shaking expected in three seconds."
The successful alert was the biggest test yet in the Bay Area for a type of earthquake early warning system that's not yet available to the public in the U.S. but already is providing precious seconds of notice before quakes hit in Mexico and Japan.
The ShakeAlert system — a collaboration between Cal, Caltech, the University of Washington and the U.S. Geological Survey — could one day stop elevators, control utilities and alert motorists of an impending natural disaster. But before it is reliable enough to launch throughout the West Coast, the system needs about $80 million in equipment, software and other seismic infrastructure upgrades.
(MCT) — City officials in Napa had long worried that the grand building on the corner of Second and Brown streets — with its brick walls and giant red-tiled cupolas — could be devastated by a major earthquake.
So city officials required brick structures such as the landmark Alexandria Square building to get seismic retrofitting — bolting brick walls to ceilings and floors to make them stronger. The work was completed years ago on the 104-year-old property.
But when a 6.0 earthquake struck Sunday morning, the walls on the top floors crumbled, showering brick and mortar onto the sidewalk and outdoor café.
The destruction highlights one of the greatest fears of seismic engineers — that the retrofitting of unreinforced masonry buildings still leaves weak joints between bricks. Whole chunks can fall, sending bricks crashing down.
One day after a magnitude 6.0 earthquake struck the San Francisco/Napa area of California, the Northern California Seismic System (NCSS) says there is a 29 percent probability of a strong and possibly damaging aftershock in the next seven days and a small chance (5 to 10 percent probability) of an earthquake of equal or larger magnitude.
The NCSS, operated by UC Berkeley and USGS, added that approximately 12 to 40 small aftershocks are expected in the same seven-day period and may be felt locally.
As a rule of thumb, a magnitude 6.0 quake may have aftershocks up to 10 to 20 miles away, the NCSS added.
In the European Union in the past year, a whole range of corporate risk and regulatory issues have been at the top of the agenda, but at the top of my list are data protection and information security.
In this report on risk issues for 2014, I will look at websites, privacy impact assessments, cloud computing and the EU Data Protection Regulation.
Focus on Websites in the EU
In the past five years or so, the European Commission and regulators that focus on consumer protection have carried out regular “sweeps” of websites in order to assess levels of compliance. This trend will continue, and businesses that sell or license content to consumers need to review their online terms and conditions as well as their compliance with other e-commerce rules such as the E-Privacy Directive, E-Commerce Regulations and Distance Selling Regulations.
For example, an EU-wide screening of 330 websites that sell digital content (such as books, music, films, videos and computer games) across the European Economic Area revealed some significant areas of non-compliance.
How many among you out there are sushi fans? Conversely, how many out there consider the idea of eating raw fish right up there with going into the dentist’s office for some long overdue remedial work? One’s love or distaste for sushi was used as an interesting metaphor for leadership in this week’s Corner Office section of the New York Times (NYT) by Adam Bryant, in an article entitled “Eat Your Sushi, and Expand Your Horizon”, where he profiled Julie Myers Wood, the Chief Executive Officer (CEO) of Guidepost Solutions, a security, compliance and risk management firm. Wood said her sushi experience relates to advice she gives college students now: “One thing I always say is ‘eat the sushi.’ When I had just graduated from college, I went with my mom to Japan. We had a wonderful time, but I refused to eat the sushi. Later, when I moved to New York, I tried some sushi and loved it. The point is to be willing to try things that are unfamiliar.”
I thought about sushi and trying something different in the context of risk assessments recently. I think that most compliance practitioners understand the need for risk assessments. The FCPA Guidance could not have been clearer when it stated, “Assessment of risk is fundamental to developing a strong compliance program, and is another factor DOJ and SEC evaluate when assessing a company’s compliance program.” Yet many compliance practitioners have difficulty getting their collective arms around what is required for a risk assessment and then how precisely to use it. The FCPA Guidance makes clear there is no ‘one size fits all’ for just about anything in an effective compliance program.
One type of risk assessment can consist of a full-blown, worldwide exercise, where teams of lawyers and fiscal consultants travel around the globe, interviewing and auditing. However, if there is one thing that I learned as a lawyer which also applies to the compliance field, it is that you are only limited by your imagination. So, applying the FCPA Guidance’s ‘no one size fits all’ proscription, I would submit that the same holds true for risk assessments.
Napa, Calif., residents were awakened at 3:20 a.m. on Sunday, Aug. 24, by a magnitude 6.0 earthquake that struck six miles southwest of the Northern California city, sending as many as 160 to the hospital, and causing widespread damage, including dozens of broken water mains and triggering six major fires. One person was still in critical condition Sunday evening.
The fires destroyed several mobile homes, and firefighters struggled with water pressure issues since a significant amount of pressure was lost because of the cracked and broken water mains. Most of the damage occurred in downtown Napa where the buildings are older.
There was also significant damage to roads, but the California Department of Highway Patrol and California Department of Transportation found no damage to bridges. The Transportation Department also had dive teams checking local toll bridges but found no damage.
(MCT) — A predawn earthquake rattled Napa, Calif., early Sunday morning, critically injuring at least three people as the shaking ripped facades and shattered windows from historic downtown buildings, toppled chimneys and ignited gas fires at mobile home parks.
Countless residents fled into darkened streets as the result of the quake, measured at magnitude 6.0 by the United States Geological Survey. It was the largest to hit the San Francisco Bay area since the devastating 6.9-magnitude Loma Prieta earthquake in 1989, prompting Gov. Jerry Brown to declare a state of emergency.
The Queen of the Valley Medical Center in Napa reported 120 people seeking treatment soon after the quake. They included a small child who was airlifted to UC Davis Medical Center with critical injuries authorities attributed to a collapsed chimney.
The buildup to fall is in full swing. The next step is Labor Day parades and barbecues and, then, the school buses will begin to roll.
IT and telecommunications never had a real summer slowdown this year, though. Much was done, lots of news was made, and the pace hasn’t slowed even during the latter half of August. Here is a look at some of the news and some of the more interesting commentary.
"I always imagined a few people on the phones in a small office taking calls, not a big office with actual departments, and definitely not anyone thinking about business continuity and risks." Over the past year I have heard this line said to me in varying forms when I have explained that I give advice on corporate risk and business continuity in the non profit sector.
It is not an uncommon misconception. And while it is easy to list the risks relevant to, say, the financial services industry, applying that same thinking to the non profit sector – along with a sense of what is important there – is not as immediately obvious.
Some Challenges and observations:
The degrees of background and expertise found in non profit organisations vary enormously, and the primary challenge is making business continuity accessible and relatable to all.
There is an attitude that this would take too long, that it is not required in our industry, and that focusing on delivering primary front line services is more important. But has anyone thought about those supporting functions?
"This will never happen to us anyway." At first, it made me feel uneasy hearing this, but it is actually the best challenge to promote business continuity in any industry. Playing the "if we don’t comply, we will get fined" card almost shifts the desired effect from wanting to provide genuine assurance to an exhausting check-box exercise. The appetite and denial factor is a tough barrier to get around.
Forgotten plans - in most cases contingency plans were in people’s minds but just not on paper. I have heard various stories of incidents that triggered instant panic before the swift realisation that "oh yes, we have a plan, we know what we need to do" kicked off a series of reactions to get things back to normal.
Planning vs. practising - countless months were spent planning and writing, but practising those BCPs was missing. In recent exercises, the feedback I received was that no one had ever tested their plans before and that they found doing so really useful. Actions that were thought to take five minutes took twenty. This started a chain of actions that plan owners needed to implement in order to become more resilient in an incident. A friend once said to me that businesses don’t fail because of bad business continuity plans, but because of bad choices. That stuck with me.
So what does BC look like in these industries?
We live in a robust and dynamic society, and whilst a generic approach to starting a plan is valuable, plans should be adaptable. I quickly realised that I was getting too hung up on wanting to make each team’s plan look the same, when what really mattered was that it absolutely has to work for the people invoking it; if it is clear and coherent, that is sufficient.
Without a doubt, non-physical threats such as reputational risk, the loss of funding from a major donor and employee scandals can have serious impacts on your operation, especially when the majority of funding comes from public generosity. If an incident occurred, what would the emergency funding protocol be? It is things like this that need the most consideration. Yes, every industry needs to consider its buildings, IT/data and staff, but what about the intangible factors that can essentially amount to a disaster?
Making those threats relatable is key, and so is shifting the view that risk and business continuity relate only to IT and financial services – essential because people with widely varying backgrounds in these organisations often sit under one roof.
What does this all mean?
All non profits, for example charities, are run like businesses. Fact!
Non profit or not, business continuity is on everyone’s mind; they just don’t know that this is what it is called. Yes, what constitutes a threat varies from industry to industry, but essentially what matters most is the resilience each organisation has to overcome any incident it faces.
RISKercizing until next time
It’s hard to have a conversation in the enterprise these days without the topic veering toward Big Data. What is it? Where does it come from? And what are we supposed to do with it?
But despite the fact that none of these questions have clear answers yet, IT is still tasked with preparing to accommodate Big Data and then figuring out how to derive real value from it.
Part of the problem is the term “Big Data” itself. While large data volumes are a facet of Big Data, that’s not where the challenge lies. Rather, says IBM’s Doug Balog, it’s the need to accommodate the ‘variety, velocity and veracity’ that advanced analytics require that will give most managers fits. This will require not only bigger, more scalable infrastructure, but entirely new ways to collect, analyze and store data, which, from IBM’s perspective, will require advanced Power8 architectures married to powerful third-party platforms like Canonical and the various Linux distributions.
Every organization should have an Emergency Action or Evacuation Plan. Even when it is not required (by the building owner, fire department or occupancy regulations), it is a ‘best practice’ for every organization to plan and practice evacuating all personnel from the workplace. Often, evacuation focuses on getting out quickly; surely that’s the most critical objective. While simple in principle, there are some considerations that should not be overlooked:
Too Close for Safety: The standard ‘rule of thumb’ for assembly points is at least 200 feet from the evacuated building. This is intended to assure that personnel will not be endangered if window glass or other debris falls. Keep in mind that taller buildings may have a wider potential debris pattern, so 200 feet should be treated as the minimum. Assuring employee safety should be the priority.
Obstruction: When emergency services (fire, police, ambulance) arrive, will they have sufficient room to do their job? Crowds of evacuated personnel shouldn’t impede their work. Emergency services may need room to park and to turn their vehicles around. Make sure assembly points are a reasonable distance from entrances and drive paths – and assure that personnel won’t interfere.
(MCT) — For six weeks, Florida reeled under the assault of four hurricanes.
First Charley struck Port Charlotte Aug. 13, 2004, with 150-mph winds. Then Frances pounded Martin and Palm Beach counties, collapsing part of Interstate 95 near Lake Worth and sending gusts into Broward that left a quarter-million people without electricity. Ivan came ashore near Pensacola with 120-mile-per-hour winds and a storm surge that swamped coastal towns. Jeanne struck the same area as Frances, turning out the lights in most of Palm Beach County, ripping off roofs and flooding houses.
It came to be known as the Year of the Four Hurricanes.
Following that beating, and another one the next year with Hurricanes Wilma and Katrina, there have been dramatic improvements to Florida’s electric grid, shelters, forecasting abilities and ability to communicate. And while another season like 2004 still would be disastrous, residents would have more warning and stand a better chance of returning faster to normal life.
(MCT) — The good news is people are more alert to and educated about weather this time of year.
Husbands and wives on the Coast can carry on a conversation about how the amount of sand in the upper atmosphere along the Atlantic affects the chances a tropical storm will develop.
But the downside is that the array of information can be confusing, and social media sites, looking for clicks, tend to hype tropical activity.
Find a trusted source, local emergency managers say.
Here’s a tip that might take a little pressure off the data scientist talent search: A data scientist doesn’t necessarily need to be a math wizard with a PhD or other hard science background.
In fact, that type of person might actually prove disappointing if your goal is Big Data analytics for humans, according to data scientist Michael Li.
That may seem odd, given that Li’s work focuses on exactly the kind of credentials normally associated with the term “data scientist.” Li founded and runs The Data Incubator, a six-week bootcamp to prepare science and engineering PhDs for work as data scientists and quantitative analysts.
You can’t just wing it anymore. Many things have changed since you first said you wanted to become a fireman, an astronaut, a veterinarian or a nun. This is especially true in the field of business continuity.
Business continuity is not just concerned with IT recovery anymore. Supply chain management is critical to sustaining company operations. How do we determine what is or isn’t critical? Shouldn’t we bring these issues to the attention of our C-Level management?
These are just some of the issues confronting BCP managers, and most practitioners today had to learn how to handle them along the way. As time goes by, covering all the bases of continuity has become more and more complicated. Instead of learning on the job, a little education at the start would go a long way toward getting ahead of what needs to be done.
The GlaxoSmithKline PLC (GSK) corruption matter in China continues to reverberate throughout the international business community, inside and outside China. The more I think about the related trial of Peter Humphrey and his wife, Yu Yingzeng, for violating China’s privacy laws in their investigation of who filmed the head of GSK’s China unit in flagrante delicto with his Chinese girlfriend, the more I ponder the issue of risk in the management of third parties under the Foreign Corrupt Practices Act (FCPA). In an article in the Wall Street Journal (WSJ) entitled “Chinese Case Lays Business Tripwires”, reporters James T. Areddy and Laurie Burkitt explored some of the problems brought about by the investigators’ convictions.
They quoted Manuel Maisog, chief China representative for the law firm Hunton & Williams LLP, who summed up the problem regarding background due diligence investigations as “How can I do that in China?” Maisog went on to say, “The verdict created new uncertainties for doing business in China since the case hinged on the couple’s admissions that they purchased personal information about Chinese citizens on behalf of clients. Companies in China may need to adjust how they assess future merger partners, supplier proposals or whether employees are involved in bribery.”
I have pondered what that means for a company that wants to do business in China through some type of third party relationship, from a sales representative to a distributor to a joint venture (JV). What if you cannot get such information? How can you still have a best practices compliance program around third-party representatives if you cannot get information such as ultimate beneficial ownership? At a recent SCCE event, I put that question to a Department of Justice (DOJ) representative. Paraphrasing his response, he said that companies still need to ask the question in a due diligence questionnaire or other format. What if a third party refuses to answer, citing some national law against disclosure? His response was that a company needs to weigh very closely the risk of doing business with a party that refuses to identify its ownership.
It’s been said that Big Data and the cloud go together like chocolate and peanut butter, but it looks like more symbiosis is at work here than meets the eye.
While on the surface it may seem like the two developments appeared at the same time by mere coincidence, the more likely explanation is that they both emerged in response to each other – that without the cloud there would be no Big Data, and without Big Data there would be no real reason for the cloud.
Silicon Angle’s Maria Deutscher hit on this idea recently, noting that the two seem to be feeding off each other: As enterprises start to grapple with Big Data, they will naturally turn to the cloud to support the load, which in turn will generate more data and the need for additional cloud resources. In part, this is a continuation of the old paradigm that more computing power and capacity simply causes users to up their data requirements. Of course, the cloud comes with additional security and availability concerns, but in the end it is the only way for already stretched IT budgets to feasibly cope with the amount of data being generated on a daily basis.
An improving economy and updated business practices have contributed to companies sending more employees than ever on international business trips and expatriate assignments. Rising travel risks, however, require employers to take proactive measures to ensure the health and safety of their traveling employees. Yet many organizations fail to implement a company-wide travel risk management plan until it is too late, causing serious consequences that could easily have been avoided.
The most effective crisis planning requires company-wide education before employees take off for their destinations. Designing a well-executed response plan and holding mandatory training for both administrators and traveling employees will ensure that everyone understands both company protocol and their specific roles during an emergency situation.
Additionally, businesses must be aware that Duty of Care legislation has become an integral consideration for travel risk management plans, holding companies liable for the health and safety of their employees, extending to mobile and field employees as well. To fulfill their Duty of Care obligations, organizations should incorporate the following policies within their travel risk management plan:
Ian Kilpatrick looks at the risks involved with mobile devices and how to secure them.
Mobile devices, with their large data capacities, always-on capabilities, and global communications access, can represent both a business applications dream and a business risk nightmare.
For those in the security industry, the focus is mainly on deploying ‘solutions’ to provide protection. However, we are now at one of those key points of change which happen perhaps once in a generation, and that demand a new way of looking at things.
The convergence of communications, mobile devices and applications, high-speed wireless, and cloud access at a personal level is driving functionality demands on businesses at too fast a rate for many organizations.
Lockton report provides information to help protect companies' employees and operations from Ebola threats.
The current Ebola outbreak, deemed ‘an international public health emergency’ by the World Health Organization, has left many companies uncertain of how to properly protect themselves while ensuring the safety of their employees and operations.
"The situation on the ground is evolving quickly and poses a threat not only to companies with operations in the region, but to all companies who have employees that may come in contact with the Ebola virus while traveling internationally," said Logan Payne of Lockton's International Risk Management Team.
Most companies are concerned with two main areas when facing a threat like Ebola: personnel risk and an interruption of normal business operations leading to a loss of revenue.
The 2014 Business Continuity Institute Africa Awards took place on Tuesday 19th August at a ceremony to coincide with the SADC and ITWeb Business Resilience Conference in South Africa. The BCI Africa Awards are held each year to recognise the outstanding contribution of business continuity professionals and organizations living or operating in Africa.
The Winners of the Awards were:
Business Continuity Manager of the Year
Sylvain Prefumo MBCI, Head of Business Continuity at the State Bank of Mauritius
Emmanuel Atta Hanson MBCI, Business Continuity Manager at Barclays Bank of Ghana Ltd, and Elnora Aryee-Quaynor, Director of Africa Risk and Quality at PricewaterhouseCoopers (Ghana) Ltd, were both Highly Commended
Business Continuity Public Sector Manager of the Year
Dr Clifford Ferguson, Business Continuity Manager at the Government Pensions Administration Agency
Business Continuity Consultant of the Year
Peter Frielinghaus MBCI, Senior BCM Advisor at ContinuitySA
Lynn Jackson MBCI, Senior Business Continuity Consultant at ContinuitySA, was Highly Commended
Business Continuity Team of the Year
Barclays Bank of Kenya
Deloitte was Highly Commended
BCM Newcomer of the Year
Darren Johnson AMBCI, BCM Advisor at ContinuitySA
Business Continuity Innovation of the Year
Business Continuity Provider of the Year (Service)
Most Effective Recovery of the Year
Barclays Bank of Kenya
Business Continuity Personality of the Year
Congratulations to all the winners and well done to all those who were nominated. All winners from the BCI Africa Awards 2014 will be automatically entered into the BCI Global Awards 2014 which take place in November during the BCI World Conference and Exhibition.
Computerworld - When Healthcare.gov was launched last October, it gave millions of Americans direct experience with a government IT failure on a massive scale. But the overall reliability of federal IT operations is being called into question by a survey that finds outages aren't uncommon in government.
Specifically, the survey found that 70% of federal agencies have experienced downtime of 30 minutes or more in a recent one-month period. Of that number, 42% of the outages were blamed on network or server problems and 29% on Internet connectivity loss.
This rate of outage isn't anywhere near as severe or dramatic as what Healthcare.gov faced until it was fixed. But the report by MeriTalk, which provides a network for government IT professionals, suggests that downtime is a systemic issue. The research was sponsored by Symantec.
The report is interesting because it surveys two distinct government groups, 152 federal "field workers," or people who work outside the office, and 150 IT professionals.
For all the care and feeding we’ve given to the data center over the years, it must be remembered that all that technology and the skills to operate it are a means to an end. The real prize these days is application performance.
An increasingly mobile workforce is fostering dramatic changes in the way work and productivity are measured, and enterprise infrastructure needs to keep up with these trends in order to remain relevant in the years to come. That means issues like throughput and compute power are still important, but so are architectural flexibility and the need to become more responsive to user needs.
According to a recent survey from SolarWinds, 93 percent of business people say the performance and availability of apps like Exchange, SharePoint and NetSuite are crucial to their job performance, with nearly two-thirds describing them as critically important. At the same time, however, 36 percent say they have waited a full day for problems to be resolved in mission-critical apps, while 22 percent have experienced wait times of several days.
By Claire Phipps, MBCI
Businesses are usually in operation to make money and deliver a service or provide a product. Success requires many traits, and ensuring your business is dynamic, adaptive, efficient and cost-effective is a good starting point. Who would want a business that is passive, rigid, ineffective and expensive?
The same is true when talking about good management disciplines and recognised international standards and best practice.
So why don’t we evolve these disciplines and channel our way of thinking to change the way in which we deploy them? Adapt the methods by which we operate to one of ‘organizational resiliency’: an all-encompassing, comprehensive management discipline that ticks all the right boxes and provides success, growth, strength, security and a return on our investment.
Within my industry, there has long been an ongoing discussion and debate about the future of business continuity and whether or not organizational resilience is the way forward. The fact that we are still not getting a concrete answer could be the answer itself. Yet again I’m hearing the phrase being more commonly discussed, so I thought I would consider my own opinions on the topic and open this up for further discussion.
Senior disaster management officials from APEC economies, meeting in Beijing in the aftermath of the Ludian Earthquake in Southwest China, have detailed new far-reaching measures to strengthen relief and risk reduction capabilities across the Asia-Pacific, the world’s most disaster-prone region.
After observing a moment of silence for the victims of the 6.5-magnitude quake, officials were briefed on efforts to help survivors and speed recovery, and sanctioned deeper cooperation to protect against future emergencies. Joint actions are being taken forward in technical capacity-building exchanges between APEC economies.
“The frequent occurrence of natural disasters poses a serious threat to lives and the economic health of the entire region,” cautioned Dou Yupei, China’s Vice Minister of Civil Affairs, in remarks to the 8th APEC Senior Disaster Management Officials’ Forum. “We must join hands to reduce disaster risk and guarantee the coordinated development of society, economy and the environment.”
IFMA, the US-based International Facility Management Association, has published an overarching guide to business continuity and emergency preparedness. It includes results from the IFMA 2014 Business Continuity Survey and research forums on emergency preparedness and business continuity.
‘High Stakes Business: People, Property and Services (Facility Management Perspectives on Emergency Preparedness and Business Continuity in North America)’ looks at the growing necessity of emergency and business continuity planning as a strategic priority; one which provides a unique opportunity for facility managers to establish valued partner status in ensuring organizational resiliency and longevity.
“Emergency preparedness and business continuity are critical and complex tasks that affect all facets of commercial and institutional facilities and are central to FM worldwide. This publication provides practical guidance to facility professionals in order to develop plans that will best equip their organizations to resume normal operations as quickly as possible after disaster strikes,” said Stephen Ballesty, IFMA Board of Directors, IFMA Research Committee Chair, Director, Head of Advisory, Head of Research.
The report is available at a cost of $180 for non-IFMA members and £90 for members.
The 2014 BCI Asia Awards took place on Thursday 14th August at the 12th Asia Business Continuity Conference in Singapore. The BCI Asia Awards are held each year to recognise the outstanding contribution of business continuity professionals and organizations living or operating in the region.
The Winners of the Awards were:
Business Continuity Provider of the Year (Product)
Business Continuity Team of the Year
Business Continuity Innovation of the Year
BCM Manager of the Year
Khalid Ahmed Bahabri
BCM Newcomer of the Year
All winners from the BCI Asia Awards 2014 will be automatically entered into the BCI Global Awards 2014 which take place in November during the BCI World Conference and Exhibition 2014.
Maintaining a supply chain's resilience is a daunting challenge, especially considering the increasing scale and complexity of supply chains worldwide. To support business continuity professionals in assessing their supply chains, the Business Continuity Institute has just published its latest Working Paper, which uses a series of statistical comparisons from previous studies to look at the influence the number of suppliers an organisation has on the frequency and cost of supply chain disruption.
The research concluded that supply chain complexity does influence the frequency and cost of disruption, which represents an important step towards a better understanding of supply chain disruption. Establishing the relationship between the complexity of supply chains and the frequency and cost of incidents will validate efforts by supply chain planners to work towards greater visibility of their supply chains. It also provides additional proof that may be used to justify continuous investment towards further understanding an organisation’s supply chain.
The study does highlight, however, that given the implications of this research for decisions made by organisations, further statistical analysis of other variables that affect supply chains is recommended.
The Supply Chain Resilience survey has been one of the most comprehensive studies of its kind. It has produced useful findings that have guided organisations into imparting resilience to their supply chains. A more thorough study therefore provides greater opportunities to refine this tool and make it even more helpful to organisations worldwide.
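The paper's core question, whether the number of suppliers tracks the frequency of disruption, can be illustrated with a toy correlation check. The survey rows below are invented for illustration and are not taken from the BCI data; the BCI's actual methodology is more involved than a single coefficient.

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Hypothetical survey rows: (number of key suppliers, disruptions per year).
# Figures are made up to sketch the analysis, not drawn from the BCI study.
rows = [(5, 1), (20, 2), (50, 4), (120, 6), (300, 9), (800, 13)]
suppliers = [n for n, _ in rows]
disruptions = [d for _, d in rows]

r = pearson(suppliers, disruptions)
print(f"supplier count vs. disruption frequency: r = {r:.2f}")
```

A strongly positive coefficient on data like this is what would support the paper's conclusion that complexity drives disruption frequency; the real test requires the full survey sample and additional variables.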
To download the full version of the BCI's 'Working Paper Series No. 2: A quantitative analysis of selected variables in the 2013 Supply Chain Resilience Survey', please click here.
To take part in the BCI's 2014 Supply Chain Resilience survey and help further this research, please click here.
You can contact the paper’s author – Patrick Alcantara of the BCI’s Research Department – with any feedback about this particular paper or with any suggestions for future topics.
The main challenges in properly implementing business continuity management in an organisation can be expressed in four words: engagement, understanding, appropriateness and assumptions. In other words: senior management needs to be involved and committed to BCM; business continuity managers need to understand the essentials about IT operations; BCM processes need to link business objectives to operational realities; and any assumptions in BC planning need to be closely scrutinized. If this sounds like IT governance, you’re right. IT governance gives some good hints about how to make business continuity a practical, valued reality.
Maintaining the state’s trend of taking a leading position on new technological and legal challenges, a California Court of Appeals ruled earlier this month that within the state,
“We hold that when employees must use their personal cell phones for work-related calls, Labor Code section 2802 requires the employer to reimburse them. Whether the employees have cell phone plans with unlimited minutes or limited minutes, the reimbursement owed is a reasonable percentage of their cell phone bills."
And with that, a fresh set of headaches for companies and IT departments managing or allowing employee-owned devices used for work purposes is born.
By Victoria Harp
CDC leads the nation in responding to public health emergencies, such as outbreaks and natural disasters. While the agency encourages the public to be aware of personal and family preparedness, not all CDC staff follow those guidelines. In an effort to increase personal preparedness as part of workforce culture, CDC created the Ready CDC initiative. Targeting the CDC workforce living in metropolitan Atlanta, this program recently completed a pilot within the organization and is currently being evaluated for measurable improvements in recommended personal preparedness actions. Ready CDC is co-branded with the Federal Emergency Management Agency’s (FEMA) Ready.gov program, which is designed for local entities to adopt and make personal preparedness more meaningful to their communities. Ready CDC has done just that; the program uses a Whole Community approach to put personal preparedness into practice.
FEMA’s Whole Community approach relies on community action and behavior change at the local community level to instill a culture of preparedness. To achieve this with Ready CDC, the CDC workforce receives the following:
- The support needed to participate from their employer
- Consistent messaging from a trusted, valued source
- Localized and meaningful personal preparedness tools and resources
- Expertise and guidance from local community preparedness leaders
- Personal preparedness education that goes beyond the basic awareness level to practicing actionable behaviors such as making an emergency kit and a family disaster plan
Are you Ready CDC?
When the Office of Public Health Preparedness and Response Learning Office conducted an environmental scan and literature review, as well as an inward look at the readiness and resiliency of the CDC workforce, the need for a program like Ready CDC emerged. Although CDC has highlighted personal preparedness nationally in its innovative preparedness campaigns, there have been no formal efforts to determine if or ensure that the larger CDC workforce is prepared for an emergency. After all, thousands of people make up CDC’s workforce in Metro Atlanta, throughout the United States, and beyond.
The public relies upon those thousands of people to keep the life-saving, preventative work of CDC going 24/7. When the CDC workforce has their personal preparedness plans in place, they should be more willing and better able to work on behalf of CDC during a local emergency. Research has shown that individuals are more likely to respond to an event if they perceive that their family is prepared to function in their absence during an emergency*. Also, the National Health Security Strategy describes personal preparedness in its first strategic objective as a means to build community resilience.
Local Partnerships for the CDC
Ready CDC intends to move the dial by using CDC’s own workforce to understand behaviors associated with preparedness, including barriers to change. This is the most intriguing aspect of Ready CDC for the local community preparedness leaders involved. Most community-level preparedness education is currently conducted at the awareness level: classes are taught and headcounts are taken, but beyond that, there is no feedback or follow-up to determine whether those efforts lead to the desired behavior changes. The Ready CDC intervention is currently being measured and studied, which has local community preparedness leaders around metro Atlanta very interested in its outcomes.
While CDC has subject matter experts on many health-related topics, CDC looked to preparedness experts in and around the Metro Atlanta community to help make Ready CDC a locally-sustainable intervention. After all, the best interventions are active collaborations with community partners**. Key community partners from the American Red Cross; Atlanta-Fulton County, DeKalb County, and Gwinnett County Emergency Management Agencies; and the Georgia Emergency Management Agency played ongoing and significant roles in developing the program content, structure, and sustainability needed for CDC’s Metro Atlanta workforce. CDC gets the benefit of their time and expertise while partners have the satisfaction of knowing their efforts are making a difference in and contributing to the resilience of their communities. Also, because of these great partnerships, one lucky class participant wins a family disaster kit courtesy of The Home Depot and Georgia Emergency Management Agency.
Ready CDC is currently available to the CDC workforce in and around Metro Atlanta; however, efforts are underway to ensure that the broader CDC workforce is reached in 2015. For more information about Ready CDC, please email firstname.lastname@example.org.
Do you have a cybersecurity emergency plan in place? If you do, are you confident in your cybersecurity plan? If you answered both of these questions with a yes, pat yourself on the back for a job well done. And then volunteer some advice to your business peers because you are in the minority.
A new study by the SANS Institute, sponsored by AccessData, AlienVault, Arbor Networks, Bit9 + Carbon Black, HP and McAfee/Intel Security, found that 90 percent of American businesses don’t have a very effective cybersecurity emergency plan. The top reasons cited for not having an effective plan in place were a lack of time (62 percent) and a lack of budget (60 percent).
So, the companies that are already spending time and money on some sort of cybersecurity emergency plan don’t have one as good as they’d like. But these companies are also in the minority, as 43 percent don’t have any type of formal emergency response plan and 55 percent don’t have a response team. That could be a fatal mistake, especially considering that more than half claimed to have had at least one critical incident requiring a response over the past two years.
Banks may be undermining their own efforts at Big Data, according to a recent Information Week column.
“When faced with the requirements of a new big data initiative, banks too often only draw on prior experience and attempt to leverage familiar technologies and software-development-lifecycle (SDLC) methodologies for deployment,” writes Michael Flynn, managing director in AlixPartners' Information Management Services Community.
The problem: Those technologies enforce structure and focus on optimizing processing performance. That means the data is aggregated and normalized in an environment that works against Big Data sets in three ways, Flynn explains:
(MCT) — Dr. Diane Weems knew the virus was on their minds, so the acting director of the East Central Health District just launched into it at last week’s meeting of the Richmond County Board of Health.
“OK, does anyone have questions about Ebola?” she asked board members.
The lethal outbreak in Africa has prompted a lot of unneeded fear even among health care workers who might not understand that it takes more than casual contact to cause an infection, she said.
Augusta and Georgia have faced far bigger public health threats in the past and will likely face worse in the future, experts said.
The problem with the outbreak in West Africa, where nearly 2,000 people have been infected and more than 1,000 people have died, is that unlike past outbreaks in self-contained rural villages, this one is occurring in more populated areas, Weems said. These countries also lack a good public health infrastructure, and health workers might not be following common infection control procedures, such as wearing gloves, she said.
As the trend for larger and more frequent wildfires continues, a team of scientists, engineers, technologists, firefighters and government and industry professionals is working on a project, called WIFIRE, to build an end-to-end cyberinfrastructure for simulation, prediction and visualization of wildfire behavior.
The WIFIRE system will analyze wildfire dynamics with specific emphasis on climate. The system will integrate heterogeneous satellite information and remote sensor data using computational techniques such as signal processing, visualization, modeling and data assimilation to develop a scalable method to monitor weather patterns and predict the spread of a wildfire.
The project started with a three-year, $2.65 million grant to the University of California at San Diego in October 2013 when participants in the project began integration and cataloging of data from sensors, satellites and scientific models to create scalable wildfire models. Participants include the San Diego Supercomputer Center (SDSC), the California Institute for Telecommunications and Information Technology’s Qualcomm Institute and the University of Maryland.
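As a rough illustration of the kind of fire-spread prediction WIFIRE automates, consider a minimal cellular-automaton sketch. The grid size, spread probability and wind bias here are invented for illustration only; the actual WIFIRE models assimilate live satellite and sensor data into far richer physics.

```python
import random

# Toy cellular automaton for wildfire spread. Each cell is empty, tree, or
# burning; burning cells ignite neighboring trees with some probability,
# biased in the wind direction. All parameters below are hypothetical.
random.seed(42)
EMPTY, TREE, BURNING = 0, 1, 2
N = 20
grid = [[TREE] * N for _ in range(N)]
grid[N // 2][N // 2] = BURNING          # single ignition point

def step(grid, p_spread=0.5, wind=(0, 1), wind_boost=0.3):
    """Advance one time step; wind biases spread toward direction (dy, dx)."""
    new = [row[:] for row in grid]
    for y in range(N):
        for x in range(N):
            if grid[y][x] != BURNING:
                continue
            new[y][x] = EMPTY           # this cell burns out
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < N and 0 <= nx < N and grid[ny][nx] == TREE:
                    p = p_spread + (wind_boost if (dy, dx) == wind else 0)
                    if random.random() < p:
                        new[ny][nx] = BURNING
    return new

for _ in range(10):
    grid = step(grid)
burned = sum(row.count(EMPTY) for row in grid)
print(f"cells burned after 10 steps: {burned}")
```

Real systems replace the fixed spread probability with fuel, terrain and weather inputs updated as new sensor data arrives, which is exactly the data-assimilation problem WIFIRE is built to handle at scale.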
Land Cover Atlas helps communities “see” vulnerabilities and craft stronger resilience plans
A new NOAA nationwide analysis shows that between 1996 and 2011, 64,975 square miles in coastal regions, an area larger than the state of Wisconsin, experienced changes in land cover, including a decline in wetlands and forest cover, with development a major contributing factor.
Overall, 8.2 percent of the nation’s ocean and Great Lakes coastal regions experienced these changes. In an analysis of the five-year period from 2001 to 2006, coastal areas accounted for 43 percent of all land cover change in the continental U.S. This report identifies a wide variety of land cover changes that can intensify climate change risks, such as loss of coastal barriers to sea level rise and storm surge, and includes environmental data that can help coastal managers improve community resilience.
"Land cover maps document what's happening on the ground. By showing how that land cover has changed over time, scientists can determine how these changes impact our planet’s environmental health," said Nate Herold, a NOAA physical scientist who directs the mapping effort at NOAA's Coastal Services Center in Charleston, South Carolina.
Among the significant changes were the loss of 1,536 square miles of wetlands, and a decline in total forest cover by 6.1 percent.
The findings mirror similar changes in coastal wetland land cover loss reported in the November 2013 report, Status and Trends of Wetlands in the Coastal Watersheds of the Conterminous United States 2004 to 2009, an interagency supported analysis published by the U.S. Fish and Wildlife Service and NOAA.
This new NOAA analysis adds to the 2013 report with more recent data and includes loss of forest cover in an overall larger land area survey. Both wetlands and forest cover are critical to the promotion and protection of coastal habitat for the nation’s multi-billion dollar commercial and recreational fishing industries.
Development was a major contributing factor in the decline of both categories of land cover. Wetland loss due to development totaled 642 square miles, a disappearance rate averaging 61 football fields lost daily. Forest changes overall totaled 27,515 square miles, an area equal to West Virginia, Rhode Island and Delaware combined. This total impact, however, was partially offset by reforestation growth. Still, the net forest cover loss was 16,483 square miles.
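The football-field comparison can be sanity-checked with a few lines of arithmetic. The assumptions below (a field including end zones measures 360 by 160 feet, and the 1996-2011 window is treated as roughly 14 years) are ours, not NOAA's stated method:

```python
# Sanity check of NOAA's "61 football fields of wetlands lost daily" figure.
# Assumptions (not stated in the article): a football field including end
# zones is 360 ft x 160 ft, and the 1996-2011 span is treated as ~14 years.
SQ_FT_PER_SQ_MILE = 5280 ** 2      # 27,878,400 sq ft in a square mile
FIELD_SQ_FT = 360 * 160            # 57,600 sq ft per field

wetland_loss_sq_mi = 642           # development-driven wetland loss, 1996-2011
fields_lost = wetland_loss_sq_mi * SQ_FT_PER_SQ_MILE / FIELD_SQ_FT
per_day = fields_lost / (14 * 365)
print(round(per_day))              # → 61
```

Under these assumptions the calculation lands on NOAA's quoted figure, which suggests the comparison was computed on roughly this basis.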
These findings, and many others, are viewable via the Land Cover Atlas program from NOAA’s Coastal Change Analysis Program (C-CAP). Standardized NOAA maps allow scientists to compare maps from different regions and maps from the same place but from different years, providing easily accessible data that are critically important to scientists, managers, and city planners as the U.S. population along the coastline continues to grow.
“The ability to mitigate the growing evidence of climate change along our coasts, with rising sea levels already impacting coastlines in ways not imagined just a few years ago, makes the data available through the Land Cover Atlas program critically important to coastal resilience planning,” said Margaret Davidson, National Ocean Service senior advisor for coastal inundation and resilience science services.
C-CAP data identify a wide variety of land cover changes that can intensify climate change risks — for example, forest or wetland losses that threaten to worsen flooding and water quality issues or weaken the area’s fishing and forestry industries. The atlas’s visuals help make NOAA environmental data available to end users, enabling them to help the public better understand the importance of improving resilience.
“Seeing changes over five, 10, or even 15 years allows Land Cover Atlas users to focus on local hazard vulnerabilities and improve their resilience plans,” said Jeffrey L. Payne, Ph.D., acting director for NOAA’s Coastal Services Center. “For instance, the atlas has helped its users assess sea level rise hazards in Florida’s Miami-Dade County, high-risk areas for stormwater runoff in southern California, and the best habitat restoration sites in two watersheds of the Great Lakes.”
Selected Regional Findings – 1996 to 2011:
The Northeast region added more than 1,170 square miles of development, an area larger than Boston, New York City, Philadelphia, Baltimore, and the District of Columbia combined.
The West Coast region experienced a net loss of 3,200 square miles of forest (4,900 square miles of forests were cut while 1,700 square miles were regrown).
The Great Lakes was the only region to experience a net wetlands gain (69 square miles), chiefly because drought and lower lake levels changed water features into marsh or sandy beach.
The Southeast region lost 510 square miles of wetlands, with more than half this number replaced by development.
The Gulf Coast region lost 996 square miles of wetlands due to many factors, including land subsidence and erosion, storms, man-made changes, and sea level rise.
On a positive note, local restoration activities, such as in Florida’s Everglades, and lake-level changes enabled some Gulf Coast and Southeast region communities to gain modest-sized wetland areas, although such gains did not make up for the larger regional wetland losses.
C-CAP moderate-resolution data on the Land Cover Atlas encompass the intertidal areas, wetlands, and adjacent uplands of 29 states fronting the oceans and Great Lakes. High-resolution data are available for select locations.
All C-CAP data sets are featured on the Digital Coast. Tools like the Digital Coast are important components of NOAA’s National Ocean Service’s efforts to protect coastal resources and keep communities safe from coastal hazards by providing data, tools, training, and technical assistance. Check out other products and services on Facebook or Twitter.
NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter and our other social media channels.
What if there was only a single BCM/DR methodology that all organizations would follow? Would it be able to address the specific concerns of particular industries, or generalize to the point where it adds no value? Would it be able to address all situations, all possible scenarios and all industries in all countries? How could any single methodology address every situation and every minute detail, taking into account language interpretation, definitions and culture? Could it be done?
If everything were the same and the same perspectives applied, it would make sense for whatever satisfies the needs of a manufacturer to follow the same rationale that suits an insurance company. But that is impossible, isn’t it? A manufacturer has concerns that an insurance company wouldn’t. That’s like saying what is good for one person is good for another. Well, we know that’s not correct because we are all individuals with our own wants, needs, desires…and dislikes.
There are thousands upon thousands of organizations in the world, so how can we ever expect that each of these will need the exact same BCM/DR solution or framework? We can’t.
Hospitals nationwide are hustling to prepare for the first traveler from West Africa who arrives in the emergency room with symptoms of infection with the Ebola virus.
Dr. Thomas R. Frieden, director of the Centers for Disease Control and Prevention, has said such a case is inevitable in the United States, and the agency this month issued the first extensive guidelines for hospitals on how to recognize and treat Ebola patients.
The recommendations touch on everything from the safe handling of lab specimens to effective isolation of suspected Ebola patients. But one piece of advice in particular has roused opposition from worried hospital administrators.
The C.D.C. says that health care workers treating Ebola patients need only wear gloves, a fluid-resistant gown, eye protection and a face mask to prevent becoming infected with the virus. That is a far cry from the head-to-toe “moon suits” doctors, nurses and aides have been seeing on television reports about the outbreak.
Big Data isn’t just Hadoop and in-memory anymore. Big data technologies and tools have grown significantly over the past few years — so much so that it’s hard to keep up with them.
If you’d like to get up to speed and are primarily interested in open source solutions, I recommend this CIOL.com column by Virenda Gupta, senior vice president at Huawei Technologies India.
He discusses new open source solutions in the areas of Big Data processing, analytics and mining. He also addresses Big Data virtualization, where he sees a shortage of comprehensive platforms.
(MCT) — Ebola is not the virus that keeps Marc Lipsitch up at night.
Lipsitch, a Harvard epidemiologist who grew up in Atlanta, is on a mission to eradicate human-engineered strains of deadly pathogens such as the H5N1 “bird flu.” Those strains exist only in a handful of labs, where they have been genetically altered to make the virus more contagious.
H5N1, which first infected humans in 1997 in China, has killed about 60 percent of the almost 700 people who have been diagnosed with it. Nearly all of them got sick through contact with infected birds; in nature, H5N1 does not pass easily from person to person.
If it acquires that ability in the wild before scientists have developed effective vaccines and treatments, many millions of people are likely to die.
(MCT) — The mean season has arrived.
During the 10-week stretch from mid-August through October, the most powerful storms tend to form in the Atlantic, Caribbean and Gulf of Mexico. It's also when South Florida is most likely to be struck.
"Almost all South Florida hurricanes and the vast majority of tropical storms have struck our area during these months," said meteorologist Robert Molleda of the National Weather Service in Miami.
Normally, ocean waters heat up and wind shear eases during the heart of the season, allowing tropical systems to form and grow.
The good news this year: Tropical waters should remain cooler than normal and wind shear stronger than normal over the entire Atlantic basin for the remainder of the hurricane season.
The Business Continuity Institute is pleased to welcome its first Associate Fellow (AFBCI) since the new grade was created. Having completed a rigorous assessment process, Johannes Muellenberg now has the honour of being able to call himself an AFBCI and gain extra recognition through the use of those letters after his name.
Earlier in the year, the BCI launched its AFBCI grade in order to meet the growing demand of our members, many of whom have contributed significantly to the industry and the Institute but are not yet eligible to become a Fellow. The AFBCI grade sits between MBCI and FBCI, and successful candidates must have demonstrated their commitment to the industry through their years of experience, and a commitment to ongoing learning through their participation in a continuous professional development (CPD) scheme.
To find out more information on BCI membership grades, please click here.
No two disasters are ever the same and business continuity practitioners should never base their plans directly on an individual experience, but case studies still provide an extremely helpful tool when it comes to thinking about what organisational disruptions may occur and how they can be dealt with. That is the purpose of a new book titled ‘In hindsight: a compendium of business continuity case studies’ launched in July at Missenden Abbey in Buckinghamshire, UK, a tribute to the venue where the idea for the book was originally conceived.
In hindsight was edited by Robert Clark MBCI and authored by several people from the field of resilience who all (with one exception) came together when studying at Buckinghamshire New University under the tutelage of Philip Wood AMBCI who provided the preface for the book. In his preface he states "I have found it to be an interesting, thought provoking and stimulating collection of studies and I have learned a great deal from reading it. Learning is key to understanding, and understanding allows us to make the right decisions.”
This compendium of business continuity case studies contains fascinating examples showing the diverse range of issues that organisations could have to deal with. With stories ranging from financial crises (collapse of Barings Bank) to industrial disasters (Piper Alpha), from disease outbreaks (SARS) to natural disasters (UK flooding of 2007), from product recalls (Toyota’s 8 million cars in 2009/10) to crowd management (Dusseldorf Love Parade in 2010), this book is packed with case studies of various incidents demonstrating what happened, how it was dealt with and an additional focus on what went well and what didn’t go well.
In explaining why ‘hindsight' is perhaps the perfect theme for a book, Robert Clark highlighted that “we tend not to look back enough on what has happened in the past in order to learn from it. That's why this book is not just about theory, it is about looking at past incidents and identifying how an effective business continuity management system could have made the situation better.”
Disasters will always happen but if we can learn from each one then we can improve on the outcome the next time something similar happens. To find out more about this book, please click here.
The 2014 BCI Asia Awards took place on Thursday 14th August at the 12th Asia Business Continuity Conference in Singapore. The BCI Asia Awards are held each year to recognise the outstanding contribution of business continuity professionals and organizations living in or operating in China, Tibet, Hong Kong, Japan, Macau, North Korea, South Korea, Taiwan, Mongolia, Philippines, Malaysia, Singapore, Laos, Thailand, Vietnam, Brunei, Myanmar (Burma), Cambodia, East Timor, Indonesia.
The Winners of the Awards were:
Business Continuity Provider of the Year (Product)
Business Continuity Team of the Year
Business Continuity Innovation of the Year
BCM Manager of the Year
Khalid Ahmed Bahabri
BCM Newcomer of the Year
Congratulations to all the winners and well done to all those who were nominated. All winners from the BCI Asia Awards 2014 will be automatically entered into the BCI Global Awards 2014 which take place in November during the BCI World Conference and Exhibition 2014.
(MCT) — Nearly 24 hours after witnessing the devastation himself, Gov. Rick Snyder today declared a disaster for metro Detroit counties in the wake of a historic flood that left a huge path of destruction across the region.
Thousands of flooded basements and raw sewage spills. Wrecked cars. A massive sinkhole. Ongoing traffic nightmares.
Metro Detroit is dealing with all of this — and more. Adding to the chaos, scavengers are now going through water-logged debris that people are putting out on the curb for trash. Where that ends up is uncertain, triggering yet more public health concerns.
The devastation has left local officials exasperated and pleading for help, saying there is no way their communities can handle this on their own. They are in dire need of state and federal aid, they say. And it needs to come fast.
With the Northern Hemisphere now in the midst of hurricane, typhoon and cyclone season, many businesses have emergency plans in place, plywood to board the windows, and generators at the ready. But a new study from economists Solomon M. Hsiang of Berkeley and Amir S. Jina of Columbia, “The Causal Effect of Environmental Catastrophe on Long-Run Economic Growth,” found it is far more difficult for the overall economy to weather the storm.
As Rebecca J. Rosen explained in The Atlantic, economists previously had four competing hypotheses about the impact of destructive storms: “Such a disaster might permanently set a country back; it might temporarily derail growth only to get back on course down the road; it might lead to even greater growth, as new investment pours in to replace destroyed assets; or, possibly, it might get even better, not only stimulating growth but also ridding the country of whatever outdated infrastructure was holding it back.”
After looking at 6,712 cyclones, typhoons, and hurricanes that occurred between 1950 and 2008 and the subsequent economic outcomes of the countries they struck, Hsiang and Jina were able to decisively strike down most of these hypotheses. “There is no creative destruction,” Jina said. “These disasters hit us and [their effects] sit around for a couple of decades.”
In 2012, when Superstorm Sandy struck the East Coast, thousands of residents were displaced from their homes. In wake of the panic and chaos, Airbnb, an online platform where people list and book accommodations around the world, saw an opportunity to leverage its existing services for neighbors to help neighbors. During the disaster, 1,400 Airbnb hosts — who typically collect payment for accommodations — opened their homes and cooked meals for those left stranded.
After Sandy, Airbnb reached out to the San Francisco Department of Emergency Management to share what it learned and discuss how it could reach a broader audience during an emergency. Simultaneously, the company was in discussions with officials in Portland, Ore., about an initiative to help civic leaders and community members work together to create a more shareable and livable city.
Over a series of articles, Hilary Estall, Director of Perpetual Solutions, will be discussing subject areas aimed at those managing a business continuity management system (BCMS) and in particular, those systems certified to ISO 22301. With her pragmatic approach to management systems and auditing in particular, Hilary will offer an insight into areas not widely discussed but still important for the ongoing success of a BCMS.
In the second article of the series, Hilary Estall looks at what’s involved when a certified BCMS reaches its recertification point. What does this mean and what’s involved?
In this article I demystify the process of recertification: the procedure undertaken by certification bodies every third year in the cycle of management system certification. I identify how an organization should prepare and describe the process of recertification itself. Is it just another audit or is there more to it?
If your organization has a certified business continuity management system (BCMS) you will know that in order to retain it, your certification body will carry out periodical audits. You will also know that when you first achieved certification and were issued with your certificate, it had an expiry date on it, three years hence*. What are the implications of this expiry date and how should you prepare for ‘renewal’?
When it comes to data restoration, addressing deleted mailboxes or emails is the most common request of IT administrators, according to new survey data from Kroll Ontrack.
When asked how often they receive requests for data restoration, 61 percent of the nearly 200 Ontrack PowerControls customers surveyed across EMEA, North America and APAC report they receive up to five email related restoration requests a month, with an additional 11 percent claiming up to 10 times a month.
In Europe, the second most common data restoration need was disaster recovery (16 percent), followed by missing data (12 percent). In the US, the second most common data restoration need was collection of electronic data for ediscovery (21 percent), followed by consolidating data from older to new applications to eliminate legacy servers (15 percent).
Requests for data restoration came from all departments across an organization, with 24 percent stemming from the internal legal department, 22 percent coming from IT security and 15 percent originating from sales and marketing. Why do these people need their email and documents back? 45 percent of IT administrator respondents note that employees request their email and documents back because they were accidentally deleted. Internal investigations (17 percent) ranked as the second most common source of restoration requests.
Historically, vendor solutions for disaster recovery have been created for on-site use for individual enterprises. The client company concerned was the sole owner of the user data involved, and disaster recovery could be implemented without having to worry about anybody else. The cloud computing model changes that situation. It’s possible to use cloud services to have your own dedicated servers and instances of applications, or to share physical space but still have your own application (as in multi-instance setups). However, multi-tenancy (perhaps the defining feature of cloud architectures) makes the application of disaster recovery solutions rather more delicate.
We talk about Big Data and, now, Small Data as if it’s always clear which one you’re dealing with: Big Data means volume, variety or velocity (or all three), and small data is structured data and everything else.
Of course, the reality isn’t always so binary, according to a panel of medical and pharmaceutical experts at the recent MIT Chief Data Officer and Information Quality Symposium.
SearchCIO.com covered the event, and, in a recent article, shared a few lessons from the panel’s trial-and-error approach to dealing with data variety. Mark Schreiber’s experience is a perfect example.
Codenomicon's discovery of OpenSSL's "Heartbleed" flaw this past spring highlighted the increasing importance of source code assurance and quality control as software grows in prominence in daily life. The Heartbleed memory leak opened the door for infiltrators to obtain passwords and security keys to decode encrypted data — a vulnerability that allegedly still threatens enterprise systems months after its discovery, according to a recent report.
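The Heartbleed flaw boiled down to trusting a client-supplied length field when echoing data back. Here is a minimal Python sketch of that bug class, assuming a toy wire format (two length bytes followed by a payload); the function name and format are illustrative, not OpenSSL’s actual code:

```python
def heartbeat_response(request: bytes) -> bytes:
    """Echo a heartbeat payload back to the client.

    Wire format (illustrative): two big-endian bytes stating the
    payload length, followed by the payload itself.
    """
    claimed_len = int.from_bytes(request[:2], "big")
    payload = request[2:]
    # The Heartbleed-style mistake is to trust claimed_len and copy
    # that many bytes, leaking whatever sits past the real payload
    # in memory. The fix is to validate the claim first.
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds actual payload")
    return payload[:claimed_len]
```

For example, a request claiming 3 bytes and carrying `abc` is echoed faithfully, while a request claiming 255 bytes but carrying only 2 is rejected rather than answered with adjacent memory.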
(MCT) — Karen Windon still gets chills when she thinks back on Hurricane Charley.
"We were right in the cross-hairs for a long time as Charley barreled up the Gulf of Mexico," Windon recalled Tuesday.
Windon, now a deputy administrator for Manatee County, Fla., was the county's public safety director in 2004.
"For me, it was a mixture of tense moments, and swelling pride, knowing we had such a committed team at the emergency operations center at that time," Windon said.
Although Manatee County escaped much of Charley's fury, with a historic right turn that directed it northeast through Punta Gorda and Arcadia on Aug. 13, 2004, it proved to be a game changer.
It changed the local public perception of hurricanes from something to ride out to knowing there could be a dangerous killer on the loose. And Charley put emergency managers on notice that they needed to step up their games.
Manatee County officials got serious about building a stand-alone, hardened emergency operations center that could withstand a natural disaster such as a hurricane. Officials moved ahead with plans for a new Public Safety Center that might otherwise have languished on a wish list for years.
Recently I did a remarkably silly thing. Something I hadn’t done in almost seventeen years as the proverbial travelling consultant.
I went to London. No, that’s not the silly thing – I go to London quite often and honestly it’s really not that bad there. Even for a country bumpkin like me. No, the silly thing came to light after I’d boarded the train and it was pulling out of the station. I opened my bag to take out my laptop and some papers so that I could start work and my laptop wasn’t there. I checked again. And again. But it still wasn’t there. After checking for a fourth time the penny finally dropped – I’d left my laptop at home. I was a couple of minutes into a two-hour train journey, all ready to get stuck in to some quality report writing time and my laptop, one of the main tools of my trade – if not the main tool – was sitting at home, rather than on the table in front of me.
After the initial panic attack subsided I remembered that I wasn’t presenting today, so at least I didn’t need my laptop for any of my meetings. And I had my phone, and lots of people tell me that’s all they need to be able to work. “I can just work from wherever I am, as long as I have my mobile phone and an internet connection” is an assertion I hear all the time. Well this was a perfect opportunity for me to put that theory to the test.
Luckily I had a charger with me, otherwise I’d have been in trouble from the off. Because the second thing I didn’t do last night – the first being to not spot the absence of a laptop when I checked the contents of my bag (yes I did actually check, or at least I thought I did – it was late) – was to charge my ‘phone. I have one of those ‘phones that you have to charge about every three and a half hours (you know the ones) so the 20% remaining battery life probably wouldn’t have got me halfway to London, let alone seen me through the day.
So I plugged in and off I went. I couldn’t work on the report that I’d planned to because, whilst I synchronise files between my desktop and laptop, I don’t store all of my data in the cloud as a matter of course. In fact I don’t store much there at all, particularly if it’s confidential. Call me old fashioned but I haven’t yet developed the same blind faith in 'the cloud' that many others have. I’m with one of my information security colleagues on this one – he recently said “I wish people would stop calling it ‘the cloud’ and start calling it ‘putting my data on someone else’s computers’.” Don’t get me wrong, I’m not saying 'the cloud' is all bad. And yes, I do use it. But I’m extremely selective about what I choose to put there. There are, after all, some significant advantages if it’s used properly. But the cloud is a big and often dimly-lit place and not every cloud is created equal. Call me a cynic but I largely think of 'the cloud', particularly the free bits of it, as a really convenient way of letting someone else delete, corrupt, leak, sell, give away, deny me access to or otherwise compromise my data so that I don’t have to do it myself. Which I personally think is a healthy attitude that others would do well to adopt.
But I digress. In any case, trying to write a proper report on a phone, as opposed to making a few notes, isn’t the easiest thing in the world to do. For a start, typing large amounts of text on a phone isn’t as easy as on a real keyboard, at least for anyone with normal sized fingers. Let alone the fact that my phone is constantly correcting what I type, which means I spend an inordinate amount of time correcting it back again. Then there’s the compatibility issues (which I won’t go into here as it’ll probably just turn into a rant against Microsoft and Apple), which means that you’re pretty much restricted to text only, without too much formatting and certainly nothing as weird and wonderful as a table.
But I digress again. At least I could start by sending a few e-mails. Except there was no network connection. On-board wifi hasn’t made much of an appearance on the trains from Evesham to London yet, at least not the peak time trains (for some reason you can get it at 2 o’clock in the afternoon, which is really useful for the majority of business travellers who actually have to get up in the morning). And the mobile phone signal is somewhat patchy for the first part of the journey. Funny how I can get a mobile signal at the top of a ski slope but not in the Cotswolds, despite the claims of 99% UK coverage by the mobile ‘phone companies (second rant suppressed).
So I read a couple of (paper) documents, wrote a bit of my blog, corrected the corrections, finally managed to send and receive some e-mails, did a bit of web browsing (albeit looking at stuff on a very small screen), popped a couple of headache tablets and arrived in London for my meetings.
Shortly before I got on the train home, my phone started bleating “low battery” at me again. “No matter”, I thought, “I’ll just charge it on the train”. Except the electrical sockets on this particular train weren’t working. So I had about twenty minutes of trying to access my e-mails (and failing, due to a glitch at my internet service provider – good old Sod’s Law!) and writing a few notes for later processing before my phone gave up the ghost. At which point I gave up too and read the paper instead.
So, how effective was my plan to “just work from wherever I am using my mobile ‘phone”? Well, I suppose I managed to do a bit, and significantly more than in the pre-smartphone days. But how effective was it really? Well, I think the answer to that is fairly evident. I reckon I probably achieved fifteen to twenty percent of what I’d have been able to do had I had my laptop to hand.
Yes, remote working is eminently possible – I do it all the time – but its effectiveness is hugely dependent on the tools available and the type of work that you’re trying to do remotely. Even working at home can be problematical and far less efficient than working in an office, if that’s what you normally do. And if you’re a laptop user and you don’t have it with you (which is a distinct possibility if you’re one of the many, many people who leave their laptops in the office when they go home) remote working can be trickier still.
And yes, there are all sorts of things that can be done with a smartphone (aside from checking Facebook or tweeting), particularly if your job largely involves phoning and emailing people and making a few notes. But in my experience their usefulness is limited and they’re really no substitute for a proper computer if you have things like reports to write (or read) or large, complicated spreadsheets to deal with, amongst other things. And, whilst they may be OK for a short period, I challenge anyone to work effectively for anything more than a very short time using just their smartphone.
So next time someone says to you “I can just work from wherever I am, as long as I have my mobile phone and an internet connection,” I strongly suggest you challenge them to prove it. Because some things are a lot easier said than done.
Andy Osborne is the Consultancy Director at Acumen, and author of Practical Business Continuity Management. You can follow him on Twitter and his blog or link up with him on Linked In.
I’m really very grateful that Education Month has given me an opportunity to focus on what interests me. Primarily, I’m interested in business continuity (as an element of a wider interest in Organisational Resilience). I’m also interested in improvement; that means self, organisation, personal and professional. And of course, I’m interested in education. I like to learn from my peers, colleagues, students and business partners, as well as by studying and maintaining focus on what is going on around us every day. And because it’s my job, I want to share that enthusiasm and interest with others; in this case, you.
Business continuity is one of those industries/professions/sectors that is on a growth trajectory. It needs to be as it works in an environment that is rife with influences that may engender or initiate change and thus inform the shape of risk and impact landscapes. There is much speculation, theorising and pontificating about what is coming, how it should be influenced or could be controlled and how we deal with impacts. From globalized business activity to changes in national and international power balances, from political reorientations to an emergence of technology enabled ‘people power’. Also, while there is an immense amount of opinion and theory put forward daily from all quarters concerning human behaviour and its effect on others (such as, by implication, political, economic, social, technological impacts) it is also worth considering ideas, theories and opinions on the less easily quantifiable and controllable. These are all areas for thought, concern and yes, education.
So, if we are aware of the potential problems, what’s the problem? Well, there are thousands of business continuity professionals (that is what you are: professionals) out there who are undereducated, or perhaps miseducated, or maybe even not specifically educated at all. You may have been trained; but ‘educated’ is a different thing. Of course, you will know things, processes, functions, problems and issues and you will be adept in your role, and if that’s OK with you; then that’s OK. The sector abounds with professionals who are working hard, mainly successfully, to do what needs to be done and in general, we don’t equate ourselves with reticence, lack of confidence or indecision; or indeed lack of self-awareness.
However, there are very many people who do hesitate when it comes to education. It is interesting. Maybe this hesitancy is not about cost; nor is it usually about obtaining support from employers. Usually, there is a fear of being overcome by the difficulties and challenges of learning, perhaps because they have been away from formal education for many years, or simply because they are familiar with training rather than the academic rigour of university programmes.
Well, simply put, there is nothing to be afraid of or worried about. If you decide to undertake an academic programme you can expect to be provided with advice, support, guidance and resources to allow you to grow into the mysteries of higher educational learning. In fact, here’s a little secret – there are no mysteries at all! Learning takes time; skills take practice, correction and amendment to perfect. It can be done and in fact, it is not intimidating or difficult at all. It does take hard work and application – but so does life.
Most importantly higher education learning doesn’t turn you into an academic; it enhances your professional capabilities. In fact, unless you are steeped in study on a daily basis, you are not an academic or a scholar – in reality, for those who undertake professional and academic courses as part of their CPD (continuing professional development), the clue is in the acronym - ‘CPD’! And importantly, it is not all about theory; education in the modern world and in the BC world should be about practical application.
So, in Education Month, perhaps it is worthwhile taking pause from your busy and demanding life and thinking about what you would like to be.
- Better paid? Education helps whether you study for a certificate, diploma, bachelors or master’s degree.
- More competitive? Education helps you to think about and analyse the world around you.
- Better at your job? Education helps you to learn and understand what you do and why – and what you should be doing and why.
- A thought leader? Education helps you to become a more effective thinker as well as an effective practitioner; win/win!
Education will not necessarily make you any better than anyone else. Just holding an award is meaningless if you are unable to make it work for you and if you cannot use and develop the skills and knowledge gained from your learning. But - if you’ve taken the time and trouble to read the Education Month blogs and other publicity then you must be interested and it may be time to transform your interest into reality.
Phil Wood is the Head of Enterprise, Security and Resilience within the Faculty of Design, Media and Management at Buckinghamshire New University in the UK.
Ultimate responsibility for ERM starts at the top. However, everyone who matters within an organization should participate in the ERM process.
While several executives have significant responsibilities for ERM, including the Chief Risk Officer, Chief Financial Officer, Chief Legal Officer and Chief Audit Executive, the ERM process works best when all key managers of the organization contribute. The COSO ERM framework states that managers of the organization “support the entity’s risk management philosophy, promote compliance with its risk appetite and manage risks within their [respective] spheres of responsibility consistent with risk tolerances.” Therefore, identifying leaders throughout the organization and gaining their support is critical to successful implementation of ERM.
A goal of ERM is to incorporate risk considerations into the organization’s agenda and decision-making processes. This means that ultimately, every manager is responsible, which can only happen when performance goals, including the related risk tolerances, are clearly articulated, and the appropriate individuals are held accountable for results.
One question often posed to me is how to think through some of the relationships a company has with its various third parties in order to reasonably risk rank them. Initially I would break this down into sales and supply chain to begin any such analysis. Anecdotally, it is said that over 95% of all Foreign Corrupt Practices Act (FCPA) enforcement actions involve third parties, so this is one area where companies need to give some thoughtful consideration. The key, however, is that a “check-the-box” approach may be not only inefficient but, more importantly, ineffective, because each compliance program should be tailored to an organization’s specific needs, risks and challenges. The information provided below should not be considered a substitute for a company’s own assessment of the corporate compliance program most appropriate for that particular business organization. In the end, if designed carefully, implemented earnestly, and enforced fairly, a company’s compliance program – no matter how large or small the organization – will generally allow the company to prevent violations, detect those that do occur, and remediate them promptly and appropriately.
By now, you’ve heard all the hoopla over IBM’s new brain-like chip. There’s little doubt that this is significant chip innovation, but what interests me is what this new development means for data.
Most of the news has focused on the similarities between SyNapse's TrueNorth and the human brain. In fact, as revealed last week, the technology models 16 million neurons, a good deal short of the 100 billion neurons in the human brain, according to the UK’s University of Manchester Computer Engineering Professor Steve Furber.
Furber co-designed the original ARM processor chip in the 1980s. For the past three years, he has worked on a project that would model 1 billion neurons, according to the UK Register.
During the process of developing a Business Continuity Plan or strategy it is easiest to focus on the larger picture – to understand the major impacts and potential roadblocks. But when putting that Plan on paper (figuratively or literally) it is time to think about more granular logistical needs and issues. One that is often overlooked is where – and how – the money to pay for that recovery strategy will come from. A good plan must document that process, or create one if it doesn’t already exist.
Even if one assumes that the organization will pay any price to recover its business operations in the most timely manner possible, questions remain:
- Who has the authority to approve expenditures?
- What are the limitations of that authority?
- What is the process needed to gain approval of expenditures?
- How will expenses be documented?
- How will vendors and suppliers be paid?
If the Business Continuity Plan calls for moving personnel to another office many miles away, how will their transportation costs (airline or train tickets, fuel reimbursement) and lodging be paid?
CHRISTCHURCH, New Zealand — You don’t see it, but you certainly know when it’s not there: infrastructure, the miles of underground pipes carrying drinking water, stormwater and wastewater, utilities such as gas and electricity, and fiber-optic and communications cables that spread like veins and arteries under the streets of a city.
No showers, no cups of tea or coffee, no flushing toilets, no lights, no heating, and no traffic lights — a modern bustling city immediately shuts down. Factor in damaged roads, bridges, and retaining walls above ground, and the situation is dire.
That calamity hit Christchurch, New Zealand, in a series of earthquakes that devastated the city in 2010 and 2011.
Most people here don’t see the extent of repair work going on underground. They just notice roadworks and seemingly millions of orange cones that have sprouted up all over the city. Yet the organization created to manage Christchurch’s infrastructure rebuild has a vital role, and it’s become something of a global model for how to put the guts of a city back together again quickly and efficiently after a disaster.
By Charlie Maclean-Bristol
The first death caused by Ebola (officially Ebola virus disease (EVD)) outside Africa caught my eye this week; the victim was a Saudi national who had been visiting Sierra Leone.
Over the last few months the number of deaths from the illness has been growing, with people infected in Guinea, Sierra Leone and Liberia.
At the time of writing there have been 932 deaths and over 1500 cases.
Apart from the first death outside Africa, the illness has recently spread to Nigeria, with one death and a number of other cases.
The spread to Nigeria, with its large population and strong links to Europe, makes it more likely that the illness could travel further afield.
By Tom Salkield
2014 started badly - by severely testing the UK’s flood defences. Information security professionals face a similarly precarious situation as they work continuously to hold back a flood of ever more sophisticated attacks and protect their information assets. Cybercrime, like the weather, is often unpredictable, but organizations can gain a competitive advantage by making risk-based decisions and investments to focus resources and get the best return on investment to prevent costly breaches of their defences.
The coverage of the flood damage to many areas of the UK dominated the news earlier this year. The debate still rages between those who argue that more should have been invested in planning and delivering effective defences, and those who claim that the volume of rain meant there was little more that could have been done to prevent the devastation.
Tripwire, Inc., has published the results of a survey of 215 attendees at the Black Hat USA 2014 security conference in Las Vegas, Nevada.
Industry research shows most breaches go undiscovered for weeks, months or even longer. Despite this evidence, 51 percent of respondents said their organization could detect a data breach on critical systems in 24 to 48 hours, 18 percent said it would take three days and 11 percent said within a week.
According to the Mandiant 2014 Threat Report, the average time required to detect breaches is 229 days. The report also states that the number of firms that detected their own breaches dropped from 37 percent in 2012 to 33 percent in 2013.
“I think the survey respondents are either fooling themselves or are naively optimistic,” said Dwayne Melancon, chief technology officer for Tripwire. “A majority of the respondents said they could detect a breach in less than a week, but historical data says it is likely to be months before they notice.”
Agile project methodologies have their roots in the software industry, but the overall principle of staying close to market requirements can be applied in any sector. When risk management becomes difficult because of uncertainties like the weather or the economy, short agile cycles encourage a focus on objectives. This may make more sense than detailed planning that tries to put everything in place for the mid to long term. Efficiency and business continuity can be improved, on condition that communications remain open and productive with all stakeholders. So with these advantages, why don’t all organisations and projects jump on the agile bandwagon?
(MCT) — It’s been a little more than three months since the April 28 tornadoes ravaged a portion of Limestone County, and efforts continue to get residents back on their feet.
United Way of Athens-Limestone County has played an integral role in those efforts.
After the tornadoes, the nonprofit organization took on 75 long-term recovery cases, but that doesn’t include those who were provided other services, according to United Way Executive Director Kaye Young McFarlen.
Some needed quick, easy help on the front end; others were more long term and more involved.
Behind the media sensationalism and hyping of the story, there is little disputing the fact that the Ebola virus is a regional emergency with the potential to become much more. Of course, this is the problem with every transmissible disease, and especially so in our age of international travel for business and pleasure. The symptoms and effects of this disease are particularly unpleasant and, if you have an interest, there is no shortage of descriptions available to you. Nor is there any shortage of warnings and pronouncements from governments and agencies such as the World Health Organisation about the spread and effects of Ebola.
How much of a risk is there? For me, the prime risk is the lack of awareness of the disease while other things are going on. Ebola can be added to the list of big things that are happening in 2014 (Ukraine, Syria, Iraq, Libya, Malaysian Airlines, natural disasters) that probably desensitise us to the gravity of each situation individually and their collective impact. Besides, Ebola is happening in Africa, mainly, and attracts the usual international public response – relative disinterest. Although the outbreak is in the news, the threat appears to be downplayed, certainly here in the UK, as being of remote concern.
Going mobile with your data? Don’t think you can forget data quality. In fact, data quality takes on a new importance when you’re dealing with enterprise mobility, warns David Akka, head of Magic Software’s UK branch.
“In an enterprise mobility project, we typically have the same challenge of presenting information from multiple systems to the user on a single screen, but mobile brings other challenges as well,” writes Akka in a recent Enterprise Apps Tech column. “For example, typing on a small touchscreen increases the chance that critical data may be misspelled (increasing the chance of duplicating customer records); and users are also far less likely to search multiple records, as they get frustrated faster on mobile.”
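Akka’s duplicate-record worry is easy to illustrate. The sketch below is a hypothetical, minimal example (not Magic Software’s product or any real integration workflow): it normalizes customer names and uses Python’s standard-library `difflib` to flag near-identical entries of the kind a mistyped touchscreen entry can create.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase and collapse whitespace so trivial typing differences vanish."""
    return " ".join(name.lower().split())

def likely_duplicates(records, threshold=0.85):
    """Return index pairs of records whose normalized names are near-identical."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            ratio = SequenceMatcher(
                None, normalize(records[i]), normalize(records[j])
            ).ratio()
            if ratio >= threshold:
                pairs.append((i, j))
    return pairs

customers = ["Acme Industries", "acme  industires", "Globex Corp"]
print(likely_duplicates(customers))  # → [(0, 1)]
```

The 0.85 threshold is an arbitrary starting point; in practice it would be tuned against known duplicates, and a real pipeline would compare additional fields (address, phone) rather than names alone.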
What surprised Akka, and prompted his blog post, is that he found that a major automotive industry company is outsourcing data quality to an external agency — despite the fact that data quality could easily be added into the integration workflow.
Forecasters with NOAA’s Climate Prediction Center now say the chances of a below-normal Atlantic hurricane season have increased to 70 percent, up from 50 percent in May.
In its updated outlook, NOAA said overall atmospheric and oceanic conditions that are not favorable for storm development will persist through the season.
However, coastal residents may still want to heed the cautionary words of NOAA lead forecaster Dr. Gerry Bell.
By now, you’ve heard about the Russian gang of hackers who allegedly gathered more than a billion user names and passwords and a lot of other information. How did you react to the news? I kind of shrugged my shoulders about it. It’s news, sure, but as someone who reads about breaches daily and gets regular updates about what’s happening in the state of cybersecurity, my reaction was this: What user names and passwords could they have that haven’t already been breached at some point?
I’m not the only one who said this. Shortly after I told some friends on Facebook that they shouldn’t panic, I got this comment in an email from John Prisco, CEO with Triumfant:
This issue reminds me of an iceberg, where 90 percent of it is actually underwater. That’s what is going on here with the news of 1.2 billion credentials exposed. So many cyber breaches today are not actually reported, often times because companies are losing information and they are not even aware of it. Today, we have learned of a huge issue where it seems like a billion passwords were stolen overnight, but in reality the iceberg has been mostly submerged for years – crime rings have been stealing information for years, they’ve just been doing it undetected because there hasn’t been a concerted effort on the part of companies entrusted with this information to protect it.
Over a series of articles, Hilary Estall, Director of Perpetual Solutions, will be discussing subject areas aimed at those managing a business continuity management system (BCMS) and in particular, those systems certified to ISO 22301. With her pragmatic approach to management systems and auditing in particular, Hilary will offer an insight into areas not widely discussed but still important for the ongoing success of a BCMS.
In her first article, Hilary Estall shares her thoughts on becoming a BCMS Lead Auditor and explores why people sometimes mistakenly opt for this particular auditor classification when more appropriate options may be available:
In this article I consider the role of the Lead Auditor and why so many individuals opt for this route for their auditor training. It’s a subject close to my heart and one which, in my opinion, is misrepresented and therefore misunderstood by those seeking auditor training. Whilst it’s not limited to business continuity management system standards, this is the context in which I have written the article.
Dr. Steven Goldman identifies ten business continuity and disaster recovery trends that are emerging, highlighting actions that business continuity managers can take in response to each item.
10: There has been an overall worldwide increase in the number of natural disasters
As a trend, the incidence of natural disasters worldwide has steadily increased, especially since the 1970s, according to reports from the New England Journal of Medicine (NEJM) and from global insurer Munich Re.
Climate-related disasters include hydrological events such as floods, storm surge, and coastal flooding, while meteorological events include storms, tropical cyclones, local storms, heat/cold waves, drought, and wildfires. There were three times as many natural disasters between 2000 and 2009 as between 1980 and 1989. The NEJM notes that a vast majority (80 percent) of this growth is due to climate-related events. As a result, the amount of economic damage from these natural disasters has seen a steady upturn. This in turn means that companies and organizations need to be prepared for natural disasters.
The number of geophysical disasters has remained fairly stable since the 1970s. Geophysical disasters include earthquakes, volcanoes, dry rock falls, landslides, and avalanches.
What does this mean to you?
The conventional wisdom is that if you fight Mother Nature, she always wins. However, this does not mean you surrender! It means that companies and organizations need to be prepared for whatever Mother Nature can dish out. Remember Hurricane Sandy? Many companies in the northeast USA were battered, but several not only survived but also continued operations. How? Planning, preparation, and execution.
The world around us is constantly changing. Some say we now live and work in a VUCA environment, characterised by volatility, uncertainty, complexity and ambiguity.
So how do businesses survive (and thrive) when nothing ever stands still? Perhaps part of the answer is in continuous learning and development, which can enable individuals to be agile and responsive to each and every challenge.
GALVESTON — Nearly six years after Hurricane Ike, one of the nation’s deadliest hurricanes, struck this city, boarded-up and dilapidated houses and empty lots still punctuate the streets. Many houses that remain are decorated with “for sale” signs.
“This was a thriving neighborhood,” Tina Kolunga said as she drove down a street lined with abandoned houses. On another street, she pointed out large patches of grass where homes and public housing used to sit. Kolunga still lives in Galveston, though she struggled for years after Ike to rebuild her home.
“This used to be one of the busiest restaurants in town,” Kolunga said, pointing out a rundown white building still worn from water damage.
As Kolunga toured the damage that remains years after Ike, recounting the ongoing recovery struggles of her neighbors, state lawmakers across town worried about the future of this coastal town and the surrounding region. At a hearing of the Joint Interim Committee to Study a Coastal Barrier System, held Monday on Texas A&M University's Galveston campus just a few miles from Kolunga's neighborhood, experts told legislators that the coast is still not adequately prepared for a hurricane like Ike, which in September 2008 left billions of dollars of damage and at least 100 people dead in its wake.
Over the past several years, most (but not all) states made strides in reducing their inventory of bridges in poor condition.
Friday marked the seven-year anniversary of the I-35 bridge collapse in Minneapolis. The tragedy and subsequent bridge failures have helped focus public attention on the issue, leading some lawmakers to support additional investment in infrastructure.
A Governing story published in June examined how some states managed to significantly cut their tallies of structurally deficient bridges.
In the six years following the Minneapolis collapse, the number of structurally deficient bridges declined 14 percent nationwide.
Presidential Policy Directive 8: National Preparedness requires an annual National Preparedness Report (NPR) that summarizes national progress in building, sustaining and delivering the 31 core capabilities outlined in the National Preparedness Goal (the Goal). The intent of the NPR is to provide the Nation—not just the federal government—with practical insights on core capabilities that can inform decisions about program priorities, resource allocation, and community actions. This report marks the third annual NPR, updating and expanding upon findings from the previous two years. The 2014 NPR highlights accomplishments achieved or reported during 2013.
In 2013 the Nation faced a range of incidents that challenged our collective security and resilience and confirmed the need to enhance preparedness across the whole community. Incidents like the Boston Marathon bombings, wildfires, drought, mass shootings, and the ongoing management of several long-term recovery efforts required activating capabilities across the five mission areas outlined in the Goal—Prevention, Protection, Mitigation, Response and Recovery.
Overarching Findings on National Issues
In addition to key findings for each of the 31 core capabilities, the 2014 NPR outlines cross-cutting findings that involve multiple mission areas:
- Embracing a new approach to disaster recovery: Major events, such as Hurricane Sandy and the severe 2012-2013 drought, have served as catalysts for change in national preparedness programs, drawing clearer links between post-disaster recovery and pre-disaster mitigation activities.
- Launching major national initiatives: The Federal Government has initiated several national-level policy and planning initiatives that bring unity of effort to preparedness areas, including critical infrastructure security and resilience, cybersecurity, recovery capabilities, and climate change.
- Managing resource uncertainties: Budget uncertainties have created preparedness challenges at state and local levels of government, resulting in increased ingenuity, emphasis on preparedness innovations, and whole community engagement.
- Partnering with tribal nations: Tribal partners are now more systematically integrated into preparedness activities. However, opportunities remain for Federal agencies and tribal nations to increase engagement and expand training opportunities on relevant policies.
The Nation Continues to Make Progress
The 2014 NPR identifies five core capabilities that require ongoing sustainment to meet expected future needs: Interdiction and Disruption, On-scene Security and Protection, Operational Communications, Public and Private Services and Resources, and Public Health and Medical Services.
Opportunities for Improvement
The 2014 NPR identifies the following core capabilities as national areas for improvement: Cybersecurity, Health and Social Services, Housing, Infrastructure Systems, and Long-term Vulnerability Reduction. Cybersecurity, Health and Social Services, and Housing have been areas for improvement for three consecutive years. Several ongoing initiatives, including implementation of Executive Order 13636 on Improving Critical Infrastructure Cybersecurity, Presidential Policy Directive 21 on Critical Infrastructure Security and Resilience, and the Hurricane Sandy Rebuilding Strategy will enable continued progress in these areas.
Key Factors for Future Progress
The 2014 NPR represents the third opportunity for the Nation to reflect on progress in strengthening national preparedness and to identify where preparedness gaps remain. Looking across all 31 core capabilities outlined in the Goal, the NPR provides a national perspective on critical preparedness trends for whole community partners to use to inform program priorities, to allocate resources, and to communicate with stakeholders about issues of shared concern.
Ebola is the big news story of the moment; all the media are covering it and they seem to be competing with each other to raise the fear level. ‘Out of control’, ‘deadly’, ‘terror’: all those words appear in even the more restrained media publications.
Many of us in the business continuity field will have had someone ask what planning we should do; hopefully this article will help you with that and also give you ammunition to combat some of the media excesses!
The ‘not invented here’ syndrome was something that forward-looking corporations set out to beat about 20 years ago. If a different product or service could be more cost-effectively bought in rather than being designed and manufactured in-house, then it was bought in. The challenge was to overcome misplaced pride and internal turf wars, where being asked to give up control over development could be construed as an attack on credibility, status or both. Some departments resisted by refusing to work with something that was ‘not invented here’. Now, Disaster Recovery as a Service (DRaaS) may be plagued with a similar issue, where companies cannot look outside what they already have – but for a different reason.
Master data management (MDM) solutions are used for much more than customer and product master data. According to a recent Information Difference report, MDM is also used for asset, location, supplier, finance and personnel data.
“Indeed it has become quite common for MDM efforts to begin in a relatively low-key area such as maintaining relatively stable reference data (country codes, etc.) as a toe in the water before broadening the initiative out to deal with more volatile master data domains,” the “MDM Landscape Q2 2014” report states. You can read the full report on The Information Difference’s site.
That doesn’t mean it’s a good idea to broadly apply MDM technology. As both Forrester and Gartner analysts as well as several IT Business Edge readers have pointed out, MDM is often misunderstood and misapplied within organizations.
Hawaii Residents and Visitors Urged to Follow Direction of Local Officials
WASHINGTON – The Federal Emergency Management Agency (FEMA), through its National Watch Center in Washington and its Pacific Area Office in Oahu, is continuing to monitor Hurricanes Iselle and Julio in the Pacific Ocean. FEMA is in close contact with emergency management partners in Hawaii.
According to the National Weather Service, Hurricane Iselle is about 900 miles east-southeast of Honolulu with sustained winds of 85 MPH, and Hurricane Julio is about 1,650 miles east of Hilo, Hawaii, with sustained winds of 75 MPH. Tropical storm conditions are possible on the Big Island of Hawaii on Thursday. These adverse weather conditions may spread to Maui County and Oahu Thursday night or Friday. A tropical storm warning is in effect for Hawaii County, and tropical storm watches are in effect for Maui County and Oahu.
“I urge residents and visitors to follow the direction of state and local officials,” FEMA Administrator Craig Fugate said. “Be prepared and stay tuned to local media – weather conditions can change quickly as these storms approach.”
When disasters occur, the first responders are local emergency and public works personnel, volunteers, humanitarian organizations and numerous private interest groups who provide emergency assistance required to protect the public's health and safety and to meet immediate human needs.
Although there have been no requests for federal disaster assistance at this time, FEMA has personnel on the ground who are positioned in the Pacific Area Office year round. An Incident Management Assistance Team has also been deployed to Hawaii to coordinate with state and local officials should support be requested or needed.
At all times, FEMA maintains commodities, including millions of liters of water, millions of meals and hundreds of thousands of blankets, strategically located at distribution centers throughout the United States and its territories.
Safety and Preparedness Tips
- Residents and visitors in potentially affected areas should be familiar with evacuation routes, have a communications plan, keep a battery-powered radio handy and have a plan for their pets.
- Storm surge can be the greatest threat to life and property from a tropical storm or hurricane. It poses a significant threat for drowning and can occur before, during, or after the center of a storm passes through an area. Storm surge can sometimes cut off evacuation routes, so do not delay leaving if an evacuation is ordered for your area.
- Driving through a flooded area can be extremely hazardous and almost half of all flash flood deaths happen in vehicles. When in your car, look out for flooding in low lying areas, at bridges and at highway dips. As little as six inches of water may cause you to lose control of your vehicle.
- If you encounter flood waters, remember – turn around, don’t drown.
- Get to know the terms that are used to identify severe weather and discuss with your family what to do if a watch or warning is issued.
For a Tropical Storm:
- A Tropical Storm Watch is issued when a tropical cyclone containing winds of 39 MPH or higher poses a possible threat, generally within 48 hours.
- A Tropical Storm Warning is issued when sustained winds of 39 MPH or higher associated with a tropical cyclone are expected in 36 hours or less.
For Flash Flooding:
- A Flash Flood Watch is issued when conditions are favorable for flash flooding.
- A Flash Flood Warning is issued when flash flooding is imminent or occurring.
- A Flash Flood Emergency is issued when a severe threat to human life and catastrophic damage from a flash flood are imminent or ongoing.
More safety tips on hurricanes and tropical storms can be found at www.ready.gov/hurricanes.
Everyone likes to get new stuff. Heck, that’s what Christmas is all about, and why it has emerged as a primary driver of the world economy.
In the data center, new stuff comes in the form of hardware and/or software, which lately have formed the underpinnings of entirely new data architectures. But while capital spending decisions almost always focus on improving performance, reducing costs or both, how successful has the IT industry been in achieving these goals over the years?
According to infrastructure consulting firm Bigstep, the answer is not very. The group recently released an admittedly controversial study that claims most organizations would see a 60 percent performance boost by running their data centers on bare metal infrastructure. Using common benchmarks like Linpack, SysBench and TPC-DC, the group contends that multiple layers of hardware and software actually hamper system performance and diminish the investment that enterprises make in raw server, storage and network resources. Even such basic choices as the operating system and dual-core vs. single-core processing can affect performance by as much as 20 percent, and then the problem is compounded through advanced techniques like hyperthreading and shared memory access.
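The layering penalty Bigstep describes can be demonstrated in miniature. The toy Python sketch below is an illustrative analogy, not Bigstep’s benchmark methodology: it wraps a computation in pass-through layers, much as hypervisors and abstraction tiers wrap raw hardware, and times both paths with the standard library’s `timeit`.

```python
import timeit

def work(x):
    # The "bare metal" computation: a direct function call.
    return x * x + 1

def layer(f):
    # A pass-through wrapper, standing in for one tier of abstraction.
    def wrapper(x):
        return f(x)
    return wrapper

# Stack three layers on top of the same computation.
layered = layer(layer(layer(work)))

direct_t = timeit.timeit(lambda: work(42), number=100_000)
layered_t = timeit.timeit(lambda: layered(42), number=100_000)

# Both paths return the same answer; only the call overhead differs.
print(f"direct:  {direct_t:.4f}s")
print(f"layered: {layered_t:.4f}s")
```

On a typical interpreter the layered path runs measurably slower while computing exactly the same result – the same qualitative effect, at vastly smaller scale, that Bigstep attributes to virtualization and software stacks sitting between workloads and raw server resources.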
(MCT) — While Anniston, Ala., schools have not been the scene of the sort of firearm violence that has struck other schools around the country in recent years, district officials and others across the state are taking steps to permit a safer outcome if such a situation develops.
The tactic: To let all first responders know the layout of the school before an emergency arises.
During the summer, detailed 3-D virtual maps were created revealing the nooks and crannies inside each of Anniston City Schools’ seven school buildings, at a cost to the district of between $2,000 and $3,000 per school, said Superintendent Darren Douthitt.
Most of us can’t imagine conducting day-to-day business without email. Our dependence has only increased because of smart devices that keep us connected to our email 24/7.
How would your business operate if suddenly, unexpectedly, no one had access to their email?
More importantly, what would happen if – while that email outage was taking place – all incoming emails were irretrievably lost? Would you miss business opportunities? Could your lack of access make prospects, customers and vendors feel like you are ignoring them, don’t care about their needs (or worse)? Do you fully understand all regulatory implications that may apply to missed communications?
Corporations spend a lot of time and money to ensure their employee- and customer-facing technologies are compliant with all local and regional data privacy laws. However, this task is made challenging by the patchwork of data privacy legislation around the world, with countries ranging from those with no restrictions on the use of personal data to those with highly restrictive frameworks. To help our clients address these challenges, Forrester developed a research and planning tool called the Data Privacy Heat Map (try the demo version here). Originally published in 2010, the tool leverages in-depth analyses of the privacy-related laws and cultures of 54 countries around the world, helping our clients better strategize their own global privacy and data protection approaches.
The most recent update to the tool, published today, highlights two opposing trends affecting data privacy over the past 12 months:
Companies large and small appear to have been targeted in what is being described as the largest known data breach to date.
As first reported by The New York Times, a Russian crime ring amassed billions of stolen Internet credentials, including 1.2 billion user name and password combinations and more than 500 million email addresses.
The NYT said it had a security expert not affiliated with Hold Security, the firm that discovered the cache, analyze the database of stolen credentials and confirm its authenticity.
One of the challenges with Big Data is how to find value hidden in all that volume. Experts generally recommend approaching it as an explorer rather than simply querying the data to find specific answers.
As an astrophysicist, Dr. Kirk Borne knows a thing or two about probing the unknown. Borne, professor of Astrophysics and Computational Science at George Mason University, began tinkering with large data sets because of science, but soon became an advocate for Big Data. Now, in addition to his work as a professor and astrophysicist, Borne is a transdisciplinary data scientist.
According to Borne’s May post for the MapR blog, he has identified four major types of Big Data discoveries, a process he terms “data-to-discovery”:
The changes to storage technology have been well-documented over the years. From tape to disk to solid state, not to mention DAS, SAN/NAS and StaaS, the only constant in the storage industry has been change.
Lately, however, these technological changes are starting to coalesce in the data center to produce not only bigger and better storage, but entirely new architectures designed to address increasingly specialized workloads. This has given enterprises unprecedented ability to craft their own storage environments, rather than simply upgrade their legacy vendor solutions.
Naturally, this is producing a fair amount of turmoil in the traditionally staid storage industry. As Redmond Magazine’s Jeffrey Schwartz notes, established firms like EMC, HP and NetApp are under increasing pressure from start-ups like Nasuni and Pure Storage that are turning to advanced Flash and memory solutions aimed specifically at mobile and cloud-based data loads. Even companies like Microsoft are moving into storage hardware as they ramp up their cloud offerings in the race to beat Amazon to the highly lucrative enterprise storage market.
Amid an already tense global state of affairs, cybersecurity and critical infrastructure have become areas of increasing concern. In July, The Economist ran a special section on cybersecurity, and one of the stories focused on critical infrastructure attacks. One passage explains perhaps the key issue driving the underlying threat to the world’s critical infrastructure, and it involves the way in which supervisory control and data acquisition (SCADA) systems, which control network operations, have evolved:
Many of these were designed to work in obscurity on closed networks, so have only lightweight security defences. But utilities and other companies have been hooking them up to the web in order to improve efficiency. This has made them visible to search engines such as SHODAN, which trawls the internet looking for devices that have been connected to it. SHODAN was designed for security researchers, but a malicious hacker could use it to find a target.
Security is, I believe, a major contributor to organisational resilience. It is about protecting assets from loss and damage, risk analysis and management, and alignment with organisational needs. It’s not about criminals and criminality. If you want to be adept and capable as a security professional, knowing about what motivates criminals is not actually of much practical utility. Why should you be interested in ‘rational choice’ when what you need to know about are the methods required to protect your assets? Why study the nuances of criminal investigation when you are looking into the security breach that has already occurred? Obviously, if you want to inform methods of limiting future damage then that is useful, but for me not the driving focus of security.
The functions of security have moved on rapidly from alignment with policing activities to a much wider embedded and linked function. The security professional should be as comfortable blending their functions with those of crisis and continuity management as they are in conducting risk analyses. The security professional should be less concerned with crime rates and more with the ability to identify and manage their own vulnerabilities to all types of threat, some malicious and criminal, but many not. The growth in security these days is of course around IT, information and cyber; and there are adversaries out there who are deeply criminal. They no doubt hit all the spots for criminological theories; but it doesn’t matter – the cyber security professional’s role is to limit the penetration and damage whether the adversary is a kid in his bedroom or a nation-state. Or even the insider who does not understand the damage that their IT use can cause.
Traditional data backup happens once every so often – once an hour, once a day, once a week, for example, depending on the recovery requirements associated with the data. It’s typically the recovery point objective or RPO that determines the frequency of the backup. If you cannot afford to lose more than the last 30 minutes’ worth of data, then your RPO will be 30 minutes and backups will happen at least every half an hour. Continuous replication on the other hand changes the model by backing up your data every time you make a change. But what does that do to RPO, disk space requirements and network capacity (assuming you’re backing up to storage in a different physical location)?
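As a rough illustration of that trade-off, the arithmetic can be sketched in a few lines. The change rate, the 30-minute RPO and the replication lag below are invented figures for this sketch, not measurements from any real system:

```python
# Illustrative comparison of scheduled backup vs. continuous replication.
# change_rate, the 30-minute RPO and the replication lag are all assumed
# figures for this sketch.

def worst_case_loss_mb(window_minutes: float, change_rate_mb_per_min: float) -> float:
    """Data at risk: everything changed since the last backup (or still in flight)."""
    return window_minutes * change_rate_mb_per_min

rpo_minutes = 30     # cannot afford to lose more than 30 minutes of data
change_rate = 10.0   # MB of changes produced per minute

# Scheduled backups: run at least once per RPO window, so up to one full
# window of changes is at risk between runs.
print(worst_case_loss_mb(rpo_minutes, change_rate))  # 300.0 MB at risk

# Continuous replication: each change ships as it happens, so the data at
# risk shrinks to whatever is still in flight (the replication lag), at the
# cost of constant network traffic proportional to the change rate.
replication_lag_minutes = 0.5
print(worst_case_loss_mb(replication_lag_minutes, change_rate))  # 5.0 MB at risk
```

The network question falls out of the same arithmetic: continuous replication must sustain roughly the change rate (here 10 MB per minute) to the remote site, whereas scheduled backups can batch, deduplicate and compress that traffic between runs.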
As the health of two Ebola-stricken American missionaries deteriorated late last month, an international relief organization backing them hunted for a medical miracle. The clock was ticking, and a sobering fact remained: Most people battling the disease do not survive.
Leaders at Samaritan’s Purse, a North Carolina-based Christian humanitarian group, asked officials at the Centers for Disease Control and Prevention whether any treatment existed — tested or untested — that might help save the lives of Kent Brantly and Nancy Writebol, both of whom had contracted Ebola while helping patients in Liberia.
The CDC put the group in touch with National Institutes of Health workers in West Africa, where an employee knew about promising research the U.S. government had funded on a serum that had been tested only in monkeys.
What should a business continuity plan contain? It's important to keep it concise and manageable, but I'm sure we all have our own ideas as to what the 'must have' items are. Charlie Maclean-Bristol of PlanB Consulting takes us through what he thinks the top ten features of a good plan are:
1. Scope. On many of the plans I see it is not clear what the scope of the plan is. The name of the department may be on the front of the plan but it is not always obvious whether this is the whole of the department, which may cover many sites, or just the department based in one location. It should also be clear within strategic and tactical plans what part of the organisation the plan covers. Or does it cover the whole of the organisation? Where large organisations have several entities and subsidiaries it should be clear whether the tactical and strategic plans cover these.
(MCT) — With dozens of local doctors and medical staff among the dead, U.S. and foreign experts are preparing to flood into West Africa to help fight the deadliest Ebola outbreak on record.
Although two Americans, Dr. Kent Brantly and health worker Nancy Writebol, have contracted the disease, health experts say foreigners taking careful precautions should not be at serious risk.
But more than 60 local medical staff, about 8 percent of the fatalities, have died in Sierra Leone, Liberia and Guinea — poor countries with weak, overloaded health-care systems that are ill-equipped to handle the outbreak.
Ebola expert G. Richards Olds, dean of medicine at UC Riverside, compared local health-care workers there to doctors who donned beaked masks, leather boots and long, waxed gowns to fight the plague in medieval Europe.
The harmful toxin found in Lake Erie that caused a water crisis in Ohio's fourth-largest city this weekend has raised concerns nationally. That's because no states — including Texas — require testing for such toxins, which are caused by algal blooms. And there are no federal or state standards for acceptable levels of the toxins, even though they can be lethal.
In Toledo, Ohio, where voluntary tests at a water treatment plant found elevated levels of the toxin microcystin, which is produced by blue-green algae, the city is urging residents and the several hundred thousand people served by its water utility not to drink tap water, even if they boil it. Exposure to high levels of microcystin can cause abdominal pain, vomiting and diarrhea, liver inflammation, pneumonia and other symptoms, some of which are life-threatening. Restaurants have closed and there are shortages of bottled water as far as 100 miles away.
In Texas, which has battled blue-green algae problems at several of its lakes, Terry Clawson, the spokesman for the state's Commission on Environmental Quality, said surface water data has "not demonstrated levels of algal toxins that show any cause for alarm."
(MCT) — Hotshot Hollywood directors make movies about machines that can predict the future and software programs that can peer ahead in time. Silver screen villains plot to use the predictive power for evil; heroes fight for good.
The drama makes for great movies, but it's not all science fiction: the Tennessee Highway Patrol is already using that kind of technology every day.
It's called predictive analytic software. And it could be the start of a whole new generation of traffic safety, a new tool as revolutionary as seat belts or radar.
"It's the coming thing," said Tennessee Highway Patrol Colonel Tracy Trott.
Tennessee Highway Patrol analysts plug all sorts of factors into the software — like weather patterns, special events, home football schedules, festivals and historic crash data — and the program spits out predictions of when and where serious or fatal traffic accidents are most likely to happen.
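A minimal sketch of the idea behind this kind of software: score each location-and-hour cell by its historical crash frequency, then scale by contextual factors such as weather or special events. The locations, counts and weights below are invented for illustration and are not the Highway Patrol's actual model:

```python
# Toy predictive-analytics sketch: rank (location, hour) cells by historical
# crash counts, scaled by contextual multipliers. All data here is invented.
from collections import Counter

# Each entry is one historical crash at (location, hour-of-day).
historic_crashes = [
    ("I-40 mm 368", 17), ("I-40 mm 368", 17), ("I-40 mm 368", 8),
    ("US-70 mm 12", 17), ("I-40 mm 368", 17),
]
counts = Counter(historic_crashes)

def risk_score(location, hour, weather_factor=1.0, event_factor=1.0):
    """Base rate from history, scaled by contextual multipliers."""
    return counts[(location, hour)] * weather_factor * event_factor

# Rank the riskiest cells so patrols can be positioned ahead of time.
ranked = sorted(counts, key=lambda cell: risk_score(*cell), reverse=True)
print(ranked[0])  # ('I-40 mm 368', 17) -- the most crash-prone location/hour
```

Real systems use far richer models, but the principle is the same: the output is a ranked forecast of where and when serious accidents are most likely, which is what lets commanders position troopers before the crash rather than after.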
ABUJA, Nigeria — In an ominous warning as fatalities mounted in West Africa from the worst known outbreak of the Ebola virus, the head of the World Health Organization said on Friday that the disease was moving faster than efforts to curb it, with potentially catastrophic consequences, including a “high risk” that it will spread.
The assessment was among the most dire since the outbreak was identified in March. The outbreak has been blamed for the deaths of 729 people, according to W.H.O. figures, and has left over 1,300 people with confirmed or suspected infections.
Dr. Margaret Chan, the W.H.O. director general, was speaking as she met with the leaders of the three most affected countries — Guinea, Liberia and Sierra Leone — in Conakry, the Guinean capital, for the introduction of a $100 million plan to deploy hundreds more medical professionals in support of overstretched regional and international health workers.
“This meeting must mark a turning point in the outbreak response,” Dr. Chan said, according to a W.H.O. transcript of her remarks. “If the situation continues to deteriorate, the consequences can be catastrophic in terms of lost lives but also severe socioeconomic disruption and a high risk of spread to other countries.”
Summer vacation: Isn’t it great? Except it is not what it used to be. We are either expected by our employers and clients to somehow remain accessible and productive 24/7 while we’re “off,” or we put that pressure on ourselves. Or we’re in the middle of a job search and don’t want to lose precious momentum or appear not to be serious.
Taking needed vacation time in order to relax and recharge can be especially difficult for those working in IT. A Computerworld piece that is filled with seriously depressing anecdotes about IT folks working through vacation cites a 2014 TEKsystems survey that “found that 47% of senior IT professionals are expected to be available 24x7 while on vacation (up from 44% in 2013), compared to 18% of entry- to mid-level IT professionals (a decrease from 20% in 2013).”
Here are ideas from IT Business Edge and elsewhere for how to manage the expectations, stress, extra duties and communication challenges that your wonderful vacation now brings.
The debate has been going on for a long time. Is it Business Continuity for business processes and Disaster Recovery for IT? Is Business Continuity just the current term for any preparedness planning going on in the organization? Does it depend on who is the driving force behind the need to create a plan? Was it IT, a business line, Audit or Risk Management that got it started? One thing for sure is that in most companies the people on either side of the fence don’t often talk to each other. And it has been that way for years.
When I did an internet search on the topic of Business Continuity vs Disaster Recovery, I found posts going back many years. Just last year (August 27, 2013) Jim Mitchell posted a blog that said, “Unless and until IT and ‘the business’ work together as equal partners in the development of comprehensive Business Continuity, we haven’t moved into a truly ‘post-DR’ world. As long as the two extremes see themselves as adversaries, they are unlikely to reach true Business Continuity objectives. As long as they fight separately over the same budget dollars (and we all know who usually wins that battle), they will never truly be partners in organization recoverability.” A year later this is still true.
The Director-General of WHO and presidents of west African nations impacted by the Ebola virus disease outbreak will meet Friday in Guinea to launch a new joint US$100m response plan as part of an intensified international, regional and national campaign to bring the outbreak under control.
The scale of the ongoing outbreak is unprecedented, with approximately 1,323 confirmed and suspected cases reported, and 729 deaths in Guinea, Liberia and Sierra Leone since March 2014.
“The scale of the Ebola outbreak, and the persistent threat it poses, requires WHO and Guinea, Liberia and Sierra Leone to take the response to a new level, and this will require increased resources, in-country medical expertise, regional preparedness and coordination,” says Dr Chan. “The countries have identified what they need, and WHO is reaching out to the international community to drive the response plan forward.”
When it comes to business continuity and disaster recovery planning, hope is not a strategy. IT departments, however, are too often surprised by the inevitable when a disaster they could have seen coming changes everything. Even companies that have a good disaster recovery or even disaster recovery as a service (DRaaS) plan in place aren't immune to significant business disruptions; they may think their company is fully protected, but Logicalis US warns that having a disaster recovery plan alone may be putting the proverbial cart before the horse.
The horse, in this case, is developing a solid business continuity strategy first.
"Disaster recovery – even DR as a Service – is technology based. The technology will save whatever data you tell it to, but the success of your business depends as much – if not more – on the effectiveness and efficiencies of your processes and procedures," says David Kinlaw, Practice Manager, Data Protection and Availability Services, Logicalis US. "Critically reviewing, evaluating and improving those processes and procedures is therefore essential to ensuring the success of your business."
That's because the true value of business continuity planning is not limited to technology. Done correctly, the exercise of developing and implementing a thorough business continuity plan opens ongoing conversations between IT and business units, empowering them as a team to face whatever challenges lie ahead. Combine a well-implemented disaster recovery or DRaaS plan with a strong business continuity strategy and the organization will have a winning combination for long-term sustainability.
For a while, the general assumption was that Ethernet would supplant all things Fibre Channel in the data center. But the rise of cloud computing and virtualization has created demand for more storage bandwidth than ever.
Rising to the challenge, Cisco this week made additions to its storage area network (SAN) lineup that not only provide 16G of bandwidth, but are also much simpler to manage by both automating the provisioning process and providing tools for detecting network congestion and recovery logic that helps ensure application performance requirements are continuously met.
Nitin Garg, senior manager for product management in the data center switching group at Cisco, says it is now much simpler to provision the Cisco MDS 9148S 16G Fabric Switch, the Cisco MDS 9706 Storage Director, and the Cisco MDS 9700 FCoE Module for multi-protocol networking fabrics.
In its fifth annual board of directors survey, “Concerns About Risks Confronting Boards,” EisnerAmper surveyed directors serving on the boards of more than 250 publicly traded, private, not-for-profit, and private equity-owned companies to find out what is being discussed in American boardrooms and, in turn, what those boards are accomplishing as a result.
According to the report, reputation remains the top concern across a range of industries:
(MCT) — When a major hurricane strikes the Gulf Coast again — as it inevitably will — the federal government will undoubtedly respond in some manner, just as it did after hurricanes Rita and Ike. But the key word in that sentence is "after." The damage will have been done, and coastal residents will bear the brunt of the recovery.
A new study by the National Research Council reinforces that reality. It encourages state and local governments to do all they can now to minimize devastation from hurricanes instead of hoping that Washington will ride to the rescue afterward.
That makes sense. Congress is usually slow to act when disasters strike, and the Federal Emergency Management Agency has a spotty record — even if it has improved in recent years. Responsibility for hurricane risk is scattered among many governmental agencies, the study says, yet collectively they are doing little about protecting coasts before storms strike.
We all rely on USB to interconnect our digital lives, but new research first reported by Wired reveals that there's a fundamental security flaw in the very way that the humble Universal Serial Bus functions, and it could be exploited to wreak havoc on any computer.
Wired reports that security researchers Karsten Nohl and Jakob Lell have reverse engineered the firmware that controls the basic communication functions of USB. Not only that, they've also written a piece of malware, called BadUSB, that can "be installed on a USB device to completely take over a PC, invisibly alter files installed from the memory stick, or even redirect the user's internet traffic."
Embedded within USB devices—from thumb drives through keyboards to smartphones—is a controller chip that lets the device and the computer it's connected to send information back and forth. It's this that Nohl and Lell have targeted, which means their malware doesn't sit in flash memory, but rather is hidden away in firmware, undeletable by all but the most technically knowledgeable. Lell explained to Wired:
As the deadliest outbreak of Ebola in recorded history continues to devastate Western Africa, the American Red Cross is supporting efforts through both financial and staffing support.
While the Sierra Leone Red Cross is taking the lead in promoting awareness through social mobilization campaigns, the American Red Cross, along with the global Red Cross network, is helping amplify efforts and strengthen capacity. An American Red Cross specialist has been deployed to provide telecommunications support and internet to the health team in country, and follows another IT specialist that had been in Sierra Leone for the past month.
The American Red Cross has also assisted with remote mapping and information management in the region and has contributed $100,000 to strengthen the capacities of both the Liberia Red Cross and Guinea Red Cross. These funds will help manage the Ebola outbreak response and increase public awareness of the virus.
Red Cross volunteers in the region are working to assist with Ebola awareness efforts. In total, more than 1,200 volunteers have been mobilized in Sierra Leone, Liberia and Guinea to date.
Since March 2014, some 1,200 cases have been reported and more than 670 deaths have been linked to the virus in Sierra Leone, Liberia, Guinea and most recently, Nigeria.
Currently outbreaks are centered in the cities of Kailahun and Kenema in Sierra Leone, and the counties of Lofa and Montserrado in Liberia.
Recognizing the severity of the issue, Liberian President Ellen Johnson Sirleaf has announced the closure of most of Liberia’s borders, with stringent medical checks being stepped up at airports and major trade routes. The government has also banned public gatherings of any kind, including events and demonstrations.
Difficulties remain in identifying cases, tracing contacts, and raising public awareness about the disease and how to reduce the risk of transmission. These difficulties, including widespread misconception, resistance, denial and occasional hostility, are considerably complicating the humanitarian response to containing the outbreak.
For more information on the Ebola outbreak and response, visit http://www.ifrc.org.
One of the more frustrating things about IT is that in the wake of the consumerization of IT, no matter how hard internal IT departments try, they can’t wean end users off shadow IT services. Much of that has to do with the user experience those services provide. Designed for consumers, they tend to be a lot simpler to use than applications delivered by the enterprise IT organization. The simple fact is that in order for internal IT organizations to win that battle, they have to deliver an application that provides a much better customer experience than the consumer application they are trying to replace.
With that goal in mind, EMC Syncplicity has delivered an upgrade to its file transfer and synchronization software for Apple iOS devices that not only makes it easier to surface the most relevant and pertinent content, but also predicts which content an end user is likely to want to access next.
By Deborah Ritchie
A report from the Information Commissioner’s Office sets out how the law applies when big data uses personal information. It details which aspects of the law organisations need to particularly consider. Big data is a way of analysing data that typically uses massive datasets, brings together data from different sources and can analyse the data in real time. It often uses personal data, be that looking at broad trends in aggregated sets of data or creating detailed profiles in relation to individuals, for example lending or insurance decisions.
Some commentators have argued that existing data protection law can’t keep up with the rise of big data and its new and innovative approaches to personal data. That is not the view of the ICO, which stresses the basic data protection principles already established in UK and EU law are flexible enough to cover big data. “Applying those principles involves asking all the questions that anyone undertaking big data ought to be asking,” the report reads. “Big data is not a game that is played by different rules.”
(MCT) — During severe weather, Carla Kerr, her daughter and her mother bunker down in their 10-foot-long bathroom on the first floor. With blankets, a flashlight and a weather radio, it’s a bit of a tight fit.
As residents of Guinotte Manor, a public housing complex in Kansas City, they don’t have basements where they can take cover from tornadoes.
At the end of next summer, Kerr will have a safer solution across the street.
The Garrison Community Center will start construction on a safe room this summer, said Bob Lawler, project manager of Kansas City Parks & Recreation Department. The safe room will be able to withstand the highest-rated tornadoes while holding 1,300 occupants, close to the estimated number of residents within a half-mile radius.
A survey into cyber security in the retail sector suggests that a number of organisations don’t realise that the goal of PCI compliance is the protection of cardholder data alone, not of the business as a whole.
Conducted by Dimensional and Atomik Research and sponsored by Tripwire, the survey evaluated the attitudes of 407 retail and financial services organisations in the US and the UK on a variety of cyber security topics.
Despite industry data to the contrary, Tripwire’s retail cybersecurity survey indicates that organisations that rely on PCI compliance as the core of their information security program were twice as confident that they could detect rogue applications, such as those used to exfiltrate data. These respondents were also significantly more confident that they would be able to detect misconfigured or unauthorised network shares, which was a key attack vector exploited in the Target data breach.
Ensuring employee safety by rapidly disseminating the right information, and keeping communication lines open in a time of crisis are both priorities for businesses. Traditional solutions for this have relied on the manual ‘call tree’ or ‘phone tree’. Key employees are contacted first to inform them of whatever situation or crisis has arisen, with remaining staff to be contacted as soon as possible afterwards. However, even for smaller organisations of 100 people for example, the manual call tree rapidly demonstrates its limitations. For larger enterprises, there is no doubt – a better solution is required.
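A back-of-the-envelope model shows why the manual tree breaks down. Assuming each call takes about two minutes and each person phones three others (both figures are assumptions for this sketch), total elapsed time grows with the depth of the tree, and a single unanswered call orphans an entire branch:

```python
# Rough model of manual call-tree timing. The branching factor and
# minutes-per-call are assumed figures, not survey data.

def tree_levels(staff: int, branching: int = 3) -> int:
    """Levels of calling needed until everyone has been reached."""
    levels, reached = 0, 1  # level 0: the one person who starts the tree
    while reached < staff:
        levels += 1
        reached += branching ** levels  # each level multiplies the callers
    return levels

def elapsed_minutes(staff: int, branching: int = 3, minutes_per_call: int = 2) -> int:
    # Each caller works through their calls sequentially, so every level
    # costs branching * minutes_per_call of wall-clock time.
    return tree_levels(staff, branching) * branching * minutes_per_call

print(elapsed_minutes(100))   # 24 -- ~100 staff: four levels, 24 minutes
print(elapsed_minutes(5000))  # 48 -- a larger enterprise: eight levels, 48 minutes
```

An automated notification system sidesteps both problems: it broadcasts to everyone simultaneously over multiple channels, and it logs who has and has not acknowledged, so no branch goes silently unreached.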
MONTGOMERY, Ala. – Community Emergency Response Teams prepare for the worst, then when disaster strikes, they help themselves, their families, their neighborhoods and their communities.
Begun in Los Angeles in 1985, the CERT program consists of specially trained volunteers who are called into action during and immediately following major disasters before first responders can reach the affected areas. They work closely with fire and emergency management departments in their communities.
More than 2,200 CERT programs are available in the United States. In Alabama, 10 counties offer CERT training and maintain teams. During a disaster, Alabama CERT members may self-deploy in their neighborhoods, be mobilized by a sheriff’s office or report to a pre-determined location.
“CERT groups provide immediate assistance to people in their areas and lead spontaneous volunteers before we can get to the area and inform emergency management of what the needs are,” said Art Faulkner, director of Alabama Emergency Management.
Billy Green, Deputy Director of Emergency Management for Tuscaloosa County, had just finished a training class for Hispanic CERT volunteers the week before the tornado outbreak of April 2011 in Alabama.
“We finished on the Saturday before the tornadoes hit,” he said. “These Spanish speakers took exactly what they learned and put it out in the field. The City of Holt has a high Hispanic population, and this team was able to go out there and do search and rescues.”
Holy Spirit Catholic Church set up its own shelter for the Hispanic population, he added. “Those guys were in that shelter helping and making sure everyone was all right.”
This April’s severe weather and flooding caught many Mobile County residents by surprise, said Mike Evans, Deputy Director of Mobile County Emergency Management Agency.
“Mobile gets the most rainfall of anywhere in the continental United States with 67 inches per year,” he said. “This wasn’t like during hurricane season; getting a lot of rain and thunderstorms is pretty common. But areas that normally flood didn’t, it was urban areas.”
Since the ground was already saturated, the rain had nowhere to go so roads that were low flooded, he said.
“People tried to drive through and we had to get them out,” Evans said.
CERTs distributed commodities and one team knocked on doors asking who was going to leave the area and who was going to stay, he said. After the storm, his teams notified people who left the area of the status of their property.
CERTs can also work with crowd and traffic control, work at water stations at large events, help community members prepare for emergencies, and assist with fire suppression and medical operations as well as search and rescue operations.
Initially, CERT members take training classes that cover components of disaster activities, including disaster preparedness, fire suppression, medical operations, search and rescue and disaster psychology and team organization. Additional training occurs twice a year with mock disasters. Refresher courses are also held. The Federal Emergency Management Agency supports CERT by conducting or sponsoring train-the-trainer and program manager courses for members of the fire, medical and emergency management community, who then train individual CERTs.
CERTs are organized in the Alabama counties of Dale, DeKalb, Shelby, Morgan, Tallapoosa, Jefferson, Colbert, Calhoun, Russell and Coffee.
To join an existing CERT program in your community, go online to www.fema.gov/community-emergency-response-teams. Click on the “find nearby CERT programs” link and enter your zip code. If there is a team near you, you will see the name and phone number of a contact person as well as pertinent information about the local program.
That site can also provide information on how to build and train your own community CERT, the curriculum for training members as well as how to register the program with FEMA.
Aside from providing a vital community service, CERT members receive professionally recognized training and continue to increase their skills.
“CERTs complement and enhance first-response capabilities by ensuring safety of themselves and their families, working outward to the neighborhood and beyond until first responders arrive,” said FEMA’s Federal Coordinating Officer Albie Lewis. “They are one of the many volunteer organizations that we rely on during a disaster.”
The industry is so focused right now on Big Data and the Internet of Things that it’s hard to write about anything else. But it’s important to remember that some organizations are still struggling with more basic data problems.
Government Technology recently published a contributed piece about Lodi, California, a town of about 60,000 people and a $350 million wine industry.
Jay Mishra, VP of development at Astera Software, wrote the piece, and it’s pretty obvious he’s promoting the company’s own ETL solution.
More than 175 million records were compromised between April and June due to 237 data breaches, bringing the 2014 total to 375 million records affected and 559 data breaches. That’s a lot of records illegally accessed across fewer than 1,000 breaches worldwide. What this tells me is that even SMBs store a lot more records than they may realize, and a single data breach can result in a huge payoff for a hacker.
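To make the per-breach scale explicit, a quick check using only the figures quoted above (both counts are rounded, so the averages are rough):

```python
# Back-of-the-envelope averages from the breach figures in the text.
q2_records, q2_breaches = 175_000_000, 237
ytd_records, ytd_breaches = 375_000_000, 559

q2_avg = round(q2_records / q2_breaches)
ytd_avg = round(ytd_records / ytd_breaches)
print(q2_avg)   # ~738,000 records exposed per breach in Q2
print(ytd_avg)  # ~671,000 records exposed per breach for 2014 so far
```

Roughly seven hundred thousand records per incident is why a single successful breach can be such a large payoff, even against a modestly sized target.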
These numbers are from SafeNet’s Breach Level Index second quarter report. The report found that retail was the hardest hit industry, with more than 145 million records stolen, or 83 percent of all data records breached, according to a release.
Here is an important finding in the report: Less than 1 percent of all of the data breaches in the second quarter happened to networks that used encryption or strong security platforms to protect the data. So, no, not every security system is foolproof, but you greatly improve your chances of avoiding a breach if you put strong security practices in place. At the same time, it is a little scary to think how many businesses are still lacking when it comes to network security. Good security is vital to any company’s success, and a second report from SafeNet shows why. Once a customer discovers a company has been breached, he or she is not likely returning. As Yahoo Finance reported:
This story was originally published by Data-Smart City Solutions.
Data science and big data are hot topics in today’s business and academic environments. Corporations in a variety of industries are building teams of data scientists. Universities can barely keep up with student demand for courses. The hope is that new analytic methods, combined with more data and computational power, will uncover insights that would otherwise remain undiscovered. In the private sector, these new insights lead to new revenue opportunities and more targeted investments.
This week I’m back at the National Emergency Training Center (NETC) in Emmitsburg, MD. If you’ve read some of my past blogs, you’ll know that this is “home base” for National Community Emergency Response Team (CERT) training. Even though this isn’t my “first rodeo” at the NETC, I still find it an honor whenever I get the opportunity to teach here. There’s so much history in this region of the United States as well as on the campus that houses the NETC. Throughout the week, I hope to share a few of the stories and sites that make this such a special place to come to.
The NETC is home to both the National Fire Academy (NFA) and the Emergency Management Institute (EMI). The 107-acre campus was the original site of Saint Joseph’s Academy, a Catholic school for girls from 1809 until 1973. It was purchased by the U.S. Government in 1979 for use as the NETC.
The National Fire Academy (NFA) is one of two schools in the United States operated by the Federal Emergency Management Agency (FEMA) at the NETC. Operated and governed by the United States Fire Administration (USFA) as part of the U.S. Department of Homeland Security (DHS), the NFA is the country’s pre-eminent federal fire training and education institution. The original purpose of the NFA as detailed in a 1973 report to Congress was to “function as the core of the Nation’s efforts in fire service education—feeding out model programs, curricula, and information.”
Apache’s open source Storm may be the big buzz in Big Data streaming analytics, but according to a recent Forrester report, the commercial vendors are the ones who have “got the goods.”
While Storm is used by a number of high-profile companies, including the Weather Channel, Spotify and Twitter, the research firm writes that it’s nonetheless “a very technical platform that lacks the higher order tools and streaming operators that are provided by the vendor platforms evaluated in this Forrester Wave …”
In its July report on Big Data Streaming Analytics Platforms, the research firm reviewed seven platforms: IBM, Informatica, SAP, Software AG, SQLstream, TIBCO and Vitria. Forrester assessed each on 50 criteria, including business application and platform integration, data sources, development tools, ability to execute, partnerships and pricing.
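The "streaming operators" the report credits to commercial platforms are, at their core, primitives such as windowed aggregation. A toy sketch of one such operator, independent of Storm or any vendor platform:

```python
from collections import deque

class SlidingWindowCount:
    """Count events per key over the last `size` events: a toy version
    of the windowed operators that streaming platforms ship built-in."""

    def __init__(self, size):
        self.size = size
        self.window = deque()
        self.counts = {}

    def push(self, key):
        """Admit one event and return the current per-key counts."""
        self.window.append(key)
        self.counts[key] = self.counts.get(key, 0) + 1
        if len(self.window) > self.size:
            old = self.window.popleft()  # expire the oldest event
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]
        return dict(self.counts)

w = SlidingWindowCount(size=3)
for event in ["error", "ok", "error", "ok"]:
    snapshot = w.push(event)
print(snapshot)  # {'error': 1, 'ok': 2}
```

Production platforms add exactly what Forrester says Storm lacks around this kind of primitive: time-based (not just count-based) windows, fault tolerance, and visual tooling.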
Ask a roomful of IT experts what the future holds for the data center and you’re likely to get a roomful of different opinions. This is doubly true during periods of revolutionary change like we are seeing now.
With outlooks ranging from all-cloud, all-software constructs to massive hyperscale infrastructure tailored toward specific web-facing or Big Data workloads, it seems that the enterprise has a range of options when it comes to building next-generation infrastructure.
Even during times of heady change, however, it is still useful to anticipate the future by analyzing the past. TechNavio, for example, has noticed that rack units have nearly doubled in size over the past decade, leading the firm to conclude that future data centers will feature higher ceilings and taller equipment racks. A key driver in this is the rising cost of property, which is causing designers to build up rather than out. But it also has to do with the need for increased densities and the prevalence of wireless connectivity, which reduces the need for bulky cables.
Malcolm Marshall, UK and global lead of KPMG’s cyber security practice, is warning organizations about the impact that international political disputes can have on the ability to conduct ‘business as usual’. He suggests that, “whilst attention is focused on the search for resolutions in the ‘corridors of power’, businesses need to be ready to defend themselves, as the cyberspace in which they operate increasingly becomes the new battleground.”
Mr. Marshall says: “Businesses are so focused on cyber-attacks by organized crime that it is easy for them to ignore the possibility of being targeted by groups wanting to make a political point, possibly even with backing from a hostile government.
The International Federation of Risk and Insurance Management Associations (IFRIMA) has established a working group to define ‘the core knowledge and competencies that lie at the heart of risk management in whatever context it is practiced’.
FERMA, RIMS, Alarys and the Institute of Risk Management (IRM) are among the organizations taking part.
The aim of the working group is to produce a short document that any risk management body can use as the foundation of a risk management education and/or certification process.
Publication is planned for sometime in 2015.
Employing a third party to store and deliver assets critical to Disaster Recovery or Business Continuity Plans can be invaluable. But offsite storage should never be “dump it and forget it”. Despite everything your storage provider may promise, it’s what you don’t know that could become a problem when you need to retrieve your data backup, ‘go box’ or other essential recovery assets.
First, there’s the hand-off process. If your IT team ships physical backups offsite on a regular basis, that process can become routine. Over time, routine can slip into neglect. Neglect can result in outcomes that may become a problem – or a disaster – when it’s time to recall those backups. And if you are using internal means to store vital assets, understanding the process and its security is just as critical, perhaps even more so.
What is the process? Is it documented? Is it verified with the vendor/provider periodically? Take the time to visit the provider (or even follow their pickup agent) to see exactly how the process works. Ask to see your stored materials, the vendor’s logs and their entry procedures in action.
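One concrete way to keep that hand-off from slipping into neglect is a checksum manifest: hash each asset before it ships and verify the hashes on retrieval. A minimal sketch (the backup file name is made up for the demo):

```python
import hashlib
import os

def sha256_of(path):
    """Checksum a file in chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(paths):
    """Record checksums before the backup set leaves the building."""
    return {p: sha256_of(p) for p in paths}

def verify_manifest(manifest):
    """On retrieval, confirm every asset is present and unchanged."""
    problems = []
    for path, digest in manifest.items():
        if not os.path.exists(path):
            problems.append((path, "missing"))
        elif sha256_of(path) != digest:
            problems.append((path, "corrupted"))
    return problems

# Demo with a throwaway file standing in for a backup volume.
with open("backup_2014q3.img", "wb") as f:
    f.write(b"simulated backup payload")
manifest = build_manifest(["backup_2014q3.img"])
print(verify_manifest(manifest))  # an empty list means the asset came back intact
```

Running the verification on a schedule, not just at recall time, is what turns "dump it and forget it" into a tested process.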
Civic technologist Matt Stempeck makes an unusual proposal in a recent Harvard Business Review post (registration required): Businesses, especially in the tech sector, should consider donating data over dollars.
Stempeck draws the idea from the International Charter on Space and Major Disasters, a 1999 agreement under which satellite operators provide imagery to public agencies in times of crisis. Stempeck points out that, under that charter, DMC International Imaging has provided valuable imagery on:
- Flooding in the UK and Zimbabwe
- The spread of lotus in Algeria
- Fires in India
- Snow in South Korea
Study looks at more than 60 years of coastal water level and local elevation data changes
Annapolis, Maryland, pictured here in 2012, saw the greatest increase in nuisance flooding in a recent NOAA study. (Credit: With permission from Amy McGovern.)
Eight of the top 10 U.S. cities that have seen an increase in so-called “nuisance flooding”--which causes such public inconveniences as frequent road closures, overwhelmed storm drains and compromised infrastructure--are on the East Coast, according to a new NOAA technical report.
This nuisance flooding, caused by rising sea levels, has increased between 300 and 925 percent on all three U.S. coasts since the 1960s.
The report, Sea Level Rise and Nuisance Flood Frequency Changes around the United States, also finds that Annapolis and Baltimore, Maryland, lead the list, with an increase in the number of flood days of more than 920 percent since 1960. Port Isabel, Texas, along the Gulf coast, showed an increase of 547 percent, and nuisance flood days in San Francisco, California, increased 364 percent.
"Achieving resilience requires understanding environmental threats and vulnerabilities to combat issues like sea level rise," says Holly Bamford, Ph.D., NOAA assistant administrator of the National Ocean Service. "The nuisance flood study provides the kind of actionable environmental intelligence that can guide coastal resilience efforts."
“As relative sea level increases, it no longer takes a strong storm or a hurricane to cause flooding,” said William Sweet, Ph.D., oceanographer at NOAA’s Center for Operational Oceanographic Products and Services (CO-OPS) and the report’s lead author. “Flooding now occurs with high tides in many locations due to climate-related sea level rise, land subsidence and the loss of natural barriers. The effects of rising sea levels along most of the continental U.S. coastline are only going to become more noticeable and much more severe in the coming decades, probably more so than any other climate-change related factor.”
The study was conducted by scientists at CO-OPS, who looked at data from 45 NOAA water level gauges with long data records around the country and compared that to reports of number of days of nuisance floods.
Nuisance flooding events have increased around the U.S., but especially off the East Coast. (Credit: NOAA)
The extent of nuisance flooding depends on multiple factors, including topography and land cover. The study defines nuisance flooding as a daily rise in water level above the minor flooding threshold set locally by NOAA’s National Weather Service, and focused on coastal areas at or below these levels that are especially susceptible to flooding.
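Under that definition, a gauge's flood-day count reduces to a threshold count over its record of daily water-level peaks, and the study's percentage increases follow directly. A sketch with made-up numbers:

```python
def nuisance_flood_days(daily_max_levels, threshold):
    """Count days whose peak water level meets or exceeds the local
    minor-flooding threshold set by NOAA's National Weather Service."""
    return sum(1 for level in daily_max_levels if level >= threshold)

def percent_increase(early_days, recent_days):
    """Percent change in flood days between two periods."""
    return (recent_days - early_days) / early_days * 100

# Hypothetical daily peaks in meters above mean higher high water,
# with a hypothetical local threshold of 0.30 m.
early_period = [0.10, 0.32, 0.05, 0.18, 0.29]   # one exceedance
recent_period = [0.31, 0.12, 0.45, 0.33, 0.20]  # three exceedances

early = nuisance_flood_days(early_period, threshold=0.30)
recent = nuisance_flood_days(recent_period, threshold=0.30)
print(early, recent, percent_increase(early, recent))  # 1 3 200.0
```

The study applies this kind of count to multi-decade records from 45 gauges, comparing the 1957-1963 and 2007-2013 averages.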
The report concludes that any acceleration in sea level rise that is predicted to occur this century will further intensify nuisance flooding impacts over time, and will further reduce the time between flood events.
The report provides critical NOAA environmental data that can help coastal communities assess flooding risk, develop ways to mitigate and adapt to the effects of sea level rise, and improve coastal resiliency in the face of climate- and weather-induced changes.
NOAA's mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources.
Top ten U.S. areas with an increase in nuisance flooding*
Meters above mean higher high water mark
Average nuisance flood days, 1957-1963
Average nuisance flood days, 2007-2013
Atlantic City, N.J.
Sandy Hook, N.J.
Port Isabel, Texas
San Francisco, Calif.
* More than one flood on average between 1957-1963, and for nuisance levels above 0.25 meters.
As the enterprise delves ever deeper into virtual and cloud infrastructure, one salient fact is becoming clearer: Attributes like scalability and flexibility are not part and parcel to the technology. They must be developed and integrated into the environment so that they can truly provide the benefits that users expect.
Even at this early stage of the cloud transition, providers are already feeling the blowback that comes from overpromising and under-delivering. According to a recent study by Enterprise Management Associates (EMA), one third of IT executives say they found scaling either up or down to be not as easy as they were led to believe. With data loads ebbing and flowing in a continual and often chaotic fashion, just trying to match loads with available resources is a challenge even with modern automation and orchestration software.
(MCT) — The coast's susceptibility to big storms is clearly no secret, but ever wonder what the shoreline looked like 100 years ago? Or about the rate at which sea level is changing? The U.S. Geological Survey released an interactive website last week that will allow Coastians to easily research coastal changes.
The tool, called the USGS Coastal Change Hazards Portal, shows changing sea levels, retreating shorelines and vulnerability to extreme coastal storms. A link to the site can be found at sunherald.com.
USGS research geologist Robert Thieler said a large driver behind the portal, which became available July 16, was to bring the three research themes together into one easy-to-use website. He said the functionality of the site and the value of the information make it a useful tool for the general public as well as city and county officials.
All organizations with a Business Continuity Management (BCM) or Disaster Recovery (DR) program strive to keep their Business Continuity Plans (BCP) and Disaster Recovery Plans (DRP) in a usable state: one they believe will cover them in any and all situations. They want their plans to cover at least the basic minimum so that they can respond to any situation. But if an organization takes its program – and related plans – seriously, then these plans are never fully complete.
For a plan to be truly viable and robust, it must address as many situations as possible while remaining flexible enough to adapt to any unknown ones. If it's 'carved in stone', it becomes difficult to adapt the plan to the situation (the situation won't adapt to your plan).
This flexibility – and its maintenance, which keeps the plan alive – includes capturing lessons learned from news headlines and then incorporating the new activities or considerations that may not be in the current BCM / DRP plan. These plans aren't quick fixes or static responses to disasters; they are 'living and breathing' documents that need new information to grow and become robust. This is why they should never be considered complete; as the organization grows and changes – and as the circumstances surrounding it change – so too must the BCM and DRP plans.
Many of the attacks launched against today’s brands are as covert as they are debilitating. In today’s connected age, savvy cyber criminals often blitz companies with a flurry of activity across an array of online channels.
To make matters worse, employees who are using the Internet casually or personally create a vulnerability for businesses: workers could click on a phishing link sent to their personal account and unknowingly be exploited by cyber criminals, or they could bring harm to the business via a social media post they thought to be harmless.
And, let’s not forget that brands can also inflict damage on themselves, such as through executive scandals, accounting errors or failing to protect customers and investors. Even though these events may not involve a malevolent, third-party attacker, the resulting fallout can be just as severe as if they fell prey to one.
Another American aid worker has become infected with the Ebola virus in Liberia. The patient has been identified as Nancy Writebol, a worker with the Christian aid group Serving in Mission (SIM), which runs a hospital on the outskirts of the Liberian capital, Monrovia. Writebol had been working as a hygienist responsible for decontaminating those coming and going from the hospital’s Ebola care center.
The diagnosis follows that of Dr. Kent Brantly, a doctor who was working in the same care center for the allied aid group Samaritan’s Purse. Both Americans are said to be in stable but serious condition. Ken Isaacs, a vice president at Samaritan’s Purse, told the AP of the team’s declining morale: “It’s been a shock to everyone on our team to have two of our players get pounded with the disease… Our team is frankly getting tired.”
The two cases fall in the midst of an historically devastating outbreak which has killed 129 people in Liberia, and more than 670 throughout West Africa. The highly contagious disease has no cure or vaccine, so aid workers must prevent its spread altogether. Thus, aid workers and the WHO rely heavily on public education and social mobilization to discourage activities that pose a high risk of transmission. Ebola is spread through direct contact with bodily fluids and organs, so these activities include close contact with infected people and unsafe burial of the dead.
(MCT) — Saying California’s emergency responders need more training to handle major calamities, state and local leaders are pitching plans to build a world-class $56 million training facility in eastern Sacramento County that would pit fire crews against a variety of realistic, pressure-packed simulated disasters.
Emergency crews would be required to douse a real 727 jet as it lies in pieces across a field after a simulated crash at the training site; or make split-second decisions on how to approach a derailed train leaking crude oil; or figure out how to quickly pull survivors out of a partially demolished and unstable building after a terrorist bombing or earthquake.
Initial construction on the Emergency Response Training Center has begun on 53 acres east of Mather Field in Rancho Cordova. The facility, billed as one of the most varied, modern and sophisticated training sites in the country, would be “a total disaster city,” said Sacramento Metropolitan Fire District Chief Kurt Henke, one of the officials behind the push.
Organizations invest hundreds of thousands of dollars in redundant hardware and software for their data centers to ensure high availability (the five 9s, again) and resiliency. These same companies hire IT professionals with very specialized experience and certifications to ensure their capital investments maintain high availability and data security on a day-to-day basis.
High availability within the organization shouldn’t be identified only with a server failure or outage. Serious impacts to your business can be caused by natural disasters (like hurricanes or floods), strikes, major highway closures or terrorist events, all without a single server ever going down! Loss of sales, regulatory fines, contract penalties, employees unable to reach your business, loss of suppliers, branding and reputation can all easily be affected, and for very long periods of time, without directly touching the five 9s.
Yet these very same companies often overlook the fact that these same IT professionals sometimes aren’t trained or experienced in recovering from a disaster and ensuring business continuity. Business continuity planning requires a high-level view of IT, but more importantly, a rock-solid understanding of business processes and the potential consequences of natural disasters, strikes, highway closures, terrorist events (and so on) on the business. FEMA has some very valuable information on Business Impact Analysis (BIA) as well as operational and financial impacts. Ironically, IT doesn’t even make the list!
With the referendum on independence for Scotland not far away, the Business Continuity Institute (BCI) has published a paper to help organisations on both sides of the border consider what the impact of independence could be for them and provide links and resources that will help organisations further understand the debate.
Whether it is a new government being elected or an international treaty signed, political change happens all the time and this will invariably have an impact on organisations. Priorities, budgets and regulations all change, resulting in organisations having to rethink their strategies. If disruption can occur when something relatively routine happens, what could happen when there is an entire change of sovereignty?
The BCI remains neutral in this debate but highlights that, whatever the outcome, there will be change in Scotland that will require organisations to reconsider their business continuity plans and strategies. Some of the key findings of the paper include:
- The decision could significantly influence the conduct of business across the Anglo-Scottish border and organisations may have to adapt to a possible independence settlement.
- Policy divergence in key areas such as taxation, business regulation and labour laws is likely to significantly affect highly regulated professions such as the finance and banking industries. For other sectors, policy divergence may result in an increase in operational and logistics costs.
- A monetary union with the rest of the UK (rUK) that allows an independent Scotland to keep using the pound sterling would hold transaction and borrowing costs to a minimum. However, this may come at the expense of an independent Scottish monetary policy, which could affect some businesses. Meanwhile, adopting a different currency for an independent Scotland carries its own costs, so organisations need financial safeguards in either case.
- Both sides of the border enjoy interconnected infrastructure and this is not likely to change following independence. However, two sets of policies for critical infrastructure are likely to emerge due to inherent strategic differences of both countries and this may influence the operational terrain for businesses operating on both sides of the border.
- Differences in energy production/consumption may influence sourcing arrangements and cause variations in fuel and energy costs for sectors such as manufacturing, logistics and retail.
- An independent Scotland’s admittance to the EU is uncertain and the indefinite timescale of the accession process may introduce short-term uncertainty over the continued access to benefits provided by EU membership.
A change in the Anglo-Scottish relationship will carry far-reaching consequences for organisations and the way they operate across the border. Independence in itself carries potential opportunities, and both Scotland and the rUK remain excellent markets, with the highly interlinked trade between the two countries unlikely to change following independence. However, the change in their relationship will inevitably drive changes in business operations, albeit to differing degrees. This is something that organisations must understand as they wait for the results of the independence referendum, and must plan for either way.
Lyndon Bird, Technical Director at the BCI, commented: “This is a big decision for the people of Scotland that could have far reaching consequences for organisations on both sides of the border and further beyond. Like with any major event however, looking ahead, establishing what the impact could be and making sure plans are in place to deal with these should allow any organisation to operate as normal during abnormal circumstances.”
Whilst the outcome of the vote remains uncertain, maintaining continuity amidst political change is a constant. The Scottish independence referendum is a unique event that may influence the conduct of business on both sides of the border. It is essential that organisations know what is at stake as their capacity to adapt will determine their viability regardless of independence or continued union.
To read the full report, click here.
Based in Caversham, United Kingdom, the Business Continuity Institute (BCI) was established in 1994 to promote the art and science of business continuity worldwide and to assist organizations in preparing for and surviving minor and large-scale man-made and natural disasters. The Institute enables members to obtain guidance and support from their fellow practitioners and offers professional training and certification programmes to disseminate and validate the highest standards of competence and ethics. It has circa 8,000 members in more than 100 countries, who are active in an estimated 3,000 organizations in private, public and third sectors.
For more information go to: www.thebci.org
Given the high profile of Big Data, mobile and data analytics, marketing should be a huge fan of data integration. “More data, more profit,” should be marketing’s motto.
However, if you happen to be working with the most backward marketing division in the world, you might want to send them this recent post, “How Data Integration Tools can Turbocharge Your Marketing.”
The post, by a freelance writer who boasts some programming experience, makes an excellent case for the value of data integration. The post casts a wide net, touting data integration’s ability to:
- “Reduce friction” in sales cycles.
- Develop a “customer-centric approach to data.”
- Improve relationships with customers by maximizing demographic information.
- Combine basic demographic data with sentiment data to help create calls-to-action.
- Break down data silos, so you know more about life-long customers.
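The fourth point above, combining demographic data with sentiment data, comes down to a join on a shared customer key. A minimal sketch with hypothetical records from two siloed systems:

```python
# Hypothetical records from two siloed systems, keyed by customer id.
demographics = {
    "c1": {"age_band": "25-34", "region": "West"},
    "c2": {"age_band": "45-54", "region": "South"},
}
sentiment = {
    "c1": {"last_review_score": 4.5},
    "c3": {"last_review_score": 2.0},  # no demographic record exists
}

def integrate(demo, senti):
    """Inner-join the two silos so each merged profile carries both views."""
    merged = {}
    for cid in demo.keys() & senti.keys():  # customers present in both
        merged[cid] = {**demo[cid], **senti[cid]}
    return merged

profiles = integrate(demographics, sentiment)
print(profiles)  # only c1 appears in both silos
```

Real data integration tools layer key matching, deduplication and scheduling on top of this basic join, but the marketing payoff described in the post comes from exactly this kind of unified profile.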
As compelling an IT opportunity as Big Data can be, it’s not without its challenges—not the least of which is securing all that data.
Looking to make it easier to apply encryption to distributions of Hadoop, Zettaset today announced it is making its encryption software available independently of the Orchestrator platform it created to manage Big Data security.
Zettaset CEO Jim Vogt says the Big Data encryption tools that Zettaset developed are compatible with Apache-based Hadoop distributions available from Hortonworks, Cloudera, MapR, Pivotal, Teradata, and IBM as well as Cassandra and Couchbase NoSQL databases. The tools can also now be accessed via third-party management consoles. That’s critical, says Vogt, because it allows IT organizations to apply encryption to Big Data at rest using the management tools in which they have already invested.
Last winter heavy rain, storm force winds and large waves combined with high spring tides presented England with unprecedented flooding from the sea, rivers, groundwater and surface water.
Thousands of properties were flooded, infrastructure was damaged and tragically, eight people lost their lives. The full impact of these events has not yet been calculated but we do know that 175,000 businesses in England are at risk of flooding.
In 2012 flooding cost affected businesses an average of £60,000, so it is not surprising that flooding is a national priority. In fact the National Risk Register of Civil Emergencies cites coastal flooding as the second highest priority risk after pandemic flu and ahead of catastrophic terrorist attack (taking both likelihood and impact into account).
No news is good news, or so the saying goes. But when equipment malfunctions and services are interrupted, no news can mean intense frustration for customers and end-users. In today’s quality and satisfaction-oriented business world, you might think that major corporations had understood the importance of good crisis communication. And to be fair, many now make efforts to keep customers informed of the causes of business interruption, the solutions being put in place, and the estimated time when normal service will be resumed. That’s what makes behaviour around a recent outage by one of the top IT and cloud service vendors so hard to fathom.
Ted Julian describes five steps that will help ensure that your incident response plans work when they are required.
Even in the most carefully thought-out incident response (IR) plans, there is room for continual improvement. Anyone who has put a response plan into action knows there is a gulf between the theoretical plan and what actually happens, given all the variables and complexities that inevitably occur. Because of this, plans often break down, particularly if they haven't been stress-tested against different real-world scenarios.
Whilst not everything will go according to schedule, a thoroughly tested and validated plan will minimise the impact of an incident which, in turn, leads to faster business recovery times. Indeed, no plan is complete until it has been tested with fire drills and functional exercises that assess its effectiveness and identify potential gaps.
Here we outline some practical steps to improving your incident response plan:
In situations where the fastest possible access to data is required – trading floors, for example – CIOs have traditionally turned to flash-based storage systems. No one disputes the performance advantages of flash over traditional disk or tape storage, but cost has always been a barrier to wider adoption. Today, however, the flash technology that once made sense only when extreme high performance was required is priced to attract the attention of CIOs from a wide range of mid-sized to large companies. To help CIOs determine whether flash might be the right solution for their companies, Logicalis US has outlined six key reasons flash storage makes sense for fast access to mission-critical data in mainstream applications:
1. Boosting performance: Purpose-built flash storage systems can deliver performance boosts in application response times, accelerated access to information, and increased power efficiency when compared to conventional spinning disks. And, because flash storage is powerful enough to support an organization’s most demanding virtualized cloud environments, along with online transaction processing (OLTP), client virtualization, and business analytic applications, it is garnering attention from performance-hungry CIOs looking for new ways to speed access to business information.
Over the last 12 months, global fatalities from acts of terrorism have risen 30% compared to the previous five year average, according to a new security monitoring service from global risk analytics company Maplecroft, which also identifies China, Egypt, Kenya and Libya as seeing the most significant increases in the risk of terrorist attacks.
The Maplecroft Terrorism and Security Dashboard (MTSD) recorded 18,668 fatalities in the 12 months prior to 1 July, up 29.3% from an annual average of 14,433 for the previous five years. Over the same period the MTSD recorded 9,471 attacks at an average of 26 a day, down from a five year average of 10,468, revealing that terrorist methods have become increasingly deadly over the last year.
The MTSD classifies 12 countries as ‘extreme risk,’ many of which are blighted by high levels of instability and weak governance. These include: Iraq (most at risk), Afghanistan (2nd), Pakistan (3rd), Somalia (4th), Yemen (6th), Syria (7th), Lebanon (9th) and Libya (10th). However, of particular concern for investors, the important growth economies of Nigeria (5th), the Philippines (8th), Colombia (11th) and Kenya (12th) also feature in the category.
Communication in the workplace is challenging enough under the best of circumstances, but in workplaces that can have as many as four generations struggling to communicate with each other, even simple exchanges can result not only in miscommunications, but in misunderstandings that can create serious problems.
One person who has given this problem a lot of thought is Dana Brownlee, a corporate trainer and management consultant whose background in technology includes stints at AT&T Bell Labs, IBM Global Services and EMC. In a recent interview on the topic of multigenerational communication issues in the workplace, I asked Brownlee if, in light of her technology background, she had any sense of whether these issues are more or less prevalent in an IT organization, compared to other organizations.
“My experience has been that IT is such a rapidly developing field, that there's a Darwinian effect that forces anyone who's successful in the field to change, learn, and adapt, early and often,” she said. “As a result, I've tended to see less of these generational communication issues in IT. I'm sure there are exceptions, but that's my general observation.”
The overwhelming response to our range of programmes at Buckinghamshire New University has been indicative of the interest in our focus on resilience, and in our emphasis on the ‘New’ in our name. Resilience is not new – organisations have been good (or bad) at it for years. The upsurge in interest in Organisational Resilience is about the need to understand, blend and apply its constituent elements – risk, impact, security, crisis, emergency, disaster, business continuity, change and personnel management, to name a few. With many specialists around who cover some of these areas but few, understandably, who cover all, our aim is to provide a resilience perspective to every programme that we run.
For the MSc Organisational Resilience that is a given. However, in our programmes on Cyber Security, Business Continuity and Security Consultancy, that same approach is applied. By looking outside the specialism, but by retaining that specialist focus, the effective resilience super-practitioner/manager/professional/director is able to contextualise their own actions, plans and ideas and to build and develop an interlocking and intertwined capability. Finally, we are beginning to see the need expressed by both specialists and non-specialists for such a capability to be developed. However, this is not an anodyne function that is grey and bland; it is a multi-faceted and interlinked organisational enhancement that offers significant challenges; it needs confident, capable and educated leaders.
The statement that investments in resilience pay huge dividends when disaster strikes rings true, but the conversation can’t end there.
As a longtime local and state emergency management director, one of my final challenges remains unmet: the ability to gather the combined resources of a community to consider the challenges of restoration prior to a disaster.
Here’s why: risks are often well known, but that knowledge is diffused among a number of agencies. Those who know the most about risk rarely have an opportunity or a forum, outside of their own professional discipline, to educate others or share their knowledge. We need discussions outside of our respective disciplines because no one group or profession possesses either all of the answers or a clear understanding of all of the negative impacts that could arise from a disaster.
"Sexual assault is always avoidable." Far short of the 140 characters allowed by Twitter, but enough to cause an immediate “twit storm.” The unfortunate tweet -- generated by a consultant hired by Massachusetts to handle its Twitter communiqués -- was meant to cap off the state’s recognition of Sexual Assault Awareness Month. If awareness was the tweet’s goal, it achieved it in spades, immediately setting off a firestorm of controversy.
Joe Fitzgibbon stumbled into a similar twit storm. The Washington state representative tossed off a flippant -- but arguably amusing -- tweet after the Seattle Seahawks lost to the Arizona Cardinals in a football game last fall. “Losing a football game sucks,” Fitzgibbon wrote. “Losing to a desert racist wasteland sucks a lot.” The reference to Arizona’s arid climate and less-than-liberal immigration laws set off an interstate uproar, testimony to the power of a handful of words moving through the ether.
Words aren’t the only way Twitter can do damage. The New York City Police Department in April created a hashtag -- #myNYPD -- allowing citizens to quickly and easily post pictures to the department’s Twitter page of NYPD’s finest in action. The public largely responded by tweeting the department’s less-than-finest moments: a veritable gallery of the city’s men and women in blue clubbing, tear gassing, handcuffing and tackling Gotham citizens. “It was unfortunate to see what happened to the NYPD,” says Anil Chawla, author of an online white paper Twit Happens: How to Deal with Tweet Regret in the Public Sector. “It probably gives other government agencies pause.”
BSI has published a new white paper which explains why the benefits of BCM ‘go far beyond helping organizations recover from unexpected disruptions’.
The executive summary reads as follows:
- BCM is a critical business discipline, helping organizations prepare for, and recover from, a wide range of unexpected incidents and unwelcome interruptions.
- The importance of recovery – the most obvious purpose of BCM – can hardly be overstated. Thousands of businesses have saved time and money by getting back up and running quickly after a disruption – and some even owe their survival to it.
- Despite the ‘recovery rationale’ for BCM, many business leaders are yet to embrace the discipline – while others are implementing it piecemeal or poorly.
- Organizations are missing out on more – much more – than simply a speedy return to ‘business as usual’ in the event of disruption. BCM can provide a rich return on investment (ROI) without the occurrence of a disaster.
- A robust BCM process offers many advantages, from lower insurance premiums and process improvements to business expansion and brand enhancement.
- At a strategic level, BCM can play a key part in organizations’ risk management processes, answering to the demands of today’s onerous regulatory and corporate governance requirements.
- It is time for the ‘C-suite’ to wake up to the full range of BCM benefits and the true ROI the discipline offers.
- Help is at hand: the management system standard ISO 22301 provides the ideal framework for implementing a BCM system.
- Many organizations, both large and small, have already implemented ISO 22301, harnessing a host of benefits from this multi-faceted standard.
- Some have maximized the benefits by achieving independent third party certification to ISO 22301, enabling them to demonstrate ‘badge on the wall’ best practice in this vital area.
- There is a growing trend for companies to be required to hold certification to ISO 22301 by powerful private and public sector customers – or risk losing business.
Read the white paper (PDF).
Business continuity problems often carry their own penalty in the form of lost revenue, customer churn and reputational damage. In some cases, outages also mean stiff fines that go beyond the penalties that are part of any service level agreement. Thus SingTel, the Singaporean telecommunications company, received a S$6 million fine (about USD 4.81 million) from Singapore’s ICT regulator for a breakdown in service in October 2013. The disruption affected government agencies and financial institutions and had an impact on 270,000 subscribers. But what is really behind fining a company whose business continuity fails like this?
Among the many unwitting assumptions that occur when developing Business Continuity plans is the assignment of roles to specific individuals. A smart BCM planner will at least buttress that assignment with a backup person – just in case. But is that really enough?
In many cases it should be. But there are many others in which roles assigned to individuals (even with a backup) may prove wholly inadequate.
The most obvious is a natural disaster scenario. What if the individuals cannot be contacted, or the roads are closed? Either may cause a temporary problem. Consider, too, that the individuals may have other priorities – like protecting their family or their home. Those priorities may not mean they can’t respond to their BCM obligations – but they may make them unwilling to. If you’ve assigned tasks to an individual who doesn’t show up, you’ll have to scramble to reassign them (perhaps to someone unfamiliar with the role and its responsibilities). And even in a non-disaster situation, a named individual may be on holiday or away on business. They can’t help if they’re not there.
If circumventing the IT department is “kind of a given,” as one executive from a cloud services provider put it in a post I wrote earlier this month, it may be just as much of a given that what business units are most eager to acquire when they do circumvent the IT department are business intelligence and data analytics tools.
I recently discussed this phenomenon in an email interview with Fred Shilmover, CEO of InsightSquared, a provider of cloud-based BI services in Cambridge, Mass. When I asked Shilmover if he’s finding that business units are circumventing the IT department in order to get the data analytics tools they need, Shilmover said they see it all the time among companies that have purchased a lot of cloud-hosted software, and he noted that the tools are often purchased by a new line-of-business leader:
Residents of Alaska have historically been more likely than people in other states to have a supply of frozen food on hand, but their reliance on food from stores has grown in recent years, leaving them vulnerable in an emergency.
Like every other state, Alaska has to be prepared for disasters, both natural and man-made. But as it works to make sure its residents would have enough food in a disaster, the state also has to deal with some unique challenges.
“We’ve got volcanoes, earthquakes, cold weather — a lot of potential for emergencies up here,” said Danny Consenstein, state executive director for the U.S. Department of Agriculture Farm Service Agency in Alaska. “Do we have a food system that is resilient and strong, that could help us in case of emergencies?”
Catastrophe risk modelling firm AIR Worldwide has updated its earthquake model for Canada. The comprehensive update will provide insurers and other industry stakeholders with an advanced tool for assessing potential losses from ground shaking, fire following earthquake, tsunami, liquefaction, and landslide for the Canadian market and will be a significant tool for compliance with OSFI Guideline B-9.
"The updated Earthquake Model for Canada has been extensively reengineered and offers significant enhancements," said Dr Jayanta Guin, executive vice-president, research and modelling, AIR Worldwide. "The model reflects an up-to-date view of seismicity based on the latest hazard information from the Geological Survey of Canada and collaboration with leading academics. In addition to the ability to estimate losses from shake, fire following, and liquefaction, the release is the first in the industry to include fully probabilistic landslide and tsunami models for Canada. Virtually every component of the updated model has undergone peer review."
In a testament to the model’s sophistication, the Insurance Bureau of Canada selected AIR Worldwide to conduct the most comprehensive study of seismic risk in Canada ever undertaken. According to IBC, AIR's study will help drive a national discourse on mitigation, financial preparedness, and emergency response.
The final attribute of the RIMS Risk Maturity Model should be of great interest to risk managers responsible for establishing an enterprise risk management (ERM) program. Without some level of business resilience and sustainability built into your program, the iterative, cultural changes that are created by the ERM process will wane and your exposure to loss events will increase.
Traditionally, business continuity plans have focused on technology platforms, but resiliency means much more than ensuring that your information technology infrastructure is prepared for disaster recovery. Consider that the IT infrastructure that is the focus of your business continuity plans is likely to play a critical role in the execution of your mitigation activities (for example, a server that supports access rights and security). A lack of capability to explicitly identify relationships between these entities can result in huge increases in short term risk exposure at the worst possible time, as rapidly deteriorating business environments require even stronger change management ability.
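The relationships that paragraph describes can be made explicit in data rather than left implicit. The sketch below is illustrative only (the asset and plan names are hypothetical, not from the RIMS model): it maps IT assets to the mitigation plans that depend on them, so the failure of one asset immediately surfaces every plan it undermines.

```python
# Hypothetical map of IT assets to the continuity plans that depend on them.
dependencies = {
    "access-rights-server": ["incident-response-plan", "site-failover-plan"],
    "backup-datacenter": ["site-failover-plan"],
}

def plans_at_risk(failed_assets):
    """Return every mitigation plan that depends on a failed asset."""
    at_risk = set()
    for asset in failed_assets:
        at_risk.update(dependencies.get(asset, []))
    return sorted(at_risk)

# If the server supporting access rights goes down, two plans are exposed.
print(plans_at_risk(["access-rights-server"]))
# ['incident-response-plan', 'site-failover-plan']
```

Even a simple inventory like this makes the short-term exposure visible before a deteriorating situation forces the question.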
If your wife is a researcher in medical entomology, you’ll often hear odd tidbits related to mosquito-borne diseases. For instance, did you know how cute malaria parasites can look under a microscope? I didn’t either, until I met Cassandra Urquhart. (Some other things I’ve heard described as “cute” since then include, but are not limited to: cockroaches, nematodes, spiders, earwigs, and male mosquitoes.) She’s fascinated by her own work with La Crosse virus, excited by new papers on dengue fever, and interested in how many of the mosquitoes she’s collected at sites around Knox County, Tennessee will test positive for West Nile virus. In her spare time, she reads books on the history of yellow fever and Chagas disease for fun. Don’t get me wrong—she cares about the human toll of such diseases. But as a scientist, she’s usually more curious than alarmed about them. However, when it comes to chikungunya virus, my cheerfully bug-obsessed wife gets far more serious—and so do many entomologists. So why is chikungunya different?
Chikungunya virus seems, at first, to have a lot in common with dengue virus, another mosquito-borne pathogen. Both cause extremely painful diseases—chikungunya’s name comes from a Makonde word meaning “that which bends up,” referring to the contortions sufferers put themselves through due to intense joint pain. Dengue’s nickname is breakbone fever. Both viruses are primarily transmitted by the Aedes aegypti mosquito, and both have been moving slowly closer to the United States over the past decades, with local cases of dengue fever already found in Florida and Texas.
Last week the Centers for Disease Control and Prevention announced the first locally acquired cases of chikungunya in the United States. A woman in Miami-Dade County and a man in Palm Beach County, neither of whom had left the country recently, both came down with the dreaded disease.
There is a view that senior members of organisations are the ones who are the strategists, the shapers of the future and those who are responsible for the developments in industry and professions who form the direction of various sectors. They may well be; but in continuity, security, crisis and emergency management – they probably aren’t. The conflation of seniority in an organisation with the assumption that there is associated strategic capability is common, but where hierarchies are populated at the top end by those who have got there by a combination of luck, ruthlessness, ‘dead man’s shoes’, or any other combination of assumed capability, the reality is different.
In the world of ‘resilience’ (and for today I am combining those specialisms mentioned above – amongst others – under that term), strategies should be driven not by senior managers and directors but by those who are able to think, consider and plan for the future – though too often they aren’t. Resilience is necessarily reactive to what has happened previously, and in essence is about trying to reduce the impact of future recurrence. And to consider the wisdom of strategists we can look at any number of examples from recent years and think about why those with authority and power, kudos and seniority can’t strategise their way out of a paper bag.
No-one likes to feel that they have no control. It’s demoralising to think that someone or something else directs and influences your destiny, your future and everything involved in it. Of course, you can help yourself by learning to be assertive or adopting a particular approach to life that allows you to regain some of the control that we all risk losing in life today. However, there are many variables that influence the way our lives turn out; and to me it makes sense to reduce those variables as much as we are able.
It does seem strange that so many of us leave it quite late in life to understand that we need to take control and determine our future; that there are some aspects of life that we can affect on our own initiative and with determination. I meet a lot of people who, for example, have ‘never had the time’ to study the subject that their job involves, or who say ‘I’ve realised that I need a qualification’, when knowledge about their business and evidence of that knowledge are both key elements of development and progression. Without them, there are gaps in capability that you cannot fill – and they are therefore filled by someone or something else – and if that happens to you, you do not have control over either your own or your organisation’s future.
MONTGOMERY, Ala. – The backbreaking work accomplished by volunteers in Alabama following the April 28 through May 5 severe storms, tornadoes, straight-line winds and flooding seems to have occurred out of the clear blue sky.
- More than 25 Amish men traveled 70 miles to help a Madison County farmer clean up debris and help fix her home. They asked for nothing in return except a hot meal.
- Nearly 100 volunteers showed up over a recent weekend to cut and remove 25,000 cubic yards of debris in Bessemer. But that’s just a drop in the bucket – one month after the disaster, volunteers had removed nearly 80,000 cubic yards of debris. All these volunteers wanted was a “thank you.”
- In Coxey, Samaritan’s Purse, a Christian service and relief organization, brought in 471 volunteers who put in 5,900 hours in just three weeks. Also there, a local church was transformed into a storm relief center and overflowed with donations of clothes, food, personal hygiene items, cleaning supplies, and pet and baby items for survivors. The look on survivors’ faces was ample payment for these workers.
Every year and in every disaster, volunteers fill an often overlooked role and seemingly arrive and leave the scene at just the right time. A further look will reveal a network of agencies choreographing volunteer groups with seamless precision to fill the gaps that the federal government cannot. They are called Long Term Recovery Committees, or LTRCs.
Charles “Larry” Buckner serves as a Federal Emergency Management Agency volunteer agency liaison in Alabama to help coordinate these efforts and provide advice. He also reviews benefit requests to make sure there is no duplication.
“As far as we know, there is $4.2 million in unmet needs in home repair in all nine designated counties in this disaster,” Buckner said. “Of these counties, seven have set up Long Term Recovery Committees, some of which had just barely shut down because of the tornadoes from 2011.”
The two remaining counties have not had LTRCs in the past but are now forming them.
While FEMA and the state can and have helped survivors, neither the federal nor state governments are empowered by law to make disaster survivors whole, that is, to fully replace all that is lost.
LTRCs pick up where FEMA leaves off. Their goal is to identify and meet as many reasonable needs as possible.
These committees are the boots on the ground determining what unmet needs exist. They, in turn, work with state Voluntary Organizations Active in Disasters and other groups to attain what is needed, whether it is cash, workers or donated materials.
The committees are everywhere across the country, Buckner said. The concept has been in existence for more than 18 years.
These committees are made up of a variety of organizations – church denominations, local charities, community foundations and some independent groups, such as nondenominational “mega churches.” The one feature they all share is a calling to help serve those in need.
“United Way is providing case workers in some counties and may act as the fiduciary, the American Red Cross may provide case workers as does the Salvation Army,” he added.
In Alabama, Buckner said, the LTRCs are working with Serve Alabama, part of the governor’s office, which has applied for a grant to hire case workers.
“With the grant, they can hire 12 case workers for 18 months,” he said. “It asks for just shy of $1 million.” If approved, the grant will come from FEMA, he added.
The case workers meet with survivors and assess their unmet needs. They take into account what FEMA provided, but FEMA grants are capped at $32,400 per household. Anything beyond that amount is where the LTRC committees can assist.
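The arithmetic of that hand-off is simple enough to sketch. In the illustrative function below, only the $32,400 per-household cap comes from the article; the function name and the dollar figures in the example are hypothetical. The unmet need is whatever remains of a household's repair cost after the FEMA award, which can never exceed the cap:

```python
FEMA_CAP = 32_400  # per-household FEMA grant cap cited in the article

def unmet_need(total_repair_cost, fema_award):
    """Repair cost left over after FEMA assistance (capped per household)."""
    award = min(fema_award, FEMA_CAP)          # awards never exceed the cap
    return max(0.0, total_repair_cost - award)  # never negative

# A household with $50,000 in damage that received the maximum award
# still has $17,600 of unmet need -- the gap an LTRC can step into.
print(unmet_need(50_000.0, 32_400))  # 17600.0
```

Anything above that cap is exactly the territory the committees work in.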
The case worker will make a recommendation to a group of three to five committee members “in such a way that the board sees the facts but may never know who that individual is,” he explained.
“That is done to prevent favoritism or being passed over based on who the survivor is,” he said. “Then, the group gives a thumbs-up or thumbs-down to entirely or partially meet the unmet need. You won’t see them replacing a swimming pool, but they may replace house siding and decide to paint it as well.”
While this is going on, other members of the LTRC are working to recruit volunteer organizations such as Habitat for Humanity, the Mennonites and others to come in and repair or rebuild homes. Still others are securing grants large enough to meet most, if not all, of the unmet needs.
“The dollars can go into the millions,” he said.
And any excess funding all goes to meet the needs of the survivors.
“If there is a surplus, they use the money to replace furniture, appliances and other things that will help people get back on their feet.
“They want to provide people with safe, sanitary and functional homes,” Buckner said. “In some areas of the country they are not as successful. But they are here because the southern culture dictates that communities take care of their own.”
While no state can rule out the possibility of experiencing an earthquake, 42 states have a “reasonable chance” of damaging ground shaking from an earthquake, according to recently updated information from the U.S. Geological Survey (USGS). The agency’s research also determined that 16 states — those that have experienced a magnitude 6.0 earthquake or larger — have a “relatively high likelihood” of a damaging quake in the future.
The updated U.S. National Seismic Hazard Maps were released July 17 to reflect current understanding of where future earthquakes will occur. The data reflected what researchers have known: The earthquake hazard is highest on the West Coast, intermountain West and in regions of the Central and Eastern U.S., including near New Madrid, Mo., and Charleston, S.C. “While these overarching conclusions of the national-level hazard are similar to those of the previous maps released in 2008, details and estimates differ for many cities and states,” reported the USGS. “Several areas have been identified as being capable of having the potential for larger and more powerful earthquakes than previously thought due to more data and updated earthquake models.”
“What we’re doing is trying to forecast future shaking based on past behavior,” said Chuck Mueller, a research geophysicist with the USGS.
Congratulations! We’d Like You to Implement a Business Continuity Program in our Organization
Picking up the pieces and starting a business continuity program takes finding a BC mentor to a whole new level
We last spoke about finding one or several subject matter experts to help you understand a bit more about business continuity, disaster recovery and crisis management in your organization. Your inquisitiveness and understanding in these areas have brought you to the attention of management and perhaps position you as the best candidate to continue / restart BCP efforts. You’ve become a business continuity planner!
While interest alone doesn’t necessarily get you promoted, perhaps your experience in Information Technology, business operations, Audit / EDP Audit or facility management qualifies you to earn the confidence needed to support your BCP efforts.
As Business Continuity continues its growth as a profession, the idea of certification and the membership of professional bodies are more frequently discussed at all levels of the organization – from those starting out their career in the industry, right up to the Board Room.
As an individual you will be looking at the long term development of your career while those at Board level need to consider the long term growth of the organization. Of course the two of these are not mutually exclusive and many managers will tell you that the best way to grow an organization is to invest in its people.
The first step on the professional ladder is certification. Certification gives you an outward facing verification of your knowledge in that discipline. Attaining this level of qualification will set you apart from those who are not certified, who would only have knowledge of BC in their current environment.
A new report from New York State’s Attorney General details the damage to the state’s citizens and organizations from reported data breaches over the last eight years. “Information Exposed: Historical Examination of Data Breaches in New York State” attempts to illustrate the exponential growth in breaches, reports of breaches and some of the related costs, and then gives recommendations on how individuals and companies can better protect themselves.
- Almost 5,000 separate data breaches were reported to the AG’s office between 2006 and 2013.
- These breaches exposed 22.8 million personal records of New Yorkers.
- The number of breaches reported annually more than tripled during the time period.
- 2013 was a record-setting year, with 7.3 million records of New Yorkers exposed.
- Five of the 10 largest breaches reported to the AG have occurred since 2011. These are considered “mega breaches.”
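The "more than tripled" figure above implies a steep compound growth rate. As a quick back-of-the-envelope check (the calculation is mine, not the report's): tripling over the seven year-over-year steps from 2006 to 2013 works out to roughly 17% annual growth.

```python
# If annual breach reports "more than tripled" between 2006 and 2013
# (seven year-over-year steps), the implied compound annual growth
# rate is at least:
growth_factor = 3.0
years = 2013 - 2006          # 7 annual steps
cagr = growth_factor ** (1 / years) - 1
print(f"{cagr:.1%}")         # 17.0%
```

A floor of roughly 17% compounding every year is what "exponential growth in breaches" looks like in concrete terms.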
An interesting article in Fortune this morning covered a round table of security and technology experts who discussed the biggest threats to businesses. Stephen Gillett, Symantec’s chief operating officer, said there were three types of threats: script kiddies, organized crime and state-sponsored. In my opinion, he forgot a few, like hacktivism, which I think he includes with script kiddies, though hacktivism needs to stand on its own as one of the most serious threats to business operations.
The panel also raised what I think is a very important question: Do you know your company’s weakest security link? Yes, they talked about insider threats and how they are underestimated in relation to outsider threats:
It’s more likely that an employee doesn’t realize the value of the data access they have, even if they’re a low-profile employee.
Explaining just why cyber attacks and data breaches are a very real concern for business continuity professionals, a report published by ForeScout Technologies revealed that 96% of respondents who took part in their survey had experienced a major IT security incident in the last year. 39% experienced at least two incidents while 16% experienced at least five.
The IDG Connect Cyber Defense Maturity Report 2014 was the result of a study of 1,600 IT security decision makers who work for companies with more than 500 employees located in three distinct regions - the US, UK and DACH (Germany, Austria and Switzerland). Respondents worked in the finance, manufacturing, healthcare, retail and education sectors.
The majority of those surveyed were aware that some of their security measures were immature or ineffective, but only 33% were very confident that they could improve the less sophisticated security controls. The report suggested that growing operational complexity and threats have eroded security capacity, with over 43% claiming that the prevention, identification, diagnosis and resolution of problems is more difficult today than it was two years ago.
With the threat so high, as also demonstrated in the latest BCI Horizon Scan Report, organizations must ensure they have plans in place to deal with the consequences of IT security incidents should they occur. Organizations are becoming more and more reliant on technology and IT, but with an effective business continuity plan in place, the organization should still be able to function even if those systems malfunction.
38% of executives claim that supply chain management is their main challenge over the coming year with 42% placing it at the top of their list for increased investment. Those were some of the findings of a study carried out by the Consumer Goods Forum and KPMG International. The figures were even higher for those in the retail sector with over half (51%) of non-food retailers citing supply chain management as their main challenge.
The annual Global Top of Mind survey, a poll of nearly 500 C-suite and senior executives across 32 countries, also revealed how important the digital revolution will be over the next 12 months to consumer goods and retail companies – impacting everything from business growth and supply chain management to food safety, sustainability, and data security and privacy.
Supply chains are becoming longer and more complex with many factors coming into play such as infrastructure and weather - a lot of data needs to be processed in order to make sure they are fully optimised. As the complexity increases however, so does the possibility of disruption.
It is easy to see why supply chain management is an issue when you look at the most recent BCI Supply Chain Resilience Report. This report highlighted that 75% of respondents did not have full visibility over their supply chain and that 75% experienced at least one supply chain disruption over the last year with 42% of these disruptions occurring below the tier one supplier. 15% of respondents experienced disruptions that cost in excess of €1 million and 9% experienced a single disruption that cost in excess of €1 million.
The study concludes that as supply chains become increasingly complex, greater collaboration among suppliers and retailers is needed. Companies need to achieve greater visibility beyond their tier one and two suppliers, and downstream supply chains also need to be more transparent and agile.
The 2014 BCI Supply Chain Resilience Survey is currently live and can be completed by clicking here.
Readers of this blog know I am a huge Civil War buff. Growing up in Texas, I focused only on the Southern side as a youngster, and while this led to a sometimes myopic view of events, in my mid-20s, when I began to study the Northern side of the war, an entire panorama opened up for me, precisely because I had never seriously studied events from that perspective.
One thing that never changed, however, was the disaster that befell the South with the appointment of John Bell Hood to command the Army of Tennessee, which opposed General Sherman’s advance into Georgia following Sherman’s stunning defeat of the Confederate forces at Chattanooga and Lookout Mountain in Tennessee in late 1863. On this day 150 years ago, Confederate President Jefferson Davis replaced General Joseph Johnston with John Bell Hood as commander of the Army of Tennessee. Davis, impatient with Johnston’s defensive strategy in the Atlanta campaign, felt that Hood stood a better chance of saving Atlanta from the forces of Union General William T. Sherman. Davis selected Hood for his reputation as a fighting general, in contrast to Johnston’s cautious nature. Hood did what Davis wanted and quickly attacked Sherman at Peachtree Creek on July 20, but with disastrous results. Hood attacked twice more, losing both battles and destroying his army’s offensive capabilities. Over the following weeks of 1864, Hood’s actions not only paved the way for President Abraham Lincoln’s reelection but spelled, once and for all, the doom of the Confederacy.
I thought about the risks of appointing Hood to command when I read a recent article in Compliance Week magazine by Carol Switzer, co-founder and President of the Open Compliance and Ethics Group (OCEG), entitled “A Strategic Approach to Conduct Risk”. Her article was accompanied by an entry in the OCEG Illustrated Series, entitled “Managing Conduct Risk in the GRC Context”, and she also presented thoughts from a roundtable that included John Brown, Managing Principal, Risk Segment, Financial and Risk Division at Thomson Reuters; Tom Harper, Executive Vice President-General Auditor, Federal Home Loan Bank of Chicago; and Dr Roger Miles, Behavioral Risk Lead, Thomson Reuters.
Historically, corporate Boards of Directors have held the responsibility of risk management oversight, ensuring that risk management processes are clearly defined and appropriately enacted. Their role in managing risk has been to provide guidance and leadership on matters that impact the strategic direction of a company or its public image. In this traditional view, C-level management is left with the responsibility of actual risk assessment and mitigation, including issue resolution. But in today’s fast-paced and social-media driven world, the speed at which a risk can turn into a widely publicized issue means Board members must now provide both tactical and strategic supervision over risk management as part of their membership.
In the wake of recent financial crises, increased awareness and interest from a broader array of company stakeholders now exist. High-profile and highly reported product quality problems continue to impact multiple industries, and both regulators and Boards have been forced to re-evaluate the structure and the role of their risk governance efforts. Whether required by law or not, many corporate Boards, especially (but not solely) those in the financial industry, have taken a more active role in managing corporate risks. Regardless of regulation or stakeholder demands, an active risk management initiative at the Board level makes good business sense because each risk, whether strategic, operational, political, reputational or other, presents companies with an opportunity to build competitive advantage. The proliferation of risks in the current environment has intensified and forced companies to focus on impacts that must be avoided and opportunities that should be seized. From our point of view, the Board of today should play a direct role in the new risk environment paradigm by creating an active Board-level risk management program. Such an approach will allow organizations to transition from a position defending against risk to a more proactive approach that leverages risks as new opportunities and perhaps even advances organizations to more “blue ocean” possibilities.