Industry Hot News (7039)
Keith Fehr wants to be ready for anything when the Super Bowl comes to the University of Phoenix Stadium in February. “We trained on structural collapse, on foodborne illness. We practiced a biological agent release, a chemical warfare release, explosions, multi-vehicle accidents,” he said.
As director of emergency management for the Maricopa Integrated Health System, an Arizona system that encompasses an adult trauma center, pediatric trauma, a regional burn center and two behavioral health facilities, Fehr said he has his bases covered. “The big game may never see a chemical weapons attack,” he said, “but you always want to push to the point of failure, to see where you could do better.”
Fehr got his right-to-the-edge training this fall at the Center for Domestic Preparedness (CDP), a FEMA teaching facility where some 14,000 first responders and emergency managers come each year to drill, pairing classroom time with intensely realistic exercises. Walking wounded stagger through a mock downtown. Radiation victims crowd the halls of a full-scale hospital. Hazmat teams deal with actual anthrax and ricin. It’s a hardcore program, with FEMA picking up all participants’ costs.
When a technology company does well, more power to it. When it does good at the same time, it warrants our attention. So when TCN, a provider of cloud-based call center technology in St. George, Utah, announced that it was releasing technology that would help visually impaired people get jobs in call centers, it immediately grabbed my attention.
On Tuesday, TCN announced the release of Platform 3 VocalVision, technology that enables visually impaired people to navigate TCN’s Platform 3.0 call center suite. The approach was to optimize the platform to be compatible with Job Access with Speech (JAWS), a popular screen reader that assists users whose vision impairment prevents them from seeing screen content or using a mouse.
In an email interview, Terrel Bird, co-founder and CEO of TCN, explained the roots of the project.
(TNS) — This is a test of the region's preparedness for sea level rise and climate change. This is only a test:
It's Aug. 19, 2044. Hurricane Elvis, a Category 3 storm, is bearing down on Hampton Roads.
Sea levels are 1.5 feet higher than today. Because of climate change, the region has had 60 days of 90-degree heat this year. The National Weather Service is forecasting Elvis storm surges of 3 to 8 feet.
What does Hampton Roads need to do to prepare for something like this?
The scenario was part of a federally led exercise held this week at Old Dominion University.
The year is 2015. You walk into your bank to make a withdrawal, hold your smartphone to the terminal with one hand, and put the fingers of your other hand on the small green-glowing window.
A buzzer sounds and the words “IDENTITY REJECTED” flash onto the screen. A security guard appears from nowhere.
You begin the first of many long, frustrating protestations. You are who you say you are, but you can’t prove it.
Your identity has been snatched.
BSI has announced the availability of a revised version of PAS 96, which helps companies safeguard food and drink against malicious tampering and food terrorism. PAS 96 ‘Defending food and drink’ was first published in 2008 as a guide to Hazard Analysis Critical Control Point (HACCP), which identifies and manages risks in supply chains.
The food and drinks industry is used to handling natural errors or mishaps within the food supply chain, but the threat of deliberate attack, although not new, is growing with the changing political climate. Ideological groups can see this as an entry point to commit sabotage or further criminal activity.
The potential impacts of threats to the food supply chain are therefore significant. They can include direct losses incurred in responding to an act of sabotage and compensation paid to affected producers, suppliers, customers and distributors. Trading partners can impose embargoes, and long-term reputational damage can follow an attack.
Businesses in the UK are at risk of sleepwalking into a reputational time bomb due to a lack of awareness of how to protect their data assets, according to new research by BSI. As cyber attackers become more sophisticated in their methods, UK organizations are being urged to strengthen their security systems to protect both themselves and consumers.
The BSI survey of IT decision makers found that cyber security is a growing concern with over half (56%) of UK businesses being more concerned than 12 months ago. 7 in 10 (70%) attribute this to hackers becoming more skilled and better at targeting businesses. However, whilst the vast majority (98%) of organizations have taken measures to minimize risks to their information security, only 12% are extremely confident about the security measures their organization has in place to defend against these attacks.
These concerns echo those in the annual Horizon Scan survey carried out by the Business Continuity Institute and sponsored by BSI, which showed that cyber attacks and data breaches are the joint second biggest concern for business continuity practitioners. In the 2014 report, 73% of respondents to a global survey expressed either concern or extreme concern about each of these threats materialising.
Worryingly, IT Directors appear to have accepted the risks to their information security, with 9 in 10 (91%) admitting their organization has been a victim of a cyber-attack. Around half have experienced an attempted hack, and/or suffered from malware (49% in both instances). Around four in ten (42%) have experienced the installation of unauthorized software by trusted insiders, and nearly a third (30%) have suffered a loss of confidential information.
Organizations need to safeguard themselves and their customer data; however, there is an inherent lack of trust among consumers about how their data is handled, with a third of consumers admitting they do not trust organizations with their data. The many high-profile data breaches of the last few years help demonstrate why this lack of trust is justified. On the other hand, there is a level of acceptance that nothing online will ever be safe, leading to a false sense of security, a belief that ‘this will not happen to me’, among those who have not suffered from a cyber attack or cyber crime.
Maureen Sumner Smith, UK Managing Director at BSI added: “Consumers want their information to be confidential and not shared or sold. Those who want to be reassured that their data is safe and secure are looking to organizations who are willing to go the extra mile to protect and look after their data. Best practice security frameworks, such as ISO 27001 and easily recognizable consumer icons such as the BSI Kitemark for Secure Digital Transactions can help organizations benefit from increased sales, fewer security breaches and protected reputations. The research shows that the onus is on businesses to wake up and take responsibility if they want to continue to be profitable and protect their brand reputations.”
Efforts continue in order to stop the spread of the Ebola outbreak and find vaccines to defeat the virus. However, businesses need to be prepared in more ways than one. Although the risk is considered low that a widespread Ebola infection would occur outside West African countries, the potential consequences could be catastrophic and deadly. Like other epidemics that became pandemics, precautions against Ebola can start with common sense instructions to prevent infection and to react appropriately if it is detected. But they cannot end there. Organisations must make sure that additional protection is in place both for their employees and their business activities.
When creating a business continuity (BC) or disaster recovery (DR) plan, I say “begin with the end in mind.”
A BC/DR plan’s primary goal is to prepare an organization to respond to and fully recover from any disaster as quickly as possible. But how many organizations actually reach the end with a fully functional, integrated, easy-to-use crisis management plan (or incident management or continuity of operations plan)? How many still have a big, thick binder with multiple pages to flip through to find the information they need?
The point of this article is to map out elements of an effective crisis management plan with the goal of helping you avoid recovery delays and potential financial or operational disasters. Having an effective crisis management plan with each action mapped out prior to an incident is essential. Without it, your emergency response might lead to catastrophic consequences for your employees, your business and your customers.
2014 saw continued use of buzzwords like cloud, wearables, BYOD and IoT, but conversations about what these will mean for business if we don’t evolve and prepare our IT infrastructures were significantly lacking.
There’ll always be some level of disconnect between maintaining IT and maintaining business productivity; the two have very different deliverables. However, they must be interlinked, as there are key areas where IT and business objectives overlap. Understanding the ICT environment in depth is important to improving business resilience and the efficiency of the ICT infrastructure.
In this article Patrick Hubbard highlights emerging areas where greater understanding is required to enable organizations to maintain current levels of ICT availability and resiliency.
EMC Corporation has published the findings of a new global data protection study that reveals that data loss and downtime cost enterprises more than $1.7 trillion in the last twelve months. Data loss is up by 400 percent since 2012 while, surprisingly, 71 percent of organizations are still not fully confident in their ability to recover after a disruption.
The EMC Global Data Protection Index, conducted by Vanson Bourne, surveyed 3,300 IT decision makers from mid-size to enterprise-class businesses across 24 countries.
Impact of data loss and downtime
The good news is that the number of data loss incidents is decreasing overall. However, the volume of data lost during an incident is growing exponentially:
- 64 percent of enterprises surveyed experienced data loss or downtime in the last 12 months;
- The average business experienced more than three working days (25 hours) of unexpected downtime in the last 12 months;
- Other commercial consequences of disruptions were loss of revenue (36 percent) and delays to product development (34 percent).
New wave of data protection challenges
Business trends, such as big data, mobile and hybrid cloud are creating new challenges for data protection:
- 51 percent of businesses lack a disaster recovery plan for any of these environments, and just 6 percent have a plan for all three;
- In fact, 62 percent rated big data, mobile and hybrid cloud as 'difficult' to protect;
- With 30 percent of all primary data located in some form of cloud storage, this could result in substantial loss.
The protection paradox
Adopting advanced data protection technologies dramatically decreases the likelihood of disruption, and many companies turn to multiple IT vendors to solve their data protection challenges. However, a siloed approach to deploying these solutions can increase risks:
- Enterprises that have not deployed a continuous availability strategy were twice as likely to suffer data loss as those that had;
- Businesses using three or more vendors to supply data protection solutions lost three times as much data as those who unified their data protection strategy around a single vendor;
- Those enterprises with three vendors were also likely to spend an average of $3 million more on their data protection infrastructure compared to those with just one.
More details: http://emc.im/DPindex
There are a great many challenges to overcome to prepare a sizable organization for crises, emergencies or reputation disasters. But one seems nearly intractable: the ignorance of those in high places. The very ones who will make the big decisions when push comes to shove. The lawyers, the CEOs, the regional execs, the Incident Commanders, the chiefs, the directors, the presidents.
If the ones who call the shots during a response do not understand the water they are swimming in, the effort is doomed–despite all the preparation that communication and public relations leaders may put in place.
A week or so ago I had the privilege of presenting to the Washington State Sheriffs and Police Chiefs’ Association training meeting. Chief Bill Boyd and I were to give a four-hour presentation to these law enforcement leaders. Bill did the bulk of the work on the presentation but had a medical emergency and couldn’t present with me. One item he had gathered really hit me, and those present. The Boston Police radio message from the incident commander on the scene just after the bombing included the calm but clearly adrenaline-filled IC’s details of the actions the police on the scene were taking. Then he said, “And I need someone to get on social media and tell everyone what we are doing.” That’s correct: one of this commander’s top priorities was to inform the public of police actions, and the way he knew to do that was through the agency’s social media channels.
Boards, regulators and leadership teams are demanding more and more of risk, compliance, audit, IT and security teams. They are asking them to collaboratively focus on identifying, analyzing and managing the portfolio of risks that really matter to the business.
As risk management programs evolve to more formal processes aligned with business objectives, leaders are realizing that by developing a proactive mindset in risk and compliance management, teams can provide added value to help the organization gain agility by identifying new opportunities as well as managing down-side risk. Organizations with this new perspective are more successful in orchestrating change to provide a 360-degree view of both risk and opportunity.
Risk teams that are further along on the journey of leveraging proactive approaches to risk management look not only within the organization but beyond to supplier, third party and customer ecosystems. This means developing a view across the larger enterprise infocosm, to ensure alignment of people, processes and technologies.
(TNS) — In baseball, when a slugger has been slumping for a few years in a row, the pundits in the upper deck will be quick to declare a trend: “the bum’s done,” they’ll assert.
Weather forecasters are a little more retrospective.
In 2001, forecasters announced that they believed the tropics had been in a cycle of more and stronger storms since 1995. Such periods can last 25 to 40 years.
The hurricane season that ended Sunday, Nov. 30, was quiet. So was the year before that. Only three seasons since 1995 have been below average. We just went through two of them.
This followed some of the busiest, and most damaging, years on record.
In theory, BCM and ERM should get along just fine. ERM, or enterprise risk management, is concerned with identifying both positive and negative risk for an organisation – or opportunities as well as threats, if you prefer. Business continuity management is about keeping a business in operation in the face of adversity. It’s also about enhancing the value and profitability of operations, thanks to a better corporate image towards customers, banks, insurers and the like. Effective BCM depends on good risk analysis of the kind that ERM is designed to do. With a wide selection of ERM software tools available to automate risk management, how can organisations find out whether there’s one that’s right for them?
Some interesting research came out last month regarding the enterprise’s attitude toward the cloud and what it will take to push more of the data load, and mission-critical functions in particular, off of local infrastructure. It turns out that while security and availability are still prime concerns, flexibility and federation across multiple cloud architectures are equally important.
In IDG’s most recent Enterprise Cloud Computing Study, more than a third of IT respondents say they are comfortable with the current state of cloud technology, with about two thirds saying the cloud increases agility and employee collaboration. The key data, however, comes in the attitude toward advanced networking technologies like software-defined networking (SDN) and network functions virtualization (NFV), with more than 60 percent saying they plan to increase their investment in these areas specifically to enhance their ability to access and manage disparate cloud environments.
Where does your business stand on security readiness?
If you are like the majority of small businesses, you are pretty nervous about your cybersecurity efforts and ability to thwart and/or react to a threat.
In October, e-Management asked attendees at the CyberMaryland Conference about their cybersecurity policies. What the CyberRX survey found was that 63 percent of small businesses aren’t very confident about their continuous security monitoring capabilities and nearly a quarter don’t provide any type of security training for their employees. Of those that do provide some sort of training, it is mostly periodic—and we’ve learned that cybersecurity education and training needs to be a constant evolving effort because the threat landscape is always changing.
(TNS) — Tornado Alley is undergoing a transformation.
The number of days that damaging tornadoes occur has fallen sharply over the past 40 years, a study published recently in the journal Science shows. But the number of days on which large outbreaks occur has climbed dramatically.
“It’s really pretty shocking,” said Greg Carbin, warning coordination meteorologist with the Storm Prediction Center in Norman, Okla.
In the early 1970s, there was an average of 150 days each year with at least one F1 tornado. That number has dropped to about 100 days each year now.
There were just six days in the entire decade of the 1970s with at least 30 F1 tornadoes. Now there is an average of three such days per year.
(TNS) — The nation's top housing official recently toured the core of a house in Brownsville that holds the promise of returning people quickly to their homes after a major disaster. What he didn't know was that it had been partially put up in an afternoon by a group of unskilled teenagers.
The house inspected Monday by Housing and Urban Development Secretary Julian Castro is part of a $2 million pilot project that envisions the construction of less-expensive, structurally sound housing within days of a disaster instead of years. Although hundreds of low-income homes have been rebuilt since Hurricanes Dolly and Ike laid waste to the Texas Gulf Coast in 2008, many families are still waiting for housing already funded with federal disaster money.
The RAPIDO project, to build 20 prefabricated homes in the Rio Grande Valley, is the first of two projects that its originators hope will revolutionize not only the way housing is built after disasters, but as a way to provide low-income housing everywhere in Texas. A similar $4 million project to build 20 homes in Harris and Galveston counties is in its early stages and expected to produce its first house by March.
Remember the business aftermath of Hurricanes Katrina and Sandy? In each case, companies far and wide scrambled to put business continuity/disaster recovery (BC/DR) plans in place if they didn’t already have them – whether or not they had felt so much as a raindrop from the super-storms.
But human memory is short-lived. As incredible as it may seem, some people have already forgotten the devastation and destruction caused by disasters such as Hurricanes Sandy and Katrina. The problem, of course, is that the risk of disasters hasn’t gone down, even if our alertness to them has. All you need to do is take a look at data such as Sperling’s natural disaster map to see that the next disaster could be just around the corner … with the risks notably higher depending on where you are.
So now – in between crises – is a great time to figure out how to mitigate the risk associated with natural disasters. And one of the foremost ways to do so is to consider the location of your secondary or backup data center.
By Charlie Maclean-Bristol, FBCI
Recently I conducted three strategic level exercises and thought I would share some of the lessons learned. The exercises consisted of two public sector executive teams and a manufacturer.
The following are the main lessons learned.
By Steve Salinas
For those of us in the technology industry, comparing Moore's Law to technology advancement is nothing new. Moore's Law holds that computer processing power will double every two years. Aside from a few peaks and valleys, I think most would agree that this has held true. I contend that Moore's Law, at least in principle, holds true for malware and attack methods as well.
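The arithmetic behind that doubling rate is simple to sketch (a minimal illustration of the principle, not something from the original piece: the function name and the ten-year example are my own):

```python
def moores_law_factor(years, doubling_period=2):
    """Growth multiple after `years`, assuming one doubling per period."""
    return 2 ** (years / doubling_period)

# Five doublings over a decade: a 32x increase in capability.
print(moores_law_factor(10))  # → 32.0
```

If the same curve even loosely describes malware tooling, the defensive gap compounds just as quickly, which is the author's point.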
Unless you have been hiding under a rock for the last few years, you will be fully aware that cybercrime has exploded. Hackers who once had to build their own malware from scratch now have access to numerous toolkits that make developing their own malware variant easy. For the hacker who would rather spend money than time on malware, there are even malware exchanges where anyone can buy malware built for anything from controlling a webcam to siphoning credit card information, and everything in between.
Combine the ease with which hackers can access malware with the way social media makes it easy to organize groups of people around the world, and you have a dangerous new frontier: attackers who can work together to target an organization, steal data and cover their tracks, all under the guise of anonymity. How can you defend yourself from this new breed of attacker?
One of the highlights of the Business Continuity Institute’s World Conference in November was BSI’s announcement of its new guidance for organizational resilience, BS 65000. Richard Taylor from BSI highlighted the benefits of organizational resilience and explained what this standard can do to support organizations aiming to achieve it, by providing an overview of resilience, describing the foundations required and explaining how to build resilience. The standard, which deals with an organization’s capacity to anticipate, respond and adapt, has now been published and was officially launched at an event in London on 27 November.
It is argued that by following this guidance, an organization is more able to adapt successfully to unforeseen and disruptive changing environments, perhaps not dissimilar to an effective business continuity programme. This is possibly taken a step further by enabling an organization to gain a competitive edge by identifying gaps in the market or better understanding risks and opportunities, and being more agile and innovative in order to exploit these. It could also help the organization to reduce costs and increase efficiency by avoiding potential pitfalls.
More and more these days we talk about the value of reputation and BS 65000 provides guidance that can help an organisation preserve or improve its reputation by being seen as vigilant and robust, while also engendering trust amongst its internal and external stakeholders. All of this can help cultivate a culture of shared purpose and values.
Patrick Alcantara, Research Associate at the BCI and author of the Institute’s Working Paper on conceptualising organizational resilience, commented: “We see the launch of the BS 65000 as the next step towards building a more resilient world. As one of the institutions who collaborated in developing this standard, we subscribe to its vision of putting resilience as a strategic goal for top management. This standard adds more value to BC and the work its practitioners do as one of the integral ‘protective disciplines’ within its scope.”
Anne Hayes Head of Market Development for Governance and Risk at BSI said: “Organizations that are resilient behave in a very specific way and have long understood what this means to their long term success. They take a proactive approach to governing themselves and have pinpointed the importance of being forewarned. BS 65000 can work alongside their existing risk, crisis and business continuity management strategies to provide a solid defence against weathering a tough business climate.”
A wide range of experts and representatives from a cross-section of industry, trade bodies and academia were involved in the consensus-based process for developing the standard. Deborah Higgins MBCI, Head of Learning and Development at the BCI, played an important role in this as part of the group that developed the standard, representing the BCI Membership and encouraging Members to comment as part of the public consultation process.
To purchase your copy of BS 65000, access the BSI shop by clicking here and then on the link for BS 65000.
I’ve pointed out many times over the years that everyone has their own perception of green. To a coal plant operator, a 20 percent reduction in emissions is cause for celebration, while the environmentalist still frets over the 80 percent still coming out of the stack.
So it is understandable that the data center industry – arguably the top energy consumer on the planet – is both the hero and the villain when it comes to greening up the world’s digital infrastructure. And in time-honored tradition, the biggest targets are always first on the hit list, which in this case would be the hyperscale providers like Google, Facebook and Amazon.
But as Data Center Dynamics’ Peter Judge points out, criticism of the web-scale providers actually misses the mark when it comes to environmental friendliness because their facilities, while massive, are also among the most efficient on the planet. According to a recent breakdown from the Natural Resources Defense Council, hyperscale infrastructure consumes about 5 percent of total data center energy draw, and is probably responsible for even less of the emissions due to its state-of-the-art power capabilities. The largest consumers of data center power are the small-to-mid-sized facilities, which account for about half of total consumption. Large enterprises take up another quarter or so, followed by the colocation industry, which draws another 20 percent.
Okay, sure, maybe Gartner has a point about this whole “data lake becoming a data swamp” problem. But a recent Information Age piece proposes that organizations can get around all that — and the need for data scientists — with a “data refinery layer.”
Haven’t heard of such a thing? Neither have I, and Google seems to only have heard of it twice, including this article and an unsourced Word document.
“As data is consolidated, the refinement layer would process, evaluate, correlate and learn from the information passing through it, essentially generating additional insights and information from the data, and also linking to the aforementioned applications to drive value,” the article explains.
That sounds wonderful. Let’s do it! The problem is, after reading the article, I’m still not exactly sure what it is or if it exists or if it could exist.
Did you know that by making a few simple changes to your CV and LinkedIn profile you can increase the number of interviews you secure by up to 50%?
Having an effective CV and LinkedIn profile is absolutely critical, so, in conjunction with the CV & Interview Advisors, the Business Continuity Institute is inviting you to attend a free webinar to help you significantly enhance your CV and LinkedIn profile as you prepare for the new year.
The webinar will be delivered by one of the UK's leading authorities on personal branding and career enhancement and previous events have been described as "outstanding" and "truly inspirational".
In this lively one hour session, you will learn:
- How to assess the effectiveness of your current CV
- The things that you should never do on your CV
- How to transform your CV and LinkedIn profile into a powerful business case
- How to use case studies on your CV and LinkedIn profile to differentiate you from other candidates
The webinar is not your typical boring top 10 tips; it is a leading-edge session for professionals and is packed with practical advice that really works, as one candidate recently confirmed: “Following the webinar, I have spent the last week re-writing my CV in the format you discussed, then put it online last night. Today, I have received three emails from agencies who want to deliver my CV to their clients. Alongside this I have had two calls from companies who have invited me in for a chat about vacancies. This is more interest than I have had in the last three years combined! Testament to the success of your webinar.”
If you want more interviews and job offers, then investing one hour of your life watching this webinar is an absolute must. The webinar takes place on Monday 8th December at 1915 GMT and to register, all you need to do is click here and fill in your details.
The enterprise is poised to embark on a number of data and infrastructure initiatives in the coming years, almost all of which are focused on the capture and analysis of Big Data.
But while the term “Big Data” is appropriate to describe the scale of the challenge ahead, it leaves the impression that the solution is simply to deploy more resources to accommodate larger workloads. But as many early adopters are finding out, Big Data is not just big, it’s also complex and nuanced — and that spells trouble for anyone who thinks they can just throw resources at Big Data and make it work.
As MarkLogic’s Jon Bakke points out, Big Data can encompass everything from large text and database files to audio/video and real-time data streams tracking changes to complex systems and environments. To handle this, the enterprise will need to mount a multi-pronged approach that encompasses not just advanced database systems and emerging infrastructure technologies, but legacy systems as well. A key strategy in squaring this circle is the logical data warehouse (LDW), which encompasses two or more physical database platforms united under a common access and control mechanism. In this way, the enterprise can take advantage of existing capabilities like RDBMS while employing state-of-the-art capabilities for the specific functions that need them.
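The core idea of an LDW can be sketched in a few lines (an illustrative toy, not MarkLogic's or anyone's actual product: the class, backend names and query strings are all hypothetical):

```python
class LogicalDataWarehouse:
    """Toy LDW: several physical stores behind one access layer."""

    def __init__(self):
        self.backends = {}  # backend name -> query function

    def register(self, name, query_fn):
        """Attach a physical platform (an RDBMS, a document store, etc.)."""
        self.backends[name] = query_fn

    def query(self, backend, request):
        """Route a request to the platform suited to that workload."""
        return self.backends[backend](request)


ldw = LogicalDataWarehouse()
ldw.register("rdbms", lambda q: f"SQL result for: {q}")       # legacy relational
ldw.register("docstore", lambda q: f"Document hits for: {q}")  # newer NoSQL tier
print(ldw.query("rdbms", "sales by region"))
```

The point of the pattern is exactly what the paragraph above describes: callers see one interface, while legacy RDBMS capability and newer state-of-the-art stores each handle the functions they are best at.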
CHICAGO – With the holidays fast approaching, the U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) Region V office encourages everyone to consider giving gifts that will help protect their family members and friends during a future emergency.
“A gift to help prepare for emergencies could be life-saving for friends and family,” said FEMA Region V acting regional administrator, Janet Odeshoo. “These gift ideas provide a great starting point for being prepared for an emergency or disaster.”
Supplies for an emergency preparedness kit can make unique—and potentially life-saving—holiday gifts, such as:
- Battery-powered or hand-crank radio and a NOAA Weather Radio with tone alert.
- A flashlight with extra batteries.
- Solar-powered cell phone charger.
- Smoke detector and/or carbon monoxide detectors.
- First aid kit.
- Fire extinguisher and fire escape ladder.
- Enrollment in a CPR or first aid class.
- Books, coloring books, crayons and board games for the kids, in case the power goes out.
- Personal hygiene comfort kit, including shampoo, body wash, wash cloth, hairbrush, comb, toothbrush, toothpaste and deodorant.
- A waterproof pouch or backpack containing any of the above items, or with such things as a rain poncho, moist towelettes, work gloves, batteries, duct tape, whistle, food bars, etc.
Holiday shoppers might also consider giving a winter car kit, equipped with a shovel, ice scraper, emergency flares, fluorescent distress flags and jumper cables. For animal lovers, a pet disaster kit with emergency food, bottled water, toys and a leash is also a good gift.
The gift of preparedness might just save the life of a friend or family member. For more information, preparedness tips or other gift ideas, visit www.Ready.gov.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at . The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
As holiday shopping gets underway, several major retailers are opening even earlier this year, offering the prospect of deep discounts and large crowds to an ever-growing number of shoppers.
The National Retail Federation (NRF) notes that 140 million holiday shoppers are likely to take advantage of Thanksgiving weekend deals in stores and online.
Millennials are most eager to shop, with the NRF survey showing that 8 in 10 (79.6 percent) 18- to 24-year-olds will or may shop over the weekend, the highest of any age group.
Much has been written about the risks of online shopping, but for those who still head to the stores, there are dangers there too.
Steelhenge Consulting has published the results of its Crisis Management Survey 2014: ‘Preparing for Crisis, Safeguarding Your Future’.
The aim of the Crisis Management Survey was to build a better picture of how organizations are preparing themselves to manage crises effectively in order to protect their reputation and performance. It asked its 375 participants, drawn from organizations around the world, what they are doing to prepare to manage crises, what challenges they face in creating a crisis management capability, and how they assess their overall level of crisis preparedness.
Over half rated themselves as less than very well prepared, with 13 percent responding that they were either not well prepared or not prepared at all.
The crisis communications function was shown to be lagging behind when it comes to crisis preparedness: while 84 percent of organizations surveyed had a documented crisis management plan, over a quarter of respondents reported that they do not have a documented plan for how they will communicate in a crisis, and 41 percent responded that they do not have guidance on handling social media in a crisis.
Other key themes from the survey results include:
Embedding: less than half of the respondents had a programme of regular reviews, training and exercising that would help embed crisis management within an organization and create a genuinely sustainable crisis management capability.
Engagement: in the face of high-profile crises befalling major organizations year after year, 29 percent of organizations taking part in the survey still waited for the brutal experience of a crisis before creating a plan. Crisis preparedness is still a work in progress, particularly with regard to crisis communications planning.
Ownership: ownership of crisis management at the strategic level amongst the survey population lay predominantly with the chief executive. However, responsibility for day-to-day management of the crisis management capability was spread widely across a broad range of functional roles.
For the full results of the Crisis Management Survey, please click here (PDF).
A lack of widespread adherence to best practices, combined with the number of organizations that have suffered a significant cyber attack, potentially indicates a false sense of security.
SolarWinds has released the results of its Information Security Confidence Survey, which explored IT professionals’ confidence in their organizations’ security measures and processes. The survey found that while confidence is notably high, likely the result of several key factors, widespread adherence to security best practices is lacking and significant, damaging attacks continue, potentially indicating that this confidence amounts to a false sense of security.
“Organizations are taking positive steps toward improving their information security; most notably in terms of budget and resources,” said Mav Turner, director of security, SolarWinds. “It’s important, however, to never fall into the trap of over-confidence. IT pros should do everything they can to ensure the best defences possible, but never actually think they’ve done everything they can. This approach will ensure they are proactively taking all the steps necessary to truly protect their organizations’ infrastructures and sensitive data.”
Conducted in October 2014 in conjunction with Enterprise Management Associates, the survey yielded responses from 168 IT practitioners, managers, directors and executives in the UK from small and midsize enterprise companies.
Recently the US law firm of Foley and Lardner LLP and MZM Legal, Advocates & Legal Consultants in India jointly released a white paper, entitled “Anti-Bribery and Foreign Corrupt Practices Act Compliance Guide for U.S. Companies Doing Business in India”. For any compliance practitioner it is a welcome addition to country specific literature on the Foreign Corrupt Practices Act (FCPA), UK Bribery Act and other anti-corruption legislation and includes a section on India’s anti-corruption laws and regulations.
FCPA Enforcement Actions for Conduct Centered in India
Under the FCPA, several notable US companies have been through enforcement actions related to conduct in India. Although not monikered as a ‘Box Score’ the authors do provide a handy chart which lists the companies involved, a description of the conduct and fine/penalty involved.
Application development is a vital and ever-changing part of the mobile ecosystem. Now, there are rumblings that a new approach is necessary. Research sponsored by Kinvey points to dissatisfaction on the part of CIOs about mobile app creation. Half of those surveyed, according to the story at Associations Now, think that it takes too long to build an app. More than half say it takes seven months to a year, and 35 percent think it takes less than six months.
A big problem, according to the survey, is lack of a cohesive central strategy. Seventy-five percent of respondents say that product lines and “individual functions” drive development. The process may be changing, however: 54 percent of those who answered the survey say they will standardize development and 63 percent will utilize cloud approaches.
The call to change is being heard. Forrester released a report on the transitions occurring in the mobile app development sector, identifying eight in all. Among the top four: standalone apps will fade, hardware changes will create new opportunities, and mobile competition will shift to both accessories and ecosystems. The remaining changes, and details on all of them, are available at the ReadWrite story on the Forrester research.
While organizations of just about any size have an interest in tapping into the potential of Big Data, the vast majority of them won’t have the resources required to actually do that any time soon unless they get some external help.
With that issue in mind, First Data, a provider of credit card processing services, has been building out an Insightics analytics service in the cloud that aggregates both internal data collected by First Data and external data sources. The latest external data source that First Data is including comes from Factual, provider of a location-based service that helps organizations deliver mobile experiences based on the physical location of a mobile computing device.
Sandeep Garg, vice president of information and analytics at First Data, says that rather than requiring small-to-medium-sized (SMB) organizations to build their own Big Data applications and acquire associated infrastructure, First Data has created an application that they can either use directly or address programmatically via application programming interfaces (APIs).
Former FBI Director Robert Mueller once said, “There are only two types of companies: those that have been hacked and those that will be. Even that is merging into one category: those that have been hacked and will be again.” This is the environment in which risk managers must protect their businesses, and it isn’t easy.
Cyber risk is not an IT issue; it’s a business problem. As such, risk management strategies must include cyber risk insurance protection. Until recently, cyber insurance was considered a nice-to-have supplement to existing insurance coverage. However, following in the wake of numerous, high-profile data breaches, cyber coverage is fast becoming a must-have. In fact, new data from The Ponemon Institute indicates that policy purchases have more than doubled in the past year, and insiders estimate U.S. premiums at around $1 billion today and rising.
But is a cyber policy really necessary? In short, yes. As P.F. Chang’s China Bistro recently discovered, commercial general liability (CGL) policies generally do not include liability coverage to protect against cyber-related losses. CGL policies are intended to provide broad coverage, not necessarily deep coverage. Considering the complexity of cyber risks, there is a real and legitimate need for specialized policies that indemnify the insured against cyber-related loss and liability.
By Mark Kedgley
December 15th marks the anniversary of the discovery of Target's infamous security breach; but has anything really changed in the year that has gone by? Retailer after retailer is still falling foul of the same form of malware attack. So just what is going wrong?
The truth is that there is never going to be a 100 percent guarantee of security, and with today's carefully focused zero-day attacks, the continued reliance on prevention rather than cure is clearly not working. Organizations are blithely continuing day-to-day operations while an attack is in progress because they are simply not spotting the breaches as they occur.
If an organization wants to maintain security and minimise the financial fallout of these attacks, the emphasis has to change. Accept it: the chances of stopping every breach with a prevention-only strategy are slim at best. Instead, with non-stop, continuous visibility of what is going on in the IT estate, an organization can at least spot in real time the unusual changes that may represent a breach, and take action before it is too late.
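The core of that continuous-visibility approach is change detection: baseline what the estate should look like, re-scan, and flag anything that no longer matches. The sketch below is a minimal, hypothetical illustration of the idea using hashes over an in-memory stand-in for monitored files; the file names and contents are invented, and a real deployment would scan actual systems on a schedule.

```python
import hashlib

def fingerprint(contents):
    """Hash each monitored item so any change becomes detectable."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in contents.items()}

def detect_changes(baseline, current):
    """Return the names whose hashes no longer match the baseline."""
    return sorted(name for name, digest in current.items()
                  if baseline.get(name) != digest)

# A tiny stand-in "IT estate" and its known-good baseline.
estate = {"payment.dll": b"v1", "checkout.cfg": b"timeout=30"}
baseline = fingerprint(estate)

# An attacker quietly swaps a binary; the next re-scan catches it.
estate["payment.dll"] = b"v1-with-implant"
changed = detect_changes(baseline, fingerprint(estate))
```

Run continuously rather than as an occasional audit, this is what lets an organization spot the unusual change in real time instead of reading about it months later.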
Despite over half of companies wanting to retain control of their IT disaster recovery inhouse, a lack of frequent testing is putting these businesses more at risk of IT downtime than companies which outsource. The mismatch between the high levels of confidence that in-house disaster recovery yields and the high test failure rates indicates that either testing needs to be stepped up or companies would be better to outsource.
This was one of the key findings of research carried out by Plan B, through surveying 150 contacts that attended the BCI World conference in November 2014. All contacts interviewed were within an IT function of their business, with knowledge of the disaster recovery strategy and solution for their business.
Other findings include:
As efforts to contain and eliminate the current Ebola outbreak in West Africa continue, countries around the world are making preparations to be ready in case the virus arrives. The Australian government is also making plans to deal with such an event. Ebola already exists in Australia – but fortunately (so far) only as the subject of research in the high security Australian Animal Health and Research Centre in Geelong to develop a vaccine. But how does Australian preparedness compare with that of other countries? And what would happen if Ebola cases were declared in Australia in the way they have already occurred in Spain and in the United States?
If you want to achieve an enterprise view of your data, your solution options basically fall into one of two camps:
- Move it and integrate.
- Leave it and virtualize.
Metanautix’s co-founder, Theo Vassilakis, contends that both add unnecessary complexity to enterprise data analytics.
“A lot of the times, that's where the complexity comes from: Oh hold on, let me do a little Informatica here, let me do a little virtualization here, and let me do a little Teradata there,” Vassilakis said during a recent interview. “So, solving the same business problem, some of the data sets you’ll have to move and some of the data sets you're not going to be able to move. Additionally, you end up having to do the moving with one system and then the querying with another system."
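The two camps can be sketched side by side. In the hypothetical example below, two in-memory SQLite databases stand in for separate enterprise systems; the table names and data are invented for the illustration. Camp one copies everything into a warehouse and queries the copy; camp two leaves the data where it is and joins results in the access layer.

```python
import sqlite3

# Two stand-in source systems holding related data.
sales = sqlite3.connect(":memory:")
sales.execute("CREATE TABLE sales (region TEXT, amount REAL)")
sales.execute("INSERT INTO sales VALUES ('east', 100.0), ('west', 50.0)")

crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE regions (region TEXT, manager TEXT)")
crm.execute("INSERT INTO regions VALUES ('east', 'Ann'), ('west', 'Bo')")

# Camp 1 - move it and integrate: copy both data sets into one store,
# then query the consolidated copy (the ETL/warehouse pattern).
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales (region TEXT, amount REAL)")
warehouse.executemany("INSERT INTO sales VALUES (?, ?)",
                      sales.execute("SELECT * FROM sales").fetchall())
warehouse.execute("CREATE TABLE regions (region TEXT, manager TEXT)")
warehouse.executemany("INSERT INTO regions VALUES (?, ?)",
                      crm.execute("SELECT * FROM regions").fetchall())
moved = warehouse.execute(
    "SELECT r.manager, s.amount FROM sales s "
    "JOIN regions r ON s.region = r.region ORDER BY r.manager").fetchall()

# Camp 2 - leave it and virtualize: query each source in place and
# join the results in the access layer, moving nothing.
amounts = dict(sales.execute("SELECT region, amount FROM sales"))
managers = dict(crm.execute("SELECT region, manager FROM regions"))
virtual = sorted((managers[r], amounts[r]) for r in amounts)
```

Both paths produce the same answer; Vassilakis's complaint is that real deployments end up doing some of each, with one product for the moving and another for the querying.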
One of the attributes of most advanced analytics applications is that they assume the organization or person invoking them actually knows which questions are worth asking. Most organizations, however, are still trying to figure out the questions they should be asking.
With that goal in mind, BeyondCore has made available a production release of BeyondCore V, an analytics application that is designed to discover patterns in data in minutes using new data visualization tools.
BeyondCore CEO Arijit Sengupta says while analytics applications can be helpful, most organizations need help framing the question to ask. The end result is major investments in hiring everybody from SQL programmers to data scientists.
One of the main problems in introducing scale-out architecture to legacy data environments is the sheer number of incompatible formats, platforms and vendor solutions that have infiltrated the data center over the years.
The drive to remove these silos and federate the data environment under either a single proprietary solution or the myriad open platforms currently available is well underway. But in many cases the transition is happening too slowly, given that the need to scale out is immediate as enterprises attempt to cope with issues like Big Data and the Internet of Things.
This is why many researchers are looking to move the concept of virtualization to an entirely new level. Rather than focus on infrastructure like servers, storage and networking, virtualization on the data plane introduces a level of abstraction that allows data and applications to sit on any hardware, and thus interact with other data sets across the enterprise and into the cloud. And as tech author Anne Buff points out, it would also optimize hardware utilization and reduce system complexity, as well as offer more centralized security and control.
(TNS) — Signs were already brewing for last week’s devastating lake effect snowfall as early as Nov. 15, when the National Weather Service issued its first watches for a couple of feet of snow — and maybe more.
Over the following two days leading up to the storm, the watches were upgraded to warnings as weather service forecasts called for “near blizzard conditions” across Erie County with “around two feet in the most persistent bands” that could leave “some roads ... nearly impassable.”
The weather service also accurately pegged accumulating snows at almost unheard of “rates of 3 to 5 inches per hour in the most intense portion of the band.”
But, according to state and Erie County officials, not only did the information come too late for them to adequately prepare, the national forecasting service also failed to project the ferocity and exact locations of the tandem lake-effect storms that dumped 7 feet or more of snow in just 72 hours.
(TNS) — Officials are planning the first major rollout of California's earthquake early warning system next year, providing access to some schools, fire stations and more private companies.
The ambitious plan highlights the progress scientists have made in building out the system, which can give as much as a minute of warning before a major earthquake is felt in metropolitan areas.
Until now, only academics, select government agencies and a few private firms have received the alerts. But officials said they are building a new, robust central processing system and now have enough ground sensors in the Los Angeles and San Francisco areas to widen access. They stressed the system is far from perfected but said expanded access will help determine how it works and identify problems.
Improved model, new surge forecast products and research projects debuted
The Atlantic hurricane season will officially end November 30, and will be remembered as a relatively quiet season as was predicted. Still, the season afforded NOAA scientists with opportunities to produce new forecast products, showcase successful modeling advancements, and conduct research to benefit future forecasts.
“Fortunately, much of the U.S. coastline was spared this year with only one landfalling hurricane along the East Coast. Nevertheless, we know that’s not always going to be the case,” said Louis Uccellini, Ph.D., director of NOAA’s National Weather Service. “The ‘off season’ between now and the start of next year’s hurricane season is the best time for communities to refine their response plans and for businesses and individuals to make sure they’re prepared for any potential storm.”
How the Atlantic Basin seasonal outlooks from NOAA’s Climate Prediction Center verified:
Named storms (top winds of 39 mph or higher)
Hurricanes (top winds of 74 mph or higher)
Major hurricanes (Category 3, 4, 5; winds of at least 111 mph)
“A combination of atmospheric conditions acted to suppress the Atlantic hurricane season, including very strong vertical wind shear, combined with increased atmospheric stability, stronger sinking motion and drier air across the tropical Atlantic,” said Gerry Bell, Ph.D., lead hurricane forecaster at NOAA’s Climate Prediction Center. “Also, the West African monsoon was near- to below average, making it more difficult for African easterly waves to develop.”
Meanwhile, the eastern North Pacific hurricane season met or exceeded expectations with 20 named storms – the busiest since 1992. Of those, 14 became hurricanes and eight were major hurricanes. NOAA’s seasonal hurricane outlook called for 14 to 20 named storms, including seven to 11 hurricanes, of which three to six were expected to become major hurricanes. Two hurricanes (Odile and Simon) brought much-needed moisture to parts of the southwestern U.S., with very heavy rain from Simon causing flooding in some areas.
“Conditions that favored an above-normal eastern Pacific hurricane season included weak vertical wind shear, exceptionally moist and unstable air, and a strong ridge of high pressure in the upper atmosphere that helped to keep storms in a conducive environment for extended periods,” added Bell.
In the central North Pacific hurricane basin, there were five named storms (four hurricanes, including a major hurricane, and one tropical storm). NOAA’s seasonal hurricane outlook called for four to seven tropical cyclones to affect the central Pacific this season. The most notable storm was major Hurricane Iselle, which hit the Big Island of Hawaii in early August as a tropical storm, and was the first tropical cyclone to make landfall in the main Hawaiian Islands since Hurricane Iniki in 1992. Hurricane Ana was also notable in that it was the longest-lived tropical cyclone (13 days) of the season and the longest-lived central Pacific storm of the satellite era.
New & improved products this year
As part of its efforts to provide better products and services, NOAA's National Weather Service introduced many new and experimental products that are already paying off.
The upgrade of the Hurricane Weather Research and Forecasting (HWRF) model in June with increased vertical resolution and improved physics produced excellent forecasts for Hurricane Arthur’s landfall in the Outer Banks of North Carolina, and provided outstanding track forecasts in the Atlantic basin through the season. The model, developed by NOAA researchers, is also providing guidance on tropical cyclones in other basins globally, including the Western Pacific and North Indian Ocean basins, benefiting the Joint Typhoon Warning Center and several international operational forecast agencies. The Global Forecast System (GFS) model has also been a valuable tool over the last couple of hurricane seasons, providing excellent guidance in track forecasts out to 120 hours.
In 2014, NOAA's National Hurricane Center introduced an experimental five-day Graphical Tropical Weather Outlook to accompany its text product for both the Atlantic and eastern North Pacific basins. The new graphics indicate the likelihood of development and the potential formation areas of new tropical cyclones during the next five days. NHC also introduced an experimental Potential Storm Surge Flooding Map for those areas along the Gulf and Atlantic coasts of the United States at risk of storm surge from an approaching tropical cyclone. First used on July 1 as a strengthening Tropical Storm Arthur targeted the North Carolina coastline, the map highlights those geographical areas where inundation from storm surge could occur and the height above ground that the water could reach.
Beginning with the 2015 hurricane season, NHC plans to offer a real-time experimental storm surge watch/warning graphic for areas along the Gulf and Atlantic coasts of the United States where there is a danger of life-threatening storm surge inundation from an approaching tropical cyclone.
Fostering further improvements
While this year’s hurricane season was fairly quiet, NOAA scientists used new tools that have the potential to improve hurricane track and intensity forecasts. Several of these tools resulted from research projects supported by the Disaster Relief Appropriations Act of 2013, which was passed by Congress in the wake of Hurricane Sandy.
Among the highlights were both manned and unmanned aircraft missions in Atlantic hurricanes to collect data and evaluate forecast models. NOAA and NASA’s missions involving the Global Hawk, an unmanned aircraft that flies at higher altitudes and for longer periods of time than manned aircraft, allowed scientists to sample weather information off the west coast of Africa where hurricanes form, and also to investigate Hurricane Edouard’s inner core with eight crossings over the hurricane’s eye. NOAA launched a three-year project to assess the impact of data collected by the Global Hawk on forecast models and to design sampling strategies to improve model forecasts of hurricane track and intensity.
While the Global Hawk flew high above hurricanes, NOAA used the much smaller Coyote, an unmanned aircraft system released from NOAA’s hurricane hunter manned aircraft, to collect wind, temperature and other weather data in hurricane force winds during Edouard. The Coyote flew into areas of the storm that would be too dangerous for manned aircraft, sampling weather in and around the eyewall at very low altitudes. In addition, NOAA’s hurricane hunters gathered data in Hurricanes Arthur, Bertha and Cristobal, providing information to improve forecasts and to test, refine and improve forecast models. The missions were directed by research meteorologists from NOAA’s Hurricane Research Division, a part of the Atlantic Oceanographic and Meteorological Laboratory in Miami, and the NOAA Aircraft Operations Center in Tampa.
In addition, increased research and operational computing capacity planned in 2015 will facilitate future model upgrades to the GFS and HWRF to include better model physics and higher resolution predictions. These upgraded models will provide improved guidance to forecasters leading to better hurricane track and intensity predictions.
The 2015 hurricane season begins June 1 for the Atlantic Basin and central North Pacific, and on May 15 for the eastern North Pacific. NOAA will issue seasonal outlooks for all three basins in May. Learn how to prepare at hurricanes.gov/prepare and FEMA’s Ready.gov.
NOAA's mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Twitter, Facebook, Instagram and our other social media channels.
(TNS) — Police officers have become as visible on college campuses as students and professors, as schools respond to the early Thursday morning shooting at Florida State University.
The incident, in which FSU alumnus Myron May injured three students in a campus library before being killed by police, has alarmed students and employees at colleges throughout the state. Schools are now reviewing their own security procedures.
"Incidents like this remind us we can never be too cautious," said Alexander Casas, police chief at Florida International University, west of Miami.
Campus safety has been a high priority for most Florida colleges and universities since the Virginia Tech massacre in 2007. Many schools have added sirens with speakers as well as text, email and social media alert systems. They've also increased the number of counselors to deal with mental health issues.
With the biggest shopping events of the season under way, retailers face tremendous amounts of both risk and reward as sales and door-busters draw in eager consumers all week. In 2013, Thanksgiving deals brought in 92.1 million shoppers to spend over $50 billion in a single weekend, the National Retail Federation reports.
The National Retail Federation issued crowd management guidelines for retailers and mall management officials to use when planning special events, including Black Friday, product launches, celebrity appearances and promotional sales. General considerations to plan for and curtail any crowd control issues include:
By Rose Jacobs
As we move into the holiday season, the idea of travel begins to cause many of us sleepless nights: We have visions of highway traffic jams, long airport layovers, even longer flights and–that nightmare of nightmares—the winter storm that leaves our best-laid plans in total disarray.
There is some comfort to be had, however, and it comes in the form of technology. Gadgets and apps can brighten the best of trips–and make the worst bearable. I’ve learned this the hard way, through countless hours on the road for work and in the skies to see my family overseas. And from that trial by fire, I can offer 10 tried and tested technology tools that are as necessary as a passport and as comforting as a first-class lounge.
Following its success in 2014, the Business Continuity Institute will again be hosting the BCI Middle East Conference in 2015, this time in Doha, Qatar on the 11th and 12th May.
Over the two days, the conference will focus on the latest thinking and best practice in continuity and resilience. There will be plenary sessions from leading local and international experts with opportunities to break out into streams and examine the key themes in more depth – either from a business continuity or enterprise risk perspective.
Chris Green FBCI, Head of the Business Continuity Programme at Qatar Airways, commented: "There are many benefits to being active in the business continuity industry. There is a huge advantage in meeting and in being connected to other people who can act as a valuable resource for information and ideas. The BCI Middle East Conference will give you an opportunity to network with business continuity professionals from different countries and industry sectors. All of them work in the region so you can share with them your common areas of interest, thoughts and ideas. In return, you are sure to gain inspiration from the experience and knowledge available from the speakers and delegates. Alongside this great-value conference will be an exhibition of products and services from some innovative and influential companies in the Middle East.”
Thomas Keegan FBCI, Middle East Enterprise Resilience Leader at PwC, added: “No matter how experienced you are in business continuity or the wider field of organisational resilience, there is always something new to learn. The educational aspect of the BCI Middle East Conference will expose you to new ways of conducting your business – informing you of the latest research, keeping you up-to-date on best practice and demonstrating the latest tools and techniques that are available to you, all of which are designed to assist you in your role.”
As well as the main conference, delegates will be able to register for site visits to high profile organisations and discover how they put business continuity theory into practice. For those who wish to focus on developing their practical skills, the BCI will be running a selection of training courses – perhaps an ideal opportunity to get certified in business continuity. Coinciding with the conference will be the BCI’s Middle East Awards where individuals and organisations from across the region will have their outstanding contribution to the industry recognised in front of their colleagues.
The Climate Resilience Toolkit provides resources and a framework for understanding and addressing the climate issues that impact people and their communities.
After covering tips for small to midsize businesses (SMBs) to minimize data loss, it makes sense to also delve further into disaster recovery. For those businesses that have a mix of infrastructures, including those that store information both onsite and in the cloud, it can be extremely complex to ensure that the data remains available after a disaster.
Unitrends, provider of industry-leading backup, archiving and disaster recovery solutions, offers one simple solution that many SMBs find attractive: Disaster Recovery as a Service (DRaaS). According to Subo Guha, vice president of product management for Unitrends, the cloud is helping to make disaster recovery options more attainable for SMBs. In an email interview, Guha explained why disaster recovery is integral for even small businesses:
Disaster recovery (DR) is crucial for any size business. However, due to limited resources, an SMB’s ability to quickly recover (from outages, disaster and/or catastrophic failure) can be the sole factor in their survival or failure (according to a recent IDC study, 80% of SMB respondents reported that network downtime costs their organizations at least $20,000 per hour). Although many SMBs acknowledge the importance of protecting their data, DR continues to be a major challenge; IT environments are more complex than ever, as critical data resides across virtual, physical and cloud infrastructures and IT staffing and budgets are constrained. And, SMBs are overwhelmed by the time, money and personnel required to build physical failover environments for disaster recovery purposes. SMBs simply can’t afford large scale disaster recovery policies and facilities; therefore, they are continuously striving for more economical means to manage DR.
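The arithmetic behind that IDC figure is worth making concrete. The sketch below uses the survey's $20,000-per-hour floor as its default; the outage lengths and frequencies plugged in are illustrative assumptions, not data from the article.

```python
def downtime_cost(hours_down, cost_per_hour=20_000):
    """Estimated loss for a single outage at a flat hourly rate."""
    return hours_down * cost_per_hour

def annual_exposure(outages_per_year, avg_hours_per_outage,
                    cost_per_hour=20_000):
    """Yearly exposure if outages recur at a given rate."""
    return outages_per_year * downtime_cost(avg_hours_per_outage,
                                            cost_per_hour)

# A single day-long outage at the survey's floor rate.
one_day_outage = downtime_cost(24)   # 480,000 dollars

# Three four-hour outages per year, an illustrative assumption.
yearly = annual_exposure(3, 4)       # 240,000 dollars
```

Numbers like these are why even a modest DRaaS subscription can pencil out for an SMB that could never justify building a physical failover site.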
Continuity Central's 2014 Business Continuity Paper of the Year competition is open to entries and to mark this we are publishing the winning entry from the 2013 competition. This was first published in the Q1 2013 issue of the Business Continuity and Resiliency Journal.
The paper, entitled 'A fork in the road' was submitted by Ken Simpson. Although it was written in 2013 the issues that it raises are still very pertinent to the position the business continuity profession currently finds itself in.
In 2013 we find ourselves at a collective fork in the road, once again considering the path we should take to the future of the discipline. The current choice is between a wider-focused discipline called business continuity, and the 'management systems' highway known as business continuity management.
Moving forward may require embracing multiple alternative paths and destinations. To grow towards a wider focus we need to become a learning discipline. A wider focus on learning means we reflect on what we need to learn and how we facilitate that learning as a holistic discipline.
This paper discusses three ideas that challenge business continuity (management) professionals to think differently about learning, what it means to learn and ways that we can shape future practice.
Read the paper (PDF)
When Hurricane Sandy came to town, it blew through a slew of cracks in New York’s building infrastructure. Millions of people sat in the dark for days, many unable to wash their hands or flush their toilets. Backup generators, which sat in flooding basements, broke before they had a chance to help. Sewer systems overflowed.
In the months that followed, in an effort to protect its residents from future bouts of city-wide paralysis, the city of New York asked for help safeguarding its buildings from future storms. It called on Russell Unger, the executive director of a nonprofit called Urban Green Council, to create a task force of building experts, property owners and city officials some 200 strong. After six months and more than 5,500 hours of donated time, the task force released a report recommending 33 changes that would make buildings safer. That was in June of 2013. So far, the city has already passed and implemented 16 of those recommendations.
In case you haven’t seen, Uber, the controversial (for taxi companies anyway) new contract ride service, is in trouble. Seems they have a way of knowing where everyone who uses their service goes. It’s available to those inside the company. It’s called “God View.”
Obviously there is considerable power in having such a God View. As Lord Acton reminded us, power tends to corrupt. All it would take is one person exercising poor judgment to use it for the wrong reasons. BuzzFeed broke a story about Uber’s New York executive using the God View to track the movements of a reporter and others. Another executive suggested that Uber might use the tracking information to smear reporters who wrote critically of the company. He, of course, apologized and admitted that saying so was “wrong.”
The shooting at Sandy Hook Elementary School nearly two years ago shook Newtown, Conn., and has had far-flung reverberations. Tech companies have continued the push for gun control, the Center for Health Care Services launched a crisis intervention app that provides resources for early intervention and treatment of mental illness, and an app launched in January 2014 aims to give law enforcement a 60-second head start on school shootings.
Some jurisdictions even installed mobile panic alarms in schools. Take Ohio, where the tragedy pushed state government to expand its wireless emergency communications by offering radios for schools to communicate directly with local law enforcement during a life-threatening situation.
The idea of the school radios with emergency buttons -- like "fire alarms," but for police -- came up the day of the Sandy Hook tragedy at a meeting addressing the upgrade of Ohio's Multi-Agency Radio Communication System, or MARCS.
One of the most common weapons in the cybercriminal’s arsenal is the DDoS attack. According to the network security experts at Digital Attack Map, “A Distributed Denial of Service (DDoS) attack is an attempt to make an online service unavailable by overwhelming it with traffic from multiple sources. They target a wide variety of important resources, from banks to news websites, and present a major challenge to making sure people can publish and access important information.”
While many have heard of these attacks or suffered from the outages they cause, most people do not understand the true business risks these incidents pose. To get a better picture of the threat, Internet security firm Incapsula surveyed 270 firms across the U.S. and Canada about their experiences with DDoS attacks. They found that 49% of DDoS attacks last between 6 and 24 hours. “This means that, with an estimated cost of $40,000 per hour, the average DDoS cost can be assessed at about $500,000—with some running significantly higher,” the company reported. “Costs are not limited to the IT group; they also have a large impact on units such as security and risk management, customer service, and sales.”
Check out the infographic below for more of Incapsula’s findings on the actual costs of DDoS attacks:
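Incapsula’s figures can be sanity-checked with a few lines of arithmetic. This sketch uses only the numbers quoted above; the implied-duration step at the end is our own back-of-the-envelope calculation, not Incapsula’s:

```python
# Sanity check of the quoted DDoS cost figures.
COST_PER_HOUR = 40_000  # USD, per Incapsula's estimate

def attack_cost(duration_hours):
    """Direct cost of a DDoS outage at the quoted hourly rate."""
    return duration_hours * COST_PER_HOUR

# The 6-24 hour window that 49% of surveyed attacks fall into:
print(f"6h attack:  ${attack_cost(6):,}")    # $240,000
print(f"24h attack: ${attack_cost(24):,}")   # $960,000

# The ~$500,000 average cost cited in the report implies a typical
# attack duration of about 12.5 hours at that rate:
print(f"Implied average duration: {500_000 / COST_PER_HOUR} h")  # 12.5 h
```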
As the U.S. begins to feel winter’s icy grasp, a number of cities are turning to GPS data and the Internet of Things to help keep the roads clear during snowstorms.
Boston, Minneapolis and Buffalo, N.Y. (parts of which received 60 inches of snow on Tuesday, according to AccuWeather), are among the many municipalities using machine-to-machine communication and engagement tools to modernize snow removal and other inclement-weather requests from citizens. With sensors attached to snowplows and interactive mapping technology, residents stay better informed about travel conditions, while public works departments are seeing gains in efficiency.
Buffalo’s Division of Citizen Services teamed up with the city’s public works department to speed the process of addressing service calls for salting and snow issues. The plowing and salting strategy hasn’t changed – plows still clear the main roads, followed by secondary and side streets. But the GPS sensors now attached to the city’s snowplow fleet have made the entire operation a lot more transparent.
SACRAMENTO, Calif. – When earthquakes occur, some of the damage happens in areas of our homes and businesses that may be nearly impossible to spot without close attention. Residents and business owners in Napa and Solano Counties continue to discover damage from the South Napa Earthquake.
The California Governor’s Office of Emergency Services (Cal OES) and the Federal Emergency Management Agency (FEMA) urge people in those counties to take time to check for any signs of potential damage and register for assistance as soon as possible.
"Earthquake damage sometimes goes unnoticed," said Federal Coordinating Officer Steve DeBlasio. "Earthquakes are different from other disasters, because damages can mimic regular wear and tear or be so subtle that they are hard to find at first. A new crack or stuck door, for example, could be the sign of a serious problem."
Homeowners and renters in Napa and Solano Counties who had damage from the South Napa Earthquake have until Dec. 29, 2014 to apply for disaster assistance from FEMA. Disaster assistance includes grants to help pay for temporary housing, essential home repairs and other serious disaster-related needs not covered by insurance or other sources.
“Every resident and business should take the necessary time to do a thorough double check for damages of their property,” said Cal OES Director Mark Ghilarducci. “It’s important for homeowners and businesses to take advantage of available federal assistance and register as soon as possible.”
Cal OES and FEMA offer the following questions and tips to help everyone spot potential damage:
• Has the house shifted off its foundation? Has it fallen away from the foundation in any place?
• Is the structure noticeably leaning? When looked at from a distance, does it look tilted?
• Do you see severe cracks or openings between the structure and outdoor steps or porches?
• Do you experience seriously increased vibrations from passing trucks and buses?
• Do you see severe cracks in external walls or foundation?
• Are there any breaks in fence lines or other structures that might indicate nearby damage?
• Did you check for damage to ceilings, partitions, light fixtures, the roof, fuel tanks and other attachments to the main frame of the structure?
• Are there cracks between the chimney and the exterior wall or the roof?
• Are there cracks in the liner?
• Did you find unexplained debris in the fireplace?
• Are power lines to your house noticeably sagging?
• Is your hot water heater leaning or tilted?
• Are all the water connections secure including those for pipes, toilets, faucets?
• Are any doors and windows more difficult to open or close?
• Is the roof leaking? Is there water damage to the ceiling?
• Has the furnace shifted in any way? Are ducts and exhaust pipes connected and undamaged?
• Do you feel unexplained draftiness? Are any cracks in the walls, poorly aligned window frames or loosened exterior sidings letting in breezes?
• Has the floor separated from walls or stairwells anywhere inside the house?
• Are there cracks between walls and built-in fixtures such as lights, cupboards or bookcases?
• Does the floor feel "bouncy" or "soggy" when you walk on it?
• Have you checked crawl spaces, stairwells, basements, attics and other exposed areas for signs of damage such as exposed or cracked beams, roof leaks and foundation cracks?
Low-interest disaster loans are also available from the U.S. Small Business Administration (SBA) for homeowners, renters, businesses of all sizes, and private non-profit organizations that had damage or loss as a result of the South Napa Earthquake. Disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations.
To apply for disaster assistance, register online at DisasterAssistance.gov or via smartphone or tablet at m.fema.gov. Applicants may also call FEMA at 800-621-3362 or (TTY) 800-462-7585. People who use 711-Relay or VRS may call 800-621-3362.
FEMA must verify damages for every application. FEMA inspectors have completed more than 2,600 inspections in Napa and Solano Counties. FEMA inspectors display photo identification badges.
Damage inspections by FEMA are free and generally take 30 to 45 minutes, and they are conducted by FEMA contract inspectors who have construction or appraisal expertise and have received disaster-specific training. Inspectors document the damage by checking the building structure and its systems, major appliances and any damaged septic systems and wells.
If applicants discover additional damage to their property after the inspection takes place, they can request another one by calling FEMA at 800-621-FEMA (3362) or (TTY) 800-462-7585.
Additional information on California disaster recovery is available at www.fema.gov/disaster/4193.
Guerrilla warfare, guerrilla marketing, guerrilla negotiating – if all these things can benefit from a ‘guerrilla’ point of view, how about business continuity management? The basic concept is to get bigger results from a smaller amount of resources, possibly supplemented by some lateral thinking. Guerrilla soldiers don’t have the big guns and tanks of their adversaries. Guerrilla marketers don’t have the big television and print budgets of their competitors. And guerrilla negotiators learn to think around business deals to turn losing propositions into winning ones. Guerrilla business continuity management can draw on each of these areas to help BCM move forward.
Data loss in any shape or form can prove disastrous for business—especially small to midsize businesses (SMBs). Depending on the occurrence, data recovery can cost from $100 for a commercial data recovery product to thousands for hard drive crashes or catastrophic events such as flood, fire or tornado.
According to David Zimmerman, president of LC Technology International, a global leader in file and data recovery, a big mistake that SMBs make in regard to data protection is that they don’t create and test a formal plan because they don’t expect a big data loss to happen. In an email interview, Zimmerman explained that it’s important for all businesses to at least plan for a disaster:
You have to expect bad things to happen, prepare for the worst. Having a disaster recovery plan in place is mandatory for successfully restoring backups and recovering lost data. Off-site storage that is readily accessible is also essential to help protect data and get the business running after a disaster.
Not all organizations are moving to the external cloud. Some data and applications are going from public to private clouds, said Seth Robinson, senior director of technology analysis at CompTIA.
As I wrote yesterday, CompTIA released its Fifth Annual Trends in the Cloud report, which queried 400 businesses and 400 individuals on cloud adoption. I’ve covered the integration aspects, but here’s something else worth noting: While cloud adoption is becoming more mainstream, at least some adopters are opting to move data to internal clouds.
“It's not that everything is funneling into major cloud providers,” Robinson said. “Companies have different requirements for different pieces of their architecture, and they are finding where those pieces fit best between all these models that are out there. Companies are going to keep moving that way.”
It’s funny how technology always progresses to a higher state even before the current state has made its way to widespread use. First blade servers, then virtualization and then the cloud all made their way into the collective IT consciousness while most enterprise managers were still getting their feet wet with the current “state of the art” technology.
These days, the buzz is all about the software-defined data center (SDDC), which is an amalgam of nearly everything that has happened to IT over the past decade cobbled together into one glorious, trouble-free computing environment. And if you believe that last part, I have a bridge to sell you.
What is clear is that by virtualizing the three pillars of data infrastructure – compute, storage and networking – entire data environments could potentially be created and dismissed at whim. I say “potentially” because the technology to do this simply does not exist yet, at least not in the way that people expect: quickly, easily and with little or no training.
(TNS) — As nationwide alarm over Ebola fades, hospital officials and public health professionals are trying to ensure that lessons learned don’t disappear along with it.
After a Liberian man carrying the disease died last month in a hospital in Dallas and two of his nurses became infected, facilities stepped up training and planning for Ebola cases.
“The mantra is, ‘Don’t be the next Dallas,’ ” said Dr. Andrew Pavia, chief of pediatric infectious diseases for the University of Utah health system.
But as the situation abates, so does the urgency to act. With a quarter of American hospitals losing money in day-to-day operations, according to the American Hospital Association, expensive and time-consuming training for unknown future outbreaks is not always a top priority, experts say.
(TNS) — On the coldest morning since last winter, officials with numerous state agencies gathered Tuesday morning to practice ways to avoid a repeat of last winter’s memorable “Snowmageddon.”
On that cold January day, heavy snow moved into metro Atlanta just as businesses and government agencies sent workers home, and thousands of motorists were stranded overnight — and well into the next day — on jammed, ice- and snow-laden streets and interstates.
Tuesday, the Georgia Emergency Management Agency opened its Emergency Operations Center for a coordination exercise that involved GEMA, the Georgia Department of Transportation, the Georgia Department of Public Safety, the Georgia Department of Natural Resources, the Georgia Forestry Commission and the Georgia National Guard.
The Insurance Institute for Business & Home Safety’s (IBHS) free business continuity planning toolkit, OFB-EZ (Open for Business-EZ), is now available as a mobile app.
EMC Insurance Companies, an IBHS member company, partnered with IBHS to develop the new app, OFB-EZ Mobile, which guides users through an easy process to create a recovery plan that will help even the smallest business recover and re-open quickly after a disaster.
OFB-EZ Mobile, available for Android devices in the Google Play Store and for Apple devices in the App Store, includes several helpful planning tools, such as evaluation checklists to help business users understand their risks, and forms for users to enter and store important contact information for employees, key customers, suppliers, and vendors.
OFB-EZ is also available at no charge in Adobe Acrobat (pdf) and Microsoft Word formats on the IBHS website at: http://www.disastersafety.org/open-for-business.
A recent poll by the Security Executive Council set out to discover which business continuity standards are being used when organizations are developing their business continuity programs.
The results show that ISO 22301 is used most often, with 34 percent of poll respondents benchmarking against it. Surprisingly, however, 30 percent stated that they do not benchmark their business continuity program against any standard.
The other standards in use are:
- NFPA 1600: 12 percent
- ISO/IEC 27001: 8 percent
- BS 25999: 6 percent
- ISO/PAS 22399: 4 percent
- Other: 6 percent
The ‘Other’ category included write-in votes for other business continuity related standards, the most popular being CSA Z1600, HB 221/292, and NIST 800-53.
Blue Coat Systems has published research results that show that the growing use of encryption to address privacy concerns is creating perfect conditions for cyber criminals to hide malware inside encrypted transactions, and even reducing the level of sophistication required for malware to avoid detection.
The use of encryption across a wide variety of websites — both business and consumer — is increasing as concerns around personal privacy grow. In fact, eight of the top 10 global websites as ranked by Alexa deploy SSL encryption technology throughout all or portions of their sites. For example, technology goliaths Google, Amazon and Facebook have switched to an ‘always on HTTPS’ model to secure all data in transit using SSL encryption.
Business-critical applications, such as file storage, search, cloud-based business software and social media, have long used encryption to protect data in transit. However, the lack of visibility into SSL traffic represents a potential vulnerability for enterprises, since benign and hostile uses of SSL are indistinguishable to many security devices. As a result, encryption enables threats to bypass network security and allows sensitive employee or corporate data to leak from anywhere inside the enterprise.
If your employees travel on behalf of your business – whether in the U.S. or abroad – you are legally responsible for their health and safety. In fact, Duty of Care legislation has become increasingly important in the corporate travel world. Companies that fail to safeguard their employees not only risk the health and safety of their people, but also can face legal, financial and reputational consequences.
Someone in your company must be responsible for ensuring the safety and health of traveling employees (usually, this falls to an administrator from the human resources or risk management department). This should include implementing a well-balanced, company-wide travel risk management plan.
Throughout its history, the business continuity industry has maintained a steady focus on preparedness – understanding the organization’s most critical business functions (both technological and operational) and developing plans to respond to any disruption of those critical functions. That makes sense. How that can be accomplished has been refined and tweaked over time through various ‘standards’ and ‘best practices’. Those activities answer some basic questions:
- What do we need to protect?
- How will we prepare to respond to a disruption of those critical functions?
What has always been omitted in that analysis has been the third major question:
- How will we manage that response?
If you ask 20 BCM practitioners that question, you will get a wide variety of answers.
Integration permeates all four stages of cloud adoption, from experimenters to companies that are “brutally transforming” their business and workflows through cloud, a recent report by CompTIA shows. In other words, it’s not so much a barrier to cloud adoption as it is a “hidden challenge,” according to Seth Robinson, senior director of Technology Analysis for the firm.
“Integration pops up in every stage; it's the one that runs through everything,” said Robinson via a call this week. “Even as, in general, the early stages see more technical challenges and the leaders see more behavioral or culture challenge, that challenge of integration — which is more of a technical challenge — does run through every stage.
“And that really goes back to what was known for a long time, that integration tends to be the lion's share of the cost or effort in an IT project."
SAN ANTONIO — Snohomish County, Wash., Emergency Management Director John Pennington said he hoped the audience at a breakout session during the International Association of Emergency Managers conference in San Antonio on Tuesday, Nov. 18, would never have to go through what he and his colleagues experienced in March, when part of a hill collapsed, sending mud and debris across the North Fork of the Stillaguamish River and taking a whole neighborhood with it. The slide covered a square mile and buried some of the 43 dead as much as 75 feet deep.
The slide, and the response and recovery missions that followed, were what Pennington calls the “new normal.” That consists of a new way of doing business, considering climate change and a trend toward more natural disasters that tax communities to the hilt — and catch them off guard, as Pennington said the slide did with him and his colleagues.
The enterprise seems to be developing a love/hate relationship with the public cloud. On the one hand, the prospect of virtually limitless resources seems ready to take on any processing or storage load that comes along. On the other, issues of security, availability and portability threaten to inhibit productivity unless sophisticated new management layers are introduced.
Nevertheless, enterprise deployment of public cloud resources is on the rise, if the data from the provider community is to be believed. Gigaom Research, for one, estimates that public cloud infrastructure is nearly deployed or already in place at more than half of large enterprises, most of which are looking to provide the underpinnings of broad scale-out architectures to support Big Data analytics. This trend cuts across a wide swath of industry verticals, including manufacturing, tech firms, finance and ecommerce, with specific applications ranging from real-time workload and batch processing to app development and social media.
Clearly, this is good news for the large public cloud providers, and at the moment none is larger than Amazon. At its AWS re:Invent show last week, the company claimed no fewer than one million active customers, who are driving revenue growth of about 40 percent per year. Gartner estimates that AWS offers about five times the capacity of the next 14 cloud competitors combined.
It is becoming clear that the prevailing piecemeal approach to security is no longer sufficient to thwart increasingly sophisticated attacks. Gaps in coverage provide possible entry points, blended attacks in several sectors can mask the actual threat, and sophisticated attacks involving multiple targets and approaches can find their way around many current defenses.
Interest is growing in unified threat management (UTM) for small to medium-sized businesses, which centralizes all network intrusion response in a single device, and in next-generation firewalls (NGFs), which defend against most of the same things but are aimed at the enterprise. Although some currently define these as separate product areas, major vendors are now providing this form of protection as a continuum. Centralizing network perimeter protection in this way has innumerable benefits, making it possible to apply best practices and meet regulatory requirements. Centralized devices can provide a variety of deep-screening techniques through virtualized systems and across changing areas of concern. In addition, they can be quickly updated, reducing the effectiveness of zero-day exploits, and dedicated hardware permits the use of ASICs (application-specific integrated circuits) and FPGAs (field-programmable gate arrays) to improve throughput.
Cloud adoption has historically been hampered by security concerns; all of Forrester's research shows this to be the number one impediment to adoption. Forrester just finished evaluating four cloud platform providers on the depth and breadth of their security controls. This Forrester Wave™ evaluates four of the leading public clouds along 15 key security criteria. The participating cloud service providers were AWS, CenturyLink Cloud, IBM SoftLayer, and Microsoft Azure. This report details our findings about how well each vendor fulfills our criteria and where they stand in relation to each other, to help S&R professionals select the right public cloud partner with the best options for security controls and overall security capabilities.
The results can be found here: The Forrester Wave™: Public Cloud Platform Service Providers' Security, Q4 2014.
Gigaom Research and CipherCloud have announced the results of their new ‘Shadow IT: Data Protection and Cloud Security’ study. The research examined the extent of enterprises' cloud adoption, the challenges and the risks.
The results of the study indicate that the cloud market will grow 126.5 percent this year with the majority of the growth in two areas: software-as-a-service (SaaS) growing at 199 percent and infrastructure-as-a-service (IaaS) at 126 percent.
Disaster recovery as a service (DRaaS) provides organizations with a variety of cost-efficient ways to recover and replicate critical servers and data center infrastructure to the cloud environment in case of any disaster resulting in disruption of services.
DRaaS offers business continuity across a range of organizations and their applications, by ensuring availability of IT infrastructure in an event of disaster.
A new report from Transparency Market Research looks at the DRaaS market, providing industry analysis, global trends and forecasts for 2014 – 2020.
Following on from its success at the Business Excellence Awards, the Business Continuity Institute has now been named Training Resource of the Year (Business Continuity) at the prestigious ACQ Global Awards.
Deborah Higgins, Head of Learning and Development at the BCI, said: “The BCI and our international network of Training Partners and Instructors have invested a lot into ensuring that business continuity practitioners are offered the highest standard of teaching. To be named Training Resource of the Year (Business Continuity) is great recognition of all this effort and something we are extremely proud of. It is important that we do not rest on our laurels however, but use this award as inspiration to develop our services further and continue to provide the best training to those working in the industry all across the world.”
“Exceptional individuals, teams and firms across the marketplace represent the very best in their field from around the world and truly deserve the accolade of being an ACQ Award winner,” said Jake Robson, Editor in Chief of ACQ. “All category winners are, in effect, a brand. In one sense, perhaps the most important sense, a brand is a promise. You know what you’re going to get with a well-branded product or service. It takes a lot of time, money and very hard work to build and maintain great brands, brands that can speak volumes in just a few syllables. It’s shorthand for what you are.
“All category winners in the ACQ Global Awards 2014 represent this ethos and this year, our dedicated subscribers have once again recognized the genuine leaders in the market. The quality of this year's entries is astonishingly high and a testament to the fact that the profession continues to innovate and deliver high-quality services even in economically challenging times.”
Since 2008, the ACQ Global Awards have been celebrating achievement, innovation and brilliance in their annual awards. Every year, they seek the assistance of their readers in recognizing industry leaders, eminent individuals, exemplary teams and distinguished firms, which they believe represent the benchmark of achievement and best practice in a variety of fields – and every year, they turn to their readers to help as they strive to recognize an ever-widening spectrum of services, markets, industries and organisations that the sector serves.
SACRAMENTO, Calif. – State and federal disaster assistance now totals $12.1 million for those affected by the South Napa Earthquake. The current total includes $5.6 million in grants from the Federal Emergency Management Agency (FEMA) and the California Governor’s Office of Emergency Services (Cal OES), as well as $6.5 million in low-interest disaster loans from the U.S. Small Business Administration (SBA).
A recap of the disaster recovery operation by the numbers, as of Nov. 16:
Households Registered: 3,142
Total Grants Approved: $5,670,654
• Housing Assistance Grants: $5,397,952
• Other Needs Assistance Grants: $272,702
SBA Loans Approved: 145
• Home Loans: 142
• Business Loans: 3
Total SBA Loans: $6,525,500
Disaster Recovery Centers:
• Napa Earthquake Local Assistance Center - 301 First Street, Napa, CA 94559
• Solano County Disaster Recovery Center - 1155 Capitol Street, Vallejo, CA 94590
Center Visitors: 1,444
Hours: 9 a.m. to 6 p.m. Mon.-Fri., 9 a.m. to 4 p.m. Sat., until further notice. Closed Nov. 27-28.
FEMA Inspections Completed: 2,592
Homeowners and renters in Napa and Solano Counties who had damage from the South Napa Earthquake have until Dec. 29, 2014 to apply for disaster assistance from FEMA. Disaster assistance includes grants to help pay for temporary housing, essential home repairs and other serious disaster-related needs not covered by insurance or other sources.
Low-interest disaster loans are also available from the SBA for homeowners, renters, businesses of all sizes, and private non-profit organizations. Disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations.
Disaster recovery officials urge those who registered with FEMA and received an SBA loan application to complete and return the application. Doing so will ensure the applicants are considered for the full range of disaster assistance that may be available to them.
SBA serves as the federal government’s primary source of money for the long-term rebuilding of disaster-damaged private property. SBA helps fund repair or rebuilding efforts and cover the cost of replacing lost or disaster-damaged personal property.
Homeowners may borrow up to $200,000—with interest rates as low as 2.063 percent—for the repair or replacement of their primary residence not fully compensated by insurance. Homeowners and renters may also borrow up to $40,000 with interest rates as low as 2.063 percent for replacement of personal property, including vehicles.
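To illustrate what the quoted 2.063 percent rate means in practice, here is a standard loan amortization calculation. The 30-year term below is an assumption made for illustration only; actual SBA disaster loan terms are set case by case and are not stated in this article:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortized loan payment (annuity formula)."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

# Maximum home-repair loan at the quoted rate, assuming a 30-year term
# (term is hypothetical; SBA sets terms per applicant).
pmt = monthly_payment(200_000, 0.02063, 30)
print(f"${pmt:,.2f} per month")
```

At that rate a maximum $200,000 loan amortized over 30 years works out to roughly $750 a month, which is why these loans are described as low-interest relative to commercial financing.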
Businesses and nonprofits may apply to borrow up to $2 million for the following:
• Business Physical Disaster Loans—Loans to businesses to repair or replace disaster-damaged property owned by the business, including real estate, inventories, supplies, machinery and equipment. Businesses of any size are eligible. Private, non-profit organizations such as charities, churches, private universities, etc., are also eligible.
• Economic Injury Disaster Loans (EIDL) – Working capital loans to help small businesses, small agricultural cooperatives, small businesses engaged in aquaculture, and most private, non-profit organizations of all sizes meet their ordinary and necessary financial obligations that cannot be met as a direct result of the disaster. These loans are intended to assist through the disaster recovery period.
Homeowners and renters who apply for an SBA loan and are declined, as well as those who are not issued a loan application, may be referred to the FEMA Other Needs Assistance (ONA) grant program. Homeowners and renters must return the SBA application, if they receive one, to be considered for ONA.
ONA provides reimbursements for personal property losses, vehicle repair or replacement, moving and storage fees, and other serious disaster-related expenses not covered by insurance or other sources. FEMA provides 75 percent of the funding for ONA, and Cal OES provides 25 percent.
To apply for assistance, register online at DisasterAssistance.gov or via smartphone or tablet at m.fema.gov. Applicants may also call FEMA at 800-621-3362 or (TTY) 800-462-7585. People who use 711-Relay or VRS may call 800-621-3362.
Multilingual phone operators are available on the FEMA Helpline/Registration. Choose Option 2 for Spanish and Option 3 for other languages. Phone lines remain open 7 a.m. to 10 p.m. (PT) Sun.-Sat. until further notice.
Disaster Survivor Assistance (DSA) Teams
Two six-person DSA teams continue to visit quake-damaged communities. The teams include eight young adults – ages 18 to 24 – from FEMA Corps, who work alongside FEMA employees to help communities recover from disasters. On assignment in Napa and Solano counties, the teams are stationed at community centers or walking door-to-door to speak to residents and business owners.
To date, DSA teams have registered 151 residents, updated 101 FEMA applications, completed 170 case inquiries and referred 252 people to other community resources.
Apply to Qualify
To be eligible for federal disaster assistance—such as disaster grants and loans—at least one member of a household must be a U.S. citizen, Qualified Alien or non-citizen national with a Social Security number. Disaster assistance may be available to a household if a parent or guardian applies on behalf of a minor child who is a U.S. citizen or a Qualified Alien. FEMA will only need to know the immigration status and Social Security number of the child.
Disaster assistance grants are not taxable income and will not affect eligibility for Social Security, Medicaid, medical waiver programs, Temporary Assistance for Needy Families, the Supplemental Nutrition Assistance Program or Social Security Disability Insurance.
Those who suspect someone of engaging in unscrupulous activity should call the FEMA Disaster Fraud Hotline at 866-720-5721. Complaints may also be made to local law enforcement agencies.
For unmet disaster-related needs, the United Way operates a 2-1-1 helpline covering Napa and Solano counties. Available 24/7 in 150 languages, the Bay Area 211 helpline connects callers with hundreds of programs to help people find food, housing, healthcare, senior services, childcare, legal aid and more.
For more information on the California disaster recovery, go to www.fema.gov/disaster/4193.
Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status. If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
The Cal OES protects lives and property, builds capabilities and supports our communities for a resilient California. Cal OES achieves its mission by serving the public through effective collaboration in preparing for, protecting against, responding to, recovering from, and mitigating the impacts of all hazards and threats.
In disaster recovery, technology is often a neutral element – neither good nor bad in itself. Some technologies are better suited to specific needs or offer relative improvements on existing solutions. What determines whether an organisation benefits or suffers is the application of technology. When it is used unthinkingly and incorrectly, the horror stories start. Worse still, many technology-related disaster recovery failures are repeats of catastrophes that were already happening decades ago. What have we learned since then – or what should we have learned?
Data lakes may be controversial among technologists, but for many companies, they solve a pressing technology problem: bad data integration.
Andrew Oliver, president and founder of Open Software Integrators, has helped large companies build data lakes. While there is debate about what data lakes are and whether they’re even practical at this point – Gartner, in particular, has spoken against the data lake hype (see Oliver’s rebuttal) – in practice, data lakes are typically Hadoop clusters that draw data from multiple sources, Oliver said.
Often, he said, business leaders are just fed up with the bad, point-to-point integration they’ve seen with traditional BI projects.
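The contrast Oliver draws can be made concrete. A minimal sketch of the "land everything raw, apply schema on read" pattern follows, with local directories standing in for HDFS; the directory layout and source names are illustrative assumptions, not anything described by Oliver or TCN:

```python
import shutil
from datetime import date
from pathlib import Path

def ingest_to_lake(sources, lake_root):
    """Land raw files from each source system into a partitioned
    'raw' zone, untouched -- schema is applied later, on read.
    `sources` maps a source name to a directory of extracted files."""
    landed = []
    for name, src_dir in sources.items():
        # Partition by source system and load date, the layout
        # downstream Hadoop-style jobs conventionally expect.
        target = Path(lake_root) / "raw" / name / f"dt={date.today():%Y-%m-%d}"
        target.mkdir(parents=True, exist_ok=True)
        for f in Path(src_dir).iterdir():
            if f.is_file():
                shutil.copy2(f, target / f.name)  # keep bytes verbatim
                landed.append(target / f.name)
    return landed
```

The design point is the opposite of point-to-point integration: each new consumer reads from the one landing zone instead of wiring yet another bespoke pipeline to each source.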
Recent research is adding impetus to the notion that the cloud will abolish the data center as we know it. And in fact, there is a kernel of truth to this, although the change is not going to be as complete as some cloud enthusiasts would have us believe.
As I noted in a previous post, companies like IDC are projecting a steady increase in new data center construction over the next few years, followed by a drop-off as workloads are moved from internal infrastructure to external resources that are either cloud-based or provided in standard colocation fashion. This is not necessarily a death blow to the data center industry, however. Scale-out cloud architectures will still require a fair amount of hardware and software – most of it based on low-cost, commodity designs – so actual data center square footage is likely to keep increasing well into the next decade.
So clearly, the enterprise data center will not vanish, but it will change. For one thing, smaller facilities and “data closets” are likely to fade away, leaving larger, scale-out data centers in their place. As well, look for increased density and modular designs rather than today’s rack configurations, says Dell’s Ashley Gorakhpurwalla, with much of today’s hardware-driven functionality replaced by advanced software-defined architectures. The good news, though, is that much, if not all, of this new infrastructure will integrate seamlessly with legacy systems, giving the enterprise crucial breathing room when it comes to building next-generation data environments.
(TNS) — Can public health experts tell that an infectious disease outbreak is imminent simply by looking at what people are searching for on Wikipedia? Yes, at least in some cases.
Researchers from Los Alamos National Laboratory were able to make extremely accurate forecasts about the spread of dengue fever in Brazil and flu in the U.S., Japan, Poland and Thailand by examining three years’ worth of Wikipedia search data. They also came up with moderately successful predictions of tuberculosis outbreaks in Thailand and China, and of dengue fever’s spread in Thailand.
However, their efforts to anticipate cases of cholera, Ebola, HIV and plague by extrapolating from search data left much to be desired, according to a report published Thursday in the journal PLOS Computational Biology. But the researchers believe their general approach could still work if they use more sophisticated statistics and a more inclusive data set.
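The Los Alamos team's actual models aren't reproduced here, but the core idea – fit a simple model mapping earlier page-view counts to later official case counts, then extrapolate – can be sketched with a plain least-squares fit. The weekly numbers below are made up for illustration; the real work uses far more sophisticated statistics:

```python
def fit_forecaster(page_views, case_counts, lag=2):
    """Ordinary least squares: predict this week's case count from
    page views `lag` weeks earlier (public interest in a disease
    tends to precede the official counts)."""
    x = page_views[:-lag]
    y = case_counts[lag:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    return slope, intercept

def forecast(model, recent_views):
    """Project a future case count from the latest page-view figure."""
    slope, intercept = model
    return slope * recent_views + intercept
```

Fit on historical (views, cases) pairs, then feed in this week's page views to get a forecast two weeks out.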
In the long history of IT, many innovations that originally began life as supercomputer projects over time wound up being more broadly applied. A new class of supercomputers that IBM is building in collaboration with NVIDIA and Mellanox for the U.S. Department of Energy is likely to be just such an innovation.
Under the terms of a $325 million contract with the U.S. Department of Energy, IBM is building supercomputers using a new “data centric” architecture capable of processing 100 petaflops using five petabytes of dynamic and flash memory. Based on IBM OpenPOWER processors, each system will be capable of moving data to the processor, when necessary, at more than 17 petabytes per second.
Dave Turek, vice president of technical computing for OpenPOWER at IBM, says what makes these systems unique is that IBM is designing them in a way that processes and visualizes data in parallel, but also allows processing of data to be distributed across storage and networking elements.
SACRAMENTO, Calif. – Federal Emergency Management Agency (FEMA) inspectors have completed more than 2,000 inspections of homes damaged or destroyed by the South Napa Earthquake. Homeowners and renters in Napa and Solano counties became eligible to apply for federal disaster assistance on Oct. 27 following the presidential declaration for Individual Assistance. FEMA must verify damages for every application.
Those affected by the South Napa Earthquake have until Dec. 29 to apply for disaster assistance. Disaster assistance may include grants to help pay for rent, essential home repairs, personal property replacement or other serious disaster-related expenses not covered by insurance or other sources.
Damage inspections are free and generally take 30 to 45 minutes. They are conducted by FEMA contract inspectors who have construction and/or appraisal expertise and have received disaster-specific training. Each inspector displays official photo identification.
Inspectors document the damage but do not determine the resident's eligibility for disaster assistance. They check for damage to the building structure and its systems, major appliances and any damaged septic systems and wells. Residents should tell the inspector about other important losses such as clothing, personal property, medical equipment, tools needed for a trade, and educational materials.
Inspectors then relay this information to FEMA on their handheld tablet, which they call their inspector pad. They use their pads to download work assignments and communicate throughout the day.
Applicants are reminded to keep the contact information on their applications current so an inspector can reach them. To update their information, applicants should call FEMA's Helpline at 800-621-3362 or (TTY) 800-462-7585. Contact information can also be updated online at www.disasterassistance.gov.
FEMA procedures for home inspections follow:
• An inspector calls the applicant to set up an appointment at a mutually convenient time and advises the applicant of documentation needed to complete the inspection, such as insurance policies and photo identification.
• The inspector tries a minimum of three times to contact the applicant. The inspector will call at different times on different days in the hope of finding someone at home.
• If attempts to reach the applicant are unsuccessful, the inspector posts a letter on the applicant's door with a phone number to call for an appointment.
• If applicants have relocated to another area and cannot return for the mandatory damage inspection, they can authorize an agent or proxy to be present on their behalf.
• As part of the inspection process, homeowners will be asked to show proof of ownership, such as a tax bill, deed, mortgage payment receipt or insurance policy showing the property's address. Renters must show proof of occupancy, such as a lease, rent payment receipt, utility bill or other document confirming the home was their primary residence at the time of the disaster. Both homeowners and renters must also be prepared to show a valid driver's license or other photo identification.
To speed the inspection process, applicants should:
• Make sure their home address number can be easily seen from the road.
• Keep their appointment or notify the inspector if a postponement is necessary.
• Stay in touch with FEMA, which may include telling neighbors where they can be contacted.
• Let FEMA know during the registration process if they need a reasonable accommodation, such as an American Sign Language interpreter, during the inspection.
If applicants discover additional damage to their property, they can request another inspection by calling the FEMA Helpline at 800-621-FEMA (3362) or TTY 800-462-7585.
Besides the above personnel, residents and businesses may be visited by loss verifiers from the U.S. Small Business Administration, insurance adjustors, and local building officials, as well as others involved in the recovery process. Building officials typically charge fees for permits, though these are sometimes waived after disasters.
FEMA inspectors do not tag dwellings. FEMA inspectors must follow written guidelines to perform inspections on dwellings previously tagged as unsafe to enter or unsafe to occupy by local officials.
For unmet disaster-related needs, the United Way operates a 2-1-1 helpline covering Napa and Solano counties. Available 24/7 in 150 languages, the Bay Area 211 helpline connects callers with hundreds of programs to help people find food, housing, healthcare, senior services, childcare, legal aid and more.
For more information on California disaster recovery, go to www.fema.gov/disaster/4193.
If you’re like me, you get lots of emails advertising Business Continuity, Disaster Recovery and Emergency Response products. I even see lots of adverts in the industry journals and magazines, all of which say that the product they’re selling will help you with this problem or that problem. Many even say that with their product you’ll be able to communicate better. I’m not so sure about that last part.
Yes, online applications and software can certainly help put messages together and disseminate them to a plethora of audiences. Yet just because you’re using an online tool or application does not mean you’ll automatically become a great communicator or even be able to communicate effectively. These applications are just tools to leverage to make some aspects easier and ensure timely communications. They aren’t to be considered saviours or the answer to all your prayers.
Applications lack the one thing they need to be 100 percent effective: the human element. Many organizations will pay large sums of money for software that promises to build them a BCM/DR program, yet it only spits out specific information based on the algorithms and formulas built into it and the responses to its questions. So far, applications have not been able to analyse or take into account human elements when producing findings and reports – or, as many claim to, tell you what you need to do when a disaster occurs.
Traveling business executives have been falling prey to cybercriminals acting through hotel Internet networks since at least 2009. In an ongoing, sophisticated “espionage campaign” nicknamed “Darkhotel,” thousands of people traveling through Asia have been targeted and hacked through infected hotel WiFi, cybersecurity company Kaspersky Lab reported Monday. About two-thirds of the attacks took place in Japan, while others occurred in Taiwan, China and other Asian countries.
“For the past few years, a strong actor named Darkhotel has performed a number of successful attacks against high-profile individuals, employing methods and techniques that go well beyond typical cybercriminal behavior,” said Kurt Baumgartner, principal security researcher at Kaspersky Lab. “This threat actor has operational competence, mathematical and crypto-analytical offensive capabilities, and other resources that are sufficient to abuse trusted commercial networks and target specific victim categories with strategic precision.”
So strategic, in fact, that the hackers appear to know the names, arrival and departure times, and room numbers of the targets. While maintaining an intrusion on hotel networks, the hackers waited until a victim checked in and logged on to the hotel Wi-Fi, submitting a room number and surname to log in. When the hackers saw the victim on the network, they would trick the executive into downloading and installing a “backdoor” with the Darkhotel spying software disguised as an update for legitimate software like Google Toolbar, Adobe Flash or Windows Messenger. Once installed, the backdoor can be used to download other spying tools, such as an advanced keylogger and an information-stealing module.
Ken Smith, the 911 coordinator for Williamson County in Herrin, Ill., remembers the state officials’ response when he and his southern Illinois colleagues wanted to explore a next-generation 911 system that would accept text messages, automatic crash notification data, pictures and streaming video.
“They looked at us like we were crazy,” he said.
The state had no plans to put in an Emergency Services IP network (ESInet), and the phone companies said they had no such plans either. That’s when Smith and his colleagues realized they’d have to figure out how to do it themselves. It was no easy task.
Many people use the terms “redundancy” and “duplication” synonymously. They are not synonymous, especially when it comes to how we should use them to describe actions that increase our disaster resilience.
The real problem we face is that modern business practices have sought to save money by wringing every dime out of the cost of doing business by eliminating what’s seen as duplicative processes and capabilities.
These changes have been manifested in a number of ways.
I recently shared some thoughts on my very first Work Area Recovery Test. I tried to explain (to the best of my knowledge) the different types of jargon being used and what to expect if you were yet to have this experience. On this occasion I was delighted to discover that many seasoned professionals were ready and willing to contribute via the LinkedIn Group. I felt like this was a real turning point in my blogging adventure. More experienced individuals were adding comments on WAR arrangements, kindly explaining confusing terms and pointing out where I might be wrong on one or two things. It certainly helped to develop the junior professional knowledge-base and was EXACTLY the reason why I set this platform up in the first place, so thank you kindly for such input, ladies and gents!
Anyway, I walked away from that very first test thinking I would be much better placed to go through the experience again. I mean why wouldn’t I? Surely by now I would have a pretty good idea of what to expect at the next one? I’m much more familiar with the IT terminology now and also some of the challenges I might face.
I couldn’t have been more wrong…
Fear is defined as the feeling or condition of being afraid, whether real or imagined. The fear that we are facing today, especially in healthcare, is the Ebola virus. Are we prepared? Do we have the proper training for all of our staff? Do we have enough personal protective equipment (PPE)? What are the moral, ethical, communications, legal, financial and HR issues?
There have been many discussions around the value of enterprise risk management as of late. Some individuals may feel as if having a risk manager on board checks the box, meeting the company’s obligations. Others may feel that enterprise risk management is the start and end to all their challenges and, if things do not work out as expected, the risk manager is to blame. So where does that leave the risk manager?
In order to have a healthy enterprise risk management program, risk managers should think like salespeople. Risk management professionals tend to be very passionate about their vocation, but not everyone may be buying into the ERM process. The first step to selling your risk program is to find a champion. This person should be on your executive team—preferably the CEO. You need a strong voice in your organization that will support the change that an enterprise risk management program can bring. It is also a good idea to have support from the board of directors and, if applicable, the internal auditor. When building your risk team, keep in mind that the end goal is to have all employees of the organization support and apply risk management to their day-to-day challenges. The more risk champions you can find, the better your program will be advocated and supported.
Here’s an odd economic indicator for you: Sales of master data management for product data grew only 8.7 percent last year. By comparison, MDM for customer data grew 12.2 percent.
That’s a 3.5 percentage point gap favoring customer data MDM, which suggests, according to Gartner’s market data, “that end users are, on the whole, comfortable with economic growth.”
Maybe, but I think this conclusion from the report is a safer bet: “Appreciation of the business value of MDM, which, though increasing, is still severely lacking.”
In 2013 Continuity Central conducted a survey to explore quality control methods that are being used within business continuity management systems. This survey is now being repeated to see how the trends in this area may have changed. The survey has also been extended to include BCMS measurement.
The interim results of this survey are as follows:
Senior executives within UK businesses say that critical data is not being protected, a new report from NTT Com Security shows. The Risk:Value report, based on a survey of 800 business decision-makers (not in an IT role) in the UK, Australia, France, Germany, Hong Kong, Norway, Sweden and the US, shows that UK executives believe that less than half (49 percent) of their critical data is fully secure.
The report, designed to assess the level of risk within large organizations and the value that senior people place on data security, reveals that the majority (56 percent) of respondents in the UK agree they are likely to suffer a security breach at some point, a figure that rises to 63 percent on average globally.
Nearly three-quarters (72 percent) believe it is vital that their organization is insured for data security breaches, but only just over half (54 percent) admit their company insurance currently covers the financial impact of both data loss and a security breach.
Garry Sidaway, Senior Vice President Security Strategy & Alliances, NTT Com Security, says: “The results provide some real insight into the minds of non-IT executives about the value they place on the data in their business and whether they feel this data is at risk. The report shows a kind of ‘security maturity’ scale developing among businesses who value their data, but do not always recognise the risks to critical information. When asked what they associate with the term data security, only half say it is ‘vital’, while less than a quarter see it as ‘a business enabler’.
“Unfortunately, security at the board level still tends be associated with data protection and compliance, when in fact securing data properly is absolutely critical to enabling businesses to thrive and survive. There’s also a growing disconnect between the cost of breaches and the importance that organizations place on IT security to drive these costs down.”
The report reveals that UK executives are also underestimating the impact of a security breach. Almost a fifth (19 percent) think there would be no significant impact on their revenue, while 28 percent admit they do not know what the financial implications would be. On average, however, UK companies estimate a drop in revenue of 7 percent. A quarter (24 percent) say it would take between one and three months to recover, with five months being the average in both the UK and across all eight countries.
Read the report (PDF).
How does a business cope with regulations that, when piled on top of each other, are ‘three Eiffel Towers high’? That’s the future for the financial industry, according to a recent report from financial regulations consultancy JWG and its forecast for the situation in 2020. But regulatory risk is also growing in many sectors. New legislation is swinging into play relating to developments unforeseen five years ago. Should organisations simply chase ever-evolving and expanding regulations to try to remain conformant? Or is there an opportunity here, disguised as a problem?
Tech savviness is a hallmark of the millennial generation. They are the first generation to replace landlines, crayons and typewriters with smartphones, laptops, and tablets. They have phased out the communication, education, and work processes of the baby boomer generation in favor of faster, more technologically advanced solutions. There is no doubt this rapid adoption of technology is significantly changing the speed and accuracy of how information is processed and shared. Emergency education and communication departments are particularly benefitting from new technology. Apps like mobile GPS storm trackers have dramatically transformed the field of public health and emergency response by increasing public awareness before, during, and after severe weather. Today, smartphone emergency alerts are replacing storm sirens and social media is providing faster disaster coverage than weather radios. Technology is providing faster, more pinpointed surveillance of natural and planned emergency events, and innovative messaging and communication networks are allowing public health and emergency response officials to better reach larger and more diverse communities.
Minnesota Emergency & Community Health Outreach, or ECHO, is one such program that is using technology to address the emergency preparedness and response needs of its community. By leveraging text-to-speech technology, ECHO has created Spanish, Hmong, and Somali language warnings and alerts that extend its emergency response to include the immigrants and refugees living in the community. Known as the Minnesota Multi-Language Alerting Initiative, this 15-month project led by ECHO in partnership with Twin Cities Public Television (TPT) will expand the emergency response linguistic reach of the current Common Alerting Protocol, which only provides alerts in English.
Through funding from the Corporation for Public Broadcasting, ECHO hopes to use the Minnesota Multi-Language Alerting Initiative to lay the groundwork for multi-language emergency warning and alert systems and eventually support national efforts to improve emergency messaging and delivery through the Integrated Public Alert and Warning System.
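The Common Alerting Protocol the initiative extends already allows one <info> block per language within a single alert. A minimal sketch of generating such a multilingual CAP 1.2 message with Python's standard library follows; the identifiers, sender, and event details are entirely hypothetical:

```python
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_alert(identifier, sender, sent, headlines):
    """Build a minimal CAP 1.2 alert with one <info> block per
    language. `headlines` maps a language code to its headline.
    All values here are illustrative, not a real alert."""
    ET.register_namespace("", CAP_NS)
    alert = ET.Element(f"{{{CAP_NS}}}alert")
    # Required alert-level elements, in CAP schema order.
    for tag, text in [("identifier", identifier), ("sender", sender),
                      ("sent", sent), ("status", "Actual"),
                      ("msgType", "Alert"), ("scope", "Public")]:
        ET.SubElement(alert, f"{{{CAP_NS}}}{tag}").text = text
    # One <info> block per target language.
    for lang, headline in headlines.items():
        info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
        for tag, text in [("language", lang), ("category", "Met"),
                          ("event", "Severe Weather"),
                          ("urgency", "Immediate"), ("severity", "Severe"),
                          ("certainty", "Observed"), ("headline", headline)]:
            ET.SubElement(info, f"{{{CAP_NS}}}{tag}").text = text
    return ET.tostring(alert, encoding="unicode")
```

Text-to-speech would then be applied per-language to each block's headline and description downstream.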
ECHO is a non-profit organization whose founding mission engages limited English proficiency residents in emergency preparedness initiatives. Founded in 2004, ECHO has developed a team of bi-lingual ambassadors in 12 languages that work alongside ECHO to create programs and services that help people be healthy, contribute, and succeed.
The combination of community engagement, new technology, and a process for delivering culturally contextualized messages across diverse communities is viewed as a best practice for enhancing health and safety initiatives. The first messages are due out this fall, with final outcomes in the spring of 2015. For more information, please contact ECHO’s executive director, Lillian McDonald.
(MCT) —The first shot was faint and from inside the school auditorium easily could have been confused with the sound of an overstuffed binder dropping to the tile floor up the hallway. A minute — two minutes? — later the explosion outside the auditorium door was unmistakable.
Methuen police and school officials Tuesday afternoon ran a live test of a brand new active shooter detection system installed in one of the city’s schools, a system designed to provide immediate, real-time information on the location of shots fired inside a building and cut critical minutes and seconds off police response times. It can also direct officers to the shooter’s location, rather than spending time searching rooms and closets.
Police asked that the school remain unnamed.
If someone you didn’t know asked you to willingly hand over the keys to your car or your house, would you do it? I’m guessing not. After all, the items belong to you and are of high value. That same pride of ownership should apply to your organization’s sensitive content. These days, keeping that information safe and sound should be part of every employee’s job.
While the role of data protection falls directly on the shoulders of IT, compliance officers also need to be actively involved in any decisions that impact data security.
Far too often, the keys to an enterprise’s data are handed over to a cloud provider by employees, who give full responsibility to a third party without a complete understanding of the risks and the rights of the third party. This shifts control of data protection to the cloud vendor – a huge risk, considering organizations remain responsible to meet regulatory requirements whether the information is stored within their company firewalls or with a third-party provider.
This morning, Ronen Schwartz, vice president and general manager for Informatica Cloud, was on his way to AWS’s re:Invent conference in Las Vegas, where he was confident that the integration company would find new customers today.
He had good reason for his confidence: Informatica planned to announce expansions to its Informatica Cloud, including new pre-built data connectors for Amazon DynamoDB, Amazon Elastic MapReduce (Amazon EMR) and Amazon S3. Given that Informatica already offers connectors for Amazon Redshift and Amazon RDS, Informatica is now the only vendor to provide a complete data integration solution for AWS, he added.
“We're actually telling the user you can build your integration on Amazon. You can bring data to Amazon. You can take data out of Amazon,” he said by telephone Tuesday. “It’s not just a local solution for one scenario. We're really supporting Amazon in a very strategic way with multiple products and a variety of scenarios.”
To say that a lot of money is riding on the evolution of the data center is probably the understatement of the year.
Without doubt, the old ways of doing business are coming to an end. Routine hardware and software purchases through long-standing channel relationships are falling prey to increasingly tight budgets and the need to streamline IT infrastructure through dense, modular architectures and hefty doses of cloud and colocation services.
At the moment, this is producing a surge in data center construction as providers of all stripes seek to build infrastructure to meet expected demand. In the long term, however, it is questionable how well a consolidated, data utility industry will be able to support the vibrant manufacturing and distribution industries that we have today.
(MCT) — In his 30 years as director of the National Institute of Allergy and Infectious Diseases, Dr. Anthony Fauci has seen his share of public health scares.
When AIDS exploded in the 1980s among gay men, Fauci recalls that some people didn’t want gay waiters to serve them in restaurants. And during the anthrax scare that followed the 9/11 terrorist attacks, many were afraid to open their mail.
But when it comes to Ebola, “This one’s got a special flavor of fear,” Fauci said at the recent Washington Ideas Forum, sponsored by The Atlantic magazine and the Aspen Institute, a nonpartisan policy group.
The growing death toll in West Africa has helped create “an epidemic of fear” in the U.S., Fauci said, even though most experts feel the likelihood of a widespread outbreak in this country is minimal.
By Suaad Sait
Today’s ‘always on’ attitude or the clichéd ‘anytime anywhere’ work environment is continuing to blur the lines between work and play across businesses in EMEA. Companies are increasingly reliant on the support of applications to function, and if a service such as email, VPN or Microsoft Office goes down, huge amounts of revenue can be lost.
In fact, a recent survey by SolarWinds found that in the UK, 94 percent of business end users believe that application performance and availability affect their ability to do their job, with 44 percent deeming application performance and availability absolutely critical. These findings are replicated across EMEA: of the 300 German end users surveyed, 57 percent deemed application availability to be business critical. Similarly, 85 percent of Danes surveyed indicated that application performance and availability affect their ability to complete a task.
Poor visibility into converged infrastructures and outdated management of the application stack can make or break a business – and the ultimate responsibility for application performance falls on the IT department. Ensuring the consistent availability and top performance of applications is far from a simple task, and it’s not getting any easier.
The Ebola outbreak in West Africa is taking a horrific toll in human lives on a scale that is unprecedented. It is also happening in a place that makes the whole rescue process an order of magnitude more difficult. Besides trying to save those already infected, aid workers must cope with the fact that the disease moves more easily outwards than medical supplies and vaccines can be brought inwards. The gradual improvement of logistics and transport in the region over the last few decades is having a perverse effect. It is encouraging the spread of infection, but hindering measures to eliminate it. How can this be?
The 1989 Loma Prieta earthquake killed 63 people, injured 3,800 more and damaged 28,000 homes and businesses, including much of San Francisco's Marina district. San Francisco continues to face the threat of earthquakes — and other natural disasters as well, thanks to climate change. These catastrophes can't be averted, but Cyndy Comerford believes good preparation is possible.
As an environmental, planning and fiscal policy manager for the city’s Department of Public Health, Comerford has partnered with the San Francisco-based civic tech company Appallicious to deploy a first-of-its-kind emergency cloud platform to help the city prepare for and recover from disasters.
The technology, called the Disaster Assessment and Assistance Dashboard (DAAD), launched as a pilot project in San Francisco this week for citizens, businesses and government agencies. DAAD was funded with a grant from the Centers for Disease Control and Prevention to help San Francisco mitigate the health impacts of climate change.
(MCT) — Although small, the little black "trauma kits" carried by many University of Colorado police officers and other individuals on campus have the power to save lives in the event of a medical emergency.
For a little over a year, CU has been training students, faculty and staff to use the kits, which contain tourniquets and other potentially life-saving devices.
They're intended to be used by bystanders when emergency medical professionals aren't immediately able to treat victims, such as during an active-shooter scenario.
So far, no one has had to use the kits on campus — and that's a good thing.
A detailed analysis by cybersecurity experts from the University of Maryland found that website administrators tasked with patching security holes exploited by the Heartbleed bug may not have done enough.
First disclosed in April 2014, Heartbleed is a serious vulnerability in the popular OpenSSL (Secure Sockets Layer) software, allowing anyone on the Internet to read the memory of vulnerable systems.
Assistant Research Scientist Dave Levin and Assistant Professor of Electrical and Computer Engineering Tudor Dumitras were part of a team that analyzed the most popular websites in the United States - more than one million sites in all - to better understand the extent to which systems administrators followed the specific protocols needed to fix the problem.
Levin and Dumitras both have appointments in the Maryland Cybersecurity Center, one of 16 centers and labs in the University of Maryland Institute for Advanced Computer Studies.
Their team, which included researchers from Northeastern University and Stanford University, discovered that while approximately 93 percent of the websites analyzed had patched their software correctly within three weeks of Heartbleed being announced, only 13 percent followed up with other security measures needed to make the systems completely secure.
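Patching OpenSSL alone was not enough: the follow-up measures the researchers looked for include revoking and reissuing TLS certificates whose private keys may have leaked while sites were vulnerable. As an illustrative sketch (not code from the study; the function names are my own), the Python standard library is enough to check whether a site's current certificate predates the disclosure:

```python
import ssl
import socket
from datetime import datetime, timezone

# Heartbleed was publicly disclosed on April 7, 2014; a certificate
# issued before that date may still rely on a private key that leaked.
HEARTBLEED_DISCLOSURE = datetime(2014, 4, 7, tzinfo=timezone.utc)

def issued_after_disclosure(not_before: str) -> bool:
    """not_before is the certificate validity-start string as returned
    by the ssl module, e.g. 'Apr  8 00:00:00 2014 GMT'."""
    issued = datetime.strptime(
        not_before, "%b %d %H:%M:%S %Y %Z"
    ).replace(tzinfo=timezone.utc)
    return issued > HEARTBLEED_DISCLOSURE

def cert_issued_after_heartbleed(host: str, port: int = 443) -> bool:
    """Fetch the host's leaf certificate over a live TLS handshake and
    check whether it was (re)issued after the disclosure date."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return issued_after_disclosure(cert["notBefore"])
```

A certificate issued before the disclosure date is not proof of a compromise, only a sign that the reissue step the study measured was skipped.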
Risk and Impact; Security and Business Continuity; Crisis and Emergency; Disaster Recovery and Natural Disaster Recovery. Fascinating silos exist in the world of Resilience, which itself exists fully or partially in the wider world, depending on the organisation and its ‘mission’, strategies and contexts. We had an interesting couple of days this week at the Business Continuity Institute’s World Expo, where we met many people from many disciplines and backgrounds who either would like to apply for our MSc Organisational Resilience or wanted to contribute as guest or associate lecturers. That was great, and we were hugely impressed by the vision that many demonstrated in wanting to break out of and challenge the stovepiping of the functions and capabilities that are necessary for resilient organisations.
Then there are the others, who see the emphasis on one aspect at the expense of another as something that should continue. ISO 22301:2012 defines BC as a ‘holistic’ management process. There are those who understand that ‘holistic’ should refer not to the BC process itself but to its wider function within an organisational resilience context, and that BC is about its contribution to the whole; there are many, however, who do not. Treating Risk (or threat, as some insist on calling it) and Impact as distinct rather than complementary aspects of resilience is the hallmark of framework thinkers. It is easy, comforting and simple to put functions, and those responsible for them, into functional and process silos; it also helps to sell products. But it does not really advance the cause of overall resilience.
Data services as an up-selling tactic and add-on product will become a mainstream trend next year, according to Forrester Research.
It’s just one aspect of what the research firm calls the emerging “data economy.”
We’re already seeing the beginning of a data economy, Forrester contends in a recent Information Management column. The term “data economy” is self-describing: It’s a system that “provides for the exchange of digital information to create new insights and value,” the firm explains.
In June 2012, the Mid-Atlantic and Midwest derecho, a severe and fast-moving thunderstorm, moved through Maryland and left more than 1 million households without power in hot, humid conditions for up to a week in some places.
Although some changes were quickly made in response, Hurricane Sandy came hard on the derecho’s heels and knocked out power again. These two events prompted Maryland to focus on its energy resilience, spurring new programs, including backup generator initiatives and requirements, and a move toward microgrids to make the state more resilient.
One of the state’s new energy resilience programs, which ended this summer, was a grant to gas stations to purchase backup generators so that fuels are available when power is down. The Fuel Up Maryland program offered grants up to $25,000 per gas station to offset 70 percent of the purchasing and pre-wire costs of backup power generation.
(MCT) NEW YORK — The lines between online thefts and all-out cyberwarfare continue to blur as hackers become more effective at attacks that threaten to cause serious economic damage, computer security and legal experts said here Thursday.
“It's not a clear, bright red line,” Mitchell Silber, executive managing director of K2 Intelligence, a cybersecurity company based here, said at a daylong cyberwarfare conference. “It really is more murky, the difference between where a cybercriminal hack ends and where some type of state or state-sponsored event begins.”
The Department of Homeland Security last week issued a bulletin to cybersecurity insiders reporting that a destructive malware program known as “BlackEnergy” has been placed in key U.S. infrastructure systems that control everything from telecommunications and power transmission grids to water, oil and natural gas distribution systems and some nuclear plants.
(MCT) — With about 22 million people vulnerable to dangerous hurricane storm surges, forecasters have long struggled over how to issue warnings, especially in low-lying Florida, where waters can rise far inland.
Now they have an interactive map that tracks flooding not only by location, but storm strength.
Published Thursday, the map for the first time links the coast from Texas to Maine, said Brian Zachry, a National Hurricane Center storm surge specialist. Forecasters used thousands of hypothetical hurricanes and factored in local coastal topography along with levees, canals and other structures to determine flooding.
In Florida, they found that about 40 percent of the population could face flooding in a powerful storm.
I’ve had a really interesting few days at the Business Continuity Institute’s Annual World Event and I wanted to share the experience with those who haven’t yet had the opportunity to attend. This was my very first visit, after missing out the previous year due to an unforeseen major incident at work – the nature of the job.
As ever, the views I’ve chosen to share here are purely based on my own experience, and you may well receive differing accounts from my colleagues and peers (I know I have heard several different views in recent years). I hope to shed some light on this mass gathering of like-minded professionals and ultimately encourage my peers to get involved.
It’s such an interesting dynamic. The lift doors opened and I could already see an eager student firing out his CV to the various stands, exhibitors presenting to business leads and a sea of consultants keeping one friendly eye on their competitors/associates. However, it’s all very positive, relaxed and friendly.
To set the scene, once a year professionals from around the world, who have an interest in business continuity (and beyond), gather at the BCI Annual Global Event. It runs over a couple of days to which there are various events and activities that you can choose to participate in.
CHICAGO – Cold temperatures, heavy snow, and treacherous ice storms are all risks of the impending winter season.
“Severe winter weather can be dangerous and even life-threatening for people who don't take the proper precautions,” said FEMA Region V acting administrator Janet Odeshoo. “Preparedness begins with knowing your risks, making a communications plan with your family and having an emergency supply kit with essentials such as water, food, flashlights and medications.”
Once you’ve taken these steps, consider going beyond the basics of disaster preparedness with the following tips to stay safe this cold season:
Winterize your emergency supply kit:
Before winter approaches, add the following items to your supply kit:
- Rock salt or other environmentally safe products to melt ice on walkways. Visit the Environmental Protection Agency for a complete list of recommended products.
- Sand to improve traction.
- Snow shovels and other snow removal equipment.
- Sufficient heating fuel and/or a good supply of dry, seasoned wood for your fireplace or wood-burning stove.
- Adequate clothing and blankets to keep you warm.
Stay fire safe:
- Keep flammable items at least three feet from heat sources like radiators, space heaters, fireplaces and wood stoves.
- Plug only one heat-producing appliance (such as a space heater) into an electrical outlet at a time.
- Ensure you have a working smoke alarm on every level of your home. Check it on a monthly basis.
Keep warm, even when it’s cold outside:
- If you have a furnace, have it inspected now to ensure it’s in good working condition.
- If your home heating requires propane gas, stock up on your propane supply and ensure you have enough to last an entire winter. Many homeowners faced shortages due to the record freezing winter weather last year, and this season there’s the possibility of lower than normal temperatures again. Don’t be caught unprepared.
- Avoid the dangers of carbon monoxide by installing battery-powered or battery back-up carbon monoxide detectors.
- Winterize your home to extend the life of your fuel supply by insulating walls and attics, caulking and weather-stripping doors and windows, and installing storm windows or covering windows with plastic.
Prevent frozen pipes:
- If your pipes are vulnerable to freezing, i.e., they run through an unheated or unprotected space, consider keeping your faucet at a slow drip when extremely cold temperatures are predicted.
- If you’re planning a trip this winter, avoid setting your heat too low. If temperatures dip dangerously low while you’re away, that could cause pipes to freeze. Consider draining your home’s water system before leaving as another way to avoid frozen pipes.
You can always find valuable information to help you prepare for winter emergencies at www.ready.gov/winter-weather. Bookmark FEMA’s mobile site http://m.fema.gov, or download the FEMA app today to have vital information just one click away.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
What is the hardest risk to avoid? The risk you didn’t anticipate. The answer may seem obvious after the fact; however, most firms seldom analyze why. What is not so obvious are the decisions leading up to the risk event. It is human nature to assume that we understand risk and will avoid it just in time. Yet time and again we are surprised.
Somewhere along the way, a consultant categorized risks into awareness buckets of “Knowns,” “Known Unknowns” and “Unknown Unknowns.” Unfortunately, categories of risk do not protect us from the effects of a risk occurrence. Senior executives do not like surprises and, more importantly, they expect risk professionals to detect and prevent them before they occur!
Let’s examine whether these events are really “Unknown Unknowns” or, quite simply, the avoidance of decision making that could have minimized or contained the risk. Cognitive research suggests that blind spots in decision making account for up to 90 percent of large operational risks across all organizations. Very few firms take the time to re-examine failed decisions, fearing where the truth may lead.
On November 7, 1940, high winds buffeted the Tacoma Narrows Bridge leading to its collapse. The first failure came at about 11 a.m., when concrete dropped from the road surface. Just minutes later, a 600-foot section of the bridge broke free. Subsequent investigations and testing revealed that when the bridge experienced strong winds from a certain direction, the frequency oscillations built up to such an extent that collapse was inevitable. For posterity, the collapse of the Bridge was captured on film.
I thought about this spectacular engineering failure when I read, yet again, commentary about representatives from the Department of Justice (DOJ) and Securities and Exchange Commission (SEC) appearing at for-profit conferences to give presentations to attendees. Personally, I was shocked, simply shocked to find out that one has to pay to attend these events. Further, it appears that one or more of the companies running these events, ACI, Momentum, IQPC, HansonWade, among others, might actually be for-profit companies. It was intimated that one of the ways the conference providers enticed registrants to pay their fees was to provide a forum of lawyers practicing in the Foreign Corrupt Practices Act (FCPA) space, to whom representatives from the DOJ and SEC could speak. Now I am really, really, really shocked to find that people actually pay to obtain knowledge.
Armed with the new piece of information that there is a marketplace where people actually pay to obtain information, I have decided to practice what I preach and perform a self-assessment to determine if I am part of this commerce in ideas. Unfortunately, I have come to the understanding that not only do I participate in that marketplace, but I actually use information provided by representatives of the US government in my very own marketing and commerce. So, with a nod to Adam Smith’s Invisible Hand of the Marketplace, I now fully self-disclose that I digest what US government regulators say about the FCPA, repackage it and then (try to) make money from it. (I know you are probably as shocked, shocked as I was to discover this.)
SACRAMENTO, Calif. – Federal disaster assistance now exceeds $2.4 million for those affected by the South Napa earthquake, just one week after they became eligible to apply. At the state’s request, the federal disaster declaration expanded on Oct. 27 to include Individual Assistance for homeowners and renters in Napa and Solano Counties.
Nearly 1,900 households have applied for assistance from the Federal Emergency Management Agency (FEMA).
Disaster assistance includes grants to help pay for temporary housing, home repair and other serious disaster-related needs, such as medical expenses, not covered by insurance or other sources.
Low-interest disaster loans are also available from the U.S. Small Business Administration (SBA) for homeowners, renters, businesses of all sizes, and private non-profit organizations. Disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations.
To apply for assistance, register online at DisasterAssistance.gov or via smartphone or tablet at m.fema.gov. Applicants may also call FEMA at 800-621-3362 or (TTY) 800-462-7585. People who use 711-Relay or VRS may call 800-621-3362.
Multilingual phone operators are available on the FEMA Helpline/Registration. Choose Option 2 for Spanish and Option 3 for other languages.
The California Governor’s Office of Emergency Services (Cal OES) and FEMA have coordinated with the City of Vallejo and Solano County to open a Disaster Recovery Center and have partnered with the City and County of Napa to provide state and federal services in a Local Assistance Center. The centers provide face-to-face assistance for affected individuals to meet with specialists from Cal OES, FEMA and the SBA. To date, nearly 500 people have visited the centers.
Napa Earthquake Local Assistance Center
301 1st Street, Napa, CA 94559
Solano County Disaster Recovery Center
1155 Capitol Street, Vallejo, CA 94590
Standard hours for the centers are 9 a.m. to 6 p.m. weekdays and 9 a.m. to 4 p.m. weekends until further notice. On Veterans Day, Nov. 11, holiday hours will be 10 a.m. to 3 p.m.
During a visit to a center, visitors may:
- Discuss their individual disaster-related needs
- Submit any additional documentation needed, such as occupancy or ownership verification documents and letters from insurance companies
- Find out the status of an application
- Obtain information about different types of state and federal assistance
- Get help from SBA specialists in completing low-interest disaster loan applications for homeowners, renters and business owners
- Meet with FEMA hazard mitigation specialists to learn about reducing future disaster losses and rebuilding safer and stronger
People should register with FEMA before going to a Disaster Recovery Center, if possible. For visitors with a disability or functional need, the centers may have:
- Captioned telephones, which transcribe spoken words into text
- The booklet Help After a Disaster, in Braille and large print, in both Spanish and English
- American Sign Language interpreters available upon request
- Magnifiers and assistive listening devices
- 711-Relay or Video Relay Services available
If other accommodations are needed during any part of the application process, please ask any FEMA or Cal OES employee for assistance.
Stay in Touch with FEMA
After a person registers, a FEMA inspector will contact that person by phone to schedule an appointment. An applicant should give clear, accurate directions to the damaged property. An inspector will try three times to schedule an inspection appointment. To avoid unnecessary delays, FEMA asks applicants to make sure FEMA has their current phone number.
During the inspection, owners and renters must show proof of occupancy, such as a valid driver’s license. Owners must show proof of ownership and sign various forms. The length of the inspection will vary, depending on the amount and location of the damage.
FEMA inspectors document damage. They do not determine eligibility for disaster assistance. They do not condemn homes. When meeting with an applicant who owns a home that has been previously red-tagged, FEMA guidance allows inspectors to complete their inspection from a safe distance.
The SBA and insurance companies also have inspectors in the field.
Be Alert for Disaster Fraud
FEMA inspectors carry official photo identification. Please contact the local police if someone posing as an inspector asks for money.
Official inspectors never ask for money or use a vehicle bearing a FEMA logo. Inspectors must carry visible FEMA ID, which includes a photo and name, the FEMA seal and the ID’s expiration date. FEMA ID has a "property of the U.S. Government" disclaimer, a return address and a barcode.
Apply to Qualify
To be eligible for federal disaster assistance, at least one member of a household must be a U.S. citizen, Qualified Alien or non-citizen national with a Social Security number. Disaster assistance may be available to a household if a parent or guardian applies on behalf of a minor child who is a U.S. citizen or a Qualified Alien. FEMA will only need to know the immigration status and Social Security number of the child.
Disaster assistance grants are not taxable income and will not affect eligibility for Social Security, Medicaid, medical waiver programs, Temporary Assistance for Needy Families, the Supplemental Nutrition Assistance Program or Social Security Disability Insurance.
For more information on the California disaster recovery, go to http://www.fema.gov/disaster/4193.
Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status. If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
The Cal OES coordinates overall state agency preparedness for, response to and recovery from major disasters. Cal OES also maintains the State Emergency Plan, which outlines the organizational structure for state management of the response to natural and manmade disasters.
There are signs that crowdsourcing is becoming a legitimate data strategy. What remains unclear, though, is whether it’s a reliable one.
One crowdsourcing project, PredictIt.org, illustrates the issues at play. PredictIt is an academic project under the auspices of the Victoria University of Wellington, New Zealand, with U.S. university affiliates. Essentially, it allows people to wager on political races, (yes, it’s legal), and tap the “wisdom of the crowd.”
Four days leading up to the elections, it ran a market on this year’s U.S. Congressional mid-term. The crowd successfully predicted the overall outcome in Congress, foreseeing a Republican take-over of the Senate and gains in the House. As of Monday, the site was predicting Republicans would have 53 or more seats in the Senate. The final outcome (thus far) in the Senate was 52 Republican seats with 43 Democrats.
“There are 25 or more years of data that show prediction markets do a better job predicting outcomes than polls,” Dr. Emile Servan-Schreiber, founder and CEO of Lumenogic and an expert in prediction markets, told Politico.
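The mechanics behind such forecasts are simple: a prediction-market contract pays a fixed amount if the event occurs, so its trading price, divided by the payout, can be read as the crowd's implied probability of the event. A minimal sketch of that conversion (the prices are hypothetical, not actual PredictIt data):

```python
def implied_probability(price: float, payout: float = 1.0) -> float:
    """Convert a prediction-market contract price into the crowd's
    implied probability that the event will occur."""
    if not 0 <= price <= payout:
        raise ValueError("price must lie between 0 and the payout")
    return price / payout

# Hypothetical contract paying $1.00 if Republicans win 53 or more
# Senate seats, currently trading at 53 cents: the crowd's implied
# probability is 53 percent.
crowd_estimate = implied_probability(0.53)
```

Real markets add frictions such as fees and bid-ask spreads, so the price is only an approximation of the aggregate belief, but this price-to-probability reading is why such markets can out-forecast polls.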
New Orleans’ emergency call administration center now has a faster, more efficient response to emergencies, improving the flow of information between citizens, multiple agencies and first responders.
Orleans Parish Communication District (OPCD) covers an area with a population of more than 370,000 residents. It handles more than 1 million emergency calls annually, routing requests to police, fire and EMS personnel in the field. Considering its call volume, OPCD needed a better way to connect applications and automate the flow of information. The former system required multiple computers, monitors and programs, making emergency call management often painfully slow and complex.
In 2013, OPCD was selected by Motorola Solutions to conduct the field trial of a new product, eventually named PremierOne Computer Aided Dispatch NG911 Integrated Call Control.
While You May be Concentrating Your Efforts on Recovery after an Incident, Be Sure that Information Provided to the Media is What You Want Publicized
Part One of Three Regarding Your Crisis Communications
I recently had the pleasure of attending the 6th Annual Business Continuity Symposium held in Rochester, New York, and sponsored by the Eastern Great Lakes Chapter of the Association of Contingency Planners (EGLACP). Chapter President John J. Luce and his staff lined up some great speakers and set a record for the number of sponsors attending the annual event.
The lead speaker was James W. Satterfield, President, COO and Founder of Firestorm Solutions, LLC, whose session was entitled “Crisis Management Reality Check: Consequence Management Lessons Learned After a Crisis”. At the start of the session, Mr. Satterfield asked “Have You Heard the One About Cannibalistic Rites Being Performed on a Major College Campus?“
By Shailendra Singh
Organizations today are presented with an ever-growing number of challenges, compounded by the speed of technological change and evolution, all of which act together to increase business risk.
In such an unpredictable environment, the ability to weather market, technological and financial stress is critical to sustainability. Reactive corporate disaster recovery is no longer sufficient. Resilient systems and processes that keep businesses running as usual during any crisis are the key to retaining competitive advantage.
One of the biggest issues facing organizations today is a plethora of unpredictable disruptions that have the potential to seriously destabilise business.
BSI has launched PAS 7000, a universally applicable supply chain information standard for suppliers and buyers at organizations of all sizes around the globe. PAS 7000 ‘Supply Chain Risk Management - Supplier prequalification’ helps answer three key questions relating to any organization’s supply chain partners: Who are they? Where are they? Can they be relied upon?
The standard draws on the collective expertise of 240 professionals drawn from global industry associations and organizations, and it addresses product, process and behavioural criteria for supplier prequalification.
PAS 7000 has been created in response to industry demand, with three quarters of executives considering supply chain risk management important or very important (1). As supply chains increasingly span continents, and brands become ever more exposed by the demand for increased transparency, the challenge for procurement teams in assessing the suitability of suppliers grows. Some 63 percent of EMEA companies have experienced disruption to their value chain due to unpredictable events beyond their control in the last 12 months, at an average cost of £449,525 per incident per company (2).
PAS 7000 provides companies with a uniform set of common information requirements that reduces duplication of effort in completing tender forms and aids procurement in bringing consistency to the supplier base. It establishes a model of governance, risk and compliance information for buyers to pre-qualify suppliers and confirm their intention and ability, to adhere to key compliance requirements. This in turn helps organizations make an informed decision about whether or not to engage with a potential supply chain partner.
For further information and to download the standard free of charge visit: www.bsigroup.com/PAS7000 (registration required).
(1) Don’t play it safe when it comes to Supply Chain Risk Management – Accenture Global Operations Megatrends Study 2015
(2) Dynamic Markets – Managing the Value Chain in Turbulent Times – Oracle, March 2013.
At a Gala Dinner at the Science Museum in London on the 5th November, the Business Continuity Institute (BCI) hosted their Global Awards ceremony, an event to recognise the outstanding contribution of business continuity professionals and organisations from across the world.
The BCI Global Awards consist of ten categories – nine decided by a panel of judges, with the winner of the final category (Industry Personality of the Year) chosen by their peers in a vote. As expected, the entries received during the year were all of a high standard, and the panel of judges had a difficult task deciding upon a shortlist to go forward to the ceremony.
Inevitably there can be only one winner in each of the categories and those who went home celebrating were:
- Business Continuity Consultant of the Year: Bill Crichton FBCI, Managing Director and Principal Consultant at Crichton Continuity Consulting Ltd
- Business Continuity Manager of the Year: John Zeppos FBCI, Group Business Continuity Management Director at OTE Group of Companies
- Public Sector Business Continuity Manager of the Year: Brian Gray MBCI, Chief of Business Continuity Management at the United Nations
- BCM Newcomer of the Year: Luke Bird MBCI, Business Continuity Executive at Atos
- Business Continuity Team of the Year: Franklin Templeton Investments
- Business Continuity Innovation of the Year: Deloitte
- Business Continuity Provider of the Year (BCM Service): Continuity Shop
- Business Continuity Provider of the Year (BCM Product): ezBCM
- Most Effective Recovery of the Year: Bank of New Zealand
- Industry Personality of the Year: Chittaranjan Kajwadkar MBCI
Steve Mellish, Chairman of the BCI said: "The geographical range of winners at tonight's awards is a sign of just how the industry is developing internationally and how global an organisation the BCI is. The high standard of entries we received gave the judges some very difficult decisions to make so my congratulations go to everyone who won for what is a tremendous achievement."
The BCI Global Awards are held annually and coincide with the BCI World Conference and Exhibition, one of the premier events in the global industry calendar. Held over two days, the conference features fifty exhibitors, a similar number of speakers and close to a thousand visitors.
The world turns and things change – and that includes computer hackers' approaches too. The immediate threats of malware and cybercriminals are relatively well known. Phishing emails are designed to get you to click right away on a hacker’s link. Worms burrow through systems, always on the go. Viruses in that free software you should not have downloaded replicate and ravage. But now there’s a new menace with a different approach. Instead of attacking your system now, some hackers are making themselves at home for the longer term. They enter by stealth and lie low. Then they start to use your computers as if they were their own. Welcome to the Advanced Persistent Threat, or APT for short.
The goal of the Advanced Persistent Threat is typically not to do damage, but to steal data. The most sophisticated APTs require considerable effort and expertise, possibly requiring new internal system code. APT campaigns are also part of the spying arsenal of certain governments that can muster the high levels of hacking resources and expertise required.
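The "lie low and exfiltrate" behaviour described above can be illustrated with a toy detection heuristic. The sketch below is purely hypothetical (the thresholds, field names and function are not from any vendor's product): it flags host/destination pairs that move modest amounts of data outbound on many separate days, the low-and-slow pattern an APT data-theft campaign might show while staying under volume-based alarms.

```python
from collections import defaultdict

def flag_persistent_outflows(transfers, min_days=20, max_daily_bytes=5_000_000):
    """Flag (host, destination) pairs that send a modest amount of data
    on many different days -- the 'low and slow' pattern an APT might use
    to exfiltrate data without tripping volume-based alarms.

    transfers: iterable of (day, host, destination, bytes_sent) records.
    """
    # Aggregate outbound bytes per (host, destination) pair, per day.
    daily = defaultdict(lambda: defaultdict(int))
    for day, host, dest, nbytes in transfers:
        daily[(host, dest)][day] += nbytes

    flagged = []
    for (host, dest), per_day in daily.items():
        # Count only days with small-but-nonzero transfers; a single huge
        # burst is a different signature and would trip volume alerts anyway.
        active_days = [d for d, b in per_day.items() if 0 < b <= max_daily_bytes]
        if len(active_days) >= min_days:
            flagged.append((host, dest))
    return flagged
```

Real APT detection correlates many more signals (beaconing intervals, unusual logins, lateral movement), but the persistence-over-volume idea is the same.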
Big Data is changing things, and not just because it requires shiny, new solutions such as Hadoop or Apache whatsit-of-the-week. The more organizations use and assimilate Big Data, the more obvious it becomes that IT will need to reimagine some old standards in the data toolbox.
Why? The obvious reason is standard data tools aren’t designed to handle unstructured or high-velocity data. But there are other issues unique to Big Data that will require us to rethink the tools we’re using to manage, analyze and present the data. Here are two that have been in the news recently:
The Executive Dashboard
Executive dashboards were created over a decade ago to help leaders visualize specific enterprise metrics, such as key performance indicators. Not a lot has changed since then. That’s a problem in the era of Big Data, when insight is gained not so much through rote reporting as it is through exploration.
Now that the software-defined data center (SDDC) is nearly upon us, enterprise executives need to start asking a number of pertinent questions; namely, how do I build one, and what do I do with it once it is built?
In essence, the SDDC is more about applications than technology. The same basic virtual and cloud technologies that have infiltrated server, storage and now networking are employed to lift data architectures off of bare metal hardware and into software. But it is the way in which those architectures support enterprise apps, and the way in which the apps themselves are reconfigured to leverage this new, more flexible environment that gives the SDDC its cachet.
Until lately, however, the application side of the SDDC has been largely invisible, with most developments aimed at the platform itself. Last week, however, VMware announced an agreement with India’s Tata Consultancy Services (TCS) to develop pre-tested and pre-integrated applications for the SDDC. Under the plan, TCS will provide architectural support and operational expertise to help organizations transition legacy apps into virtual environments powered by VMware solutions, namely vSphere, NSX, Virtual SAN and the vRealize Suite. The deal also calls for the creation of a Center of Excellence to link data centers in Milford, Ohio and Pune, India to handle beta test and workload assessment functions.
Susan L. Cutter is a Carolina Distinguished Professor of Geography at the University of South Carolina, where she directs the Hazards and Vulnerability Research Institute. Her primary research interests are in the area of disaster vulnerability/resilience science — what makes people and the places where they live vulnerable to extreme events, and how vulnerability and resilience are measured, monitored and assessed.
Cutter is a GIS hazard mapping guru who supports emergency management functions. I posed a series of questions about mapping and asked her to respond in writing. In Cutter’s responses she reminds us to ask the “why of the where” question when looking at maps.
According to a 2012 McKinsey study reported by Chui and colleagues, employees on average spend 28% of their workday reading and responding to email. Digging deeper into the amount of email usage, Jennifer Deal describes a 2013 study that surveyed a group of executives, managers and professionals (EMPs) and found that 60% of EMPs with smartphones are connected (primarily via email) for 13.5 hours or more per workday and spend about five hours connected during the weekend. This amounts to a 72-hour workweek.
In response to this hyper-connectedness the German automaker Daimler (maker of Mercedes-Benz) provides vacationing employees with an unusual extension to the automatic out-of-office response. As usual, the response states the employee is on vacation and provides an alternative contact person. But then, the Daimler system goes a step further and “poof” the sender’s e-mail is automatically deleted from the vacationer’s inbox. Daimler’s intent is to let the employee “come back to work with a fresh spirit.” Volkswagen and Deutsche Telekom also have policies that limit e-mails.
EATONTOWN, N.J. – The process of recovering from a disaster begins almost as soon as the threat has passed and responders have arrived. Hundreds, if not thousands, of people will need help immediately as well as for the foreseeable future. Non-governmental volunteer groups, churches and faith-based organizations are often among the first to step in and help, but also have limited resources to sustain their presence.
In 13 New Jersey counties affected by Hurricane Sandy, many of these organizations came together to form long-term recovery groups (LTRGs), and Federal Disaster Recovery Coordination (FDRC; regionally referred to as Federal Interagency Regional Coordination, or FIRC) connects these groups to the Federal Emergency Management Agency. FEMA Voluntary Agency Liaisons (VALs) support the LTRGs as they address the unmet needs of individuals, in contrast to the FIRC's emphasis on communities as a whole.
While a few groups had come into existence after Hurricane Irene struck in 2011, many LTRGs were formed in the immediate aftermath of Sandy. The VALs assisted in getting some of the groups launched, using the VOAD (Voluntary Organizations Active in Disaster) manual and other toolkits to bring representatives together.
There are 14 active groups in New Jersey in 13 counties (Atlantic City has its own group separate from Atlantic County). These long-term recovery groups mainly consist of and represent faith-based and nonprofit organizations that have resources to assist survivors.
“Survivors that are still not back in their homes need things like rental assistance, construction assistance and help filling funding gaps, and members of the LTRGs seek to provide those resources and guidance,” said Susan Zuber, VAL for the New Jersey Sandy Recovery Field Office. She also said that one advantage of having religious organizations involved in the LTRGs is “they can reach up to the national level and potentially get funds and resources.”
Along with investigating the issues communities are facing during recovery, FIRC coordinates information and resources for affected survivors so they can determine where help is available.
“The LTRG disaster case managers strive to make sure various resources get to the people they know need help, and FIRC helps them ensure that there is no duplication of benefits,” Zuber said. “We assist in being the best stewards possible of limited available funds.”
FIRC VAL Lori Ross says that nearly two years after Sandy struck, the LTRGs are still actively helping survivors with some serious issues.
“New Jersey 211 (the state’s resource hotline) is receiving (an average of) 44 new referrals for help every week,” she said. “The Ocean and Monmouth county groups have started receiving requests for rental assistance” as people who had been renting properties while their homes were repaired or rebuilt are in need of more money to pay their rent and mortgage, she added. Mold in homes that wasn’t dealt with properly initially continues to be an issue.
Not all of the problems survivors are facing are of a physical nature, either.
“We’re also seeing more cases where people are asking for mental and emotional assistance,” Zuber said. “We’re getting requests for clergy and mental health treatment. There’s a real emotional and spiritual care element as it relates to the impact of the storm.”
Ross added that even caregivers and case workers are feeling the pressure of what is now a two-year process. “This (the anniversary) is a very critical time,” she said, noting that requests for this type of aid increased at this time last year as well.
Rebuilding after a disaster the magnitude of Hurricane Sandy takes years. FEMA, the FIRC, and the long-term recovery groups of New Jersey are using coordinated teamwork and resources to help people put their lives back together.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at www.twitter.com/FEMASandy, www.twitter.com/fema, www.facebook.com/FEMASandy, www.facebook.com/fema, www.fema.gov/blog, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
A newly published report from the Business Continuity Institute (BCI) highlights that, while overall results indicate a good uptake of emergency communications planning, a significant minority remain passive or have difficulty securing management buy-in. It is worrying to note that among those organisations without an emergency communications plan, almost two-thirds (63.4%) would only consider adopting one after a business-changing event - a bit like shutting the stable door once the horse has bolted. This could have dire consequences, as previous BCI research suggests that business-affecting events may severely affect an organisation’s viability.
Supported by Everbridge, the report concludes that emergency communications remains an essential part of any BC programme, and this research demonstrates that while a great majority of companies are aware of its importance, there are some gaps in implementation that need to be addressed. In order to be effective, emergency communications plans must be continuously updated to reflect the risks that a business faces and be embedded well enough within the organisation. Relevant training and education programmes, as well as ensuring top management buy-in, are necessary in promoting a culture of awareness and reducing the risk of communications failure during incidents.
Further findings from the report include:
- In a sign of growing awareness, fewer than 13.5% of organisations surveyed lack an emergency communications plan.
- Emergency communications plans are quite comprehensive in their scope. At least 70% of organisations have plans covering the following threats: IT outages (81.2%), fire (77.8%), power outages (76.2%), weather related incidents (75.6%), natural disasters (74.9%) and security related incidents (70.0%). These mirror the top three causes of business disruption as reported by respondents in the last 12 months: IT outages (59.8%), power outages (51.6%) and weather related incidents (47.2%).
- Almost a fifth of respondents (18.7%) belong to organisations where more than 500 staff members travel internationally on a regular basis. More than 30% report travelling to ‘high-risk’ countries.
- Almost two-thirds of companies (64.7%) report having training and education programmes in place related to emergency communications. Most have regularly scheduled programmes (64.2%).
- Around 15% of organisations regularly schedule exercises of their emergency communications plans, and most schedule their exercises once a year (55.8%). This is a worrying finding considering that almost half of organisations (49.6%) are likely to invoke their plans more than once during any given year.
- More than 70% of organisations take 30 minutes or less to activate their emergency communications plans. Nonetheless, more than a quarter of organisations do not request responses from their staff in the event of an incident (27.4%) or have no defined acceptable response rates (28.2%).
- Social media appears to play an important role in an emergency communications plan. 42% of respondents report using social media to monitor their staff during emergencies and almost a third (31.6%) utilise it to inform stakeholders.
Patrick Alcantara, Research Associate at the BCI and author of the report, commented: “This survey is seen as the first step toward benchmarking an organisation’s emergency communications arrangements. It is hoped that it will allow companies to take a second look at their emergency communications capability and introduce improvements that will redound to their benefit. Given how emergency communications may improve survival during extreme situations, it is important that organisations take heed and aspire for a robust capability before it is too late.”
Imad Mouline, Chief Technology Officer at Everbridge, commented: “Fluctuating global threat levels, sophisticated cyber attacks and an ever growing mobile workforce present increasingly diverse and complex risks to business interests. In this unpredictable environment, Business Continuity Practitioners are consistently faced with the challenge to plan for the unexpected while ensuring the safety of their staff and communities and protecting their businesses from both financial loss and reputational damage. This survey provides a benchmark for Emergency Communication Planning.”
This is the first dedicated piece of research into understanding the emergency communications plans of a wide range of organisations and learning how these are integrated within wider recovery programmes. The results support the anecdotal feedback from the industry, demonstrating that emergency communications plans form an established, vital element of continuity plans for mid-to-large-sized enterprises, while also offering some practical ideas for those looking to improve their capabilities in this area.
A newly published report from the Business Continuity Institute (BCI) highlights that nearly a quarter of respondents to a survey claimed their organisation had suffered losses of at least €1 million during the previous twelve months (up from 15% last year) as a result of supply chain disruptions. 13.2 percent suffered a one-off disruption that cost in excess of €1 million (up from 9% last year). The study also showed that 40 percent of respondents claimed their organisation was not insured against any of these losses, while 20 percent were only insured against half of these losses.
Organisations cannot simply bury their heads in the sand and pretend an incident will never happen to them. The survey showed that 76 percent of respondents had experienced at least one supply chain disruption during the previous twelve months, yet more than a quarter of respondents (28 percent) still had no BC arrangements in place to deal with such an event.
Supported by global insurer Zurich, the report concludes that supply chain disruptions are costly and may cause significant damage to an organisation’s reputation. While the survey results indicate a growing awareness of BC and its role in ensuring supply chain resilience, many organisations have yet to improve on their reporting and BC arrangements. While budgets for business continuity and ensuring supply chain resilience are often slashed in favour of other priorities, this study demonstrates why this often might not be a wise course of action. With the growing cost of disruption worldwide and the potential reputational damage caused as a result of failing to have appropriate transparency in the supply chain, investments in this area are essential and can spell the difference when disaster strikes.
Further findings from the report include:
- 78.6% of respondents do not have full visibility of their supply chains. Only 26.5% of organisations coordinate and report supply chain disruption enterprise-wide. 44.4% of disruptions originate below the Tier 1 supplier and 13% of organisations do not analyse their supply chains to identify the source of the disruption.
- The primary sources of disruption to supply chains in the last 12 months were unplanned IT and telecommunications outage (52.9%), adverse weather (51.6%) and outsourcer service failure (35.8%).
- The loss of productivity (58.5%) remains the top consequence of supply chain disruptions for the sixth year running. Increased cost of working (47.5%) and loss of revenue (44.7%) are also more commonly reported this year and round out the top three.
- Respondents reporting low top management commitment to this issue have risen from 21.1% to 28.6%. This is a worrying finding as low commitment is likely to coincide with limited investment in this key performance area.
- The percentage of firms having BC arrangements in place against supply chain disruption has risen from 57.7% to 72.0%. However, segmenting the data reveals that small and medium sized enterprises (SMEs) are less likely to have BC arrangements (63.9%) than large businesses (76.2%).
Lyndon Bird FBCI, Technical Director at the BCI, commented: “Should we be alarmed by some of the figures revealed in this survey? Perhaps so. Should we be surprised by them? Probably not. As long as organisations are failing to put business continuity mechanisms in place, and as long as top management is failing to give the issue the level of commitment it requires, supply chain disruptions will continue to occur and they will continue to cost the organisation dearly. In our globally connected world, these supply chains are becoming ever more complex and more action is needed to make sure that an incident in one organisation doesn’t become a crisis for another.”
Nick Wildgoose, Global Supply Chain Product Leader at Zurich Insurance Group, commented: “Top level management support is fundamental to driving improvement in supply chain resilience; I have witnessed the significant disruption cost reductions that have been achieved by companies that are proactive in this area. This should be regarded as a business change programme in the context of driving value through Supplier Relationship Management and becoming the customer of choice for your strategic suppliers to improve your business performance.”
Now into its sixth year, the BCI Annual Supply Chain Resilience Survey has established itself as an important vehicle to highlight and inform organisations of the importance of supply chain resilience and the key role it plays in achieving overall organisational resilience in today’s volatile global economic climate. The outcomes of previous surveys have provided organisations with critical insights and valuable information to support the development of appropriate strategic responses and approaches to mitigate the impact and consequences of disruptions within their supply chains.
On the surface, you’d think Big Data has established itself as a popular and winning proposition for businesses. Consider these recent research findings:
- Innovative companies are three times as likely to rely on Big Data analytics and data mining than their less-innovative peers. — The Boston Consulting Group’s study, “The Most Innovative Companies 2014: Breaking Through Is Hard to Do.”
- Nine out of 10 CXOs are happy with Big Data’s business outcomes, and 59 percent of executives at companies using Big Data say it’s extremely important. — Accenture Analytics.
- Sixty-seven percent of companies now have Big Data projects running in production, compared to 32 percent last year. Of those, 82 percent said Big Data is already integrated into the mainstream of their organization. — NewVantage Partners survey.
By all accounts, Big Data initiatives would seem successful and on the way to becoming established business practice.
(MCT) — When Ortiz Middle School Principal Steve Baca ordered a lockdown after a security guard found a gun in a student’s backpack earlier this month, English teacher Alexandra Robertson knew exactly what to do. She locked her classroom door, got her kids to help barricade it and said she was ready to use books, staplers and any other blunt objects she could find to fend off anyone who might try to enter.
Robertson’s response was far from fear-driven. It was part of a new approach to dealing with on-campus threats from outsiders called Run, Hide and Fight.
For years many schools relied on a basic lockdown approach of sealing off doors, turning off lights, shuttering windows, and silencing cellphones and other technological devices as a way to deal with an outside threat. But the 2012 massacre at Sandy Hook Elementary School in Newtown, Conn., that left six adults and 20 first-graders dead — as well as a raft of other school shootings in recent years — have forced school safety experts to rethink their approach.
Riverbed Technology today extended the level of control IT organizations can gain over wide area networks (WAN) by incorporating a policy engine into its line of SteelHead WAN optimization appliances.
The upgrade was announced today at the Riverbed FORCE 2014 conference. Amol Kabe, vice president of product management for Riverbed, says Riverbed SteelHead 9.0 now allows IT organizations to define the class of WAN links they should use based on the performance requirements of applications.
Kabe says it’s now routine for IT organizations to deploy a mix of Internet and private leased lines in support of cloud applications. As such, they need WAN appliances that can dynamically route application traffic across those WAN links based on latency requirements.
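The kind of latency-aware path selection described above can be sketched as a simple policy function. Everything here is illustrative — the field names, thresholds and function are assumptions for the sketch, not Riverbed's actual API: the idea is just that each application carries a performance profile, and the policy engine picks the cheapest link that satisfies it.

```python
def choose_link(app_profile, links):
    """Pick the cheapest link that satisfies an application's latency
    and loss requirements; fall back to the lowest-latency link if none do.

    app_profile: dict with 'max_latency_ms' and 'max_loss_pct'.
    links: list of dicts with 'name', 'latency_ms', 'loss_pct', 'cost'.
    """
    eligible = [l for l in links
                if l["latency_ms"] <= app_profile["max_latency_ms"]
                and l["loss_pct"] <= app_profile["max_loss_pct"]]
    if eligible:
        # Several links qualify: prefer the cheapest (e.g. Internet over MPLS).
        return min(eligible, key=lambda l: l["cost"])["name"]
    # Nothing qualifies: degrade gracefully to the best-performing link.
    return min(links, key=lambda l: l["latency_ms"])["name"]
```

In this toy model a latency-sensitive voice app would land on the private leased line, while bulk traffic that tolerates 200 ms would be steered to the cheaper Internet link — the dynamic routing decision Kabe describes, reduced to its essentials.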
(MCT) — Hospitals already are seeing serious, sometimes life-threatening cases of flu this year, and early indicators show that a good chunk of those sickened are working-age adults.
In Franklin County, at least 28 people had been hospitalized for flu as of Oct. 25, the most recent date for which data is available from Columbus Public Health.
That’s more than double the number at that time last year, when there had been 11 hospitalizations in the county. In the flu season before that, 2012-13, just two people had been hospitalized at this point in the season.
Columbus Public Health looks at several factors to track flu, including hospitalizations, laboratory-test results, emergency-department visits and medication sales.
In the week ending on Oct. 25, there were upticks in pediatric visits for flulike illness and respiratory illness and in over-the-counter sales of cold and cough remedies.
Statewide, emergency-department visits, hospitalizations and thermometer sales were slightly above the baseline five-year average used by the Ohio Department of Health to assess flu activity.
The perceived security threats associated with cloud services become less of an issue as businesses adopt more cloud services. This is according to Databarracks’ fifth annual Data Health Check report, which surveys over 400 IT decision makers in the UK.
The report, which questions IT leaders from organizations of various sizes and industries, revealed that 81 percent of organizations that had adopted no cloud services rated security as a top factor to consider when selecting a potential provider, with core factors such as functionality scoring as low as 38 percent.
Once an organization has adopted two or more cloud services, however, the importance of security falls to just 44 percent with factors such as provider reputation becoming much more important overall.
Peter Groucutt, managing director of Databarracks, comments: “This isn’t a case of security becoming less important as you adopt more cloud services – data security is always going to be a priority for both the organization and the provider. What we’re actually seeing is organizations moving past the ‘fear of the unknown’, as they experience cloud services first-hand.
“We’ve been hearing it for years: security is the biggest inhibitor of cloud services. CSPs have been striving to change that perception, so it’s promising to actually see the attitudes change as the market matures. Once an organization actually uses a cloud service, they realise that the practicalities of working with a provider – the functionality, the location of their data centres - become far more important than the security risks they once feared.”
Download the full Data Health Check 2014 here.
As a result of EDP Distribuição's responsibilities, its involvement was required in Portuguese efforts to comply with European Council Directive 2008/114/EC on the identification and designation of National Critical Infrastructures (NCI) and the assessment of the need to improve their protection.
EDP Distribuição is the Portuguese mainland Distribution System Operator, serving over 6 million customers in a regulated business with clearly defined responsibilities. It holds the concession to operate the Distribution Electric Power Network at Medium and High Voltage, as well as municipal concessions for the distribution of electricity at Low Voltage.
With EDP Distribuição having responsibility for several assets and systems which are essential to the maintenance of vital societal functions - health, safety, security, and the economic and social well-being of people - the challenges were many. Among them: selecting a manageable number of assets from a set of more than 400 main premises, identifying their major threats and vulnerabilities, and documenting their emergency response procedures.
(MCT) — The first call came on a Thursday, 12 days after Michael Brown was shot. Patti Knowles and her granddaughter were watching “Mickey Mouse Clubhouse.”
The caller warned that the collective of computer hackers and activists known as Anonymous had posted data online — her address and phone number and her husband James’ date of birth and Social Security number.
Anonymous had been targeting Ferguson and police officials for days. But this seemed to be an error. Patti and James weren’t city leaders, they were the parents of one — Ferguson Mayor James Knowles III.
Within hours, identity thieves had opened a credit application — the first of many — using the leaked data.
The second call came on a Friday, nearly two months later. This time, it was their bank.
Someone, posing as James, the mayor’s father, had called in and changed passwords, addresses and emails. Then the individual sent $16,000 in bank checks to an address in Chicago.
The name on the address?
Jon Belmar. Same as the chief of the St. Louis County police.
(MCT) — Florida, the most storm-battered state in the nation, now is home to groundbreaking research that allows scientists to dissect the raw power of hurricanes.
Both the University of Miami and Florida International University have built complexes that recreate realistic hurricane conditions, including the enormous wind, battering waves and rainfall they can generate.
The idea is to provide scientists with a better understanding of how the storms work, information that should help improve forecasts and bolster construction.
Additionally, the Miami Museum of Science makes it easy for patrons to view the inner workings of hurricanes — and get a feel for flying into one.
It's not easy to witness the destructive power of a Category 5 hurricane up close and personal.
Yet that's what UM scientists can now do inside a tank the size of an indoor swimming pool, housed in the school's new $50 million Marine Technology and Life Sciences Seawater Complex.
Business continuity is, especially in the Anglo-American world, not that new a concept. Not being new also means that it is probably due for a redesign. Since the inception of Business Continuity Management in the late 1980s and early 1990s, the world has changed quite a bit. The main concepts, procedures and processes of BCM, however, have not changed much in the past 25 or so years. We are still talking PDCA, we are still talking process-based business impact analysis, and we are still trying to do the work of risk managers in the fields of operational and reputational risk. We still have the BCM Lifecycle.
Those who are practitioners in the profession may have already realized that the theoretical strategies and tactics outlined by the BCM Lifecycle approach do not always meet the needs and possibilities of an organization seeking to implement BCM. The business impact analysis, for instance, needs processes, since it aims to quantify the damage caused by a failed process. But which organization has a complete, operationalized process document that allows it simply to sum up losses and damages along process chains? And how can the BCM organization define the so-called BCM strategies when it hasn’t even asked the business what workarounds it thinks it needs to cover a resource lost or damaged in a crisis situation?
By Alistair Forbes
Backup seems simple: take the important files that you need, and make sure that they are duplicated in such a way that they can be recovered.
It’s clearly a necessity. Our own personal PCs nag us if we’re failing to back up our data, and everyone knows a dire tale of woe about failing to back up - such as ma.gnolia, the company that collapsed after losing all of its customers’ data.
And yet simply having a backup isn’t enough: backup success rates today are between 75 and 85 percent. In some sectors, only three-quarters of backup recoveries are successful. The rest, despite having a backup solution in place, were only able to recover some, if any, of their data.
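One reason recoveries fail silently is that backups are rarely verified end to end. A minimal sketch of a restore check, assuming a plain file-tree backup (the function names and layout are illustrative, not any product's API), compares checksums of the source tree against a test restore:

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir, restored_dir):
    """Return the relative paths that are missing or corrupt in the restore.
    An empty list means every source file came back intact."""
    problems = []
    source_dir, restored_dir = Path(source_dir), Path(restored_dir)
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        # A missing file or a checksum mismatch both count as a failed restore.
        if not restored.is_file() or sha256_of(src) != sha256_of(restored):
            problems.append(str(rel))
    return problems
```

Running a periodic test restore and checking it this way catches exactly the failure mode described above: a backup job that reports success while producing data that cannot actually be recovered.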
Sungard Availability Services (Sungard AS) has announced plans to open a new workplace recovery centre for central London. The facility, which offers capacity for over 700 customer staff, will be equipped with the latest conferencing equipment and IT infrastructure.
The centre, which will be situated just outside of the city, is part of Sungard AS’ ongoing programme of investment in its workplace recovery facilities to ensure they continue to meet the needs of today’s workforce.
Positioned within walking distance of the city, the City of London Workplace Recovery Centre will work as part of a ‘near-far’ continuity planning solution: with organizations also provisioning workspaces within Sungard AS’ alternative facilities across the country, including recovery centres in Borehamwood, Hertfordshire, and Hounslow, London.
Built for the future, the centre will give users access to 10Gb-ready networking infrastructure and will be able to cope with heavier data traffic and increasing workloads.
“The technology that businesses are using is changing but organizations still recognise that truly effective collaboration requires regular face-to-face contact between teams,” said Keith Tilley, executive vice president, Global Sales and Customer Services Management, Sungard AS.
“While the advances in enterprise mobility and remote working offer businesses more options, customers still want to provide space for their teams to work together. Sungard AS’ latest investments are part of a wider push in ensuring our customers have access to the services they need to maintain business-as-usual in even the most difficult circumstances.”
The City of London Workplace Recovery Centre is expected to be operational in January 2015.
Most organizations (67 percent) are facing rising threats in their information security risk environment, but over a third (37 percent) have no real-time insight on cyber risks necessary to combat these threats. This is one of the headline findings of EY’s annual Global Information Security survey, Get Ahead of Cybercrime, which this year surveyed 1,825 organizations in 60 countries.
Companies lack the agility, the budget and the skills to mitigate known vulnerabilities and to prepare for and address cybersecurity threats:
43 percent of respondents say that their organization’s total information security budget will stay approximately the same in the coming 12 months despite increasing threats, which is only a marginal improvement to 2013 when 46 percent said budgets would not change.
Over half (53 percent) say that a lack of skilled resources is one of the main obstacles challenging their information security program and only 5 percent of responding companies have a threat intelligence team with dedicated analysts. These figures also represent no material difference to 2013, when 50 percent highlighted a lack of skilled resources and 4 percent said they had a threat intelligence team with dedicated analysts.
GANTA, LIBERIA — The site of the U.S. military’s future Ebola treatment center is now an overgrown grassland next to an abandoned airstrip on the Guinean border.
Two miles away, in a converted eye clinic that now houses a makeshift Ebola ward, this county’s sole doctor is waiting. He will soon run out of protective gear. Some of his employees haven’t been paid for a month.
“We all know we need the new treatment center,” said the doctor, Paye Gbanmie. “I worry that we could run out of space here.”
The U.S. military aims to quell that anxiety when it erects the new treatment center, slated to be finished later this month and manned by newly imported doctors. Just the sight of American helicopters flying over Ganta, a city of about 50,000, has lifted hopes here.
Every historical era has its lessons, such as Don’t trust totalitarian dictators to respect diplomatic niceties, Avoid land wars in Asia, and You know what’s going to happen to Sean Bean in this movie. One of the lessons of the last decade is certainly Information is not intelligence. Unfortunately, many people who do software requirements, or depend on them to build and test software, have not seen the relevance of that maxim in their own work.
Requirements in software development serve much the same purpose as intelligence in national security: they are supposed to provide actionable, reliable insights. “Actionable” is largely a question of format, which software professionals can control directly. Older questions like, What is the Jesuitical distinction between a requirement and a specification? and newer questions like, What kind of words do we need to supplement the pictures that we’ve created in this wireframe? have the same purpose: make sure that the developer, tester, or some other species of technologist understands what action to take, based on the information provided. In a similar fashion, the US President’s daily intelligence briefing follows a format that its intended audience finds useful.
The reliability of the information is not under the complete control of software professionals. In fact, we should always assume that the information we have is, to some degree, unreliable. We can reduce the amount of unreliability, but it will never reach 100% certainty. People in the intelligence profession deal with this problem in a variety of ways. Here are a few examples:
If a major incident affected your business tomorrow, what are the processes, machinery or even suppliers that would be really hard to replace quickly – the really awkward ones, the unique machinery or equipment that perhaps there isn’t really a plan for, let alone a plan that gets you back within an acceptable recovery time?
Spotting the problems is relatively easy, particularly when you get into manufacturing or supply chain businesses. The challenge for Business Continuity Managers is to do something about them and develop practical, simple recovery plans – even for the hard stuff.
I lead Business Continuity Management at Rolls-Royce Plc, where we have several key manufacturing processes that are both important and challenging to recover quickly.
Over the last year, we have developed a simple but effective approach to business recovery planning for these processes and it fits in just two pages.
This approach has helped the business to understand the risk, recover more efficiently and to prioritise capital investment decisions.
At the 2014 BCI World Conference and Exhibition, I’ll be showing you how this works along with providing practical hints and tips so that you can make it work in your business too.
James Stevenson will be discussing this issue further on day one of the BCI World Conference and Exhibition on Wednesday 5th November.
The past few weeks of the Ebola outbreak in Sierra Leone, Guinea and Liberia have gripped the U.S. and the world in bizarre, comical and concerning ways. Every day the news brings stories of sexy hazmat suits that are the most sought out Halloween costumes, it’s fodder for late night talk shows and the Centers for Disease Control and Prevention (CDC) finally released new health-care worker protection guidelines.
The Ebola virus has deeply rocked the U.S. public health and health-care community and the public at large, even though we are not likely to see the same Ebola transmission and mortality rates seen in West Africa. There have been only four cases in the U.S. and one death, but many of my health-care emergency management colleagues can attest that we are now spending an inordinate amount of time on infection control webinars, donning and doffing training and fit testing, as well as ordering as much personal protective equipment (PPE) as they can get their hands on.
There is nothing quite so scary in the IT universe as tearing down what you just built in order to make way for a new technology. Well, perhaps complete and utter network failure, but that’s about it.
But with the advent of containers, it seems like the enterprise is on the cusp of reworking one of the fundamental elements of the cloud, converged infrastructure, software-defined infrastructure, data mobility and just about every other initiative that is driving data center development these days. Fortunately, container-based virtualization does not require a forklift upgrade to the virtual layer, but it does alter the way virtual machines are managed, and it could cause a massive rethink when it comes to devising the higher-order architectures that are slated to drive business productivity in the future.
To some, however, it was traditional virtualization’s limitations in supporting advanced data architectures that led to the rise of containers in the first place. As Virtualization Review’s Jeffrey Schwartz put it, there was growing consensus that the application loads of elastic, cloud-based platforms and applications were already pushing the limits of even the most advanced virtualization platforms, and what was needed was a higher degree of portability, speed and scale. Containers achieve this by allowing a single operating system to handle multiple apps at once, which is a much more elegant solution than deploying numerous virtual machines each populated with its own OS.
By Geary Sikich
Inferno, the first part of Dante's Divine Comedy and the inspiration for Dan Brown's latest bestseller of the same title, describes the poet's vision of Hell. The story begins with the narrator (the poet himself) lost in a dark wood, where he is attacked by three beasts he cannot escape. He is rescued by the Roman poet Virgil, sent by Beatrice (Dante's ideal woman). Together, they begin the journey into the underworld, through the Nine Circles of Hell.
As business continuity planners you may have experienced or are experiencing the journey through the Nine Circles of Planning Hell. When you were assigned the responsibility for developing the business continuity plan, or disaster plan, or emergency plan, or any of the myriad regulatory driven planning initiatives; you found yourself in the first level of Planning Hell – Limbo. Your journey probably continued to several of the nine circles of planning hell, or maybe you got lucky and were able to stay in that nice state of limbo until you moved on to your next assignment, job or career change. If you were not so lucky; you travelled through all nine circles of planning hell. Hopefully, if you did travel through all nine circles of planning hell, you, like Dante, emerged to find the light.
David Sandin looks at whether we have heeded the lessons of the Heartbleed bug, the implications of Shellshock and the future security of open-source coding.
‘The first time is an accident, the second a coincidence, but the third time is stupidity’ has long been the mantra of infuriated parents, exasperated at their children’s ability to make the same mistake multiple times over. Oddly, it was also the phrase that came to mind as news of the Shellshock bug targeting open-source code broke: just six months on from the Heartbleed attack.
Shellshock allows hackers to easily exploit many web servers that use the free and open source Bash command line shell. So far hackers have focussed their efforts on exploiting the weakness to place malware on vulnerable web servers, with the intention of creating armies of bots for future distributed denial of service attacks, flooding website networks with traffic and taking them offline. While it was initially thought that the vulnerability would only affect machines that run Bash as their default command line interface, it is now suspected that machines which invoke Bash indirectly could also be exploited.
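The mechanics of the flaw are simple to demonstrate: pre-patch Bash executed any command trailing a function definition stored in an environment variable. A minimal probe, sketched here in Python (it assumes a `bash` binary is available in `/bin` or `/usr/bin`), checks whether a given Bash binary is affected by the original CVE-2014-6271 bug:

```python
import subprocess

def bash_is_vulnerable(bash="bash"):
    """Probe a bash binary for CVE-2014-6271 (Shellshock).

    Exports an environment variable holding a function definition with a
    trailing command. A vulnerable bash runs the trailing `echo vulnerable`
    while importing the variable; a patched bash does not.
    """
    env = {"PATH": "/usr/bin:/bin", "x": "() { :;}; echo vulnerable"}
    out = subprocess.run(
        [bash, "-c", "echo probe done"],
        env=env, capture_output=True, text=True,
    ).stdout
    return "vulnerable" in out

# On a patched system this reports False: the crafted variable is
# treated as plain data and only "probe done" is printed.
print(bash_is_vulnerable())
```

Any CGI script, DHCP client hook or SSH forced command that passed attacker-controlled data into Bash's environment gave remote attackers the same execution path.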
Texas regulators on Tuesday tightened rules for wells that dispose of oilfield waste, a response to the spate of earthquakes that have rattled North Texas.
The three-member Texas Railroad Commission voted unanimously to adopt the rules, which require companies to submit additional information – including historic records of earthquakes in a region – when applying to drill a disposal well. The proposal also clarifies that the commission can slow or halt injections of fracking waste into a problematic well and require companies to disclose the volume and pressure of their injections more frequently.
The commissioners – all Republicans – said the vote showed how well Texans can respond to issues without federal intervention.
Commissioner Barry Smitherman called the vote a “textbook example” of how the commission identifies an issue and “moves quickly and proactively to address it.”
“We don’t need Washington,” he said.
The federal Environmental Protection Agency last month said it supported the proposed rules.
The times, they are a-changing. Mobile computing devices, not to mention BYOD and a millennial attitude, mean that a substantial number of employees in enterprises now do their work away from their desks. Whether at home, on a bus, train or plane, or in their favourite coffee-shop, if there’s a Wi-Fi connection available, there’s a potential workspace in the making. But naturally enough, all this may then escape the control of the enterprise, or at least partially so. For instance, how can companies implement effective work area recovery for such nomadic workers in the event of an IT incident?
By 2025, we should expect to have experienced a “significant” cyberattack, according to a canvass of technology experts and researchers conducted by the Pew Research Internet Project and reported upon today.
To this group of experts, Pew posed the following question:
By 2025, will a major cyber attack have caused widespread harm to a nation’s security and capacity to defend itself and its people? (By “widespread harm,” we mean significant loss of life or property losses/damage/theft at the levels of tens of billions of dollars.)
Over 1,600 responses came in; respondents were not required to reveal their names.
A lot of people in the IT industry are pulling for the hybrid cloud. Enterprise executives are intrigued by the idea of low-cost, broadly federated data infrastructure distributed over large, geographic areas, while traditional data center vendors are trying to preserve their legacy product lines in the new cloud era.
But just because people want it, does that make it a good idea? If the idea is to capitalize on the benefits of both public and private cloud infrastructure, will hybrid solutions undermine that effort by watering down the advantages of pure-play approaches?
One thing is clear: Many enterprises see the hybrid cloud as the end-game of the virtual transition. A recent survey by Gigaom indicates that more than three-quarters of top decision-makers have adopted hybrid as a core component of their ongoing cloud strategies. However, it is becoming evident that this is more than a simple change in technology—it’s a top-to-bottom shift in the entire enterprise structure that will affect everything from data and infrastructure to business processes, governance and the ownership of digital assets.
On Oct. 28, Healthmap.org reported the latest figures on the Ebola outbreak: Spain 1 case; Guinea 1,553 cases and 926 deaths; Sierra Leone 3,896 cases and 1,281 deaths; Liberia 4,665 cases and 2,705 deaths. And for the U.S., 4 cases and one death. The website's Ebola timeline also provides projections on the number of cases and deaths, based on infection rate data from the World Health Organization, a list of the most recent articles about Ebola outbreaks, as well as relevant social media postings.
Healthmap is one example of how easy it is to find information on this rapidly growing epidemic -- and it also represents the way technology can play a major role in the effort to track and control the disease. For example, mobile phones are perhaps the most ubiquitous type of technology available in Africa, used by millions there. So it didn’t take long for researchers to identify the devices as a possible way to not just send people information about the disease, but also to track it.
And with 95.5 percent of the global population having mobile subscriptions, call-data records (CDRs) are one way epidemiologists can see where people have been and where they're headed, based on past movements.
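The idea behind CDR-based tracking can be sketched in a few lines: each record ties a (suitably anonymized) subscriber to the cell tower region that handled a call, so ordering records by time and counting region-to-region transitions yields a crude population movement matrix. The records and region names below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical anonymized CDRs: (subscriber_id, timestamp, tower_region)
cdrs = [
    ("a1", "2014-10-01T08:00", "Monrovia"),
    ("a1", "2014-10-02T09:30", "Ganta"),
    ("b2", "2014-10-01T10:00", "Ganta"),
    ("b2", "2014-10-03T11:00", "Gbarnga"),
    ("b2", "2014-10-04T12:00", "Ganta"),
]

def movement_matrix(records):
    """Count region-to-region transitions per subscriber, ordered by time."""
    by_subscriber = {}
    for sid, ts, region in sorted(records, key=lambda r: (r[0], r[1])):
        by_subscriber.setdefault(sid, []).append(region)
    moves = Counter()
    for path in by_subscriber.values():
        for src, dst in zip(path, path[1:]):
            if src != dst:
                moves[(src, dst)] += 1
    return moves

print(movement_matrix(cdrs))
```

Aggregated over millions of subscribers, matrices like this let epidemiologists estimate where an outbreak is likely to travel next, without needing to identify any individual.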
NG9-1-1, Explained: 7 Important "Need to Knows"
Next Generation 9-1-1 (NG9-1-1) is a hot topic in the public safety and local government communities. But the specifics of this long-sought-after initiative can be complex and there are several parties that play important roles. Below is a list of critical elements and key players to help you make sure you’re up to date.
Glossary of Terms:
- Next Generation 9-1-1 (NG9-1-1)
- Public Safety Answering Point (PSAP)
- National Emergency Number Association (NENA)
- Analog-Based Infrastructure
- U.S. Department of Transportation Intelligent Transportation Systems Program (ITS)
- Systems Integrator
By Vikram Duvvoori, Chief Technologist and Corporate Vice President - Enterprise Transformation Services, HCL
IT leaders -- and the executive teams they report to -- have been bombarded with a virtual “shock and awe” campaign around Big Data. IDC estimates that the 1.8 zettabytes (1.8 trillion gigabytes) of information generated in 2011 will grow by a factor of nine over the next five years. Gartner has a similar take when looking at the segment, predicting that the Big Data market, now valued at $5 billion in revenues annually, will explode to $53 billion by 2016.
The initial reaction, and rightfully so, is “Wow!” and “How in the world are we going to deal with all this?”
While considerable attention has been placed on the three Vs of Big Data — volume, velocity, and variety — the most important aspect has been on the back burner: the actual value to the business.
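The growth figures quoted above are easy to sanity-check. A factor-of-nine increase on 1.8 zettabytes implies roughly 16.2 ZB after five years, which works out to a compound annual growth rate of about 55 percent (a quick arithmetic check using the IDC numbers cited in this article):

```python
base_zb = 1.8          # zettabytes generated in 2011 (IDC estimate)
growth_factor = 9      # projected growth over five years
years = 5

projected_zb = base_zb * growth_factor
cagr = growth_factor ** (1 / years) - 1   # compound annual growth rate

print(f"Projected volume: {projected_zb:.1f} ZB")   # 16.2 ZB
print(f"Implied CAGR: {cagr:.1%}")                  # about 55.2% per year
```

Growth at that rate is exactly why the "value" question matters: storage and processing budgets cannot grow 55 percent a year indefinitely.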
At the 2014 BCI World Conference and Exhibition, participants will have an opportunity to listen to a real case study of the integration of Enterprise Risk Management (ERM) and Business Continuity Management (BCM) as an independent function. This is an innovative and forefront role for the ERM and BCM function.
In my presentation, I will show how the traditional reporting structure and work functions of ERM and BCM in an organisation are usually separated from each other. The ERM and BCM functions are typically part of the executive management team and the head of ERM and BCM reports to the executives such as the CEO or the CFO.
The US National Fire Protection Association (NFPA) has made two announcements regarding the current revision process for the 2016 edition of its business continuity standard, NFPA 1600.
First, the Public Comment closing date for online submissions is November 14th, 2014. For details on how to submit comments, please click here.
Second, the date for the Second Draft Meeting to review the updated standard will be March 24th-26th, 2015 at the Palmer House Hilton hotel in Chicago. For more details on this activity, please click here.
NFPA 1600, in its current 2013 edition, has been recognized as the National Preparedness Standard by the 9/11 Commission. It is also the US national standard on emergency preparedness, with an important focus on business continuity. NFPA 1600, 2013 Edition, is also one of the three standards used in the voluntary Private Sector Preparedness (PS-Prep) program administered by the Department of Homeland Security.
A new survey from Lieberman Software Corporation has revealed that 78 percent of IT security professionals are confident that firewalls and antimalware tools are robust enough to combat today’s advanced persistent threats.
Lieberman Software says that these findings highlight the fact that while cybercrime is on the rise, many organizations are still dangerously relying on outdated perimeter security solutions to defend against the latest threats.
The survey, which was carried out at Black Hat USA in August 2014, also revealed that 22 percent of those surveyed do not think that tools like firewalls and antivirus are able to defend against APTs. However, given the surge in organizations suffering advanced targeted cyber attacks, this number should have been much higher.
When the topic of encryption comes up in conversation (and doesn’t it always?), skeptics are fond of interjecting self-satisfied statements along the lines of, “The question isn’t whether encryption is crackable, but when will it be cracked?” In the face of such smugness, I usually counter with the ego-deflating rejoinder, “Let me know when you’ve joined us in the cloud era.”
You see, when data is encrypted in the cloud, your keys remain within your control; thus only authorized users have access to protected data. Unauthorized users will only see indecipherable codes, which is fine, but how do you think unauthorized users will attempt to access and exploit said data?
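The key-control point can be made concrete with a toy example: if data is encrypted client-side before it ever reaches the cloud, the provider and any other unauthorized party hold only opaque bytes. The sketch below builds a keystream from SHA-256 in counter mode purely for illustration; it is not a real cipher, and in practice you would use a vetted construction such as AES-GCM from an audited library:

```python
import hashlib
import secrets

def toy_keystream_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter-mode keystream. Illustration only;
    never use home-grown constructions like this for real data."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

key = secrets.token_bytes(32)        # stays with the data owner, not the cloud
plaintext = b"quarterly results: confidential"
ciphertext = toy_keystream_cipher(key, plaintext)

# The provider stores only ciphertext; applying the same keystream
# again recovers the plaintext, but only for whoever holds the key.
assert ciphertext != plaintext
assert toy_keystream_cipher(key, ciphertext) == plaintext
```

The design point is the asymmetry of knowledge, not the cipher itself: whoever holds the key sees data, everyone else sees noise, and that holds regardless of where the ciphertext is stored.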
With the security threats around today, the sheer mass of information and the vulnerabilities to attack, it has to be admitted that information security is a challenge. But not an insurmountable one. The right information security takes planning and organisation. The advantages include prevention of loss and damage through information being stolen or compromised, as well as a more alert, capable workforce. So why does one recent survey show a downwards trend in implementing information security procedures?
Leaders of business intelligence (BI) projects should push for a revamped data architecture that supports more integrated data, even if it means looking at a Big Data option, according to a recent InfoWorld column.
In “Why BI projects fail -- and how to succeed instead,” software consultant Andrew C. Oliver says it’s essential to be able to integrate large amounts of data. BI tools tend to be resource-hungry, he adds.
So, rather than viewing technologies such as Hadoop, data lakes, enterprise data hubs and data warehouses as “trendy,” you should view them as essential to BI success, argues Oliver.
“A successful BI project does not forget about either business integration (more later) or data integration,” he writes. “Your requirements should dictate what, how much, and how often (that is, how ‘real time’ you need it to be) data must be fed into your data warehousing technology.”
The proposition that human resources hold one of the golden keys to successful business continuity will be presented on day two of the BCI World Conference and Exhibition in the Listen Stream. David Evans and Lynne Donaldson of Corpress LLP will argue that the HR role in business continuity is often understated, possibly not understood and for many organisations undervalued.
Please share your thoughts with us on how important HR (Personnel) are to your BCM process: are they heavily engaged or just reactive when pushed and how much time do you spend working with them?
FXT Edge Filers and Lattus Enable High-Performance, Cost-Effective Access to Content
PITTSBURGH, Pa. – Avere Systems, a leading provider of enterprise storage for the hybrid cloud, and Quantum Corp. (NYSE: QTM), a leader in scale-out storage systems, today announced a joint storage solution designed to optimize workflows for the oil and gas industry. The combined approach provides an integrated network attached storage (NAS) solution with cloud storage that extends data availability at a lower cost, enabling upstream workflows to keep strategic information close at hand, shortening project cycle time and improving exploration analysis. Avere Cloud NAS powered by FlashCloud™ combined with Quantum Lattus™ extended online storage delivers cost-effective cloud storage with the high-performance access required for oil and gas exploration.
Advantages of FXT Edge Filers and Lattus for Oil and Gas Environments
With more data than ever before being generated for oil exploration, traditional solutions relying on replication and RAID do not provide cost-effective global access to content, can place a heavy burden on network storage, and drive storage capacity demands beyond budget. Together, Avere FlashCloud on FXT Edge Series filers and Quantum Lattus extended online storage provide a comprehensive solution that delivers several key advantages:
- Extreme Scalability: Avere FXT filers deliver scalable NAS performance in a clustered configuration while FlashCloud provides access to Quantum Lattus, offering cost-effective performance and capacity that is simple to manage, capable of expanding to hundreds of petabytes without disruption.
- Flexible Onramp to the Cloud: Avere’s global namespace joins Lattus and legacy NAS into a single pool of storage so oil and gas users can store their data wherever it makes most sense and adopt Lattus storage at a managed pace. Avere’s FlashMove® transparently moves live, online data to Lattus without disruption while FlashMirror® replicates data to Lattus for disaster recovery.
- Durable, Self-Healing Protection: Lattus delivers built-in data resiliency with self-healing protection, guarding data against component failure and even site disaster.
- Lower Cost of Ownership: The combination of FXT Edge filers with Lattus’ cost-effective object storage leverages efficient data spread algorithms that require less storage than RAID to protect data, enabling 70% or more savings in total cost of ownership compared to traditional NAS implementations.
- High-Speed Access: Avere Cloud NAS - powered by FlashCloud running on FXT Edge filers - eliminates the latency of access to content pools that reside in cloud storage.
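The storage-efficiency claim in the "Lower Cost of Ownership" point comes from erasure coding: an object is split into k data fragments plus m parity fragments, any k of which can rebuild it. Comparing raw overheads makes the arithmetic visible (the 10+6 layout below is a generic illustration, not Quantum's published parameters, which the announcement does not specify):

```python
def raw_overhead(data_fragments: int, parity_fragments: int) -> float:
    """Raw storage consumed per byte of user data for k+m erasure coding."""
    return (data_fragments + parity_fragments) / data_fragments

# Triple replication: the simple baseline for surviving site-level failures
replication_3x = 3.0

# A hypothetical 10+6 erasure-coded layout: survives any 6 lost fragments
ec_10_6 = raw_overhead(10, 6)   # 1.6x raw storage per byte stored

savings = 1 - ec_10_6 / replication_3x
print(f"Erasure coding uses {ec_10_6:.1f}x raw storage vs "
      f"{replication_3x:.0f}x for triple replication: "
      f"a {savings:.0%} reduction in raw capacity")
```

Raw capacity is only one input to the 70% total-cost figure the vendors cite; power, footprint and avoided migration cycles would make up the rest.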
Simon Robinson, Senior Analyst, The 451 Group
“Avere’s added support for object-level storage such as Lattus makes it an ideal solution for addressing ever increasing storage demands in an upstream workflow environment, and Quantum’s focus on infinitely scalable object storage delivers the perfect complement to Avere’s FXT Edge filers. The combined solution shows great potential for bringing a more economical approach to meeting the storage access and performance needs of the oil and gas industry.”
Mike McMahon, Vice President, Business Development, Avere Systems
“This joint offering provides a blended cost model that brings high-performance NAS to a very large pool of economical storage. Compared to conventional approaches that rely strictly on primary storage, we’re offering a solution that combines a high-performance tier with a cost-effective petascale storage tier. Avere FXT Edge filers provide fast access to content where the user needs it, when they need it.”
Geoff Stedman, Senior Vice President, StorNext Solutions, Quantum
“The combination with Avere extends online storage by delivering fast access to seismic data via a cost-effective private cloud solution with the extreme scalability and performance needed in the oil and gas market. With Quantum’s long history of delivering solutions for oil and gas, we know how critical it is for companies to store their valuable seismic content without limits and avoid contending with technology refreshes, data migration cycles and the very high costs associated with relying on traditional RAID storage.”
The combined Avere-Quantum solution is currently available.
Founded in January 2008, Avere Systems is radically changing the economics of data storage. Avere’s solutions give companies – for the first time – the ability to put an end to the rising cost and complexity of data storage by allowing customers the freedom to store files anywhere in the cloud or on premises without sacrificing the performance, availability or security of their data.
Based in Pittsburgh, Avere is led by veterans and thought leaders in the data storage industry and is backed by investors Lightspeed Venture Partners, Menlo Ventures, Norwest Venture Partners, Tenaya Capital, and Western Digital Capital.
Quantum is a leading expert in scale-out storage, archive and data protection, providing solutions for capturing, sharing and preserving digital assets over the entire data lifecycle. From small businesses to major enterprises, more than 100,000 customers have trusted Quantum to address their most demanding data workflow challenges. With Quantum, customers can Be Certain™ they have the end-to-end storage foundation to maximize the value of their data by making it accessible whenever and wherever needed, retaining it indefinitely and reducing total cost and complexity. See how at www.quantum.com/customerstories.
East Timbalier Island is under threat of disappearing due to a combination of hurricane and coastal storm events, subsidence, and other factors
BROOMFIELD, Colo. – MWH Global, the premier solutions provider focused on water and natural resources, has been awarded a contract by the Louisiana Coastal Protection and Restoration Authority (CPRA) to provide engineering services for the restoration of the East Timbalier Barrier Island off the coast of Louisiana. Funding for the East Timbalier engineering and design services contract is being provided to CPRA by the National Fish and Wildlife Foundation.
East Timbalier Island is part of the Louisiana barrier island chain that separates Terrebonne and Timbalier bays from the Gulf of Mexico. East Timbalier Island has experienced significant loss of land due to multiple hurricanes, subsidence, and reduced sediment loads from the Mississippi River. The Island currently consists of two severely degraded segments, and is anticipated to disappear unless restoration activities are undertaken to replenish sediment that has been lost. The coastal barrier islands in the Gulf of Mexico off the Louisiana coast provide critical beach, dune and marsh habitat. They also serve to protect fragile interior marshes and infrastructure and provide quiescent bay habitats preferred by many fish and invertebrate species by lowering wave energy and storm surges originating from the Gulf. The Louisiana CPRA has identified the restoration of a number of islands similar to East Timbalier Island as an important part of the state's 2012 Coastal Master Plan and the Fiscal 2013 Annual Plan for Ecosystem Restoration and Hurricane Protection in Coastal Louisiana.
The MWH project scope will involve the coastal engineering and design to re-establish the historic island footprint with beach, dune, and marsh habitat creation, reconnecting the two segments. Project scope activities will be accomplished with close support from MWH’s key project team members to include Coastal Engineering Consultants, Ocean Surveys, Inc., GeoEngineers, Fugro/John Chance Land Surveyors, R.C. Goodwin & Associates, and Coastal Technology Corporation. Significant effort will be applied toward identifying suitable nearshore and offshore sediment sources to build the desired island habitats through coastal geophysical and geotechnical investigations and engineering analysis. Throughout the contract term, extensive coordination will be undertaken by MWH with Federal and State agencies for permitting and with oil and gas operators that have significant infrastructure in the region.
“This project represents a unique opportunity for the Louisiana coastal region to become better prepared for the future and to combat ongoing issues jeopardizing the long-term health of the coastal ecosystem. MWH is extremely proud to be a partner with CPRA and the associated agencies on such an important restoration project,” commented Marshall Davert, president for government and infrastructure in the Americas and Asia Pacific for MWH.
Construction and restoration of the improved East Timbalier Island is expected to begin in 2017.
MWH Global is the premier solutions provider focused on water and natural resources for built infrastructure and the environment. Offering a full range of innovative, award-winning services from initial planning through construction and asset management, we partner with our clients in multiple industries. Our nearly 8,000 employees in 35 countries spanning six continents are dedicated to fulfilling our purpose of Building a Better World, which reflects our commitment to sustainable development. MWH is a private, employee-owned firm with a rich legacy beginning in 1820. For more information, visit our website at www.mwhglobal.com or connect with us on Twitter and Facebook.
Government-backed online service aims to help 20,000 UK SMEs to boost their exporting capability by 2016
HSBC has today been announced as the first corporate sponsor of Open to Export, an online export community backed by Government and business, which aims by 2016 to help 20,000 businesses by offering free access to its forum, webinars and industry contacts.
The service has been developed in direct response to the need identified by UK SMEs for trusted, practical advice available all in one place on the full range of issues they face when identifying and moving into new markets.
The Open to Export website provides an open, collaborative and responsive platform where SMEs can connect, learn and talk with experts and peers to improve their effectiveness when doing business abroad.
Through the website companies can:
- Get practical insight and advice from successful exporters and subject matter experts via webinars, country and topic guides and case studies
- Benefit from bespoke answers to export questions from experts and peers in the forum and live Q&As
- Connect with contacts from support organisations, events, opportunities and potential partners
Regular contributors include government departments, trade associations, private sector trade specialists and successful exporters such as UKTI, HMRC, Defra, the Institute of Export, the British and Overseas Chambers of Commerce and DHL.
Since launching in late 2012, the website now regularly attracts 30,000 unique visits a month and has a community of nearly 6,000 registered users.
As Principal Partner, HSBC is supporting Open to Export alongside founders UK Trade & Investment, the Federation of Small Businesses, the Institute of Export, and marketing solutions and websites provider Yell as part of its ongoing commitment to helping businesses of all sizes achieve their growth ambitions.
The bank will contribute content to the website and have a presence at key trade and sector-specific events throughout the year and across the UK, alongside Open to Export, in order to engage customers and prospects face to face and connect them to practical support.
Ian Stuart, Head of Commercial Banking UK and Co-Head of Commercial Banking Europe at HSBC, said: “HSBC is determined to help Britain's ambitious businesses grow and expand into new markets overseas. As the leading international trade bank, we are here to support businesses of all sizes, and at whatever stage in their international growth. That's why we are bringing our knowledge, expertise and global connectivity to the partnership with Open to Export and helping to build a service that offers practical support and advice to those looking to export."
Successful technology entrepreneur and Open to Export Chairman, Julian Hucker, said: “Open to Export has proved there is a demand for a peer to peer community focussed on exporting. HSBC’s involvement now brings a wealth of international trade expertise and connections that will greatly help us to deliver our ambitious plans to develop the service and grow that community to help 20,000 companies by 2016. I wish this had been around when I was building my last business.”
The announcement of HSBC’s partnership with Open to Export comes ahead of UKTI’s Export Week, where both will be exhibiting as part of the Explore Export roadshow. Open to Export will also be conducting the first in a series of surveys aiming to identify the driving factors behind decisions being made on international growth in the coming year, which will in turn help shape the website’s content and features to ensure a relevant and responsive service for its users.
Companies can browse the site and register for free at www.opentoexport.com.
Delivers Comprehensive Mac Management with SCCM on Par with Native PC Management
Latest version enables even better management of Mac computers in corporate environments, with improved security capabilities that give IT tighter control over Mac machines
LONDON – Parallels® has launched an update to Parallels Mac Management (parallels.com/uk/managemacs), the product that extends Microsoft® System Centre Configuration Manager (SCCM) functionality to Mac® computers. Parallels Mac Management enables IT departments to discover, enrol and manage Mac machines just as they do existing PCs – all through a single pane of glass.
Taking Mac management further than native SCCM, this latest version offers new software distribution capabilities that benefit both IT administrators and end users. These include the ability for Mac users to install approved software, features for planning hardware refresh cycles accurately, improvements that empower IT administrators to make their Mac computers more secure using advanced encryption techniques, and a streamlined process for enrolling Mac computers in SCCM on large corporate networks.
Parallels tackles the perennial problem of Mac management head-on with Parallels Mac Management 3.1, enabling everyone from IT administrators to system architects to CIOs to leverage their current SCCM infrastructure and extend it to Mac without unnecessary costs. Users are empowered to:
- Manage and control Mac computers by leveraging their existing SCCM infrastructure, resources and talent
- Gain full visibility into the Mac computers coming onto their networks
- Take control and take action on those machines as they would on PCs – all while working in the same familiar environment
- Easily leverage Mac technologies, such as the configuration manager, to secure Mac computers
- Deploy and manage Parallels Desktop Enterprise Edition, as well as other virtual machines – key for Mac users who need access to business-critical Windows apps
A recent winner of Microsoft’s Best of TechEd 2014 award in the Systems Management category, Parallels Mac Management 3.1 includes a number of new features:
- The ability to manage Macs running Mac OS X 10.10 Yosemite via Microsoft SCCM
- Application Portal for Mac, which allows Mac users to install approved software even without administrative rights
- Mac warranty (AppleCare) status reporting, which lets IT plan hardware refresh cycles better
- Support for unique FileVault 2 personal recovery keys and the ability to escrow these keys, which increases security by encrypting Mac computers on the network
- Support for PKI and HTTPS, which enables support for an SCCM infrastructure operating in secure HTTPS mode
- Network discovery UI improvements, which enable the use of SCCM site boundary information in network discovery configuration settings, saving significant time for IT administrators who need to scan large corporate networks
Additional features include:
Asset Inventory
- Scan the corporate network automatically to discover Mac computers, then auto-enrol them in SCCM
- Gather hardware and software inventory of all Mac machines on the network
- Leverage native Microsoft SCCM reports to view information about Mac computers
Configuration Management
- Enforce compliance via extended SCCM configuration items: Mac OS X configuration profiles and shell scripts
Software and Patch Deployment
- Central management and installation of software packages and patches
- Support for deployment of a wide range of software packages: .dmg, .pkg, .iso, .app, scripts, and stand-alone files
- Support for silent deployment and deployment with user interaction
Mac OS X® Image Deployment via SCCM
- Seamless integration of Mac OS X image deployment into SCCM workflow
- Deployment of a preconfigured, company-standard OS X installation on new Mac computers
“Features and improvements made in this latest version of Parallels Mac Management for SCCM were added in direct response to requests from customers and prospects, and we are very excited to bring a product to market that will make IT administrators’ lives easier,” said Jack Zubarev, Parallels President. “We know that managing Macs in the enterprise can feel like a lawless Wild West, and we are working to change that by offering products that make configuration, deployment and overall management of Macs in business environments more efficient and secure.”
Parallels offers resources for IT departments as they go through the proof of concept and implementation process for Parallels Mac Management. These include a hosted test lab program that lets IT professionals test Parallels Mac Management before installing it, and a JumpStart Program that includes Parallels Mac Management for one year on up to 100 Mac computers, as well as 10 hours of assisted installation and configuration support.
Parallels will be demoing Parallels Mac Management at TechEd Europe 2014 in Barcelona, October 28–31, 2014. Please stop by our booth (#99) for a demo and the chance to win a JumpStart Program for Parallels Mac Management (£3,000 value).
Bringing Mac into Business Environments
Parallels Mac Management is part of a larger suite of products for businesses of all sizes that work in cross-platform environments. Other offerings include Parallels® Access™ for Business (parallels.com/access-business), a remote access application for iPad®, iPhone® and Android devices that lets people run PC and Mac applications on their devices with touch gestures – just as if the apps were native to the device. Parallels Desktop® for Mac Enterprise Edition (parallels.com/enterprise) is the best way to run Windows apps on Mac, giving employees easy access to all the tools they need. Using Parallels Desktop Enterprise Edition, IT managers can support Windows applications for Mac users with a configurable, policy-compliant solution that fits seamlessly into their existing business processes.
Availability and Pricing
Parallels Mac Management is available immediately and starts at £30 annually per Mac. Parallels Access for Business starts at £49.99 per year for five computers. Parallels Desktop for Mac Enterprise Edition starts at £66 per year per Mac.
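As a rough cross-check on the list pricing, the £3,000 value quoted for the JumpStart Program prize (which covers up to 100 Macs for one year) is consistent with the £30 per-Mac annual price. A quick sketch:

```python
# Figures from the announcement: £30 per Mac per year;
# the JumpStart Program covers up to 100 Macs for one year.
price_per_mac_per_year = 30   # £
jumpstart_macs = 100

jumpstart_value = price_per_mac_per_year * jumpstart_macs
print(f"£{jumpstart_value:,}")  # prints £3,000
```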
Parallels is a global leader in hosting and cloud services enablement and cross-platform solutions. Parallels began operations in 2000 and is a fast-growing company with more than 900 employees in North America, Europe, Australia and Asia. Visit parallels.com for more information.
Stay connected with Parallels and our online community: Follow us on LinkedIn (linkedin.com/company/parallels), like us on Facebook (facebook.com/parallelsdesktop), follow us on Twitter (twitter.com/parallelsmac) and visit our Apple in the Enterprise blog (blogs.parallels.com/enterprise-blog).
If you are a senior data analytics professional working in the health care sector, what does the Ebola situation mean to you? To one such professional, it means the potential for an all-hands-on-deck response that would break down data-sharing barriers and shift the data-analytics focus from “blocking and tackling” to true innovation.
That professional is Mike Berger, vice president of enterprise analytics at Geisinger Health System, a hospital system in Danville, Pa. I had the opportunity to speak with Berger last week at the Teradata 2014 Partners conference in Nashville, and I brought up the Ebola question almost as an afterthought. If Ebola became a major problem in the United States, I asked him, how might that affect his life? How might data analytics be tapped to deal with the problem?
“The clinical leadership would push us to break down the barriers that keep us from sharing data from one provider to another, which Obamacare was trying to do, but at a very slow pace,” Berger said. “I think the world would turn on its side, and we would be asked to instantly try to interconnect our data storage with other groups’ data storage, and someplace would become the place where the mining would happen, and that would probably be us, in our geographic region. It would be a horrific experience, but the value we would get from an analytics perspective in breaking down these barriers, really would be tremendous.”
When a compliance crisis strikes your industry, it shines a spotlight on how your own company is managing its compliance risk. Newspaper reports on high-profile cases of bribery, corruption, conflicts of interest or misconduct can prompt calls from your Audit Committee Chair and other key stakeholders who will be asking anxious questions. Even if it is a competitor facing these challenges, it falls on the Chief Compliance Officer to quell concerns in the organization. Among the likely queries:
- “Could the legal and public nightmares felt by this other company happen to us?”
- “Are we legally exposed by similar unethical practices within our own company?”
- “How can we be sure we’re not?”
The CCO must have programs in place and be prepared to provide clear visibility into the most critical risk areas. This means delivering essential data and communicating a detailed picture of the risk landscape to concerned stakeholders without causing misunderstanding or information overload. It means giving Board members accurate reports and fostering an understanding of risk and compliance within the Board, and it means giving Board members the knowledge and guidance they require to provide the necessary support and resources.
The Board of The Committee of Sponsoring Organizations of the Treadway Commission (COSO) has announced a project to update the 2004 Enterprise Risk Management–Integrated Framework.
The Framework is widely used by management to enhance an organization’s ability to manage uncertainty, consider how much risk to accept, and improve understanding of opportunities as it strives to increase and preserve stakeholder value.
The new project is intended to enhance the Framework’s content and relevance in an increasingly complex business environment so that organizations worldwide can attain better value from their enterprise risk management programs.
The update will revise concepts developed in the original Framework to reflect the evolution of risk management thinking and practices, as well as changing stakeholder expectations. The initiative will also develop tools to assist management in reporting risk information and in reviewing and assessing the application of enterprise risk management.
Recent research by Accenture Analytics shows that nine out of 10 CXOs are happy with Big Data’s business outcome, with leaders at large companies being most satisfied.
Enterprises with annual revenues of over $10 billion said Big Data was “extremely important” and reported better results than other organizations. There are several likely reasons for this, writes Accenture Analytics Senior Managing Director Narendra Mulani. Large companies are more likely to have:
- Greater financial and talent resources to devote to Big Data.
- A better understanding of the value and scope of Big Data.
- A tighter focus on the practical applications and business outcomes.
- A deeper appreciation for Big Data’s disruptive power.
EATONTOWN, N.J. – Since Hurricane Sandy made landfall Oct. 29, 2012, FEMA, in partnership with the federal family and state and local governments, has been on the scene helping individuals, government entities and eligible non-profits as New Jersey recovers from the storm’s devastation.
FEMA has funded more than 5,185 Public Assistance projects including repairing and restoring hospitals, schools, waterways, parks, beaches, marinas, water treatment plants and public buildings. A roster of services has been restored, including utilities critical to everyday life. Billions of federal dollars have been expended during the past two years. The numbers below tell the story. In the two years since Hurricane Sandy devastated New Jersey:
- $6.67 billion has been provided to the state of New Jersey for Hurricane Sandy Recovery.
- $422.9 million has been distributed to help survivors get back on their feet via temporary housing assistance, disaster unemployment and other needs assistance.
- $3.5 billion has been paid to policyholders for flood claims through FEMA's National Flood Insurance Program.
- $1.5 billion in Public Assistance funds has been obligated to communities and certain non-profit organizations for debris removal, emergency work and permanent work.
- $279.5 million in grants has been provided for projects to protect damaged facilities against future disasters.
- $123.9 million in funding for property acquisitions, elevation and planning updates has been paid to New Jersey communities through the Hazard Mitigation Grant Program.
- $847.7 million has been approved by the Small Business Administration for SBA disaster loans to 10,726 individuals and 1,718 small businesses.
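As a quick arithmetic check, the itemized figures above approximately account for the stated $6.67 billion total (a sketch; each source figure is rounded, so the sum is approximate):

```python
# Itemized Hurricane Sandy recovery figures for New Jersey, from the list above.
amounts_usd = {
    "Individual assistance (housing, unemployment, other needs)": 422.9e6,
    "National Flood Insurance Program claims": 3.5e9,
    "Public Assistance obligations": 1.5e9,
    "Grants to protect damaged facilities": 279.5e6,
    "Hazard Mitigation Grant Program": 123.9e6,
    "SBA disaster loans": 847.7e6,
}

total = sum(amounts_usd.values())
print(f"${total / 1e9:.2f} billion")  # prints $6.67 billion
```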
To learn more about FEMA Public Assistance in New Jersey visit: fema.gov/public-assistance-local-state-tribal-and-non-profit and http://www.state.nj.us/njoem/plan/public-assist.html. For more information, visit http://www.fema.gov/sandy-recovery-office or the New Jersey Sandy Recovery website at http://www.fema.gov/new-jersey-sandy-recovery-0
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Everybody wants to see a greener data center. Environmentalists want lower carbon emissions, utilities want less strain on their infrastructure and the enterprise wants a lower energy bill.
The trouble starts when the conversation shifts from developing a “greener” data center to one that is fully and finally “green.” Everyone has a different idea of what green is, and while it is nice to have a goal, there is still a danger of tasking the IT industry with fulfilling an unreachable ideal.
This could become particularly troublesome now that many of the energy efficiency initiatives that have been launched so far are starting to produce diminishing returns, says Enterprise Tech senior editor George Leopold. And this is coming at a time when mobile architectures, which require a lot more energy than wired ones, are coming to dominate the data ecosystem. So while individual data centers may be drawing less energy than, say, five years ago, overall consumption across the industry will only increase as more resources are brought online to deal with the Internet of Things and other initiatives.
Universities and colleges throughout the U.S. have been adding emergency management degrees to their education offerings for the last decade. But in what could be the next evolution of emergency management-related offerings, at least one school has launched a general education course on the topic, thereby bringing the subject to students in a variety of majors and future careers.
North Dakota State University began offering a general education course focused on emergency management during the fall 2012 semester after making a series of changes to an existing class and seeing an opportunity to engage more students on the subject.
The course, EMGT 201: Introduction to Emergency Management, was originally focused on technical government policy and doctrine and intended as an introduction for students majoring in emergency management, with the expectation that it would lead students to government emergency management positions, said Jessica Jensen, an assistant professor in the university’s Department of Emergency Management. “Over time our thinking about who we were educating and the career fields that they would be going into with our degree evolved and our thinking about the potential of this course also did,” she said.
During my interaction with senior management as a business continuity/information security consultant, especially amongst IT centric organisations, I am invariably asked a question: "We come across too many ISO standards which have common themes. In your opinion, which are some of the Standards that come very close especially from an implementation perspective?"
As you can see, this is a loaded question from senior management, who are typically fed up with too many rules, regulations and standards trying to govern their lives. Whilst they want to adhere to all applicable regulations and standards, they also want to optimise the cost of implementation.
Computers today use the Basic Input/Output System (BIOS) firmware to initialize the hardware process and then turn over control to the operating system. Therefore, any malware that affects the BIOS is a serious threat to the entire computer system.
To protect computers from malicious software, IT organizations must also attempt to secure the BIOS firmware.
The National Institute of Standards and Technology (NIST) has created a free document that details computer BIOS security. You can obtain a free copy in our IT Downloads area under the title, “BIOS Protection Guidelines for Servers.”
In the PDF, author Andrew Regenscheid of the Computer Security Division Information Technology Laboratory at NIST breaks the topic into several sections including:
- BIOS Security Principles
- Security Guidelines by Update Mechanism
- Guidelines for Service Processors
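Any BIOS-protection effort starts with knowing which firmware is actually running on each machine. As a minimal sketch of such an inventory step on a Linux host (assuming the kernel exposes DMI/SMBIOS data under /sys/class/dmi/id, which may be absent on virtual machines or non-x86 hardware):

```python
from pathlib import Path

def read_bios_info(base="/sys/class/dmi/id"):
    """Collect BIOS vendor/version/date from Linux sysfs, where available."""
    info = {}
    for field in ("bios_vendor", "bios_version", "bios_date"):
        path = Path(base) / field
        if path.is_file():
            try:
                info[field] = path.read_text().strip()
            except PermissionError:
                pass  # some DMI fields are readable only by root
    return info

if __name__ == "__main__":
    print(read_bios_info())
```

On hosts without DMI support the function simply returns an empty dictionary rather than failing.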
The Dollar Shave Club had a bottleneck problem, and his name was Juan. It’s not so much that Juan was the problem, but somehow, any web performance report — no matter how unusual, no matter how frequent — waited on Juan, according to a recent Cite World article. So if Juan was busy, the business user waited…and so did new site features.
Almost every company has a Juan. Often, Juan may be the most efficient, effective developer on staff, but it doesn’t matter. Inevitably, as businesses become more data-driven, there are too many tasks and not enough Juans.
The lesson here: If one developer is holding up your reports, maybe it’s time you looked at a simpler analytics solution. That’s what Todd Lehr, senior vice president of engineering at Dollar Shave Club, learned.
WARREN, MICH. – Winter is on its way, and the Michigan State Police, Emergency Management and Homeland Security Division and FEMA remind homeowners to make sure their heating systems and water heaters are in good working condition, especially those damaged by the August flooding.
“Michigan homeowners and their families may be at risk with flood-damaged furnaces, water heaters and electrical appliances,” warns Michigan State Police Capt. Chris A. Kelenske, State Coordinating Officer and Deputy State Director of Emergency Management and Homeland Security. “If the flood waters reached your heating system or water heater, have them checked for operating safety by experienced repair personnel.”
Dolph A. Diemont, the disaster’s lead federal official, reminded Michigan homeowners that FEMA grants may be available to help repair damaged furnaces and water heaters and replace those destroyed by flood waters.
“Michigan residents with flood damage to their furnaces and water heaters must register with FEMA by the Nov. 24 deadline to be eligible for grants,” Diemont added.
“If flood damage is found after the November date and the homeowner has failed to register, no FEMA assistance will be available.”
Homeowners who receive a FEMA grant for repairs and who later discover their furnace needs replacing must use the FEMA appeal process for additional grant funds. The homeowner has 60 days to appeal and must submit an estimate for replacement of the furnace on contractor company letterhead.
Disaster survivors may register online at disasterassistance.gov or by smart phone or tablet at m.fema.gov. Applicants may call 800-621-3362 or TTY users 800-462-7585. The toll-free telephone numbers are available 7 a.m. to 11 p.m. EDT seven days a week until further notice.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
Ever since fusion centers were created in the aftermath of the 9/11 terrorist attacks to improve information-sharing between governments, they've often been criticized for their ineffectiveness. But if recent state investment in the centers is any indication, faith in the work they do may be on the rise.
Rick “Ozzie” Nelson, a senior associate at the Center for Strategic and International Studies, told Government Technology that as public-safety grant funding from the federal government has slowed, states are dedicating more of their own money to finance fusion centers. He believes that is a solid indicator that the centers “have found their sweet spot” when it comes to intelligence-gathering and communications activities.
“If a governor is looking for money to free up, you can close your fusion center and take that money and use it for some other public-safety endeavor,” Nelson said. “But that’s not what we’re seeing in the data. What we’re seeing is people are investing in the fusion centers.”
According to a new study from Protiviti, engagement by a company’s board of directors is a critical factor in best managing information security risks.
Overall, engagement and understanding of IT risks at the board level has increased, yet one in five boards still has a low level of comprehension. As the report states, this suggests “their organizations are not doing enough to manage these critical risks or engage the board of directors in a regular and meaningful way.” Further, while large companies do exhibit stronger board-level engagement, it is not a dramatic distinction.
NEW YORK — Since Hurricane Sandy made landfall Oct. 29, 2012, FEMA, in partnership with the federal family and state and local governments, has been on the scene helping individuals, government entities and eligible non-profits as New York recovers from the storm’s devastation.
FEMA has funded more than 3,500 Public Assistance projects including repairing and restoring hospitals, schools, transit venues, waterways, parks, beaches, marinas, water treatment plants and public buildings. A roster of services has been restored, including utilities critical to everyday life. Billions of federal dollars have been expended during the past two years. The numbers below tell the story.
It has been two years since Hurricane Sandy struck New York. In that time, FEMA funding has covered:
- Total assistance already provided to New York
- Help for survivors getting back on their feet through temporary housing assistance, disaster unemployment and other needs assistance
- Payments to 53,288 policyholders for flood claims through FEMA’s National Flood Insurance Program
- Public Assistance obligated to communities and certain non-profit organizations, including funds added to permanent repair projects to protect against future damage
- Hazard Mitigation Grant Program funding for projects throughout the state to protect against future damage
- Small Business Administration loans for homeowners and businesses affected by the storm
To learn more about FEMA Public Assistance in New York, visit: fema.gov/public-assistance-local-state-tribal-and-non-profit and dhses.ny.gov/oem/recovery.
For more information, visit http://www.fema.gov/sandy-recovery-office
The two health-care workers from Dallas who were infected with the Ebola virus are now in stable condition, and the NBC reporter who was quarantined has been released from the hospital. That was among the news from the Centers for Disease Control and Prevention (CDC), which tried to ease concerns and provided informational updates on the Ebola outbreaks during a conference call Oct. 23.
The two Dallas nurses were infected when attending to an Ebola-infected patient who eventually died. NBC reporter Nancy Snyderman had been in quarantine since returning from assignment in Liberia.
The 48 people who had been in contact with the now deceased Eric Duncan have cleared the 21-day incubation period, according to Kashef Ijaz, the principal deputy director for the Division of Global Health Protection in the CDC’s Center for Global Health.
With the winter season approaching, the Federal Emergency Management Agency (FEMA) reminds individuals to be prepared for winter storms and extreme cold. While the danger of severe winter weather varies across the country, everyone can benefit by taking a few easy steps now to prepare for emergencies. A first step, regardless of where you live, is to visit the Ready.gov website to find preparedness ideas you can use all year long.
“In our part of the country we make the most of winter,” said FEMA Region VIII Acting Administrator Tony Russell. “However, severe storms and blizzards can create major problems and residents need to take winter weather seriously by taking appropriate steps to prepare.”
Severe winter weather can include snow or subfreezing temperatures, strong winds and ice or heavy rain storms. An emergency supply kit both at home and in the car will help prepare you and your family for winter power outages and icy or impassable roads.
Both kits should include a battery-powered or hand-crank radio, extra flashlights and batteries. In addition, your home kit should include a three-day supply of food and water. Thoroughly check and update your family’s emergency supply kit and add the following supplies in preparation for winter weather:
- Rock salt to melt ice on walkways,
- Sand to improve traction on driveways and sidewalks,
- Snow shovels and other snow removal equipment,
- And adequate clothing and blankets to help keep you warm.
When traveling in winter weather conditions, be sure to contact someone both before your departure and when you safely arrive. Always travel with a cell phone and ensure the battery is charged so you can contact someone in the case of a road emergency. If dangerous conditions are forecast, it’s often best to delay travel plans.
Finally, make sure to familiarize yourself with the terms that are used to identify a winter storm hazard and discuss with your family what to do if a winter storm watch or warning is issued. Terms used to describe a winter storm hazard include the following:
- Freezing Rain creates a coating of ice on roads and walkways.
- Sleet is rain that turns to ice pellets before reaching the ground. Sleet also causes roads to freeze and become slippery.
- Winter Weather Advisory means cold, ice and snow are expected.
- Winter Storm Watch means severe weather such as heavy snow or ice is possible in the next day or two.
- Winter Storm Warning means severe winter conditions have begun or will begin very soon.
For more information and winter preparedness tips, please visit: www.ready.gov/winter-weather or www.nws.noaa.gov/om/winter/ or www.fema.gov/about-region-viii/winter-weather-readiness.
The 2014 BCI Global Business Continuity Awards will be presented on Nov. 5, 2014, at London’s Science Museum as part of BCI World.
The BCI has published the list of individuals and organizations that have been shortlisted for an award. These are:
Business Continuity Consultant of the Year
- Paul Trebilcock MBCI, Director, JBT Global
- Thomas Keegan MBCI, Middle East Enterprise Resilience Leader, PwC
- Bill Crichton FBCI, Managing Director and Principal Consultant, Crichton Continuity Consulting Ltd
- Harvey Betan MBCI, Principal, H betan Inc
- Ahmed Riad Ali MBCI, Manager, Ventures Middle East
- Peter Frielinghaus MBCI, Senior BCM Advisor, ContinuitySA
- Mohammed Chughtai MBCI, Managing Director of Business Continuity, RecoveryWorks Consulting
Business Continuity Manager of the Year
- Werner Verlinden FBCI, Vice President Business Continuity Management, Reed Elsevier
- John Zeppos FBCI, Group Business Continuity Management Director, OTE Group of Companies
- Nisar Ahmed Khan MBCI, Business Continuity Management Leader, Kuwait Finance House
- Abdulrahman Alonaizan MBCI, Head of Business Continuity Management, Arab National Bank (ANB)
- Sylvain Prefumo MBCI, Head of Business, State Bank of Mauritius Ltd
- Dave Morgan MBCI, Senior Business Continuity Program Manager, Delta Dental
Public Sector Business Continuity Manager of the Year
- Brian Gray MBCI, Chief – Business Continuity Management, United Nations
- James McAlister MBCI, Business Continuity Manager, Merseyside Police
- Ian Goldfinch MBCI, Manager, ICT Continuity Planning, SA Health
- Dr Clifford Ferguson AMBCI, Government Pensions Administration Agency
Most Effective Recovery of the Year
- Bank of New Zealand
- EDP Distribucao
- Telus Communications
- Barclays Bank of Kenya
- Commercial International Bank (S.A.E) - Egypt
- Telekom Deutschland GmbH
BCM Newcomer of the Year
- Luke Bird MBCI, Business Continuity Executive, Atos
- Mohammad Farhan Khan AMBCI, Senior BCM Consultant, Protiviti Middle East
- Leanne Metz AMBCI, Associate Director, Enterprise Program Management Office, Mead Johnson Nutrition
- Yasmine Elhamouly AMBCI, Business Continuity Manager, PwC
- Mark Dossetor AMBCI, Manager Business Continuity, Department of Transport, Planning and Local Infrastructure (DTPLI)
Business Continuity Team of the Year
- Franklin Templeton Investments
- Marks & Spencer
- Commercial International Bank (S.A.E) - Egypt
- Barclays Bank of Kenya
- ATO Business Continuity Management Team
Business Continuity Provider of the Year (BCM Service)
- Continuity Shop
- Plan B Disaster Recovery
- Avalution Consulting
- Phoenix Quickstart
- Linus Information Security Solutions
- Hewlett-Packard Australia - Continuity Services
- Sungard Availability Services
Business Continuity Provider of the Year (BCM Product)
- Sungard Availability Services
- ResilienceONE® BCM Software
- Linus Information Security Solutions
Business Continuity Innovation of the Year (Product/Service)
- PAN Software Pty. Ltd.
- Pinbellcom Limited
- Linus Revive Business Continuity Management System
Industry Personality of the Year
- Peter Brouggy
- Chittaranjan Kajwadkar MBCI
- Frank Perlmutter FBCI
- Braam Pretorius
- Ahmed Riad Ali MBCI
- Andy Tomkinson MBCI
- John Zeppos FBCI
Aon Global Risk Consulting, in collaboration with the Wharton School of the University of Pennsylvania, has released its Aon Risk Maturity Index Insight Report, October 2014.
This year’s report indicates six main findings:
1. Confirmation of past analysis on the inverse relationship between a higher Risk Maturity Rating and lower stock price volatility, and a direct relationship between a higher Risk Maturity Rating and superior operational financial performance.
2. Confirmation of past analysis on the relationship between a higher Risk Maturity Rating and the relative resilience of an organization’s stock price in the immediate aftermath of significant risk events.
3. Identification that the 2013/2014 bull equity market environment may have an equalizing effect on an organization’s stock price and create a false sense of security around the need to invest in a robust, holistic risk management approach.
4. Introduction of new findings that evidence a correlation between board risk oversight practices and risk maturity.
5. Groundbreaking new research showing a direct relationship between risk-based forecasting and planning on the one hand, and firm volatility and earnings predictability on the other.
6. Introduction of cross-over analysis with Aon’s Global Risk Management Survey, which indicates that while organizations appear to identify similar opportunities and risks, their levels of planning, preparedness and response to these risks are distinctly different.
The report was developed as a means of driving marketplace insight on the relationship between an organization’s risk maturity and factors that drive organizational performance. This edition of the report confirmed findings from previous analyses, which found that more mature risk management practices directly correlate to stronger financial results and organizational and stock price resiliency in response to significant risk events.
The Army National Guard's first cyber protection team received its new shoulder sleeve insignia during a ceremony conducted by US Army Cyber Command/Second Army.
Lt. Gen. Edward C. Cardon, commanding general, US Army Cyber Command, cited the ceremony as a major milestone for Army cyberspace operations, Guard and Reserve forces and for the Army.
"It is another indication of the tremendous momentum that the Army is building to organize, train and equip its cyberspace operations forces," Cardon said. "Army Cyber Command is taking a Total Force approach to building and employing the Army's cyber force."
The new cyber protection team is the first of almost a dozen similar Army National Guard/active duty cyber protection teams, according to Cardon.
Cardon cited the experience that Army Guard soldiers bring with them from both the military and civilian sectors as being beneficial to the mission. "They bring a wide range of experience, not only from serving in the Army National Guard, but also from working in industry, state government or other government agencies," he said. The teams will be responsible for conducting defensive cyberspace operations, readiness inspections and vulnerability assessments as well as a variety of other cyber roles and missions.
Ed. Note: today we have a guest post from noted ethics and compliance expert, as well as steel guitar player, Chris Bauer.
Okay, you know that you need to have effective compliance training, but do you really know what will actually make it effective? The reality is that far too many compliance training programs fail on multiple counts. With compliance as critical as it is, that is unacceptable. Thankfully, there are a few areas which, if attended to well, can correct many of the most frequently seen problems with the development and execution of these programs.
Here are five of the areas I see getting missed time after time in compliance training programs.
Do you actually have a solid, working definition of what compliance is? I see ethics, compliance, and accountability as being ‘cross-defined’ all the time. Do they inter-relate? Absolutely and it’s even a great idea to inter-relate them in your training. However, until you are clear about what you mean by all three of those terms, your training will leave employees confused and confusion is never good for compliance training…
Something was bound to happen eventually. Isn’t that what disaster planning is all about: preparing for the unplanned events that can throw things into chaos? After years of never experiencing any sort of terrorist action, today that changed in Ottawa, Canada. Terrorists, which is what the attackers are being called at the moment, shot and killed an RCMP officer guarding the Canadian War Memorial and stormed the Parliament building, where Members of Parliament were actually on site. On Monday (Oct 20/14) a radical ran down two Canadian soldiers in uniform; one later died in hospital.
It pains me to know that a soldier guarding a memorial for fallen soldiers – in all wars – dies protecting that memorial. Our thoughts go out to his family and loved ones.
At the moment, there is no greater priority in enterprise IT than building out and leveraging the cloud. Organizations that make the transition successfully will reap the benefits of a more agile infrastructure and lower costs. Those that don’t will fall into obsolescence.
But the sheer number of options when it comes to cloud services and infrastructure is mind-boggling. Whether it is public, private or hybrid, SaaS, PaaS, IaaS or the numerous permutations within those groups, the roadmap to a successful cloud environment is far from clear.
Like any IT deployment, it all starts with the platform you choose. This is particularly crucial when it comes to the private cloud because it is the owned-and-operated rock upon which all other cloud services will be built. And it is why we’ve seen such a plethora of options lately, both from traditional IT vendors and the rising tide of cloud providers.
There are a number of reasons organizations need to be paying attention to their employees’ travel risks, including health scares, natural disasters and political unrest. Since unpredictable events like these are now a global reality, many businesses are taking a hard look at business travel risks and ways they can protect their employees abroad.
In fact, 80% of travelers believe their companies have a legal obligation to protect them abroad, according to On Call International LLC’s report, “Travel Risk Management.” This means employees may blame their organization if their health or safety is compromised during a business trip. Because so much is at stake for companies that send staff members across the globe, it is important for employers to understand business travel risks and implement a travel risk management strategy to protect their workforce—and their company.
The study notes that companies need to be prepared to respond quickly and effectively to any travel-related incident. Responses should also put the needs of the employee first. Companies need to anticipate the risks and prevent them from occurring–or at least limit their potential impact.
(MCT) — Officials with the Iowa Department of Homeland Security and Emergency Management on Tuesday announced the development of an Alert Iowa statewide mass notification and emergency messaging system.
The new alert system can be used by state and local authorities to quickly disseminate emergency information to residents in counties that use the system, according to Homeland Security agency Director Mark Schouten, who announced the launch of the new alert system at the opening of the 11th Annual Iowa Homeland Security Conference.
The system is free of charge and available to all counties. So far 34 of Iowa’s 99 counties have signed up to use the Alert Iowa system, officials said. Alert Iowa will allow citizens to sign up for the types of alerts they would like to receive. Messages can be issued via landline or wireless phone, text messaging, email, fax, TDD/TTY, and social media.
During my very first Stage 1 Audit for ISO 22301 I was naturally very curious. I was spouting out all sorts of thoughts and questions (no doubt much to the annoyance of my Manager and the attending Auditor at the time, but I think it’s important to ask those questions when learning). One thing I have remembered from that experience was being told:
“Achieving the initial ISO 22301 certification is probably the easiest part. Everything is new, employees tend to be enthusiastic and management often seem to have it at the top of their list. It’s the repeat visits (AKA Surveillance or Continuous Assessment Visits) or the Extension to Scope Assessments that present the real challenge. Employees can lose interest, other competing demands take over in the boardroom and documents can sometimes get mothballed”
In hindsight, the Auditor wasn’t wrong. When that organisation first achieved certification it was quickly celebrated, but then the profile simply lost some of its “fizz”. Other challenges and exciting new initiatives took over, and while the BCMS continued to tick over, things definitely appeared to slow down. But then came the return visit…
As you can imagine with these kinds of things, there was a last minute flurry of activity to update plans, roll out awareness campaigns, and brief all managers to within an inch of their life about the possible questions they might receive!
DALLAS, Texas – DataBank Holdings, Ltd., a leading custom data center and colocation provider based in Dallas, announced the addition of HIPAA/HITECH Attestation to its annual audit certifications. With this latest compliance standard, DataBank offers the healthcare industry assurance that IT assets can be deployed compliantly within DataBank data center facilities.
The HIPAA Security assessment followed a structured approach to identify and evaluate the controls associated with the operations of both the IT environment and the business operations environment. The assessment addressed a wide range of Administrative Safeguards, Technical Safeguards, Physical Safeguards, Policies & Procedures, and Documentation Requirements as they relate to DataBank’s Data Center Services.
“We have a number of healthcare clients which currently conform to the HIPAA regulations and standards,” said Michael Gentry, VP of Operations for DataBank. “By securing DataBank’s attestation as a part of our own annual audit process, we make it much simpler for both current and future customers to comply with the guidelines laid out in the audit, potentially saving them a significant financial and manpower investment.”
DataBank’s HIPAA/HITECH examination was performed by a full-service audit and consulting firm that specializes in integrated compliance solutions and examinations. By completing such examinations on an annual basis, DataBank is able to demonstrate substantially higher levels of assurance and operational visibility to both prospects and clientele.
To learn more about DataBank, the company facilities, compliance standards, and the company’s complete suite of service solutions, please visit the corporate website at http://www.databank.com.
DataBank is a leading provider of enterprise-class data center solutions aimed at providing customers with 100% uptime availability of data, applications and deployed infrastructure. We offer a full suite of hosting solutions including colocation, managed services and cloud solutions that are anchored in world-class secure data center facilities with best of breed infrastructure and highly robust network architecture. Our customized customer deployments are designed to effectively manage risk, improve their technology performance and allow them to focus on their core business objectives. DataBank is headquartered in the historic former Federal Reserve Bank Building, in downtown Dallas, TX and has additional data centers in Dallas, Minneapolis and Kansas City. For more information on DataBank locations and services, please visit http://www.databank.com or call 1(800) 840-7533
Fourth annual benchmark of Net Promoter® Scores (NPS®) includes data on 283 companies across 20 industries.
WABAN, Mass. – Temkin Group released a new research report, "Net Promoter Score Benchmark Study, 2014", based on a study of 10,000 U.S. consumers.
Net Promoter Score (NPS) has become a popular customer experience metric. NPS identifies the likelihood of consumers to recommend a company to their friends and family, using a scoring range from -100 to +100.
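For readers unfamiliar with the metric, the score behind that -100 to +100 range is simple arithmetic: the percentage of promoters (ratings of 9-10 on a 0-10 "likelihood to recommend" scale) minus the percentage of detractors (ratings of 0-6). A minimal sketch, using invented sample responses:

```python
def net_promoter_score(responses):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / len(responses))

sample = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]  # hypothetical survey responses
print(net_promoter_score(sample))  # 40% promoters - 30% detractors -> 10
```

Note that 7s and 8s ("passives") count toward the denominator but neither add nor subtract, which is why scores of 60+ like USAA's and JetBlue's are rare.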
USAA's insurance business (67) and JetBlue (61) earned the only NPS scores above 60. Other companies with NPS above 50 are H-E-B, USAA (banking and credit cards), Trader Joe's, Mercedes-Benz, Amazon.com, Apple (computers), Lexus, Toyota, and Aldi.
Citibank and HSBC earned the lowest NPS, followed by four firms that also had scores of -10 or below: Comcast, Charter Communications, Commonwealth Edison, and Super 8.
"Net Promoter Scores can provide a strong indication of your relationship with customers," states Bruce Temkin, Managing Partner of Temkin Group. Temkin goes on to say, "Like any customer metric, NPS is only valuable when it's used to drive improvements."
Here are some additional findings from the research:
- Auto dealers earned the highest average NPS (38) followed by grocery chains (32), computers (30), and insurance carriers (30).
- TV service providers (1), Internet service providers (2), and utilities (5) are the only industries with averages below 10.
- USAA's insurance, banking, and credit card businesses earned NPS levels that are 37 or more points above their industry averages. Seven other firms are 25 or more points above their peers: JetBlue, credit unions, Chick-fil-A, H-E-B, Kaiser Permanente, Amazon.com, and Trader Joe's.
- Five companies fell more than 20 points below their industry averages: Super 8, Motel 6, HSBC, Quality Inn, and Citibank.
- HSBC's NPS is 55 points below the industry average for banks and Super 8 is 42 points below the hotel industry. Four other firms are 30 or more points below their industry averages: Motel 6 (hotels), HSBC (credit cards), US Airways (airlines), and 7-Eleven (retail).
The 20 industries included in this report are airlines, auto dealers, banks, computer makers, credit card issuers, fast food chains, grocery chains, health plans, hotel chains, insurance carriers, Internet service providers, investment firms, major appliance makers, parcel delivery services, rental car agencies, retailers, software firms, TV service providers, utilities, and wireless carriers.
The report "Net Promoter Score Benchmark Study, 2014" can be downloaded from the Customer Experience Matters blog, at ExperienceMatters.wordpress.com as well as from the Temkin Group website, www.TemkinGroup.com.
About Temkin Group: Temkin Group is widely recognized as a leading customer experience research and consulting firm. Many of the world's largest brands rely on its insights and advice to steer their transformational journeys. Temkin Group combines customer experience thought leadership with a deep understanding of the dynamics of organizations to help accelerate results. Rather than layering on cosmetic changes, Temkin Group helps companies embed practices within their culture by building four critical competencies: Purposeful Leadership, Employee Engagement, Compelling Brand Values, and Customer Connectedness. The firm's ongoing research identifies leading and emerging best practices across a wide range of activities for engaging the hearts and minds of customers, employees, and partners. For more information, contact Bruce Temkin at 617-916-2075 or send an Email.
About Bruce Temkin: Bruce Temkin is widely recognized as a customer experience thought leader and is Customer Experience Transformist and Managing Partner of Temkin Group. He is also the author of a very popular blog, Customer Experience Matters® (ExperienceMatters.wordpress.com). Prior to forming Temkin Group, he was a VP at Forrester Research for 12 years. Bruce is a highly demanded speaker who consistently receives high marks for his content-rich, entertaining keynote addresses. He is also the co-founder and Chair of the Customer Experience Professionals Association (CXPA.org), a global non-profit organization dedicated to the advancement of customer experience management.
Net Promoter Score, Net Promoter, and NPS are registered trademarks of Bain & Company, Satmetrix Systems, and Fred Reichheld. Customer Experience Matters is a registered trademark of Temkin Group.
Well into the 21st century, businesses worldwide are focusing more and more on managing risks, be they internal or external, financial, operational or strategic, involving technology or regulations or related to reputation.
While organizations are raising the bar on effective risk management, executives face extraordinary headwinds spawned by a turbulent environment in which risks materialize virtually overnight. Just this year, global financial and business markets have been rocked by spectacular cybersecurity breaches, geopolitical instability in the Middle East and Eastern Europe, refugee crises and more.
Internal auditors working from risk-based annual plans developed before March are increasingly finding themselves addressing yesterday’s challenges.
All of this reinforces my long-held belief that internal audit must take a more continuous approach to risk assessment. Audit plans and coverage should constantly evolve as new, potential risks surface and undergo assessment. Such an approach adds significant value for internal audit’s stakeholders, particularly during sudden or unexpected crises.
Yes, I realize that the last thing we need in Business Continuity Planning practices is another acronym, but, hey, what’s the fun in writing a blog if you can’t cause trouble? So here goes – another BCP acronym …
I have been stating for a while now that the BCP Methodology needs to be revisited. I think that the tried and true practice of conducting BIAs is a bit flawed. In practice, the methodology attacks middle management and department-level areas in the organization without first establishing corporate-wide and senior-level objectives for the business during a crisis. When we ask people to establish RTOs and RPOs (more of those lovely acronyms – see the chart below), what are they basing their answers on? When we ask for the impacts of being down, to set those recovery objectives, what business objectives are those objectives being designed to meet?
I think that the BCP Methodology needs to add a step at the beginning of our analyses in which we establish – are you ready for it, here it comes, the new acronym, in three, two, one – our ABOs, Adjusted Business Objectives. I think part of the fallacy in our current process is that RTOs (or MADs, if you prefer that acronym) are set with the assumption that the company is still aiming to hit its established business objectives for the year. And I think that is wrong. During times of crisis, management’s expectations of what the company should achieve are adjusted. During times of crisis, we may not have the same Income Targets, Profit Targets, Sales Targets, Margin Targets, Production Targets, etc.
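The idea can be sketched in a few lines. The department targets and the flat 60% adjustment factor below are purely hypothetical, since the post doesn’t prescribe how ABOs would actually be derived:

```python
# Hypothetical sketch of "Adjusted Business Objectives": scale normal-operations
# targets down to the level management would accept during a crisis, and use
# the adjusted figures (not the normal ones) when setting RTOs in the BIA.
NORMAL_TARGETS = {"sales": 1_000_000, "production_units": 50_000}
CRISIS_FACTOR = 0.6  # invented: management accepts 60% of normal output in a crisis

def adjusted_business_objectives(targets, factor):
    """Return each target scaled to its crisis-mode (ABO) level."""
    return {name: value * factor for name, value in targets.items()}

abos = adjusted_business_objectives(NORMAL_TARGETS, CRISIS_FACTOR)
print(abos)  # {'sales': 600000.0, 'production_units': 30000.0}
```

In a real BIA the adjustment would likely vary by objective and by crisis scenario rather than being a single factor; the point is only that recovery objectives flow from the adjusted targets.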
The Hamilton Project at the Brookings Institution and the Stanford Woods Institute for the Environment released a new report Oct. 20 that addresses how Western states can confront the crippling drought that threatens the nation’s entire water system.
The report comprises three papers, each of which examines particular strategies for coping with ongoing drought conditions. The first paper, Shopping for Water, advocates using market forces to manage water resources and lessen the impact and frequency of water shortages. The second paper, The Path to Water Innovation, highlights the need for innovative new technologies for promoting efficiency and conservation and suggests reviews of regulatory practices and creating statewide offices for water innovation. The third paper looks at nine economic facts about water in the United States with “the aim of providing an objective framing of America's complex relationship with water.”
In conjunction with the release of the papers, a forum was hosted on Oct. 20 at Stanford University to discuss the topics and issues within the report. Authors of the paper were joined by other water experts, as well as California Gov. Jerry Brown, who opened the forum with his vision of the landscape of water in the west.
“Water is going to be a major issue that is going be addressed in the California Legislature, in Congress – water issues don’t get solved in one place. It’s a complicated interplay of governmental jurisdiction at every level,” Brown said.
The Ebola epidemic in Africa and fears of it spreading in the U.S. have turned the nation’s attention to the federal government’s front-line public health agency: the Centers for Disease Control and Prevention (CDC). But as with Ebola itself, there is much confusion about the role of the CDC and what it can and cannot do to prevent and contain the spread of disease. The agency has broad authority under federal law, but defers to or partners with state and local health agencies in most cases.
Julie Rovner answers some common questions.
As the number of companies suffering a data breach continues to grow – with U.S. retailer Staples now reported to be investigating a breach – so do the legal developments arising out of these incidents.
While companies that have suffered a data breach look to their insurance policies for coverage to help mitigate some of the enormous costs, recent legal developments underscore the fact that reliance on traditional insurance policies is not enough, notes the I.I.I. white paper Cyber Risks: The Growing Threat.
A post in today’s Wall Street Journal Morning Risk Report, echoes this point, noting that a lawsuit between restaurant chain P.F. Chang’s and its insurance company Travelers Indemnity Co. of Connecticut could further define how much, if any, cyber liability coverage is included in a company’s CGL policy.
Each year, Forrester Research and the Disaster Recovery Journal team up to launch a study examining the state of business resiliency. Each year, we focus on a particular resiliency domain: business continuity, IT disaster recovery, crisis communications, or overall enterprise risk management. The studies provide BC and other risk managers an understanding of how they compare to the overall industry and to their peers. While each organization is unique due to its size, industry, long-term business objectives, and tolerance for risk, it's helpful to see where the industry is trending, and I’ve found that peer comparisons are always helpful when you need to understand if you’re in line with industry best practices and/or you need to convince skeptical executives that change is necessary.
This year’s study will focus on business continuity. We’ll examine the overall state of BC maturity, particularly in process maturity (business impact analysis, risk assessment, plan development, testing, maintenance, etc.), but we’ll also examine how social, mobile, analytics and cloud trends are positively and negatively affecting BC preparedness. In the last BC survey, one of the statistics that disturbed me the most was that very few firms assessed the BC preparedness of their strategic partners beyond asking for a copy of their BC plan. And we all know plans are always up to date, tested and specific enough to address the risk scenarios that the partner is most likely to experience (please note the tone of sarcasm in this sentence). I hope this year’s survey shows an improvement; otherwise, most of the industry is in mucho trouble.
For DRJ readers, the results and a summary analysis will be available on their website in January and if you attend the upcoming DRJ Spring World 2015, I'll be there to deliver the results in person. For Forrester clients, I’ll write a series of in-depth reports that will examine each of the survey topics in depth during the next several quarters. If you feel this data is valuable to the industry and you’re a BC decision-maker or influencer, please take 15 to 20 minutes to complete the survey. All the results are anonymous. We don’t even need your email address unless you’d like a complimentary Forrester report (and I promise we won’t use your email address for any other purpose).
Click here to take our survey.
By Paul Kirvan.
The Ebola outbreak shows how esoteric threats shelved in the ‘it will never happen’ folder can erupt to cause major disruption. Two other such threats spring to mind and it may be a good time for a reminder of these:
Solar flares traveling from the sun to the earth contain massive amounts of energy that have been known to disrupt electronic systems. Such an event could potentially cripple the world’s electrical grids for years, causing billions (trillions?) in damages.
Back in 2010, the US House of Representatives’ Energy and Commerce Committee voted unanimously to approve a bill allocating $100 million to protect the US energy grid from this rare but potentially devastating occurrence. The Grid Reliability and Infrastructure Defense Act, or H.R. 5026, aimed "to amend the Federal Power Act to protect the bulk-power system and electric infrastructure critical to the defense of the United States against cybersecurity and other threats and vulnerabilities."
Risk management is developing into a strategic function within European organizations. At the same time, risk management can contribute much more as its strategic role grows. Currently, risk managers are not satisfied with the level of mitigation for six of the top 10 risks ‘that keep their CEO awake at night’.
These are the key findings from the 2014 Risk Management Benchmarking Survey conducted earlier this year by the Federation of European Risk Management Associations (FERMA). Now in its 7th edition, the FERMA Benchmarking Survey this year received a record 850 responses from 21 European countries.
Using the results of the survey, FERMA has published its first European Risk and Insurance Report. FERMA President Julia Graham says, "FERMA has said that risk managers are becoming risk leaders - the European Risk and Insurance Report provides evidence to support that view. It, therefore, also endorses FERMA's objective to shape and support risk management as a profession."
Would a football player take to the field without attending training? Would an actor take to the stage without going to rehearsals? Would a pilot take to the skies without having practiced how to fly a plane? I’m sure any sensible person would answer ‘no’ to these questions. Before you know you're good enough to take on a role, you need to have practiced it first. Similarly, before you know your business continuity plan is fit for purpose, you need to have practiced it too.
We all know that every organization should have a business continuity plan – common sense dictates that when disaster strikes you would want to continue functioning as normally as possible. But how many organizations actually test their plans? Tests can be time-consuming and expensive, it can be difficult to get management buy-in, and you can often be frustrated by the lack of enthusiasm from the general workforce, who just want to get on with their jobs without your disruption. According to a recent study by Databarracks, less than a third of survey respondents (29%) claimed to have tested their plan in the last twelve months.