Industry Hot News (6926)
What is the hardest risk to avoid? The risk you didn’t anticipate. The answer may seem obvious after the fact, yet most firms seldom analyze why. What is not so obvious are the decisions leading up to the risk event. It is human nature to assume that we understand risk and will avoid it just in time. Yet, time and again, we are surprised.
Somewhere along the way, a consultant categorized risks into awareness buckets of “Knowns,” “Known Unknowns” and “Unknown Unknowns.” Unfortunately, categories of risk do not protect us from the effects of a risk occurrence. Senior executives do not like surprises and, more importantly, they expect risk professionals to detect and prevent them before they occur!
Let’s examine whether these events are really “Unknown Unknowns” or, quite simply, the avoidance of decision making that could have minimized or contained the risk. Cognitive research suggests that blind spots in decision making account for up to 90 percent of large operational risks across all organizations. Very few firms take the time to re-examine failed decisions, fearing where the truth may lead.
On November 7, 1940, high winds buffeted the Tacoma Narrows Bridge, leading to its collapse. The first failure came at about 11 a.m., when concrete dropped from the road surface. Just minutes later, a 600-foot section of the bridge broke free. Subsequent investigations and testing revealed that when the bridge experienced strong winds from a certain direction, wind-induced oscillations built up to such an extent that collapse was inevitable. For posterity, the collapse of the bridge was captured on film.
I thought about this spectacular engineering failure when I read, yet again, commentary about representatives from the Department of Justice (DOJ) and Securities and Exchange Commission (SEC) appearing at for-profit conferences to give presentations to attendees. Personally, I was shocked, simply shocked to find out that one has to pay to attend these events. Further, it appears that one or more of the companies running these events, ACI, Momentum, IQPC, HansonWade, among others, might actually be for-profit companies. It was intimated that one of the ways the conference providers enticed registrants to pay their fees was to provide a forum of lawyers practicing in the Foreign Corrupt Practices Act (FCPA) space, to whom representatives from the DOJ and SEC could speak. Now I am really, really, really shocked to find that people actually pay to obtain knowledge.
Armed with the new piece of information that there is a marketplace where people actually pay to obtain information, I have decided to practice what I preach and perform a self-assessment to determine if I am part of this commerce in ideas. Unfortunately, I have come to the understanding that not only do I participate in that marketplace, but I also actually use information provided by representatives of the US government in my very own marketing and commerce. So, with a nod to Adam Smith’s Invisible Hand of the Marketplace, I now fully self-disclose that I digest what US government regulators say about the FCPA, repackage it and then (try to) make money from it. (I know you are probably as shocked, shocked as I was to discover this.)
SACRAMENTO, Calif. – Federal disaster assistance now exceeds $2.4 million for those affected by the South Napa earthquake, just one week after they became eligible to apply. At the state’s request, the federal disaster declaration expanded on Oct. 27 to include Individual Assistance for homeowners and renters in Napa and Solano Counties.
Nearly 1,900 households have applied for assistance from the Federal Emergency Management Agency (FEMA).
Disaster assistance includes grants to help pay for temporary housing, home repair and other serious disaster-related needs, such as medical expenses, not covered by insurance or other sources.
Low-interest disaster loans are also available from the U.S. Small Business Administration (SBA) for homeowners, renters, businesses of all sizes, and private non-profit organizations. Disaster loans cover losses not fully compensated by insurance or other recoveries and do not duplicate benefits of other agencies or organizations.
To apply for assistance, register online at DisasterAssistance.gov or via smartphone or tablet at m.fema.gov. Applicants may also call FEMA at 800-621-3362 or (TTY) 800-462-7585. People who use 711-Relay or VRS may call 800-621-3362.
Multilingual phone operators are available on the FEMA Helpline/Registration. Choose Option 2 for Spanish and Option 3 for other languages.
The California Governor’s Office of Emergency Services (Cal OES) and FEMA have coordinated with the City of Vallejo and Solano County to open a Disaster Recovery Center and have partnered with the City and County of Napa to provide state and federal services in a Local Assistance Center. The centers provide face-to-face assistance for affected individuals to meet with specialists from Cal OES, FEMA and the SBA. To date, nearly 500 people have visited the centers.
Napa Earthquake Local Assistance Center
301 1st Street, Napa, CA 94559
Solano County Disaster Recovery Center
1155 Capitol Street, Vallejo, CA 94590
Standard hours for the centers are 9 a.m. to 6 p.m. weekdays and 9 a.m. to 4 p.m. weekends until further notice. On Veterans Day, Nov. 11, holiday hours will be 10 a.m. to 3 p.m.
During a visit to a center, visitors may:
- Discuss their individual disaster-related needs
- Submit any additional documentation needed, such as occupancy or ownership verification documents and letters from insurance companies
- Find out the status of an application
- Obtain information about different types of state and federal assistance
- Get help from SBA specialists in completing low-interest disaster loan applications for homeowners, renters and business owners
- Meet with FEMA hazard mitigation specialists to learn about reducing future disaster losses and rebuilding safer and stronger
People should register with FEMA before going to a Disaster Recovery Center, if possible. For visitors with a disability or functional need, the centers may have:
- Captioned telephones, which transcribe spoken words into text
- The booklet Help After a Disaster, in Braille and large print, in both Spanish and English
- American Sign Language interpreters available upon request
- Magnifiers and assistive listening devices
- 711-Relay or Video Relay Services available
If other accommodations are needed during any part of the application process, please ask any FEMA or Cal OES employee for assistance.
Stay in Touch with FEMA
After a person registers, a FEMA inspector will contact that person by phone to schedule an appointment. An applicant should give clear, accurate directions to the damaged property. An inspector will try three times to schedule an inspection appointment. To avoid unnecessary delays, FEMA asks applicants to make sure FEMA has their current phone number.
During the inspection, owners and renters must show proof of occupancy, such as a valid driver’s license. Owners must show proof of ownership and sign various forms. The length of the inspection will vary, depending on the amount and location of the damage.
FEMA inspectors document damage. They do not determine eligibility for disaster assistance. They do not condemn homes. When meeting with an applicant who owns a home that has been previously red-tagged, FEMA guidance allows inspectors to complete their inspection from a safe distance.
The SBA and insurance companies also have inspectors in the field.
Be Alert for Disaster Fraud
FEMA inspectors carry official photo identification. Please contact the local police if someone posing as an inspector asks for money.
Official inspectors never ask for money or use a vehicle bearing a FEMA logo. Inspectors must carry visible FEMA ID, which includes a photo and name, the FEMA seal and the ID’s expiration date. FEMA ID has a "property of the U.S. Government" disclaimer, a return address and a barcode.
Apply to Qualify
To be eligible for federal disaster assistance, at least one member of a household must be a U.S. citizen, Qualified Alien or non-citizen national with a Social Security number. Disaster assistance may be available to a household if a parent or guardian applies on behalf of a minor child who is a U.S. citizen or a Qualified Alien. FEMA will only need to know the immigration status and Social Security number of the child.
Disaster assistance grants are not taxable income and will not affect eligibility for Social Security, Medicaid, medical waiver programs, Temporary Assistance for Needy Families, the Supplemental Nutrition Assistance Program or Social Security Disability Insurance.
For more information on the California disaster recovery, go to http://www.fema.gov/disaster/4193.
Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status. If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
The Cal OES coordinates overall state agency preparedness for, response to and recovery from major disasters. Cal OES also maintains the State Emergency Plan, which outlines the organizational structure for state management of the response to natural and manmade disasters.
There are signs that crowdsourcing is becoming a legitimate data strategy. What remains unclear, though, is whether it’s a reliable one.
One crowdsourcing project, PredictIt.org, illustrates the issues at play. PredictIt is an academic project under the auspices of the Victoria University of Wellington, New Zealand, with U.S. university affiliates. Essentially, it allows people to wager on political races (yes, it’s legal) and tap the “wisdom of the crowd.”
In the four days leading up to the elections, it ran a market on this year’s U.S. Congressional midterms. The crowd successfully predicted the overall outcome in Congress, foreseeing a Republican takeover of the Senate and gains in the House. As of Monday, the site was predicting Republicans would hold 53 or more seats in the Senate. The final outcome (thus far) in the Senate was 52 Republican seats to 43 Democratic seats.
“There are 25 or more years of data that show prediction markets do a better job predicting outcomes than polls,” Dr. Emile Servan-Schreiber, founder and CEO of Lumenogic and an expert in prediction markets, told Politico.
The New Orleans emergency call administration center now has a faster, more efficient response to emergencies, improving the flow of information between citizens, multiple agencies and first responders.
Orleans Parish Communication District (OPCD) covers an area with a population of more than 370,000 residents. They handle more than 1 million emergency calls annually, routing requests to police, fire and EMS personnel in the field. Considering its call volume, OPCD needed a better way to connect applications and automate the flow of information. The former system required multiple computers, monitors and programs, making emergency call management often painfully slow and complex.
In 2013, OPCD was selected by Motorola Solutions to conduct the field trial of a new product, eventually named PremierOne Computer Aided Dispatch NG911 Integrated Call Control.
While You May be Concentrating Your Efforts on Recovery after an Incident, Be Sure that Information Provided to the Media is What You Want Publicized
Part One of Three Regarding Your Crisis Communications
I recently had the pleasure of attending the 6th Annual Business Continuity Symposium held in Rochester, New York, and sponsored by the Eastern Great Lakes Chapter of the Association of Contingency Planners (EGLACP). Chapter President John J. Luce and his organization staff lined up some great speakers and set a record for the number of sponsors attending the annual event.
The lead speaker was James W. Satterfield, President, COO and Founder of Firestorm Solutions, LLC, whose session was entitled “Crisis Management Reality Check: Consequence Management Lessons Learned After a Crisis”. At the start of the session, Mr. Satterfield asked “Have You Heard the One About Cannibalistic Rites Being Performed on a Major College Campus?“
By Shailendra Singh
Organizations today are presented with an ever-growing number of challenges, compounded by the speed of technological change and evolution, all of which act together to increase business risk.
In such an unpredictable environment, the ability to weather market, technological and financial stress is critical to sustainability. Reactive corporate disaster recovery is no longer sufficient. Resilient systems and processes that keep businesses running as usual during any crisis are the key to retaining competitive advantage.
One of the biggest issues facing organizations today is a plethora of unpredictable disruptions that have the potential to seriously destabilise business.
BSI has launched PAS 7000, a universally applicable supply chain information standard for suppliers and buyers at organizations of all sizes around the globe. PAS 7000 ‘Supply Chain Risk Management – Supplier Prequalification’ helps answer three key questions relating to any organization’s supply chain partners: Who are they? Where are they? Can they be relied upon?
The standard draws on the collective expertise of 240 professionals drawn from global industry associations and organizations, and it addresses product, process and behavioural criteria for supplier prequalification.
PAS 7000 has been created in response to industry demand, with three quarters of executives considering supply chain risk management important or very important (1). As supply chains increasingly span continents, and brands become ever more exposed by the demand for greater transparency, the challenge for procurement teams in assessing the suitability of suppliers grows. Some 63 percent of EMEA companies have experienced disruption to their value chain due to unpredictable events beyond their control in the last 12 months, at an average cost of £449,525 per incident per company (2).
PAS 7000 provides companies with a uniform set of common information requirements that reduces duplication of effort in completing tender forms and aids procurement in bringing consistency to the supplier base. It establishes a model of governance, risk and compliance information for buyers to pre-qualify suppliers and confirm their intention and ability to adhere to key compliance requirements. This in turn helps organizations make an informed decision about whether or not to engage with a potential supply chain partner.
For further information and to download the standard free of charge visit: www.bsigroup.com/PAS7000 (registration required).
(1) Don’t play it safe when it comes to Supply Chain Risk Management – Accenture Global Operations Megatrends Study 2015
(2) Dynamic Markets – Managing the Value Chain in Turbulent Times – Oracle, March 2013.
At a Gala Dinner at the Science Museum in London on the 5th November, the Business Continuity Institute (BCI) hosted their Global Awards ceremony, an event to recognise the outstanding contribution of business continuity professionals and organisations from across the world.
The BCI Global Awards consist of ten categories – nine of which are decided by a panel of judges with the winner of the final category (Industry Personality of the Year) being chosen by their peers in a vote. As expected the entries received during the year were all to a high standard and the panel of judges had a difficult task deciding upon a shortlist to go forward to the ceremony.
Inevitably there can be only one winner in each of the categories and those who went home celebrating were:
- Business Continuity Consultant of the Year: Bill Crichton FBCI, Managing Director and Principal Consultant at Crichton Continuity Consulting Ltd
- Business Continuity Manager of the Year: John Zeppos FBCI, Group Business Continuity Management Director at OTE Group of Companies
- Public Sector Business Continuity Manager of the Year: Brian Gray MBCI, Chief of Business Continuity Management at the United Nations
- BCM Newcomer of the Year: Luke Bird MBCI, Business Continuity Executive at Atos
- Business Continuity Team of the Year: Franklin Templeton Investments
- Business Continuity Innovation of the Year: Deloitte
- Business Continuity Provider of the Year (BCM Service): Continuity Shop
- Business Continuity Provider of the Year (BCM Product): ezBCM
- Most Effective Recovery of the Year: Bank of New Zealand
- Industry Personality of the Year: Chittaranjan Kajwadkar MBCI
Steve Mellish, Chairman of the BCI said: "The geographical range of winners at tonight's awards is a sign of just how the industry is developing internationally and how global an organisation the BCI is. The high standard of entries we received gave the judges some very difficult decisions to make so my congratulations go to everyone who won for what is a tremendous achievement."
The BCI Global Awards are held annually and coincide with the BCI World Conference and Exhibition, one of the premier events in the global industry calendar. Held over two days, the conference features fifty exhibitors, a similar number of speakers and close to a thousand visitors.
The world turns and things change – and that includes computer hacker approaches too. The immediate threats of malware and cybercriminals are relatively well-known. Phishing emails are designed to get you to click right away on a hacker’s link. Worms burrow through systems, always on the go. Viruses in that free software you should not have downloaded replicate and ravage. But now there’s a new menace with a different approach. Instead of attacking your system now, some hackers are making themselves at home for the longer term. They enter by stealth and lie low. Then they start to use your computers as if they were their own. Welcome to the Advanced Persistent Threat, or APT for short.
The goal of the Advanced Persistent Threat is typically not to do damage, but to steal data. The most sophisticated APTs require considerable effort and expertise, possibly requiring new internal system code. APT campaigns are also part of the spying arsenal of certain governments that can muster the high levels of hacking resources and expertise required.
Big Data is changing things, and not just because it requires shiny, new solutions such as Hadoop or Apache whatsit-of-the-week. As organizations use and assimilate Big Data, the more obvious it becomes that IT will need to reimagine some old standards in the data toolbox.
Why? The obvious reason is standard data tools aren’t designed to handle unstructured or high-velocity data. But there are other issues unique to Big Data that will require us to rethink the tools we’re using to manage, analyze and present the data. Here are two that have been in the news recently:
The Executive Dashboard
Executive dashboards were created over a decade ago to help leaders visualize specific enterprise metrics, such as key performance indicators. Not a lot has changed since then. That’s a problem in the era of Big Data, when insight is gained not so much through rote reporting as it is through exploration.
Now that the software-defined data center (SDDC) is nearly upon us, enterprise executives need to start asking a number of pertinent questions; namely, how do I build one, and what do I do with it once it is built?
In essence, the SDDC is more about applications than technology. The same basic virtual and cloud technologies that have infiltrated server, storage and now networking are employed to lift data architectures off of bare metal hardware and into software. But it is the way in which those architectures support enterprise apps, and the way in which the apps themselves are reconfigured to leverage this new, more flexible environment that gives the SDDC its cachet.
Until lately, however, the application side of the SDDC has been largely invisible, with most developments aimed at the platform itself. Last week, however, VMware announced an agreement with India’s Tata Consultancy Services (TCS) to develop pre-tested and pre-integrated applications for the SDDC. Under the plan, TCS will provide architectural support and operational expertise to help organizations transition legacy apps into virtual environments powered by VMware solutions, namely vSphere, NSX, Virtual SAN and the vRealize Suite. The deal also calls for the creation of a Center of Excellence to link data centers in Milford, Ohio and Pune, India to handle beta test and workload assessment functions.
Susan L. Cutter is a Carolina Distinguished professor of geography at the University of South Carolina where she directs the Hazards and Vulnerability Research Institute. Her primary research interests are in the area of disaster vulnerability/resilience science — what makes people and the places where they live vulnerable to extreme events and how vulnerability and resilience are measured, monitored and assessed.
Cutter is a GIS hazard mapping guru who supports emergency management functions. I posed a series of questions about mapping and asked her to respond in writing. In Cutter’s responses she reminds us to ask the “why of the where” question when looking at maps.
According to a 2012 McKinsey study reported by Chui and colleagues, employees on average spend 28% of their workday reading and responding to email. Digging deeper into email usage, Jennifer Deal describes a 2013 study that surveyed a group of executives, managers and professionals (EMPs) and found that 60% of EMPs with smartphones are connected (primarily via email) for 13.5 hours or more per workday and spend about five hours connected during the weekend. This amounts to a 72-hour workweek.
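The 72-hour figure follows directly from the survey numbers. A quick back-of-the-envelope check (illustrative only; the variable names are mine, not the study's):

```python
# Arithmetic behind the "72-hour workweek" figure from the survey data
hours_per_workday = 13.5   # connected hours per workday (survey figure)
workdays = 5
weekend_hours = 5          # connected hours across the weekend (survey figure)

total_connected_hours = hours_per_workday * workdays + weekend_hours
print(total_connected_hours)  # 72.5, i.e. roughly the 72-hour workweek cited
```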
In response to this hyper-connectedness the German automaker Daimler (maker of Mercedes-Benz) provides vacationing employees with an unusual extension to the automatic out-of-office response. As usual, the response states the employee is on vacation and provides an alternative contact person. But then, the Daimler system goes a step further and “poof” the sender’s e-mail is automatically deleted from the vacationer’s inbox. Daimler’s intent is to let the employee “come back to work with a fresh spirit.” Volkswagen and Deutsche Telekom also have policies that limit e-mails.
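The behaviour Daimler describes can be thought of as a simple mail-handling rule: during vacation, send the standard out-of-office reply naming an alternative contact, then drop the message instead of delivering it. The sketch below is a hypothetical illustration of that policy, not Daimler's actual system; the function name, message structure and reply wording are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str

def handle_vacation_mail(msg: Message, on_vacation: bool, alternate_contact: str):
    """Hypothetical 'Mail on Holiday' rule.

    Returns (auto_reply_text, keep_in_inbox). Outside vacation, mail is
    delivered normally with no auto-reply; during vacation, an auto-reply
    is generated and the message is not kept in the inbox.
    """
    if not on_vacation:
        return None, True  # normal delivery
    reply = (
        f"I am on vacation and your message '{msg.subject}' has been deleted. "
        f"Please contact {alternate_contact}, or resend it after my return."
    )
    return reply, False  # auto-reply sent, message dropped from the inbox
```

For example, `handle_vacation_mail(Message("a@example.com", "Q3 report"), True, "colleague@example.com")` returns a reply naming the alternate contact and `False` for keeping the message, matching the "come back to work with a fresh spirit" intent described above.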
EATONTOWN, N.J. – The process of recovering from a disaster begins almost as soon as the threat has passed and responders have arrived. Hundreds, if not thousands, of people will need help immediately as well as for the foreseeable future. Non-governmental volunteer groups, churches and faith-based organizations are often among the first to step in and help, but also have limited resources to sustain their presence.
In 13 New Jersey counties affected by Hurricane Sandy, many of these organizations came together to form long-term recovery groups (LTRGs), and Federal Disaster Recovery Coordination (FDRC; regionally referred to as Federal Interagency Regional Coordination, or FIRC) connects these groups to the Federal Emergency Management Agency. FEMA Voluntary Agency Liaisons (VALs) support the LTRGs as they address the unmet needs of individual survivors, in contrast to FIRC’s emphasis on communities as a whole.
While a few groups had come into existence after Hurricane Irene struck in 2011, many LTRGs were formed in the immediate aftermath of Sandy. The VALs assisted in getting some of the groups launched, using the VOAD (Voluntary Organizations Active in Disaster) manual and other toolkits to bring representatives together.
There are 14 active groups in New Jersey in 13 counties (Atlantic City has its own group separate from Atlantic County). These long-term recovery groups mainly consist of and represent faith-based and nonprofit organizations that have resources to assist survivors.
“Survivors that are still not back in their homes need things like rental assistance, construction assistance and help filling funding gaps, and members of the LTRGs seek to provide those resources and guidance,” said Susan Zuber, VAL for the New Jersey Sandy Recovery Field Office. She also said that one advantage of having religious organizations involved in the LTRGs is “they can reach up to the national level and potentially get funds and resources.”
Along with investigating the issues communities face during recovery, FIRC coordinates information and resources for affected survivors so they can determine where help is available.
“The LTRG disaster case managers strive to make sure various resources get to the people they know need help, and FIRC helps them ensure that there is no duplication of benefits,” Zuber said. “We assist in being the best stewards possible of limited available funds.”
FIRC VAL Lori Ross says that nearly two years after Sandy struck, the LTRGs are still actively helping survivors with some serious issues.
“New Jersey 211 (the state’s resource hotline) is receiving (an average of) 44 new referrals for help every week,” she said. “The Ocean and Monmouth county groups have started receiving requests for rental assistance” as people who had been renting properties while their homes were repaired or rebuilt are in need of more money to pay their rent and mortgage, she added. Mold in homes that wasn’t dealt with properly initially continues to be an issue.
Not all of the problems survivors are facing are of a physical nature, either.
“We’re also seeing more cases where people are asking for mental and emotional assistance,” Zuber said. “We’re getting requests for clergy and mental health treatment. There’s a real emotional and spiritual care element as it relates to the impact of the storm.”
Ross added that even caregivers and case workers are feeling the pressure of what is now a two-year process. “This (the anniversary) is a very critical time,” she said, noting that requests for this type of aid increased at this time last year as well.
Rebuilding after a disaster the magnitude of Hurricane Sandy takes years. FEMA, the FIRC, and the long-term recovery groups of New Jersey are using coordinated teamwork and resources to help people put their lives back together.
Follow FEMA online at www.twitter.com/FEMASandy, www.twitter.com/fema, www.facebook.com/FEMASandy, www.facebook.com/fema, www.fema.gov/blog, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
A newly published report from the Business Continuity Institute (BCI) highlights that, while overall results indicate a good uptake of emergency communications planning, a significant minority remain passive or have difficulty securing management buy-in. It is worrying to note that among those organisations without an emergency communications plan, nearly two-thirds (63.4%) would only consider adopting one after a business-changing event, a bit like shutting the stable door once the horse has bolted. This could have dire consequences, as previous BCI research suggests that business-affecting events may severely affect an organisation’s viability.
Supported by Everbridge, the report concludes that emergency communications remains an essential part of any BC programme, and this research demonstrates that while a great majority of companies are aware of its importance, there are gaps in implementation that need to be addressed. To be effective, emergency communications plans must be continuously updated to reflect the risks a business faces and be well embedded within the organisation. Relevant training and education programmes, as well as top management buy-in, are necessary to promote a culture of awareness and reduce the risk of communications failure during incidents.
Further findings from the report include:
- In a sign of growing awareness, fewer than 13.5% of organisations surveyed do not have an emergency communications plan.
- Emergency communications plans are quite comprehensive in their scope. At least 70% of organisations have plans covering the following threats: IT outages (81.2%), fire (77.8%), power outages (76.2%), weather related incidents (75.6%), natural disasters (74.9%) and security related incidents (70.0%). These mirror the top three causes of business disruption as reported by respondents in the last 12 months: IT outages (59.8%), power outages (51.6%) and weather related incidents (47.2%).
- Almost a fifth of respondents (18.7%) belong to organisations where more than 500 staff members travel internationally on a regular basis. More than 30% report travelling to ‘high-risk’ countries.
- Almost two-thirds of companies (64.7%) report having training and education programmes in place related to emergency communications. Most have regularly scheduled programmes (64.2%).
- Around 15% of organisations regularly schedule exercises of their emergency communications plans, and most schedule their exercises only once a year (55.8%). This is a worrying finding considering that almost half of organisations (49.6%) are likely to invoke their plans more than once during any given year.
- More than 70% of organisations take 30 minutes or less to activate their emergency communications plans. Nonetheless, more than a quarter of organisations do not request responses from their staff in the event of an incident (27.4%) or have not defined acceptable response rates (28.2%).
- Social media appears to play an important role in emergency communications plans: 42% of respondents report using social media to monitor their staff during emergencies, and almost a third (31.6%) utilise it to inform stakeholders.
Patrick Alcantara, Research Associate at the BCI and author of the report, commented: “This survey is seen as the first step toward benchmarking an organisation’s emergency communications arrangements. It is hoped that it will allow companies to take a second look at their emergency communications capability and introduce improvements that will redound to their benefit. Given how emergency communications may improve survival during extreme situations, it is important that organisations take heed and aspire for a robust capability before it is too late.”
Imad Mouline, Chief Technology Officer at Everbridge, commented: “Fluctuating global threat levels, sophisticated cyber attacks and an ever growing mobile workforce present increasingly diverse and complex risks to business interests. In this unpredictable environment, Business Continuity Practitioners are consistently faced with the challenge to plan for the unexpected while ensuring the safety of their staff and communities and protecting their businesses from both financial loss and reputational damage. This survey provides a benchmark for Emergency Communication Planning.”
This is the first dedicated piece of research into understanding the emergency communications plans of a wide range of organisations and learning how these are integrated within wider recovery programmes. The results support the anecdotal feedback from the industry, demonstrating that emergency communications plans form an established, vital element of continuity plans for mid-size to large enterprises, while also offering some practical ideas for those looking to improve their capabilities in this area.
A newly published report from the Business Continuity Institute (BCI) highlights that nearly a quarter of respondents to a survey claimed their organisation had suffered losses of at least €1 million during the previous twelve months (up from 15% last year) as a result of supply chain disruptions. 13.2 percent suffered a one-off disruption that cost in excess of €1 million (up from 9% last year). The study also showed that 40 percent of respondents claimed their organisation was not insured against any of these losses, while 20 percent were insured against only half of these losses.
Organisations cannot simply bury their heads in the sand and pretend an incident will never happen to them. The survey showed that 76 percent of respondents had experienced at least one supply chain disruption during the previous twelve months, yet more than a quarter of respondents (28 percent) still had no BC arrangements in place to deal with such an event.
Supported by global insurer Zurich, the report concludes that supply chain disruptions are costly and may cause significant damage to an organisation’s reputation. While the survey results indicate a growing awareness of BC and its role in ensuring supply chain resilience, many organisations have yet to improve on their reporting and BC arrangements. While budgets for business continuity and ensuring supply chain resilience are often slashed in favour of other priorities, this study demonstrates why this often might not be a wise course of action. With the growing cost of disruption worldwide and the potential reputational damage caused as a result of failing to have appropriate transparency in the supply chain, investments in this area are essential and can spell the difference when disaster strikes.
Further findings from the report include:
- 78.6% of respondents do not have full visibility of their supply chains. Only 26.5% of organisations coordinate and report supply chain disruption enterprise-wide. 44.4% of disruptions originate below the Tier 1 supplier and 13% of organisations do not analyse their supply chains to identify the source of the disruption.
- The primary sources of disruption to supply chains in the last 12 months were unplanned IT and telecommunications outage (52.9%), adverse weather (51.6%) and outsourcer service failure (35.8%).
- The loss of productivity (58.5%) remains the top consequence of supply chain disruptions for the sixth year running. Increased cost of working (47.5%) and loss of revenue (44.7%) are also more commonly reported this year and round out the top three.
- Respondents reporting low top management commitment to this issue have risen from 21.1% to 28.6%. This is a worrying finding as low commitment is likely to coincide with limited investment in this key performance area.
- The percentage of firms having BC arrangements in place against supply chain disruption has risen from 57.7% to 72.0%. However, segmenting the data reveals that small and medium sized enterprises (SMEs) are less likely to have BC arrangements (63.9%) than large businesses (76.2%).
Lyndon Bird FBCI, Technical Director at the BCI, commented: “Should we be alarmed by some of the figures revealed in this survey? Perhaps so. Should we be surprised by them? Probably not. As long as organisations are failing to put business continuity mechanisms in place, and as long as top management is failing to give the issue the level of commitment it requires, supply chain disruptions will continue to occur and they will continue to cost the organisation dearly. In our globally connected world, these supply chains are becoming ever more complex and more action is needed to make sure that an incident in one organisation doesn’t become a crisis for another.”
Nick Wildgoose, Global Supply Chain Product Leader at Zurich Insurance Group, commented: “Top level management support is fundamental to driving improvement in supply chain resilience; I have witnessed the significant disruption cost reductions that have been achieved by companies that are proactive in this area. This should be regarded as a business change programme in the context of driving value through Supplier Relationship Management and becoming the customer of choice for your strategic suppliers to improve your business performance.”
Now into its sixth year, the BCI Annual Supply Chain Resilience Survey has established itself as an important vehicle to highlight and inform organisations of the importance of supply chain resilience and the key role it plays in achieving overall organisational resilience in today’s volatile global economic climate. The outcomes of previous surveys have provided organisations with critical insights and valuable information to support the development of appropriate strategic responses and approaches to mitigate the impact and consequences of disruptions within their supply chains.
On the surface, you’d think Big Data has established itself as a popular and winning proposition for businesses. Consider these recent research findings:
- Innovative companies are three times as likely to rely on Big Data analytics and data mining than their less-innovative peers. — The Boston Consulting Group’s study, “The Most Innovative Companies 2014: Breaking Through Is Hard to Do.”
- Nine out of 10 CXOs are happy with Big Data’s business outcomes, and 59 percent of executives at companies using Big Data say it’s extremely important. — Accenture Analytics.
- Sixty-seven percent of companies now have Big Data projects running in production, compared to 32 percent last year. Of those, 82 percent said Big Data is already integrated into the mainstream of their organization. — NewVantage Partners survey.
By all accounts, Big Data initiatives would seem successful and on the way to becoming established business practice.
(MCT) — When Ortiz Middle School Principal Steve Baca ordered a lockdown after a security guard found a gun in a student’s backpack earlier this month, English teacher Alexandra Robertson knew exactly what to do. She locked her classroom door, got her kids to help barricade it and said she was ready to use books, staplers and any other blunt objects she could find to fend off anyone who might try to enter.
Robertson’s response was far from fear-driven. It was part of a new approach to dealing with on-campus threats from outsiders called Run, Hide and Fight.
For years many schools relied on a basic lockdown approach of sealing off doors, turning off lights, shuttering windows, and silencing cellphones and other technological devices as a way to deal with an outside threat. But the 2012 massacre at Sandy Hook Elementary School in Newtown, Conn., that left six adults and 20 first-graders dead — as well as a raft of other school shootings in recent years — have forced school safety experts to rethink their approach.
Riverbed Technology today extended the level of control IT organizations can gain over wide area networks (WAN) by incorporating a policy engine into its line of SteelHead WAN optimization appliances.
The upgrade was announced today at the Riverbed FORCE 2014 conference. Amol Kabe, vice president of product management for Riverbed, says Riverbed SteelHead 9.0 now allows IT organizations to define the class of WAN links they should use based on the performance requirements of applications.
Kabe says it’s now routine for IT organizations to deploy a mix of Internet and private leased lines in support of cloud applications. As such, they need WAN appliances that can dynamically route application traffic across those WAN links based on latency requirements.
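Dynamic, latency-aware path selection of this kind can be sketched in a few lines. This is a hypothetical illustration of the general idea, not Riverbed’s actual SteelHead policy engine; the link names, latency figures and cost weights are invented:

```python
def select_link(links, max_latency_ms):
    """Pick the cheapest link whose measured latency meets the
    application's requirement; fall back to the lowest-latency link."""
    eligible = [l for l in links if l["latency_ms"] <= max_latency_ms]
    if eligible:
        return min(eligible, key=lambda l: l["cost"])
    return min(links, key=lambda l: l["latency_ms"])

links = [
    {"name": "mpls",     "latency_ms": 20, "cost": 10},  # private leased line
    {"name": "internet", "latency_ms": 60, "cost": 1},   # commodity Internet
]

# A voice class needing <= 30 ms lands on the private line;
# bulk replication tolerating 100 ms takes the cheap Internet path.
voip = select_link(links, 30)["name"]    # "mpls"
bulk = select_link(links, 100)["name"]   # "internet"
```

The policy engine the article describes works at the level of application classes rather than individual flows, but the selection logic is the same shape: match measured link characteristics against per-application requirements.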
(MCT) — Hospitals already are seeing serious, sometimes life-threatening cases of flu this year, and early indicators show that a good chunk of those sickened are working-age adults.
In Franklin County, at least 28 people had been hospitalized for flu as of Oct. 25, the most recent date for which data is available from Columbus Public Health.
That’s more than double the number at that time last year, when there had been 11 hospitalizations in the county. In the flu season before that, 2012-13, just two people had been hospitalized at this point in the season.
Columbus Public Health looks at several factors to track flu, including hospitalizations, laboratory-test results, emergency-department visits and medication sales.
In the week ending on Oct. 25, there were upticks in pediatric visits for flulike illness and respiratory illness and in over-the-counter sales of cold and cough remedies.
Statewide, emergency-department visits, hospitalizations and thermometer sales were slightly above the baseline five-year average used by the Ohio Department of Health to assess flu activity.
The perceived security threats associated with cloud services become less of an issue as businesses adopt more cloud services. This is according to Databarracks’ fifth annual Data Health Check report, which surveys over 400 IT decision makers in the UK.
The report, which questions IT leaders from organizations of various sizes and industries, revealed that 81 percent of organizations that had adopted no cloud services rated security as a top factor to consider when selecting a potential provider, with core factors such as functionality scoring as poorly as 38 percent.
Once an organization has adopted two or more cloud services, however, the importance of security falls to just 44 percent with factors such as provider reputation becoming much more important overall.
Peter Groucutt, managing director of Databarracks, comments: “This isn’t a case of security becoming less important as you adopt more cloud services – data security is always going to be a priority for both the organization and the provider. What we’re actually seeing is organizations moving past the ‘fear of the unknown’, as they experience cloud services first-hand.
“We’ve been hearing it for years: security is the biggest inhibitor of cloud services. CSPs have been striving to change that perception, so it’s promising to actually see the attitudes change as the market matures. Once an organization actually uses a cloud service, they realise that the practicalities of working with a provider – the functionality, the location of their data centres - become far more important than the security risks they once feared.”
Download the full Data Health Check 2014 here.
As result of EDP Distribuição's responsibilities, its involvement was required in Portuguese efforts to comply with the European Council Directive 2008/114/EC, on the identification and designation of National Critical Infrastructures (NCI) and the assessment of the need to improve their protection.
EDP Distribuição is the Portuguese mainland Distribution System Operator, serving over 6 million customers in a regulated business with clearly defined responsibilities, being the holder of the concession to operate the Distribution Electric Power Network in Medium Voltage and High Voltage, and holding municipal concessions for the distribution of electricity in Low Voltage.
With EDP Distribuição having responsibility for several assets and systems which are essential for the maintenance of vital societal functions - health, safety, security, and the economic or social well-being of people - the challenges were many. The selection of a manageable number of assets from a set of more than 400 main premises, the identification of their major threats and vulnerabilities, and the writing down of their emergency response procedures were among them.
(MCT) — The first call came on a Thursday, 12 days after Michael Brown was shot. Patti Knowles and her granddaughter were watching “Mickey Mouse Clubhouse.”
The caller warned that the collective of computer hackers and activists known as Anonymous had posted data online — her address and phone number and her husband James’ date of birth and Social Security number.
Anonymous had been targeting Ferguson and police officials for days. But this seemed to be an error. Patti and James weren’t city leaders, they were the parents of one — Ferguson Mayor James Knowles III.
Within hours, identity thieves had opened a credit application — the first of many — using the leaked data.
The second call came on a Friday, nearly two months later. This time, it was their bank.
Someone, posing as James, the mayor’s father, had called in and changed passwords, addresses and emails. Then the individual sent $16,000 in bank checks to an address in Chicago.
The name on the address?
Jon Belmar. Same as the chief of the St. Louis County police.
(MCT) — Florida, the most storm-battered state in the nation, now is home to groundbreaking research that allows scientists to dissect the raw power of hurricanes.
Both the University of Miami and Florida International University have built complexes that recreate realistic hurricane conditions, including the enormous wind, battering waves and rainfall they can generate.
The idea is to provide scientists with a better understanding of how the storms work, information that should help improve forecasts and bolster construction.
Additionally, the Miami Museum of Science makes it easy for patrons to view the inner workings of hurricanes — and get a feel for flying into one.
It's not easy to witness the destructive power of a Category 5 hurricane up close and personal.
Yet that's what UM scientists can now do inside a tank the size of an indoor swimming pool, housed in the school's new $50 million Marine Technology and Life Sciences Seawater Complex.
Business continuity is, especially in the Anglo-American world, not that new a concept. Not being new also means that it is probably due to be redesigned. Since the inception of Business Continuity Management in the late 80s and early 90s of the last century, the world has changed quite a bit. The main concepts, procedures and processes of BCM, however, have not changed that much in the past 25 or so years. We are still talking PDCA, we are still talking process-based business impact analysis, we are still trying to do the work of risk managers with our task in the fields of operational and reputation risks. We still have the BCM Lifecycle.
Those who are practitioners in the profession may have already realized that the theoretical strategies and tactics outlined by the BCM Lifecycle approach may not always meet the needs and possibilities of an organization seeking to implement BCM. The business impact analysis, for instance, needs processes, since it aims to operationalize the damage caused by failed processes. But which organization has a complete, operationalized process document that allows it simply to sum up losses and damages along process chains? And how can the BCM organization define the so-called BCM strategies when they haven’t even asked the business what workarounds they think they need to cover a resource that was lost or damaged in some crisis situation?
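Summing losses and damages along a process chain, as the business impact analysis intends, can be made concrete with a small sketch. The processes, per-day loss figures and recovery time objectives (RTOs) below are invented purely for illustration:

```python
# Hypothetical process chain with per-day loss estimates and
# recovery time objectives (RTOs, in days) for each step.
chain = [
    {"process": "order intake", "loss_per_day": 40_000, "rto_days": 1},
    {"process": "fulfilment",   "loss_per_day": 65_000, "rto_days": 2},
    {"process": "invoicing",    "loss_per_day": 25_000, "rto_days": 5},
]

def impact_over_outage(chain, outage_days):
    """Sum losses along the chain: each process accrues damage only
    for the days the outage exceeds its recovery time objective."""
    return sum(p["loss_per_day"] * max(0, outage_days - p["rto_days"])
               for p in chain)

impact = impact_over_outage(chain, outage_days=4)  # 250_000
```

The catch the article points at is exactly the input side: the calculation is trivial once loss rates and RTOs exist per process, but few organizations have that operationalized documentation to begin with.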
By Alistair Forbes
Backup seems simple: take the important files that you need, and make sure that they are duplicated in such a way that they can be recovered.
It’s clearly a necessity. Our own personal PCs nag us if we’re failing to back up our data, and everyone knows a dire tale of woe about failing to back up - such as ma.gnolia, the company that collapsed after losing all its customers’ data.
And yet simply having a backup isn’t enough: backup success rates today are between 75 and 85 percent. In some sectors, only three-quarters of backup recoveries are successful. The rest, despite having a backup solution in place, are able to recover only some, if any, of their data.
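One reason recoveries fail silently is that backups are rarely verified after the fact. A minimal sketch of restore verification, comparing checksums of the original and the restored copy (the file paths are whatever the caller supplies; this is an illustration of the principle, not any vendor’s product):

```python
import hashlib

def file_digest(path):
    """SHA-256 of a file, read in chunks so large files don't need RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def verify_restore(original, restored):
    """A backup only counts if the restored copy matches the original."""
    return file_digest(original) == file_digest(restored)
```

Running a periodic test restore through a check like this is what separates "we have a backup solution" from the 75-85 percent who can actually recover.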
Sungard Availability Services (Sungard AS) has announced plans to open a new workplace recovery centre for central London. The facility, which offers capacity for over 700 customer staff, will be equipped with the latest conferencing equipment and IT infrastructure.
The centre, which will be situated just outside of the city, is part of Sungard AS’ ongoing programme of investment in its workplace recovery facilities to ensure they continue to meet the needs of today’s workforce.
Positioned within walking distance of the city, the City of London Workplace Recovery Centre will work as part of a ‘near-far’ continuity planning solution: with organizations also provisioning workspaces within Sungard AS’ alternative facilities across the country, including recovery centres in Borehamwood, Hertfordshire, and Hounslow, London.
Built for the future, the centre will give users access to 10Gb-ready networking infrastructure and will be able to cope with heavier data traffic and increasing workloads.
“The technology that businesses are using is changing but organizations still recognise that truly effective collaboration requires regular face-to-face contact between teams,” said Keith Tilley, executive vice president, Global Sales and Customer Services Management, Sungard AS.
“While the advances in enterprise mobility and remote working offer businesses more options, customers still want to provide space for their teams to work together. Sungard AS’ latest investments are part of a wider push in ensuring our customers have access to the services they need to maintain business-as-usual in even the most difficult circumstances.”
The City of London Workplace Recovery Centre is expected to be operational in January 2015.
Most organizations (67 percent) are facing rising threats in their information security risk environment, but over a third (37 percent) have no real-time insight on cyber risks necessary to combat these threats. This is one of the headline findings of EY’s annual Global Information Security survey, Get Ahead of Cybercrime, which this year surveyed 1,825 organizations in 60 countries.
Companies lack the agility, the budget and the skills to mitigate known vulnerabilities and to successfully prepare for and address cybersecurity threats:
43 percent of respondents say that their organization’s total information security budget will stay approximately the same in the coming 12 months despite increasing threats, which is only a marginal improvement to 2013 when 46 percent said budgets would not change.
Over half (53 percent) say that a lack of skilled resources is one of the main obstacles challenging their information security program and only 5 percent of responding companies have a threat intelligence team with dedicated analysts. These figures also represent no material difference to 2013, when 50 percent highlighted a lack of skilled resources and 4 percent said they had a threat intelligence team with dedicated analysts.
GANTA, LIBERIA — The site of the U.S. military’s future Ebola treatment center is now an overgrown grassland next to an abandoned airstrip on the Guinean border.
Two miles away, in a converted eye clinic that now houses a makeshift Ebola ward, this county’s sole doctor is waiting. He will soon run out of protective gear. Some of his employees haven’t been paid for a month.
“We all know we need the new treatment center,” said the doctor, Paye Gbanmie. “I worry that we could run out of space here.”
The U.S. military aims to quell that anxiety when it erects the new treatment center, slated to be finished later this month and manned by newly imported doctors. Just the sight of American helicopters flying over Ganta, a city of about 50,000, has lifted hopes here.
Every historical era has its lessons, such as Don’t trust totalitarian dictators to respect diplomatic niceties, Avoid land wars in Asia, and You know what’s going to happen to Sean Bean in this movie. One of the lessons of the last decade is certainly Information is not intelligence. Unfortunately, many people who do software requirements, or depend on them to build and test software, have not seen the relevance of that maxim in their own work.
Requirements in software development serve much the same purpose as intelligence in national security: they are supposed to provide actionable, reliable insights. “Actionable” is largely a question of format, which software professionals can control directly. Older questions like, What is the Jesuitical distinction between a requirement and a specification? and newer questions like, What kind of words do we need to supplement the pictures that we’ve created in this wireframe? have the same purpose: make sure that the developer, tester, or some other species of technologist understands what action to take, based on the information provided. In a similar fashion, the US President’s daily intelligence briefing follows a format that its intended audience finds useful.
The reliability of the information is not under the complete control of software professionals. In fact, we should always assume that the information we have is, to some degree, unreliable. We can reduce the amount of unreliability, but it will never reach 100% certainty. People in the intelligence profession deal with this problem in a variety of ways. Here are a few examples:
If a major incident affected your business tomorrow, what are the processes, machinery or even suppliers that would be really hard to replace quickly – the really awkward ones, the unique machinery or equipment that perhaps there isn’t really a plan for, let alone a plan that gets you back within an acceptable recovery time?
Spotting the problems is relatively easy, particularly when you get into manufacturing or supply chain businesses. The challenge for Business Continuity Managers is to do something about them and develop practical, simple recovery plans – even for the hard stuff.
I lead Business Continuity Management at Rolls-Royce Plc, where we have several key manufacturing processes that are both important and challenging to recover quickly.
Over the last year, we have developed a simple but effective approach to business recovery planning for these processes and it fits in just two pages.
This approach has helped the business to understand the risk, recover more efficiently and to prioritise capital investment decisions.
At the 2014 BCI World Conference and Exhibition, I’ll be showing you how this works along with providing practical hints and tips so that you can make it work in your business too.
James Stevenson will be discussing this issue further on day one of the BCI World Conference and Exhibition on Wednesday 5th November.
The past few weeks of the Ebola outbreak in Sierra Leone, Guinea and Liberia have gripped the U.S. and the world in bizarre, comical and concerning ways. Every day the news brings stories: hazmat suits are among the most sought-after Halloween costumes, the outbreak is fodder for late-night talk shows, and the Centers for Disease Control and Prevention (CDC) has finally released new health-care worker protection guidelines.
The Ebola virus has deeply rocked the U.S. public health and health-care community and the public at large, even though we are not likely to see the same Ebola transmission and mortality rates they have in West Africa. There have been only four cases in the U.S. and one death, but many of my health-care emergency management colleagues can attest that we are now spending an inordinate amount of time on infection control webinars, donning and doffing training and fit testing, as well as ordering as much personal protective equipment (PPE) as they can get their hands on.
There is nothing quite so scary in the IT universe as tearing down what you just built in order to make way for a new technology. Well, perhaps complete and utter network failure, but that’s about it.
But with the advent of containers, it seems like the enterprise is on the cusp of reworking one of the fundamental elements of the cloud, converged infrastructure, software-defined infrastructure, data mobility and just about every other initiative that is driving data center development these days. Fortunately, container-based virtualization does not require a forklift upgrade to the virtual layer, but it does alter the way virtual machines are managed, and it could cause a massive rethink when it comes to devising the higher-order architectures that are slated to drive business productivity in the future.
To some, however, it was traditional virtualization’s limitations in supporting advanced data architectures that led to the rise of containers in the first place. As Virtualization Review’s Jeffrey Schwartz put it, there was growing consensus that the application loads of elastic, cloud-based platforms and applications were already pushing the limits of even the most advanced virtualization platforms, and what was needed was a higher degree of portability, speed and scale. Containers achieve this by allowing a single operating system to handle multiple apps at once, which is a much more elegant solution than deploying numerous virtual machines each populated with its own OS.
By Geary Sikich
Inferno, the first part of Dante's Divine Comedy and the inspiration for Dan Brown's latest bestseller of the same title, describes the poet's vision of Hell. The story begins with the narrator (the poet himself) lost in a dark wood, where he is attacked by three beasts from which he cannot escape. He is rescued by the Roman poet Virgil, who is sent by Beatrice (Dante's ideal woman). Together, they begin the journey into the underworld, the Nine Circles of Hell.
As business continuity planners, you may have experienced, or may be experiencing, the journey through the Nine Circles of Planning Hell. When you were assigned the responsibility for developing the business continuity plan, or disaster plan, or emergency plan, or any of the myriad regulatory-driven planning initiatives, you found yourself in the first level of Planning Hell – Limbo. Your journey probably continued to several of the nine circles of planning hell, or maybe you got lucky and were able to stay in that nice state of limbo until you moved on to your next assignment, job or career change. If you were not so lucky, you travelled through all nine circles of planning hell. Hopefully, if you did travel through all nine circles, you, like Dante, emerged to find the light.
David Sandin looks at whether we have heeded the lessons of Heartbleed bug, the implications of Shellshock and the future security of open-source coding.
‘First time’s an accident, second time’s a coincidence, but third time is stupidity’ has long been the mantra of infuriated parents, exasperated at their children’s ability to make the same mistake multiple times over. Oddly, it was also the phrase that came to mind as news of the Shellshock bug targeting open-source coding broke, just six months on from the Heartbleed attack.
Shellshock allows hackers to easily exploit many web servers that used the free and open source Bash command line shell. So far hackers have focussed efforts on exploiting the weakness to place malware on vulnerable web servers, with the intention of creating armies of bots for future distributed denial of service attacks, flooding website networks with traffic and taking them offline. While it was initially thought that the vulnerability would only affect machines that ran Bash as their default command line interface, it is now suspected that machines using related coding could also be exploited.
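The vulnerability can be detected with the widely circulated probe for CVE-2014-6271: export an environment variable containing a crafted function definition and see whether the trailing code executes. A hedged sketch wrapping that probe in Python (it assumes a `bash` binary on the PATH and returns `None` when none is found):

```python
import os
import shutil
import subprocess

def bash_has_shellshock(bash="bash"):
    """Run the classic CVE-2014-6271 probe against a bash binary.

    A vulnerable bash parses the function body in the environment
    variable and executes the trailing command; a patched bash does not.
    Returns True/False, or None if no bash binary is available.
    """
    if shutil.which(bash) is None:
        return None
    env = dict(os.environ)
    env["PROBE"] = "() { :;}; echo VULNERABLE"
    out = subprocess.run([bash, "-c", "echo done"],
                         env=env, capture_output=True, text=True).stdout
    return "VULNERABLE" in out

status = bash_has_shellshock()
```

On any bash patched after September 2014 the probe returns `False`; the point of the exercise is auditing the many embedded and server systems where bash was never updated.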
Texas regulators on Tuesday tightened rules for wells that dispose of oilfield waste, a response to the spate of earthquakes that have rattled North Texas.
The three-member Texas Railroad Commission voted unanimously to adopt the rules, which require companies to submit additional information – including historic records of earthquakes in a region – when applying to drill a disposal well. The proposal also clarifies that the commission can slow or halt injections of fracking waste into a problematic well and require companies to disclose the volume and pressure of their injections more frequently.
The commissioners – all Republicans – said the vote showed how well Texans can respond to issues without federal intervention.
Commissioner Barry Smitherman called the vote a “textbook example” of how the commission identifies an issue and “moves quickly and proactively to address it.”
“We don’t need Washington,” he said.
The federal Environmental Protection Agency last month said it supported the proposed rules.
The times, they are a-changing. Mobile computing devices not to mention BYOD and a millennial attitude mean that a substantial number of employees in enterprises now do their work away from their desks. Whether at home, in a bus, train or plane, or in their favourite coffee-shop, if there’s a Wi-Fi connection available, there’s a potential workspace in the making. But naturally enough, all this may then escape the control of the enterprise or at least partially so. For instance, how can companies then implement effective work area recovery for such nomadic workers in the event of an IT incident?
By 2025, we should expect to have experienced a “significant” cyberattack, according to a canvass of technology experts and researchers conducted by the Pew Research Internet Project and reported upon today.
To this group of experts, Pew posed the following question:
By 2025, will a major cyber attack have caused widespread harm to a nation’s security and capacity to defend itself and its people? (By “widespread harm,” we mean significant loss of life or property losses/damage/theft at the levels of tens of billions of dollars.)
Over 1,600 responses came in; respondents were not required to reveal their names.
A lot of people in the IT industry are pulling for the hybrid cloud. Enterprise executives are intrigued by the idea of low-cost, broadly federated data infrastructure distributed over large, geographic areas, while traditional data center vendors are trying to preserve their legacy product lines in the new cloud era.
But just because people want it, does that make it a good idea? If the idea is to capitalize on the benefits of both public and private cloud infrastructure, will hybrid solutions undermine that effort by watering down the advantages of pure-play approaches?
One thing is clear: Many enterprises see the hybrid cloud as the end-game of the virtual transition. A recent survey by Gigaom indicates that more than three-quarters of top decision-makers have adopted hybrid as a core component of their ongoing cloud strategies. However, it is becoming evident that this is more than a simple change in technology—it’s a top-to-bottom shift in the entire enterprise structure that will affect everything from data and infrastructure to business processes, governance and the ownership of digital assets.
On Oct. 28, Healthmap.org reported the latest figures on the Ebola outbreak: Spain 1 case; Guinea 1,553 cases and 926 deaths; Sierra Leone 3,896 cases and 1,281 deaths; Liberia 4,665 cases and 2,705 deaths. And for the U.S., 4 cases and one death. The website's Ebola timeline also provides projections on the number of cases and deaths, based on infection rate data from the World Health Organization, a list of the most recent articles about Ebola outbreaks, as well as relevant social media postings.
Healthmap is one example of how easy it is to find information on this rapidly growing epidemic -- and it also represents the way technology can play a major role in the effort to track and control the disease. For example, mobile phones are perhaps the most ubiquitous type of technology available in Africa, used by millions there. So it didn’t take long for researchers to identify the devices as a possible way to not just send people information about the disease, but also to track it.
And with 95.5 percent of the global population having mobile cell subscriptions, call-data records (CDRs) are one way epidemiologists can see where people have been and where they're headed based on past movements.
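Inferring movement from CDRs amounts to counting transitions between the cell towers a subscriber is seen at on successive days. A toy sketch with invented, anonymised records (real epidemiological pipelines work on aggregated, de-identified data at much larger scale):

```python
from collections import Counter

# Hypothetical, anonymised call-data records: (subscriber, cell_tower, day).
cdrs = [
    ("u1", "tower_A", 1), ("u1", "tower_A", 2), ("u1", "tower_B", 3),
    ("u2", "tower_A", 1), ("u2", "tower_C", 2), ("u2", "tower_C", 3),
]

def daily_movements(cdrs):
    """Count subscriber flows between towers seen on consecutive sightings -
    the basic aggregate epidemiologists use to estimate travel patterns."""
    last_seen = {}
    flows = Counter()
    for user, tower, day in sorted(cdrs, key=lambda r: (r[0], r[2])):
        if user in last_seen and last_seen[user] != tower:
            flows[(last_seen[user], tower)] += 1
        last_seen[user] = tower
    return flows

flows = daily_movements(cdrs)
# {("tower_A", "tower_B"): 1, ("tower_A", "tower_C"): 1}
```

Aggregated over millions of subscribers, flows like these let modellers estimate where an outbreak is likely to travel next based on where people have actually been going.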
NG9-1-1, Explained: 7 Important "Need to Knows"
Next Generation 9-1-1 (NG9-1-1) is a hot topic in the public safety and local government communities. But the specifics of this long-sought-after initiative can be complex and there are several parties that play important roles. Below is a list of critical elements and key players to help you make sure you’re up to date.
Glossary of Terms:
- Next Generation 9-1-1 (NG9-1-1)
- Public Safety Answering Point (PSAP)
- National Emergency Number Association (NENA)
- Analog-Based Infrastructure
- U.S. Department of Transportation Intelligent Transportation Systems Program (ITS)
- Systems Integrator
By Vikram Duvvoori, Chief Technologist and Corporate Vice President - Enterprise Transformation Services, HCL
IT leaders -- and the executive teams they report to -- have been bombarded with a virtual “shock and awe” campaign around Big Data. IDC estimates that the 1.8 zettabytes (1.8 trillion gigabytes) of information generated in 2011 will grow by a factor of nine over the next five years. Gartner has a similar take on the segment, predicting that the Big Data market, now valued at $5 billion in annual revenues, will explode to $53 billion by 2016.
The initial reaction, and rightfully so, is “Wow!” and “How in the world are we going to deal with all this?”
While considerable attention has been placed on the three Vs of Big Data — volume, velocity, and variety — the most important aspect has been on the back burner: the actual value to the business.
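As a quick sanity check on the IDC figures quoted above (assuming decimal prefixes, where a zettabyte is 10^21 bytes), the two stated quantities agree, and a ninefold increase implies roughly 16.2 zettabytes by 2016:

```python
# Sanity-check the quoted volume figures, assuming decimal (SI) prefixes.
GIGABYTE = 10**9                       # bytes
ZETTABYTE = 10**21                     # bytes

data_2011 = 18 * 10**20                # 1.8 zettabytes, in bytes
# "1.8 trillion gigabytes" describes the same quantity:
assert data_2011 == 1_800_000_000_000 * GIGABYTE

# "grow by a factor of nine over the next five years":
data_2016 = data_2011 * 9
print(data_2016 / ZETTABYTE)           # 16.2 zettabytes by 2016
```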
At the 2014 BCI World Conference and Exhibition, participants will have an opportunity to listen to a real case study of the integration of Enterprise Risk Management (ERM) and Business Continuity Management (BCM) as an independent function. This is an innovative, leading-edge role for the ERM and BCM function.
In my presentation, I will show how the traditional reporting structures and work functions of ERM and BCM in an organisation are usually separated from each other. The ERM and BCM functions are typically part of the executive management team, with the head of ERM and BCM reporting to executives such as the CEO or the CFO.
The US National Fire Protection Association (NFPA) has made two announcements regarding the current revision process for the 2016 edition of its business continuity standard, NFPA 1600.
First, the Public Comment closing date for online submissions is November 14th, 2014. For details on how to submit comments, please click here.
Second, the date for the Second Draft Meeting to review the updated standard will be March 24th-26th, 2015 at the Palmer House Hilton hotel in Chicago. For more details on this activity, please click here.
NFPA 1600, currently in its 2013 edition, has been recognized as the National Preparedness Standard by the 9/11 Commission. It is also the US national standard on emergency preparedness, with an important focus on business continuity. The 2013 Edition is also one of the three standards used in the voluntary Private Sector Preparedness (PS-Prep) program administered by the Department of Homeland Security.
A new survey from Lieberman Software Corporation has revealed that 78 percent of IT security professionals are confident that firewalls and antimalware tools are robust enough to combat today’s advanced persistent threats.
Lieberman Software says that these findings highlight the fact that while cybercrime is on the rise, many organizations are still dangerously relying on outdated perimeter security solutions to defend against the latest threats.
The survey, which was carried out at Black Hat USA in August 2014, also revealed that 22 percent of those surveyed do not think that tools like firewalls and antivirus are able to defend against APTs. However, given the surge in organizations suffering advanced targeted cyber attacks, this number should have been much higher.
When the topic of encryption comes up in conversation (and doesn’t it always?), skeptics are fond of interjecting self-satisfied statements along the lines of, “The question isn’t whether encryption is crackable, but when will it be cracked?” In the face of such smugness, I usually counter with the ego-deflating rejoinder, “Let me know when you’ve joined us in the cloud era.”
You see, when data is encrypted in the cloud, your keys remain within your control; thus only authorized users have access to protected data. Unauthorized users see only indecipherable ciphertext, which is exactly the point. The real question is how unauthorized users will attempt to access and exploit that data.
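A toy sketch of the hold-your-own-key model described above. The XOR keystream cipher here is for illustration only and is not secure; a real deployment would use a vetted cipher such as AES-GCM. What it illustrates is that encryption happens client-side and the key never leaves your control:

```python
import hashlib
import secrets

# NOT a real cipher -- a hash-based XOR keystream used only to illustrate
# client-side encryption. Use a vetted AEAD cipher (e.g. AES-GCM) in practice.

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

key = secrets.token_bytes(32)              # stays on premises, never uploaded
blob = encrypt(key, b"quarterly figures")  # only this blob goes to the cloud
assert decrypt(key, blob) == b"quarterly figures"
# Without the key, the cloud provider or an attacker sees only the blob.
```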
With today's security threats, the sheer mass of information and the vulnerabilities open to attack, information security is undeniably a challenge, but not an insurmountable one. The right information security takes planning and organisation. The advantages include prevention of loss and damage through information being stolen or compromised, as well as a more alert, capable workforce. So why does one recent survey show a downward trend in implementing information security procedures?
Leaders of business intelligence (BI) projects should push for a revamped data architecture that supports more integrated data, even if it means looking at a Big Data option, according to a recent InfoWorld column.
In “Why BI projects fail -- and how to succeed instead,” software consultant Andrew C. Oliver says it’s essential to be able to integrate large amounts of data. BI tools tend to be resource-hungry, he adds.
So, rather than viewing technologies such as Hadoop, data lakes, enterprise data hubs and data warehouses as “trendy,” you should view them as essential to BI success, argues Oliver.
“A successful BI project does not forget about either business integration (more later) or data integration,” he writes. “Your requirements should dictate what, how much, and how often (that is, how ‘real time’ you need it to be) data must be fed into your data warehousing technology.”
The proposition that human resources hold one of the golden keys to successful business continuity will be presented on day two of the BCI World Conference and Exhibition in the Listen Stream. David Evans and Lynne Donaldson of Corpress LLP will argue that the HR role in business continuity is often understated, possibly not understood and for many organisations undervalued.
Please share your thoughts with us on how important HR (Personnel) are to your BCM process: are they heavily engaged or just reactive when pushed and how much time do you spend working with them?
FXT Edge Filers and Lattus Enable High-Performance, Cost-Effective Access to Content
PITTSBURGH, Pa. – Avere Systems, a leading provider of enterprise storage for the hybrid cloud, and Quantum Corp. (NYSE: QTM), a leader in scale-out storage systems, today announced a joint storage solution designed to optimize workflows for the oil and gas industry. The combined approach provides an integrated network-attached storage (NAS) solution with cloud storage that extends data availability at a lower cost, enabling upstream workflows to keep strategic information close at hand, shortening project cycle time and improving exploration analysis. Avere Cloud NAS powered by FlashCloud™ combined with Quantum Lattus™ extended online storage delivers cost-effective cloud storage with the high-performance access required for oil and gas exploration.
Advantages of FXT Edge Filers and Lattus for Oil and Gas Environments
With more data than ever before being generated for oil exploration, traditional solutions relying on replication and RAID do not provide cost-effective global access to content, can place a heavy burden on network storage, and drive storage capacity demands beyond budget. Together, Avere FlashCloud on FXT Edge Series filers and Quantum Lattus extended online storage provide a comprehensive solution that delivers several key advantages:
- Extreme Scalability: Avere FXT filers deliver scalable NAS performance in a clustered configuration while FlashCloud provides access to Quantum Lattus, offering cost-effective performance and capacity that is simple to manage, capable of expanding to hundreds of petabytes without disruption.
- Flexible Onramp to the Cloud: Avere’s global namespace joins Lattus and legacy NAS into a single pool of storage so oil and gas users can store their data wherever it makes most sense and adopt Lattus storage at a managed pace. Avere’s FlashMove® transparently moves live, online data to Lattus without disruption while FlashMirror® replicates data to Lattus for disaster recovery.
- Durable, Self-Healing Protection: Lattus delivers built-in data resiliency with self-healing protection, guarding data against component failure and even site disaster.
- Lower Cost of Ownership: The combination of FXT Edge filers with Lattus’ cost-effective object storage leverages efficient data spread algorithms that require less storage than RAID to protect data, enabling 70% or more savings in total cost of ownership compared to traditional NAS implementations.
- High-Speed Access: Avere Cloud NAS - powered by FlashCloud running on FXT Edge filers - eliminates the latency of access to content pools that reside in cloud storage.
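To see why data-spread (erasure-coding) schemes need less raw capacity than replication, compare the raw storage required for a given amount of usable data. The coding parameters below are hypothetical, not Quantum's published Lattus configuration:

```python
# Raw-capacity overhead: full replication vs. a data-spread (erasure) code.
# The parameters here are illustrative assumptions, not vendor figures.

def replication_raw(usable_tb: float, copies: int) -> float:
    """Raw TB needed to keep `copies` full copies of the data."""
    return usable_tb * copies

def erasure_raw(usable_tb: float, data_frags: int, parity_frags: int) -> float:
    """Raw TB needed when data is split into data_frags fragments plus
    parity_frags redundant fragments (any parity_frags losses survivable)."""
    return usable_tb * (data_frags + parity_frags) / data_frags

usable = 1000.0                          # TB of user data

three_copy = replication_raw(usable, 3)  # 3000.0 TB raw, survives 2 lost copies
coded = erasure_raw(usable, 10, 6)       # 1600.0 TB raw, survives 6 lost fragments

print(three_copy, coded)
```

With these assumed parameters, the coded layout needs roughly half the raw capacity of three-way replication while tolerating more failures; the 70% TCO savings claimed above would also fold in power, space and management costs beyond raw capacity.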
Simon Robinson, Senior Analyst, The 451 Group
“Avere’s added support for object-level storage such as Lattus makes it an ideal solution for addressing ever-increasing storage demands in an upstream workflow environment, and Quantum’s focus on infinitely scalable object storage delivers the perfect complement to Avere’s FXT Edge filers. The combined solution shows great potential for bringing a more economical approach to meeting the storage access and performance needs of the oil and gas industry.”
Mike McMahon, Vice President, Business Development, Avere Systems
“This joint offering provides a blended cost model that brings high-performance NAS to a very large pool of economical storage. Compared to conventional approaches that rely strictly on primary storage, we’re offering a solution that combines a high-performance tier with a cost-effective petascale storage tier. Avere FXT Edge filers provide fast access to content where the user needs it, when they need it.”
Geoff Stedman, Senior Vice President, StorNext Solutions, Quantum
“The combination with Avere extends online storage by delivering fast access to seismic data via a cost-effective private cloud solution with the extreme scalability and performance needed in the oil and gas market. With Quantum’s long history of delivering solutions for oil and gas, we know how critical it is for companies to store their valuable seismic content without limits and avoid contending with technology refreshes, data migration cycles and the very high costs associated with relying on traditional RAID storage.”
The combined Avere-Quantum solution is currently available.
Founded in January 2008, Avere Systems is radically changing the economics of data storage. Avere’s solutions give companies – for the first time – the ability to put an end to the rising cost and complexity of data storage by allowing customers the freedom to store files anywhere in the cloud or on premises without sacrificing the performance, availability or security of their data.
Based in Pittsburgh, Avere is led by veterans and thought leaders in the data storage industry and is backed by investors Lightspeed Venture Partners, Menlo Ventures, Norwest Venture Partners, Tenaya Capital, and Western Digital Capital.
Quantum is a leading expert in scale-out storage, archive and data protection, providing solutions for capturing, sharing and preserving digital assets over the entire data lifecycle. From small businesses to major enterprises, more than 100,000 customers have trusted Quantum to address their most demanding data workflow challenges. With Quantum, customers can Be Certain™ they have the end-to-end storage foundation to maximize the value of their data by making it accessible whenever and wherever needed, retaining it indefinitely and reducing total cost and complexity. See how at www.quantum.com/customerstories.
East Timbalier Island is under threat of disappearing due to a combination of hurricane and coastal storm events, subsidence, and other factors
BROOMFIELD, Colo. – MWH Global, the premier solutions provider focused on water and natural resources, has been awarded a contract by the Louisiana Coastal Protection and Restoration Authority (CPRA) to provide engineering services for the restoration of the East Timbalier Barrier Island off the coast of Louisiana. Funding for the East Timbalier engineering and design services contract is being provided to CPRA by the National Fish and Wildlife Foundation.
East Timbalier Island is part of the Louisiana barrier island chain that separates Terrebonne and Timbalier bays from the Gulf of Mexico. East Timbalier Island has experienced significant loss of land due to multiple hurricanes, subsidence, and reduced sediment loads from the Mississippi River. The Island currently consists of two severely degraded segments, and is anticipated to disappear unless restoration activities are undertaken to replenish sediment that has been lost. The coastal barrier islands in the Gulf of Mexico off the Louisiana coast provide critical beach, dune and marsh habitat. They also serve to protect fragile interior marshes and infrastructure and provide quiescent bay habitats preferred by many fish and invertebrate species by lowering wave energy and storm surges originating from the Gulf. The Louisiana CPRA has identified the restoration of a number of islands similar to East Timbalier Island as an important part of the state's 2012 Coastal Master Plan and the Fiscal 2013 Annual Plan for Ecosystem Restoration and Hurricane Protection in Coastal Louisiana.
The MWH project scope will involve the coastal engineering and design to re-establish the historic island footprint with beach, dune, and marsh habitat creation, reconnecting the two segments. Project scope activities will be accomplished with close support from MWH’s key project team members to include Coastal Engineering Consultants, Ocean Surveys, Inc., GeoEngineers, Fugro/John Chance Land Surveyors, R.C. Goodwin & Associates, and Coastal Technology Corporation. Significant effort will be applied toward identifying suitable nearshore and offshore sediment sources to build the desired island habitats through coastal geophysical and geotechnical investigations and engineering analysis. Throughout the contract term, extensive coordination will be undertaken by MWH with Federal and State agencies for permitting and with oil and gas operators that have significant infrastructure in the region.
“This project represents a unique opportunity for the Louisiana coastal region to become better prepared for the future and to combat ongoing issues jeopardizing the long-term health of the coastal ecosystem. MWH is extremely proud to partner with CPRA and the associated agencies on such an important restoration project,” commented Marshall Davert, president for government and infrastructure in the Americas and Asia Pacific for MWH.
Construction and restoration of the improved East Timbalier Island is expected to begin in 2017.
MWH Global is the premier solutions provider focused on water and natural resources for built infrastructure and the environment. Offering a full range of innovative, award-winning services from initial planning through construction and asset management, we partner with our clients in multiple industries. Our nearly 8,000 employees in 35 countries spanning six continents are dedicated to fulfilling our purpose of Building a Better World, which reflects our commitment to sustainable development. MWH is a private, employee-owned firm with a rich legacy beginning in 1820. For more information, visit our website at www.mwhglobal.com or connect with us on Twitter and Facebook.
Government-backed online service aims to help 20,000 UK SMEs to boost their exporting capability by 2016
HSBC has today been announced as the first corporate sponsor of Open to Export, an online export community backed by Government and business, which aims by 2016 to help 20,000 businesses by offering free access to its forum, webinars and industry contacts.
The service has been developed in direct response to the need identified by UK SMEs for trusted, practical advice available all in one place on the full range of issues they face when identifying and moving into new markets.
The Open to Export website provides an open, collaborative and responsive platform where SMEs can connect, learn and talk with experts and peers to improve their effectiveness when doing business abroad.
Through the website companies can:
- Get practical insight and advice from successful exporters and subject matter experts via webinars, country and topic guides and case studies
- Benefit from bespoke answers to export questions from experts and peers in the forum and live Q&As
- Connect with contacts from support organisations, events, opportunities and potential partners
Regular contributors span government departments, trade associations, private-sector trade specialists and successful exporters, and include UKTI, HMRC, Defra, the Institute of Export, the British and Overseas Chambers of Commerce and DHL.
Since its launch in late 2012, the website has grown to attract 30,000 unique visits a month and a community of nearly 6,000 registered users.
As Principal Partner, HSBC is supporting Open to Export alongside founders UK Trade & Investment, the Federation of Small Businesses, the Institute of Export, and marketing solutions and websites provider Yell as part of its ongoing commitment to helping businesses of all sizes achieve their growth ambitions.
The bank will contribute content to the website and have a presence at key trade and sector-specific events throughout the year across the UK, alongside Open to Export, in order to engage customers and prospects face to face and connect them to practical support.
Ian Stuart, Head of Commercial Banking UK and Co-Head of Commercial Banking Europe at HSBC, said: “HSBC is determined to help Britain's ambitious businesses grow and expand into new markets overseas. As the leading international trade bank, we are here to support businesses of all sizes, and at whatever stage in their international growth. That's why we are bringing our knowledge, expertise and global connectivity to the partnership with Open to Export and helping to build a service that offers practical support and advice to those looking to export."
Successful technology entrepreneur and Open to Export Chairman, Julian Hucker, said: “Open to Export has proved there is a demand for a peer to peer community focussed on exporting. HSBC’s involvement now brings a wealth of international trade expertise and connections that will greatly help us to deliver our ambitious plans to develop the service and grow that community to help 20,000 companies by 2016. I wish this had been around when I was building my last business.”
The announcement of HSBC’s partnership with Open to Export comes ahead of UKTI’s Export Week, where both will be exhibiting as part of the Explore Export roadshow. Open to Export will also be conducting the first in a series of surveys aiming to identify the driving factors behind decisions being made on international growth in the coming year, which will in turn help shape the website’s content and features to ensure a relevant and responsive service for its users.
Companies can browse the site and register for free at www.opentoexport.com.
Delivers Comprehensive Mac Management with SCCM on Par with Native PC Management
Latest version enables even better management of Mac computers in corporate environments, with improved security capabilities that allow for control over Mac computers
LONDON – Parallels® has launched an update to Parallels Mac Management (parallels.com/uk/managemacs), the product that extends Microsoft® System Centre Configuration Manager (SCCM) functionality to Mac® computers. Parallels Mac Management enables IT departments to discover, enrol and manage Mac machines just as they do existing PCs – all through a single pane of glass.
Taking Mac management further than native SCCM, this latest version offers new software distribution capabilities that benefit both IT administrators and end users, including the ability for Mac users to install approved software, features for planning hardware refresh cycles accurately, improvements that empower IT administrators to make their Mac computers more secure using advanced encryption techniques, and a streamlined process for enrolling Mac computers in SCCM on large corporate networks.
Parallels tackles the perennial problem of Mac management head-on with Parallels Mac Management 3.1, enabling everyone from IT administrators to system architects to CIOs to leverage their current SCCM infrastructure and extend it to Mac without unnecessary costs. Users are empowered to:
- Manage and control Mac computers by leveraging their existing SCCM infrastructure, resources and talent
- Gain full visibility into the Mac computers coming onto their networks
- Take control and take action on those machines as they would on PCs – all while working in the same familiar environment
- Easily leverage Mac technologies, such as the configuration manager, to secure Mac computers
- Deploy and manage Parallels Desktop Enterprise Edition, as well as other virtual machines – key for Mac users who need access to business-critical Windows apps.
New features in this release include:
- Management of Macs running Mac OS X 10.10 Yosemite via Microsoft SCCM
- Application Portal for Mac, which allows Mac users to install approved software even without administrative rights
- Mac warranty (AppleCare) status reporting, which lets IT plan hardware refresh cycles better
- Support for unique FileVault 2 personal recovery keys and the ability to escrow these keys, which increases security by encrypting Mac computers on the network
- Support for PKI and HTTPS, which enables support for an SCCM infrastructure operating in secure HTTPS mode
- Network discovery UI improvements, which enable the use of SCCM site boundary information in network discovery configuration settings—this means huge time savings for IT administrators who need to scan large corporate networks.
Core capabilities include:
- Scan the corporate network automatically to discover Mac computers, then auto-enrol them in SCCM
- Gather hardware and software inventory of all Mac machines on the network
- Leverage native Microsoft SCCM reports to view information about Mac computers
- Enforce compliance via extended SCCM configuration items: Mac OS X configuration profiles and shell scripts
- Central management and installation of software packages and patches
- Support for deployment of a wide range of software packages: .dmg, .pkg, .iso, .app, scripts, and stand-alone files
- Support for silent deployment and deployment with user interaction
- Seamless integration of Mac OS X image deployment into SCCM workflow
- Deployment of preconfigured, company-standard OS X installation on new Mac computers
“Features and improvements made in this latest version of Parallels Mac Management for SCCM were added in direct response to requests from customers and prospects, and we are very excited to bring a product to market that will make IT administrators’ lives easier,” said Jack Zubarev, Parallels President. “We know that managing Macs in the enterprise can feel like a lawless Wild West, and we are working to change that by offering products that make configuration, deployment and overall management of Macs in business environments more efficient and secure.”
A recent winner of Microsoft’s Best of TechEd 2014 award in the Systems Management category, Parallels Mac Management 3.1 includes the new features described above.
Additional features include:
- Asset Inventory
- Configuration Management
- Software and Patch Deployment
- Mac OS X® Image Deployment via SCCM
Parallels offers resources for IT departments as they go through the proof of concept and implementation process for Parallels Mac Management. These include a hosted test lab program that lets IT professionals test Parallels Mac Management before installing it, and a JumpStart Program that includes Parallels Mac Management for one year on up to 100 Mac computers, as well as 10 hours of assisted installation and configuration support.
Parallels will be demoing Parallels Mac Management at TechEd Europe 2014 in Barcelona, October 28–31, 2014. Please stop by our booth (#99) for a demo and the chance to win a JumpStart Program for Parallels Mac Management (£3,000 value).
Bringing Mac into Business Environments
Parallels Mac Management is part of a larger suite of products for businesses of all sizes that work in cross-platform environments. Other offerings include Parallels® Access™ for Business (parallels.com/access-business), a remote access application for iPad®, iPhone® and Android devices that lets people run PC and Mac applications on their devices with touch gestures – just as if the apps were native to the device. Parallels Desktop® for Mac Enterprise Edition (parallels.com/enterprise) is the best way to run Windows apps on Mac, giving employees easy access to all the tools they need. Using Parallels Desktop Enterprise Edition, IT managers can support Windows applications for Mac users with a configurable, policy-compliant solution that fits seamlessly into their existing business processes.
Availability and Pricing
Parallels Mac Management is available immediately and starts at £30 annually per Mac. Parallels Access for Business starts at £49.99 per year for five computers. Parallels Desktop for Mac Enterprise Edition starts at £66 per year per Mac.
Parallels is a global leader in hosting and cloud services enablement and cross-platform solutions. Parallels began operations in 2000 and is a fast-growing company with more than 900 employees in North America, Europe, Australia and Asia. Visit parallels.com for more information.
Stay connected with Parallels and our online community: Follow us on LinkedIn (linkedin.com/company/parallels), like us on Facebook (facebook.com/parallelsdesktop), follow us on Twitter (twitter.com/parallelsmac) and visit our Apple in the Enterprise blog (blogs.parallels.com/enterprise-blog).
If you are a senior data analytics professional working in the health care sector, what does the Ebola situation mean to you? To one such professional, it means the potential for an all-hands-on-deck response that would break down data-sharing barriers and shift the data-analytics focus from “blocking and tackling” to true innovation.
That professional is Mike Berger, vice president of enterprise analytics at Geisinger Health System, a hospital system in Danville, Pa. I had the opportunity to speak with Berger last week at the Teradata 2014 Partners conference in Nashville, and I brought up the Ebola question almost as an afterthought. If Ebola became a major problem in the United States, I asked him, how might that affect his life? How might data analytics be tapped to deal with the problem?
“The clinical leadership would push us to break down the barriers that keep us from sharing data from one provider to another, which Obamacare was trying to do, but at a very slow pace,” Berger said. “I think the world would turn on its side, and we would be asked to instantly try to interconnect our data storage with other groups’ data storage, and someplace would become the place where the mining would happen, and that would probably be us, in our geographic region. It would be a horrific experience, but the value we would get from an analytics perspective in breaking down these barriers, really would be tremendous.”
When a compliance crisis strikes your industry, it shines a spotlight on how your own company is managing its compliance risk. Newspaper reports on high-profile cases of bribery, corruption, conflicts of interest or misconduct can prompt calls from your Audit Committee Chair and other key stakeholders who will be asking anxious questions. Even if it is a competitor facing these challenges, it falls on the Chief Compliance Officer to quell concerns in the organization. Among the likely queries:
- “Could the legal and public nightmares felt by this other company happen to us?”
- “Are we legally exposed by similar unethical practices within our own company?”
- “How can we be sure we’re not?”
The CCO must have programs in place and be prepared to provide easy visibility into the most critical risk areas. This means delivering essential data and communicating a detailed picture of the risk landscape to concerned stakeholders without causing misunderstanding or information overload. It means giving Board members accurate reports and fostering within the Board a critical understanding of risk and compliance. It means giving Board members the knowledge and guidance they require to provide the necessary support and resources.
The Board of The Committee of Sponsoring Organizations of the Treadway Commission (COSO) has announced a project to update the 2004 Enterprise Risk Management–Integrated Framework.
The Framework is widely used by management to enhance an organization’s ability to manage uncertainty, consider how much risk to accept, and improve understanding of opportunities as it strives to increase and preserve stakeholder value.
The new project is intended to enhance the Framework’s content and relevance in an increasingly complex business environment so that organizations worldwide can attain better value from their enterprise risk management programs.
The update will refine concepts developed in the original Framework to reflect the evolution of risk management thinking and practices, as well as changing stakeholder expectations. The initiative will also develop tools to assist management in reporting risk information and in reviewing and assessing the application of enterprise risk management.
Recent research by Accenture Analytics shows that nine out of 10 CXOs are happy with Big Data’s business outcome, with leaders at large companies being most satisfied.
Enterprises with annual revenues of over $10 billion said Big Data was “extremely important” and reported better results than other organizations. There are several likely reasons for this, writes Accenture Analytics Senior Managing Director Narendra Mulani. Large companies are more likely to have:
- Greater financial and talent resources to devote to Big Data.
- A better understanding of the value and scope of Big Data.
- A tighter focus on the practical applications and business outcomes.
- A deeper appreciation for Big Data’s disruptive power.
EATONTOWN, N.J. – Since Hurricane Sandy made landfall Oct. 29, 2012, FEMA, in partnership with the federal family and state and local governments, has been on the scene helping individuals, government entities and eligible non-profits as New Jersey recovers from the storm’s devastation.
FEMA has funded more than 5,185 Public Assistance projects including repairing and restoring hospitals, schools, waterways, parks, beaches, marinas, water treatment plants and public buildings. A roster of services has been restored, including utilities critical to everyday life. Billions of federal dollars have been expended during the past two years. The numbers below tell the story. In the two years since Hurricane Sandy devastated New Jersey:
- $6.67 billion has been provided to the state of New Jersey for Hurricane Sandy Recovery.
- $422.9 million has been distributed to help survivors get back on their feet via temporary housing assistance, disaster unemployment and other needs assistance.
- $3.5 billion has been paid to policyholders for flood claims through FEMA's National Flood Insurance Program.
- $1.5 billion in Public Assistance funds has been obligated to communities and certain non-profit organizations for debris removal, emergency work and permanent work.
- $279.5 million in grants has been provided for projects to protect damaged facilities against future disasters.
- $123.9 million in funding for property acquisitions, elevation and planning updates has been paid to New Jersey communities through the Hazard Mitigation Grant Program.
- $847.7 million has been approved by the Small Business Administration for SBA disaster loans to 10,726 individuals and 1,718 small businesses.
To learn more about FEMA Public Assistance in New Jersey visit: fema.gov/public-assistance-local-state-tribal-and-non-profit and http://www.state.nj.us/njoem/plan/public-assist.html. For more information, visit http://www.fema.gov/sandy-recovery-office or the New Jersey Sandy Recovery website at http://www.fema.gov/new-jersey-sandy-recovery-0
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Everybody wants to see a greener data center. Environmentalists want lower carbon emissions, utilities want less strain on their infrastructure and the enterprise wants a lower energy bill.
The trouble starts when the conversation shifts from developing a “greener” data center to one that is fully and finally “green.” Everyone has a different idea of what green is, and while it is nice to have a goal, there is still a danger of tasking the IT industry with fulfilling an unreachable ideal.
This could become particularly troublesome now that many of the energy efficiency initiatives that have been launched so far are starting to produce diminishing returns, says Enterprise Tech senior editor George Leopold. And this is coming at a time when mobile architectures, which require a lot more energy than wired ones, are coming to dominate the data ecosystem. So while individual data centers may be drawing less energy than, say, five years ago, overall consumption across the industry will only increase as more resources are brought online to deal with the Internet of Things and other initiatives.
Universities and colleges throughout the U.S. have been adding emergency management degrees to their education offerings for the last decade. But in what could be the next evolution of emergency management-related offerings, at least one school has launched a general education course on the topic, thereby extending the subject to students in a variety of majors and future careers.
North Dakota State University began offering a general education course focused on emergency management during the fall 2012 semester after making a series of changes to an existing class and seeing an opportunity to engage more students on the subject.
The course was originally focused on technical government policy and doctrine and intended as an introduction for students majoring in emergency management. The course, EMGT 201: Introduction to Emergency Management, was expected to lead students toward government emergency management positions, said Jessica Jensen, an assistant professor in the university’s Department of Emergency Management. “Over time our thinking about who we were educating and the career fields that they would be going into with our degree evolved and our thinking about the potential of this course also did,” she said.
During my interaction with senior management as a business continuity/information security consultant, especially amongst IT centric organisations, I am invariably asked a question: "We come across too many ISO standards which have common themes. In your opinion, which are some of the Standards that come very close especially from an implementation perspective?"
As you can see, this is a very loaded question from senior management, who are typically fed up with too many rules, regulations and standards trying to govern their lives. Also, whilst they want to adhere to all applicable regulations and standards, they want some optimisation of their implementation costs.
Computers today use Basic Input/Output System (BIOS) firmware to initialize the hardware during the boot process and then turn over control to the operating system. Therefore, any malware that affects the BIOS is a serious threat to the entire computer system.
To protect computers from malicious software, IT organizations must also attempt to secure the BIOS firmware.
The National Institute of Standards and Technology (NIST) has created a free document that details computer BIOS security. You can obtain a free copy in our IT Downloads area under the title, “BIOS Protection Guidelines for Servers.”
In the PDF, author Andrew Regenscheid of the Computer Security Division Information Technology Laboratory at NIST breaks the topic into several sections including:
- BIOS Security Principles
- Security Guidelines by Update Mechanism
- Guidelines for Service Processors
The Dollar Shave Club had a bottleneck problem, and his name was Juan. It’s not so much that Juan was the problem, but somehow, any web performance report — no matter how unusual, no matter how frequent — waited on Juan, according to a recent Cite World article. So if Juan was busy, the business user waited…and so did new site features.
Almost every company has a Juan. Often, Juan may be the most efficient, effective developer on staff, but it doesn’t matter. Inevitably, as businesses become more data-driven, there are too many tasks and not enough Juans.
The lesson here: If one developer is holding up your reports, maybe it’s time you looked at a simpler analytics solution. That’s what Todd Lehr, senior vice president of engineering at Dollar Shave Club, learned.
WARREN, MICH. – Winter is on its way, and the Michigan State Police, Emergency Management and Homeland Security Division and FEMA remind homeowners to make sure their heating systems and water heaters are in good working condition, especially those damaged by the August flooding.
“Michigan homeowners and their families may be at risk with flood-damaged furnaces, water heaters and electrical appliances,” warns Michigan State Police Capt. Chris A. Kelenske, State Coordinating Officer and Deputy State Director of Emergency Management and Homeland Security. “If the flood waters reached your heating system or water heater, have them checked for operating safety by experienced repair personnel.”
Dolph A. Diemont, the disaster’s lead federal official, reminded Michigan homeowners that FEMA grants may be available to help repair damaged furnaces and water heaters and replace those destroyed by flood waters.
“Michigan residents with flood damage to their furnaces and water heaters must register with FEMA by the Nov. 24 deadline to be eligible for grants,” Diemont added.
“If flood damage is found after the November date and the homeowner has failed to register, no FEMA assistance will be available.”
Homeowners who receive a FEMA grant for repairs and who later discover their furnace needs replacing must use the FEMA appeal process for additional grant funds. The homeowner has 60 days to appeal and must submit an estimate for replacement of the furnace on contractor company letterhead.
Disaster survivors may register online at disasterassistance.gov or by smart phone or tablet at m.fema.gov. Applicants may call 800-621-3362 or TTY users 800-462-7585. The toll-free telephone numbers are available 7 a.m. to 11 p.m. EDT seven days a week until further notice.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
Ever since fusion centers were created in the aftermath of the 9/11 terrorist attacks to improve information-sharing between governments, they've often been criticized for their ineffectiveness. But if recent state investment in the centers is any indication, faith in the work they do may be on the rise.
Rick “Ozzie” Nelson, a senior associate at the Center for Strategic and International Studies, told Government Technology that as public-safety grant funding from the federal government has slowed, states are dedicating more of their own money to finance fusion centers. He believes that is a solid indicator that the centers “have found their sweet spot” when it comes to intelligence-gathering and communications activities.
“If a governor is looking for money to free up, you can close your fusion center and take that money and use it for some other public-safety endeavor,” Nelson said. “But that’s not what we’re seeing in the data. What we’re seeing is people are investing in the fusion centers.”
According to a new study from Protiviti, engagement by a company’s board of directors is a critical factor in best managing information security risks.
Overall, engagement and understanding of IT risks at the board level has increased, yet one in five boards still has a low level of comprehension. As the report states, this suggests “their organizations are not doing enough to manage these critical risks or engage the board of directors in a regular and meaningful way.” Further, while large companies do exhibit stronger board-level engagement, it is not a dramatic distinction.
NEW YORK — Since Hurricane Sandy made landfall Oct. 29, 2012, FEMA, in partnership with the federal family and state and local governments, has been on the scene helping individuals, government entities and eligible non-profits as New York recovers from the storm’s devastation.
FEMA has funded more than 3,500 Public Assistance projects including repairing and restoring hospitals, schools, transit venues, waterways, parks, beaches, marinas, water treatment plants and public buildings. A roster of services has been restored, including utilities critical to everyday life. Billions of federal dollars have been expended during the past two years. The numbers below tell the story.
It has been two years since Hurricane Sandy struck New York. FEMA funding during that time includes:
- The total FEMA has already provided to New York.
- The dollars given to help survivors get back on their feet with temporary housing assistance, disaster unemployment and other needs assistance.
- The amount paid by FEMA to 53,288 policyholders for flood claims through its National Flood Insurance Program.
- The total Public Assistance obligated to communities and certain non-profit organizations to help recover from Hurricane Sandy, including funds added to permanent repair projects to protect against future damage.
- Funding through the Hazard Mitigation Grant Program for projects throughout the state to protect against future damage.
- Small Business Administration loans for homeowners and businesses affected by the storm.
To learn more about FEMA Public Assistance in New York, visit: fema.gov/public-assistance-local-state-tribal-and-non-profit and dhses.ny.gov/oem/recovery.
For more information, visit http://www.fema.gov/sandy-recovery-office
The two health-care workers from Dallas who were infected with the Ebola virus are now in stable condition, and the NBC reporter who was quarantined has been released from the hospital. That was among the news from the Centers for Disease Control and Prevention (CDC), which tried to ease concerns and provided informational updates on the Ebola outbreaks during a conference call Oct. 23.
The two Dallas nurses were infected when attending to an Ebola-infected patient who eventually died. NBC reporter Nancy Snyderman had been in quarantine since returning from assignment in Liberia.
The 48 people who had been in contact with the now deceased Eric Duncan have cleared the 21-day incubation period, according to Kashef Ijaz, the principal deputy director for the Division of Global Health Protection in the CDC’s Center for Global Health.
With the winter season approaching, the Federal Emergency Management Agency (FEMA) reminds individuals to be prepared for winter storms and extreme cold. While the danger of severe winter weather varies across the country, everyone can benefit by taking a few easy steps now to prepare for emergencies. A first step, regardless of where you live, is to visit the Ready.gov website to find preparedness ideas you can use all year long.
“In our part of the country we make the most of winter,” said FEMA Region VIII Acting Administrator Tony Russell. “However, severe storms and blizzards can create major problems and residents need to take winter weather seriously by taking appropriate steps to prepare.”
Severe winter weather can include snow or subfreezing temperatures, strong winds and ice or heavy rain storms. An emergency supply kit both at home and in the car will help prepare you and your family for winter power outages and icy or impassable roads.
Both kits should include a battery-powered or hand-crank radio, extra flashlights and batteries. In addition, your home kit should include a three-day supply of food and water. Thoroughly check and update your family’s emergency supply kit and add the following supplies in preparation for winter weather:
- Rock salt to melt ice on walkways,
- Sand to improve traction on driveways and sidewalks,
- Snow shovels and other snow removal equipment,
- Adequate clothing and blankets to help keep you warm.
When traveling in winter weather conditions, be sure to contact someone both before your departure and when you safely arrive. Always travel with a cell phone and ensure the battery is charged so you can contact someone in the case of a road emergency. If dangerous conditions are forecast, it’s often best to delay travel plans.
Finally, make sure to familiarize yourself with the terms that are used to identify a winter storm hazard and discuss with your family what to do if a winter storm watch or warning is issued. Terms used to describe a winter storm hazard include the following:
- Freezing Rain creates a coating of ice on roads and walkways.
- Sleet is rain that turns to ice pellets before reaching the ground. Sleet also causes roads to freeze and become slippery.
- Winter Weather Advisory means cold, ice and snow are expected.
- Winter Storm Watch means severe weather such as heavy snow or ice is possible in the next day or two.
- Winter Storm Warning means severe winter conditions have begun or will begin very soon.
For more information and winter preparedness tips, please visit: www.ready.gov/winter-weather or www.nws.noaa.gov/om/winter/ or www.fema.gov/about-region-viii/winter-weather-readiness.
The 2014 BCI Global Business Continuity Awards will be presented on Nov. 5, 2014, at London’s Science Museum as part of BCI World.
The BCI has published the list of individuals and organizations that have been shortlisted for an award. These are:
Business Continuity Consultant of the Year
- Paul Trebilcock MBCI, Director, JBT Global
- Thomas Keegan MBCI, Middle East Enterprise Resilience Leader, PwC
- Bill Crichton FBCI, Managing Director and Principal Consultant, Crichton Continuity Consulting Ltd
- Harvey Betan MBCI, Principal, H betan Inc
- Ahmed Riad Ali MBCI, Manager, Ventures Middle East
- Peter Frielinghaus MBCI, Senior BCM Advisor, ContinuitySA
- Mohammed Chughtai MBCI, Managing Director of Business Continuity, RecoveryWorks Consulting
Business Continuity Manager of the Year
- Werner Verlinden FBCI, Vice President Business Continuity Management, Reed Elsevier
- John Zeppos FBCI, Group Business Continuity Management Director, OTE Group of Companies
- Nisar Ahmed Khan MBCI, Business Continuity Management Leader, Kuwait Finance House
- Abdulrahman Alonaizan MBCI, Head of Business Continuity Management, Arab National Bank (ANB)
- Sylvain Prefumo MBCI, Head of Business, State Bank of Mauritius Ltd
- Dave Morgan MBCI, Senior Business Continuity Program Manager, Delta Dental
Public Sector Business Continuity Manager of the Year
- Brian Gray MBCI, Chief – Business Continuity Management, United Nations
- James McAlister MBCI, Business Continuity Manager, Merseyside Police
- Ian Goldfinch MBCI, Manager, ICT Continuity Planning, SA Health
- Dr Clifford Ferguson AMBCI, Government Pensions Administration Agency
Most Effective Recovery of the Year
- Bank of New Zealand
- EDP Distribucao
- Telus Communications
- Barclays Bank of Kenya
- Commercial International Bank (S.A.E) - Egypt
- Telekom Deutschland GmbH
BCM Newcomer of the Year
- Luke Bird MBCI, Business Continuity Executive, Atos
- Mohammad Farhan Khan AMBCI, Senior BCM Consultant, Protiviti Middle East
- Leanne Metz AMBCI, Associate Director, Enterprise Program Management Office, Mead Johnson Nutrition
- Yasmine Elhamouly AMBCI, Business Continuity Manager, PwC
- Mark Dossetor AMBCI, Manager Business Continuity, Department of Transport, Planning and Local Infrastructure (DTPLI)
Business Continuity Team of the Year
- Franklin Templeton Investments
- Marks & Spencer
- Commercial International Bank (S.A.E) - Egypt
- Barclays Bank of Kenya
- ATO Business Continuity Management Team
Business Continuity Provider of the Year (BCM Service)
- Continuity Shop
- Plan B Disaster Recovery
- Avalution Consulting
- Phoenix Quickstart
- Linus Information Security Solutions
- Hewlett-Packard Australia - Continuity Services
- Sungard Availability Services
Business Continuity Provider of the Year (BCM Product)
- Sungard Availability Services
- ResilienceONE® BCM Software
- Linus Information Security Solutions
Business Continuity Innovation of the Year (Product/Service)
- PAN Software Pty. Ltd.
- Pinbellcom Limited
- Linus Revive Business Continuity Management System
Industry Personality of the Year
- Peter Brouggy
- Chittaranjan Kajwadkar MBCI
- Frank Perlmutter FBCI
- Braam Pretorius
- Ahmed Riad Ali MBCI
- Andy Tomkinson MBCI
- John Zeppos FBCI
Aon Global Risk Consulting, in collaboration with the Wharton School of the University of Pennsylvania, has released its Aon Risk Maturity Index Insight Report, October 2014.
This year’s report indicates six main findings:
1. Confirmation of past analysis on the inverse relationship between a higher Risk Maturity Rating and lower stock price volatility, and a direct relationship between a higher Risk Maturity Rating and superior operational financial performance.
2. Confirmation of past analysis on the relationship between a higher Risk Maturity Rating and the relative resilience of an organization’s stock price in the immediate aftermath of significant risk events.
3. Identification that the 2013/2014 bull equity market environment may have an equalizing effect on an organization’s stock price and create a false sense of security around the need to invest in a robust, holistic risk management approach.
4. Introduction of new findings that evidence a correlation between board risk oversight practices and risk maturity.
5. Groundbreaking new research showing a direct relationship between risk-based forecasting and planning and firm volatility and earnings predictability.
6. Introduction of cross-over analysis to Aon’s Global Risk Management Survey that indicates while organizations appear to identify similar opportunities and risks an organization’s level of planning, preparedness and response to these risks is distinctly different.
The report was developed as a means of driving marketplace insight on the relationship between an organization’s risk maturity and factors that drive organizational performance. This edition of the report confirmed findings from previous analyses, which found that more mature risk management practices directly correlate to stronger financial results and organizational and stock price resiliency in response to significant risk events.
The Army National Guard's first cyber protection team received its new shoulder sleeve insignia during a ceremony conducted by US Army Cyber Command/Second Army.
Lt. Gen. Edward C. Cardon, commanding general, US Army Cyber Command, cited the ceremony as a major milestone for Army cyberspace operations, Guard and Reserve forces and for the Army.
"It is another indication of the tremendous momentum that the Army is building to organize, train and equip its cyberspace operations forces," Cardon said. "Army Cyber Command is taking a Total Force approach to building and employing the Army's cyber force."
The new cyber protection team is the first of almost a dozen similar Army National Guard/active duty cyber protection teams, according to Cardon.
Cardon cited the experience that Army Guard soldiers bring with them from both the military and civilian sectors as being beneficial to the mission. "They bring a wide range of experience, not only from serving in the Army National Guard, but also from working in industry, state government or other government agencies," he said. The teams will be responsible for conducting defensive cyberspace operations, readiness inspections and vulnerability assessments as well as a variety of other cyber roles and missions.
Ed. Note-today we have a guest post from noted ethics and compliance expert, as well as steel guitar player, Chris Bauer.
Okay, you know that you need to have effective compliance training, but do you really know what will actually make it effective? The reality is that far too many compliance training programs fail on multiple counts. With compliance as critical as it is, that is unacceptable. Thankfully, there are a few areas which, if attended to well, can correct many of the most frequently seen problems with the development and execution of these programs.
Here are five of the areas I see getting missed time after time in compliance training programs.
Do you actually have a solid, working definition of what compliance is? I see ethics, compliance, and accountability as being ‘cross-defined’ all the time. Do they inter-relate? Absolutely and it’s even a great idea to inter-relate them in your training. However, until you are clear about what you mean by all three of those terms, your training will leave employees confused and confusion is never good for compliance training…
Something was bound to happen eventually. Isn’t that what disaster planning is all about: preparing for the unplanned events that can throw things into chaos? After years of never experiencing any sort of terrorist action, today that changed in Ottawa, Canada. Terrorists, which is what the attackers are being called at the moment, shot and killed an RCMP officer guarding the Canadian War Memorial and stormed the Parliament building, where Members of Parliament were actually on site. On Monday, Oct. 20, 2014, a radical ran down two Canadian soldiers in uniform; one later died in hospital.
It pains me to know that a soldier guarding a memorial for fallen soldiers – in all wars – dies protecting that memorial. Our thoughts go out to his family and loved ones.
At the moment, there is no greater priority in enterprise IT than building out and leveraging the cloud. Organizations that make the transition successfully will reap the benefits of a more agile infrastructure and lower costs. Those that don’t will fall into obsolescence.
But the sheer number of options when it comes to cloud services and infrastructure is mind-boggling. Whether it is public, private or hybrid, SaaS, PaaS, IaaS or the numerous permutations within those groups, the roadmap to a successful cloud environment is far from clear.
Like any IT deployment, it all starts with the platform you choose. This is particularly crucial when it comes to the private cloud because it is the owned-and-operated rock upon which all other cloud services will be built. And it is why we’ve seen such a plethora of options lately, both from traditional IT vendors and the rising tide of cloud providers.
There are a number of reasons organizations need to be paying attention to their employees’ travel risks, including health scares, natural disasters and political unrest. Since unpredictable events like these are now a global reality, many businesses are taking a hard look at business travel risks and ways they can protect their employees abroad.
In fact, 80% of travelers believe their companies have a legal obligation to protect them abroad, according to On Call International LLC’s report, “Travel Risk Management.” This means employees may blame their organization if their health or safety is compromised during a business trip. Because so much is at stake for companies that send staff members across the globe, it is important for employers to understand business travel risks and implement a travel risk management strategy to protect their workforce—and their company.
The study notes that companies need to be prepared to respond quickly and effectively to any travel-related incident. Responses should also put the needs of the employee first. Companies need to anticipate the risks and prevent them from occurring–or at least limit their potential impact.
(MCT) — Officials with the Iowa Department of Homeland Security and Emergency Management on Tuesday announced the development of an Alert Iowa statewide mass notification and emergency messaging system.
The new alert system can be used by state and local authorities to quickly disseminate emergency information to residents in counties that use the system, according to Homeland Security agency Director Mark Schouten, who announced the launch of the new alert system at the opening of the 11th Annual Iowa Homeland Security Conference.
The system is free of charge and available to all counties. So far, 34 of Iowa’s 99 counties have signed up to use the Alert Iowa system, officials said. Alert Iowa will allow citizens to sign up for the types of alerts they would like to receive. Messages can be issued via landline or wireless phone, text messaging, email, fax, TDD/TTY, and social media.
During my very first Stage 1 Audit for ISO 22301 I was naturally very curious. I was spouting out all sorts of thoughts and questions (no doubt much to the annoyance of my Manager and the attending Auditor at the time, but I think it’s important to ask those questions when learning). One thing I have remembered from that experience was being told:
“Achieving the initial ISO 22301 certification is probably the easiest part. Everything is new, employees tend to be enthusiastic and management often seem to have it at the top of their list. It’s the repeat visits (AKA Surveillance or Continuous Assessment Visits) or the Extension to Scope Assessments that present the real challenge. Employees can lose interest, other competing demands take over in the boardroom and documents can sometimes get mothballed”
In hindsight, the Auditor wasn’t wrong. When that organisation first achieved certification it was quickly celebrated, but then the profile simply lost some of its “fizz”. Other challenges or exciting new initiatives took over, and while the BCMS continued to tick over, things definitely appeared to slow down. But then came the return visit…
As you can imagine with these kinds of things, there was a last minute flurry of activity to update plans, roll out awareness campaigns, and brief all managers to within an inch of their life about the possible questions they might receive!
New Organizational Resilience Standard launch announced
DALLAS, Texas – DataBank Holdings, Ltd., a leading custom data center and colocation provider based in Dallas, announced the addition of HIPAA/HITECH Attestation to their annual audit certifications. With this latest compliance standard, DataBank offers the healthcare industry assurance and ease to deploy IT assets within compliance in DataBank data center facilities.
The HIPAA Security assessment was conducted using a structured approach to identify and evaluate the controls associated with the operations of the IT environment and the business operations environment. The assessment addressed a wide range of Administrative Safeguards, Technical Safeguards, Physical Safeguards, Policies & Procedures, as well as Documentation Requirements as they relate to DataBank’s Data Center Services.
“We have a number of healthcare clients which currently conform to the HIPAA regulations and standards,” said Michael Gentry, VP of Operations for DataBank. “By securing DataBank’s attestation as a part of our own annual audit process, we make it much simpler for both current and future customers to comply with the guidelines laid out in the audit, potentially saving them a significant financial and manpower investment.”
DataBank’s HIPAA/HITECH examination was performed by a full-service audit and consulting firm that specializes in integrated compliance solutions and examinations. By completing such examinations on an annual basis, DataBank is able to demonstrate substantially higher levels of assurance and operational visibility to both prospects and clientele.
To learn more about DataBank, the company facilities, compliance standards, and the company’s complete suite of service solutions, please visit the corporate website at http://www.databank.com.
DataBank is a leading provider of enterprise-class data center solutions aimed at providing customers with 100% uptime availability of data, applications and deployed infrastructure. We offer a full suite of hosting solutions including colocation, managed services and cloud solutions that are anchored in world-class secure data center facilities with best of breed infrastructure and highly robust network architecture. Our customized customer deployments are designed to effectively manage risk, improve their technology performance and allow them to focus on their core business objectives. DataBank is headquartered in the historic former Federal Reserve Bank Building, in downtown Dallas, TX and has additional data centers in Dallas, Minneapolis and Kansas City. For more information on DataBank locations and services, please visit http://www.databank.com or call 1(800) 840-7533
Fourth annual benchmark of Net Promoter® Scores (NPS®) includes data on 283 companies across 20 industries.
WABAN, Mass. – Temkin Group released a new research report, "Net Promoter Score Benchmark Study, 2014", based on a study of 10,000 U.S. consumers.
Net Promoter Score (NPS) has become a popular customer experience metric. NPS identifies the likelihood of consumers to recommend a company to their friends and family, using a scoring range from -100 to +100.
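The arithmetic behind that -100 to +100 range can be sketched in a few lines. In the conventional NPS method (not specific to Temkin Group's study), respondents rate likelihood to recommend on a 0-10 scale, with 9-10 counted as promoters and 0-6 as detractors; the function name and sample ratings below are illustrative only:

```python
# Minimal sketch of the conventional NPS calculation from 0-10 survey ratings.
# Promoters rate 9-10, detractors rate 0-6, passives (7-8) are ignored.
def net_promoter_score(ratings):
    """Return the NPS (-100 to +100) for a list of 0-10 ratings."""
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses -> NPS of 30
print(net_promoter_score([10, 10, 9, 9, 9, 8, 8, 7, 5, 3]))
```

Because passives count toward the denominator but not the numerator, a score of 30, as in the grocery-chain average cited below, means promoters outnumber detractors by 30 percentage points.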
USAA's insurance business (67) and JetBlue (61) earned the only NPS scores above 60. Other companies with NPS above 50 are H-E-B, USAA (banking and credit cards), Trader Joe's, Mercedes-Benz, Amazon.com, Apple (computers), Lexus, Toyota, and Aldi.
Citibank and HSBC earned the lowest NPS, followed by four firms that also had scores of -10 or below: Comcast, Charter Communications, Commonwealth Edison, and Super 8.
"Net Promoter Scores can provide a strong indication of your relationship with customers," states Bruce Temkin, Managing Partner of Temkin Group. Temkin goes on to say, "Like any customer metric, NPS is only valuable when it's used to drive improvements."
Here are some additional findings from the research:
- Auto dealers earned the highest average NPS (38) followed by grocery chains (32), computers (30), and insurance carriers (30).
- TV service providers (1), Internet service providers (2), and utilities (5) are the only industries with averages below 10.
- USAA's insurance, banking, and credit card businesses earned NPS levels that are 37 or more points above their industry averages. Seven other firms are 25 or more points above their peers: JetBlue, credit unions, Chick-fil-A, H-E-B, Kaiser Permanente, Amazon.com, and Trader Joe's.
- Five companies fell more than 20 points below their industry averages: Super 8, Motel 6, HSBC, Quality Inn, and Citibank.
- HSBC's NPS is 55 points below the industry average for banks and Super 8 is 42 points below the hotel industry. Four other firms are 30 or more points below their industry averages: Motel 6 (hotels), HSBC (credit cards), US Airways (airlines), and 7-Eleven (retail).
The 20 industries included in this report are airlines, auto dealers, banks, computer makers, credit card issuers, fast food chains, grocery chains, health plans, hotel chains, insurance carriers, Internet service providers, investment firms, major appliance makers, parcel delivery services, rental car agencies, retailers, software firms, TV service providers, utilities, and wireless carriers.
The report "Net Promoter Score Benchmark Study, 2014" can be downloaded from the Customer Experience Matters blog, at ExperienceMatters.wordpress.com as well as from the Temkin Group website, www.TemkinGroup.com.
About Temkin Group: Temkin Group is widely recognized as a leading customer experience research and consulting firm. Many of the world's largest brands rely on its insights and advice to steer their transformational journeys. Temkin Group combines customer experience thought leadership with a deep understanding of the dynamics of organizations to help accelerate results. Rather than layering on cosmetic changes, Temkin Group helps companies embed practices within their culture by building four critical competencies: Purposeful Leadership, Employee Engagement, Compelling Brand Values, and Customer Connectedness. The firm's ongoing research identifies leading and emerging best practices across a wide range of activities for engaging the hearts and minds of customers, employees, and partners. For more information, contact Bruce Temkin at 617-916-2075 or send an email.
About Bruce Temkin: Bruce Temkin is widely recognized as a customer experience thought leader and is Customer Experience Transformist and Managing Partner of Temkin Group. He is also the author of a very popular blog, Customer Experience Matters® (ExperienceMatters.wordpress.com). Prior to forming Temkin Group, he was a VP at Forrester Research for 12 years. Bruce is a highly demanded speaker who consistently receives high marks for his content-rich, entertaining keynote addresses. He is also the co-founder and Chair of the Customer Experience Professionals Association (CXPA.org), a global non-profit organization dedicated to the advancement of customer experience management.
Net Promoter Score, Net Promoter, and NPS are registered trademarks of Bain & Company, Satmetrix Systems, and Fred Reichheld. Customer Experience Matters is a registered trademark of Temkin Group.
Well into the 21st century, businesses worldwide are focusing more and more on managing risks, be they internal or external, financial, operational or strategic, involving technology or regulations or related to reputation.
While organizations are raising the bar on effective risk management, executives face extraordinary headwinds spawned by a turbulent environment in which risks materialize virtually overnight. Just this year, global financial and business markets have been rocked by spectacular cybersecurity breaches, geopolitical instability in the Middle East and Eastern Europe, refugee crises and more.
Internal auditors working from risk-based annual plans developed before March are increasingly finding themselves addressing yesterday’s challenges.
All of this reinforces my long-held belief that internal audit must take a more continuous approach to risk assessment. Audit plans and coverage should constantly evolve as new, potential risks surface and undergo assessment. Such an approach adds significant value for internal audit’s stakeholders, particularly during sudden or unexpected crises.
Yes, I realize that the last thing we need in Business Continuity Planning practices is another acronym, but, hey, what’s the fun in writing a blog if you can’t cause trouble? So here goes – another BCP acronym …
I have been stating for a while now that the BCP Methodology needs to be revisited. I think that the tried and true practice of conducting BIAs is a bit flawed. In practice, I think, the methodology attacks middle management and department-level areas in the organization without first establishing corporate-wide and senior-level objectives for the business during a crisis. When we ask people to establish RTOs and RPOs (more of those lovely acronyms – see the chart below), what are they basing their answers on? When we ask for the impacts of being down, to set those recovery objectives, what business objectives are those recovery objectives being designed to meet?
I think that the BCP Methodology needs to add a step at the beginning of our analyses in which we establish – are you ready for it, here it comes, the new acronym, in three, two, one – our ABOs, Adjusted Business Objectives. I think part of the fallacy in our current process is that RTOs (or MADs, if you prefer that acronym) are set with the assumption that the company is still aiming to hit its established business objectives for the year. And I think that is wrong. During times of crisis, management’s expectations of what the company should achieve are adjusted. During times of crisis, we may not have the same Income Targets, Profit Targets, Sales Targets, Margin Targets, Production Targets, etc.
The Hamilton Project at the Brookings Institution and the Stanford Woods Institute for the Environment released a new report Oct. 20 that addresses how Western states can confront the crippling drought that threatens the nation’s entire water system.
The report comprises three papers, each of which examines particular strategies for coping with ongoing drought conditions. The first paper, Shopping for Water, advocates using market forces to manage water resources and lessen the impact and frequency of water shortages. The second paper, The Path to Water Innovation, highlights the need for innovative technologies to promote efficiency and conservation, and suggests reviewing regulatory practices and creating statewide offices for water innovation. The third paper looks at nine economic facts about water in the United States with “the aim of providing an objective framing of America's complex relationship with water.”
In conjunction with the release of the papers, a forum was hosted on Oct. 20 at Stanford University to discuss the topics and issues within the report. Authors of the paper were joined by other water experts, as well as California Gov. Jerry Brown, who opened the forum with his vision of the landscape of water in the west.
“Water is going to be a major issue that is going to be addressed in the California Legislature, in Congress – water issues don’t get solved in one place. It’s a complicated interplay of governmental jurisdiction at every level,” Brown said.
The Ebola epidemic in Africa and fears of it spreading in the U.S. have turned the nation’s attention to the federal government’s front-line public health agency: the Centers for Disease Control and Prevention (CDC). But as with Ebola itself, there is much confusion about the role of the CDC and what it can and cannot do to prevent and contain the spread of disease. The agency has broad authority under federal law, but defers to or partners with state and local health agencies in most cases.
Julie Rovner answers some common questions.
As the number of companies suffering a data breach continues to grow – with U.S. retailer Staples now reported to be investigating a breach – so do the legal developments arising out of these incidents.
While companies that have suffered a data breach look to their insurance policies for coverage to help mitigate some of the enormous costs, recent legal developments underscore the fact that reliance on traditional insurance policies is not enough, notes the I.I.I. white paper Cyber Risks: The Growing Threat.
A post in today’s Wall Street Journal Morning Risk Report echoes this point, noting that a lawsuit between restaurant chain P.F. Chang’s and its insurance company, Travelers Indemnity Co. of Connecticut, could further define how much, if any, cyber liability coverage is included in a company’s CGL policy.
Each year, Forrester Research and the Disaster Recovery Journal team up to launch a study examining the state of business resiliency. Each year, we focus on a particular resiliency domain: business continuity, IT disaster recovery, crisis communications, or overall enterprise risk management. The studies provide BC and other risk managers an understanding of how they compare to the overall industry and to their peers. While each organization is unique due to its size, industry, long-term business objectives, and tolerance for risk, it's helpful to see where the industry is trending, and I’ve found that peer comparisons are always helpful when you need to understand if you’re in line with industry best practices and/or you need to convince skeptical executives that change is necessary.
This year’s study will focus on business continuity. We’ll examine the overall state of BC maturity, particularly process maturity (business impact analysis, risk assessment, plan development, testing, maintenance, etc.), but we’ll also examine how social, mobile, analytics and cloud trends are positively and negatively affecting BC preparedness. In the last BC survey, one of the statistics that disturbed me the most was that very few firms assessed the BC preparedness of their strategic partners beyond asking for a copy of their BC plan. And we all know plans are always up to date, tested and specific enough to address the risk scenarios that the partner is most likely to experience (please note the tone of sarcasm in this sentence). I hope this year’s survey shows an improvement; otherwise, most of the industry is in mucho trouble.
For DRJ readers, the results and a summary analysis will be available on their website in January and if you attend the upcoming DRJ Spring World 2015, I'll be there to deliver the results in person. For Forrester clients, I’ll write a series of in-depth reports that will examine each of the survey topics in depth during the next several quarters. If you feel this data is valuable to the industry and you’re a BC decision-maker or influencer, please take 15 to 20 minutes to complete the survey. All the results are anonymous. We don’t even need your email address unless you’d like a complimentary Forrester report (and I promise we won’t use your email address for any other purpose).
Click here to take our survey.
By Paul Kirvan.
The Ebola outbreak shows how esoteric threats shelved in the ‘it will never happen’ folder can erupt to cause major disruption. Two other such threats spring to mind and it may be a good time for a reminder of these:
Solar flares traveling from the sun to the earth contain massive amounts of energy that have been known to disrupt electronic systems. Such an event could potentially cripple the world’s electrical grids for years, causing billions (trillions?) in damages.
Back in 2010, the US House of Representatives’ Energy and Commerce Committee voted unanimously to approve a bill allocating $100 million to protect the US energy grid from this rare but potentially devastating occurrence. The Grid Reliability and Infrastructure Defense Act, or H.R. 5026, aimed "to amend the Federal Power Act to protect the bulk-power system and electric infrastructure critical to the defense of the United States against cybersecurity and other threats and vulnerabilities."
Risk management is developing into a strategic function within European organizations. At the same time, risk management can contribute much more as its strategic role grows. Currently, risk managers are not satisfied with the level of mitigation for six of the top 10 risks ‘that keep their CEO awake at night’.
These are the key findings from the 2014 Risk Management Benchmarking Survey conducted earlier this year by the Federation of European Risk Management Associations (FERMA). Now in its 7th edition, the FERMA Benchmarking Survey this year received a record 850 responses from 21 European countries.
Using the results of the survey, FERMA has published its first European Risk and Insurance Report. FERMA President Julia Graham says, "FERMA has said that risk managers are becoming risk leaders - the European Risk and Insurance Report provides evidence to support that view. It, therefore, also endorses FERMA's objective to shape and support risk management as a profession."
Would a football player take to the field without attending training? Would an actor take to the stage without going to rehearsals? Would a pilot take to the skies without having practiced how to fly a plane? I’m sure any sensible person would answer ‘no’ to these questions. Before you know you're good enough to take on a role, you need to have practiced it first. Similarly, before you know your business continuity plan is fit for purpose, you need to have practiced it too.
We all know that every organization should have a business continuity plan – common sense dictates that when disaster strikes you would want to continue functioning as normally as possible. But how many organizations actually test their plans? Tests can be time consuming, they can be expensive, it can be difficult to get management buy-in, and you can often be frustrated by the lack of enthusiasm from the general workforce, who just want to get on with their jobs without your disruption. According to a recent study by Databarracks, less than a third of respondents (29%) claimed they had tested their plan in the last twelve months.
When was the last time you saw a survey on information security in enterprises? It’s a topic that often means different things to different people. For some it’s antivirus software to stop malware getting in, while for others it’s strict secrecy to stop marketing strategies from getting out. Yet data breaches can happen anywhere in a company and in a multitude of ways. Here are a few aspects that may help broaden your perception of some of the risks.
In a previous post, I discussed ways that small to midsize businesses (SMBs) can take their offices paperless. One of the biggest issues that companies face is finding a better way to store all those files than a clunky file cabinet full of papers.
Many companies rely on servers and cloud services to store their vast collections of files. One up-and-coming company, eFileCabinet, provides software and web services for SMBs to create, organize and store their important documents.
In an email interview with Matt Peterson, president and CEO of eFileCabinet, I discussed why many SMBs haven’t gone paper-free, the future of digital document management and how the eFileCabinet service works.
I asked Peterson why he felt more SMBs haven’t embraced a completely paperless office. In his opinion, people are afraid of change and find conversion of current paper files to be overwhelming:
(MCT) — Firefighters in the Houston region soon will have more information about certain buildings before they arrive to contain the blazes that threaten them.
Using a federal anti-terrorism grant, fire departments in the five-county area have developed a digital database of high-risk structures — those critical to the nation's daily operations, high-rises and some large commercial buildings. The database, accessible by tablet computer while en route to a scene, will replace binders full of papers tucked in the back of engines or command vehicles. Fire officials admit the binders often were not used, or at least not right away, because they were difficult to reach as an engine raced to a scene or as crews geared up for the fire.
"Those had really good information, but you only had time to access them about an hour into an incident," said Richard Mann, assistant chief for the Houston Fire Department. "(The new database) will tell you what you need to know in the first two minutes at the scene."
Although the initiative to create the digital system started before last May's deadly Southwest Inn fire that killed four HFD firefighters, the effort mirrors internal department recommendations to improve the quality of planning before a fire even starts.
(MCT) — Nigeria was declared Ebola-free by the World Health Organization (WHO) on Monday after recording no new confirmed cases for 42 days, which is twice the incubation period for the deadly Ebola virus.
"This is a spectacular success story that shows that Ebola can be contained," WHO said in a statement. "The story of how Nigeria ended what many believed to be potentially the most explosive Ebola outbreak imaginable is worth telling in detail."
The UN organization attributed Nigeria's success to the country's rapid adaptation of a polio eradication plan to fight the Ebola virus, including information campaigns and international support.
Nigeria confirmed 19 Ebola cases, seven of whom died, giving the country a fatality rate of 40 percent – much lower than the approximately 70 percent seen elsewhere, WHO said.
Are companies prepared for skyrocketing energy costs to combat extreme heat? Can farmers handle average crop losses of up to 73%? Should businesses invest in oceanfront property that is virtually guaranteed to flood? Because of climate change, these are just some of the crucial questions the United States will face before the end of the century, according to “Risky Business: The Economic Risks of Climate Change in the United States,” a report co-chaired by business experts Michael R. Bloomberg, Henry Paulson and Tom Steyer. The report quantifies and publicizes the economic risks posed by a changing climate. While climate change can be a politicized topic, there is little controversy that the phenomenon presents a great deal of risk to everyone, from individuals to institutions.
Decision-makers already use risk analysis to address uncertain situations, routinely evaluating potential threats and challenges such as bad investments or schedule delays. The report adds climate change to the risks that all decision-makers should account for. Robert E. Rubin, co-chair of the Council on Foreign Relations and member of the report’s risk committee, said, “Companies should disclose both their potential exposure to climate risk, and the potential costs they may someday be required to absorb to address carbon emissions.”
The report uses risk analysis, Monte Carlo simulation (MCS) and models to illustrate how different regions are likely to be affected by climate change. The project’s simulation also analyzes efforts to mitigate climate change, showing a changed distribution of probabilities if those efforts are made in the coming years. “As there are a very high number of permutations and combinations of weather events, it would be very difficult to analyze these meaningfully using an averaged or deterministic approach,” said Robert Kinghorn, associate director at the consulting firm KPMG Australia. “MCS overcomes this by allowing thousands of possible combinations of extreme weather events to be analyzed.”
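The report’s models are far more elaborate, but the basic Monte Carlo idea Kinghorn describes can be sketched in a few lines of Python: rather than working from one “average” year, sample thousands of possible years of extreme-weather losses and compare the resulting distributions with and without mitigation. Every rate and loss parameter below is an illustrative assumption, not a figure from the report:

```python
import random

def simulate_annual_losses(event_rate, mu, sigma, trials=10_000, seed=42):
    """Monte Carlo sketch: sample many possible years of extreme-weather losses.

    Each trial draws a Poisson-distributed number of events for the year
    (via exponential inter-arrival times) and a lognormal loss per event,
    then sums them into a total annual loss."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # Count exponential arrivals that fall inside one year.
        events, t = 0, rng.expovariate(event_rate)
        while t < 1.0:
            events += 1
            t += rng.expovariate(event_rate)
        totals.append(sum(rng.lognormvariate(mu, sigma) for _ in range(events)))
    return totals

# Illustrative parameters only: mitigation modeled as a lower event rate.
baseline = simulate_annual_losses(event_rate=3.0, mu=2.0, sigma=1.0)
mitigated = simulate_annual_losses(event_rate=1.5, mu=2.0, sigma=1.0)

# Compare tail risk: the 95th-percentile annual loss in each scenario.
p95 = lambda xs: sorted(xs)[int(0.95 * len(xs))]
print(f"baseline p95 loss: {p95(baseline):.1f}")
print(f"mitigated p95 loss: {p95(mitigated):.1f}")
```

The payoff of the simulation approach is exactly what the quote claims: you get a full distribution of outcomes, so tail measures like the 95th percentile can be read off directly instead of being hidden behind a single deterministic average.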
Ubidata already boasts an extensive European client base and now adds to this portfolio the new clients Samskip of the Netherlands, Ancotrans of Denmark and the internationally renowned rail freight company VoestAlpine Railpro. It also announced today a €3 million capital increase to extend this international growth.
With Ancotrans’ large and sophisticated fleet came the need to develop an information delivery approach that helps the client save money and resources, benefiting not only the bottom line but also the environment. Ubidata’s redesigned and easy-to-use Android app has been launched to ensure the right communication reaches key stakeholders and helps them make the right decisions.
Our work with Samskip in the Netherlands has shown how Ubidata can add flexibility and give power to a client system that relies on third-party subcontractors. We help Samskip take control by converting data into key performance indicators. This way Samskip can make decisions independently of other railway undertakings’ backlogs and consolidate its work accordingly.
VoestAlpine Railpro found Ubidata’s solution effective across its large fleet, reducing redundancy by up to 15%. Ubidata’s telematics product helps flag where redundancies in the system can and do occur, which helps focus the client’s resources and time.
These new client projects illustrate how Ubidata’s international client base is growing and underline an exciting new capital increase that has now begun. This investment phase will fund key areas of growth in product development and client relationship management. Ubidata’s aim is to grow the client base while continuing to serve current customers well through delivering the right information at the right time in the right place.
Ubidata is a Brussels-based company specialising in mobile logistics systems. In addition to developing and commercialising fast-evolving, high-end software and hardware for the fleet and logistics industry, it offers a full range of services to assist clients at every step of the process: from analysing their unique fleet situation and offering advice on the optimal approach for improving their productivity to seamless integration into their back offices.
Study exposes a lack of readiness for EU data laws, shows organisations are struggling to enforce acceptable usage policies and reveals the activity of Europe's most ‘dangerous' cloud user
LONDON – Skyhigh Networks, the Cloud Visibility and Enablement company, today released its latest quarterly European Cloud Adoption and Risk Report. The report analyses real-life usage data from 1.6 million European users.
In Europe, the number of cloud services in use by the average company increased 23 percent, rising from 588 in Q1 to 724 in Q3. However, not all of these services are ready for the enterprise. Developed in conjunction with the Cloud Security Alliance, Skyhigh's Cloud Trust Program tracks the attributes of cloud services and ranks them according to risk. The report found that only 9.5 percent of all services meet the most stringent security requirements including strong password policies and data encryption.
The report also reveals a worrying lack of conformance to the EU Data Protection Directive, particularly with regards to the transfer of personally identifiable information outside Europe. Skyhigh found that nearly three quarters (74.3 percent) of the cloud services used by European organisations do not meet the requirements of the current privacy regulations, with data being sent to countries without adequate levels of data protection. With stricter policies and harsher penalties set to come into force soon, organisations have just a short window to address these issues.
"The growth in cloud services being used in Europe is testament to the benefits users see in the services on offer," said Rajiv Gupta, CEO, Skyhigh Networks. "On the other hand, the IT department needs to make sure that these services don't put the organisation's intellectual property at risk. This report analyses real-world cloud usage data to shine a light on the extent of Shadow IT."
Echoing the last report, much of the adoption of cloud services still remains under the radar of IT departments with 76 percent of IT professionals not knowing the scope of Shadow IT at their companies but wanting to know. As such, a key problem that IT teams face is the enforcement of an acceptable use policy. The report found that IT personnel are often surprised when it is discovered that cloud services that they believe to have been blocked are actually being used by employees. As part of the study, Skyhigh surveyed IT professionals to understand their expected block rates for certain cloud services, and then compared this to actual block rates measured in the wild. The resulting ‘cloud enforcement gap' was surprising, for example 44 percent of IT professionals intended to block YouTube, but only 1 percent of organisations blocked the service comprehensively.
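The “cloud enforcement gap” itself is a simple per-service metric: the share of organisations that intend to block a service minus the share that actually block it comprehensively. A toy sketch (only the YouTube figures come from the report; the other services and all their numbers are hypothetical):

```python
# Intended vs. observed block rates (percent of organisations).
# YouTube's figures are from the report (44% intended, 1% actual blocking);
# Facebook and Dropbox entries are made up for illustration.
intended = {"YouTube": 44, "Facebook": 30, "Dropbox": 25}
actual = {"YouTube": 1, "Facebook": 12, "Dropbox": 9}

# The 'cloud enforcement gap': intended minus observed, per service.
gap = {svc: intended[svc] - actual[svc] for svc in intended}

# Report the widest gaps first, since they signal policies that exist
# on paper but are not enforced in practice.
for svc, pts in sorted(gap.items(), key=lambda kv: -kv[1]):
    print(f"{svc}: gap of {pts} percentage points")
```

On the report’s YouTube numbers, the gap is 43 percentage points: nearly half of IT professionals believe the service is blocked while almost no organisation blocks it comprehensively.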
In terms of trends, the report found that 80 percent of all corporate data uploaded to the cloud is sent to just 15 percent of cloud services, which makes it easier for IT teams to prioritise security and risk analysis. The top destination for corporate data in Europe is Microsoft Office 365, followed by Salesforce. However, there's a long tail of services below these top 15 and this is where 73 percent of the compromised accounts, insider threats and malware originate.
"The gap between perception and reality uncovered by this study is worrying, as so much corporate data is being uploaded to cloud services that IT teams believe they have blocked," continued Gupta. "It only takes one misstep to cause a serious security or compliance threat to an organisation. As such, mechanisms should be in place not only to discover which cloud services are being used, but also to analyse the risk profile of these services and understand the true implications for enterprise data security."
Finally, by digging deeper into the statistics, the report has for the first time revealed the behaviour of the most ‘dangerous' cloud user in Europe. This person uploaded more than 17.5GB of data to 71 high-risk cloud services in a three-month period, the equivalent of 8,750 copies of War and Peace. Some of these high-risk services are also used to distribute malware into organisations. This highlights the threat a single user could pose to an organisation and its data.
The full report is available here: www.skyhighnetworks.com/cloud-report
The idea of data as philanthropy received a Silicon Valley boost this week when Informatica and Cloudera announced plans to support the non-profit, DataKind. Both Informatica, which specializes in data integration, and Cloudera, a Hadoop analytics company, will jointly sponsor DataKind programs and projects.
DataKind applies data science to world problems by making data scientists available to work with governments and other mission-driven organizations that are working on issues such as education, vaccine delivery and poverty eradication. For example, Bayes Impact created a model that helps reduce fraud while maximizing loans to honest borrowers for the micro-financier Zidisha.
Big Data has a long track record of social justice work. For instance, last year, ITBE’s Don Tennant wrote about Big Data’s use in the fight against human trafficking. Earlier this year, civic technologist Matt Stempeck proposed that businesses make data donations to non-profits, which prompted my earlier post about the business value of data philanthropy.
As people increasingly turn to social media after a disaster — both to get information and check to see if their friends and family have been affected — the platforms are creating disaster-specific tools. Twitter Alerts, for example, was launched in September 2013 as a way to highlight emergency information from vetted agencies across the social networking platform. And now Facebook has joined the movement with a new tool, called Safety Check, that’s designed to be an easy way for users to let their friends and family members know if they’re OK after a disaster.
Introduced via a blog post on Oct. 15, the company says that in addition to helping users let others know if they’re safe, Safety Check also allows users to check on people in the affected area and mark friends as safe. The feature works on Facebook’s desktop and mobile applications, including Android and iOS.
When users are within the vicinity of an area affected by a disaster, they will receive a notification from Facebook asking if they’re safe. Selecting “I’m Safe” will post an update on that user’s Facebook page.
(MCT) — California is on track to deliver, within two years, an earthquake early warning system that can give 10 seconds to a minute or more warning that a major earthquake is about to hit, officials said Thursday.
The development of such a system would enable gas and electric utilities, railroad operators, crane operators and people time to take evasive action, said Sen. Alex Padilla, D-Pacoima. His Senate Bill 135 mandated that an early-warning system be developed.
The bill, which went into effect in January, required the state Office of Emergency Services to develop a statewide earthquake early warning system to alert Californians in advance of dangerous shaking.
The initial cost to build and operate the system for five years is $80 million.
On Thursday, Padilla said that state Office of Emergency Services officials have told him the system is on track to be operational by January 2016.
Winter storms caused $1.9 billion in insured losses in 2013, up sharply from the $38 million in damages seen in 2012, so it’s good to read in NOAA’s U.S. Winter Outlook that a repeat of last year’s winter of record cold and snow is unlikely.
In a release, NOAA’s Climate Prediction Center says:
Last year’s winter was exceptionally cold and snowy across most of the United States east of the Rockies. A repeat of this extreme pattern is unlikely this year, although the Outlook does favor below-average temperatures in the south-central and southeastern states.
While the South may experience a colder winter, the Outlook favors warmer-than-average temperatures in the western U.S., Alaska, Hawaii and New England, according to NOAA.
Over the past 40 years, tidal flooding has quadrupled in many low-lying areas, and that change is accelerating as sea levels rise. According to a new study, even moderate sea level rise could as much as triple coastal flooding events in many communities in the next 15 years. Based on moderate projections for sea level rise from the 2014 National Climate Assessment, the Union of Concerned Scientists’ study “Encroaching Tides” calls attention to the threat of routine tidal flooding to much of the East and Gulf Coasts. As opposed to storm surges, tidal flooding occurs far more regularly, bringing water above the base sea level during routine tide patterns or, for example, twice a month due to the moon’s increased gravitational pull.
With anticipated sea level rise, even daily tides may flood many areas, according to the report. As the base sea level changes, deviations take on new meaning, which can have drastic implications for property.
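The mechanism the report describes, a fixed flood threshold being crossed more often as the base sea level creeps up, can be illustrated with a toy tide model. This is not the UCS methodology; all heights and cycle lengths below are invented for illustration:

```python
import math

def flood_days(threshold_m, rise_m, days=365):
    """Toy illustration: count days on which the daily high tide exceeds
    a fixed flood threshold, before and after adding a uniform sea-level
    rise to the base level."""
    floods = 0
    for d in range(days):
        # Crude spring/neap modulation: a 14-day cycle on the daily high tide.
        high_tide = 1.0 + 0.3 * math.sin(2 * math.pi * d / 14)
        if high_tide + rise_m > threshold_m:
            floods += 1
    return floods

print(flood_days(1.25, 0.0))   # flooding only near spring tides
print(flood_days(1.25, 0.15))  # a modest rise puts many more days over the line
```

Because the tide curve is flat near its peaks, a small uniform rise pushes a disproportionate number of routine high tides over the threshold, which is why the report can project a tripling of flood events from only moderate sea level rise.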
Following the release of Insignia Communications’ latest report, ‘The effect of social media on breaking news’, managing director Jonathan Hemus discusses what the findings mean for business continuity managers.
By Jonathan Hemus
With the increased use of social media and ‘citizen journalism’, people are creating and sharing more information than ever before. It is now far easier (and quicker) for disgruntled employees, unhappy customers and campaigners to voice their opinions online – providing a wealth of content for journalists in a crisis.
A perfect example of this affected Apple just last month. Two days after the iPhone 6 went on sale on 19th September, images surfaced on social media showing phones which appeared to have bent in people’s pockets as a result of accidental pressure. Within hours, the pictures had spread like wildfire on Twitter with thousands of people posting comments using the hashtags Bentgate and Bendgate: an unwanted headache for Apple and further proof of the speed at which social media can propel an issue into the spotlight.
One of the reasons I enjoy writing about technology, particularly data technology, is because I believe it can illuminate real-world problems. So you can imagine my frustration when I tried to fact-check the conflicting data on Ebola’s infection rates. One article claimed a case fatality rate of 25 percent, while another cited 90 percent.
I checked and, surprisingly, both are right — well, sort of. WHO states:
“The average EVD case fatality rate is around 50%. Case fatality rates have varied from 25% to 90% in past outbreaks.”
This week, WHO bumped that fatality rate to 70 percent.
The reason the numbers range so widely is simple: West African health care systems and reporting structures aren’t advanced enough to properly track it, according to the CDC.
That’s a rational explanation, but it doesn’t resolve the confusion. Surely, if we’re serious about stopping the spread of Ebola and finding a cure, we’re going to need real data.
It took home improvement retailing giant Home Depot about a week before it finally confirmed it had suffered a data breach. Home Depot first reported the possibility of a breach on 2 September 2014, but did not actually confirm the hacking until 8 September. During that time, the company made somewhat vague statements that it was still carrying out an investigation to determine whether or not its systems had actually been compromised.
Based on the company’s recent press release confirming the breach (see “The Home Depot Provides Update on Breach Investigation“), it appears that Home Depot’s internal IT security team was unaware that its payment data systems had been compromised. Instead, it looks as if the company only caught on to the breach, and then launched its investigation, once it had received reports from banking partners and law enforcement officials notifying the company of suspicious activity with payment cards used at the retailer’s various stores. (This is a trend we are seeing more often, and it is disturbing because it signals that the malware used to infect store POS systems is very difficult to detect.) The company believes the breach took place initially sometime in April 2014. No information regarding the size of the breach was included in the press release.
(MCT) — When hurricanes sweep across the ocean’s surface, they whip up a foamy mix of sea and air, swapping energy in a loop that can crank up the force of powerful storms.
The physics of that exchange — nearly impossible to measure in the dangerous swirl of a real storm — has remained largely a mystery, vexing meteorologists who have struggled to improve intensity predictions even as they bettered forecast tracks. Now scientists have a shot at solving that puzzle with a new 38,000-gallon research tank unveiled this month at the Rosenstiel School of Marine and Atmospheric Science at the University of Miami.
Powered by a 1,400-horsepower engine, the tank will let scientists map Category 5 hurricanes in three dimensions for the first time.
“It can really help us understand why this behavior is occurring,” said Mark Powell, an atmospheric scientist with the National Oceanic and Atmospheric Administration who published a study on the exchange in the journal Nature in 2003.
WARREN, Mich. – The Federal Emergency Management Agency (FEMA) encourages disaster survivors to visit one of the four Disaster Recovery Centers in Macomb, Oakland and Wayne counties to learn about the many paths toward recovering from the August severe storms and flooding.
The recovery centers are one-stop shops where disaster survivors can register for assistance, discuss types of disaster assistance programs with specialists, receive the status of their existing application and obtain other information.
The advantage of already being registered before visiting a recovery center is that FEMA staff can look up an applicant’s case and tell how it is progressing. The same information is available at FEMA’s toll-free number, but the face-to-face experience at the centers makes it easier.
U.S. Small Business Administration (SBA) customer service representatives at the recovery centers can explain the several different kinds of low-interest, long-term disaster assistance loans available. Not only businesses and private nonprofit organizations but also homeowners and renters can apply for SBA disaster recovery loans.
Rebuilding stronger and safer homes is the specialty of FEMA’s mitigation specialists. They are at the centers and can explain how to protect property against damaging winds and floods, and reduce damages from future events.
The centers should not be confused with the recovery support sites located throughout neighborhoods in southeast Michigan. The support sites are open for a short period of time and are staffed with FEMA personnel who can help survivors register and quickly answer questions about disaster assistance programs.
It is not necessary to go to a recovery center to register for the various federally funded recovery programs; registration can be accomplished just as easily by phone or on the web.
Register at www.DisasterAssistance.gov or via smart phone or Web-enabled device at m.fema.gov. Applicants may also call 1-800-621-3362. TTY users may call 1-800-462-7585.
The toll-free telephone numbers will operate from 7 a.m. to 11 p.m. EDT seven days a week until further notice.
The deadline for individuals to apply for disaster assistance is Nov. 24, 2014.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
The hurricane season always is a good time to take a look at disaster recovery and business continuity (DR/BC). These twin endeavors are aimed at keeping organizations operational, and if that doesn’t work out, getting them back up and running as quickly as possible.
Recently, virtualization has given DR/BC some new tools and new challenges. Through virtualization, network operators can break functions and collected data into little pieces to be scattered in a variety of places. They also have the ability to reroute and otherwise change networks on the fly.
ComputerWeekly recently discussed the value of virtualization for DR/BC and the players involved in the sector. The case for virtualization was made near the start:
Virtualisation changes everything and increases the number of options. First, data can be easily backed-up as part of an image of a given virtual machine (VM), including application software, local data, settings and memory. Second, there is no need for a physical server rebuild; the VM can be recreated in any other compatible virtual environment. This may be spare in-house capacity or acquired from a third-party cloud service provider. This means most of the costs of redundant systems disappear.
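The mechanism the quote describes can be sketched in a few lines. This is a conceptual illustration only, with hypothetical names, not a real hypervisor API:

```python
# Conceptual sketch (hypothetical names, not a real hypervisor API) of the
# DR property described above: a VM backed up as a single image, including
# application software, local data, settings and memory, can be recreated
# in any compatible environment with no physical server rebuild.
from dataclasses import dataclass

@dataclass(frozen=True)
class VMImage:
    name: str
    hypervisor: str           # e.g. "kvm" or "esxi"
    disk_gb: int
    includes_memory_state: bool

def restore(image: VMImage, target_hypervisor: str) -> str:
    """Recreate the VM from its backup image on a compatible host."""
    if image.hypervisor != target_hypervisor:
        raise ValueError(f"{image.name}: image format {image.hypervisor!r} "
                         f"is not compatible with {target_hypervisor!r}")
    return f"{image.name} restored on {target_hypervisor}"

backup = VMImage("erp-app-01", "kvm", disk_gb=80, includes_memory_state=True)
# The target could be spare in-house capacity or a cloud provider's host:
print(restore(backup, "kvm"))  # erp-app-01 restored on kvm
```

The "compatible virtual environment" caveat in the quote is the one real constraint: the image format and target hypervisor must match, which is why the restore function checks it before anything else.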
(MCT) — If, or more likely when, another Hurricane Hazel hits the Carolinas, experts say many things would be different than they were in 1954.
Sadly, they note, the outcome would be the same.
Despite more than a half-century's advances in technology, communication and the lessons learned from other storms, there is only so much people can do to prepare for a tropical buzz saw with a two-story storm surge and winds of 140 mph.
"We've come light-years in terms of prediction and preparation," said Gene Booth, Cumberland County Emergency Management coordinator. "But ultimately, there would still be a tremendous amount of damage.
"The biggest difference now is we have planning in place. Back then, there wasn't the level of planning because there wasn't a history of anything like Hazel."
Some 25 years after the Loma Prieta earthquake, the San Francisco Bay area faces increased risk of a major quake, two separate studies suggest.
A study published online in the Bulletin of the Seismological Society of America says that sections of the San Andreas fault system—the Hayward, Rodgers Creek and Green Valley faults—are nearing or past their average earthquake recurrence intervals.
It says the faults ‘are locked and loaded’ and estimates a 70 percent chance that one of them will rupture within the next 30 years. This would trigger an earthquake of magnitude 6.7 or larger, the study’s authors say.
A second study by catastrophe modeler RMS says the next major quake could be financially devastating to the Bay Area economy in part because of low earthquake insurance penetration.
Continuity Central recently conducted a quick survey into whether there is a change in business terminology taking place: from business continuity management to organizational resilience. The survey was a follow up to an article in which Lyndon Bird, the technical director of the Business Continuity Institute, claimed that such a development is under way.
306 respondents took part in the online survey which was conducted using Survey Monkey.
The results show that just over half of respondents (53.27 percent) agree that a terminology change from business continuity management to organizational resilience is taking place. 32.03 percent of respondents disagree and 14.71 percent don't know.
However, when respondents were asked about their own organization, the situation was somewhat different, with only 29.74 percent of respondents stating that their organization was starting to use 'organizational resilience' rather than 'business continuity management' terminology. 67.32 percent said that their organization was still using business continuity management terminology; and 2.94 percent didn't know.
Axway and Ovum have published the results of a global study that examined data security, governance and integration challenges facing organizations. Conducted by Ovum, the study highlighted how the growing complexity of governance and compliance initiatives challenge IT integration and C-level executives, and how isolation between IT integration and corporate governance forms economic and reputational risks.
Of the 450 respondents from North America, Asia Pacific and EMEA, 23 percent said their company failed a security audit in the last three years, while 17 percent either didn’t believe or didn’t know if they would pass a compliance audit today. The study also revealed that the average overall cost of a data breach was $3 million.
In examining the key priorities for chief information officers (CIOs), chief information security officers (CISOs) and chief risk officers (CROs), the study identified business continuity and disaster recovery as the top priority (87 percent), followed by protecting against cyber threats (85 percent), managing insider threats (84 percent) and compliance monitoring (83 percent).
Current infrastructure and governance silos, the need to manage an increasing number and type of integrations, and the problems with existing file transfer solutions have created data security and privacy concerns about file transfers. Respondents listed data/file encryption at rest (89 percent), defining and enforcing security policies (86 percent) and identity and access management (78 percent) as the most pressing issues. These concerns are particularly important as the study found organizations use file transfers for 32 percent or more of business critical processes, on average.
New research from Kroll Ontrack reveals how companies that don’t regulate employee usage of business devices with effective IT policies are putting data security at risk.
The research highlights that in the last year, 38 percent of UK employees downloaded personal files and 29 percent of employees installed personal apps or programs on devices which they also use for work.
Five percent of people used P2P file sharing services, such as BitTorrent and Gnutella, the same percentage temporarily disabled firewall/antivirus software and 4 percent of workers cancelled antivirus scans on these devices.
Paul Le Messurier, Programme and Operations Manager at Kroll Ontrack commented: “As the line between work and personal life continues to blur, employees will increasingly conduct personal activities on a device they also work from. This will raise a number of issues for organizations, from data security through to productivity uncertainties.
“As such, businesses must look to protect their assets, both digital and physical. Employers must educate employees on what activities are acceptable; develop a simple, but thorough IT usage policy; and ensure backups are in place and up to date for when disaster does strike.”
The survey was conducted by ICM and was the result of interviews with 1,151 UK employees between 18th and 20th July 2014.
A European study by information storage and management company Iron Mountain has discovered an unexpected downside to advanced data back-up and storage capability. The research revealed that employees have become more casual in their approach to saving documents, confident that, if required, they can call on IT support to help them retrieve missing data.
In a series of in-depth interviews with senior IT professionals in France, Germany, the Netherlands, Spain and the UK, Iron Mountain found that IT teams are frustrated by the casual approach to storing data but are doing nothing to change employee behaviour.
The most common reason for the employee approach to saving documents is thought to be a simple lack of IT skills, although other explanations included general carelessness and complacency, poor version control of documents, an inconsistent or incomplete approach to naming files (making them difficult to find) and the challenge of unstructured data for creative teams.
According to IT professionals, Europe’s top ten worst document savers are as follows:
5. Business development
6. Creative teams
7. Customer support
8. IT and software development teams
9. Senior management
Very little quantitative progress has been made in Business Continuity Management since IT-Disaster Recovery programs began to morph into BCM programs in the 1980s. Standards and best practices have been hashed and rehashed, but nothing substantial has changed.
BCM programs still struggle to attain “management buy-in”. Newcomers to the industry (lacking any other meaningful bearings) cling to measuring their programs against ‘standards’ to justify their programs’ existence – and their own. Industry analysts, consultants, certification bodies and practitioners continue to march to the same tune: BCM for BCM’s sake.
Lately there have been many conversations on BCM discussion forums regarding where BCM, as an industry, is headed. The consensus seems to be that many believe the industry has gone as far as possible down the present path – and desperately needs a new direction, a new vision.
One of the issues that mobile device vendors, service providers and users are well aware of is battery life. While it still is a hot issue, the dynamics have changed a bit during the last couple of years.
In the past, twin trends were seen as a tremendous problem. On one hand, applications and services were becoming more power-hungry and, on the other, devices were getting smaller. The small size of the device limits the size, and therefore the power, of the battery. This was seen as a looming threat to the very survival of the sector.
The pressure has eased a bit, however: The popularity of video on mobiles has led to a consistent growth in screen size, which means batteries can grow a bit.
Despite the tremendous gains it has made over the past decade, storage is still lagging behind its compute and networking counterparts in terms of speed and performance.
This isn’t an indictment of storage itself, mind you, as technologies like Flash and other forms of solid-state infrastructure have done wonders for both speed and throughput in advanced enterprise settings. Rather, it is in the support infrastructure surrounding physical storage where most of the bottlenecks remain.
Latency in the storage farm, in fact, is increasingly seen as an impediment to many higher order data center functions, such as virtualization and cloud computing. According to a recent survey from PernixData, a vendor of server-side Flash solutions, about half of respondents say storage performance is a higher priority than additional capacity, while only 21 percent cited capacity as a priority. As well, the survey found that upwards of 70 percent of respondents are considering storage acceleration software to help boost performance. A key driver in this shortage of performance continues to be the proliferation of virtual machines, which tends to flood storage infrastructure with more requests than it can handle.
Rapidly developing computer technologies and the unrelenting evolution of cyber risks present one of the biggest challenges to the (re)insurance sector today. Liabilities from cyberattacks and threats to the data security of cloud computing and social media have become key emerging risks for carriers. The unprecedented rise in cyberattacks, in addition to the threat cyberrisk poses to global supply chains, has seen the cyberinsurance market grow significantly in recent years.
Client demand for cyber coverage has been growing, on average, 30% annually in the United States over the past several years, according to Marsh. While demand varies by industry, the one constant has been that more clients are investigating and analyzing existing traditional insurance coverage and whether they need standalone cyberrisk insurance coverage.
(MCT) — As scary as the Ebola incidents in Texas and the outbreak in Africa are, it's worth noting that nine years ago this month the country was confronting another outbreak that looked rather ominous, too: a deadly strain of influenza that had originated in birds in Asia.
The so-called bird flu elicited a widespread government response, including a white paper from then-President George W. Bush's White House laying out the strategies should the flu reach pandemic levels in the United States. There were worries at the time that the flu, which was passed from birds to humans, could mutate, turning into a flu pandemic similar to the one at the end of World War I that killed between 20 and 40 million people globally in 1918-1919.
Millions of birds were purposely killed to stop the disease, and the bird flu scare abated over that winter of 2005-2006.
Which disaster recovery measurements do you really need? The answer is the ones that are effective in helping you to plan and execute good DR. So your choice will naturally depend on your IT operations. The two ‘classics’ of the recovery time objective (RTO) and recovery point objective (RPO) are so fundamental that they apply to practically all situations. But suppose your organisation is running a service-oriented IT architecture with business applications like ERP using resources supplied by other servers. If some of the servers cannot be recovered satisfactorily, there may be a secondary impact elsewhere. How can you measure this situation and define a minimum acceptable level of recovery?
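One way to make that secondary impact measurable, offered here as a rough sketch with hypothetical service names rather than a standard metric, is to model the dependency graph and propagate each failed recovery through it, then express the minimum acceptable level of recovery as a fraction of services actually usable:

```python
# Sketch: propagate failed recoveries through a service dependency graph
# to surface secondary impacts. Service names and the 80% floor mentioned
# below are invented illustrations, not prescriptions.

DEPENDS_ON = {
    "erp":       ["db", "auth"],
    "web-shop":  ["erp", "auth"],
    "reporting": ["db"],
    "db":        [],
    "auth":      [],
}

def unavailable(failed_recoveries: set[str]) -> set[str]:
    """A service is down if it failed to recover or depends on a down service."""
    down = set(failed_recoveries)
    changed = True
    while changed:                     # keep propagating until the set is stable
        changed = False
        for svc, deps in DEPENDS_ON.items():
            if svc not in down and any(d in down for d in deps):
                down.add(svc)
                changed = True
    return down

def recovery_level(failed_recoveries: set[str]) -> float:
    """Fraction of services actually usable after recovery."""
    return 1 - len(unavailable(failed_recoveries)) / len(DEPENDS_ON)

# Losing only the database also takes out ERP, the web shop and reporting:
print(sorted(unavailable({"db"})))        # ['db', 'erp', 'reporting', 'web-shop']
print(f"{recovery_level({'db'}):.0%}")    # 20%
```

Here one unrecovered server drops the usable estate to 20 percent, far below, say, an 80 percent floor an organisation might define as its minimum acceptable level of recovery. The point is that the headline RTO/RPO of each server says nothing about this cascade; only the dependency-aware figure does.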
DALLAS — As a 26-year-old Dallas nurse lay infected in the same hospital where she treated a dying Ebola patient last week, government officials on Monday said the first transmission of the disease in the United States had revealed systemic failures in preparation that must “substantially” change in coming days.
“We have to rethink the way we address Ebola infection control, because even a single infection is unacceptable,” Thomas Frieden, director of the Centers for Disease Control and Prevention, said in a news conference.
Frieden did not detail precisely how the extensive, government-issued safety protocols in place at many facilities might need to change or in what ways hospitals need to ramp up training for front-line doctors or nurses.
By Matthew Neigh, Global Technology Evangelist, Cherwell Software
Today’s IT environments are complex, and the commoditization of IT is one of the driving elements. This is manifest in a variety of ways in the enterprise. However, few are as vexing as “bring your own device” (BYOD).
BYOD is not only the future; it is already here. Organizations should expect the trend to accelerate, the learning curve to steepen, and the time available to adapt to shrink sharply. That means IT organizations are responsible for laying the groundwork for today’s need: the creation and implementation of policy. Listed below are key factors you’ll want to consider as you move toward the creation and implementation phase.
(MCT) — If the Loma Prieta earthquake happened today, Buck Helm might have survived his Nimitz Freeway commute to watch his two youngest children grow up. Donna Marsden could have finished fixing up her Victorian home. Delores Stewart could have cheered on her beloved Oakland A's.
Twenty-five years later, the freeways and bridges that collapsed have been rebuilt to stand up to a quake even more powerful than the 6.9 magnitude Loma Prieta.
More than $22 billion in infrastructure upgrades have built a metropolitan area that is far safer and far more resilient than before. It's a testament to the power of long-term planning, born of the ashes of the tragedy — 25 years ago Friday.
More than 440,000 in Missouri to Participate in Nationwide Drill
KANSAS CITY, Mo. — With just one week to go, communities throughout Missouri are preparing for the fourth annual Great Central U.S. ShakeOut Earthquake Drill, scheduled for October 16 at 10:16 a.m.
Great ShakeOut Earthquake Drills are occurring in more than 45 states and territories — nationwide more than 19 million people are expected to participate in the activity. During the drill, participants simultaneously practice the recommended response to earthquake shaking:
- DROP to the ground
- Take COVER by getting under a sturdy desk or table, or cover your head/neck with your arms, and
- HOLD ON until the shaking stops
The ShakeOut is free and open to the public. Participants include individuals, schools, businesses, local and state government agencies and many other groups. See the list of all the participants in Missouri at www.shakeout.org/centralus/participants.php?start=Missouri. The goal of the program is to engage individuals to take steps to become better prepared for earthquakes and other disasters.
“Participating in this drill is a quick, simple thing we should all do—at work, at home, alone or with family or co-workers—to prepare for earthquakes,” said Regional Administrator Beth Freeman, FEMA Region VII. “When we practice ‘drop, cover and hold on’ it makes it more likely we will react appropriately during an earthquake and that can and does save lives.”
States participating in the Great Central U.S. ShakeOut include Alabama, Arkansas, Illinois, Indiana, Kentucky, Missouri, Mississippi, Ohio, Oklahoma, and Tennessee.
Interested citizens, schools, communities, businesses, etc. are encouraged to visit http://www.shakeout.org/centralus/register to register to participate and receive instructions on how to hold their earthquake drill. On social media, information about the drill is being provided on Twitter through www.twitter.com/CentUS_ShakeOut. In addition, www.twitter.com/femaregion7 and others are tweeting earthquake safety tips and drill information using the hashtag #ShakeOut.
The Great Central U.S. ShakeOut is being coordinated by Missouri State Emergency Management Agency, the Central U.S. Earthquake Consortium and its other Member and Associate States, the Federal Emergency Management Agency, the U.S. Geological Survey and dozens of other partners.
Great ShakeOut Earthquake Drills began in California in 2008 and have expanded each year since then.
Visit FEMA Region VII online at www.fema.gov/region7. Follow FEMA online at www.twitter.com/femaregion7, www.twitter.com/fema, www.facebook.com/fema, and www.youtube.com/fema.
Charlie Maclean-Bristol, FBCI, discusses whether the time has come for business continuity managers to make contingency plans for an Ebola pandemic.
Spain is now dealing with the first case of direct infection of Ebola in Western Europe; the first Ebola death has occurred in the United States; and the World Health Organization has warned that ‘Ebola is now entrenched in the capital cities of all three worst-affected countries and is accelerating in almost all settings’. So has the time come for business continuity managers to make contingency plans for a possible future Ebola pandemic? I think the answer is yes: we should be making them.
I am not suggesting that you immediately go out to the supermarket and buy lots of tinned food and water, barricade the house, be prepared to operate on battery power and bottled gas and then lie low.
What I am suggesting is that we should be quietly thinking about how a possible Ebola pandemic might affect our organization; thinking through what an Ebola plan might look like; and monitoring the situation to ensure that we are ready to react if it escalates further.
So what at this stage should business continuity managers be doing?
Enterprises are moving more and more applications to the cloud. The use of cloud computing is growing and, according to Gartner, Inc., by 2016 it will account for the bulk of new IT spend. 2016 will be a defining year for cloud as private cloud begins to give way to hybrid cloud, and nearly half of large enterprises will have hybrid cloud deployments by the end of 2017.
“While the benefits of the cloud may be clear for applications that can tolerate brief periods of downtime, for mission-critical applications, such as SQL Server, Oracle and SAP, companies need a strategy for high availability (HA) and disaster recovery (DR) protection,” said Jerry Melnick, COO of SIOS Technology Corp. “While traditional SAN-based clusters are not possible in these environments, SANless clusters can provide an easy, cost-efficient alternative.”
Jerry says that separating the truths and myths of HA and DR in cloud deployments can dramatically reduce data center costs and risks. He debunks what he says are five myths:
As part of a broad effort to reinvent itself, BMC Software this week added advanced analytics capabilities to its suite of IT operations management software, while at the same time revamping its Remedy service desk software.
In addition, BMC has created a series of Smartflow Solutions that combine various BMC Software products into frameworks that make it possible to more easily manage IT at scale, while providing access to Automation Passport, a compilation of reference guides and best practices for automating IT operations.
Paul Appleby, worldwide executive vice president of sales and marketing for BMC Software, says BMC is moving to modernize its complete suite of distributed IT management offerings to make it easier to manage IT at scale in the age of cloud computing. Organizations that are increasingly relying on IT as a competitive weapon need to be able to operate IT on an industrial scale in order to successfully compete, says Appleby.
Now that the Ebola virus has made its way to the United States and we enter the traditional US flu season, companies are beginning to revisit and/or develop pandemic plans to address this threat. But pandemic planning is a little different from the standard business continuity plan development process. I have often chastised organizations for saying they have business continuity or disaster recovery “plans” when all they really have are plans to create plans; in the case of pandemic planning, however, I think that is actually the right approach to take.
The reason it is so important to have well-developed and relatively detailed business continuity plans, strategies and solutions in place today is that most disasters occur without warning and do not provide the luxury of time to figure out what to do after the incident occurs. Pandemics represent an evolving threat that comes in various shapes and sizes and does afford us the luxury (if that word really applies here) of constructing a response plan based on the particular pandemic that poses the threat.
The “Pandemic Influenza Risk Management / WHO Interim Guidance” published by the World Health Organization in 2013 states:
As hacking attempts become more complex, governments continue to improve their cybersecurity presence through sophisticated firewalls and expanded procedures. But while high-profile data breaches have focused more state and municipal attention on cyberintrusions, a decidedly old-school problem continues to plague efforts to beef up security — communication.
With a variety of security options available, public-sector agencies often are deploying tools and using strategies that utilize different terminology and principles. These differences can lead to frustration when trying to compare cybersecurity programs and address the latest digital threats across agencies or jurisdictions. Without a standardized language, it’s difficult to gauge how strong another organization’s cybersecurity is.
To illustrate the concept, consider an advertisement for a new hotel. The hotel boasts that it has superior service, amenities and security. The only way to know that for sure, however, is for those claims to be verified. In the lodging industry, organizations like AAA visit hotels and rate them — five-star, four-star, etc. Customers then read those ratings and make a decision on where to stay based on the commonly understood vernacular.
By Geary Sikich
Our concept of risk management needs to change. I’m not saying that the current practice is wrong; it just provides us with too much static risk assessment and the creation of many false positives in risk reports. One may ask why I chose to use the example of commodities traders for a new risk mindset. The answer is rather simple: commodities traders view risk as a rapid change agent. That is to say, risk changes in likelihood, velocity, impact and exposure over time.
If one refocuses on the consequences or potential consequences of the ‘near miss’ event, instead of trying to determine the cause (which is often masked by opacity), preventative measures can be undertaken. Controls are often reactive and not sufficiently proactive. Once we have changed the mindset, we can create a more proactive culture.
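The difference between a static snapshot and this trader-style view can be shown with a toy example (all figures invented, and a deliberately simple likelihood-times-impact score rather than any model Sikich prescribes): the same two risks, re-scored each quarter as the estimates move.

```python
# Invented example: a static risk register scores each risk once, while a
# dynamic, trader-style view re-scores as likelihood and impact estimates
# change over time. The score here is a simple expected-loss style product.

def score(likelihood: float, impact: float) -> float:
    return likelihood * impact

# Quarter-by-quarter (likelihood, impact) estimates for two hypothetical risks
supplier_failure = [(0.10, 9.0), (0.30, 9.0), (0.55, 9.0)]  # rising likelihood
fx_exposure      = [(0.40, 5.0), (0.35, 4.0), (0.30, 3.0)]  # hedged down over time

for quarter, (sup, fx) in enumerate(zip(supplier_failure, fx_exposure), 1):
    top = "supplier failure" if score(*sup) > score(*fx) else "FX exposure"
    print(f"Q{quarter}: top risk = {top}")
# Q1 favors FX exposure; Q2 and Q3 flip to supplier failure.
```

A register frozen at Q1 would keep FX exposure at the top of the report all year, which is exactly the kind of static assessment, and false positive, the article warns about.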
Businesses which respond to supply chain scandals with additional rules and regulations leave workers even more vulnerable, according to a new report published by the Institute of Risk Management (IRM).
‘Extended Enterprise: managing risk in complex 21st century organisations’ argues that the modern commercial obsession with systems and processes obscures the real problem: failure to understand and predict human behaviour and build trust. It urges companies to prioritise behavioural risk over ‘tick box compliance’ to tackle the ethical uncertainties in today’s complex delivery networks.
IRM states that the report marks the transition from risk management of a single organization to a coherent programme which meets the global and interdependent challenges of today’s joint endeavours. The report’s project group, made up of IRM practitioners together with academic experts, provides developed models, tools and techniques to help risk practitioners understand and manage risk across extended enterprises.
As well as supporting organizational performance, the report claims that a better understanding of risk across the extended enterprise is also vital in tackling wider problems including slavery, abuse, environmental damage and dangerous working conditions. The report argues that wilful blindness by organizations to these issues within their broader networks is unacceptable. Firms must ask themselves whether any claims that they make about their values hold true across their extended enterprise.
A recent announcement explained that cyber-security ‘big names’ McAfee and Symantec have agreed to share their threat data. It’s a development that should benefit customers of both vendors. Historically, IT vendors have swung back and forth between the multi-vendor approach (“we’ll handle the other vendor’s stuff for you”) and so-called coopetition, where two or more providers join forces, for instance by agreeing to operate to a common standard. The McAfee-Symantec pact ranges from sharing malware signatures to information on real-time attacks. Who else might follow this apparently enlightened example?
As the Internet of Things comes online, it will almost certainly require changes to how IT manages data, according to Gartner analyst Joe Skorupa.
"The enormous number of devices, coupled with the sheer volume, velocity and structure of IoT data, creates challenges, particularly in the areas of security, data, storage management, servers and the data center network, as real-time business processes are at stake," Skorupa, vice president and distinguished analyst at Gartner, states. "Data center managers will need to deploy more forward-looking capacity management in these areas to be able to proactively meet the business priorities associated with IoT."
The highly distributed nature of the IoT will make it impractical to move all of the data to a central location for processing, Skorupa theorizes. Instead, data will be aggregated in “distributed mini data centers where initial processing can occur.” Only the business-relevant data would be sent to a central location for further processing, he added.
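Skorupa’s point about aggregating at the edge can be illustrated with a minimal sketch. The field names and alert threshold below are hypothetical, not from Gartner: a local aggregator summarizes a batch of raw sensor readings and forwards only the summary, plus any alert-worthy outliers, to the central data center.

```python
from statistics import mean

# Hypothetical threshold and field names, for illustration only.
ALERT_TEMP_C = 85.0

def aggregate_readings(readings):
    """Summarize a batch of raw sensor readings at the edge.

    Only the summary (and any alert-worthy outliers) is forwarded
    to the central data center; the raw stream stays local.
    """
    temps = [r["temp_c"] for r in readings]
    summary = {
        "count": len(temps),
        "min": min(temps),
        "max": max(temps),
        "mean": round(mean(temps), 2),
    }
    alerts = [r for r in readings if r["temp_c"] >= ALERT_TEMP_C]
    return summary, alerts

batch = [{"sensor": i, "temp_c": t} for i, t in enumerate([71.2, 73.5, 90.1, 72.8])]
summary, alerts = aggregate_readings(batch)
print(summary)  # one small record goes upstream instead of the raw batch
print(alerts)   # only the business-relevant outlier is escalated
```

In practice the filtering logic would be driven by business rules, but the shape is the same: process locally, ship only what matters.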
Erosion Threat Assessment Reduction Team (ETART) is a multijurisdictional, interdisciplinary team formed jointly by FEMA and the State of Washington in response to the 2014 Central Washington wildfires to address the threat of flooding, mudslides, debris flows and other erosion over the approximately 415 square miles of burned lands. (For a landownership breakdown, see the following map and chart.)
In the summer of 2014, the Carlton Complex Fire burned more than 250,000 acres of land in Washington, the largest wildfire in state history. The fire burned private, federal, state and tribal lands, consumed 300 homes and destroyed critical infrastructure in its path. Then intense rainstorms over the scarred landscape caused more damage from flooding, mudslides and debris flow.
Fire suppression costs topped $68 million. But post-fire recovery costs have yet to be tallied.
Given the size and severity of the fire, President Obama issued a major disaster declaration on Aug. 11, which authorized the Federal Emergency Management Agency (FEMA) to coordinate federal disaster relief and to help state, tribal and local agencies recover from the disaster.
Once firefighters contained the Carlton fire on Aug. 25, the U.S. Forest Service (USFS) deployed its Burn Area Emergency Response (BAER) team to measure soil quality, assess watershed changes, identify downstream risks and develop recommendations to treat burned federal lands.
FEMA officials and the BAER team acted fast. They knew more floods may follow without vegetation to soak up rainwater. More silt and debris in the runoff can plug culverts and raise water levels, which may further threaten downstream communities and properties.
To reduce the vulnerability of those downstream communities, FEMA created ETART. Modeled after BAER, ETART would measure soil quality, assess watershed changes, identify downstream risks and develop recommendations to treat burned state, tribal and private lands.
FEMA and the State of Washington recruited biologists, engineers, hydrologists, mapping experts, range specialists, soil scientists and support staff from more than 17 entities.
SPIRIT OF COOPERATION
ETART participants include: Cascadia Conservation District, the Confederated Tribes of the Colville Reservation, FEMA, Methow Conservancy, National Weather Service (NWS), Okanogan Conservation District, Skagit Conservation District, Spokane Conservation District, U.S. Army Corps of Engineers, U.S. Bureau of Land Management (BLM), U.S. Department of Agriculture, U.S. Department of the Interior, USFS, Washington State Department of Natural Resources, Washington State Department of Fish and Wildlife, Whatcom Conservation District and Yakama Nation Fisheries.
Team members saw the benefits of working together across jurisdictional boundaries and overlapping authorities right away. To start, they stitched their maps together and overlaid their findings to gain consistency and a better perspective. Field assessments used extensive soil sampling. Computer modeling showed the probability of debris flow and other hazards.
Standard fixes in their erosion control toolbox include seeding and other ground treatments, debris racks, ditch protection, temporary berms, low-water crossings and sediment retention basins. Suggested treatments were evaluated based on their practical and technical feasibility.
Regional conservation districts provided a vital and trusted link to private landowners. They:
• held public meetings and acted as the hub of communications
• posted helpful links on their websites
• collected information on damage to crops, wells, fences, livestock and irrigation systems
• secured necessary permits that grant state and federal workers access to private property to assess conditions.
Local residents shared up-to-the-minute information on road conditions and knew which seed mixtures worked best for their area. Residents proved key to the success of ETART.
Note: Teams found a few positive consequences of the wildfire. For instance, debris flow delivered more wood and gravel downstream, which may create a better fish habitat once the debris and sediment settle. The resultant bedload may enhance foraging, spawning and nesting for endangered species, such as Steelhead, Bull Trout and Spring Chinook Salmon.
STRENGTH OF COLLECTIVE ACTION
Final reports from BAER and ETART have helped several state agencies formulate and prioritize their projects, and leverage their budget requests for more erosion control funds.
Landowners and managers might share equipment, gain economies of scale and develop more cost-effective solutions. In the end, collaboration and collective action may avert future flooding.
CULTURE OF RESILIENCE
While public health and safety remain the top priority, other values at risk include property, natural resources, fish and wildlife habitats, as well as cultural and heritage sites.
Estimated costs for the emergency restoration and recovery recommendations on federal lands run $1.5 million. For short-term stabilization, USFS initiated funding requests for seeding and mulching urgent areas before the first snowfall. Other suggested treatments include bigger culverts, more warning signs and the improvement of road drainage systems.
For state and private lands, emergency restoration and recovery recommendations may cost in excess of $2.8 million. Erosion controls include seeding, invasive species removal and the construction of berms and barriers. In its final report, ETART also recommended better early warning systems, more warning signs on county roads and electronic message signs to aid residents evacuating via highways.
Landowners, managers and agencies continue to search for funding to pay for implementation. For instance, BLM regulations may allow it to seed its lands, as well as adjoining properties, after a wildfire. Select state agencies may provide seedlings, technical assistance on tree salvaging, or partial reimbursement for pruning, brush removal and weed control.
Knowing a short period of moderate rainfall on burned areas can lead to flash floods, the NWS placed seven real-time portable gauges in September to monitor rainfall in and around the area, and plans to place eight more rain gauges in the coming weeks. The NWS will issue advisory Outlooks, Watches and Warnings, which will be disseminated to the public and emergency management personnel through the NWS Advanced Weather Information Processing System.
Certain projects may qualify for FEMA Public Assistance funds. Under this disaster declaration, FEMA will reimburse eligible tribes, state agencies, local governments and certain private nonprofits in Kittitas and Okanogan counties for 75 percent of the cost of eligible emergency protective measures.
Successful ETARTs replicated in the future may formalize interagency memorandums of understanding, develop more comprehensive community wildfire protection plans and promote even greater coordination of restoration and recovery activities following major wildfires.
I have participated in a number of conversations where people argue what the basis for business continuity plans should be. Some people say you should have plans designed for specific threats inherent in your environment and others say that “what” happens is not important; plans should be based on the impacts of what happened and not the event itself. I say, they are both right, in a way.
Business continuity planning, I think, has evolved over time and has expanded in scope of what it tries to achieve. I’m not sure why we have gotten away from the term “contingency plans”, but I think Business Continuity Planning today includes both emergency response components and contingency planning components.
Considering these two components of the overall program, I think the Emergency Response part, the part that addresses how an organization responds to an incident, should in fact have scenario-specific components for the known risks and threats in the area where you do business. If you have facilities in hurricane regions, you absolutely should have Hurricane Preparedness Plans. The same goes for facilities on fault lines, in flood plains, near active volcanoes or near nuclear power plants. When specific threats arise, like pandemics, for example, your organization should develop a scenario-specific plan with prevention and containment techniques for that exact threat.
(MCT) A few years ago a group of researchers used computer modeling to put California through a nightmare scenario: Seven decades of unrelenting mega-drought similar to those that dried out the state in past millennia.
"The results were surprising," said Jay Lund, one of the academics who conducted the study.
The California economy would not collapse. The state would not shrivel into a giant, abandoned dust bowl. Agriculture would shrink but by no means disappear.
Traumatic changes would occur as developed parts of the state shed an unsustainable gloss of green and dropped what many experts consider the profligate water ways of the 20th century. But overall, "California has a remarkable ability to weather extreme and prolonged droughts from an economic perspective," said Lund, director of the Center for Watershed Sciences at the University of California, Davis.
(MCT) — Gov. Dannel P. Malloy has declared Ebola a public health emergency and authorized officials to quarantine anyone who may have been exposed to or infected with the virus.
Though Ebola has not been reported anywhere near Connecticut, the order is a precautionary measure and just one of several actions being taken to guard against the disease in the state.
"Right now, we have no reason to think that anyone in the state is infected or at risk of infection," Malloy said in a news release. "But it is essential to be prepared, and we need to have the authorities in place that will allow us to move quickly to protect public health if and when that becomes necessary."
With more than 7,000 people sickened and more than 3,000 killed by the virus in West Africa, fears spiked last week with the announcement that Ebola was found in a man who had traveled from Liberia to Dallas.
By 2017, half of employers will require employees to provide their own mobile devices for work use, Gartner reports. There are many benefits to BYOD policies, from greater productivity on devices users are more comfortable with to lower corporate costs when businesses do not have to purchase mobile equipment or service plans. But securing these devices poses tremendous risk that may not be worth the reward. According to data security firm Bitdefender, 33% of U.S. employees who use their own devices for work do not meet minimum security standards for protecting company data. In fact, 40% do not even activate the most basic layer of protection: lock-screen features. Further, while the majority of workers could access their employer’s secure network connection, only half do so.
Bitdefender reports that there are 5 core security functionalities a strong BYOD policy should check:
To respond effectively during a disaster, it’s first vital to understand the demographics of residents and visitors. Most offices of emergency management maintain detailed inventories of critical infrastructure, their vulnerabilities, states of repair and hotspots around their jurisdictions frequently impacted such as roads that consistently flood or ice over. However, the same amount of critical information is rarely available about the community’s most valuable asset — its people.
Just as other significant storms have in the past, Hurricane Sandy served as a strong reminder of the importance of having access to critical information about the individuals who reside in or commute to an area. Nearly half the victims of the storm were age 65 or older, similar to that of Hurricane Katrina where 71 percent of those who died were 60 or older. Recent lawsuits brought against the cities of New York and Los Angeles (as well as Los Angeles County) have reinforced the importance of anticipating and preparing for the needs of some of the population who might require additional or specialized assistance during a disaster. It’s hard to say whether knowledge of the locations of older residents or those with other needs, particularly along coastal areas, would have reduced the death toll during Sandy, but having access to more information is always better when managing response to a disaster.
While low interest rates are likely to continue to present a challenge well into 2015, a stronger economy presents the property/casualty insurance industry’s best opportunity for growth, according to I.I.I. president Dr. Robert Hartwig.
Dr. Hartwig shared his thoughts on the industry’s growth outlook in his Commentary on 2014 First Half Results.
There are two principal drivers of premium growth in the P/C insurance industry he noted: exposure growth and rate activity.
Exposure growth—basically an increase in the number and/or value of insurable interests (such as property and liability risks)—is being fueled primarily by economic growth and development.
Although the nation’s real (inflation-adjusted) GDP in the first quarter of 2014 actually declined at an annual rate of -2.1 percent, economic growth snapped back in the second quarter, as real GDP surged by 4.6 percent.
There are very few more pressing issues in management today than cyber security. Notice that I didn’t say IT management; I said management. When the hacking of a major US retailer (Target) leads to the loss of billions of dollars in stock value and sales and the removal of not only the CSO, but the CIO and ultimately the CEO as well, stockholders, investors, and customers take notice.
Organizations worldwide depend increasingly on information and communications technology to operate and manage 24/7/365, and wireless devices, BYOD, social media, and the like all combine to make the jobs of those responsible for cyber security exponentially more difficult. Like the Dutch boy and the dike, security people worldwide have too many holes to plug and too few arms and fingers. Recently, I was watching a 1960s spy movie in which the agent had to find and access physical documents on site, take pictures of them, reduce the photos to microdots, paste the dots in place of periods in another document, and then smuggle those documents past the authorities. Today, an equivalent theft can be done remotely, often from another, hostile country, at light speed. And Edward Snowden’s 2013 disclosures about the doings of the US National Security Agency (NSA) amply demonstrate what a skilled technical organization with nearly unlimited resources can accomplish from half a world away.
The National Fire Protection Association (NFPA) reports that property losses at U.S. factories total nearly $1 billion annually. Between 2006 and 2010, about 42,800 industrial or manufacturing property fires in the utility, defense, agriculture and mining industries were reported to U.S. fire departments each year, causing 22 deaths and 300 injuries annually, according to the NFPA.
“Fire is the No. 1 preventable disaster at manufacturing facilities,” Cindy Slubowski, vice president and head of manufacturing at Zurich, said in a statement. “Most fires are preventable, and the risks can be reduced dramatically.”
In recognition of National Fire Prevention Week (Oct. 5-11), Zurich recommends that factory owners implement a pre-fire plan, starting with these steps:
One of the intuitive responses to Bring Your Own Device (BYOD) concerns is that organizations should have prudent, well-publicized policies in place that clarify the essentials for users, including how risks are mitigated and who pays for services.
Of course, this makes sense, but it may be difficult to do. Respecting the rights of employees and organizations is a tough balancing act. Perhaps this is why BYOD policies are not being followed as much as they should – or as much as they were in the past. Teksystems recently released a survey that suggests a lot of the people who should be paying attention to policies aren’t, and that the number of workers bypassing policies is growing.
Even more troubling, the survey found that 64 percent of IT professionals said their organization has no official BYOD policy, up from 43 percent in 2013.
The steady stream of high-profile data breach incidents we’ve seen over the last few years makes one thing clear: cyber risk is a serious concern for virtually any enterprise. Disruption of day-to-day business operations and damage caused by the exposure of critical intellectual property or consumer information are just a couple of examples of potential fallout from an information security incident, not to mention a tide of expensive and embarrassing litigation and the possibility of damaging regulatory inquiries or compliance actions.
Federal agencies extend their reach into cybersecurity
Not convinced? One need only look at the breadth of publicly disclosed document requests from the Federal Trade Commission (FTC) in response to recent data breaches to get a sense of the entirely new level of scrutiny regulators are focusing on information security risk management practices following a serious breach incident. Other federal agencies like the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) are also extending their reach by issuing new guidance regarding cybersecurity. Even congressional committees are getting into the act.
How security policy orchestration software can help reduce downtime in hybrid environments.
By REUVEN HARRISON
In our global, 24/7, online world, the individuals and organizations we deal with increasingly expect – and often rely on – our systems and applications being available at all times. When disaster strikes and downtime hits (whether through error, misfortune or malice), it can damage both an organization’s reputation and its bottom line. The companies entrusted to store and handle valuable information securely, or to provide access to applications and services, must do all they can to minimise the risk of breaches and downtime.
While stories about hackers and viruses breaking into (or bringing down) systems tend to prompt the biggest headlines, those of us in IT know that more downtime is due to network configuration errors than to security breaches. Because today’s networks are so complicated, and the pace and volume of changes is so great, it’s not uncommon for rushed-off-their-feet IT staff to make occasional configuration errors – and that could mean downtime for an application, service or even an entire business.
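One common class of configuration error is a “shadowed” firewall rule: a rule that can never fire because an earlier, broader rule always matches first. Policy orchestration tools automate checks like this; a minimal sketch of the idea, assuming simple first-match rules keyed on source network and port (the rule format here is invented for illustration):

```python
from ipaddress import ip_network

# Hypothetical rule format: (action, source_cidr, destination_port).
rules = [
    ("allow", "10.0.0.0/8", 443),
    ("deny",  "10.1.2.0/24", 443),  # shadowed: the broader allow above matches first
    ("deny",  "0.0.0.0/0", 23),
]

def find_shadowed(rules):
    """Return indexes of rules that can never match because an earlier,
    broader rule on the same port always fires first (first-match semantics)."""
    shadowed = []
    for i, (_, src, port) in enumerate(rules):
        net = ip_network(src)
        for _, earlier_src, earlier_port in rules[:i]:
            if earlier_port == port and net.subnet_of(ip_network(earlier_src)):
                shadowed.append(i)
                break
    return shadowed

print(find_shadowed(rules))  # [1]
```

Real rule bases involve protocols, directions, zones and address objects, but automated detection of dead or conflicting rules is exactly the kind of check that catches errors before they cause downtime.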
Entries are now being accepted for the BCI North America Awards 2015, which will be presented at the DRJ Spring World conference in Orlando.
This year's Award categories are:
- Business Continuity Consultant of the Year
- Business Continuity Manager of the Year
- Public Sector Business Continuity Manager of the Year
- Most Effective Recovery of the Year
- BCM Newcomer of the Year
- Business Continuity Team of the Year
- Business Continuity Provider of the Year (BCM Service)
- Business Continuity Provider of the Year (BCM Product)
- Business Continuity Innovation of the Year (Product/Service)
- Industry Personality of the Year.
The entry deadline is January 23rd 2015.
A new survey-based study conducted by IDG Research Services on behalf of Sungard Availability Services and EMC Corporation has looked at the cloud recovery market, amongst other areas.
The survey of 132 organizations found that faster recovery and reduced disaster recovery costs were reported as the top benefits of cloud recovery services (58 percent) followed by reduced downtime (44 percent) and improved reliability (38 percent).
Nearly half of respondents either have already invested in cloud recovery services or are planning to invest in the next one to two years; nearly an additional third have cloud recovery services on their radar but have no current investment plans.
Significantly, over three-fourths (78 percent) of those already investing in cloud recovery services acknowledge faster recovery as a benefit, compared with just 54 percent of organizations planning on investing and 57 percent of those with no plans to invest.
With regard to challenges specifically associated with cloud recovery services, those who are planning to invest (80 percent) and those who have no plans to invest (57 percent) are significantly more likely to have security concerns than those who are already investing (48 percent) in cloud recovery.
Organizations also wonder whether they will realize a return on their cloud spending, with 38 percent believing it will prove a challenge to realize an ROI on cloud recovery services.
The full results of the survey can be found after registration here.
When should you bring in new technology? When it does a better job of meeting your needs, of course. It’s the same for business continuity management. Migrating from in-house physical servers to cloud computing services should be properly justified by, for instance, lower costs, higher reliability and better performance, without sacrificing data confidentiality, control or conformance. While cloud computing makes sense for many organisations, there are cases where it doesn’t (cloud computing isn’t always cheaper, for example). Looking at the following business criteria and then analysing what new generation technology has to offer may be the smarter way to do things.
Suppose your business suffers a temporary disruption. (The cause of the disruption doesn’t matter; neither, necessarily, does the length of the disruption.) A disruption that impacts customers, prospects or finances (and almost every disruption – even for a few minutes – will), may trigger compliance obligations. You may need to file an insurance claim. Or you may need to provide government or industry regulators with the details of how your organization dealt with the disruption.
Do your Business Continuity and Incident Management plans lay out the needs and requirements for documenting actions taken during disaster or other disruption?
Any business disruption will generate a flurry of activity. Will you be able to recall all of those actions once order has been restored? Or will you have to spend countless hours reconstructing what happened, who did what and how long each action took? It is unlikely you’ll be able to capture every action by every participant after the fact. And the longer the disruption lasts, the longer that list of actions will be.
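The simplest answer is to log each action as it is taken rather than reconstruct the timeline from memory afterward. A minimal sketch, with an invented file name and field set:

```python
import csv
from datetime import datetime, timezone

# Minimal incident action log; file name and fields are illustrative.
LOG_PATH = "incident_actions.csv"
FIELDS = ["timestamp_utc", "who", "action", "notes"]

def log_action(who, action, notes=""):
    """Append one timestamped entry as the action happens, so the
    record exists before anyone has to rely on recollection."""
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # file is empty: write the header first
            writer.writeheader()
        writer.writerow({
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
            "who": who,
            "action": action,
            "notes": notes,
        })

log_action("J. Doe", "Failed over email to DR site", "completed in 12 min")
```

A shared spreadsheet or incident management tool does the same job; what matters is that capture is contemporaneous and timestamped, since that is what insurers and regulators will ask for.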
Two surveys have been released recently that show the way consumers think about enterprise data breaches.
The first survey, conducted by HyTrust, isn’t surprising. It found that the majority of consumers will take their business elsewhere after discovering their information was compromised in a breach. And consumers aren’t patient on this matter. For approximately 45 percent of survey respondents, data security is a one strike and you’re out deal – they aren’t going to wait around for your company to get its act together and fix the security holes.
Also, that same 45 percent wants to see companies held criminally negligent when a data breach occurs. Eric Chiu, president and co-founder of HyTrust, told eWeek that this may have been the most surprising statistic to come out of the survey, adding:
One of the primary benefits of the cloud is the ability to distribute data architectures across wide geographic areas. Not only does this protect against failure and loss of service, but it allows the enterprise to locate and provision the lowest-cost resources for any given data load.
But problems arise in the ability, or lack thereof, of managing and monitoring these disparate resources, particularly as Big Data and other emerging trends require all enterprise data capabilities to be marshalled into a cohesive whole.
When it comes to storage, many organizations are attempting to do this through global file management, which is essentially putting SAN and NAS capabilities on steroids. The idea, as Nasuni and other promoters point out, is to extend resource connectivity across broadly distributed architectures while maintaining centralized control. This is not as easy as it sounds, however. Traditional snapshot and replication techniques must now work across multiple platforms and be free to make multiple versions of data that would overwhelm standard storage architectures. They must also be flexible enough to accommodate numerous performance levels, but not so unwieldy as to drive up costs by endlessly copying data sets for each new cloud deployment.
Data can be a fundamental tool in disaster preparedness, but the insights aren’t always heeded. This was the observation of three emergency management experts from academia, government and the private sector in an exchange last week on natural disaster data.
The trio, who spoke about data use for city resilience at the Atlantic CityLab Summit in Los Angeles, Sept. 29, said that an analysis of data shows an overwhelming need for infrastructure improvements, but states and cities typically take short-term savings over long-term protections against catastrophe.
Lucy Jones, a seismologist at the U.S. Geological Survey (USGS), is collaborating with Los Angeles to draft a seismic-resilience plan. She said the city is a prime example of what happens when there’s an abundance of data and absence of investment in disaster preparation. About 85 percent of the city’s water supply is delivered by aqueducts across the southern San Andreas Fault — a fault line the USGS estimates will generate a major earthquake sometime in the next decade or so, according to its data. The danger centers on indications that the city’s aqueducts will break, leaving only a six-month supply of water reserves for residents, she said.
“What if there was a case of Ebola in my community?” With the growing outbreak in West Africa, public health preparedness planners across the country are mulling this question as news broke that the CDC confirmed a case of Ebola in Texas and concerns grow over the threat posed by Ebola to global health security. This question is inevitably followed up with, “Are we ready?”
These are the types of questions that keep public health preparedness planners up at night. The reason these questions are so pressing right now is not only because of the alarming symptoms and mortality rate of Ebola, but also because of the continuous funding cuts that local health departments have faced since 2007. The United States is not West Africa, and Ebola is unlikely to have sustained transmission here because of better infection control in healthcare facilities, cultural differences, and protocols put in place by the Centers for Disease Control and Prevention (CDC) to stop the spread of the disease. But while local health departments would do everything in their power to protect lives in the face of a public health emergency like Ebola, there are other consequences to a community tasked with responding to a public health emergency that are complicated by ongoing funding cuts. For example, even the containment, treatment, and contact investigation of a small number of Ebola patients would have the potential to quickly overwhelm local health department budgets, as per capita spending on public health preparedness has decreased by nearly 50 percent in just the past year. Administrative burdens often delay state and federal emergency response funding that supplements local budgets. Additionally, lack of funding has decreased the number of preparedness programs.
Business Continuity and IT Disaster Recovery planning tends to focus first on system and application recovery (Recovery Time Objective – RTO) and second on data recovery (Recovery Point Objective – RPO). That makes sense when you consider the order in which things are usually recovered, but does it really? Isn’t the data, or the information, the lifeblood of the company? Isn’t that why it is called Information Technology and not just technology?
Customer information, financial data, product specifications, research data, procedures, accounts payable, forms (the list could go on and on) are what the company runs on.
I read two articles recently – Michael O’Dwyer’s “How snapshot recovery ensures business continuity” and Marc Staimer’s “Why Business Continuity Processes Fail and How To Recover Them.” Both share a lot of good information about improving data backup methods and timeliness. They explain how important the RPO is to disaster recovery planning and talk about backup and restore procedures, media, storage and locations. I would like to add some additional considerations for determining the RPO and developing recovery strategies that will meet the business need.
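One additional consideration: an RPO is only meaningful if the backup schedule can actually deliver it. A back-of-the-envelope sketch of that check follows; all of the numbers are illustrative assumptions, not recommendations.

```python
# Back-of-the-envelope RPO check: compare the worst-case data-loss
# window implied by a backup schedule against the business's stated
# tolerance. All figures below are illustrative assumptions.

def worst_case_data_loss_hours(backup_interval_hours, replication_lag_hours=0.0):
    """Worst case: the disruption hits just before the next backup
    completes, so you lose a full interval plus any replication lag."""
    return backup_interval_hours + replication_lag_hours

rpo_target_hours = 4  # what the business says it can tolerate losing
loss = worst_case_data_loss_hours(backup_interval_hours=24, replication_lag_hours=1)

print(loss, "hours of data at risk")
print("meets RPO target" if loss <= rpo_target_hours else "backup schedule violates RPO")
```

Here a nightly backup leaves up to 25 hours of data at risk against a 4-hour tolerance, which is exactly the kind of gap a recovery strategy review should surface before a disruption does.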
New WatchGuard Firebox M440 UTM/NGFW makes it easy to apply the right policies to the correct network segment
WatchGuard Dimension™ provides industry first, real-time view into the performance of security policies across segmented networks
WatchGuard® Technologies has launched the WatchGuard Firebox® M440 UTM/NGFW appliance designed to further simplify network security. The WatchGuard Firebox® M440 features multiple independent ports, removing the need for complex configurations such as VLANs and simplifying the critical process of applying traffic-appropriate policies across multiple network segments – a process beyond the technical reach of many organisations. WatchGuard’s visibility solution, Dimension™, also provides the industry’s only real-time, single-pane-of-glass view to show the effect each policy is having on that segment’s traffic.
“Network security solutions are only good if they’re not too difficult for IT pros to use,” said Dave R. Taylor, vice president of corporate strategy and product management for WatchGuard. “The Firebox M440 makes it drop-dead easy to create segments, map the traffic, create custom policies based on what traffic is in each segment, and instantly see how it affects traffic. Applying the appropriate security policies to the correct traffic flows is what truly defines the success of your segmentation strategy and the Firebox M440 takes the configuration complexity out of the process.”
John Stengel, President of J Stengel Consulting, a network security, management and training firm, stresses that effective segmentation has never been more critical, stating, “The common misconception that strategies such as role-based authentication or basic VLAN switching and routing constitute effective network segmentation delivers a false sense of security. With the increased expectation for anytime employee access and advances around embedded Internet devices (IoT) and recent breaches like Target tied to a lack of proper segmentation, it has never been a better time for organisations to re-evaluate how they segment the network and ensure they have the right policies applied.”
The WatchGuard Firebox M440 delivers 25 1Gb Ethernet ports, eight of which deliver Power over Ethernet (PoE), plus two 10Gb SFP+ (fiber) ports. For more information click here: http://www.watchguard.com/wgrd-products/utm/firebox-m440/overview.
About WatchGuard Technologies, Inc.
WatchGuard® Technologies, Inc. is a global leader of integrated, multi-function business security solutions that intelligently combine industry standard hardware, best-of-breed security features, and policy-based management tools. WatchGuard provides easy-to-use, but enterprise-powerful protection to hundreds of thousands of businesses worldwide. WatchGuard products are backed by WatchGuard LiveSecurity® Service, an innovative support program. WatchGuard is headquartered in Seattle, Wash. with offices throughout North America, Europe, Asia Pacific, and Latin America. To learn more, visit WatchGuard.com.
WatchGuard is a registered trademark of WatchGuard Technologies, Inc. All other marks are property of their respective owners.
On Saturday, September 27, 2014, Mount Ontake – 200km west of Tokyo – suddenly erupted, spewing ash and rock over a wide area and killing nearly 50 people (at last count). What’s strange is that this volcanic eruption occurred with no warning – at least that’s what the specialists are saying at this stage. I’m not so sure that’s true.
It’s always been said that Japan has one of the best early warning / monitoring systems in the world due to its location on the Pacific Ring of Fire. If the best monitoring system in the world didn’t catch this, then is the best system even worth it? I mean, these systems are developed to help save lives, provide early warnings to evacuate people and ensure life safety. Yet, that didn’t happen, so are the monitoring systems we have in place any good? Are they providing any help at all?
What do we need to do to get to a point where we can predict – with sufficient notice – that something is (or could be) imminent? A few seconds won’t cut it and isn’t enough to allow for any communications or sufficient response – unless you’re a race car driver. Should we instead educate people to understand the risks of where they are – like climbing the side of a volcano, which is where the vast majority of those who died on Mount Ontake were – or do we put trust in systems that can’t predict or measure potential dangers?
So I’m listening to the radio in the car on the way home from work and, not surprisingly, there are comments about the current Ebola crisis in West Africa – it is a major headline, after all, and a serious matter. In fact, as I was listening, this particular broadcast was talking about the fact that Ebola had made its way to Dallas, Texas from Liberia via a male visitor.
Now, what surprised me was that commentators and experts were saying that people should be panicked or scared of Ebola (in the Western world anyway), and I agree with them. But then they went on to kind of criticize people for being scared: taking their kids out of school, buying masks and disinfectants. They were saying that people were over-reacting and there was no need to do this sort of thing. Yet, when flu season is making the rounds – in schools, office buildings, subway systems and shopping malls – people are blamed for not taking the proper precautions to ensure they don’t catch the flu, get sick and get others sick (and for not getting a flu shot, of course). So what’s the difference?
There isn’t a pill people can take to proactively prevent catching Ebola, even though you can’t catch it just by walking past someone on the street. Taking themselves out of possible harm’s way is what people will do to protect themselves; I don’t think that’s over-reacting. Yes, buying hazmat suits might be a bit overboard, but taking one’s loved ones out of school and avoiding areas where illnesses can spread – malls, subways etc. – is natural for people. So which is it? Do we protect ourselves proactively or not? Do we ensure our safety and that of our loved ones, or do we continue as if nothing is happening?
A Washington-area hospital announced Friday that it had admitted a patient with symptoms and a travel history associated with Ebola. The case has not been confirmed, but the number of similar incidents around the country and a confirmed Ebola patient in Dallas have spurred concerns about whether U.S. hospitals are as prepared to deal with the virus as federal officials insist they are.
Since July, hospitals around the country have reported more than 100 cases involving Ebola-like symptoms to the federal Centers for Disease Control and Prevention, officials there said. Only one patient so far — Thomas Duncan in Dallas — has been diagnosed with Ebola.
But in addition to lapses at the Dallas hospital where Duncan is being treated, officials say they are fielding inquiries from hospitals and health workers that make it clear that serious questions remain about how to properly and safely care for potential Ebola patients.
A CDC official said the agency realized that many hospitals remain confused and unsure about how they are supposed to react when a suspected patient shows up. The agency sent additional guidance to health-care facilities around the country this week, just as it has numerous times in recent months, on everything from training personnel to spot the symptoms of Ebola to using protective gear.
California Gov. Jerry Brown signed legislation on Tuesday, Sept. 30, to kick-start adoption of next-generation emergency communications technology in the state. But while the law requires state leaders to develop a comprehensive rollout plan, questions remain on how to adequately fund the upgrades.
Senate Bill 1211 orders the Governor’s Office of Emergency Services (OES) to establish a transparent process for calculating how much next-gen 911 technology will cost to implement on an annual basis, including how it sets the statewide 911 customer fee on phone bills. But according to one expert, questions have surfaced across the U.S. about whether states are using their 911 funds appropriately.
Kim Robert Scovill, executive director of the NG9-1-1 Institute, a nonprofit organization that promotes the deployment of next-generation 911 services, explained that some states move 911 money over to their general fund for other purposes. And while that doesn’t indicate a state is ignoring public safety, he said increased fiscal transparency was a good move to ensure the money is being used properly.
No matter how complicated and unwieldy you think your data environment is, chances are you have nothing on the federal government.
The U.S. government is the single largest employer in the world, with more than 2 million civilian employees plus another 3.2 million military personnel around the world. That means it has had to build and maintain digital infrastructure of gargantuan size in order to keep all those people connected. Estimated at close to 9,000 data centers, the government IT footprint is clearly in need of a slimdown, not just to cut costs but to keep government processes in working order as mobile and cloud infrastructure take hold in the private sector.
To that end, government agencies have been working on a consolidation project for the past few years that, according to the Government Accountability Office (GAO), has shaved more than $1 billion off the U.S. government’s IT budget so far. The project has already led to the shuttering or planned closing of more than 1,100 data centers, while at the same time encouraging leading departments like the DoD to embrace the cloud and other advanced architectures to ensure that remaining resources can be distributed quickly and evenly to both critical and non-critical functions.
One of the challenges of developing a community that’s resilient to disaster is getting citizens to sign up for alert notifications. For example, a year after Itawamba County, Miss., deployed an emergency notification system, 25 percent of households had signed up to receive it. That’s considered good. Really good.
In fact, getting residents to sign up for any number of emergency services is difficult for a multitude of reasons. Some people are averse because of the privacy and security implications and are afraid to share personal information. And some of it is that people just tune out when it comes to the gruesome nature of preparing for a disaster.
But there are strategies to maximize the buy-in from residents. Ana-Marie Jones, executive director of the nonprofit agency Collaborating Agencies Responding to Disasters (CARD), shared her favorite ways for getting buy-in from the public:
(MCT) — USAA on Thursday became the first insurance company to seek federal permission to test ways drones could expedite claim processing in disaster areas.
The insurance and financial services company is seeking an exemption from the Federal Aviation Administration's Modernization and Reform Act of 2012 that would allow it to test unmanned aircraft systems on its San Antonio campus as well as on private, rural property nearby.
The FAA has largely limited commercial drone-use research to six test sites named in December, including a collection of Texas ranges managed by Texas A&M University-Corpus Christi.
Kathleen Swain, a USAA underwriter and FAA-rated commercial pilot and flight instructor, said USAA has already worked with A&M at the testing zone in College Station and was now ready to go further.
A second annual survey from Experian and the Ponemon Institute appears to show that more companies are prepared for a data breach, and that cyber insurance policies are becoming a more important part of those preparedness plans.
The study, which surveyed 567 executives in the United States, found that 73 percent of companies now have data breach response plans in place, up from 61 percent in 2013. Similarly, 72 percent of companies now have a data breach response team, up from 67 percent last year.
In the last year the purchase of cyber insurance by those companies has more than doubled, with 26 percent now saying they have a data breach or cyber policy, up from just 10 percent in 2013.
One of the monumental shifts in telecommunications and enterprise networking during the past century was the ascendancy of the Internet protocol. The reason that it is so powerful is simple: Everything is divisible to the same basic language. Instead of French, English, Russian and Turkish, the world’s networks all talk in Esperanto.
Myriad advantages come with this, but also one big issue: Video, voice and data are sent through the same network. Vital and incidental pieces of information – sales results and the menu in the cafeteria – are carried alongside each other. The comingling of so many applications and so much data has two implications: If the network goes down, users have no connectivity, and securing the data that must be protected becomes more cumbersome.
Customer data integration currently is the top barrier to adopting digital marketing technologies, according to a recent survey of senior marketers at global companies.
Teradata, an analytics platform vendor, released “Enterprise Priorities in Digital Marketing” this week. It’s based on a global survey conducted by Econsultancy US, which queried 402 senior marketing officers about their plans for digital marketing.
I find the term “digital marketing” to be a bit vague, but for the survey, it was defined as “the strategy of connecting large amounts of online data with traditional offline data, rapidly analyzing it and gaining cross-channel insights about customers.” The goal is much simpler: Deliver personalized content and messages to customers wherever — or however — they’re online.
It’s not hard to figure out why companies value this approach, but the findings fill in the gap between common sense and theory:
“The largest marketing organizations in the world have concluded that enhancing customer relationships via multiple digital channels best supports sustainable growth and reliable retention. This focus on thoroughly understanding the customer through data, and acting on insights found in data to design interactions, is driving an unprecedented demand for technology.”
With the amount of data that IT organizations are being asked to manage rising considerably, backing up all that data has become a significant challenge. Looking to provide IT organizations with some additional headroom, Symantec today introduced a NetBackup 5330 appliance that can store up to 229TB of data at throughput speeds that are four times faster than previous generations of the appliance using 10G Ethernet.
The end result, says Drew Meyer, director of marketing for integrated backup at Symantec, is backup that is now two times faster, data recovery that is three times faster, and data replication that is 4.8 times faster.
Meyer says the NetBackup 5330 appliance is a core element of the company’s overall approach to software-defined data protection. Rather than requiring IT organizations to acquire and manage separate backup and recovery systems to handle physical and virtual servers, Meyer says NetBackup provides a single platform for managing data protection across the data center.
It is clear that the Ebola virus outbreak has devastated Liberia, Guinea and Sierra Leone, killing more than 3,000 of the roughly 7,000 individuals infected to date. Even more troubling, BBC News reported that “five people are infected every hour” and the Centers for Disease Control and Prevention (CDC) stated that “cases in Liberia are currently doubling every 15-20 days, and those in Sierra Leone and Guinea are doubling every 30-40 days.” With the CDC providing confirmation of the first Ebola virus patient in the U.S., as well as projecting that the spread of the Ebola virus in 2015 could reach upward of a million cases in West Africa, now is the time for nations to step up their prevention efforts. Because Ebola virus transmission takes place through the exchange of blood and bodily fluids and is not spread by air or water, health-care personnel and close family caring for patients are at the greatest risk of contracting the virus.
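The doubling times quoted above imply simple exponential growth. As a rough illustration only – not an epidemiological model, and with the 3,500 starting cases and 20-day doubling time loosely drawn from the figures in the text – the arithmetic looks like this:

```python
def project_cases(current_cases, doubling_days, horizon_days):
    """Project case counts assuming a constant doubling time.

    A constant doubling time is a gross simplification; real epidemics
    slow as interventions and behavioral changes take hold.
    """
    doublings = horizon_days / doubling_days
    return current_cases * 2 ** doublings

# Illustrative only: ~3,500 cases doubling every 20 days, projected 120 days out.
# 120 days is six doublings, i.e. a 64-fold increase.
print(round(project_cases(3500, 20, 120)))  # 224000
```

Six doublings in four months is why public-health projections escalate so quickly from thousands to hundreds of thousands of cases.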
Health-care providers, hospitals, long-term care agencies and primary and specialty care should use this threat to seize the opportunity to refine worker protection and infectious control plans and procedures. With the competing priorities of providing health care, emergency management is shockingly not always on the minds of health-care administration. As many of us know, emergency management planning is often a top priority only when it is desperately needed.
Many risk managers are struggling to get their arms around reputation risk. One challenge is that risk – a threat to a valued asset or desired outcome – is hard to discuss in modern terms without statistics. Statistics, on the other hand, can be mind-numbing.
First, the accountancies. Eisner & Amper reports that reputation risk has been the number one board concern for each of the past four years. Deloitte concurs on the ranking but emphasizes the strategic nature of reputation risk. E&Y finds reputation risk in international tax matters; PwC finds reputation risk in bribery, corruption and money laundering. Oliver Wyman, a human resource and strategy consultancy, reports that reputation risk is a rising C-suite imperative ranking fourth this year (and third among risk professionals). Reputation risk was fourth in Aon’s 2013 survey. Willis shared data showing that 95% of major companies experienced at least one major reputation event in the past 20 years.
Ace in 2013 reported that 81% of companies told the insurer that reputation was their most important asset. Allianz’s 2014 global survey ranked the risk sixth of the top 10. Rounding out the professions, the 2014 study written by the Economist Intelligence Unit and published by the law firm, Clifford Chance, reported that 74% of U.K. board members see reputation damage as the most worrying consequence of an incident or scandal, ranking it as more serious than the potential direct financial costs, loss of business contracts and even impact on share price.
Unified communications is an important trend but, when it comes to business continuity planning for critical communications systems, it may not be the best approach.
By Andrew Jones
Smart mobile devices have, by their very nature, brought voice and data convergence to a mass market. It’s easy to be convinced that they offer a panacea: a single communications solution addressing all needs and offering the best value for money. However, when critical communications are a key requirement, the situation can become much more complicated, and it may even become clear that separating voice and data systems could be a better solution – one that contradicts the unified communications trend.
It is certainly possible to bring voice and data together when planned carefully with the right level of consideration for the longer term but it may not be that one size fits all and alternative designs and infrastructure may prove to be a more effective solution.
One of the biggest benefits of using smartphones in an organization is the ability to use not only commercial cellular services but also private networks (either a private cellular/GSM network or even a wifi-enabled solution) – and rightly so; this is the kind of flexibility that is highly useful and simply was not available in the past. Today we continue to build our onsite networks and links to the outside world to provide high-speed, rich data content to suit our needs. However, as each year passes, the content, definition of graphics and tolerance for delays shift, requiring us to carefully manage and upgrade our onsite wifi and Internet connectivity so it serves our employees well for the foreseeable future. We continue this stepwise investment to keep abreast of the IT demands of our users, and as far as we know this trend is set to continue. So is it counterproductive to introduce VoIP (Voice over IP) onto a wifi network that continually struggles to keep abreast of our needs? While it uses an existing asset, upgrading for voice is not inexpensive.
By James Moore
Increasing reports of compromises by well-funded and resourced attackers are raising the profile of cyber security to such an extent that headlines of data breaches are becoming mainstream. On a regular basis, reports are being released showing the skill and persistence of attackers. Advanced attacks such as spear phishing, watering holes booby-trapped with custom malware and zero-day exploits, even entry via supplier links are all being reported on an almost weekly basis. And all of these attacks have one thing in common - they target individuals.
Generally, we still see that most organizations rely on traditional security controls in the form of technology such as anti-virus, firewalls, SIEM etc to protect their critical assets. However, the increasing importance of employee security awareness is often overlooked; instead, only basic awareness training is given, with available resources focussed on deploying and testing traditional security controls.
The US National Fire Protection Association (NFPA) Standards Council has approved a request to establish a standard for community risk assessments and reduction plans.
The standard will provide a process for jurisdictions to follow in developing and implementing a community risk reduction plan, which helps identify a community risk profile and allocate resources to minimize risks.
The standard is expected to be completed in the next two years.
A new UK-based company which aims to demystify business continuity management and make it easier and more straightforward than ever before has opened its doors for business.
With more than 15 years’ business continuity experience with RSA (Royal & SunAlliance), one of the UK’s leading general insurers and a FTSE 100 company, Ian Houghton’s trademark no-nonsense, down-to-earth approach will now be available to clients across the country with the launch of his own consultancy.
Called Easy BCM Ltd, Houghton’s new venture aims to make business continuity management easy to understand, implement and maintain for companies large and small.
“I’ve always believed that BCM should be approached in a sensible and straightforward way, to reflect the nature, scale and complexity of a business,” explains Houghton. “Too often plans are dictatorial and take no account of the industry, the size of the organization and the complexity of its operations.
“At Easy BCM we make business continuity management accessible and show clients that it can be a valuable asset for a company which can help drive improvements in many different areas.”
Ten new National Science Foundation projects will investigate how to keep complex, interdependent infrastructure available.
When critical infrastructure is resilient, it is able to bounce back after a disruption at an acceptable cost and speed. When resilient infrastructure is interdependent, cascading failures between infrastructure systems may be eased or possibly even avoided.
This ideal of resilience is far from the norm, particularly as critical infrastructure becomes more interconnected and complex.
To investigate innovative ways to bolster the resilience of the electrical grid, water systems and other critical infrastructure areas, the US National Science Foundation (NSF) has awarded grants totaling nearly $17 million through cross-disciplinary funding by its Directorates for Engineering and Computer and Information Science and Engineering.
During the next three years, more than 50 researchers at 16 institutions will pursue transformative research in the area of Resilient Interdependent Infrastructure Processes and Systems (RIPS).
It’s an unfortunate truth. The holes in your IT security are most likely to be where you neither see them nor expect them. That means they’ll be outside the basic security arrangements that most organisations make. Firewalls, up to date software versions and strong user passwords are all necessary, but not sufficient. Really testing security is akin to an exercise in lateral thinking or even method acting. You have to look at your systems and network from the outside to see how a hacker or cybercriminal might try to get through or round the mechanisms you’ve put in place. And there’s more still to this inside-out approach to protecting your organisation.
The government released 4.4 million medical payment records this week as part of the Open Payments database, and it’s already attracting national headlines and criticisms for being incomplete and slow.
It’s a major reminder that while open data may be free, it isn’t necessarily clean.
NPR, the Wall Street Journal and Forbes have all reported on the controversial data release, which is required under a provision of the Affordable Care Act. The records show $3.5 billion in payments made by pharmaceutical and device companies to doctors.
(MCT) — Tom Fuller could tell how well folks understood earthquake insurance once he mentioned that he has a policy for his damaged home in Napa.
The uninitiated responded, “Well, you’re lucky.” The more knowledgeable said, “I hope you didn’t hit your deductible.”
Fuller, a public relations consultant, said the repairs from last month’s magnitude-6.0 quake won’t come close to his $48,000 deductible — the amount of structural damage his home must suffer before the insurance company becomes liable for major repairs. That means he will cover virtually all the damage from the Aug. 24 temblor to his 1940s-era home south of downtown.
Even so, his insurance policy still gives him peace of mind that he could rebuild should a massive, 1906-type quake ever level his city.
(MCT) -- Under the blistering Central Valley sun, Filiberta Sanchez and her toddler granddaughter strolled down a Parkwood sidewalk lined with yellow weeds, dying grass and trees more fit for kindling than shade.
"It was very pretty here, very pretty," said Sanchez, 56, as little Jenny crunched a fistful of parched dirt and pine needles she grabbed from the ground. "Now everything's dry."
Parkwood's last well dried up in July. County officials, after much hand-wringing, made a deal with the city of Madera for a temporary water supply, but the arrangement prohibited Parkwood's 3,000 residents from using so much as a drop of water on their trees, shrubs or lawns. The county had to find a permanent water fix.
Risk assessment is, of course, the foundation of effective compliance measures. This has always been true as a matter of common sense. And, since the Federal Sentencing Guidelines for Organizations went into effect 23 years ago this November, it has been true as a matter of legal expectation.
Risk assessment is also, in my view, the most challenging aspect of C&E work – both conceptually and as a practical matter. Indeed, even though I’ve been writing this column for four years (the fruits of which are contained in this complimentary e-book issued by CCI), I can see no end of risk assessment topics in sight. So, to attempt to chip away at the backlog, this most recent installment will look at some of the recurring questions C&E officers have on risk assessment methodology.
This post by O’Dwyers announces that H+K Strategies (formerly Hill & Knowlton) has officially declared digital public relations and marketing communications to be the backbone of any organization’s communications. O’Dwyers is quite snarky in its comments about this “announcement” by H+K. It’s obvious, they say, and H+K is clearly outdated by even having to tout its digital savvy.
While it is true that some agencies, like Edelman, have long established credibility in digital comms, what O’Dwyer ignores is the fact that most organizations, even some of the most powerful and sophisticated in the world, still do not really get this. Almost any crisis communication plan I look at is still “media first.” That is, the primary focus of the plan is preparing for and delivering info and messages to media outlets.
By John D’Ambrosia, chairman, Ethernet Alliance board of directors; chief Ethernet evangelist, CTO office, Dell Networking
Ethernet and its standards-based approach have been a fundamental pillar leveraged by the data center community from inception. CxOs and IT managers have embraced Ethernet and its strong history of seamless, multi-vendor interoperability. In today's data centers, Gigabit Ethernet for servers and 10 Gigabit Ethernet (10 GbE) for networking have been the proven workhorses – cloud-scale data centers are shifting to 10 GbE for servers, and 40 Gigabit Ethernet (40 GbE) for networking.
The introduction of 40 Gigabit Ethernet provided CxOs and IT managers with a cost-effective solution to deal with the never-ending traffic burden on their networks, while 100 GbE technology continues to evolve. The initial development of 40 GbE was intended as the next-generation solution for servers beyond 10 GbE, but its inherent architecture enabled a high-density aggregation for 10 GbE server connections. This interconnect scheme enabled the cost efficiencies fueling the phenomenal growth rates being seen in today's cloud-scale data centers. The same inherent structure also exists at 100GbE, and given the maturity in development of 25 Gb/s signaling to enable 100 GbE, industry forces are driving toward 25 GbE as the next high-volume deployment for servers. This will take today’s cloud-scale data centers to the next level of performance at the lowest cost per bit from a CAPEX and OPEX perspective.
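The aggregation described above – a 40 GbE port built from four 10 Gb/s lanes, and 100 GbE from four 25 Gb/s lanes – comes down to simple lane arithmetic. A minimal sketch (the table below covers only the common four-lane variants mentioned in the text; other lane configurations exist in the standards):

```python
# Ethernet port speeds expressed as (lane count, Gb/s per lane).
# Only the variants discussed in the text are listed here.
LANE_CONFIGS = {
    "10GbE":  (1, 10),
    "25GbE":  (1, 25),
    "40GbE":  (4, 10),
    "100GbE": (4, 25),
}

def port_speed_gbps(name):
    """Total port speed is lanes multiplied by per-lane signaling rate."""
    lanes, rate = LANE_CONFIGS[name]
    return lanes * rate

# The breakout economics: one 40 GbE port can serve four 10 GbE server
# links, and one 100 GbE port four 25 GbE links, at the same lane rate.
assert port_speed_gbps("40GbE") == 4 * port_speed_gbps("10GbE")
assert port_speed_gbps("100GbE") == 4 * port_speed_gbps("25GbE")
```

This is why 25 GbE is attractive as the next server speed: it reuses the 25 Gb/s lane already developed for 100 GbE, so one lane serves a server while four aggregate into the network tier.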
Every once in a while, talk of the all-cloud data center starts to circulate throughout professional IT circles. While most people are quick to dismiss this notion, it’s important to note the distinction between fully cloud-based data architecture and the end of the traditional data center as we know it.
In short, many organizations will likely stick with in-house infrastructure for some time to come, but others could reap tremendous benefits by outsourcing their entire data environment, at least in the short term.
A case in point is Infor Inc., which built its software business entirely in the cloud and now specializes in application-centric business solutions that allow other organizations to do the same. The company claims its lack of a data center allows it to focus more of its energy on development and other business-facing concerns and gives it an edge against well-heeled competitors like SAP and Oracle. The company utilizes an open framework and public providers like Amazon, and is looking to port some of its Big Data needs onto Amazon’s RedShift platform or possibly the IBM cloud. Company executives say that manpower costs alone are enough to deter them from building their own facilities for the foreseeable future.
Mary Schoenfeldt is the public education coordinator for the Everett, Wash., Office of Emergency Management. She is a 2013 inductee into the International Network of Women in Emergency Management hall of fame and has written numerous books on school safety during her 30 years in the field.
Schoenfeldt is considered an expert in crisis management, helping communities assess response systems; writing crisis plans; conducting physical site safety audits; and designing school training exercises. She created the community preparedness campaign “Who Depends on You?” This interview has been edited for clarity and length.
Exercises are conducted to identify strengths and weaknesses; assess gaps and shortfalls in plans, policies and procedures; clarify roles and responsibilities among different entities; improve interagency coordination and communications; and identify needed resources and opportunities for improvement.
Do exercises achieve these goals? Probably not. Not because they can’t, but because the organizations planning and executing these exercises don’t use them as real tests. These organizations are engaging in “exercises in futility.” But organizations may be ready for a new kind of dynamic exercise, based on risk-reward principles.
The goal is to provide a deliverable: the after action report or improvement plan. What if we changed this deliverable to measurable improvement in actual policy, procedure, capability or technical assistance to support performance? This would change the conversation from planning exercises, to exercising plans or at least exercising the concepts in the plans. If there is no plan, consultants could help the organization by using dynamic exercises to develop hypotheses, reveal weakness, uncover strengths, innovate new approaches to problem-solving, and then support planning efforts to capture and implement improvements based on the exercise outcomes.
New model will help forecasters predict a storm’s path, timing and intensity better than ever
- This is a comparison of two weather forecast models looking six hours ahead for the New Jersey area. Image on left shows the forecast which doesn't distinguish localized hazardous weather. Image on right shows the new HRRR (High-Resolution Rapid Refresh) model that clearly depicts where local thunderstorms (yellow and red coloring) are likely. (Credit: NOAA)
Today, meteorologists at NOAA’s National Weather Service are using a new model that will help improve forecasts and warnings for severe weather events. Thanks to the High-Resolution Rapid Refresh (HRRR) model, forecasters will be able to pinpoint neighborhoods under threat of tornadoes and hail, heavy precipitation that could lead to flash flooding or heavy snowfall and warn residents hours before a storm hits. It will also help forecasters provide more information to air traffic managers and pilots about hazards such as air turbulence and thunderstorms.
Developed over the last five years by researchers at NOAA’s Earth System Research Laboratory, the HRRR is a NOAA research to operations success story. It provides forecasters more detailed, short-term information about a quickly developing small-scale storm by combining higher detail, more frequent radar input and an advanced representation of clouds and winds. The HRRR model forecasts are run in high resolution every hour using the most recent observations with forecasts extending out 15 hours, allowing forecasters to better monitor rapidly developing and evolving localized storms.
- VIDEO: NOAA launches new tool to improve weather forecasts. (Credit: NOAA)
“This is the first in a new generation of weather prediction models designed to better represent the atmosphere and mechanics that drive high-impact weather events,” said William Lapenta, Ph.D., director of the National Centers for Environmental Prediction, part of the National Weather Service. “The HRRR is a tool delivering forecasters a more accurate depiction of hazardous weather to help improve our public warnings and save lives.”
Hyperlocal forecasts are possible with the HRRR because of its higher resolution. The HRRR’s spatial resolution is four times finer than what is currently used in hourly updated NOAA models, offering a more precise prediction of a storm’s location, formation, and structure. Using the HRRR, forecasters have an aerial image in which each pixel represents a neighborhood instead of a city. “This increase in resolution from eight to two miles is a game-changer,” added Lapenta.
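The resolution gain can be sanity-checked with quick arithmetic: quartering the grid spacing from eight miles to two miles means sixteen times as many grid columns over the same area. The domain size below is an assumed example, not HRRR's actual domain:

```python
# Going from 8-mile to 2-mile grid spacing is 4x finer in each horizontal
# direction, so the same domain needs 4 x 4 = 16x as many grid columns.
# The 1,600-mile square domain edge here is a hypothetical illustration.
old_spacing_mi = 8
new_spacing_mi = 2
domain_edge_mi = 1600

old_columns = (domain_edge_mi // old_spacing_mi) ** 2
new_columns = (domain_edge_mi // new_spacing_mi) ** 2
print(new_columns // old_columns)  # 16
```

That sixteenfold increase in grid points is one reason the article later credits NOAA's boost in supercomputing power for making the HRRR possible.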
What Goes In…
The HRRR starts with a full 3-D picture of the atmosphere one hour before the forecast and then brings in observations from surface stations, commercial aircraft, satellites, and weather balloons to create a more detailed and balanced starting point for the forecast. Another key innovation for the HRRR is adding in radar data every 15 minutes during that hour to help the model “know” where precipitation is ongoing. Integrating atmospheric data gathered before a model run, including radar data at a two-mile resolution, provides a more accurate picture of what is happening in the atmosphere at the start of the forecast. This helps predict changes to storms and the development of new storms faster than current models.
…And What Comes Out
The HRRR model’s hourly output includes more frequent snapshots, at 15-minute intervals, of the atmosphere. With this information, forecasters can better anticipate and predict the onset of a storm and critical details of its evolution, allowing for earlier watches and warnings.
“The HRRR model will provide forecasters a powerful tool to help them inform communities about evolving severe weather,” said Stan Benjamin, Ph.D., a research meteorologist at NOAA’s Earth System Research Laboratory who led the research team that developed the model. “Being able to warn the public of weather hazards earlier and with greater detail is an outstanding return from NOAA’s investment in research and observation systems.”
Many NOAA scientists were involved with testing, optimizing, and implementing the model, including experts at NOAA’s National Weather Service and its National Centers for Environmental Prediction. NOAA’s partners at the Cooperative Institute for Research in Environmental Science at the University of Colorado at Boulder and the Cooperative Institute for Research in the Atmosphere at Colorado State University, Fort Collins, helped with development. NOAA researchers partnered with users such as the Federal Aviation Administration, the National Center for Atmospheric Research, and the Department of Energy to significantly improve forecasts for aviation, energy, and other industries through the HRRR model.
“Implementation of the HRRR is just one of many model improvements made possible with NOAA’s boost in its supercomputing power for weather prediction,” said Louis Uccellini, Ph.D., director, National Weather Service. “With advances in our forecast models, like the HRRR, we’re moving toward building a Weather-Ready Nation by improving our forecasts, providing better information to decision makers, and helping communities become more weather-ready and resilient against severe weather events.”
NOAA's National Weather Service is the primary source of weather data, forecasts and warnings for the United States and its territories. NOAA’s National Weather Service operates the most advanced weather and flood warning and forecast system in the world, helping to protect lives and property and enhance the national economy. Working with partners, NOAA’s National Weather Service is building a Weather-Ready Nation to support community resilience in the face of increasing vulnerability to extreme weather. Visit us at weather.gov and join us on Facebook and Twitter.
NOAA's mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Twitter, Facebook, Instagram and our other social media channels.
EATONTOWN, N.J. -- September is National Preparedness Month, and the latter half of the year is an ideal time for people to review their insurance policies. Understanding the details of what specific policies cover and what the policyholder is responsible for after a disaster is important as both clients’ needs and insurance companies’ rules change.
Insurers’ decisions and legislative changes have the biggest effect on changes in policies. Consumers should make themselves aware of possible changes in these areas and know what to look for while reviewing their policies.
The first check is the most obvious: the actual coverage. Policyholders should look at the specifics of which property is covered and the type of damage that is covered. Property owners should know that floods are not covered by standard insurance policies and that separate flood insurance is available. Flood insurance is required for homes and buildings located in federally designated high risk areas with federally backed mortgages, referred to as Special Flood Hazard Areas (SFHAs). Residents of communities that participate in the National Flood Insurance Program (NFIP) are automatically eligible to buy flood insurance. According to www.floodsmart.gov, mortgage lenders can also require property owners in moderate to low-risk areas to purchase flood insurance.
There are two types of flood insurance coverage: Building Property and Personal Property. Building Property covers the structure, electrical, plumbing, and heating and air conditioning systems. Personal Property, which is purchased separately, covers furniture, portable kitchen appliances, food freezers, laundry equipment, and service vehicles such as tractors.
What’s Not Covered
Policy exclusions describe coverage limits or how coverage can be purchased separately, if possible. Property owners should know that not only is flood insurance separate from property (homeowners) insurance, but that standard policies may not cover personal items damaged by flooding. In these cases, additional contents insurance can be purchased as an add-on at an additional cost. Some policies may include coverage, but set coverage limits that will pay only a percentage of the entire loss or a specific dollar amount.
The Federal Emergency Management Agency’s Standard Flood Insurance Program (SFIP) “only covers direct physical loss to structures by flooding,” FEMA officials said. The SFIP has very specific definitions of what a flood is and what it considers flood damage. “Earth movement” caused by flooding, such as landslides, sinkholes and destabilization of land, is not covered by the SFIP.
Structures that are elevated must be built at least to the minimum Base Flood Elevation (BFE) standards as determined by the Flood Insurance Rate Maps (FIRMs). There may be coverage limitations regarding personal property in areas below the lowest elevated floor of an elevated building.
Cost Impact of Biggert-Waters
The Biggert-Waters Flood Insurance Reform Act of 2012 extends and reforms the NFIP for five years by adjusting rate subsidies and premium rates. Approximately 20 percent of NFIP policies pay subsidized premiums, and the 5 percent of policyholders whose subsidized policies cover non-primary residences or businesses will see 25 percent annual increases immediately. A Reserve Fund assessment charge will be added to the 80 percent of policies that pay full-risk premiums. Un-elevated properties constructed in an SFHA before a community adopted its initial FIRMs will be affected most by rate changes.
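To see what a 25 percent annual increase means in practice, the compounding can be sketched as follows. The $2,000 base premium is an assumed figure for illustration, not from the article:

```python
# Compounding the 25% annual increases that Biggert-Waters applies to
# subsidized non-primary-residence and business policies.
# The base premium below is a hypothetical example.
def premium_after(base: float, years: int, annual_increase: float = 0.25) -> float:
    """Premium after `years` of compounded annual increases."""
    return base * (1 + annual_increase) ** years

# A $2,000 subsidized premium nearly doubles within three years:
print(premium_after(2000, 3))  # 3906.25
```

Compounding is why even a seemingly modest annual percentage produces steep cumulative increases for affected policyholders within a few renewal cycles.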
In March 2014, the Consolidated Appropriations Act of 2014 and the Homeowner Flood Insurance Affordability Act (HFIAA) of 2014 were signed into law, lowering rate increases on some policies, preventing rate increases on others, and delaying the implementation of Section 207 of Biggert-Waters, which was to ensure that certain properties’ flood insurance rates reflected their full risk after a mapping change or update. HFIAA also repeals a portion of Biggert-Waters that eliminated grandfathering properties into lower risk classes. Many of the changes have not yet been implemented because the necessary new programs and procedures have not been established.
The General Conditions section informs the consumer and the insurer of their responsibilities, including fraud, policy cancellation, subrogation (in this case, the insurer’s right to claim damages caused by a third party) and payment plans. Policies also have a section that offers guidance on the steps to take when damage or loss occurs. It includes notifying the insurer as soon as practically possible, notifying the police (if appropriate or necessary) and taking steps to protect property from further damage.
“FEMA’s top priority is to provide assistance to those in need as quickly as possible, while also meeting our requirements under the law,” FEMA press secretary Dan Watson said. “To do this, FEMA works with its private sector, write-your-own insurance (WYO) company partners who sell flood insurance under their own names and are responsible for the adjustment of their policy holders’ claims.”
Policyholders should speak with their insurance agent or representative if they have any questions about coverage. For further information and direction, call the NFIP Call Center at 1-800-427-4661 or the NFIP Referral Center at 1-888-379-9531. Comprehensive information about NFIP, Biggert-Waters, HFIAA and flood insurance in general can be found at the official NFIP website, www.floodsmart.gov.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at www.twitter.com/FEMASandy, www.twitter.com/fema, www.facebook.com/FEMASandy, www.facebook.com/fema, www.fema.gov/blog, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
Retail, by its very nature, is fast-moving: competition is intense and customers are increasingly demanding. In this cutthroat environment, the inability to do business can quickly damage a retailer, and making up lost ground is often extremely difficult, if it’s possible at all.
“All businesses need to have business continuity plans in place to avoid risks and minimise disaster, but retailers operate in a particularly competitive environment,” says Grant Minnaar, Business Continuity Management Advisor at ContinuitySA. “Retailers need to understand their risk profiles and make sure they have strategies in place to ensure they can stay trading, or they risk losing customers and damaging their brands.”
ContinuitySA has identified some of the top business continuity risks faced by retailers:
Craig Young overviews the Bash /‘Shellshock’ vulnerability which was recently identified and looks at whether it really is worse than Heartbleed, as has been widely claimed.
What is the vulnerability?
An Akamai researcher discovered that Bash, the dominant command-line interpreter present on Unix/Linux based systems, will improperly process crafted variable definitions, allowing trailing bytes to be executed as OS commands. Bash allows users to define environment variables which contain function definitions, and a flaw in this parsing process means that commands specified after the function are executed when the variable definitions are passed to a Bash interpreter. The problem can easily be reproduced by logging into a Bash shell and defining a crafted variable definition with trailing commands, but in this scenario there is little risk, since the commands are limited to the permissions of the already logged-in user. Where this ‘Shellshock’ vulnerability really becomes a problem is in the many ways Bash is indirectly exposed to an adversary. The most prominent (and worrisome) example is web technologies which use the vulnerable command interpreter to generate responses to HTTP requests. Since various details from the request are stored in Bash variables and passed to the command interpreter, a remote unauthenticated attacker can use these scripts to inject commands which will run in the context of the web server.
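The local reproduction described above can be scripted as a probe. This is a sketch wrapping the widely circulated Shellshock test string in Python; it assumes a `bash` binary is on the PATH, and the variable name and echoed strings are arbitrary:

```python
import os
import subprocess

# Classic Shellshock (CVE-2014-6271) probe: export an environment variable
# whose value is a function definition followed by trailing commands.
# A vulnerable bash executes the trailing `echo vulnerable` merely while
# importing the variable; a patched bash only runs the requested command.
env = dict(os.environ, x="() { :;}; echo vulnerable")
result = subprocess.run(
    ["bash", "-c", "echo this is a test"],
    env=env, capture_output=True, text=True,
)

if "vulnerable" in result.stdout:
    print("bash executed the trailing command: vulnerable")
else:
    print("bash appears patched")
```

Run against an unpatched interpreter, the trailing command fires before the requested one, which is exactly the behavior a web server passing request details into Bash variables would hand to a remote attacker.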
The BCI’s Australasian Awards will be presented in Melbourne on October 17th 2014. The shortlist for the awards has now been published and is as follows:
Business Continuity Consultant of the Year
Steven Cvetkovic MBCI Managing Director Continuity & Compliance Management Services Pty Ltd
Ian Perry Director Chelmsford Consulting Limited
Oliver Pettit Client Director – Risk Services Deloitte Touche Tohmatsu
Ken Simpson MBCI Principal Consultant The VR Group
Paul Trebilcock MBCI Director JBTGlobal Corporate Advisory
Nalin Wijetilleke MBCI Director/Principal Consultant ContinuityNZ Limited
Business Continuity Manager of the Year
John Doble Business Continuity Manager NBN Co.
Sarah McDonald MBCI Senior Manager – Business Resilience Deloitte Touche Tohmatsu
Public Sector BC Manager of the Year
Ian Goldfinch MBCI Manager, ICT Continuity Planning eHealth Systems, SA Health
David Reason Senior Risk Manager EQC (Earthquake Commission)
BCM Newcomer of the Year
Dale Cochrane CBCI Business Continuity Consultant National Australia Bank
Mark Dossetor AMBCI Manager Business Continuity Department of Transport, Planning and Local Infrastructure (DTPLI)
Eddie Ramirez Business Continuity Coordinator Westpac Group
Business Continuity Team of the Year
Australian Taxation Office
Department of Justice, Victoria
Victorian Department of Transport, Planning and Local Infrastructure
Business Continuity Provider of the Year (Product)
Linus Information Security Solutions Pty Ltd
RiskLogic Pty Ltd
Business Continuity Provider of the Year (Service)
Continuity & Compliance Management Services Pty Ltd
Hewlett-Packard Australia Pty Ltd
Linus Information Security Solutions Pty Ltd
Plan B Limited
Business Continuity Innovation of the Year
Continuity & Compliance Management Services Pty Ltd
PAN Software Pty. Ltd.
RiskLogic Pty Ltd
Most Effective Recovery of the Year
Bank of New Zealand
Plan B Limited
Westpac Banking Corporation
Industry Personality of the Year
Steven Cvetkovic MBCI
Howard Kenny MBCI
Business continuity professionals can better understand IT-related risk by developing and testing risk scenarios. A new guide and tool kit from ISACA provides 60 examples of IT-related risk scenarios covering 20 categories of risk that organizations can customize for their own use.
‘Risk Scenarios Using COBIT 5 for Risk’ provides an understanding of risk assessment and risk management concepts in business terms, based on the principles of the globally recognized COBIT framework. It also defines the following six steps to effectively using risk scenarios to improve risk management:
1. Use generic risk scenarios, such as those presented in the publication, to define a set that is tailored to your organization;
2. Validate the risk scenarios against the business objectives of the organization, ensuring that the scenarios address business impacts;
3. Refine the selected scenarios based on this validation and ensure their level of detail is in line with the business criticality;
4. Reduce the number of scenarios to a manageable set;
5. Keep all scenarios in a list so they can be reevaluated; and
6. Include in the scenarios an unspecified event (an incident not covered by other scenarios)
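The six steps above can be sketched as a simple scenario-register workflow. The data model, the categories, and the cap on set size below are illustrative assumptions, not part of the ISACA guidance:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskScenario:
    """Hypothetical minimal record for one risk scenario."""
    name: str
    category: str
    business_impact: Optional[str] = None  # filled in during validation

# Step 1: start from generic scenarios (names/categories are examples).
generic = [
    RiskScenario("Ransomware outage", "malware"),
    RiskScenario("Failed IT investment", "IT investment decision making"),
    RiskScenario("Data-centre flood", "acts of nature"),
]

# Steps 2-3: validate against business objectives and refine; here we keep
# only the categories this (hypothetical) organization cares about and
# attach a placeholder impact statement.
relevant = {"malware", "acts of nature"}
tailored = [s for s in generic if s.category in relevant]
for s in tailored:
    s.business_impact = "to be assessed against business objectives"

# Steps 4-6: cap the set at a manageable size, keep everything in one list
# for periodic re-evaluation, and append a catch-all "unspecified event".
MAX_SCENARIOS = 10
register = tailored[:MAX_SCENARIOS] + [RiskScenario("Unspecified event", "other")]
print(len(register))  # 3
```

The point of the catch-all entry in step 6 is that the register's response procedures still apply to incidents no specific scenario anticipated.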
Risk Scenarios provides scenario examples across categories such as IT investment decision making, staff operations, infrastructure, software, regulatory compliance, geopolitical, malware, acts of nature and innovation. It also provides guidance on how to respond to a risk that exceeds the organization’s tolerance level and how to use COBIT 5 to accomplish key risk management activities.
Risk Scenarios is available at www.isaca.org/riskscenarios
Whenever a breach of some sort occurs, two things tend to happen. First, the general password warning is given: Change them now, change them regularly, and don’t repeat passwords for anything. Second, people experience angst over password use in general. They often feel that the password has come to the end of its usefulness and we need to move on to other sorts of authentication.
You know what we never talk about when news breaks about a data breach and stolen passwords? Usernames. If we look back at two major password-related breach stories from recent months, it’s obviously something that should be considered. When word went out about the Russian hackers who had stolen a billion passwords, it was also reported that usernames were stolen.
It was the same situation with the Gmail incident of earlier this month. But if we look closely at the way an eSecurity Planet story phrased the incident, we see what the real issue is:
The following day, however, Google published a blog post stating that less than 2 percent of the username and password combinations would have worked for Gmail.
Username and password. Not just password alone.
(MCT) -- With flu season approaching, public health officials hope a crowdsourcing app that tracks flu activity will gain additional traction.
Flu Near You, a disease detection app, helps predict outbreaks of the flu in real time. Users self-report symptoms in a weekly survey, which the app then analyzes and maps to show where pockets of influenza-like illness are located.
HealthMap, Boston Children’s Hospital, the Skoll Global Threats Fund and the American Public Health Association developed the app, which was launched in November 2011. It now has more than 120,000 subscribers.
“It engages the public directly,” said Jennifer Olsen, manager of pandemics for the Skoll Global Threats Fund, a San Francisco-based non-governmental organization that works to confront dangers around the world.
We recently received a low ranking from a major market research organization, which placed eBRP Suite among the “Niche Players” in its mystical rating chart. Then why are we smiling?
We have been told that eBRP Suite does not deliver what these industry “experts” expect in a BCM software product. In last year’s review, we were ranked among the top companies. What did we do wrong this year? We did what we always do: act on our customers’ feedback to continue to improve our products. We also added a stream of new customers – including several Fortune 500 companies and international banks – all of whom found eBRP Suite to be exactly what they needed. So what happened to drop us so far in the rankings? The simple answer is: they changed the survey! We still offer the same great product. We still provide the same world-class service. Just as we have for more than a decade.
What those market researchers got right is that eBRP Suite isn’t for everyone. For those looking for a tool to simply conduct a BIA and write plans, there are plenty of companies to choose from. That’s not what we are, or want to be – even if those market “gurus” think we’re wrong.
In 2010, just as the recession’s wave of fiscal calamity was peaking, George Gascón and Todd Foglesong, from Harvard’s Kennedy School of Government, published a report, Making Policing More Affordable. They pointed out that public expenditures on policing had more than quadrupled between 1982 and 2006. But with city budget shortfalls opening up across the country, police departments and their chiefs, once used to ever-growing budgets, were now facing a new reality of cutbacks, layoffs and even outright mergers and consolidations of entire police departments with others. With federal subsidies disappearing (federal support for criminal justice assistance grant programs shrank by 43 percent between 2011 and 2013), thanks to a frugal Congress, police had few options.
With funding spigots turning off, law enforcement agencies must find ways to operate more affordably, according to Gascón and Foglesong. One obvious way is to use technology in more efficient ways. Being more efficient with technology also means being smarter.
One example can be found in Camden, N.J., a poverty-ridden, high-crime city of 77,000, located on the banks of the Delaware River, across from Philadelphia. Desperate to cut costs, the city disbanded its entire police force. The Camden County Police Department rehired most of the laid-off officers, and hired another 100 at much lower salaries and benefits, to create a consolidated regional police force. The move is considered highly controversial and certainly radical. While police departments in other jurisdictions have merged or consolidated to cut costs, none have gone down the path that Camden has taken.
During the January 2014 winter storm that crippled the Atlanta metro area and left thousands stranded on the city’s highways, businesses stepped up to the plate to assist those with nowhere to turn. Home Depot opened 26 stores in Georgia and Alabama to shelter stranded travelers, and other local stores like Walgreens, Wal-Mart, and Target welcomed weary – and cold – drivers who abandoned their cars when it was obvious they were not going to make it home that night. These businesses provided the community with resources and services when people needed them most.
In planning for public health emergencies, communities are quickly learning that businesses are true partners in response and recovery efforts. The private sector has the expertise, resources, and systems that operate every day that can assist in a public health response, be it for a pandemic, terrorist event, or natural disaster. During Hurricane Sandy, for example, big businesses used their commercial supply chains to deliver water, food, and other supplies. As the U.S. Chamber of Commerce says, “when the going gets rough, businesses get moving.”
Staff at CDC’s Strategic National Stockpile – the largest global stockpile of pharmaceuticals and medical supplies for a public health emergency – are working to help state and local agencies forge these partnerships for both distribution and dispensing efforts and as a way to increase access to medicines in an event that affects that entire community. Partnering with public health is good business, too. These private partners are members of the community and when disaster strikes, they can help keep their employees safe and healthy and their businesses up and running.
“As a global manufacturer of computers and computer services, we have committed ourselves to providing our customers with quality products and services,” said a representative from Dell, the information technology powerhouse that has partnered with public health to assist in dispensing medicine to its employees during an emergency. “We are doing the same thing with our employees. We want them to feel good about coming to work and their company taking care of them. That’s why we have gotten very much involved in the points of dispensing program that is being offered by many of our health departments around the country.”
In addition to serving as closed points of dispensing, which allows businesses to provide medicine to their own employees, companies also are coordinating with CDC and their public health departments to provide volunteers, to assist in communications, and to serve the larger community as public dispensing sites. This type of collaboration and partnership between the private and public sector will augment and support a public health response and ultimately help keep Americans prepared, safe, and protected.
For more information on how businesses can partner for preparedness, visit http://www.cdc.gov/phpr/partnerships/.
There has been a “dirty little secret” in security that the risks associated with compliance violations, brand damage and remediation costs simply are not sufficient to encourage ubiquitous use of multi-factor authentication, encryption of sensitive data and other proven controls for preventing breaches. This has been a major contributing factor behind the data breach epidemic. (Why is ANY sensitive data unencrypted in this day and age?)
As the frequency of attacks increases and the nature of the threats change, companies are playing a game of Russian roulette with hackers. They are not utilizing an encryption security infrastructure and risking an attack that will leave privileged customer information available for these criminals to use.
In the first three months of 2014, 200 million records were stolen, according to the Breach Level Index. In 2013, we saw some of the biggest players in retail get hacked, and there seemed to be few negative financial consequences for these companies. Stock prices and company reputations rebounded to normal within a few months. Shoppers are comfortable patronizing these businesses again, even the customers whose information was hacked.
Properly assessing risk is critical to any business. Successful businesspeople understand that every decision they make must be weighed against the potential risk to the company. This risk assessment must not be limited solely to situations directly related to the business itself, however. They must also consider reputation risk, or the risk that events will have a negative impact on one’s personal reputation and, by extension, the business.
Whether fair or not, the decisions made in someone’s personal life can have a substantial impact on the company they are connected to. This risk extends beyond just the owner or executives of a company; employees caught doing unscrupulous things can cause a public relations nightmare for the business, ultimately resulting in massive losses for the company itself.
Assessing Reputation Risk
Unlike business transactions, where there are countless models and historical examples of the likely risk and reward of most given situations, reputation risk is far harder to quantify and prepare for. It is nearly impossible to predict, for example, whether or not an executive will get belligerently intoxicated and assault a police officer. The executive can bring unwelcome attention to the company, which in turn can cause investors, advertisers, and partners to shy away in the short or even long-term.
Health officials from dozens of countries gathered Friday at the White House, seeking ways to strengthen international defenses against epidemics such as the Ebola outbreak raging in West Africa.
The Obama administration launched a global health security initiative in February to help other nations develop basic disease-detection and monitoring systems to contain and combat the spread of deadly illnesses. That push to develop a long-term strategy gained urgency in the wake of the Ebola epidemic.
“Now, the good news is today our nations have begun to answer the call,” President Obama told the Friday gathering. “With all the knowledge, all the medical talent, all the advanced technologies at our disposal, it is unacceptable if, because of lack of preparedness and planning and global coordination, people are dying when they don’t have to. So, we have to do better, especially when we know that outbreaks are going to keep happening.”
North America leads the way in Big Data, besting other regions when it comes to investing, according to a new market survey by Gartner. The research firm found that while Big Data experienced international growth last year, North America led with a 9.2 percent jump in the past year.
The survey also found that 73 percent of organizations have either already invested or plan to do so in the next two years. That’s another significant increase over 2013, when the number was 64 percent.
By comparison, InsideBigData quotes IDG’s 2014 Enterprise Big Data report, which showed lower numbers. IDG found that 49 percent were already implementing Big Data projects or planned to do so in the future.
That raises the question: Who are these Gartner respondents that are so gung-ho on Big Data? Well, if you’re familiar with Gartner, you know its clients tend to be established enterprises and larger government agencies, more so than, say, small businesses or startups. In this case, the survey responses came from 302 Gartner Research Circle members, who are “the voice of selected business decision makers,” according to Gartner.
America’s PrepareAthon! Campaign Offers Simple, Specific Actions Americans Should Know and Practice to Prepare For a Disaster in their Community
WASHINGTON – Today, the Federal Emergency Management Agency (FEMA) encourages individuals, families, workplaces, schools and organizations across the nation to take part in America’s PrepareAthon!, a national day of action that will take place September 30. America’s PrepareAthon! is a community-based campaign to increase emergency preparedness and resilience through participation in hazard-specific drills, group discussions and exercises every fall and spring. To register, individuals and organizations can visit www.ready.gov/prepare.
According to a recent survey conducted by FEMA, 50 percent of Americans have not discussed or developed an emergency plan for family members about where to go and what to do in the event of a local disaster. Additionally, nearly 70 percent of Americans have not participated in a preparedness drill or exercise, aside from a fire drill at their workplace, school or home in the past two years.
“Disasters can strike anytime and anywhere,” FEMA Administrator Craig Fugate said. “America’s PrepareAthon! is about practicing what to do in an emergency with enough regularity so that it becomes second nature when the real disaster actually happens.”
To encourage more Americans to prepare and practice, the campaign offers easy-to-implement preparedness guides, checklists and resources. These tools help individuals, organizations and entire communities practice the simple, specific actions they can take for the emergencies and disasters relevant to their area. Examples include:
- Sign up for local text alerts and warnings and download weather apps to your smartphone. Stay aware of worsening weather conditions. Visit www.ready.gov/prepare and download Be Smart: Know Your Alerts and Warnings to learn how to search for local alerts and weather apps relevant for hazards that affect your area.
- Gather important documents and keep them in a safe place. Have all of your personal, medical, and legal papers in one place, so you can evacuate without worrying about gathering your family’s critical documents at the last minute. Visit www.ready.gov/prepare and download Be Smart: Protect Your Critical Documents and Valuables for a helpful checklist.
- Create an emergency supply kit. Bad weather can become dangerous very quickly. Be prepared by creating an emergency supply kit for each member of your family. Visit www.ready.gov/kit for more ideas of what to include in your kit.
- Develop an emergency communication plan for your family. It’s possible that your family will be in different locations when a disaster strikes. Come up with a plan so everyone knows how to reach each other and get back together if separated. Visit http://www.ready.gov/make-a-plan for communication plan resources.
Managed and sponsored by the Ready Campaign each September, National Preparedness Month is designed to raise awareness and encourage Americans to take steps to prepare for emergencies in their homes, schools, organizations, businesses and places of worship, culminating with the National Day of Action. America’s PrepareAthon! was established to provide a comprehensive campaign to build and sustain national preparedness as directed in Presidential Policy Directive-8. The campaign is coordinated by FEMA in collaboration with federal, state, local, tribal, and territorial governments, the private sector, and non-governmental organizations.
More information about America’s PrepareAthon!, including how to register, is available at ready.gov/prepare.
EATONTOWN, NJ -- Nearly two years after Hurricane Sandy, communities around New Jersey are still recovering from the damages inflicted by that historic storm.
The cost of cleaning up debris, clearing waterways and roads, repairing damaged sewer systems and other critical infrastructure, and rebuilding homes and businesses assaulted by wind and water is well into the tens of billions of dollars.
The idea that a storm like Sandy could happen again isn’t one we want to contemplate. But the fact is, not only could it happen again, chances are good that it will.
It’s just a matter of time.
The good news is that it’s possible to take steps now to reduce your community’s vulnerability to flooding and strengthen its resilience before another Sandy comes to town.
One way to accomplish that is to participate in the Community Rating System (CRS), a hazard mitigation program administered by the Federal Emergency Management Agency (FEMA).
The goals of the CRS program are to reduce losses caused by flooding, facilitate accurate insurance ratings and promote awareness about flood insurance.
Residents of towns that participate in CRS pay reduced flood insurance premiums. The premiums are discounted in five percent increments based on the level of flood protection each community has achieved.
Communities raise their CRS rating via their achievements in four categories: Information, Mapping and Regulations, Flood Damage Reduction, and Flood Preparedness.
Sixty-one communities and the Meadowlands area in New Jersey are presently enrolled in the CRS program, saving more than $17 million combined on their flood insurance premiums.
Joining the CRS program is free, but it does require the commitment of the community. Mayors of towns that want to participate must send a letter of interest to the regional office of FEMA, which for New Jersey is:
Federal Emergency Management Agency
Region II office
26 Federal Plaza, 13th Floor
New York, N.Y. 10278
FEMA representatives will then arrange a visit to review the community’s floodplain management status and ensure that it meets federal regulations.
Once the community is granted a “letter of good standing,” it receives a verification visit from the Insurance Services Office, a FEMA contract agency, to verify the community’s eligibility for the program and to determine its rating.
Once accepted into the program, towns must file annual reports showing the measures they have taken to reduce their flood risks. Every five years, each town must undergo a complete audit to ensure that it remains in compliance with the CRS program.
Most communities enter the CRS at Level 9, which immediately entitles residents to a five percent reduction in their flood insurance bills. Communities achieve the maximum premium discount of 45 percent when they reach Level 1.
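As described above, the discount schedule is linear: each CRS level below 10 adds another five percent off the premium, so Level 9 earns 5 percent and Level 1 earns the 45 percent maximum. A minimal sketch of that rule (the function name `crs_discount_percent` is ours, not FEMA's, and the real program applies further conditions the article does not cover):

```python
def crs_discount_percent(crs_level: int) -> int:
    """Return the flood-insurance premium discount (in percent)
    for a CRS level, per the linear schedule described above:
    Level 9 -> 5%, Level 5 -> 25%, Level 1 -> 45% (maximum)."""
    if not 1 <= crs_level <= 10:
        raise ValueError("CRS level must be between 1 and 10")
    return (10 - crs_level) * 5
```

Under this schedule, the Level 5 communities listed below would see a 25 percent reduction, matching the figure reported in the article.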
More importantly, they will have strengthened their ability to withstand the whims of Mother Nature when storm clouds gather and waters rise.
As of May 1, 2014, 11 communities in New Jersey had achieved a Level 5 in the CRS, earning property owners a 25 percent reduction in their flood insurance premiums. Those communities are: Avalon, Beach Haven, Long Beach Township, Longport, Mantoloking, Margate, Pompton Lakes, Sea Isle City, Stafford Township, Stone Harbor and Surf City.
With another hurricane season on the horizon, now is the perfect time to increase your town’s ability to weather a future storm. Learn more about NFIP’s CRS program online at http://www.fema.gov/national-flood-insurance-program-community-rating-system
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at www.twitter.com/FEMASandy, www.twitter.com/fema, www.facebook.com/FEMASandy, www.facebook.com/fema, www.fema.gov/blog, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
Drought continues to make the headlines, with the latest U.S. Drought Monitor showing moderate to exceptional drought covers 30.6 percent of the contiguous United States.
Its weekly update also shows that 82 percent of the state of California is in a state of extreme or exceptional drought. Reservoir levels in the state continued to decline, and groundwater wells continued to go dry, the U.S. Drought Monitor says.
The LA Times reports that California’s historic drought has 14 communities on the brink of waterlessness. It quotes Tim Quinn, executive director of the Association of California Water Agencies, saying that communities that have made the list are often small and isolated and have relied on a single source of water without backup sources.
(MCT) — President Obama and other leaders delivered a sobering message at the United Nations on Thursday, saying the world was not doing enough to contain the Ebola outbreak in West Africa and avert a “humanitarian catastrophe.”
“This is more than a health crisis,” Obama told leaders at a special gathering convened while the U.N. General Assembly was meeting in New York. “This is a growing threat to regional and global security.”
Faced with a caseload that is doubling every three weeks, U.N. Secretary-General Ban Ki-moon has called for a “twentyfold surge in care, tracking, transport and equipment” to get in front of the epidemic, which is believed to have killed more than 2,900 people.
Obama said last week that he would send as many as 3,000 military personnel to establish a coordination center in Liberia and work with partners to set up Ebola treatment facilities, train health workers and distribute medical supplies and prevention information.
Exams can be hard enough without having to sit them in a foreign language. Our Good Practice Guidelines are already available in several languages, so why not the CBCI exam as well? Good question! The Business Continuity Institute is pleased to say that you can now sit your exam in Spanish, French, Italian or Japanese at computer-based testing centres, or alternatively you can sit paper and pencil exams through our global network of training providers, currently in Arabic, French, German, Italian and Spanish. Our long-term aim is to have many other languages available.
To book your computer-based exam simply purchase it from the BCI shop. Once payment is complete you will receive an email containing an individual ID number and link to the Prometric website. You will then be able to choose the location of the exam and the language you wish to sit the exam in.