Industry Hot News (6550)
Ian J. Kalin, the new chief data officer for the Commerce Department, certainly seems to understand that data is the new oil. That makes sense, given his roots at a small energy startup company and his work with the U.S. Energy Data Initiative. Surely, if anyone can understand the value of data as fuel, it would be Kalin.
So when TechRepublic contributor Alex Howard asked him to compare an “infinitely replicable digital commodity” to a natural resource like oil, you’d expect nuance. His answer doesn’t disappoint:
A major hurricane or earthquake hitting a densely populated metropolitan area like Miami or Los Angeles will leave insurers facing losses that far exceed their estimated 100 year probable maximum loss (PML) due to highly concentrated property values, a new report suggests.
In its analysis, Karen Clark & Company (KCC) notes that the PMLs the insurance industry has been using to manage risk, and that rating agencies and regulators have been using to monitor solvency, can give a false sense of security.
For example, it says the 100 year hurricane making a direct hit on downtown Miami would cause over $250 billion in insured losses today, twice the estimated 100 year PML.
(TNS) — A new analysis shows more than 100,000 people are at risk from a tsunami on the Northwest coast — but the outlook isn’t uniformly grim.
In many communities, residents should be able to make it to high ground in time simply by walking at a brisk pace.
Tsunami surges are expected to slam into some parts of the coast within 15 to 30 minutes of an earthquake on the Cascadia Subduction Zone, the offshore fault where two tectonic plates collide.
Published Monday in the Proceedings of the National Academy of Sciences, the analysis takes the most comprehensive look yet at the threat along the 700-mile-long coast of Washington, Oregon and Northern California — and finds surprising variability.
The global data load is about to surge as Big Data and the Internet of Things threaten to turn every device on the planet into an information source. While it is easy to see the promise that such an environment can offer, is the enterprise turning a blind eye to some of the consequences?
The obvious one is the sheer load that we are contemplating and whether it is possible to build the infrastructure to support it. By some estimates, the global load is due to rise from today’s output of about 4 zettabytes per year to more than 44 zettabytes by 2020. That’s not the total amount of data under management, mind you, but the amount the world will generate in a single year. Compare this to the average annual growth of the data center market, currently estimated at about 11 percent per year, and it is clear trouble is brewing down the line.
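A quick back-of-the-envelope check makes the mismatch concrete. Assuming the 4 ZB figure applies to 2013 and the 44 ZB projection to 2020, the implied compound annual growth rate of data generation can be compared against the roughly 11 percent annual growth of the data center market:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

# Article figures: ~4 ZB generated per year in 2013, ~44 ZB projected for 2020.
data_growth = cagr(4, 44, 7)
datacenter_growth = 0.11  # ~11% per year, per the article

print(f"Implied data-generation CAGR: {data_growth:.1%}")   # roughly 41%
print(f"Data center market CAGR:      {datacenter_growth:.1%}")
```

Data generation growing at roughly four times the rate of data center capacity is the gap the article is pointing at.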
The most immediate implication of the surging data load is where to store it all. As Seagate Technology’s Mark Whitby noted to Tech Radar recently, even the most optimistic estimates of storage capacity generation over the next few years would leave us about six ZB short by 2020, which is twice the data output of 2013. New technologies are showing promise in high-density storage – resistive random access memory (RRAM) and heat-assisted magnetic recording (HAMR), to name a few – but it is questionable whether they will be ready for production environments in time for the data deluge.
WASHINGTON, D.C. – Today, the Federal Emergency Management Agency (FEMA) launched a new feature to its free app that will enable users to receive weather alerts from the National Weather Service for up to five locations across the nation. This new feature allows users to receive alerts on severe weather happening anywhere they select in the country, even if the phone is not located in the area, making it easy to follow severe weather that may be threatening family and friends.
“Emergency responders and disaster survivors are increasingly turning to mobile devices to prepare for, respond to and recover from disasters,” said Craig Fugate, FEMA administrator. “This new feature empowers individuals to assist and support family and friends before, during, and after a severe weather event.”
“Every minute counts when severe weather threatens and mobile apps are an essential way to immediately receive the life-saving warnings provided by NOAA’s National Weather Service,” said Kathryn Sullivan, Ph.D., NOAA administrator. “These alerts are another tool in our toolbox as we work to build a ‘Weather Ready Nation’ – a nation that’s ready, responsive, and resilient to extreme weather events.”
According to a recent survey by Pew Research, 40 percent of Americans have used their smartphone to look up government services or information. Additionally, a majority of smartphone owners use their devices to keep up to date with breaking news, and to be informed about what is happening in their community.
The new weather alert feature adds to the app’s existing features to help Americans through emergencies. In addition to this upgrade, the app also provides a customizable checklist of emergency supplies, maps of open shelters and Disaster Recovery Centers, and tips on how to survive natural and manmade disasters. The FEMA app also offers a “Disaster Reporter” feature, where users can upload and share photos of disaster damage.
Some other key features of the app include:
- Safety Tips: Tips on how to stay safe before, during, and after over 20 types of hazards, including floods, hurricanes, tornadoes and earthquakes
- Disaster Reporter: Users can upload and share photos of damage and recovery efforts
- Maps of Disaster Resources: Users can locate and receive driving directions to open shelters and disaster recovery centers
- Apply for Assistance: The app provides easy access to apply for federal disaster assistance
- Information in Spanish: The app defaults to Spanish-language content for smartphones that have Spanish set as their default language
The latest version of the FEMA app is available for free in the App Store for Apple devices and Google Play for Android devices. Users who already have the app downloaded on their device should download the latest update for the weather alerts feature to take effect. The new weather alerts feature in the FEMA app does not replace Wireless Emergency Alerts (WEA) function available on many new smartphones. WEAs have a special tone and vibration and are sent for emergencies such as extreme weather, AMBER alerts, or Presidential Alerts.
To learn more about the FEMA app, visit: The FEMA App: Helping Your Family Weather the Storm.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
Agile practitioners are often proud — and justifiably so — that when people are seriously adhering to the principles and practices, they keep the focus on value. They usually do a better job on average, I would argue from both first-hand experience and a fair amount of research, than the adherents of Waterfall methods. That's not the same as saying there's no room for improvement.
Value is a slippery concept. What's valuable to you isn't necessarily valuable to me. That statement extends to user stories, in which the "so that…" clause differs, depending on the persona identified in the "As a…" section that precedes it. We're supposed to write stories that have some value for that persona, no matter how minimal it might be, but we often don't show significant value until we've finished all the stories organized into an epic, theme, sprint, or release. (The attraction of creating an expense report, for example, is significantly less until you can update it when needed, too.) We prioritize the backlog from highest to lowest value stories for a variety of reasons, such as ensuring that if we run out of time before a planned release, we cut the lowest-value stories, which conveniently come last in the list. However, we know that life isn't as simple as a single queue of neatly sequenced work items. Which is more valuable, the ability of a salesperson on the road to enter sales activity easily, or the report that tells the sales manager about the current state of the pipeline?
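The single-queue model the paragraph questions can be sketched in a few lines: rank stories by estimated value, then cut from the bottom when capacity runs out. The story titles, values, and point estimates below are hypothetical placeholders, not from any real backlog:

```python
# Hypothetical backlog: each story has an estimated business value (1-10)
# and a size in story points.
stories = [
    {"title": "Enter sales activity on the road", "value": 9, "points": 5},
    {"title": "Pipeline report for sales manager", "value": 8, "points": 5},
    {"title": "Create expense report", "value": 6, "points": 3},
    {"title": "Update expense report", "value": 4, "points": 2},
]

def plan_release(backlog, capacity):
    """Greedily keep the highest-value stories that fit in the capacity (points)."""
    planned, used = [], 0
    for story in sorted(backlog, key=lambda s: s["value"], reverse=True):
        if used + story["points"] <= capacity:
            planned.append(story["title"])
            used += story["points"]
    return planned
```

The sketch also shows the model's weakness: "Create expense report" and "Update expense report" score separately here, even though, as the paragraph notes, most of their value only materializes when both ship together.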
The strategic markets of Philippines, China, Japan and Bangladesh are home to over half of the 100 cities most exposed to natural hazards, highlighting the potential risks to foreign business, supply chains and economic output in Asia from extreme weather events and seismic disasters, according to new research from global risk analytics company, Verisk Maplecroft.
The 5th annual Natural Hazards Risk Atlas (NHRA) assesses the natural hazard exposure of over 1,300 cities, selected for their importance as significant economic and population centres in the coming decade. Of the 100 cities with the greatest exposure to natural hazards, 21 are located in the Philippines, 16 in China, 11 in Japan and 8 in Bangladesh. The analysis considers the combined risk posed by tropical storms and cyclones, floods, earthquakes, tsunamis, severe storms, extra-tropical cyclones, wildfires, storm surges, volcanoes and landslides.
According to Verisk Maplecroft, natural hazards constitute one of the most severe disrupters of business and supply chain continuity, and also threaten economic output and growth in some of the world’s key cities, especially for those located in the emerging markets. Although adverse weather dropped from 4th to 7th place in the Business Continuity Institute's latest Horizon Scan report, it is still considered to be a concern by over half (52%) of the business continuity professionals who responded to a survey. Meanwhile, earthquake/tsunami is considered a concern by nearly a quarter (22%).
“As typhoon Haiyan in the Philippines and the tsunami in Japan showed us, natural hazard events can have far-reaching and long-lasting impacts on supply chains, business and economies,” states Dr Richard Hewston, Principal Environmental Analyst at Verisk Maplecroft. “Understanding how, where and why those risks manifest is an imperative in managing potential shocks.”
Natural hazard risk is compounded in the Philippines by poor institutional and societal capacity to manage, respond to and recover from natural hazard events. In addition to assessing exposure, the Natural Hazards Risk Atlas also evaluates a country's ability to manage and mitigate the impacts of natural hazard events, through the Socio-economic Resilience Index. While Japan, which ranks 178th out of 198 countries for resilience, is classified as 'low risk,' the Philippines (80th) is considered 'high risk', in part due to entrenched corruption and high levels of poverty.
“With foreign investment continuing to flow into countries highly exposed to natural hazards, those which are unable to demonstrate robust resilience may lose an element of their competitiveness,” adds Hewston. “Company decision-making over sourcing locations or market entry is increasingly influenced by issues such as strength of infrastructure and institutional robustness.”
What could be worse than stealing millions of personal records in a large data breach?
How about destructive cyberattacks against our vital infrastructure companies that run dams, power plants, transportation systems and other critical infrastructures around the globe?
Sadly, such cyberattacks are becoming much more common and causing more harm than previously reported.
A new, first-of-its-kind report was released just this week which reveals astonishing survey results from more than 500 security chiefs spread across 26 member countries in the Organization of American States (OAS). The official report was created in collaboration between OAS and Trend Micro, and you can get a copy of the full report at this website.
Here are some of the findings that I found very surprising – even somewhat shocking:
(TNS) — Add this to your smartphone’s many functions: In the near future it could help save lives by warning that a powerful, distant earthquake is about to shake the ground.
Earthquake scientists are proposing that “crowdsourcing” hundreds or even thousands of volunteers with their highly sensitive mobile phones could create a seismic early warning system to alert users of oncoming seismic shocks.
Seismologists in Menlo Park and UC Berkeley are testing the phones and foresee them as particularly useful in developing regions, like Southeast Asia and parts of Africa, that are prone to large and often devastating earthquakes but where more sophisticated warning systems don’t exist.
There is an old joke in sales that things would be great if it wasn’t for the customers. Of course, it is the customers that buy and that keep salespeople in a job. More generally, people accomplish tasks, do projects, have ideas and help to run businesses. Business continuity is inextricably bound up with people. They may be unpredictable as individuals, but display rather more predictable behaviour when grouped together. Predictive analytics has already been growing as a method of forecasting market conditions, economic trends and environmental developments. Increasingly, these techniques are also being applied in cases where people have a direct impact on business continuity.
In 2014, the Federal Bureau of Investigation sent a private notice to healthcare organizations regarding the industry's preparedness to fight cyber intrusions. The notice stated that healthcare organizations are "not as resilient to cyber intrusions compared to the financial and retail sectors, therefore the possibility of increased cyber intrusions is likely." Until recently, privacy and information security have not been a significant focus of the healthcare industry. That is changing. In 2013, North American healthcare organizations were expected to spend $34.5 billion on information technology in 2014. While there are numerous privacy and security risks in the healthcare space, two key areas for 2015 are enforcement by the Office for Civil Rights (OCR) through audits and investigations and the rise of bring-your-own-device workplaces.
Norway, Switzerland, Netherlands and Ireland are considered the countries most resilient to supply chain disruption according to the 2015 FM Global Resilience Index. Australia has dropped out of the top 10 this year, moving from 4th place in 2014 to 14th place this year, one place behind New Zealand. Venezuela, meanwhile, is rooted to the foot of the index, but Guyana and Bolivia both rose out of the bottom 10, owing to significant improvements in commitment to natural hazard risk management in the region.
The FM Global Resilience Index highlights the risks that come with operating in various countries and quantifies all the vulnerabilities these countries have in a definitive ranking of supply chain resilience around the world.
Supply chain disruption is a major cause for concern among business continuity professionals with the Business Continuity Institute’s latest Horizon Scan report revealing that it is the fifth in the list of potential threats to organizations compared to 16th place the year before. This is no surprise as three quarters (76%) of respondents to the BCI’s latest Supply Chain Resilience survey claimed they had experienced at least one supply chain disruption during the previous year.
Ireland keeps its place in the top 10, moving up one place to 4th in the rankings, reflecting both its low exposure to natural hazards and the fruits of its austerity and fiscal regimes. For the third year running, the United Kingdom has held on to its 20th place. Its ranking reflects its resistance to oil shocks as its consumption of oil relative to GDP is comparatively low. The UK scores well on other key drivers such as perceptions of its control of corruption and the quality of local suppliers, but there is scope for improvement in risk quality, particularly as it relates to fire risk management. In addition, the risk of terrorism continues to threaten supply chain security.
The United States and China are each segmented into three separate regions because the geographic spread of these countries produces significantly disparate exposures to natural hazards. Region 3 of the US, which includes most of the central part of the country, ranks 10th. Region 1, encompassing much of the East Coast, ranks 16th and Region 2, primarily the West Coast, ranks 21st. China’s three regions rank 63rd (Region 3), 64th (Region 1), and 69th (Region 2). Beyond natural disaster risk, China's other challenges range from poor accountability and transparency, high levels of perceived corruption and growing security concerns to problems in its financial sector, especially with regard to the fragile position of its banks.
“Business leaders who don’t evaluate countries and supply chain resilience can suffer long-term consequences,” said Bret Ahnell, executive vice president, operations, FM Global. “If your supply chain fails, it can be difficult or impossible to get your market share, revenue and reputation back. The FM Global Resilience Index is designed to help business leaders stay in business by making informed decisions about where to place and maintain global supplier facilities.”
The top 10 countries, those most resilient to supply chain disruption, according to the report were:
10. United States (central region)
The bottom 10 countries, those considered least resilient to supply chain disruption, were:
126. Dominican Republic
129. Kyrgyz Republic
The Index is compiled annually for FM Global by analytics and advisory firm Oxford Metrica. The Index is generated by combining three core factors of business resilience to supply chain disruption: economics, risk quality and qualities of the supply chain itself. The drivers of these factors include GDP per capita, political risk, vulnerability to oil shortages and price shocks, exposure to natural hazards, quality of natural hazard risk management, fire risk, control of corruption and the quality of infrastructure and local suppliers.
There are five laws of IT security.
1. There is no such thing as perfect security: Systems designed by humans are vulnerable to humans. Bugs exist. Mistakes are made. The things that make your computers useful--that is, communication, calculation and code execution--also make them exploitable. Information security is the management of risk. A good infosec design starts with a risk profile, and then matches solutions to the likely threat.
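The "risk profile first" approach the first law describes is often reduced in practice to scoring each threat by likelihood times impact and addressing the highest scores first. A minimal sketch, with entirely hypothetical threats and scores:

```python
# Hypothetical risk profile: each threat scored on 1-5 scales for
# likelihood and impact; risk score = likelihood x impact.
threats = {
    "phishing":           {"likelihood": 5, "impact": 3},
    "ransomware":         {"likelihood": 3, "impact": 5},
    "insider data theft": {"likelihood": 2, "impact": 4},
    "DDoS":               {"likelihood": 3, "impact": 2},
}

def risk_profile(threats):
    """Rank threats by risk score, highest first, to guide where to spend."""
    scored = {name: t["likelihood"] * t["impact"] for name, t in threats.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```

The output is a priority order, not a guarantee; as the law says, the goal is matching solutions to the likely threat, not eliminating risk.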
Nearly 37 percent of the United States and more than 98 percent of the state of California is in some form of drought, according to the latest U.S. Drought Monitor.
Its weekly update shows that more than 44 percent of California is now in a state of exceptional drought, with little relief in sight.
(TNS) — Henri might have to wait.
Colorado State University researchers are predicting a below average 2015 Atlantic hurricane season, with seven named storms, leaving Henri, the possible eighth named storm, out of the alphabetical running.
Of the seven storms, three are expected to become hurricanes and one is forecast to reach major hurricane strength with winds of 111 mph or more, researchers reported in their annual forecast released Thursday.
The report comes with a caution: "It just takes that one storm to make it an active season," said Phil Klotzbach, the lead author of the report put out by CSU's Tropical Meteorology Project since 1984.
It might surprise you to learn that the vast majority of Big Data analytics takes place within on-premises infrastructure.
This makes sense because, despite what you hear about the rise of the cloud, most Big Data loads reside in the enterprise data center in the form of both structured and unstructured historical data. To lower costs, organizations are placing their analytics capabilities as close to that data as possible.
But this is likely to change relatively quickly.
According to Wikibon, spending on Big Data hit $27.3 billion last year and is expected to top $35 billion in 2015, which is impressive for a phenomenon that didn’t even have a formal name until about three years ago. The cloud, however, holds only about $1.3 billion of the market, dwarfed even by the “professional services” (read, consultants) category, which draws about $10.4 billion.
The demands placed upon Business Continuity (BC), Risk Management (RM), and Disaster Recovery (DR) professionals are increasing every day. As a result, organizations need to reassess their approach to Business Continuity Management (BCM). If they don't, they'll be left behind, hampered by continued adherence to outdated methods. The convergence of the BC and RM disciplines is ongoing.
Emerging regulations, frameworks, and standards place greater emphasis on risk management. As decision makers accept this evolution, Business Continuity increasingly becomes a subset of Risk Management. How the process is implemented, and the value it brings to a risk-based model, determines whether or not the process is sound.
For months, federal law enforcement agencies and industry have been deadlocked on a highly contentious issue: Should tech companies be obliged to guarantee government access to encrypted data on smartphones and other digital devices, and is that even possible without compromising the security of law-abiding customers?
Recently, the head of the National Security Agency provided a rare hint of what some U.S. officials think might be a technical solution. Why not, suggested Adm. Michael S. Rogers, require technology companies to create a digital key that could open any smartphone or other locked device to obtain text messages or photos, but divide the key into pieces so that no one person or agency alone could decide to use it?
“I don’t want a back door,” Rogers, the director of the nation’s top electronic spy agency, said during a speech at Princeton University, using a tech industry term for covert measures to bypass device security. “I want a front door. And I want the front door to have multiple locks. Big locks.”
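The "multiple locks" idea Rogers describes resembles what cryptographers call secret sharing. The simplest illustration, sketched below, is an n-of-n XOR split, in which a key is divided so that every share is required to reconstruct it and any subset of fewer shares reveals nothing. (This is an illustrative sketch of the concept, not any scheme the NSA has proposed; a real design would more likely use a threshold scheme such as Shamir's secret sharing, where only k of n shares are needed.)

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares that XOR together to recover it.
    Any n-1 shares are statistically independent of the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def recover_key(shares: list[bytes]) -> bytes:
    """Recombine all shares; missing even one share yields only noise."""
    key = shares[0]
    for share in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key
```

Distributing such shares across agencies is the easy part; the policy debate in the article is about whether any escrow arrangement can exist without weakening security for everyone.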
BATS Global Markets (BATS), a leading operator of exchanges and services for financial markets globally, has published details of a successful test of its business continuity processes.
As part of the test, BATS took its company headquarters completely offline and operated from its Kansas City-area disaster recovery site instead. All 110 employees based at BATS' global headquarters either reported to the DR site and conducted their daily routines from that secure, remote location, or worked remotely. The BATS offices in the New York area, Chicago, London and Singapore continued normal operations.
In addition to the twice-yearly BCP test, BATS also tests its local Kansas City DR site each month. For one full day monthly since 2008, the company’s Operations, Technology, Regulatory and Surveillance teams in Kansas City have operated from the local DR site, with the primary headquarters remaining online.
BATS also maintains a DR site in Chicago that serves as a backup for its exchange technology infrastructure that is located in Secaucus, N.J.
Statement issued after the 5th meeting of the IHR Emergency Committee regarding the Ebola outbreak in West Africa.
The fifth meeting of the Emergency Committee convened by the WHO Director-General under the International Health Regulations (IHR) 2005 regarding the Ebola virus disease outbreak in West Africa was conducted with members and advisors of the Emergency Committee on Thursday, 9 April 2015.
The main issues considered were: ‘does the event continue to constitute a Public Health Emergency of International Concern’ and, if so, ‘should the current temporary recommendations be extended, revised, and/or new temporary recommendations issued.’
The Committee reviewed developments since the previous meeting on 20th January 2015, including the current epidemiological situation. The Committee noted that as a result of further improvements in EVD prevention and control activities across West Africa, including in the area of contact tracing, the overall risk of international spread appears to have further reduced since January with a decline in case incidence and geographic distribution in Liberia, Sierra Leone and Guinea. These three IHR States Parties provided updates and assessment of the Ebola outbreak, in terms of the epidemiological situation and the status and performance of exit screening and contact tracing.
The Committee recognized the progress achieved by all three countries and emphasized that there was no place for complacency; the primary goal remains the interruption of transmission as rapidly as possible. The Committee reinforced the importance of community engagement in 'getting to zero'. The Committee expressed its continued concern about the recent infection of health care workers and reaffirmed the importance of ensuring the rigorous application of appropriate infection prevention and control measures.
The Committee discussed the issue of probable sexual transmission of EVD, particularly a recent case in which a person is likely to have been infected through sexual contact with an Ebola survivor some months after the survivor's recovery. The Committee welcomed the ongoing programme of research in this area and urged its acceleration as a priority.
The Committee discussed the issue of inappropriate health measures that go beyond those in the temporary recommendations issued to date. The Committee was very concerned that additional health measures, such as quarantine of returning travellers, refusal of entry, cancellation of flights and border closures significantly interfere with international travel and transport and negatively impact both the response and recovery efforts. Although some countries are reported to have recently rescinded these additional health measures, and some regional airlines have resumed flights to affected countries, about 40 countries are still implementing additional measures and a number of airlines have not resumed flights to these countries.
The Committee concluded that the event continues to constitute a Public Health Emergency of International Concern and recommended that all previous temporary recommendations should be extended.
Source: World Health Organization
One of the more promising vertical markets for cloud adoption is healthcare. With the Health Insurance Portability and Accountability Act (HIPAA) regulations being updated to incorporate the modern information technology landscape, the demand for managed service providers (MSPs) to help secure the industry’s data storage and cloud-based file sharing will continue to grow.
A recent story from FierceGovernmentIT cited Joe Klosky, senior technical advisor at the U.S. Food and Drug Administration (FDA), who suggested that managing health data moving from system to system is "critical." FierceGovernmentIT also reported on the complex mission facing government officials: "the rapid growth of health data is helping federal agencies better chart the quality of care being provided and other nationwide trends, but it's also presenting some privacy and security challenges."
Even in today’s wired world, many organizations require access to original documents to deliver goods or services. If yours is one of them, how you maintain continuity of access to those documents should be part of your Business Continuity Planning.
Even though we like to think we live in a paperless age, most of us don’t. In paper-intense industries, access to original documentation may have both financial and regulatory implications. In many other businesses, those ‘original documents’ are fleeting: checks, authorizations, forms and others that are acted upon then discarded. They are necessary only until converted or input.
Think of original documents as "paper data". Even when documents are of only temporary importance, losing them (or losing access to them) may cripple our most critical functions or processes. Why do we put emphasis on Recovery Point Objectives (RPO)? Because we understand that losing electronic data may imperil our business. There is little difference with "paper data" waiting for conversion to electronic data. If it's gone (because of physical destruction) or elusive (because we can't get postal deliveries, or we've been forced out of our office) we can't fully function.
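The RPO analogy can be made concrete with a simple worst-case calculation: documents received but not yet converted to electronic form are exactly the exposure window, just as data written since the last backup is for electronic systems. The arrival rate and batch interval below are hypothetical figures for illustration:

```python
def paper_at_risk(docs_per_hour, conversion_interval_hours):
    """Worst-case count of unconverted 'paper data' documents lost if the
    site is destroyed just before the next scheduled imaging/conversion batch.
    This mirrors an RPO: the interval between batches is the exposure window."""
    return docs_per_hour * conversion_interval_hours

# e.g. a hypothetical lockbox operation: 200 checks/hour, imaged once per 8-hour shift
worst_case = paper_at_risk(200, 8)  # 1600 documents exposed at shift end
```

Shortening the conversion interval tightens the paper "RPO" in the same way more frequent backups tighten the electronic one.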
Emergency preparedness exercise scheduled for the Three Mile Island Nuclear Power Plant
PHILADELPHIA – The Federal Emergency Management Agency (FEMA) will evaluate a Biennial Radiological Emergency Preparedness Exercise at the Three Mile Island Nuclear Power Plant. The exercise will occur during the week of April 13, 2015 to assess the ability of the Commonwealth of Pennsylvania to respond to an emergency at the nuclear facility.
“These drills are held every other year to evaluate government’s ability to protect public health and safety,” said MaryAnn Tierney, Regional Administrator for FEMA Region III. “We will assess state and local emergency response capabilities within the 10-mile Emergency Planning Zone as well as the adjacent support jurisdictions within the Commonwealth of Pennsylvania.”
Within 90 days, FEMA will send its evaluation to the Nuclear Regulatory Commission (NRC) for use in licensing decisions. The final report will be available to the public approximately 120 days after the exercise.
FEMA will present preliminary findings of the exercise in a public meeting at 11:00 a.m. on Friday, April 17, 2015 at the Hilton Garden Inn, 3943 Tecport Drive, Harrisburg, PA. Scheduled speakers include representatives from FEMA, NRC, and the Commonwealth of Pennsylvania.
At the public meeting, FEMA may request that questions or comments be submitted in writing for review and response. Written comments may also be submitted after the meeting by emailing FEMAR3NewsDesk@fema.dhs.gov or by mail to:
MaryAnn Tierney
Regional Administrator
FEMA Region III
615 Chestnut Street, 6th Floor
Philadelphia, PA 19106
FEMA created the Radiological Emergency Preparedness (REP) Program to (1) ensure the health and safety of citizens living around commercial nuclear power plants would be adequately protected in the event of a nuclear power plant accident and (2) inform and educate the public about radiological emergency preparedness.
REP Program responsibilities cover only “offsite” activities, that is, state and local government emergency planning and preparedness activities that take place beyond the nuclear power plant boundaries. Onsite activities continue to be the responsibility of the NRC.
Additional information on FEMA’s REP Program is available online at FEMA.gov/Radiological-Emergency-Preparedness-Program.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. FEMA Region III's jurisdiction includes Delaware, the District of Columbia, Maryland, Pennsylvania, Virginia and West Virginia. Stay informed of FEMA's activities online: videos and podcasts are available at fema.gov/medialibrary and youtube.com/fema. Follow us on Twitter at twitter.com/femaregion3.
A new survey out this week offers good evidence as to why so many businesses today bungle their response to security compromises and breach discoveries.
The study of 170 businesses, conducted by the Security for Business Innovation Council (SBIC) and RSA, The Security Division of EMC, shows that the majority of businesses lack incident response plans, have no capability to correlate security-related data from their IT infrastructure, can't properly analyze live network forensics, and have no way to take advantage of industry-wide threat intelligence.
"Organizations are struggling to gain visibility into operational risk across the business," said Dave Martin, chief trust officer for RSA. "While many organizations may feel they have a good handle on their security, it is still rarely tied in to a larger operational risk strategy, which limits their visibility into their actual risk profile."
(TNS) — A Belfast-based epidemiologist and family physician who investigated infectious disease outbreaks for the Centers for Disease Control and Prevention said Monday that public hysteria and paranoia are not the answer when such crises emerge.
A case in point was the recent furor over Ebola. Maine made international headlines when Kaci Hickox, a nurse who until recently lived in Fort Kent, was quarantined after returning from treating Ebola patients in West Africa despite having no symptoms of the disease.
As Dr. Peter Millard sees it, there is a better way.
“We’re going to have epidemics. Epidemics are always going to be with us. We’re going to have a bad epidemic eventually and if we don’t pull together and use science as a basis for our response, we’re going to be in big trouble,” Millard said Monday evening during a lecture in Husson University’s Kominsky Auditorium.
(TNS) — Mine rescue entered the 21st century Wednesday with the successful test of a suite of technologies that will keep rescue teams in constant communication with the emergency command center, state and federal officials said.
“We think we're ready to rock and roll with the systems we've built,” said Joe Main, the assistant labor secretary in charge of the federal Mine Safety and Health Administration.
The MSHA, the Department of Environmental Protection, Consol Energy and the Homer City-based Special Medical Response Team ran a simulated emergency at Consol's Harvey Mine in West Finley to test the system.
Has an entire year actually passed since the Heartbleed vulnerability was discovered? It seems like only yesterday that my social media news feeds were in pure panic mode. Chicken Little, the sky is falling! Or, in this case, the Internet is broken and our privacy is gone and everything we ever posted is going to be stolen!
The mass hysteria was unlike anything I’ve witnessed before or since in regards to IT security, and I’d be willing to bet if I asked 10 people about Heartbleed today, at least eight of them would have no memory of it. They’ve moved along to the next crisis, real or imagined.
So Heartbleed might be out of mind, but it isn’t out of our networks. And that’s the problem. A year later, 74 percent of Global 2000 companies are still vulnerable, according to a new study by Venafi. In August, a similar survey found that 76 percent of Global 2000 companies hadn’t fully addressed Heartbleed. I’m not a math whiz, but a 2 percent improvement over an eight-month period doesn’t sound positive. Plus, this covers only the 2,000 biggest companies in the world; if large corporations are still struggling with Heartbleed, I doubt smaller companies are doing any better.
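One rough way to triage that lingering exposure: Heartbleed could leak private keys, so a certificate issued before the bug's public disclosure suggests the key pair was never rotated after patching. Here is a minimal sketch in Python (the disclosure-date cutoff and helper name are my own illustration, not Venafi's methodology):

```python
from datetime import datetime

# Public disclosure date of Heartbleed (CVE-2014-0160).
HEARTBLEED_DISCLOSURE = datetime(2014, 4, 7)

def issued_before_disclosure(not_before: str) -> bool:
    """Return True if a certificate's notBefore date, in the format
    reported by Python's ssl.getpeercert() (e.g. 'Mar 1 00:00:00 2014 GMT'),
    predates Heartbleed's disclosure. A pre-disclosure certificate is a
    sign the key pair may never have been replaced after patching."""
    issued = datetime.strptime(not_before, "%b %d %H:%M:%S %Y %Z")
    return issued < HEARTBLEED_DISCLOSURE
```

Feeding this the notBefore field from `ssl.getpeercert()` for each external-facing server yields a quick candidate list of hosts where patching alone was likely not enough.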
Are you and your family prepared to face a disaster? What about your neighborhood? Do you know your neighbors’ emergency plan or how you can help each other during an emergency? April kicks off America’s PrepareAthon!—a nationwide campaign to increase emergency preparedness and community resilience. Throughout the month local, state, and federal groups will take the pledge to help improve their preparedness. All of these activities will lead up to PrepareAthon’s national day of action on April 30, 2015.
So what can you do?
You don’t have to be an expert in emergency preparedness, or the leader of a large community group to take part in America’s PrepareAthon! Learn more about what you can do in your neighborhood or community to become more personally prepared and help build your community’s resilience.
In your Neighborhood.
If you haven’t taken the time to talk to your neighbors about emergency preparedness, or even just to meet them, take the PrepareAthon! pledge and make a plan to include your neighbors in your emergency planning. Often the first people on the scene after a disaster are not first responders (EMS, police, firefighters, etc.), but rather the people closest to where the emergency took place. When a disaster occurs in your community, you will most likely have to rely on those around you, especially if the scale of the disaster makes it hard for first responders to reach the scene.
Do not wait for a disaster to occur to meet your neighbors or learn about your community’s preparedness plans. Reach out to people in your neighborhood and discuss their emergency plans. If you have any medical or physical needs, such as limited mobility or dependence on medication or medical devices, talk to your neighbors about the assistance you may need in a disaster. Likewise, find out about the unique needs of those who live around you. Reach out to elderly neighbors and offer your assistance, from shoveling snow to checking on them during a heat wave. No matter what the disaster or emergency, forming relationships with those around you can help improve resilience after a disaster occurs.
In your Community.
Beyond your neighborhood, getting involved in community preparedness groups and emergency response exercises can help improve your own personal preparedness and also your community’s ability to respond to emergencies and natural disasters. Strong community resilience requires people to come together and participate in planning and training before a disaster occurs. A good place to start when looking to become more involved in your community’s preparedness is with groups focused on emergency preparedness, such as your local Community Emergency Response Team (CERT), Medical Reserve Corps, or American Red Cross chapter. You may also consider getting a community group you are already involved in to start talking about emergency preparedness. Faith-based organizations, schools, or even your workplace are good places to start a conversation about emergency preparedness.
Take the Pledge.
Whether it is meeting your neighbors, joining a local emergency preparedness group, or starting an emergency preparedness initiative within one of your community organizations, make sure to register your efforts with America’s PrepareAthon! Help move your individual community and our entire nation closer to being prepared for any emergency or disaster that comes our way.
Planning Meetings: The Risk Management Plan
This new edition of "Risk Management: Concepts and Guidance" supplies a look at risk in light of current information, yet remains grounded in the history of risk practice. Taking a holistic approach, it examines risk as a blend of environmental, programmatic, and situational concerns. Supplying comprehensive coverage of risk management tools, practices, and protocols, the book presents powerful techniques that can enhance organizational risk identification, assessment, and management—all within the project and program environments.
Updated to reflect the Project Management Institute’s "A Guide to the Project Management Body of Knowledge (PMBOK® Guide), Fifth Edition," this edition is an ideal resource for those seeking Project Management Professional and Risk Management Professional certification.
With the expansion of large multinational corporations into developing countries such as Russia, Brazil, India, Mexico and China, global regulatory enforcement actions, including anti-bribery and anti-corruption cases, have proliferated. Recently, HP paid more than $108 million in fines for Foreign Corrupt Practices Act (FCPA) violations that occurred when its subsidiaries in three different countries, Russia, Poland and Mexico, made improper payments to government officials to obtain or retain lucrative public contracts.
Executives, including general counsel, compliance and risk officers, are smart to plan in advance for potential regulatory investigations. The disclosure, or production, of information that might be relevant to the allegations from the requesting regulatory bodies–part of the electronic discovery in the legal realm–is complex, costly and time-consuming in today’s world of information. It involves the identification, acquisition and review of information and communications from a myriad of sources, including day-to-day operations, financials, communications with foreign and government officials, employees and third party representatives, system data reporting, travel and entertainment expenditures, payment data, chat messaging, social media posts, and the like. All of this information is subject to scrutiny by legal counsel and the requesting regulatory body to determine whether there was any wrongdoing.
When some information is in one or more foreign languages, the document review process can become significantly more unwieldy and inefficient. Understanding and implementing best practices is critical for making the process easier.
(TNS) — A downed power transmission line in southern Maryland caused a momentary loss of power that led to "widespread outages" in the nation's capital Tuesday afternoon, according to officials.
Previously, District of Columbia emergency management officials had said a reported explosion at a southern Maryland power plant may have been the cause.
A large number of outages were reported throughout the district about 1 p.m., including at the White House, Capitol and State Department headquarters.
According to Sean Kelly, a spokesman for Potomac Electric Power Co., just before 1 p.m., there was a momentary dip in voltage caused by a downed transmission line at a substation in southern Maryland, which is connected to a power plant there.
(TNS) — Gov. Mary Fallin expressed disappointment on Monday that federal assistance was denied to help individuals and businesses in Tulsa and Cleveland counties that were hit by March tornadoes. On April 1, the governor asked for a major disaster declaration for the state based on damage caused by tornadoes, straight-line winds and flooding March 25-26 in Cleveland and Tulsa counties.
Tornadoes resulted in four deaths with 26 people suffering injuries that required treatment at area hospitals, according to a state press release.
Damage assessments estimated that 1,047 homes and businesses were damaged in the tornadoes, severe storms, straight-line winds and flooding that occurred March 25.
When Datto acquired Backupify last year, we did so because we knew the technology landscape was shifting for MSPs. Data on-premise isn’t going away, but it isn’t the only place data exists. As more data is moved to the cloud, and to SaaS apps in particular, we realized that to build a Total Data Protection platform we needed expertise in SaaS data protection.
As a result of the acquisition, Datto now has more than 2 million Google Apps end users protected, and is scheduled to launch a Microsoft Office 365 backup at our partner conference in June. Building these products required us to get deeply embedded in both the Microsoft and Google ecosystems. We now know both companies well, know their key partners, and know the technical road maps of both organizations. So for those MSPs who may be considering whether to invest time in one of these products, here is our view from the trenches about the things you should consider.
For extended analysis of regional temperature and precipitation patterns, as well as extreme events, please see our full report that will be released on April 10th.
March was 12th warmest on record for the Contiguous United States
First quarter 2015: Record warmth and dire drought conditions in the West, cold in the Northeast
The March contiguous U.S. average temperature was 45.4°F, 3.9°F above the 20th century average — the warmest March since 2012. Near-record warmth spanned the Great Plains to the West Coast and parts of the Southeast, while the Northeast was cooler than average. The March Lower 48 precipitation total was 2.08 inches, 0.43 inch below average, tying as the 19th driest March on record. Below-average precipitation was widespread across the northern tier states and the Southeast, with above-average precipitation in the Southern Plains and Ohio Valley.
This analysis of U.S. temperature and precipitation is based on data back to January 1895, resulting in 121 years of data.
- Fifteen states across the Southeast, Northern Plains, and West had a March temperature that was much above average, while five states in the Northeast had a March temperature that was much below average. No state was record warm or cold.
- Below-average precipitation was observed along both the East and West Coasts, connected by drier-than-average states across the northern tier. Twelve states had a March precipitation total that was much below average. Above-average precipitation accumulated from the Southern Plains into the Ohio Valley; Arkansas and Texas were both much wetter than average. No state was record dry or wet.
- According to the March 31st U.S. Drought Monitor report, 36.8 percent of the contiguous U.S. was in drought, up from 31.9 percent at the beginning of March. Drought conditions worsened across parts of the Central Rockies as well as the Central and Northern Plains and the Upper Midwest where spring drought could impact the upcoming growing season. Drought remained entrenched in the West, where mountain snowpack was record low for many locations in the Cascade and Sierra Nevada Mountains. Abnormally dry conditions developed in parts of the Southeast and Northeast. Drought improved in the Southern Plains and the Mid- to Lower-Mississippi River Valley.
U.S. climate highlights: Year-to-date (January-March)
- The year-to-date contiguous U.S. average temperature was 37.2°F, 2.0°F above the 20th century average, and the 24th warmest January-March on record. Record warmth engulfed much of the West, where seven states were record warm, and an additional five states, including Alaska, had temperatures that were much above average. California's year-to-date temperature of 53.0°F was 7.5°F above average and bested the previous record set just last year by 1.8°F.
- Below-average January-March temperatures were observed across the South, the Midwest, and Northeast where 16 states had a much cooler-than-average January-March period. New York and Vermont were both record cold for the year-to-date. The New York year-to-date temperature was 16.9°F, 6.8°F below average, dropping below the previous record of 17.4°F set in 1912. The Vermont January-March temperature was 13.3°F, 6.4°F below average, tying the same period in 1923.
- The year-to-date contiguous U.S. precipitation total was 5.66 inches, 1.30 inches below the 20th century average, and the seventh driest January-March on record. This was the driest first three months of a year since 1988. Below-average precipitation was observed across the West and much of the northern half of the nation. Twelve states had much below average precipitation during the first three months of 2015. South Dakota had its driest January-March on record with a precipitation total of 0.85 inch, 1.21 inches below average. Above-average precipitation was observed across the Southern Rockies and Plains.
- The U.S. Climate Extremes Index (USCEI) for the year-to-date was nine percent above average and the 11th highest value on record. The warm West and cold East temperature pattern during January-March contributed to the much above average USCEI, with the components that measure both warm and cold daytime and nighttime temperatures being much above average. The USCEI is an index that tracks extremes (falling in the upper or lower 10 percent of the record) in temperature, precipitation, and drought across the contiguous U.S.
Note: NOAA's National Centers for Environmental Information (NCEI) is the merger of the National Climatic Data Center, National Geophysical Data Center, and National Oceanographic Data Center as approved in the Consolidated and Further Continuing Appropriations Act, 2015, Public Law 113-235. From the depths of the ocean to the surface of the sun and from million-year-old sediment records to near real-time satellite images, NCEI is the Nation's leading authority for environmental information and data. For more information go to http://www.ncdc.noaa.gov/news/coming-soon-national-centers-environmental-information
While I mostly talk to company, agency or organization leaders about crisis communication and reputation management, sometimes the reputation in question belongs to an individual. You don’t have to be a celebrity to have potential for reputation disaster. Individuals whose name is attached to the business or profession they are in, in other words where their name is also a brand, are particularly susceptible. Search engines and the long memory of the Internet make the problem so much greater. Yesterday’s newspaper is already in the garbage and yesterday’s TV report is already in the ether along with all past reports, but on the Internet they are presumably retained forever, always accessible at the touch of a Google button.
A recent conversation reminded me of how the Internet has changed reputation management and how it therefore changes the response. The really big question when dealing with media coverage of bad news about a brand (personal, corporate or otherwise) is whether or not to respond, and if so, how far and wide to push the response. The basic rule is: don’t make it worse. You can make it worse by bringing the bad reports to the attention of others who might otherwise have missed the 11 pm news. Maybe it will all just go away. Or, not.
Your client calls in a panic. Something’s gone wrong with a server, and the Web store is down. You get there fast, run to the server and determine that it has suffered a hard drive failure. You collect your thoughts, think carefully about the procedure for restoring this piece of equipment quickly, but you draw a blank. The clock is ticking. Downtime is piling up, and your client’s face is reddening with anger because she’s not sure you know what you’re doing. You don’t tell her, but you’re not sure you know, either.
This is the last situation you want to find yourself in. As your client’s frustration mounts, her patience thins, her wallet empties, and her trust in you erodes. There’s only one thing that can stop this from happening, and it goes beyond having a backup and recovery plan. You need to make sure your plans work effectively, and you can only do this by testing them. Remember, you’re not just testing a backup, you’re testing your own ability to recover so you don’t end up testing your client’s patience.
In order to make backup and recovery testing effective, there are some questions you will want to ask yourself. The following should help you gather information you need to create a testing strategy that’s a regular part of your process. This way, when the time comes you’re not just “kind of sure” you can recover—you’re absolutely positive.
ERMS had a great quarter! With increased demand, we have been busy. Busy training new customers and helping them implement their new systems. Just how busy? VERY! Our quarterly sales results were almost 50% above target.
Some of our newest customers include: Canadian Red Cross, Desjardin, Intact Insurance, Simon Fraser University, Worker’s Compensation Board of Manitoba (WCB), British Columbia Emergency Management (EMBC), the City of Cambridge, Canadian Federal Government (Shared Services), Independence Bank, Jewish General Hospital, and many more.
Why the increased demand? We believe it’s because more and more organizations are starting to understand the value and benefits of emergency and crisis mass communication solutions. Our new and existing customers benefit in many ways when they implement an emergency mass notification system (EMNS). Some of those benefits include:
Risk is part of nearly every aspect of business. The daily practices for nearly every employee involve some mitigation of certain risks to keep the business moving forward.
Within many enterprises, risk management involves a person or team of individuals who attempt to consider future scenarios and extract possible business risks from them in order to identify areas of liability and possibilities for improvement and success—this is especially important in the area of project management.
In the latest edition of the book, “Risk Management: Concepts and Guidance,” author Carl Pritchard, a certified expert in the project management field, identifies systems that project management professionals (PMPs) can apply to manage risks within ongoing projects. Pritchard then explains how to use these systems in the daily work of project management in accordance with the most recent Project Management Body of Knowledge (PMBOK).
DENTON, Texas – People living in parts of Arkansas, Louisiana, New Mexico, Oklahoma and Texas are urged to get ready now for potential severe weather that could strike over the next few days in the form of possible severe thunderstorms, hail, strong winds, flash flooding, tornadoes and wildfires.
The Federal Emergency Management Agency’s (FEMA) Region 6 office continues to monitor the situation and stands ready to support state and local partners as needed and requested in any affected areas.
“We encourage people to keep listening to their local and state officials for updated instructions and information. The safety of people is the first priority,” said FEMA Region 6 Administrator Tony Robinson. “We encourage people to have an individual or family emergency plan in place, practice that plan and put together an emergency kit.”
If you have severe weather in your area, you will likely want to become familiar with the terms used to identify a severe weather hazard including:
- Watch: Meteorologists are monitoring an area or region for the formation of a specific type of threat (e.g. flooding, severe thunderstorms, or tornadoes); and
- Warning: Specific life and property threatening conditions are occurring and imminent. Take appropriate safety precautions.
Like virtualization, containers seem set to work their way into the enterprise by stealth – that is, whether the people in charge of technology and infrastructure want them or not.
Part of this is due to the advent of the cloud. The more the enterprise offloads data and applications to third-party infrastructure, the less it has to say about the make-up and configuration of that infrastructure. But part is due to the fact that, like virtualization, containers are making their way into leading data platforms where they will exert their influence through standard upgrade and refresh cycles.
A case in point is container management firm CoreOS’s decision to integrate Google’s Kubernetes cluster management system into its new Tectonic platform. According to ZDNet’s Steven J. Vaughan-Nichols, this will enable the enterprise to manage Linux containers within their data centers in scale-out cloud fashion and, by extension, foster compatibility with existing Google applications, which are almost universally housed in containers managed by Kubernetes. As the enterprise gravitates toward private clouds, particularly Linux-based clouds, an integrated container stack will be crucial for the delivery of applications and microservices to a diverse workforce. Other Linux developers such as Mirantis and Mesosphere are also working to integrate Kubernetes into their platforms.
Venafi has published new research reevaluating the risk of attacks that exploit incomplete Heartbleed remediation in Global 2000 organizations.
Using Venafi TrustNet, a cloud-based certificate reputation service designed to protect enterprises from the growing threat of attacks that misuse cryptographic keys and digital certificates, Venafi Labs found that 84 percent of Forbes Global 2000 organizations’ external servers remain vulnerable to cyber attacks due to Heartbleed. This leaves these organizations open to reputational damage and widespread intellectual property loss.
When the Heartbleed vulnerability was discovered in April 2014, many organizations scrambled to patch the bug. But despite significant guidance from Gartner and other industry experts, the majority have failed to take all of the necessary steps to fully remediate their servers and networks.
“A year after Heartbleed revealed massive vulnerabilities in the foundation for global trust online, a major alarm needs to be sounded for this huge percentage of the world’s largest and most valuable businesses who are still exposed to attacks,” said Jeff Hudson, CEO, Venafi. “Given the danger that these vulnerabilities pose to their business, remediating risks and securing and protecting keys and certificates needs to be a top priority not for the IT team alone, but for the CEO, BOD, and CISO.”
Download the Venafi Heartbleed +1 Year Analysis (PDF) at:
The International Organization of Securities Commissions (IOSCO) has published two consultation reports aimed at further enhancing the ability of financial markets and intermediaries to manage risks, withstand catastrophic events, and swiftly resume their services in the event of disruption.
The consultation report ‘Mechanisms for Trading Venues to Effectively Manage Electronic Trading Risks and Plans for Business Continuity’ provides a comprehensive overview of the steps trading venues need to take to manage the risks associated with electronic trading and the ways they plan for and manage disruptions through business continuity plans. As technology continues to evolve, trading venues will need to continuously adapt to these changes.
The report provides recommendations to help regulators ensure that trading venues are able to manage effectively a broad range of evolving risks. It also proposes sound practices that should be considered by trading venues when developing and implementing risk mitigation mechanisms and business continuity plans aimed at safeguarding the integrity, resiliency and reliability of their critical systems.
IOSCO´s second consultation report, ‘Market Intermediary Business Continuity and Recovery Planning’, proposes standards and sound practices that regulators could consider as part of their oversight of the business continuity and recovery planning by market intermediaries. These sound practices may also prove useful to intermediaries who are developing and implementing business continuity plans.
The two consultation reports draw on the results of surveys of IOSCO members and stakeholders, and feedback from roundtables organized with industry participants.
A key objective of the reports is to address possible weaknesses or gaps in the business continuity plans and recovery strategies of trading venues and market intermediaries.
Comments should be submitted on or before Saturday 6th June 2015.
Read the documents:
- Mechanisms for Trading Venues to Effectively Manage Electronic Trading Risks and Plans for Business Continuity
- Market Intermediary Business Continuity and Recovery Planning
(TNS) — Critics call it “sharpening the pencil.”
Since the Diablo Canyon nuclear power plant opened on a rocky stretch of California coast in 1985, researchers have discovered three nearby fault lines capable of stronger quakes than the one that struck Napa last year.
And yet the plant’s owner, Pacific Gas and Electric Co., insists that Diablo isn’t in greater danger than previously thought. If anything, it’s in less.
PG&E has, at several times in Diablo’s complicated history, changed the way the company assesses the amount of shaking nearby faults can produce, as well as the plant’s ability to survive big quakes.
World Backup Day 2015 gave managed services providers (MSPs) a great opportunity to educate their customers about the importance of backing up personal data.
And even though this year's event has come and gone, MSPs don't have to wait until 2016 to teach customers about the value of data protection.
For example, a new survey from data backup and disaster recovery (BDR) solutions provider Kroll Ontrack revealed 61 percent of data recovery customers had a backup solution in place at the time of data loss.
The role of the IT manager ain’t what it used to be. There was a time when responsibilities primarily included building a software stack, managing the company’s infrastructure, and operating company-owned equipment. With the rapid adoption of cloud technology, including cloud-based file sharing, those roles and responsibilities have changed dramatically – and it’s critical for MSPs to understand this shift.
IT managers now fill more of a relationship manager role and are ideally viewed as partners by business leaders and department heads. MSPs looking to provide cloud services to clients need to understand this shift in roles in order to work with, and be successful with, the new IT department.
Russ Banham from Forbes recently outlined some of the things IT pros are doing now instead of managing infrastructure. Here are a few things IT managers are doing now that MSPs should be prepared for:
The cloud has given business units within the enterprise a chance to do an end-run around IT when they need quick resources to complete a given task.
The CIO is rightly concerned about this, given the security and governance issues that such free-wheeling activity promotes. But in the front office, the end results of greater productivity and lower costs are hard to resist, particularly once the appropriate agreements are struck with cloud providers that enable broad protection and availability measures for data placed on third-party infrastructure.
It stands to reason, then, that many providers are positioning their services away from the technical elements of the enterprise and more toward the people who actually stand to benefit – the line-of-business managers who are under increasing pressure to get the job done no matter what. This is why we are seeing the rise of cloud services tailored toward key functions, such as marketing, as opposed to generic server and storage resources.
If your system has been hacked, what would your first reaction be?
Speaking for myself, I think I would want to know who did it and figure out how it was done. That’s my personality, to learn the who, what, and why of a situation first, and then focus on the damage control. I suspect that this is human nature for a lot of people, too.
On the other hand, when I put that question to a security professional during an informal conversation, his response was this: Find out what information was hacked and determine whether the FBI needs to be involved immediately. You have to figure the data had already been compromised, he said, so you’ve got to work on minimizing the damage.
According to Edward J. McAndrew, assistant United States attorney and cybercrime coordinator with the U.S. Attorney’s Office in the District of Delaware, and Anthony DiBello, director of strategic partnerships for Guidance Software, the security professional I spoke with is on the right track. When a hack happens, it is important to resist human nature regarding the hacker (at least immediately). Instead, you want to focus on mitigating damage and data loss and providing information to law enforcement so the cops can identify and take action against the bad guys.
The data experts are still sounding the warning bell about data lakes, prognosticating a list of problems that data lakes will cause you.
Meanwhile, word on the street is that enterprises are building data lakes anyway, because everyone else thinks it’s a great idea. This means that many enterprises are now stuck looking for ways out of the prognosticated problems.
It’s going to get interesting for the rest of us—and possibly very expensive for some.
Gartner Director of Public Relations Christy Pettey revisited the problems of data lakes, drawing on Research Director Nick Heudecker’s presentation at the Business Intelligence & Analytics Summit. Pettey’s article identifies the three main problem areas with data lakes:
Concepts and fashions in business come and go. And sometimes they come back again with a new look or a different name. The origin of the DevOps name is simple to guess: it’s a combination of development and operations. The cited advantages of a DevOps approach include a lower failure rate of software releases, a faster time to fix, and a faster time to recover if a new release crashes your server. DevOps is currently a buzzword in IT circles, but despite an inception date of 2008, just how new is it?
While many companies would like to adopt cloud services, some still resist over concerns about data security. Here's how managed service providers (MSPs) can overcome the two main objections to cloud computing and cloud-based file sharing in 2015.
As a recent article from CloudWedge says, “The most cited barrier to entry for cloud into the enterprise continues to be the security concerns involved with an infrastructure overhaul.” The problem with that lingering concern is that an enduring lack of education is hindering the market for MSPs. Yet that same knowledge gap also presents an opportunity.
What these hesitant or resistant organizations really fear is the unknown. And, what they don’t know is what adopting the cloud will mean for their most valuable, most highly-protected data.
(TNS) — So many earthquakes rumble through south-central Kansas these days that the Harper County Herald charts them in each week’s edition the way some papers run baseball box scores.
They run on page 12. Right next to the oil and gas industry news as a not-so-subtle reminder that there’s a likely connection between the quakes and an upswing in drilling operations.
“For a while there, every day, several times a day it was shaking,” said Herald editor-in-chief Kate Catlin.
(TNS) — Haunted by the public health community's failure to prevent or contain Ebola, a top Houston expert is spearheading a government-sponsored effort to prepare North Africa and the Middle East so that the region doesn't spawn the next infectious disease epidemic.
Dr. Peter Hotez, named a U.S. science envoy in December, fears the next virulent outbreak of a neglected tropical disease or emerging infection could strike ISIS-occupied territories in Syria, Iraq, Yemen or Libya, all of which fit the historical mold for such a disaster. He is working to identify institutions in the region that could send scientists to train in Houston, then ramp up back at home to produce vaccines in time to prevent an epidemic.
"We can't wait for catastrophic epidemics to happen and only then start making vaccines," said Hotez, an infectious disease specialist at Baylor College of Medicine and Texas Children's Hospital. "We need to start anticipating the next threat."
(TNS) — The National Oceanic and Atmospheric Administration is testing a new feature that lets people get a look at what kind of damage and storm surges are possible, and it is using Charleston, S.C., for the preliminary model.
The Experimental Storm Surge Simulator shows a street-level view of where water could rise in a storm surge.
"Surveys of the public show there is still a consistent misunderstanding of what the storm surge is, and how deadly it can be," reads the introduction to the app. "In part this is due to the challenge scientists encounter in trying to simplify the complex physics of hurricanes for the public, and in part this is due to poor understanding of flood zone maps that represent the flooding scenario as it might be viewed from above."
Risk professionals aren’t prepared for the age of the customer. Empowered consumers and changing market dynamics are upending longstanding business models and lines of operation, but risk professionals largely stand pat, continuing to neglect risks related to their organizations’ most critical asset – company reputation. Yesterday we published a report on "Brand Resilience" that we hope will help you change that legacy risk mentality.
New survey results suggest some communities are much better prepared for emergencies than others.
The Census Bureau and U.S. Department of Housing and Urban Development released data this week showing the extent to which Americans in different parts of the country have taken measures to prepare for natural disasters or other emergencies. Disaster preparedness questions were a new addition to the 2013 American Housing Survey, intended to assist policymakers and emergency responders with planning.
Nationwide, just over half of households had prepared an emergency evacuation kit. Only a third had communication plans in place, while 37 percent had established emergency meeting locations.
The April 2013 Boston bombing may have marked the first successful terrorist attack on U.S. soil since the September 11, 2001 tragedy, but terrorism on a global scale is increasing.
Yesterday’s attack by the Al-Shabaab terror group at a university in Kenya and a recent attack by gunmen targeting foreign tourists at the Bardo museum in Tunisia point to the persistent nature of the terrorist threat.
Groups connected with Al Qaeda and the Islamic State committed close to 200 attacks per year between 2007 and 2010, a number that grew by more than 200 percent, to about 600 attacks in 2013, according to the Global Terrorism Database at the University of Maryland.
Everyone knew the cloud was going to be big when the term first appeared in tech circles five or so years ago. But the speed at which it is taking over data infrastructure and the enthusiasm it has generated in the enterprise are surprising nonetheless.
As a rule, the enterprise does not alter the fundamentals of its data infrastructure lightly – even the transition from one core switch or centralized server or storage platform to another is a study in careful planning, particularly when a change in product lines or vendors is on the table. So when word came down that organizations could move virtual architectures to entirely new resource sets that are not even controlled by the enterprise, there was every reason to think that maybe this would happen, someday.
But someday seems to be approaching at lightning speed if the latest research is to be believed. Goldman Sachs recently projected that spending on cloud computing and infrastructure will jump from today’s $16 billion – which is already a three-fold increase from the beginning of the decade – to more than $43 billion by 2018. And according to CenturyLink, 2020 will unfold with upwards of 70 percent of IT infrastructure residing in the cloud, nearly the opposite of what it is today. And reports coming in from the field indicate that most organizations expect to see improved service in the cloud compared to legacy infrastructure, as well as lower costs.
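Those projections imply a steep compound growth rate. A quick back-of-the-envelope check (treating “today’s” $16 billion as a 2015 figure, an assumption on my part, since the article doesn’t date it):

```python
# Implied compound annual growth rate (CAGR) from $16B to $43B over 3 years.
def cagr(start: float, end: float, years: int) -> float:
    """Return the annualized growth rate implied by start -> end over `years`."""
    return (end / start) ** (1 / years) - 1

rate = cagr(16, 43, 3)
print(f"implied CAGR: {rate:.0%}")  # roughly 39% per year
```

In other words, the Goldman Sachs forecast assumes cloud spending grows by more than a third every year through 2018.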
Last week we ran the first workshop of the module in our MSc Organisational Resilience that has a specific focus on Security Management. We covered the usual discussions about crime theory and motivational influence before going on to discuss the scope and parameters of security. So far so routine: vanilla security management ideas. Then we began to move onto the more interesting and challenging elements of the workshop, where the contextualised approach was developed. Where does security management ‘fit’ with other resilience disciplines; and what does the critically evaluative approach that we undertake at postgrad level reveal about security’s true profile and organisational relevance?
It is context that is important, and that is something that we can develop and analyse extremely well. How? Because our students and tutors are multi-disciplinary. If you undertake a security management course staffed entirely with criminologists, and all of your students are from a security, military or law enforcement background, you get bias. Bias is not something that we are too fond of, as it tends to skew research and its outcomes. So with, for example, business continuity and emergency and crisis management specialists within our group, we have the opportunity to challenge the rigidity of thought that some see as the underlying trait of many security people. We have covered the theories of crime and we will not cover the processes of security (and its multiple sub-activities) in any more detail from now on. However, we will look at the development of ideas, thoughts and research into security management in the organisation and its resilience; dismantling the behaviours and attitudinal approaches that restrict organisational capability from much wider viewpoints.
For years enterprises have attempted to move away from spreadsheets in favor of enterprise resource planning (ERP) systems, accounting systems and various other software systems and applications. Yet, no matter how hard organizations try, it seems spreadsheets will not go away.
Besides being easy to use and accessible, spreadsheets are tools people are comfortable with. When they have a job to do, spreadsheets are there—not waiting for IT. Yet when left unmanaged, the risks associated with spreadsheets can prove costly, resulting in bad business decisions, regulatory penalties, and even lawsuits. In some instances, unmanaged spreadsheets are costing organizations millions of dollars.
For example, last October a spreadsheet mistake cost Tibco shareholders $100 million during a sale to Vista Equity Partners. Goldman, Tibco’s adviser, used a spreadsheet that overstated the company’s share count in the deal. This error led to a miscalculation of Tibco’s equity value, a $100 million savings for Vista and a slightly lower payment to Tibco’s shareholders.
There are many products and services on the market today designed to help notify the right people with (hopefully) the right messages in the event of disruption of day-to-day operations.
Yet we in Business Continuity (and Emergency Management, Crisis Management and ITDR) spend little time, money or effort streamlining how we receive intelligence about events that could potentially disrupt our businesses. Why all the emphasis on outgoing information yet so little on incoming intelligence?
We already know what kind of intelligence we should be anticipating. After all, successful Business Continuity Management and Risk Management uncover knowledge of events that may negatively impact day-to-day operations. And there are many readily available sources which can alert us to those potential, impending or current events for both personal and business use.
If the title of this post makes you go cross-eyed, don’t worry. All will become clear. Let’s explain. Active/active IT configurations consist of computer servers that are connected in a network and that share a common database. The ‘active/active’ part refers to the capability to handle server failure. First, if one server fails, it does not affect the other servers. Second, users on a server that fails are then rapidly switched to another server that works. The database that the servers use is also replicated so that there is always one copy available. Now for the other two acronyms: HA stands for high availability; DR (of course) for disaster recovery. It is DR that is more affected in this case.
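As a rough illustration of the failover behaviour described above, here is a minimal sketch (hypothetical server names, a boolean health flag standing in for a real health check; an actual active/active cluster would add load balancing and database replication):

```python
# Minimal sketch of active/active failover: requests are routed to any
# healthy server, and users on a failed server are switched to a survivor.

class Server:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True  # stands in for a real health check

class ActiveActiveCluster:
    def __init__(self, servers: list):
        self.servers = servers

    def route(self, user: str) -> str:
        # Pick the first healthy server; in practice a load balancer
        # would spread users across all healthy nodes.
        for server in self.servers:
            if server.healthy:
                return server.name
        raise RuntimeError("no healthy servers: time for disaster recovery")

cluster = ActiveActiveCluster([Server("node-a"), Server("node-b")])
print(cluster.route("alice"))        # node-a
cluster.servers[0].healthy = False   # simulate node-a failing
print(cluster.route("alice"))        # alice is switched to node-b
```

The point of the sketch is the second `route` call: a failed node does not take its users down with it, which is precisely the HA property the acronym promises.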
(TNS) — After nearly seven years without a large hurricane threatening the entire Gulf Coast from Texas to Florida, emergency planners say they're having a difficult time getting residents to prepare for the upcoming season.
"It's human nature," said Rick Knabb, director of the National Hurricane Center. When hurricanes don't happen, people forget about them.
This week the country's leading emergency managers and hurricane officials are meeting in Austin at the annual National Hurricane Conference, and this year the buzz has been about the recent lull in Gulf of Mexico activity and how that has made preparations for the season, which begins June 1, more difficult.
Recently, President Obama issued an executive order to address cyberspying and other maliciously intended cyber activities conducted by hackers and spies in foreign countries. The order will assess penalties for overseas cyberspying and for those who knowingly benefit from it. In an email message to me, Greg Foss, senior security researcher with LogRhythm, called it an “interesting move,” adding:
This is primarily because attribution within the information security space is not nearly as easy as it sounds. It is trivial for hackers to pivot through other countries and misplace blame in order to create the illusion that an attack originated from a specific location. Malware can and will be created that contains false data, to shift culpability.
As I’ve mentioned often in the past, the enterprise is not transitioning to the cloud, but many clouds. And with the advanced automation systems hitting the channel, it will soon be a relatively simple matter to deploy workloads to the appropriate cloud with little or no oversight from users or IT managers.
But how do you determine which cloud is the right cloud? And how exactly will all these clouds work together to produce at least the semblance of an integrated data environment?
According to EMC’s Peter Cutts, the either/or debate surrounding public and private clouds is over. Enterprises that have chosen both, in fact, are likely to see significant advantages over those who restrict themselves to pure-play infrastructure. The public cloud’s scalability cannot be denied, of course, but neither can the security, governance and performance of private infrastructure. In a hybrid scenario, the enterprise has the ultimate in flexibility when it comes to compiling the optimal resources for the business objective at hand.
Confusion surrounds the topic of how to bring some sense of order to Big Data. Depending on the day, the discussion might come down to data quality, data governance or master data management.
Here’s a hint: One of these is much less necessary than the others. You should always understand the quality of your data — big or otherwise. And it’s just basic legal smarts to create governance rules about data lest you fall afoul of regulatory compliance.
But when it comes to master data management and Big Data, you may be better off leaving each to its own. If you’re not clear on why, I recommend this post by veteran integration technologist Kumar Gauraw, who takes you through his thought process on why MDM and Hadoop don’t match.
On this day we celebrate the greatest upset in the history of the NCAA Basketball Tournament, when Villanova beat Georgetown for the 1985 national championship. Georgetown was the defending national champion and had beaten Villanova at each of their regular season meetings. In the final the Wildcats shot an amazing 79% from the field, hitting 22 of 28 shots plus 22 of 27 free throws. Wildcats forward Dwayne McClain, the leading scorer, had 17 points and 3 assists. The Wildcats’ 6’ 9” center Ed Pinckney outscored the Hoyas’ 7’ center, Patrick Ewing, 16 points to 14 and 6 rebounds to 5 and was named MVP of the Final Four. It was one of the greatest basketball games I have ever seen and certainly one for the ages.
I thought about this game when I read an article in the most recent issue of Supply Chain Management Review by Jennifer Blackhurst, Pam Manhart and Emily Kohnke, entitled “The Five Key Components for Supply Chain”. In their article the authors asked what it takes to create meaningful innovation across supply chain partners, finding that “Our researchers identify five components that are common to the most successful supply chain innovation partnerships.” The reason innovation in the supply chain is so important is that it is an area where companies can not only affect costs but can also move to gain a competitive advantage. To do so, companies need to see their supply chain third parties as partners and not simply as entities to be squeezed for cost savings. By doing so, companies can use the supply chain in “not only new product development but also [in] process improvements”.
BSI has published a white paper that explores the role of metrics in the ISO 22301 business continuity standard and aims to help people understand the standard’s BCM measurement requirements.
The executive summary of the 'Measurement matters: the role of metrics in ISO 22301' white paper states that ISO 22301 recognizes the importance of having accurate performance information, laying down requirements for ‘monitoring, measurement, analysis and evaluation’. However, the emphasis on monitoring performance, measurement and metrics in ISO 22301 has caused confusion in some organizations. The white paper clarifies the requirements around measurement in ISO 22301. In addition, three BSI clients describe how they have approached these requirements.
Read the white paper (PDF).
During the first quarter of 2015 Continuity Central conducted an online survey asking business continuity professionals about their expectations for the rest of 2015.
239 responses were received, with the majority (82.8 percent) coming from large organizations (companies with more than 250 employees). The highest percentage of respondents was from the United States (35.6 percent), followed by the UK (24.7 percent). Significant numbers of responses were also received from Australia and New Zealand (6.7 percent), Canada (5.9 percent) and India (4 percent).
SEATTLE — The Ebola epidemic in West Africa has killed more than 10,000 people. If anything good can come from this continuing tragedy, it is that Ebola can awaken the world to a sobering fact: We are simply not prepared to deal with a global epidemic.
Of all the things that could kill more than 10 million people around the world in the coming years, by far the most likely is an epidemic. But it almost certainly won’t be Ebola. As awful as it is, Ebola spreads only through physical contact, and by the time patients can infect other people, they are already showing symptoms of the disease, which makes them relatively easy to identify.
We all know that we need to exercise our business continuity plans; it’s the only way to find out whether they will work. The exception, of course, is a live incident, but the middle of a disaster is never a good time to find out your plan doesn’t work. But what type of exercises should you run, how often should you run them, how do you plan them and how do you assess them?
These are all important questions and are all vital to ensuring that you have an effective business continuity programme in place, one that will provide reassurance to top management that, in the event of a crisis, the organization will be able to deal with it.
This is why the Business Continuity Institute has published a new guide that will assist those who have responsibility for business continuity to manage their exercise programme. ‘The BCI guide to… exercising your business continuity plan’ explains what the main types of exercises are and in what situation it would be appropriate to use them. It explains how to plan an exercise and what needs to be considered when doing so, from the setting of objectives to conducting a debrief and establishing whether those objectives have been met.
Following feedback from those working in the industry, testing and exercising was chosen as the theme for Business Continuity Awareness Week and the BCI is keen to highlight just how important it is to effective business continuity. A recent study showed that nearly half of respondents to a survey had not tested their plans over the previous year and half of those had no plans to do so over the next twelve months. This guide is intended to make it easier for people to develop an exercise programme and demonstrate that it does not have to be an onerous task to do so.
Have you ever experienced severe diarrhea or vomiting? If you have, it’s likely you had norovirus. If you haven’t, chances are you will sometime in your life. Norovirus is a very contagious virus that anyone can get from contaminated food or surfaces, or from an infected person. It is the most common cause of diarrhea and vomiting (also known as gastroenteritis) and is often referred to as food poisoning or stomach flu. In the United States, a person is likely to get norovirus about 5 times during their life.
Norovirus has always caused a considerable portion of gastroenteritis among all age groups. However, improved diagnostic testing and gains in the prevention of other gastroenteritis viruses, like rotavirus, are beginning to unmask the full impact of norovirus.
For most people, norovirus causes diarrhea and vomiting that last a few days, but the symptoms can be serious for some people, especially young children and older adults. Each year in the United States, norovirus causes 19 to 21 million illnesses and contributes to 56,000 to 71,000 hospitalizations and 570 to 800 deaths.
While there is hope for a norovirus vaccine in the future, there are steps you can take now to prevent norovirus.
Additionally, norovirus is increasingly being recognized as a major cause of diarrheal disease around the globe, accounting for nearly 20% of all diarrheal cases. In developing countries, it is associated with approximately 50,000 to 100,000 child deaths every year. Because it is so infectious, hand washing and improvements in sanitation and hygiene can only go so far in preventing people from getting infected and sick with norovirus.
This is why efforts to develop a vaccine are so important and why in February 2015 the Bill and Melinda Gates Foundation, CDC Foundation, and CDC brought together norovirus experts from around the world to discuss how to make the norovirus vaccine a reality. Participants were from 17 countries on 6 continents and included representatives from academia, industry, government, and private charitable foundations.
Important questions remain regarding how humans develop immunity to norovirus, how long immunity lasts, and whether immunity to one norovirus strain protects against infection from other strains. There are also relevant questions as to how a norovirus vaccine would be used to prevent the most disease and protect those at highest risk for severe illness. These are all critical questions for a vaccine, and this meeting was a step toward finding answers to these questions and making a norovirus vaccine a reality.
For more information on norovirus visit CDC’s webpage: http://www.cdc.gov/norovirus/.
It seems like the breach cycle goes in full circles.
When data breaches began to make the news, the health care industry was hardest hit. Eventually, attacks against the health care industry, while they didn’t disappear, moved off the headlines to make room for breaches in the financial, retail and entertainment industries. But then came the Anthem breach, and now the announcement that Premera Blue Cross was hacked, with possibly millions of customers’ medical data exposed. I wouldn’t be surprised to see a flurry of news on health care-related attacks in the coming months.
The reasons are simple. First, health care organizations hold so much data that is valuable on the black market. You are looking at names, birthdates, addresses, Social Security numbers, insurance numbers, medical records and more.
Premera Blue Cross, a health insurer based in the Seattle suburbs, announced Tuesday it was the victim of a cyberattack that may have exposed the personal data of 11 million customers — including medical information.
The company said it discovered the attack on Jan. 29 but that hackers initially penetrated their security system May 5, 2014. The attack affected customers of Premera, which operates primarily in Washington; Premera's Alaskan branch; and its affiliated brands Vivacity and Connexion Insurance Solutions, according to a Web site created by the company for customers. "Members of other Blue Cross Blue Shield plans who have sought treatment in Washington or Alaska may be affected," according to the site.
The company said its investigation has not determined if data was removed from their systems. But the information attackers had access to may have included names, street addresses, e-mail addresses, telephone numbers, dates of birth, Social Security numbers, member identification numbers, medical claims information and bank account information, according to the company's Web site. The company said it does not store credit card information.
Think you know it all when it comes to business continuity? That’s great. Think you can store all that knowledge? Think again. The way most information technology has developed, it’s great for storing information (bunches of related data), but not so hot for knowledge (insights and deeper relationships). There is no shortage of information to define business continuity, list its component parts, describe planning methodologies and offer case studies. You can access that information, transfer it and store it on your PC or mobile computing device. The problem is in storing your understanding of that material, and the model you develop to see it all as a connected whole.
Zetta.net's "The State of Backup Survey" of 425 IT professionals revealed nearly 97 percent of respondents said they currently are using some form of disaster recovery (DR). Additionally, 31 percent said they plan to leverage a new DR method in the future, and more than half of these respondents intend to use cloud-based DR solutions. Here's everything you need to know about Zetta.net's new survey.
New research from Zetta.net showed that the demand for cloud-based backup and disaster recovery (BDR) solutions from managed service providers (MSPs) could increase soon.
No drought relief in sight for California, Nevada or Oregon this spring
According to NOAA’s Spring Outlook released today, rivers in western New York and eastern New England have the greatest risk of spring flooding in part because of heavy snowpack coupled with possible spring rain. Meanwhile, widespread drought conditions are expected to persist in California, Nevada, and Oregon this spring as the dry season begins.
“Periods of record warmth in the West and not enough precipitation during the rainy season cut short drought-relief in California this winter and prospects for above average temperatures this spring may make the situation worse,” said Jon Gottschalck, chief, Operational Prediction Branch, NOAA’s Climate Prediction Center.
NOAA’s Spring Outlook identifies areas at risk of spring flooding and expectations for temperature, precipitation and drought from April through June. The Spring Outlook provides emergency managers, water managers, state and local officials, and the public with valuable information so they will be prepared to take action to protect life and property.
Spring Outlook 2015. (Credit: NOAA)
Record snowfall and unusually cold temperatures from February through early March retained a significant snowpack across eastern New England and western New York, raising flood concerns. Significant river ice across northern New York and northern New England increases the risk of flooding related to ice jams and ice jam breakups. Rivers in these areas are expected to exceed moderate flood levels this spring if a quick warm-up combines with heavy rainfall.
There is a 50 percent chance of exceeding moderate flood levels in small streams and rivers in the lower Missouri River basin in Missouri and eastern Kansas, which typically experience minor to moderate flooding during the spring. This flood potential will be driven by rain and thunderstorms.
Moderate flooding has occurred in portions of the Ohio River basin, including the Tennessee and Cumberland rivers from melting snow and recent heavy rains. This has primed soils and streams for flooding to persist in Kentucky, southern Illinois, and southwest Indiana with the typical heavy spring rains seen in this area.
Minor river flooding is possible from the Gulf Coast through the Ohio River Valley and into the Southeast from Texas eastward and up the coast to Virginia. The upper Midwest eastward to Michigan has a low risk of flooding thanks to below normal snowfall this winter. However, heavy rainfall at any time can lead to flooding, even in areas where the overall risk is considered low.
El Niño finally arrived in February, but forecasters say it’s too weak and too late in the rainy season to provide much relief for California, which will soon enter its fourth year of drought.
Drought is expected to persist in California, Nevada, and Oregon through June with the onset of the dry season in April. Drought is also forecast to develop in remaining areas of Oregon and western Washington. Drought is also likely to continue in parts of the southern Plains.
Forecasters say drought improvement or removal is favored for some areas in the Southwest, southern Rockies, southern Plains, and Gulf Coast, while drought development is more likely in parts of the northern Plains, upper Mississippi Valley and western Great Lakes, where recent dryness and an outlook favoring below-average precipitation coincide.
Current water supply forecasts and outlooks in the western U.S. range from near normal in the Pacific Northwest, northern Rockies, and Upper Colorado to much below normal in California, the southern Rockies, and portions of the Great Basin.
If the drought persists as predicted in the Far West, it will likely result in an active wildfire season, continued stress on crops due to low reservoir levels, and an expansion of water conservation measures. More information about drought can be found at www.drought.gov.
Above-average temperatures are favored this spring across the Far West, northern Rockies, and northern Plains eastward to include parts of the western Great Lakes, and for all of Alaska. Below normal temperatures are most likely this spring for Texas and nearby areas of New Mexico, Colorado, Kansas, and Oklahoma.
For precipitation, odds favor drier than average conditions for parts of the northern Plains, upper Mississippi Valley, western Great Lakes, and Pacific Northwest. Above average precipitation is most likely for parts of the Southwest, southern and central Rockies, Texas, Southeast, and east central Alaska. Hawaii is favored to be warmer than average with eastern areas most likely wetter than average this spring.
Now is the time to become weather-ready during NOAA’s Spring Weather Safety Campaign which runs from March to June and offers information on hazardous spring weather -- tornadoes, floods, thunderstorm winds, hail, lightning, heat, wildfires, and rip currents -- and tips on how to stay safe.
In 2010, Google’s then-CEO Eric Schmidt gave a presentation at the annual Techonomy conference. He told attendees about Android’s phenomenal growth rate, but the real bombshell he shared was an interesting fact about data management.
From the beginning of human history--cave paintings until 2003--human beings created five exabytes of data. Total. That’s all the symphonies, all the movies, all the books--everything. Now we are replicating that every two days. That’s “Big Data.”
Even more staggering, about 80% of all the data we’ve ever created was generated in the past two years, and 90% of that is file, or unstructured, data. With data volumes expected to double every two years over the next decade, many IT leaders are feeling the pain of an infrastructure that isn’t scaling for capacity and performance.
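To get a feel for how quickly "doubling every two years" compounds, here is a brief back-of-the-envelope sketch. The starting volume (4.4 zettabytes) is an assumption for illustration only, not a figure from the article:

```python
# Illustrative projection: data volume doubling every two years.
# The starting volume below is an assumed figure, not from the article.

def projected_volume(start_zb: float, years: int, doubling_period: int = 2) -> float:
    """Return projected data volume after `years`, doubling every `doubling_period` years."""
    return start_zb * 2 ** (years / doubling_period)

start = 4.4  # assumed global data volume in zettabytes at year zero
for years in (2, 4, 6, 8, 10):
    print(f"After {years:2d} years: {projected_volume(start, years):7.1f} ZB")
```

Under these assumptions the volume grows 32-fold in a decade, which is why infrastructure sized for today's capacity falls behind so quickly.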
DDoS attacks are now one of the most common and affordable cyberweapons. They are used by unscrupulous competitors, sinister extortionists or just everyday cyber-vandals. More and more companies, regardless of their size or business, are encountering this threat. And, according to the results of a survey conducted by Kaspersky Lab and B2B International, the majority of companies believe that revenue and reputation losses are the most damaging consequences of a DDoS attack.
According to the figures, companies regard lost business opportunities – the loss of contracts or ongoing operations that generate guaranteed income – as the most frightening consequence of a DDoS attack: 26 percent of companies that encountered DDoS attacks regarded this as the biggest risk.
Reputational risks (23 percent) were viewed as the next most frightening consequence, likely because a negative customer or partner experience can drive away future contracts or sales. Losing current customers who could not access the anticipated service due to a DDoS attack was in third place, named by 19 percent of respondents. Technical issues were at the bottom of the pile: 17 percent of respondents identified the need to deploy back-up systems to keep operations online as the most undesirable consequence, followed by the costs of fighting the attack and restoring services.
The research also revealed that respondents from companies in different fields take different views of the consequences of DDoS attacks. For example, industrial and telecoms companies, as well as e-commerce and utilities and energy organizations, tend to rate reputational risks ahead of lost business opportunities. In the construction and engineering sector there is more concern about the cost of setting up back-up systems, perhaps because larger companies face higher expenditure on this kind of system.
DDoS attacks on company resources are becoming a costly problem but only 37 percent of the organizations surveyed said they currently have measures in place to protect against them.
“People who have not yet faced a particular threat often tend to underestimate it while those who have already experienced it understand which consequences might be the most damaging for them. However, it makes little sense to wait until the worst happens before acting – this can cost companies a lot, and not only in financial terms. That is why it is important to evaluate all possible risks in advance and take appropriate measures to protect against DDoS attacks”, said Evgeny Vigovsky, Head of Kaspersky DDoS Protection, Kaspersky Lab.
If an organization’s backup system was designed before data volumes began to grow exponentially – or before IT infrastructures became highly virtualized – the company may find itself in a tight spot. Modernization is the key, and Logicalis US has identified six benefits CIOs can realize by updating their organization’s data storage and backup infrastructure.
"Working with an outdated backup system can create significant challenges in IT service levels,” says Bill Mansfield, solution architect, Logicalis US. “One sign it’s time to modernize your storage and backup/recovery infrastructure is when it’s too difficult to manage - you have to add staff to manage different backup products for physical and virtual servers, or you have to constantly fight fires to keep backups working. Another sign is when it’s just not working anymore. You can’t meet backup windows or recovery objectives because your backup techniques or storage are outdated, or your virtual environment’s performance degrades during routine backup operations. These are warning signs that you are working too hard to maintain an infrastructure that isn’t up to par, and that you could experience a significant loss if a disaster were to occur.”
ScaleArc has released the results of a new survey into 'The State of Application Uptime in Database Environments'. The 451 Research survey solicited responses from more than 200 enterprises of varying size, across a wide range of vertical markets, to learn more about the impact that an organization's underlying database infrastructure has on application availability.
Specifically, respondents were asked about their database infrastructure and its effect on both planned and unplanned downtime. The survey reveals key insights into the IT decision-making process, including the risks organizations are willing to take when choosing between application availability and security.
Commenting on the survey, Matt Aslett, research director at 451 Research, said: "As enterprises struggle to improve application availability, understanding how the database affects application uptime is critical. The survey results indicate that enterprises cannot afford to maintain the status quo when it comes to database availability. Having your most critical applications be offline for 20 minutes to three hours, more than once a month, should not be acceptable to any enterprise today."
Key insights from the survey include:
- Database failover takes down the applications: for the majority of organizations, users see application errors for the duration of an unplanned outage. Failover is manual in most cases, and applications have to be restarted 62 percent of the time.
- Database outages are too frequent and too long: too frequently, the database is the source of unplanned downtime. A surprising 65 percent of all enterprises surveyed experience between 20 minutes and 3 hours of downtime, on average, for their most critical applications.
- Database maintenance crushes resources: more than 70 percent of respondents reported that they performed maintenance updates on a weekly or monthly basis. Those surveyed also indicated that key development resources are pulled in to assist with maintenance tasks 50 percent of the time.
- Deferred ‘security patching’ is rampant, placing enterprises at risk: more than 60 percent of respondents postponed critical security patches because of concerns over application downtime.
For the full survey report, please click here (registration required).
When it comes to damaging cyberattacks, a horror movie cliche may offer a valuable warning: the call is coming from inside the building.
According to PwC’s 2014 U.S. State of Cybercrime Survey, almost a third of respondents said insider crimes are more costly or damaging than those committed by external adversaries, yet overall, only 49% have implemented a plan to deal with internal threats. Development of a formal insider risk-management strategy seems overdue, as 28% of survey respondents detected insider incidents in the past year.
In the recent report “Managing Insider Threats,” PwC found the most common motives and impacts of insider cybercrimes are:
Despite some early difficulties configuring and deploying private clouds, the enterprise is still gung ho for them as a way to have a little piece of the cloud close to home for the most critical data.
But the knock on private clouds is undeniable: Unless you are willing to set up a vast array of modular infrastructure, private resources simply do not scale as well as public ones. And if a cloud can’t scale, is it really of much use?
To the first point, a private cloud may not offer “unlimited scalability” the way AWS does, but there are still plenty of ways that scalability can be architected into local resources to provide a decently large data environment. Infoblox is currently working on private cloud scalability from the networking side, offering the new Cloud Network Automation stack for its NIOS 7.0 operating system. The idea is to provide a single management console for VMware, Microsoft, OpenStack and other platforms as they make the transition from pilot programs to full, multiplatform production environments. The system relies on an advanced GUI and a scalable virtual appliance architecture that handles the management of IP addresses and DNS/DHCP services, all backed by specialized adapters that enable consistent operation across multi-vendor platforms.
“Every company also needs to be a data company,” Leo Mirani, a reporter for the London-based Quartz, warned last fall.
I love that line, and once agreed. But in the past few months, I’ve had cause to rethink that premise and have decided that it’s not true for two reasons.
First, it ignores the ugly truth that not every company can be a data company. Everyone loves a success story, especially start-ups and vendors, so you don’t often hear about the failures. Companies that waste time and money trying to squeeze value from Big Data or other data projects don’t hire PR firms to put out press releases. But these stories exist, lurking in the subtext of data company success stories.
This GreenTechMedia story on utility data analytics is a good example. It’s a success story about start-up utility data analytics companies, but lurking among the unfathomably large market numbers and tech descriptions, our second story emerges:
The virus escaped control as countries and global agencies failed to acknowledge and contend with the magnitude of its spread. Treatment centers were overwhelmed. Sick people died on city streets, and new cases multiplied inside health care facilities, killing a significant proportion of the already inadequate health work force of the three most affected countries — Liberia, Sierra Leone and Guinea.
However, after two American aid workers and a traveler to Nigeria fell ill last summer, setting off a panic, a huge global initiative to combat Ebola swung into place. The effort has been messy, inefficient and expensive, often lagging the epidemic’s twists in tragic ways.
But the effort has also established expertise that may be built upon to prevent similar tragedies in the future — and shown personal and institutional bravery.
(TNS) — The man wasn’t any sicker at first than many of the other patients who arrive at University of Kansas Hospital, infectious disease specialist Dana Hawkinson recalls.
But he went downhill fast. Fever spiking, kidneys failing, breath so short he needed supplemental oxygen.
He had been bitten by ticks while working outdoors, so he probably had one of the many diseases commonly spread by bug bites in the Midwest, Hawkinson figured. But the tests the doctor ran — for ehrlichiosis, Rocky Mountain spotted fever, Lyme disease, West Nile virus — all turned up negative.
Even though the U.S. government has broadened its pursuit of corruption, only about 9% of organizations see Foreign Corrupt Practices Act monitoring as a top concern, according to “Bribery and Corruption: The Essential Guide to Managing the Risks” by ACL.
Many companies have policies against corruption, but it still exists. Although remaining competitive can be difficult in some parts of the world that see payments, gifts and consulting fees as part of doing business, companies need to identify these risks and manage them across the organization. There is much at stake, as penalties are rising and more companies globally are being fined, the study found.
According to ACL, if a formalized ERM process exists within an organization, then the anti-bribery and anti-corruption (ABAC) risk assessment process should ideally be carried out within that ERM framework. In some organizations, however, the overall risk management process is fragmented, meaning that the risks of bribery and corruption are considered in relative isolation. Whichever approach is taken within an organization, the process of defining the risks should involve individuals with sufficient knowledge of the regulations and ways the business actually works.
Unstructured data received a boost from Big Data technologies such as Hadoop. Finally, organizations had an inroad into the estimated 70 to 80 percent of their data that was previously largely unusable.
But Big Data isn’t the last word when it comes to leveraging unstructured data. A recent Baseline Magazine piece outlines the options for obtaining new business insights by combining structured data with unstructured data.
Blueocean Market Intelligence’s Senior VP of Analytics, Durjoy Patranabish, and Shreya Sharma, analytics consultant, collaborated to write the article. The consultancy focuses on solutions in marketing, life sciences, digital and, of course, Big Data. The resources section of Blueocean’s site is worth exploring in its own right since it includes quite a few papers, studies and webinars.
How often have you heard the expression ‘no pain, no gain’? These four words sum up the idea that if you are to receive benefits, then you must suffer (or at least make an effort). Alternatively, you could take it to mean that if you don’t make an effort, you can’t expect benefits. An example in the domain of disaster recovery might be ‘if you skip regular data backups (no effort), you’ll fail when your hard disk crashes (no benefit)’. The problem comes when people use chop logic to infer from ‘no pain, no gain’ that ‘if pain, then gain’ is true as well.
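The inference error described above – sliding from ‘no pain, no gain’ to ‘if pain, then gain’ – is the classic fallacy of the converse, and a brute-force truth-table check (a minimal illustrative sketch, not from the article) makes it concrete:

```python
from itertools import product

# "No pain, no gain" formalized: (not pain) -> (not gain),
# which is logically equivalent to gain -> pain.
# The chop-logic leap is to conclude pain -> gain, which does NOT follow.

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q."""
    return (not p) or q

# Find assignments where the premise holds but the leap fails.
counterexamples = [
    (pain, gain)
    for pain, gain in product([False, True], repeat=2)
    if implies(not pain, not gain) and not implies(pain, gain)
]
print(counterexamples)  # pain without gain satisfies the premise
```

The single counterexample is pain=True, gain=False: you can suffer the disk crash (pain) without ever having skipped backups paying off (gain), so effort alone guarantees nothing.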
Fifteen or twenty years ago, when you thought about record retention and electronic communications, “electronic mail,” or email, was the only thing to worry about. Back then, firms and the regulators scrambled to interpret how to apply existing rules pertaining to communications to the new modality of email. Nowadays, email is just one piece of a more complex communications landscape. Companies are deploying new forms of communication and the pace is only accelerating. Your firm might be using Unified Communications platforms like Microsoft Lync and IBM Sametime, collaboration tools like Chatter, IBM Connections, or Jive, or IM networks such as corporate Lync IM or perhaps public-facing ones such as Yahoo! Messenger. Your firm may even be using community networks geared towards specific industries, such as Reuters and Bloomberg, widely used in the financial services sector, or ICE within the energy markets. And, of course, your regulated users, such as financial advisors, may also be clamoring to use social networking sites such as Facebook, LinkedIn, Twitter, YouTube, Google+, Pinterest, Instagram to prospect and conduct business.
A new report by the Business Continuity Institute, supported by certification body NQA, has shown that 6 out of 10 organizations adopt ISO 22301, the international standard for business continuity management. Organizations with strong top management commitment to standardising business continuity practice are four times more likely to adopt ISO 22301 than those without such commitment.
There are many reasons why an organization would want to embrace ISO 22301, most notably that it provides assurance of continued service, with 61% of respondents identifying this as a significant reason. By certifying to the Standard, organizations can provide reassurance to their stakeholders that, in the event of a crisis, they will still be able to function. Other reasons include:
- Reputation and brand management (48%)
- Reduced risk of business interruption (48%)
- Greater resilience against disruption (45%)
- Quicker recovery from interruption (44%)
There are of course barriers that prevent such commitment, and those identified were resource constraints (25%), complexity of implementation (19%) and lack of top management buy-in (18%). It is perhaps encouraging that these barriers each had relatively low percentages, suggesting that they aren’t that widespread.
If reassurance is one of the primary reasons to commit to the Standard, then one can only wonder why many organizations don’t expect the same of their suppliers, as supply chains can only be as strong as their weakest link. It could be considered alarming that 82% of respondents stated that their organization does not seek certification to the Standard from their suppliers.
Deborah Higgins MBCI, Head of Learning and Development at the Business Continuity Institute, commented: “It is encouraging that uptake is beginning to increase as organizations recognise the value of investing in an effective business continuity programme, however there is still a lot of work to be done, most notably when it comes to persuading other organizations within the supply chain to also adopt ISO 22301.”
Kevan Parker, Head of NQA, stated “ISO 22301 provides an excellent framework for building organizational resilience and the benefits of adoption are becoming increasingly recognised. This is very positive but, as highlighted, a supply chain is only as strong as the weakest link; it is a responsibility of those with ISO 22301 certification to lead their peers towards adoption and elevate organizational resilience to total supply chain resilience.”
At the DRJ Spring World Conference in Orlando, FL on Tuesday 24th March, the Business Continuity Institute recognized the outstanding contribution made by a select group of individuals and organizations from across the continent as they presented their annual BCI North America Awards.
The BCI Awards consist of eight categories – seven of which are decided by a panel of judges with the winner of the final category (Industry Personality of the Year) being voted upon by BCI members from all over the United States and Canada. The number of nominations for each category was high, as was the standard of the nominations, leaving the judges with a difficult job to do in choosing the winners. But choose they must, and those who went home celebrating were:
Continuity and Resilience Consultant of the Year 2015
Roberta Atabaigi MBCI of KPMG
Continuity and Resilience Professional (Private) of the Year 2015
Cheryl Hirst of Erie Insurance Group
Continuity and Resilience Newcomer of the Year 2015
Garrett Hatfield of MetLife
Continuity and Resilience Team of the Year 2015
ETS Enterprise Resiliency Department Educational Testing Service
Continuity and Resilience Provider of the Year 2015
Continuity and Resilience Innovation of the Year 2015
Send Word Now
Most Effective Recovery of the Year 2015
Industry Personality of the Year 2015
Brian Zawada FBCI, Chairman of the US Chapter of the BCI, said: “Congratulations to all the winners who have shown themselves to be an asset to the profession. The high caliber of entries to these awards demonstrates the capability that exists within the business continuity and resilience industry, meaning that many C-Suite executives need not worry about whether their organization can manage a crisis, they can worry about other things instead.”
The BCI North America Awards are one of seven regional awards held by the BCI, which culminate in the annual Global Awards held in November during the Institute’s annual conference in London, England. All winners in the BCI North America Awards are automatically entered into the Global Awards.
Could the age of the virtual desktop finally have arrived? Rising demand for virtual desktops could create new opportunities for managed service providers (MSPs) over the next few years, according to a new survey from managed service provider Evolve IP.
Nearly 37 percent of organizations said they have implemented or tested some level of virtual desktops, while almost 33 percent noted that they plan on doing so in the next three years, according to the study, 2015 Evolve IP State Of The Desktop.
The survey also showed that nearly 98 percent of virtual desktop users are "very pleased" with the technology.
After a harsh, cold winter, the clear, sunny skies and rising temperatures of spring are much appreciated. Businesses, however, also need to be ready for the possibility of flooding that may result from heavy rains combined with melting ice and snow.
The National Oceanic and Atmospheric Administration (NOAA) notes that flooding causes more damage in the United States than any other weather-related event. On average, flooding causes $8 billion in damages and 89 fatalities annually. Warming weather also often brings ice jams along rivers, streams and creeks, which can cause further flooding.
“In addition to the threat of floods that occur when severe weather hits, snow and ice have been piling up in many areas of the U.S. this winter,” Bill Boyd, senior vice president with CNA Risk Control, said in a statement. “When temperatures rapidly increase, so does the rate at which snow and ice melt…” which can create serious problems for those heavily affected this winter. “As spring temperatures begin to rise, it’s imperative for businesses to create emergency plans for flooding, which could cause costly property damage or disrupt operations,” he said.
The results of the research show that few businesses have comprehensive workforce strategies, with the majority taking a piecemeal approach to planning human capital. Only 15% of organizations polled said there is a clear link between their workforce planning and their overall strategic business plan, showing that where workforce plans exist, they often do so in isolation.
Research conducted by the Business Continuity Institute has shown that workforce planning is also a concern for business continuity professionals, with the results of a recent survey conducted for the annual Horizon Scan report revealing that a third of respondents consider availability of talent/key skills to be a concern for organizations, while nearly two-thirds consider the loss of a key employee as an issue that organizations need to be aware of.
Organizations tend to react to workforce challenges, rather than plan for them. An alarming 47% of those surveyed by CRF said that recruitment forecasts for the next 12 months have not been undertaken in their organizations. This reluctance to identify workforce risks leads to poor succession planning, insufficient anticipation of recruitment needs and a lack of understanding of future skill requirements.
David Knight, Associate Partner at KPMG comments: “One of the biggest issues that business will face in the coming years is the management of human capital. Poor planning can make it difficult to adapt to changing market conditions, as well as retain talent in competitive industries. The ability to forecast skills requirements, pre-empt workforce risks and deploy resources efficiently will underpin financial success for organisations in future.”
Mike Haffenden, from the Corporate Research Forum, comments: “In today‘s world of ever-increasing complexity, it is even more important to prepare for an uncertain future armed with a flexible plan, rather than simply reacting to unforeseen events. Adopting a strategic approach to workforce planning will leave organisations better prepared to deal with a dynamic and fast-changing environment.”
With 81% of large UK businesses and 60% of small companies suffering a cyber security breach in the last year, a new report published by the UK Government and Marsh entitled UK Cyber Security: The Role of Insurance in Managing and Mitigating the Risk has highlighted the exposure of firms to cyber attacks among their suppliers.
Cyber threats are estimated to cost the UK economy billions of pounds each year with the cost of cyber attacks nearly doubling between 2013 and 2014. The report found that, while larger firms have taken some action to make themselves more cyber-secure, they face an escalating threat as they become more reliant on online distribution channels and as attackers grow more sophisticated. The report issues a call to arms for insurers and insurance brokers to simplify and raise awareness of their cyber insurance offering and ensure that firms understand the extent of their coverage against cyber attack.
The cyber threat is also very real for business continuity professionals, with the Business Continuity Institute’s latest Horizon Scan report highlighting that cyber attacks are now perceived to be the number one threat to organizations. 82% of respondents to a survey expressed either concern or extreme concern at the prospect of this threat materialising.
The report recommends that organizations stop viewing cyber largely as an IT issue and focus on it as a key commercial risk affecting all parts of their operations, and that they examine the different forms of cyber attacks they face, stress-test themselves against them, and put in place business-wide recovery plans.
The report also notes a significant gap in awareness around the use of insurance with around half of firms interviewed being unaware that insurance was available for cyber risk. Other surveys suggest that despite the growing concern among UK companies about the threat of cyber attacks, less than 10% of UK companies have cyber insurance protection even though 52% of CEOs believe that their companies have some form of coverage in place.
Francis Maude, Minister for the Cabinet Office and Paymaster General, said: “Insurance is not a substitute for good cyber security but is an important addition to a company’s overall risk management. Insurers can help guide and incentivise significant improvements in cyber security practice across industry by asking the right questions of their customers on how they handle cyber threats”.
Mark Weil, CEO of Marsh UK and Ireland, added: “While critical infrastructure in regulated sectors, such as banks and utility firms, are used to this kind of risk, most firms are not and their risk management practices are geared around lower-level, slower moving risks. Companies will need to upgrade their risk management substantially to cope with the growing threat of cyber attack, including introducing disciplines such as stress-testing, and creating a joined-up recovery plan that brings together financial, operational, and reputational responses.”
Geary W. Sikich introduces ‘risk absorption capacity’, ‘risk saturation point’, ‘risk deflection’ and ‘risk explosion’ and explains their usefulness to risk managers.
What is risk? Think about it before you leap to answer. Do we really know and understand risk? Some facts to consider:
- Risk is not static, it is fluid.
- Risk probes for weaknesses to exploit.
- Risk, therefore, can only be temporarily mitigated and never really eliminated.
- Over time risk mitigation degrades and loses effectiveness as risk mutates, creating new risk realities.
Risk management requires that you constantly monitor recognized risks and continue to scan for new risks. This process cannot be accomplished with a ‘one and done’ mindset. Risk needs to be looked at in three dimensions and perhaps even four dimensions to begin to understand the ‘touchpoints’; the aggregation of risk; and its potential to cascade, conflate and/or come to a confluence.
We often hear references to a holistic view of risk. “Holistic” is a term used in risk management to emphasize the importance of understanding the interrelationships among individual risks (or groups of related risks) and the coordinated approach that an organization’s operating units and functions undertake to manage risk. A holistic approach to risk management is, by definition, one that is not fragmented into functions and departments, but rather is organized with the intention of optimizing risk management performance.
A silo approach to managing risk is dangerous in today’s rapidly changing environment. Organizations can face change with greater confidence with an enterprise-wide perspective. That is why an enterprise risk management (ERM) approach is intended to be holistic in its perspective toward risk and how it is managed. While the goal of thinking holistically is laudable, the question arises as to what it means from a practical standpoint.
So things didn’t go as well as you planned: either your project implementation didn’t go the way you wanted – without any hiccups – or your organization didn’t respond the way you’d expected when the proverbial hit the fan. Well, get used to it. That’s the way things go. You always plan for the worst and hope for the best, and having a project management background as well as a BCM/DR background, I know things don’t always go as planned no matter how hard you try. However, if something does go wrong, it’s a good idea to learn from it.
With most post-activities – either project implementations or responses to disasters and crises – there is usually one activity that’s always held: the Lessons Learned or Post Incident Review.
During these sessions, which I’m sure you’re familiar with, the focus always tends to be on what went wrong, with people trying to find the faults and, most importantly, the person or area on which to lay the blame, shaming them for their error. Well, to some degree that’s OK; you want to find the cause of the problem, but the aim shouldn’t be to lay blame or to focus only on the negative. Often, these Lessons Learned meetings become sessions where people vent their frustration over how inconvenienced they were by the situation. Again, focusing on the negative. But that’s not all you should be addressing.
S&R pros, is there a Chief Data Officer (CDO) in your organization? Do you work with them? Previously, John and I wrote about the CDO role and how we believe that CDOs will help to drive security policy in the future because they can 1) directly tie business value to data assets, 2) have a deep understanding of data identity and purpose, and 3) possess a great incentive to protect the company’s data (it’s a strategic business asset after all!). Colleagues like Gene have also written about the CDO and the importance of the CDO in data management.
The sector continues to advance in its adoption of security services. As reported on MSPmentor, this is a rapidly expanding market, with continued opportunity for solution providers. With a fast-growing segment of the market being mid-sized businesses, this seems like a ripe opportunity to deliver services.
According to Gartner (and as quoted in the most recent CompTIA security report), the global security market was expected to reach $71.1 billion by the end of 2014. So this is big business. Interestingly, based on analysis of data on successful attacks, many stats indicate that security should at least be a solvable problem for mid-sized businesses:
You’d think master data management would be an easy sell in a world where everyone wants an accurate “360-degree” view of the customer. And certainly, that’s a leading driver of adoption.
Yet it’s not always enough to make a winning business case, according to a recent Computing survey of IT decision makers.
The UK tech site interviewed 150 IT decision makers about MDM. The survey found that 38 percent were either currently scoping a project or implementing a project, while another 29 percent had already implemented MDM successfully.
The most-cited factors driving MDM were improving customer experience (60 percent) and improving the quality of strategic decision making. Despite these key business drivers for MDM, IT leaders still struggled to make the MDM business case. When asked about the primary challenges in obtaining funding for customer data management projects, the respondents said:
(TNS) — It's one of the few things that just about everyone seems to agree government should be doing.
But there's less consensus when it comes to figuring out how to pay the bill for making sure a call to 911 results in emergency responders rushing to help.
Pennsylvania's decades-old system for funding emergency call centers — a fee on monthly phone bills — hasn't been generating enough money to keep up with operating costs. And that's left local tax dollars plugging the gap.
This year, Berks County expects to put $2.53 million in county taxes and $2.97 million in fees it collects from municipal governments toward 911 center operations.
"This has become an enormous issue," said Christian Y. Leinbach, Berks County commissioners chairman.
Wi-Fi has serious security issues. As my colleague Carl Weinschenk wrote last year in a blog post discussing the vulnerability problems of Wi-Fi, particularly in the age of BYOD and working from anywhere, “… the world outside the firewall simply isn’t as secure as the world within.”
If we needed a reminder about the insecure world outside of the firewall, we got it last week with the news of a vulnerability discovered in hotel Wi-Fi. The flaw was discovered in ANTLabs InnGate devices, which provide in-room access for hotel guests, as well as the type of temporary Wi-Fi connections used in other public places such as convention centers. As explained by Wired:
The vulnerability, which was discovered by the security firm Cylance, gives attackers direct access to the root file system of the ANTlabs devices and would allow them to copy configuration and other files from the devices’ file system or, more significantly, write any other file to them, including ones that could be used to infect the computers of Wi-Fi users.
Amid all the time, attention and money devoted to upgrading and improving enterprise infrastructure, we should keep in mind that it is still just a means to an end. While the specifics may vary, that end is generally considered to be improved productivity, streamlined infrastructure and a more vibrant, dynamic user experience.
But none of this is going to happen without a complete renovation of data center infrastructure and, by extension, the mindset that governs not only design and architecture but human interaction with the digital ecosystem.
To Hiroshige Sugihara, president and CEO of Oracle Japan, this can be summed up in a single word, which unfortunately defies English translation. But generally speaking, it refers to the rejection of the conceptual categorization that often prevents us from seeing the big picture – kind of like failing to see the forest for the trees. In the enterprise, this often leads to the one-to-one thinking that lumps together applications and hardware and ultimately produces the silo-based infrastructure that hampers interactivity and innovation. In the new century, the enterprise will need to base strategies on results, rather than on what resources must be brought to bear on particular data sets.
The NCAA basketball tournament takes hundreds of good college teams from around the country and boils them down to 64 qualifiers, a round of 32, a Sweet Sixteen, an Elite Eight, Final Four and then two finalists who fight it out for the glory.
Similarly, we have whittled down the many flash storage tips from a multitude of sources into a handful. A couple of weeks back, we provided some tips focused on how to maximize flash performance. But so hot is the flash arena that we are now following it up with an Elite Eight among flash storage tips, these ones focused on product selection.
As the old adage goes, “Time is money,” and in the interest of saving money, we must not waste time. This is especially true when it comes to disaster preparedness and recovery—an area where many companies continue to fall short, as evidenced by the Disaster Recovery Preparedness Council’s 2014 Disaster Preparedness Benchmark Survey.
As part of the study--which surveyed companies of all sizes, from a broad range of industries across the globe--the Disaster Recovery Preparedness Council found that three out of four companies worldwide are failing to adequately prepare for disaster. Furthermore, the council found that the incidents and costs of outages associated with disaster remain a major challenge for many organizations.
With the growing reliance on digital business processes in most companies today, the IT department has more responsibility than ever. But, according to new research, businesses are disrupted within the first few minutes of an IT outage, and poor communications management means that finding the right person to investigate the issue can take as long as, or longer than, resolving it.
Forty-five percent of IT professionals reported that their business is impacted if IT is down just 15 minutes or less, and 17 percent said disruption occurs the instant an IT outage develops, according to research by Dimensional Research for a new report, the ‘Business Impact of IT Incident Communications: A Global Survey of IT Professionals.’ The report was commissioned by xMatters, inc.
Many CIOs are struggling to realise the full benefits of their increasingly virtualized IT estates, largely due to the strains of staying secure. But Reuven Harrison says it doesn’t have to be this way...
Over the past decade, businesses have been virtualizing ever more of their IT architecture. At first, CIOs were primarily attracted by the huge efficiency improvements and reduced need for capital expenditure. But as cloud computing has evolved and matured, firms are increasingly eyeing the main prize: the potential to attain unparalleled levels of business agility.
Being able to deploy resources such as servers, storage and connectivity on demand, and scale them up (and down) at will, has resulted in IT departments shifting more and more systems and applications over to private and (to a lesser extent) public clouds. And as firms move inexorably towards a fully software-defined environment – where systems are not only virtualized, but every part of them can be managed, monitored, configured, optimised and secured centrally and automatically – virtual nirvana seems tantalisingly close.
There isn’t a week that goes by without some headline news on a data security issue. Whether it’s data theft, operating system and browser vulnerabilities, or malware threats, today’s small to midsize businesses face dangers from every corner. Unfortunately, most SMBs don’t understand the impact these threats can have until it’s too late. Many also don’t realize it takes more than a simple anti-virus solution to get the job done. Yet SMBs don’t have the time or the expertise to install and manage the level of security software that is necessary to protect against modern security threats. How can managed service providers help?
The SMB market is highly dependent on managed service providers (MSPs) to deliver managed security services to protect corporate assets. It’s an opportunity that’s there for the taking, but to be successful MSPs need to take a multipronged approach--one that encompasses vulnerability assessment, Windows and third-party patch management, anti-malware, content control and filtering. Endpoint security, along with policy management and enforcement, is also an important part of the mix for maximizing SMB protection.
Business continuity is not just for businesses – public sector organizations and third sector organizations are perhaps just as likely to be affected by a disruptive event as any private sector organization. So are non-profits doing enough to protect the way they operate?
‘Business continuity challenges within the non-profit sector’ is the subject of the latest edition of the Business Continuity Institute's Working Paper Series. In this edition, Rina Bhakta CBCI discusses how there is a lack of shared knowledge on the way business continuity works in the non-profit sector. She argues that while there are various standards and benchmarking from other industries, it can be difficult to relate it to non-profits because a lot of it is not applicable.
Rina notes that the main challenge is that any programme adopted is usually based on best practice. Although the Charity Commission in the United Kingdom outlines the requirements of risk management, the section on business continuity is limited. It then becomes difficult in influencing appropriate buy-in and commitment when such aspects are not enforced by regulation.
In 'business continuity challenges within the non-profit sector', Rina talks through the six stages of the business continuity management lifecycle and provides case studies to highlight how each stage would apply to a non-profit organization. To read the full document, click here.
Businesses are more dependent on their supply chains than ever, with supply chain disruption one of the leading causes of business instability. To thrive, companies need to be resilient, and part of that is their own location and the location of their suppliers. According to the 2015 FM Global Resilience Index, Norway tops the list of resilient countries, with Switzerland in second place.
The study’s purpose is to help companies evaluate and manage their supply chain risk by ranking 130 countries and regions in terms of their business resilience to supply chain disruption. Data is based on: economic strength, risk quality (mostly related to natural hazard exposure and risk management) and supply chain factors (including corruption, infrastructure and local supplier quality).
This is a tale from the mists of time; from days of yore when it was difficult to get people interested in business continuity management and even more difficult to secure their involvement in exercises and tests (OK, in fairness, that could have been this week, but just indulge me for a moment).
Some of you may have heard me tell this story before, but recounting ancient tales didn’t do Hans Christian Andersen (or my Dad) any harm and, in any case, I’m a big fan of recycling.
Having been asked to contribute something on exercising and testing to this year’s Business Continuity Awareness Week Flashblog, and despite conforming in terms of using the snappy title demanded of all the contributors, I really couldn’t bring myself to write about strategy or methodology or process or the difference between a test, exercise, rehearsal, etc, etc, etc. So I’ll leave that to those whose boats are floated by that sort of thing and tell you my favourite exercising story instead.
(TNS) — Many of those who lived through last August’s 6.0 magnitude South Napa Earthquake suffered mental health issues as a result, with about a quarter of those at risk for PTSD, according to a newly released survey, Napa County officials announced.
The California Department of Public Health recently released the final results of the door-to-door survey of Napa and American Canyon households conducted September 16-18. The Community Assessment for Public Health Emergency Response final report was based on the survey that asked questions about residents’ experiences during and after the temblor to assess the extent of injuries, chronic disease exacerbation and mental health issues associated with the earthquake, and the degree of disaster preparedness of these communities.
Mental health issues were extremely common among residents of both cities, with about 79 percent of Napa households and 73 percent of American Canyon households reporting a traumatic experience or mental health stressor during or since the earthquake.
WASHINGTON—The U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA), in coordination with state and tribal emergency managers and state broadcasting associations, will conduct a test of the Emergency Alert System (EAS) on Wednesday, March 18, 2015 in Kentucky, Michigan, Ohio, and Tennessee. The test will begin at 2:30 p.m. Eastern Daylight Time (EDT) and will last approximately one minute.
“The goal of the test is to assess the operational readiness and effectiveness of the EAS to deliver a national emergency test message to radio, television and cable providers who broadcast lifesaving alerts and emergency information to the public,” said Damon Penn, Assistant Administrator of FEMA’s National Continuity Programs. “The only way to demonstrate the resilience of the system’s infrastructure is through comprehensive testing to ensure that members of tribes, and the residents of Kentucky, Michigan, Ohio, and Tennessee, receive alerts when an emergency occurs.”
The test will be seen and heard over radio and television in Kentucky, Michigan, Ohio, and Tennessee, similar to regular monthly testing of the EAS conducted by state officials and broadcasters. The test message will be nearly identical to the regular monthly tests of the EAS normally heard by the public. Only the word “national” will be added to the test message: “This is a national test of the Emergency Alert System. This is only a test...”
The test is designed to have limited impact on the public, with only minor disruptions of radio and television programs that normally occur when broadcasters regularly test EAS in their area. Broadcasters and cable operators’ participation in the test is completely voluntary. There is no Federal Communications Commission regulatory liability for stations that choose not to participate.
In 2007, FEMA began modernizing the nation’s public alert and warning system by integrating new technologies into existing alert systems. The new system is known to broadcasters and local alerting officials as the Integrated Public Alert and Warning System or IPAWS. IPAWS connects public safety officials, such as emergency managers, police and fire departments, to multiple communications channels to send alerts to warn when a disaster happens. For more information, please visit www.fema.gov/media-library/assets/documents/31814.
Panda Security accidentally flagged itself as malware last week, causing some user files to be quarantined.
What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:
The growing proliferation of mobile devices continues to make business faster, more agile, and more efficient. However, a recent study suggests U.S. workers remain concerned about the security of their mobile devices when it comes to cloud-based file sharing.
According to a recent study, 73 percent of the 1,000 U.S. employees surveyed said that they preferred to use email over file-sharing services, up 4 percentage points from 69 percent in the previous year's survey. Those who made use of file-sharing services dropped to 47 percent, down from 52 percent in 2013.
Keeping up with and fending off cybersecurity threats is a daily topic for all organizations, but for health care providers and systems, failure in that regard can have far more dire results than a financial or reputational loss. It can result in bodily harm or death. It’s possible that you could draw a line to such severe consequences in other industries and lines of work, but for the health care industry, that added layer of urgency is always present in cybersecurity protections.
A large research project devoted to determining how best to protect patient health while maximizing use of digital tools and resources, named IMMUNE-SECURE, got a boost in attention from health care IT organizations and other technologists with the announcement today that Dr. Larry Ponemon, well-known in IT circles for his work through the Ponemon Institute, has joined the advisory board for the project.
How things change. For years, even decades, people have been getting rid of tape. They bought into the idea that disk was the way to go and that tape was “old hat.”
But the realities of a Big Data world and the advances in tape technology, density, reliability and usability have brought the realization to many that they shouldn’t have been so hasty. And that’s showing up in the raw numbers. According to the Active Archive Alliance, nearly 250 million Linear Tape Open (LTO) tape cartridges have been shipped since the format’s inception. That’s more than 100,000 PB of data on LTO.
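The shipment figures can be sanity-checked with a quick back-of-the-envelope calculation. This is a sketch using the article's round numbers, not official Active Archive Alliance data:

```python
# Rough average capacity implied by the article's figures: ~250 million
# LTO cartridges shipped, holding more than 100,000 PB in total.
cartridges = 250_000_000
total_pb = 100_000
avg_tb = total_pb * 1_000 / cartridges  # 1 PB = 1,000 TB
print(f"average per cartridge: {avg_tb:.1f} TB")  # about 0.4 TB
```

An average of roughly 0.4 TB per cartridge is plausible given that shipments span every LTO generation, from the early, lower-capacity formats to the recent multi-terabyte ones.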
Tape, then, is returning to some organizations that dumped it a while back. Its role is steadily expanding in other organizations that remained faithful, and it now serves as the backbone data repository for many of the major cloud data providers.
HOUSTON—The recent spike in oil and natural gas production has led trucking companies to grow so quickly that they sometimes scramble to find qualified drivers. This has meant tightening coverage with a limited number of carriers and a market in “disarray,” Anthony Dorn, a broker with Sloan Mason Insurance Services said today at the IRMI Energy Risk and Insurance Conference.
“Carriers have taken a bath on construction risks,” he said. “Only nine carriers will write crude hauling.”
He added that there is a “huge need for risk management in trucking right now. A lot of these are fly-by-night companies. They are running with drivers that have no experience, they are getting violations from the DOT left and right for not having licenses and adequate brakes on their trucks and they are running on dirt roads that aren’t made for 100,000 pound units,” Dorn said. “It’s a very risky place for underwriters. If we don’t do something as agents and as risk managers there will be fewer carriers.”
By Gabriel Gambill
You would be pretty worried if you didn’t have fire safety and evacuation plans in your office, so why would you not put the same contingency strategy in place for your data?
Too many businesses don't have a disaster recovery plan, so my advice is to sit down and consider it pronto. Disaster recovery as a service (DRaaS) and cloud-based DR strategies are now making data recovery plans far less complicated and highly efficient for businesses. But despite being able to re-think their DR plans in the cloud and make them so much easier, companies are still lax about testing the plan on a regular basis.
To put it into context, perhaps it’s best to start by defining what a disaster could be. When we say ‘disaster’, we often mean something that is out of our hands: floods, hurricanes, power cuts and earthquakes all spring to mind. However, a disaster could be something as mundane as a software update or a simple human error. These events are often not as newsworthy as a natural disaster but have just as much impact on an organization’s ability to operate.
Despite numerous emergencies making headlines last year and major events impacting communities in Oso, Wash.; Napa, Calif.; and Detroit, 2014 was considered a relatively quiet year in terms of federally declared disasters.
After years of hearing about how the number of disaster declarations has been rising, 2014 had the lowest number of declared disasters and fire assistance grants in at least 14 years. FEMA reported that 45 major disaster declarations were made by the president in 2014. And six emergency declarations, which are issued in advance of an event, were declared. The highest number of emergency declarations was in 2005 with 68 events.
In addition, the agency provided 33 fire management grants, a lower than average number. It was “a higher number compared to 2013 (28) but far fewer than the 118 provided in 2011, or the 86 provided in 2006,” according to a FEMA blog post.
(TNS) — In 2015, the hydrologists tasked with forecasting how high the Minnesota River will rise have supercomputers, advanced radar systems and satellites.
In 1965, they had slide rules, rain gauges and grave diggers.
Pedro Restrepo, the 65-year-old hydrologist in charge at the North Central River Forecast Center in Chanhassen, can relate to the tools available 50 years ago even as he uses the technology of today. When he first started working in hydrology in the 1970s, the instruments being used were much the same as in 1965.
"I still have my slide rule," Restrepo said, producing from his office the well-worn tool used by engineers and scientists to do calculations before the invention of the calculator.
The cloud wants enterprise data, and so far it has been fairly adept at gathering the low-hanging fruit: mostly bulk storage, archives, backup and recovery, low-level database workloads and other non-critical stuff.
But the real money is in the advanced applications – the kind of data that organizations will pay a premium to support because it brings the highest value to emerging business models. This is a conundrum, however, because that high value also causes the enterprise to keep critical data close to the vest, which means cloud providers need to go the extra mile to win enterprise trust. And for the most part, that has not happened yet.
This is a shame because in terms of both security and uptime, the cloud is at least on par with the typical enterprise and in certain key metrics is actually superior. Cloud tracking site cloudharmony.com offers service status data for many of the top cloud providers going back at least a year, and its latest chart shows many services delivering four- or even five-nines availability. That puts outages at providers like Amazon EC2 and Google Cloud Service at mere minutes per year, while even three-nines performers confine their downtime to a few hours at most. A perfect record? Not by a longshot, but certainly no worse than the vast majority of enterprises out there.
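The "nines" figures translate directly into annual downtime. A minimal sketch of the arithmetic (the function name is mine):

```python
# Convert an availability percentage ("nines") into expected downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_pct):
    """Expected unavailable minutes per year at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):  # three, four and five nines
    print(f"{pct}%: {downtime_minutes(pct):.1f} min/year")
```

Five nines works out to just over five minutes of downtime a year and four nines to under an hour, while three nines allows roughly eight and three-quarter hours, consistent with the "mere minutes per year" and "a few hours at most" characterizations above.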
Tape data storage just keeps on going. It’s almost like the steampunk of IT, a branch off into a different universe where everybody reads by bigger candles instead of electric light bulbs. But it works. In fact, it works well enough for the largest IT vendors to continue pushing the envelope on tape storage density and on storage and recovery speeds. However, tape is not disk. You cannot ‘dip into’ tape in the same way you can randomly access a hard drive. And so, for backup and recovery in particular, the virtual tape library was invented to combine the advantages of tape and disk. Nevertheless, there are both pros and cons to consider.
Over the past year, Phoenix has found that its customers using disaster recovery as a service (DRaaS), such as cloud backup and recovery, virtual disaster recovery or data replication services, all rehearsed their plans last year. By contrast, only 40 percent of its customers with traditional business continuity services tested, suggesting that customers find it easier to test once DRaaS is in place.
Phoenix has found that DRaaS makes it much easier for customers to test because the data is with the same provider and the logistical issues usually found around testing, such as tape transportation and getting IT staff to the recovery centre, are removed. Furthermore, as it’s disaster recovery as a service, the service provider can initiate the recovery so customers are able to remotely access the recovered infrastructure to ensure that everything they needed to recover, has been recovered. The ‘live’ service element of DRaaS ensures a regular flow of communication which in turn increases awareness of testing.
Recent figures published by Phoenix show that just 45 percent of customers in total tested last year, with only 12 percent testing more than once. With environmental and hardware failures the most common reasons why customers put Phoenix on standby to use its disaster recovery services, the company is urging organisations to test their plans at least once a year to protect themselves against unforeseen but commonplace disruptions.
During Business Continuity Awareness Week (16th - 20th March 2015) Phoenix is offering tours of its facilities. To register, visit: http://www.phoenix.co.uk/bc-open-day-registration-form/
BSI, the business standards company, has published a list of tips to help those new to the business continuity profession. The BSI's top ten tips for business continuity planning are:
1. Identify critical business functions: once critical business functions have been identified, it is possible to apply a methodical approach to the threats that are posed to them and implement the most effective plans.
2. Remember the seven 'P's needed to keep your business operational: providers, performance, processes, people, premises, profile (your brand) and preparation.
3. Understand and track past incidents with suppliers: obtain country-level intelligence so you understand what factors may cause a supply chain disruption e.g. working conditions, natural disasters, and political unrest.
4. Assess and understand vulnerabilities and weak points: conduct risk assessments to evaluate supplier capabilities to effectively adhere to your business continuity plans and requirements.
5. Agree and document your plans: these should never just be hidden away in the mind of the managing director. Assess your critical suppliers to make sure their business continuity plans fit with your objectives and are defined within your contract.
6. Make sure plans are communicated to key staff and suppliers: equally, share them with other key stakeholders to boost their confidence in your ability to maintain business as usual. This is particularly important for small businesses or those working with suppliers / buyers for the first time.
7. Try your plans out in mock scenarios: if possible include suppliers in your exercises and remember to test them not only in scenarios where there may be a physical risk, such as poor weather conditions making premises inaccessible, but people risks such as supply chain challenges and boardroom departures.
8. Expect the unexpected: while lean and efficient supply chains make good economic sense, unexpected events can have a significant impact on the operations and reputation of businesses.
9. Make sure your continuity plans are nimble and can evolve quickly: if your plans look the same as they did 10 years ago, then they probably won't meet current requirements. Organizations engaged in business continuity management will be actively learning from their internal audits, tests, management reviews and even from incidents themselves.
10. Make sure you're not just box-ticking: plans which get the tick against the 'to do' list but don't actually reflect the organization's strategy and objectives can lack credibility and are unlikely to succeed in the long-term. Instead, make sure your plans allow you to get back up and running in a way that aligns with your organization's objectives.
By Harriet Wood
In the 2014 Supply Chain Resilience Report published by the BCI, 76 percent of respondents reported at least one disruption within their supply chain.
For all of us supply chain failure is a major issue. Within the brewing and pub industries the list and variety of suppliers seems endless. Butchers, bakers and beer bottle makers combine with engineering and IT businesses to create a mind-boggling range of possible disruptions.
For years we had worked hard to write, review and exercise our own plans, but around five years ago we realised the need to extend our exercise program out to key suppliers. We quickly established that ‘key suppliers’ could not be identified simply by asking Purchasing for the names of the highest-value contracts. So we approached the business and, led by the Director of Supply Chain, it came back to us with the names of three suppliers. They were essential to our business, could not easily be replaced, and I would never have guessed any of them were so critical!
Do you like being taken out of your comfort zone? Having some of your professional weaknesses highlighted and reported on? Finding out that your organisation isn’t perhaps as well-prepared for a disruption as you’d hoped? No??...I didn’t think so. I suppose the idea of taking part in an exercise presents all of the above as a possibility. So why ever would you want to put yourself through it?
Because…if done right it can be a positive and valuable learning experience for the business and you!
Training, testing and exercising are methods by which we are able to validate our plans. Validation is designed to confirm that plans will work and that the organisation will be able to remain resilient; plans are pointless without exercised and trained key and supporting personnel to execute them. It is essential for success that the processes in plans are tested and practised, so that when pressure is applied, an incident has occurred and impacts are felt, the organisation can meet its BCM objectives and targets. So our testing needs to be rigorous, but balanced, to ensure that it goes far enough, but not too far.
It’s really good practice to test and exercise the plans themselves incrementally, so that subject overload and excessive disruption to routine operations and procedures are avoided. When exercised, all plans will have failings exposed or areas for refinement identified. The resulting confidence and capability of the personnel tested should provide realisable benefits, particularly if a real incident is experienced. Documents such as the BCI’s GPG 2013 identify some of the activities that may need to be exercised, and an effective programme will ensure that it encompasses these and the associated aims as a minimum. As with other processes and professional practices, the effective BCM practitioner will need to go beyond the initial lists and consider carefully what is required, and to what level.
Capital Weather Gang cites a weather.com report that not a single tornado has been reported to the National Weather Service in March, typically the first month of severe weather season in the Plains and Southeast.
The only other year since 1950 that there have been zero tornado reports in the first half of March was 1969, according to the Weather Channel’s severe weather expert Dr. Greg Forbes.
Per Dr. Forbes’ report, from January 1 to March 12 only 27 tornadoes had been documented across the nation – the slowest start to the year since the 21 tornadoes recorded through March 12, 2003.
(TNS) — Aiming to minimize the number of victims, the Japanese government is hurrying to establish a network of undersea cables to monitor the occurrence of tsunami on the floor of the Pacific Ocean, where a huge earthquake is expected to take place.
The cables connect tsunami gauges and other observation devices for that purpose.
On seabeds stretching from off Hokkaido to off Chiba Prefecture, the National Research Institute for Earth Science and Disaster Prevention (NIED) is installing tsunami gauges and other devices in 150 locations. The total length of the undersea cables will be 5,700 kilometers.
“There is no precedent anywhere in the world for such a large-scale tsunami observation network,” NIED President Yoshimitsu Okada said. “Completion is scheduled for fiscal 2015. After that, it will be possible to detect tsunami waves 20 minutes earlier than we do now.”
(Tribune News Service) -- New York state's top bank regulator told a University at Albany audience on Thursday that one of the greatest threats to the economy today is a "cyber 9/11" attack that causes widespread panic in financial markets.
Benjamin Lawsky, who as superintendent of the state Department of Financial Services oversees 3,800 banks and insurance companies, said that trying to stop cyberattacks on the state's financial system — from data breaches to cyberterrorism — is his biggest concern.
"It's the one issue that I personally work on every single day," Lawsky said at UAlbany's Business School, where he delivered the first-ever Massry Lecture. "What should we do to prevent these nightmare scenarios?"
Although Lawsky doesn't have criminal prosecution powers, his office has been aggressive in negotiating civil penalties with banks that have been investigated for wrongdoing in New York state. On Thursday, just an hour before his UAlbany speech, his office announced a $1.45 billion fine for Commerzbank of Germany — of which $610 million will go to New York state.
Cloud computing and modular infrastructure are working hand-in-hand to remove the hassles of physical infrastructure from the enterprise’s list of concerns.
If it all goes as planned, the loss of any one server, storage or networking component will cease to be the service-killing event that drives IT into a state of near-insanity. If a piece goes down, an automation system simply reroutes traffic to another module and a replacement device is swapped in at IT’s leisure, perhaps by a robotic arm.
But that does not mean IT is on easy street. Rather, responsibility for the smooth flow of data simply travels up the stack, to the application and service layers, to be precise. And exactly how the enterprise prepares for data management on that level will go a long way toward determining how well the bosses in the executive suite can fulfill their business models.
Who needs a data scientist when you can have a robot analyze your data? No, seriously, that’s an actual question enterprises may be asking if this Computerworld article on artificial intelligence is right.
Technically, I guess artificial intelligence isn’t a robot until you add a body, but the question still stands: Can artificial intelligence solve the data deluge better than humans? AI experts certainly think so.
"The notion that a human analyst can look at all of this data unaided becomes more and more implausible," Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, told senior reporter Sharon Gaudin. "You can't have a person sitting there watching Twitter to protect your brand. … You need A.I. tools."
The obvious use case is with security, where humans are already failing to keep up with the ever-changing threat. Algorithms can “learn” from the data and flag deviations.
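As a toy illustration of that "learn from the data and flag deviations" idea (not any particular vendor's product), a simple z-score check over a learned baseline of event counts might look like:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    away from the mean of the learned baseline."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Baseline: normal hourly login counts. In the new batch,
# 250 deviates far from the learned pattern and gets flagged.
baseline = [98, 102, 95, 101, 99, 103, 97, 100]
print(flag_anomalies(baseline, [99, 104, 250]))  # [250]
```

Production systems use far richer models, but the principle is identical: the algorithm characterizes "normal" from history and surfaces only what departs from it.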
The potential value in the Internet of Things (IoT) is bringing to a fever pitch the focus on data as one of the enterprise’s most valuable assets. Clearly, those who carefully collect, transform, analyze, model and report on IoT data are seeing their influence rise. As much of this work is settling around the data scientist role, I talked with Don DeLoach, CEO of Infobright, provider of an analytics database platform, about what data scientists are being asked to do now, and how those responsibilities around IoT data might change in the near future.
DeLoach says it’s definitely early days when you look at what data scientists are being asked to examine:
“Look at the progress of the Internet of Things. Most, probably 95 percent, of the focus is on the closed loop message response systems that make up the use cases: service models for capital equipment, focus on specific silos, alerting to problems, not having to send service professionals out when they’re not needed, or information like temperatures in machines, or lighting levels that are appropriate for time or conditions. It’s grabbing a message off a sensor, and then determining whether an action is needed. We’re at an early stage.”
A new industry survey has found that, of those who responded, the largest group (37 percent) estimated that the cost-per-minute of downtime in their organization fell into the £10,000 - £20,000 bracket.
With 80 percent of those questioned giving their recovery time objectives as two hours or greater, the results mean that the potential losses to UK businesses are high.
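Those two figures multiply out to a striking exposure. A minimal sketch of the arithmetic, using the survey's lower bracket bound and a two-hour recovery time objective:

```python
def downtime_exposure(cost_per_minute, rto_minutes):
    """Potential loss if an outage runs for the full recovery
    time objective, at a flat cost per minute."""
    return cost_per_minute * rto_minutes

# Lower bracket bound (£10,000/min) over a two-hour RTO:
print(downtime_exposure(10_000, 120))  # 1,200,000 -> £1.2m per outage
```

At the top of the bracket (£20,000 per minute) the same two-hour outage doubles to £2.4m, which is why the unworked-out cost figures later in the piece are so alarming.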
The study, conducted by Timico, gave a comprehensive insight into the disaster recovery habits of IT managers in the UK, and revealed a distinct lack of awareness, despite the predicted cost of outages.
The survey revealed that almost a quarter (24 percent) of IT managers acknowledged having an outage within the past month but despite that, over 70 percent admitted to never having worked out the cost of the resulting downtime.
The research also found that over 60 percent of SMEs had not yet rolled out any form of cloud-based backup within their business. Moving to the cloud can negate the need for dual site replication, an option still favoured by 18 percent of those businesses questioned. Shockingly, despite the risks, a minority of respondents even admitted to never backing up their data.
Sungard Availability Services has released its 2014 UK invocation figures, which show the highest number of incidents since 2009.
Overall incidents of downtime, in which staff are unable to work from their usual office or access business critical systems, rose by over one third (38 percent) compared to 2013, leading to concerns that organizations are failing to sufficiently invest in availability and business continuity strategies and solutions.
While workplace related disruptions, in which the office environment is rendered inaccessible have remained fairly stable – with only a minor increase in 2014 – disruptions due to technology failures have more than doubled, increasing by 140 percent. Sungard AS’ 2014 invocation statistics show that hardware has been the main issue, causing a fifth of all problems (21 percent). The year-on-year spike in technology-related incidents, also including power and communications, is particularly worrying, suggesting that while many organizations are now entirely dependent on their IT systems, they are struggling to maintain them.
Let’s start with the notion that nobody is perfect. I know, that will drive the perfectionists up a wall, but it is true. No person, no organization, no company is perfect. This means we will all make mistakes. So why not plan for it?
Plan for it! Yes. We all know that someday there will be a screw up, a goof, or God forbid an intentional negative act. For example, consider the recent experience of a Comcast customer. Lisa wanted to find a way to save money, so she decided that the family could do without the cable portion of the family bill. The Comcast customer service representative was not happy with this request, tried to retain her, and when she still refused, Lisa got her next Comcast bill addressed to “Asshole Brown.” Needless to say, Lisa was upset, and even getting the bill changed back to her real name was not easy.
So here we go. Like I said, no one is perfect and in this case Comcast certainly deserves a black eye.
Accessing analytics of any type has always been a complex endeavor. But starting this week, Ryft Systems wants to make real-time analytics running on a 1u server built using field programmable gate arrays (FPGAs) a single application programming interface (API) call away.
Pat McGarry, vice president of engineering for Ryft Systems, says that by deploying a dedicated Ryft ONE server that runs a “Linux-like” operating system to process analytics, IT organizations can once and for all eliminate I/O bottlenecks.
The biggest challenge with Big Data, says McGarry, is not so much the size of the data that needs to be processed at any given time, but rather the velocity at which that data needs to be processed. Rather than relying on a general-purpose processor, McGarry says that Ryft has combined FPGAs with up to 40 solid-state disk drives that can process up to 48TB of data at a rate of 10 gigabytes per second.
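A quick back-of-the-envelope check puts those two figures in perspective (a sketch assuming decimal units, not a vendor benchmark):

```python
def full_scan_seconds(capacity_tb, rate_gb_per_s):
    """Time to stream an entire data set once at a sustained rate,
    using decimal units (1 TB = 1000 GB)."""
    return capacity_tb * 1000 / rate_gb_per_s

secs = full_scan_seconds(48, 10)
print(secs, secs / 60)  # 4800.0 seconds -> 80.0 minutes for a full scan
```

In other words, even at 10 GB/s a single pass over a fully loaded 48TB box takes well over an hour, which is why the velocity of arriving data, rather than raw capacity, is framed as the real bottleneck.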
(TNS) — A new report from the U.S. Geological Survey shows it is increasingly likely a magnitude 8.0 or greater earthquake will hit California, but that "doesn’t change the bottom line” for the state’s emergency management workers, an agency official says.
Lucy Jones, a USGS seismologist and Mayor Eric Garcetti’s adviser on earthquakes, tweeted Tuesday about the randomness of big quakes.
"This new science doesn't change the bottom line for emergency managers," she wrote. "Which one happens in our lifetimes is a random subset."
The tweet was in response to a question posed to Jones about the practical takeaway for those trying to prepare the state for just such a disaster.
Often, when an organization initiates its Business Continuity Management (BCM) / Disaster Recovery (DR) program, it is a pretty manual process: documents, PowerPoints and spreadsheets abound. They look good and they serve a purpose, but when the program needs to mature and grow, the manual maintenance and monitoring processes just can’t keep up. Suddenly, the person responsible – who is usually only assigned to BCM/DR part time – can’t keep up and things begin to fall apart. It’s time for some help to automate the BCM process to keep it current and maintainable (not just the plans being maintained).
So where do you start and what needs to be considered when determining what software is best for you? Here are some helpful tips to consider when you get to that point.
If you are the IT person who handles security for your company, where do you feel the most pressure when it comes to protecting business interests and consumer privacy? The folks at Trustwave sought to discover what was causing the most stress and concerns for IT and security professionals, and they just released their findings in the 2015 Security Pressures Report.
It’s an interesting perspective to study. All professionals are under pressure to perform well in their job duties, but as more companies reveal disastrous breaches and security breakdowns, IT security pros are really in the spotlight right now, with minimal room for failure. In fact, as the study stated in the introduction:
Few white-collar professions face as much mounting pressure as the information security trade. It is a discipline that, due to the widely publicized data breach epidemic, has suddenly crept out from behind the shadows of the mysterious, isolated and technical — and into the public and business mainstream.
According to the Occupational Safety and Health Administration, 4.1 million U.S. employees experience work-related injuries or illnesses each year and 1.12 million of those employees lose work days as a result. With the average employee missing eight days per injury, even a minor injury can create a domino effect in your company.
When employees experience illness or injury, it often impacts their ability to perform their jobs, especially in occupations that are more labor intensive. As soon as your worker is able, it is in everyone’s best interest to return him or her to work in some capacity. Oftentimes, this is done through formalized return to work programs. Return to work programs are extremely effective because they provide benefits to not only the employee, but also your company.
Many information security professionals are looking for help with security and may very well partner with managed security service providers (MSSPs) this year. That's according to a new report from Trustwave. Here are the details.
The 2015 Security Pressures Report revealed that most businesses expect the pressure to secure their organizations against cyber threats will increase in 2015. Also, 78 percent of information security professionals said they are likely or plan to partner with an MSSP to protect their organizations.
MSPs who offer cloud-based file sharing have a full time job. It isn’t enough to simply sell and set up cloud services for your client – you then need to monitor them.
Surprisingly, 44 percent of corporate data stored in the cloud environment is not managed or controlled by the IT department.
While you could try to make it easier for customers to monitor the cloud sharing you set up, there are advantages to being the one to handle this task. For one, you obviously want to make sure that the file sharing system you set up is working properly. You also want to be able to tell when your client may need additional functions or storage based on their use. Finally, your clients care about it, so being the one to offer it will increase your value to them.
Here are four things your clients care about, and things you should be actively monitoring:
Cyber attacks against businesses may dominate the news headlines, but recent events point to the growing number and range of cyber threats facing public entities and government agencies.
City officials yesterday confirmed that city and county computer systems in Madison, Wisconsin were being targeted by cyber attackers in retaliation for the shooting death of Tony Robinson, an unarmed biracial man, by a Madison police officer last Friday. A Reuters report says the cyber attack is thought to have been initiated by hacker group Anonymous.
Then on Sunday the website of Colonial Williamsburg was hit in a cyber attack attributed to ISIS. The attack targeted the history.org website and comes just a week after the living history museum offered to house artifacts at risk of destruction in Iraq.
(TNS) — Dallas startup accelerator Tech Wildcatters is launching a program focused on wearable technology for police officers, firefighters and emergency medical personnel.
The unique public-private experiment will be announced Wednesday.
The pilot program is funded by the Department of Homeland Security’s research and development arm, and Tech Wildcatters is one of two U.S. accelerators tapped to run it. The program is being managed by the Center for Innovative Technology, a Virginia-based nonprofit.
This is the first time Homeland Security’s research division has experimented with accelerators. The federal agency is interested in wearable technology such as advanced sensors, smart voice and data communication chips embedded in gear, and health-related monitors.
Consensus is building that the cloud will subsume traditional data center infrastructure within the next decade. This is not to say that local resources will go the way of the dinosaur, but that whatever remains in the data center will be cloud-based.
This means that both the hardware and software platforms that hope to support future data architectures will have to cater more toward cloud functionality than traditional data center constructs. And yet, it seems that only recently have we seen anything that can be described as cloud-specific enterprise systems in the channel.
HP took the wraps off of its Cloudline server this week, aimed specifically at helping cloud service providers gain an edge on competitors by offering not just lower costs, but advanced functionality as well. This includes open management capabilities that enable a broad range of third-party solutions, as well as broad ties to the OpenStack format through HP’s Helion platform. This should give providers a wedge in crafting hybrid cloud solutions for enterprises that convert their legacy architectures to OpenStack-based clouds. At the same time, Cloudline supports the HP Altoline open network switch, which itself supports the Cumulus Networks Linux networking distribution aimed at building web-facing hyperscale infrastructure.
Where are the weak points in your organisation and its operations? Where could disasters or criminals do the most damage? Vulnerability testing, as its name suggests, is done to find out where the soft underbelly is. Then protection and security can be suitably reinforced. In a general sense, it can cover everything: from freak weather conditions to power outages, supplier failure and IT disasters. Indeed, IT is the category where vulnerability testing is most often performed. This is partly because of the critical role of IT throughout many organisations, and partly because IT vulnerability testing is relatively easy to automate. However, even systematic automated testing can’t do it all. So what’s the solution?
Carbonite, Inc., a provider of cloud and hybrid business continuity solutions for small and midsize businesses, has published a report on recent business continuity and channel research. Entitled, ‘Business Continuity: A Growing Opportunity in a Digitalized World,’ the report details the results of research conducted through Spiceworks Voice of IT, and identifies trends, challenges and strategies related to business continuity.
According to the report, 67 percent of channel partners reported an increase in demand for business continuity solutions from small and medium sized businesses, and 77 percent expect the demand to continue growing over the next three years. 87 percent of channel partners agree that business continuity solutions are worth the investment, but they are faced with two key challenges when selling related products: lack of customer education (45 percent) and budget concerns (45 percent).
By James Stevenson
The first few exercises I ran were pretty nerve wracking. Would the plans work? Would the team play nicely or start throwing stuff? Would they realise I was new to this?
Since then I’ve been fortunate to work with many different groups around the world facilitating exercises, coaching and training new business continuity managers to design and run their own successful exercises.
It’s not rocket science but there is a skill to setting up and running a great exercise.
To help with this, the ten steps below are packed full of tips and suggestions to develop this skill, run great exercises and maximise your business continuity programme:
CompTIA's new "Enabling SMBs with Technology" study revealed many small- and medium-sized businesses (SMBs) want innovative technology partners, and a lack of innovative technology solutions is one of the primary reasons why some of these companies choose to switch IT firms.
CompTIA reported that more than 70 percent of SMBs said they have used an outside IT firm at least occasionally over the past 12 months. Also, 46 percent of SMBs noted that they look to outside IT firms when they need greater expertise and new options, which could create new opportunities for innovative managed service providers (MSPs).
"For an MSP to be innovative, it must focus on business results at a broad scale and proactively determine the best technology solution," Seth Robinson, CompTIA's senior director of technology analysis, told MSPmentor.
When gathering food for an emergency kit, we often think about items that do not require cooking or refrigeration and have a long storage life. Yet, we often forget to check the nutritional value of the food in our emergency kits. March is National Nutrition Month and a great time to review the food in your emergency kit and make sure it is healthy and not expired. Here are a few healthy tips to keep in mind when gathering food for your emergency kit and reviewing the food you have already stored.
1. Avoid salty snacks.
Salty snacks make you thirsty and increase your need to drink water. When you have a limited supply of food and water, you don’t want foods that will make you want to drink more water than you need or planned for.
2. Include protein.
While you may not be able to rely on your normal sources of protein like meat, after an emergency, you should still include some good sources of protein in your emergency kit. Nuts, protein bars and peanut butter can be sustaining foods that can help keep you full and are easy to store in your emergency kit.
3. Look for high-energy foods.
Food with protein, carbohydrates, and good fats can help keep your energy up, which can be very important during or after a disaster. Choose foods like nuts, dried meat, whole grains (crackers, cereal, etc.) and canned beans, fruits, or vegetables.
4. Don’t forget water.
Water is a crucial part of any emergency kit. Store at least 1 gallon of water per day for each person and each pet. If possible, try to store a 2-week supply of water or at least a 3-day supply of water for each person in your family. Unopened, commercially bottled water is the safest and most reliable emergency water supply.
5. Make sure your emergency kit food is healthy and safe.
In addition to choosing the right foods for your emergency kit, you should also regularly review the content of your kit to make sure none of your food has expired or become dented or damaged. Keep the food in your emergency kit in a dry, cool spot, out of the sun to help ensure that the food does not become damaged or unusable.
6. Stick with what you know.
The most important part of choosing food for your emergency kit is making sure you know how to prepare the food you store and will want to eat it. Stick with foods you know your family will eat. Also, do not forget about food allergies or dietary needs of your loved ones. Consider how you will meet everyone’s unique nutritional needs if you can only access your emergency kit food supply.
For more information about choosing and storing food for your emergency kit, visit CDC’s webpage http://emergency.cdc.gov/disasters/foodwater/index.asp.
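The water guideline in tip 4 above is easy to turn into a quick calculation for your own household (a simple sketch of the 1-gallon-per-person-and-pet-per-day rule):

```python
def water_gallons(people, pets, days):
    """Gallons to store: at least 1 gallon per person and
    per pet for each day of the planned supply."""
    return (people + pets) * days

# A family of four with one pet, aiming for the recommended two weeks:
print(water_gallons(4, 1, 14))  # 70 gallons
```

Even the minimum 3-day supply for that same household comes to 15 gallons, which is worth knowing before you go shopping for containers.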
Looking to put an end to spearphishing attacks that have made a mockery of IT security defenses, Check Point Software Technologies Ltd. today unveiled technology that automatically extracts malware from both documents attached to email and content downloaded from Web sites.
Gabi Reish, vice president of product management for Check Point, says Check Point Threat Extraction software works by decomposing content in real time into a set of digital bits and then removing any and all code that is identified as malware. The content is then reconstituted and sent on to the intended user.
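Check Point's actual pipeline is proprietary, but the general decompose-strip-reconstitute pattern (often called content disarm and reconstruction) can be illustrated with a toy sanitizer that rebuilds HTML while dropping script elements entirely. This is a hypothetical sketch of the idea, not the vendor's implementation:

```python
from html.parser import HTMLParser

class Sanitizer(HTMLParser):
    """Decompose markup into events, drop <script> elements,
    and reconstitute the rest."""
    def __init__(self):
        super().__init__()
        self.out = []    # reconstituted pieces
        self.skip = 0    # >0 while inside a <script> element

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.skip += 1
        elif not self.skip:
            attr_s = "".join(f' {k}="{v}"' for k, v in attrs)
            self.out.append(f"<{tag}{attr_s}>")

    def handle_endtag(self, tag):
        if tag == "script":
            self.skip -= 1
        elif not self.skip:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)

s = Sanitizer()
s.feed('<p>Report</p><script>evil()</script><p>Totals</p>')
print("".join(s.out))  # <p>Report</p><p>Totals</p>
```

Real threat-extraction products operate on full document formats (Office files, PDFs) and strip macros, embedded objects and exploit code rather than just script tags, but the reconstitution step is the same in spirit: deliver a rebuilt copy containing only content known to be safe.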
Running on security gateways from Check Point, Reish says Check Point Threat Extraction software is the second major IT security innovation Check Point is bringing to market in as many months. Last month Check Point acquired Hyperwise, a provider of software that identifies threats at the processor level.
(TNS) — In Pennsylvania, nearly 1.5 million people are in potential danger if a train carrying crude oil derails and catches fire, according to a PublicSource analysis.
That is about one in every nine Pennsylvanians, or 11.5 percent of the state's population.
The analysis also found 327 K-12 schools, 37 hospitals and 61 nursing homes in the state are at risk.
These numbers take on new meaning in the wake of the recent derailment near Mount Carbon, W. Va. And, a federal report predicts 15 trains carrying crude oil and ethanol in the United States could derail in 2015 alone.
Even though the U.S. government has broadened its pursuit against corruption, only about 9% of organizations see the Foreign Corrupt Practices Act’s monitoring of corruption as a top concern, according to “Bribery and Corruption: The Essential Guide to Managing the Risks” by ACL.
Remaining competitive can be difficult in some areas due to expectations of payments, gifts and consulting fees, but companies need to identify and manage the risks across the organization. Much is at stake as penalties are rising and reputations are at risk.
According to ACL:
No doubt enterprise IT technology will be vastly different in five years’ time. We’re not just talking about better, faster, more flexible infrastructure, but a top-to-bottom overhaul of what data infrastructure is all about and how it should be architected for the new digital economy.
But what gets lost in the whirlwind of activity surrounding the cloud, modular infrastructure, mobility and all the rest is how this will change the day-to-day operations of the data center, and in particular the responsibilities of the IT staff and the skillsets required to fulfill those responsibilities.
We can start with the CIO. Traditionally, this position is served by someone steeped in technical knowledge and the careful relationships that must be maintained between the various layers of the IT stack. (Yes, there is much more to it than that, but in general terms this is good for our discussion.) But as Mike Altendorf, CEO of systems integrator Conchango told CIO.com, a technology background will become steadily less valuable as things unfold, and more traditional business-minded skills will rise. These include not only budgeting and management, but marketing, customer relations and even sales as IT becomes more integrated with the business side of the operation.
One of my favorite virtual friends is Dr. Andrea Bonime-Blanc, the Chief Executive Officer (CEO) and founder of GEC Risk Advisory LLC, the global governance, risk, integrity, reputation and crisis advisory firm which serves executives, boards, investors and advisors in diverse sectors, growth stages and industries, primarily in the Americas, Europe and Africa, providing strategic and tactical advice to transform risk into value. Dr. Bonime-Blanc is an extensively published author and editor of several books and numerous articles. She writes The GlobalEthicist column for Ethical Corporation Magazine. She also co-authored and co-edited The Ethics and Compliance Handbook for the ECOA Foundation. While her career and current consulting is wide-ranging, I want to focus on one of her recent books, The Reputation Risk Handbook, which should be read by any compliance practitioner, senior executive or board member.
Why should you read this book? It is because you should recognize that “Reputation risk has become strategic because of the age of hyper-transparency.” The book provides a variety of examples of reputation risk and explains its special nature. The book also provides strategies for management of reputation risk. Bonime-Blanc concludes her book by going into the veiled land of the future to opine on not only risk management techniques but also the “transformation of this risk into an opportunity and value for the organization.” Her book is broken down into three general areas: I. Understanding Reputation Risk, II. Triangulating Reputation Risk, and III. Deploying Reputation Risk.
In case you thought Microsoft was lagging behind in mobile productivity, you might want to reconsider. Microsoft and other cloud companies have taken some big steps to extend Microsoft Office far beyond the desktop and into the cloud. In the last couple of months, Google and Box have separately announced online editing features and close integration with Office desktop apps, while Microsoft announced just two weeks ago that Office users on iPad will be able to save their documents in any kind of cloud storage.
What’s the upshot for MSPs given these moves? The days of employees storing their important data on a single server or within a single cloud repository are long gone. Now, as the applications that take advantage of cloud storage are becoming more cloud platform-agnostic, employees can and will store their data in any number of cloud services, such as Google Drive, Office 365 and Box. MSPs need to make sure that clients’ data is properly controlled, no matter where it resides.
When it comes to so-called “shadow IT,” the enterprise has three basic responses. You can accept it, you can fight it, or you can ignore it.
Unfortunately, it seems that a large number of organizations are choosing option three, ignoring it, which is probably the worst approach to take because shadow IT can, in fact, become a strategic asset to the enterprise, provided it is not left to its own devices.
Ideally, the enterprise should accept shadow IT, but with conditions. With the coming of the mobile-first generation to the knowledge workforce, IT needs to recognize that enterprise data will find its way onto personal smartphones and tablets, and that the best thing to do is encourage this level of flexibility but impress upon people the need to maintain an adequate security posture.
(TNS) — Ohio tops the country with the most school threats in the first half of the school year, according to a recent report by a national school-safety consultant.
From August to December 2014, Ohio had 64 reports of school threats, more than California (60), New York (46) and Texas (41).
Across the nation, school threats are up 158 percent from last year, the first year of the survey conducted by Cleveland-based National School Safety and Security Services.
Local safety experts question the company’s figures because they are based on news reports instead of police records. The local experts say that schools and media outlets tend to underreport threats.
KANSAS CITY, Mo. — The woman’s voice on the intercom was anguished.
“There’s a shooter in the building. Lockdown! Lockdown!”
Inside the library at Independence’s Pioneer Ridge Middle School, about 65 teachers and staff members — who knew this was all pretend but were warned it may be unnerving — assumed their positions under desks and crouched between rows of children’s books.
Someone switched off the lights as instructed. Maybe the shooter won’t see them hiding. The rest of the school stood empty.
It was part of training increasingly occurring in the nation’s schools, hospitals and other workplaces to drive home lessons, some of them controversial, on how not to become an armed intruder’s sitting duck.
(TNS) — For more than 100 years, people have questioned whether taking oil and gas from the depths of the earth can cause tremors.
When an earthquake shook Austin in 1902, some thought an explosion in the oilfields of Spindletop, in southern Beaumont, might be to blame.
The 1902 earthquake was naturally occurring. But the link between human activity and earthquakes is very real and well established, said Cliff Frohlich, associate director and senior research scientist with UT's Institute for Geophysics.
"When people make the statement that it hasn't been established that humans can cause earthquakes, they're either woefully uninformed about the research by myself and hundreds of others over the last 70 years or they're trying to mislead you," he said. "That's like people saying the world is flat; that evolution hasn't been proven or that humans can't cause climate change."
At the end of last week, I started getting email messages warning me about the latest TLS/SSL vulnerability that has been discovered. This one is called the FREAK Attack and a site dedicated to informing users about the attack describes this new vulnerability in this way:
It allows an attacker to intercept HTTPS connections between vulnerable clients and servers and force them to use weakened encryption, which the attacker can break to steal or manipulate sensitive data.
The first reports of the FREAK Attack, which like Heartbleed involves open source code, came via initial warnings about Mac and Android-native browsers — although Chrome appeared to be safe, as is Firefox. BlackBerry browsers are also affected by the vulnerability. At first glance, it looked like Windows machines were okay. A second glance, however, tells a different story.
Data theft is becoming big business if the estimated damages of recent breaches are any indication. Can you imagine being insured for US $100 million against such events, yet having to bear costs that exceeded even that figure? The recent attack against Anthem, the second largest health insurer in America, involved as many as 80 million records being stolen. The associated expenses have been estimated at more than the $100 million policy taken out by the enterprise. Elsewhere, supermarket chain Target (also in the US) estimated costs of over US $148 million after 100 million customer records were compromised at the end of 2013. But the attack similarities don’t end there – and could apply to any company.
BitSight Technologies has released the results of a commissioned study, conducted by Forrester Consulting on behalf of BitSight, which reveals third-party security as a top business concern for enterprises. The findings suggest a significant appetite for monitoring third-party security but a steep disconnect in resources available to adequately and objectively manage this.
The study, ‘Continuous Third-Party Security Monitoring Powers Business Objectives and Vendor Accountability,’ is based on surveys of IT security and risk-management decision makers in the US, UK, France and Germany.
Forrester found that when it comes to tracking third-party risk, critical data loss or exposure (63 percent) and the threat of cyber attacks (62 percent) ranked as the top concerns, above standard business issues, including whether the supplier could deliver the quality and timely service as contracted (55 percent). Despite the desire for more robust insight into third-party security practices, only 37 percent of survey respondents reported tracking any of these metrics on a monthly basis.
The results of a Risk Management Association (RMA) and MetricStream survey on third-party and vendor risk management in financial institutions have been published.
The survey drew responses from over 100 leading financial institutions and addressed vendor management frameworks, vendor selection and monitoring processes, critical vendors and critical activities, tools and techniques, contracts, regulatory compliance, and fourth-party suppliers.
With the need to grow the business, provide new offerings, reduce overall costs, and maximise profitability and revenues, outsourcing to third-party service providers has become the norm for most banks and financial institutions (FIs) worldwide. Larger organizations have tens of thousands of vendor relationships to manage and, in this scenario, are increasingly exposed to financial and reputational loss if they fail to maintain adequate quality control over all third-party activities.
“Managing the risks inherent in vendor and other third party relationships has become critically important in recent years, as the actions of vendors can cause significant financial and reputational impact to organizations, no matter their size or industry,” said Edward J. DeMarco, RMA's general counsel and director of operational risk.
For most companies, the on-premises appliance sits firmly rooted at the center of their backup world--making disk-to-disk (D2D) the preferred data protection method for backup and recovery of critical data, servers and applications. While D2D isn’t a perfect solution--often characterized by its high cost, capacity planning challenges and finite storage constraints--it’s tested, trusted and reliable.
With the cloud becoming more broadly adopted, many companies are considering cloud backup as a viable option for their disaster recovery (DR) strategy. Who doesn’t want lower costs and increased efficiency?
Heeding the call, the backup industry, which has always let the appliance drive its product vision, introduced hybrid backup appliances to the market. These appliances, designed to deliver cost savings, act as your local D2D backup. The cloud becomes your replication repository.
Project managers—especially in the tech sector—know all too well how many factors can cause a project to miss its deadline or go over budget. Keeping a project within its projected scope is one of the most difficult challenges for project managers.
Issues such as project omissions, slow or no user involvement, customer over-expectation and lengthy application development times often cannot be avoided. One thing that usually can be reined in is the scope of the project, which includes the objective, timeline, goals, resources, tasks, team and budget.
By properly defining these requirements, a project has a better chance of staying within these guidelines. Of course, the collection of data to define these requirements can often be a huge challenge in itself.
The book “Project Scope Management: A Practical Guide to Requirements for Engineering, Product, Construction, IT and Enterprise Projects,” provides instruction on developing and defining project requirements to keep projects on track and within scope. It deals with practical tools and simple techniques for project managers to use in the daily struggle to avoid scope creep.
My reading and research includes white papers from the Big Four accounting firms. I note for the record that Deloitte, a firm that has consistently produced excellent white papers on risk, has upped its game past white papers with a weekday Risk & Compliance Journal for executives in the Wall Street Journal, a convenient daily reminder of what’s at stake for publicly listed firms. But it’s Deloitte’s 2014 Global Survey on Reputation Risk that I’d like to discuss here, and then make note of several other useful and available white papers.
It’s always been difficult to quantify reputation, whether individual or corporate. We claim to know when a firm’s reputation has been compromised, and often the market punishes that firm directly. Yet there are other cases where direct actions taken to save a reputation – notably investigations, which may lead to the removal of the CEO or other executives – seem autocratic and insufficient. We express our own judgments by comment and retweet, often becoming part of a groundswell of distrust and dissatisfaction on social media that has a longer-term impact on the firm in question. Social media in that sense is more innovative than traditional data analytics. [It is hard to know whether social media commented more upon the corporate reputation of NBC or the individual reputation of anchor Brian Williams, but that particular groundswell led to the six-month suspension without pay of Williams, who is said to make $10 million a year. So far at least, NBC Evening News is holding its own in the ratings, but the company is making significant changes in its management staff; and it is not clear that Williams will ever return to the news desk.]
CHICAGO – You may be ready to enjoy more daylight hours after we “Spring ahead” an hour on March 8, but are you ready for the threat of flooding that warmer months can bring?
“With the change of seasons comes the risk of snow melt, heavy rains, and rising waters—we’re all at some level of flood risk,” said Andrew Velasquez III, FEMA Region V administrator. “It is important we prepare now for the impact floods could have on our homes, our businesses and in our communities.”
Take action with these simple steps to protect what matters before a flood threatens your community:
• Ensure you’re insured. Consider purchasing flood insurance to protect your home against the damage floodwaters can cause. Homeowners’ insurance policies do not typically cover flood losses, and most policies take 30 days to become effective. Visit FloodSmart.gov for more information.
• Keep important papers in a safe place. Make copies of critical documents (mortgage papers, deed, passport, bank information, etc.). Keep copies in your home and store originals in a secure place outside the home, such as a bank safe deposit box.
• Elevate mechanicals off the floor of your basement—such as the water heater, washer, dryer and furnace—to avoid potential water damage.
• Caulk exterior openings where electrical wires and cables enter your home to keep water from getting inside.
• Shovel! As temperatures warm, snow melt is a real concern. Shovel snow away from your home and clean your gutters to keep your home free from potential water damage.
• Build and maintain an emergency supply kit. Include drinking water, a first-aid kit, canned food, a radio, flashlight and blankets. Visit www.Ready.gov for a disaster supply checklist and flood safety tips and information. Don’t forget to store additional supply kits in your car and at the office too.
• Plan for your pet needs. Ensure you have pet food, bottled water, medications, cat litter/pan, newspaper, a secure pet carrier and leash included in your emergency supply kit.
• Have a family emergency plan in place. Plan and practice flood evacuation routes from home, work and school that are on higher ground. Your family may not be together when a disaster strikes so it is important to plan in advance: how you will get to a safe place; how you will contact one another; how you will get back together; and what you will do in different situations.
To learn more about preparing for floods, how to purchase a flood insurance policy and the benefits of protecting your home or property investment against flooding visit FloodSmart.gov or call 1-800-427-2419. For even more readiness information follow FEMA Region V at twitter.com/femaregion5 and facebook.com/fema. Individuals can always find valuable preparedness information at www.Ready.gov or download the free FEMA app, available for Android, Apple or Blackberry devices.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
As conflict continues in Ukraine, and fears of an expansionist Russia throw a shadow of war over Europe, the Cambridge Centre for Risk Studies has urged businesses to incorporate geopolitical conflict scenarios into their business continuity planning.
Interstate conflict was the number one concern of nearly 900 businesses and academics who responded to the Global Risks 2015 Report published in January by the World Economic Forum.
"These risks are continuing to grow in this new era of political uncertainty," said Dr Andrew Coburn, director of the Centre for Risk Studies Advisory Board at Cambridge Judge Business School. "Businesses should reappraise their readiness to manage possible disruption to their activities from armed conflicts in different parts of the globe," he said at the Centre's risk briefing held recently in the City of London.
The Centre for Risk Studies and its research partner, Cytora, have identified more than 100 potential country-to-country conflicts based on recent antagonistic statements towards each other, antithetical values and historical enmity. All have the potential to cause severe disruption to business activities.
Cytora's risk map of potential future conflicts highlights a number of regional hot-spots, including the obvious Middle East, Central and Eastern Africa; the Eastern European margins; the Indian subcontinent, parts of Latin America and the emerging Southeast Asian powers.
(TNS) — A study funded by a $10,000 grant will look at whether post-Sandy Long Islanders are better prepared when the streets are blocked, phones die and the water or snow turns into life-threatening challenges.
Sustainable Long Island, a nonprofit organization that promotes economic development, social equity and the environment, said a State Farm insurance grant awarded last month will develop and launch a Disaster Preparedness Program.
Under the three-prong plan, the group will conduct surveys to assess whether Long Islanders have strategies and supplies ready; teach high school and college students how to let their peers know about the effectiveness of social media in helping residents during disasters; and work with Long Beach to create a pilot program that would educate the public about disaster preparedness.
Establishing relationships with potential clients and partners is absolutely necessary for succeeding in business. One of the most effective ways to build such connections is to hold a lunch-and-learn event. That is, it’s effective when done right.
When done wrong, you’ll end up giving a presentation to a near-empty room in some dingy hotel conference space. Or even if you have a full house at a nice venue, it might as well be empty because your message is so unclear, cliché-ridden, and poorly delivered that it convinces no one to use your services.
It’s easy to avoid these ugly scenarios if you know what you’re doing. I spoke with David Russell, CEO of MANAGEtoWIN, who has over 41 years of experience in business, and has held too many lunch-and-learn events to count. Recently we held a webinar together to share what it takes to hold a successful lunch and learn. Here are the main tips we shared:
Toby Owen of Peer 1 Hosting identifies four drivers for the hybrid cloud:
- Big Data
- The Internet of Things
But covering data issues changes your perspective on these things, because when you boil it down, most technology is about sharing, securing or using data and information. Since information is just unstructured data — it really all boils down to data. So I look at that list and see only two drivers:
- Shared services (supported by federation and interoperability)
- Data. Just data.
By Joe Schreiber
Once you’ve come to terms with the harsh reality of the world, you come to understand that sooner or later, you will be the victim of a security breach. Chances are that it may not be this month, or even this year, but as the insightful Tyler Durden so shrewdly observed, “On a long enough timeline, the survival rate for everyone drops to zero.”
Getting breached doesn’t establish whether or not you have a decent security program in place; how you respond to a security breach does.
If you come to accept Murphy’s Law – that everything that can go wrong will do so, and usually with the worst possible timing – there are several steps that can be taken today to help soften any future blows. Setting these motions in place gives you the ‘freedom’ to expect the unexpected.
Try to rid yourself of any notion that the work you do in network security is ‘protecting’ the company’s assets. Your mission is to look into and analyze how the network can be attacked, with the anticipation that you can control the battlefield smoothly enough to be able to respond to all attacks satisfactorily. So, think strategically about what can be done today and what can be delayed for later. The following are six key actions you can take to make sure you and your organization are more than prepared.
It may seem a bit incongruous to talk about solar energy when nearly half the country is covered in snow, but the data center is still the energy hog of today’s economy and is constantly ratcheting up its consumption with every new hyperscale facility.
But even as Apple, Facebook and other top firms embrace solar power and other renewables, the question remains whether this is a viable option for the broader enterprise community. And if not, will the pressure to shed local data infrastructure still come from environmental corners anxious to foster greater dependence on cleaner, utility-style computing?
The key test of a technology like solar power is not in its ability to generate electricity, but in its ability to do so reliably. In a recent report, the North American Electric Reliability Corp. (NERC) said that the influx of renewables into the bulk energy grid of the U.S. and Canada and the closing of aging coal-fired facilities is lowering energy reliability in the region. This could cause rates to increase as utilities up their reserve fuel stores to maintain adequate load. The report is disputed by many, to be sure, but it does point up the uncertainties that accompany changes to such fundamental infrastructure as the energy grid.
Whenever a project is being planned, risk management has to be part of the equation – things rarely go smoothly or completely as expected, and there will always be areas that present more risks than others. Whether they affect the projected timeframes, budgets or outcomes, it is the job of the project manager to identify them and ensure that provisions are in place to limit their impact should they occur.
However, failures are made in risk management every day – they helped to trigger the economic crisis in 2008, demonstrating that even the world’s biggest banks, which take financial and logistical risks every day, are not immune to risk mismanagement. With this in mind, it’s understandable that smaller projects and processes might suffer from errors made in risk management.
Why aren’t we performing risk management well, then? With project management an ever-growing sector and more and more jobs being created every day, the next generation of risk managers needs to be able to identify issues in order to rectify them.
Facing a future where extreme weather events are more common, cities on the East Coast are building up their resiliency to power outages.
At-risk cities, especially those on the East Coast that haven’t historically had to prepare for hurricane-induced problems, are trying to improve their infrastructure and emergency plans to prevent power outages.
A recent analysis from Johns Hopkins University ranked Philadelphia as the second most likely city in the United States to experience an increase in power outages.
As a business continuity manager, CIO or company risk officer, you’ve probably already done numerous risk value calculations. In order to make a table to compare risks and their impacts, you might assign percentages or relative scores to risks, and monetary values or relative scores to impacts. The risk value in each case is then simply “risk × impact”. You get a simple table that allows you to rank risks in order of their risk value and set your priorities accordingly. However, what may be forgotten is that risk calculations can be positive as well as negative.
This harks back to the perception of business continuity planning and management exclusively as something that prevents interruptions (negative) and ensures that operations continue as usual (zero change). This is true, but it is only half the story. Increasingly, business continuity is becoming an opportunity not just to do as well as usual, but better (positive). For example, BCM must contain the negative risk of suppliers failing, but can also encourage the positive risk of increased profitability thanks to higher efficiency stemming from BC measures.
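The risk × impact ranking described above, extended to allow positive as well as negative impacts, can be sketched in a few lines. The risk names, likelihoods and impact scores below are purely illustrative:

```python
# Illustrative "risk value = likelihood x impact" table.
# Negative impacts are threats; positive impacts are opportunities.
risks = {
    "Key supplier fails":       (0.30, -8),   # hypothetical threat
    "Data centre outage":       (0.10, -10),  # hypothetical threat
    "Efficiency gains from BC": (0.60, +3),   # hypothetical opportunity
}

# Rank from most damaging (most negative risk value) to most beneficial.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1])

for name, (likelihood, impact) in ranked:
    print(f"{name}: {likelihood * impact:+.2f}")
```

The same table structure handles both halves of the story: threats to contain at the top, opportunities to encourage at the bottom.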
It is the end of an era for the Business Continuity Institute as Lyndon Bird FBCI has announced he is to stand down from his role of Technical Director. Over the last 21 years, Lyndon has become an integral part of the Institute, from his role as one of the founding members, through his position as Chairman of the Institute, to his job as Technical Director.
In nine years as Technical Director at the BCI, Lyndon has ensured that the BCI continues to have an effective and consistent voice on all matters of Business Continuity Management within the business, government, regulatory and academic communities. During his time, the Good Practice Guidelines have become a well respected source of global best practice, and the BCI has contributed significantly to the development of national and international standards.
On announcing his decision, Lyndon reflected that “although the BCI's work in all of these fields is ongoing, I feel my role as the main catalyst for this has changed. The BCI has grown to the point where it is staffed by a wide range of very competent people who are more than capable of dealing with the future challenges the Institute and the discipline might face. It is therefore an ideal time for me to move on and seek other interesting and challenging projects.”
On what lies ahead for him, Lyndon explained that "the opportunities created by the emergence of a wide-scale global resilience movement are very exciting and I look forward to continuing with my diverse writing, editing, teaching, commentating and consulting activities wherever in the world such opportunities emerge. I will no doubt be working with many BCI members in the future, albeit in a different capacity, but still with the same enthusiasm and passion for our subject.”
David James-Brown FBCI, Chairman of the Institute, described Lyndon as being "intimately involved with the establishment and growth of the Institute and has dedicated an enormous amount of his time and energy to making the BCI what it is today. Lyndon is truly one of the fathers of the industry and has been an inspiration to so many."
"On behalf of the BCI Board and the Membership I would like to express our heartfelt thanks and appreciation for an exceptional contribution; not just in terms of work but the personal attributes that Lyndon has brought. Lyndon will be sorely missed around the office for his wisdom, humour and humility; for his mentoring, his support and his encouragement. He will be missed by the Board for his dependability, his insightfulness and his clear thinking."
Steve Mellish FBCI, former Chairman of the BCI, and close friend to Lyndon, said of him: "Lyndon has always been reliably consistent in his passion for the subject and has such an astute capability to analyse situations and information to see connections or trends that many just don’t see. His devotion to the BCI has been there from ‘day one’ as one of the founding members. He has probably spent more time on the Board than anyone else I know including two terms as Chairman. To this day he still talks enthusiastically about the future and how business continuity and the BCI has and will continue to drive the whole resilience agenda going forward."
"If it wasn’t for Lyndon I know that I would not have achieved half of what I have done as a business continuity professional and without doubt, never have been so involved with the Business Continuity Institute. His wise counsel and support enabled me to face and deal with many challenging situations over my 12 years on the Board."
Anyone who has ever used a Business Continuity Management System (BCMS) knows that providing access for your business, IT, and executive planners is essential for two critical reasons:
- YOUR SYSTEM MAY INHIBIT DATA GATHERING AND ANALYSIS: You need quite a bit of data from many sources in your organization in order to formulate your BCP. While meeting with all users is fantastic, it simply is not feasible—even in the smallest of organizations. Even though your BCMS is supposed to streamline this activity, limiting users can do the exact opposite. It FORCES YOU to gather data by going directly to the user or utilizing outside methods (e.g. spreadsheets or external survey tools). This requires extensive work outside the BCMS.
Business Continuity Planning is often theoretical. After all, we can’t really know what we’ll need until a disruption occurs (and by then, it’s too late for planning!). As a result, we have little choice but to make our best guess as to what we’ll need when something hits the proverbial fan. A previous article discussed the pitfalls of assigning Business Continuity tasks to individuals because of risks to their availability. You should also be cognizant of the limitations of those teams and individuals assigned to carry out recovery tasks.
BC Planning deals with many unknowns: what will happen, when it will happen, how severe the disruption may be. We also don’t know how long the disruption – or the recovery from it – will last. We may assume that assigned teams or individuals will stick with the recovery process until normalcy is achieved. Is that likely? Who knows? But if it isn’t (if, for example, the recovery lasts more than 3 days) what is in our Plan to account for the limitations on assigned personnel? What kinds of ‘limitations’ must be accounted for?
The Cloud Standards Customer Council has released version two of its guide to cloud security.
The abstract reads as follows:
“Much has changed in the realm of cloud computing security since the original Security for Cloud Computing whitepaper was published in August, 2012. The aim of this guide is to provide a practical reference to help enterprise information technology (IT) and business decision makers analyze the security implications of cloud computing on their business. The paper includes a list of steps, along with guidance and strategies, designed to help these decision makers evaluate and compare security offerings from different cloud providers in key areas.”
Verisk Maplecroft has published its 2015 Natural Hazards Risk Atlas, which ranks over 1300 cities in 198 countries on their exposure to natural hazards to help organizations identify and compare risks to populations, economies, business and supply chains.
According to the Atlas, the strategic markets of the Philippines, China, Japan and Bangladesh are home to over half of the 100 cities most exposed to natural hazards, highlighting the potential risks to foreign business, supply chains and economic output in Asia from extreme weather events and seismic disasters. Of the 100 cities with the greatest exposure to natural hazards, 21 are located in the Philippines, 16 in China, 11 in Japan and 8 in Bangladesh. Analysis for the Natural Hazards Risk Atlas considered the combined risk posed by tropical storms and cyclones, floods, earthquakes, tsunamis, severe storms, extra-tropical cyclones, wildfires, storm surges, volcanoes and landslides.
The Philippines’ extreme exposure to a myriad of natural hazards is reflected by the inclusion of eight of the country’s cities among the ten most at risk globally, including Tuguegarao (2nd), Lucena (3rd), Manila (4th), San Fernando (5th) and Cabanatuan (6th). Port Vila, Vanuatu (1st) and Taipei City, Taiwan (8th) are the only cities not located in the Philippines to feature in the top ten.
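The kind of combined multi-hazard ranking the Atlas describes can be sketched as a weighted sum of per-hazard exposure scores. The city names, scores and weights below are purely illustrative, not Verisk Maplecroft's data or methodology:

```python
# Hypothetical per-hazard exposure scores (0-10) for a few cities.
scores = {
    "City A": {"cyclone": 9, "flood": 8, "earthquake": 6},
    "City B": {"cyclone": 2, "flood": 5, "earthquake": 9},
    "City C": {"cyclone": 4, "flood": 3, "earthquake": 2},
}

# Illustrative weights reflecting an assumed relative severity of hazards.
weights = {"cyclone": 0.4, "flood": 0.3, "earthquake": 0.3}

def combined(city_scores):
    """Weighted sum of hazard scores for one city."""
    return sum(weights[h] * s for h, s in city_scores.items())

# Rank cities from most to least exposed.
ranking = sorted(scores, key=lambda c: combined(scores[c]), reverse=True)
print(ranking)
```

A real model would also fold in factors such as population density and asset concentration, but the basic shape is the same: score each hazard, combine, then rank.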
By Duncan Ford MBCI
Could you get more out of your business continuity exercises? Do you have an inner concern that last year’s exercise programme didn’t demonstrate as much as you would have liked, or that there may be alternative ways of delivering the exercise that would be more cost effective and less effort?
Guidance from the various business continuity institutes and regulators, also included in recognised standards, puts a strong emphasis, quite correctly, on the essential requirement to exercise plans and recovery procedures. However, how do you assess the quality of the exercises, as opposed to the quantity? Are different types and styles of exercises being used, within an integrated programme, to meet different business needs?
Take a couple of seconds to consider whether:
- The maximum return is being gained from the time people commit to exercises;
- Different techniques could be used to engage directors and senior managers;
- The exercise(s) sufficiently challenge the organization’s assumptions about its ability to respond and recover.
Cold snaps are the weather phenomenon most likely to damage UK business performance, according to new research commissioned by cloud services company 8x8 Solutions to highlight the need for businesses to prepare for adverse weather to limit lost productivity. Economists from the Centre for Economics and Business Research (Cebr) examined the relationship between different weather events and economic growth across the UK’s main industries over the last decade.
They found that since 2005, periods of very cold weather have seen quarterly GDP growth on average 0.6 percentage points lower than typical levels. When minimum temperatures are one degree Celsius lower than average, quarterly GDP is on average £2.5 billion lower. This is a bigger negative effect than any other form of adverse weather, including snowfall, heat waves or flooding.
The fall in GDP results from lower output across a number of industries and lost productivity as transport links and staff availability suffer. Those who do get to work on particularly poor weather days often meet a skeleton staff, hindering productivity.
Whilst cold has the biggest negative effect on the economy, different industry sectors are impacted by different forms of extreme weather. For example, professional services and accommodation and food are the sectors that take the biggest hit from heavy rainfall. High rainfall has a big impact on office-based jobs, with just ten millimetres above average costing the economy £86 million in a single quarter. In January 2015 rainfall was 26.5mm above the 2004-2014 January average of 126.8mm – potentially costing the economy £76.3 million over the quarter.
The research also explores the resilience of businesses of different sectors and sizes. The information and communications sector is one of the few to see positive growth during poor weather. Cebr concluded that this is because the sector leads the way in using cloud-based technology allowing employees to work from home. On average, nearly two thirds (65%) of all companies in this sector use some form of cloud technology compared to just 15-30% of all other businesses.
But the report warns that smaller businesses are at a disadvantage in terms of poor weather, as Scott Corfe, Head of UK Macroeconomics, Cebr explains: “Many small offices are unprepared for such events as they often lack remote access to their work due to security concerns and a lack of infrastructure. This is compounded in many cases by inadequate internet connections or computing power at staff homes. In addition SMEs tend to suffer more than their larger counterparts who can spread the setup and maintenance costs of remote working infrastructure across many more staff.”
Kevin Scott-Cowell, CEO of 8x8 Solutions, says, “Bad weather hits businesses hard, and medium-sized companies are more vulnerable than their larger counterparts. Until now, the technical infrastructure to enable remote working and guard against disruption has been out of reach for many companies, but cloud solutions are changing this. It’s now affordable for any size business to put in place a plan and deploy the right remote working technology. This can make sure it’s business as usual for customers, whatever the weather.”
The research is released in the run up to Business Continuity Awareness Week, an initiative run by the Business Continuity Institute. Lyndon Bird FBCI, Technical Director at the BCI, said, “This research is a timely reminder of the need for companies to adopt business continuity management best practice. That means having the plans and technology in place to manage risks to the smooth running of their organisation or delivery of a service, ensuring continuity of critical functions in the event of a disruption, and effective recovery afterwards.”
Did I pack socks? Check. Toothbrush? Check. Business cards, phone charger, passport? Check, check, and check. Do I know what I need to do and what not to do to protect myself, my devices and the company’s data while I’m on the road and traveling for work? [awkward silence, crickets chirping]
S&R pros, how would employees and executives at your firm answer that last question? It’s an increasingly important one. Items like socks and toothbrushes can be replaced if lost or forgotten; the same can’t be said for your company’s intellectual property and sensitive information. As employees travel around the world for business and traverse through hostile countries (this includes the USA!), they present an additional point of vulnerability for your organization. Devices can be lost, stolen, or physically compromised. Employees can unwittingly connect to hostile networks, be subject to eavesdropping or wandering eyes in public areas. Employees can be targeted because they are an employee of your organization, or simply because they are a foreign business traveler.
(TNS) — Army researchers in a lab outside Washington worked for years on a software tool to help soldiers understand how hackers were targeting military computers.
Late last year they did something unusual: They released their project for anyone on the Internet to poke and prod.
William Glodek, the leader of the project, said the Army Research Lab hopes that if his team gives something, they'll get something.
"The Army is open and willing to collaborate," he said. "Hopefully, we can attract some bright talent to contribute to the project."
The federal government is looking for ways to improve the security of the nation's computers, but its plan to share information about threats faces legal obstacles before it can get moving. By offering up code, rather than data, Glodek's team has been able to take a step forward — and join a growing movement among military and intelligence community coders to share what they make.
Cybersecurity is a priority for enterprise executives and their boards, but a serious disconnect also exists in the C-suite on what the risk priorities should be and why, according to recent research. Some of the gap can be attributed to the day-to-day focus of different executive functions, but much of it goes far deeper into problems with culture and communication.
When consulting firm Protiviti and the Enterprise Risk Management (ERM) Initiative at North Carolina State University’s Poole College of Management recently conducted the third annual survey of business executives for “Executive Perspectives on Top Risks for 2015,” and examined the ranking of 27 risks by job function, they found that CFOs and chief audit executives (CAEs) perceived a riskier business environment than CEOs and the board. And CEOs and board members each had their own focus on the types of risks they perceived as most important.
Protiviti examined the relationship between the job functions of the executives it surveyed and whether they ranked macroeconomic, strategic or operational risks as of highest concern, and a pattern emerged. Board members collectively named four strategic risks among their top five concerns, along with one macroeconomic issue; CEOs collectively named four macroeconomic risks among their top five, along with one strategic risk. Other executives named more operational risks in their top five lists.
Two of my favorite bloggers, Tony Jaques in Australia and Jonathan and Erik Bernstein from California, had excellent posts on two of the most important topics: rumor management and apologies.
Tony tells the story of a hepatitis A scare in Australia that was linked to a frozen berry product. The company, out of an abundance of caution (as they like to say), voluntarily recalled its product without verifying that the product was the cause. From there, as you will see, the media did their thing, and the company apparently did not do enough to correct the misreporting.
The lesson is clear: a lie (or error) repeated often enough becomes the truth. The only way I know to deal with this is to loudly, clearly, and repeatedly tell the truth and correct the misinformation.
For many of our readers and the organizations where they work, any kind of supply chain disruption could qualify as a serious incident, and one that would likely have been discussed and included in their disaster preparedness planning process.
With that thought in mind, our staff recommends reading and potentially adding a recent EventWatch™ 2014 Supply Chain Disruption report to your organization’s business continuity and disaster preparedness team’s reading resource library.
This report draws on Resilinc’s database of more than 40,000 suppliers and 400,000 parts tracked in its cloud-based supplier intelligence repository. It analyzes incidents by risk type, industry, geography, severity, and seasonality, and compares 2014 data in these categories with 2013.
Disaster recovery planning for your IT installations may use automated procedures for a number of situations. Virtual machines can often be switched or re-started in case of server failure, and network communications can be rerouted without human intervention. For other requirements, people will be involved in getting IT systems up and running properly after an incident. But people do not switch into auto-run modes like a machine. They can be affected by the surprise factor of an IT disaster and by the pressure to bring things back to normal. Five aspects of usability may need to be designed into your DR planning if you want the best chances of a satisfactory recovery.
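The interplay described above, where automation handles routine restarts but hands off to people when it cannot recover, can be sketched in a few lines. This is a minimal illustration, not a real DR tool: the health-check and restart hooks are hypothetical stand-ins for whatever hypervisor or orchestration API an organization actually uses.

```python
def check_and_recover(is_healthy, restart, max_retries=3):
    """Attempt automated recovery of a failed service.

    Runs the health check up to max_retries times, restarting the
    service after each failed check. If the service never comes back,
    the incident is escalated to a human operator, reflecting the
    point above that people, not automation, handle the hard cases.
    """
    for _ in range(max_retries):
        if is_healthy():
            return "healthy"
        restart()  # in practice: a hypervisor or orchestration API call
    # Automation gives up; hand the incident to the on-call DR team.
    return "escalate"

# Simulate a VM that comes back up after one automated restart.
state = {"up": False}
result = check_and_recover(lambda: state["up"],
                           lambda: state.update(up=True))
print(result)  # healthy
```

The key design point is the explicit "escalate" outcome: a DR runbook should state clearly where automated recovery stops and human judgment begins, so responders under pressure are not surprised by what the tooling will and will not do.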
Risk management and risk transfer must work together to make organizations more resilient, as firms become more exposed to major disasters and subsequent business interruptions as a result of their increasingly complex global networks. Traditional property damage/business interruption policies were never designed to meet the risks faced by organizations today, and the business interruption insurance market has not kept pace with these rapid changes, according to Marsh.
In a new Marsh Risk Management Research report, the firm highlights how the limitations of existing business interruption insurance, including gaps in cover and inaccurate valuations, are resulting in less than optimal coverage for clients and makes the case for insurance modernisation.
Based on concerns raised by colleagues, clients, loss adjusters, lawyers and insurers, the report focuses on five core areas where Marsh believes improvement is required: insured values; indemnity periods; wide area damage scenarios; supply chain; and claims.
Caroline Woolley, Global Leader of Marsh’s Business Interruption Center of Excellence, commented: “A property damage event remains one of the major exposures any company can face, and business interruption is one of the main insurances purchased. Business interruption policies, however, have done little to evolve since the middle of the last century.
“The insurance industry needs to acknowledge the shortcomings of existing business interruption cover and build a better solution for buyers. This report is Marsh’s contribution to the debate as we seek to improve existing solutions and reshape the industry to address insurance buyers’ evolving needs.”
The report ‘Business Interruption Insurance Efficacy: Five Key Issues’ can be found after registration here.
Whilst SSD usage is up, the technology is still a cause of downtime: one third of respondents to a Kroll Ontrack survey confirm they have experienced some sort of SSD technology malfunction.
According to a recent solid state disk (SSD) technology use survey by Kroll Ontrack, while nearly 90 percent of respondents leverage the performance and reliability benefits of SSD technology within their organisation, one-third confirmed they experienced some sort of SSD technology malfunction. Of those who did, 61 percent lost data and fewer than 20 percent were successful in recovering their data, highlighting the known complexity of SSD data recovery.
In the UK, 27 per cent of respondents had experienced a failure of their SSD technology and of these 56 per cent experienced data loss as a result. A slightly higher number than the global figure (26 per cent) were able to recover their data following a failure.
What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:
(TNS) — Emergency personnel responding to an oil train derailment in West Virginia last week applied lessons learned from a rail disaster more than three decades ago, and likely prevented a bad situation from becoming much worse.
This week marks 37 years since a deadly explosion in Waverly, Tenn. On Feb. 24, 1978, a derailed tank car carrying liquid propane violently ruptured, killing 16 people, including the small town’s police and fire chiefs.
Emergency response and training have changed dramatically in the decades since the tragedy.
Buddy Frazier, the city manager of Waverly, about 65 miles west of Nashville, who was a young police officer when he witnessed the 1978 explosion, said that emergency responders are better trained and better equipped today. Still, he understands the challenges they face.
Virtualization has been changing the business IT landscape since the first hypervisor solution debuted in 1999. The technology initially targeted large enterprises and data center operators that could take advantage of its ability to add capacity and scale without physical components or the power and cooling costs required by hardware assets. During the past several years, though, virtualization has made significant in-roads in the SMB market due to a reduction in upfront investment costs, improved reliability and the proliferation of virtualization-dependent cloud services.
Industry research points to the continued growth of virtualization, and, according to social business platform provider Spiceworks’ 2014 State of IT Report, the adoption of virtualization among IT pros is currently at 74 percent worldwide. The Spiceworks report found that just over half of SMBs with fewer than 20 employees are currently leveraging virtualization, while 70 percent of SMBs with 20 to 99 employees and 83 percent of SMBs with 100 to 249 employees have adopted the technology for everything from productivity applications to databases to managed services.
(TNS) — Joplin, Springfield and Branson, Mo., have agreed to a set of procedures that will standardize how outdoor storm-warning sirens are activated and how they are tested.
The objective is to create a uniform standard across the region where none exists now. The adoption of the procedures by three of Southwest Missouri’s largest communities already has spurred other communities, such as Carthage, Bolivar, Pierce City and Monett, to participate in the guidelines.
The new procedures were unveiled during a news conference on Wednesday at the Springfield-Greene County Office of Emergency Management. Officials from the communities and representatives of the National Weather Service forecast office at Springfield were on hand for the announcement.
Among the many services state and local governments provide, few are as popular, as trusted or as essential as 911. Americans place roughly 240 million 911 calls each year, says the National Emergency Number Association, and access to 911 is nearly universal. Nevertheless, the system so many Americans rely on today to report emergencies and other problems stands on the brink of obsolescence.
While Americans are now accustomed to using Twitter, Facebook, Instagram and other social-media platforms for the rapid-fire sharing of news and information, most 911 systems can't handle the texts, videos, data and images that we increasingly use to communicate.
That's because in many parts of the country 911 is still rooted in the landline-telephone-based infrastructure that gave the system its start in 1968. As of November 2014, just 152 counties in 18 states even had the capability for citizens to text to 911. And only a handful of states -- such as Iowa and Vermont -- have taken the leap to Internet-enabled 911, known as "next-generation 911."
(TNS) — The tornado that struck Joplin, Mo., nearly four years ago left 161 people dead and much of the city devastated.
But the storm taught forecasters lessons that may have saved lives during subsequent disasters, including the May 2013 tornadoes in the Oklahoma City area, a National Weather Service official said Wednesday.
During a keynote address Wednesday at the National Tornado Summit in Oklahoma City, National Weather Service Deputy Director Laura Furgione discussed lessons the agency learned from a series of deadly tornadoes in the spring of 2011.
Will 2015 be the year the cloud gets past the hype? While cloud-based file sharing and other cloud services are being adopted by almost all businesses, the cloud is still in the early stages of its technological revolution. Whether it is personal computers, the internet, or 3D printing, every new technology goes through a period of hype and disillusionment before the really productive innovation takes place.
Gartner calls this the Hype Cycle of Emerging Technologies. According to Gartner, cloud computing has already passed the inflated expectations people had about it and everyone is beginning to become disillusioned by it. But that’s not a bad thing! Once the hype ends, real enlightenment can begin, and that’s where really useful and significant things get created.
So now that the hype over the cloud is over, is 2015 the year of enlightenment?
by Ben J. Carnevale
Business Continuity, Resiliency and Emergency Management Planning teams are often looking for additional ideas, programs and campaigns to help them be more prepared and ready to mitigate losses from potential disasters affecting the organization where they work and the community where they live with their families.
Our staff believes that the America’s PrepareAthon™ campaign qualifies as one of the best resources for those teams to look for ideas and assistance for taking action to increase emergency preparedness and resilience.
America’s PrepareAthon!™ is a grassroots campaign for action within the United States to increase community emergency preparedness and resilience through hazard-specific drills, group discussions, and exercises. Throughout the year, America’s PrepareAthon!™ helps communities and individuals across the country to practice preparedness actions before a disaster or emergency strikes.
The Business Continuity Institute’s North America awards will take place on 24th March 2015 during the DRJ Spring World in Orlando. The awards recognise the achievements of business continuity professionals and organizations based in the USA and Canada.
The BCI has now issued the shortlist for the awards which is as follows:
Continuity and Resilience Consultant
- Robbie Atabaigi, KPMG
- Jeff Blackmon FBCI, Strategic Continuity Solutions
- Christopher Duffy, Strategic BCP
- Paul Kirvan FBCI
- Debjyoti Mukherjee, KPMG
Continuity and Resilience Newcomer
- Garrett Hatfield, MetLife, Inc.
- William Kearney, Cameron
- Tamika McLester, Crawford & Company
Continuity and Resilience Team
- Business Resiliency Office (BRO), Automatic Data Processing (ADP)
- ETS Enterprise Resiliency Department, Educational Testing Service
- TMG Health Team, TMG Health
Continuity and Resilience Provider (Service/Product)
- ClearView Continuity
- Fusion Risk Management, Inc.
- Strategic BCP
- Virtual Corporation
- xMatters, Inc.
Continuity and Resilience Innovation
- 9yahds, Inc.
- Strategic BCP
- Send Word Now
- Quorum Technologies
- Suzanne Bernier MBCI
- Christopher Duffy
- Frank Leonetti FBCI