Industry Hot News (6393)
Despite the tremendous gains it has made over the past decade, storage is still lagging behind its compute and networking counterparts in terms of speed and performance.
This isn’t an indictment of storage itself, mind you, as technologies like Flash and other forms of solid-state infrastructure have done wonders for both speed and throughput in advanced enterprise settings. Rather, it is in the support infrastructure surrounding physical storage where most of the bottlenecks remain.
Latency in the storage farm, in fact, is increasingly seen as an impediment to many higher order data center functions, such as virtualization and cloud computing. According to a recent survey from PernixData, a vendor of server-side Flash solutions, about half of respondents say storage performance is a higher priority than additional capacity, while only 21 percent cited capacity as a priority. The survey also found that upwards of 70 percent of respondents are considering storage acceleration software to help boost performance. A key driver of this performance shortfall continues to be the proliferation of virtual machines, which tends to flood storage infrastructure with more requests than it can handle.
Rapidly developing computer technologies and the unrelenting evolution of cyber risks present one of the biggest challenges to the (re)insurance sector today. Liabilities from cyberattacks and threats to the data security of cloud computing and social media have become key emerging risks for carriers. The unprecedented rise in cyberattacks, in addition to the threat cyberrisk poses to global supply chains, has seen the cyberinsurance market grow significantly in recent years.
Client demand for cyber coverage has been growing, on average, 30% annually in the United States over the past several years, according to Marsh. While demand varies by industry, the one constant has been that more clients are investigating and analyzing existing traditional insurance coverage and whether they need standalone cyberrisk insurance coverage.
(MCT) — As scary as the Ebola incidents in Texas and the outbreak in Africa are, it's worth noting that nine years ago this month the country was confronting another outbreak that looked rather ominous, too: a deadly strain of influenza that had originated in birds in Asia.
The so-called bird flu elicited a widespread government response, including a white paper from then-President George W. Bush's White House laying out the strategies should the flu reach pandemic levels in the United States. There were worries at the time that the flu, which was passed from birds to humans, could mutate, turning into a flu pandemic similar to the one at the end of World War I that killed between 20 and 40 million people globally in 1918-1919.
Millions of birds were purposely killed to stop the disease, and the bird flu scare abated over that winter of 2005-2006.
Which disaster recovery measurements do you really need? The answer is the ones that are effective in helping you to plan and execute good DR. So your choice will naturally depend on your IT operations. The two ‘classics’ of the recovery time objective (RTO) and recovery point objective (RPO) are so fundamental that they apply to practically all situations. But suppose your organisation is running a service-oriented IT architecture with business applications like ERP using resources supplied by other servers. If some of the servers cannot be recovered satisfactorily, there may be a secondary impact elsewhere. How can you measure this situation and define a minimum acceptable level of recovery?
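As a minimal sketch of how the two classic measurements could be applied in practice, the following Python fragment compares an actual outage against RPO and RTO targets. The timestamps, target values and function name here are hypothetical illustrations, not prescribed values.

```python
from datetime import datetime, timedelta

def check_recovery(last_backup, outage_start, service_restored,
                   rpo=timedelta(hours=4), rto=timedelta(hours=2)):
    """Compare an actual outage against RPO/RTO targets.

    RPO: how much data (measured in time) was lost -- the gap between
         the last good recovery point and the moment of failure.
    RTO: how long the service was down -- the gap between failure
         and restoration.
    """
    data_loss = outage_start - last_backup
    downtime = service_restored - outage_start
    return {
        "data_loss": data_loss,
        "downtime": downtime,
        "rpo_met": data_loss <= rpo,
        "rto_met": downtime <= rto,
    }

# Hypothetical incident: backup at 06:00, failure at 09:00,
# service restored at 10:30.
result = check_recovery(
    last_backup=datetime(2014, 10, 14, 6, 0),
    outage_start=datetime(2014, 10, 14, 9, 0),
    service_restored=datetime(2014, 10, 14, 10, 30),
)
print(result["rpo_met"], result["rto_met"])  # True True
```

For the secondary-impact question raised above, the same idea extends naturally: each dependent service would carry its own targets, and the recovery of an upstream server would be scored against the strictest target among the services that depend on it.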
DALLAS — As a 26-year-old Dallas nurse lay infected in the same hospital where she treated a dying Ebola patient last week, government officials on Monday said the first transmission of the disease in the United States had revealed systemic failures in preparation that must “substantially” change in coming days.
“We have to rethink the way we address Ebola infection control, because even a single infection is unacceptable,” Thomas Frieden, director of the Centers for Disease Control and Prevention, said in a news conference.
Frieden did not detail precisely how the extensive, government-issued safety protocols in place at many facilities might need to change or in what ways hospitals need to ramp up training for front-line doctors or nurses.
By Matthew Neigh, Global Technology Evangelist, Cherwell Software
Today’s IT environments are complex, and the commoditization of IT is one of the driving elements. This is manifest in a variety of ways in the enterprise. However, few are as vexing as “bring your own device” (BYOD).
BYOD is not only the future; it is already here. Organizations should expect the trend, and its learning curve, to steepen, and the time available to adapt to shrink sharply. That means IT organizations are responsible for laying the groundwork for today’s need: the creation and implementation of policy. Listed below are key factors you’ll want to consider as you move toward the creation and implementation phase.
(MCT) — If the Loma Prieta earthquake happened today, Buck Helm might have survived his Nimitz Freeway commute to watch his two youngest children grow up. Donna Marsden could have finished fixing up her Victorian home. Delores Stewart could have cheered on her beloved Oakland A's.
Twenty-five years later, the freeways and bridges that collapsed have been rebuilt to stand up to a quake even more powerful than the 6.9 magnitude Loma Prieta.
More than $22 billion in infrastructure upgrades have built a metropolitan area that is far safer and far more resilient than before. It's a testament to the power of long-term planning, born of the ashes of the tragedy — 25 years ago Friday.
More than 440,000 in Missouri to Participate in Nationwide Drill
KANSAS CITY, Mo. — With just one week to go, communities throughout Missouri are preparing for the fourth annual Great Central U.S. ShakeOut Earthquake Drill, scheduled for October 16 at 10:16 a.m.
Great ShakeOut Earthquake Drills are occurring in more than 45 states and territories — nationwide more than 19 million people are expected to participate in the activity. During the drill, participants simultaneously practice the recommended response to earthquake shaking:
- DROP to the ground
- Take COVER by getting under a sturdy desk or table, or cover your head/neck with your arms, and
- HOLD ON until the shaking stops
The ShakeOut is free and open to the public. Participants include individuals, schools, businesses, local and state government agencies and many other groups. See the list of all the participants in Missouri at www.shakeout.org/centralus/participants.php?start=Missouri. The goal of the program is to engage individuals to take steps to become better prepared for earthquakes and other disasters.
“Participating in this drill is a quick, simple thing we should all do—at work, at home, alone or with family or co-workers—to prepare for earthquakes,” said Regional Administrator Beth Freeman, FEMA Region VII. “When we practice ‘drop, cover and hold on’ it makes it more likely we will react appropriately during an earthquake and that can and does save lives.”
States participating in the Great Central U.S. ShakeOut include Alabama, Arkansas, Illinois, Indiana, Kentucky, Missouri, Mississippi, Ohio, Oklahoma, and Tennessee.
Interested citizens, schools, communities, businesses, etc. are encouraged to visit http://www.shakeout.org/centralus/register to register to participate and receive instructions on how to hold their earthquake drill. On social media, information about the drill is being provided on Twitter through www.twitter.com/CentUS_ShakeOut. In addition, www.twitter.com/femaregion7 and others are tweeting earthquake safety tips and drill information using the hashtag #ShakeOut.
The Great Central U.S. ShakeOut is being coordinated by Missouri State Emergency Management Agency, the Central U.S. Earthquake Consortium and its other Member and Associate States, the Federal Emergency Management Agency, the U.S. Geological Survey and dozens of other partners.
Great ShakeOut Earthquake Drills began in California in 2008 and have expanded each year since then.
Visit FEMA Region VII online at www.fema.gov/region7. Follow FEMA online at www.twitter.com/femaregion7, www.twitter.com/fema, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
Charlie Maclean-Bristol, FBCI, discusses whether the time has come for business continuity managers to make contingency plans for an Ebola pandemic.
Spain is now dealing with the first case of direct infection of Ebola in Western Europe; the first Ebola death has occurred in the United States; and the World Health Organization has warned that ‘Ebola is now entrenched in the capital cities of all three worst-affected countries and is accelerating in almost all settings’. So has the time come for business continuity managers to make contingency plans for a possible future Ebola pandemic? I think the answer to this question is yes.
I am not suggesting that you immediately go out to the supermarket and buy lots of tinned food and water, barricade the house, be prepared to operate on battery power and bottled gas and then lie low.
What I am suggesting is that we should be quietly thinking about how a possible Ebola pandemic might affect our organizations; thinking through what an Ebola plan might look like; and monitoring the situation so that we are ready to react if it escalates further.
So what at this stage should business continuity managers be doing?
Enterprises are moving more and more applications to the cloud. The use of cloud computing is growing, and by 2016 it will account for the bulk of new IT spend, according to Gartner, Inc. (1). 2016 will be a defining year for cloud as private cloud begins to give way to hybrid cloud, and nearly half of large enterprises will have hybrid cloud deployments by the end of 2017.
“While the benefits of the cloud may be clear for applications that can tolerate brief periods of downtime, for mission-critical applications, such as SQL Server, Oracle and SAP, companies need a strategy for high availability (HA) and disaster recovery (DR) protection,” said Jerry Melnick, COO of SIOS Technology Corp. “While traditional SAN-based clusters are not possible in these environments, SANless clusters can provide an easy, cost-efficient alternative.”
Jerry says that separating the truths and myths of HA and DR in cloud deployments can dramatically reduce data center costs and risks. He debunks what he says are five myths:
As part of a broad effort to reinvent itself, BMC Software this week added advanced analytics capabilities to its suite of IT operations management software, while at the same time revamping its Remedy service desk software.
In addition, BMC has created a series of Smartflow Solutions that combine various BMC Software products into frameworks that make it possible to more easily manage IT at scale, while providing access to Automation Passport, a compilation of reference guides and best practices for automating IT operations.
Paul Appleby, worldwide executive vice president of sales and marketing for BMC Software, says BMC is moving to modernize its complete suite of distributed IT management offerings to make it easier to manage IT at scale in the age of cloud computing. Organizations that are increasingly relying on IT as a competitive weapon need to be able to operate IT on an industrial scale in order to successfully compete, says Appleby.
Now that the Ebola virus has made its way to the United States and we enter the traditional US flu season, companies are beginning to revisit and/or develop Pandemic Plans to address this scare. But Pandemic Planning is a little different from your standard business continuity plan development process. I have often chastised organizations for saying they have business continuity or disaster recovery “plans” when all they really have are plans to create plans; in the case of pandemic planning, however, I think that is actually the right approach to take.
The reason why it is so important to have well developed and relatively detailed business continuity plans, strategies and solutions in place today is that most disasters occur without warning and do not provide the luxury of time to figure out what to do after the incident occurs. Pandemics represent an evolving threat that comes in various shapes and sizes, and they do afford us the luxury (if that word really applies here) of constructing a response plan based on the particular pandemic that poses the threat.
The “Pandemic Influenza Risk Management / WHO Interim Guidance” published by the World Health Organization in 2013 states:
As hacking attempts become more complex, governments continue to improve their cybersecurity presence through sophisticated firewalls and expanded procedures. But while high-profile data breaches have focused more state and municipal attention on cyberintrusions, a decidedly old-school problem continues to plague efforts to beef up security — communication.
With a variety of security options available, public-sector agencies often are deploying tools and using strategies that utilize different terminology and principles. These differences can lead to frustration when trying to compare cybersecurity programs and address the latest digital threats across agencies or jurisdictions. Without a standardized language, it’s difficult to gauge how strong another organization’s cybersecurity is.
To illustrate the concept, consider an advertisement for a new hotel. The hotel boasts that it has superior service, amenities and security. The only way to know that for sure, however, is for those claims to be verified. In the lodging industry, organizations like AAA visit hotels and rate them — five-star, four-star, etc. Customers then read those ratings and make a decision on where to stay based on the commonly understood vernacular.
By Geary Sikich
Our concept of risk management needs to change. I’m not saying that the current practice is wrong; it just provides us with too much static risk assessment and the creation of many false positives in risk reports. One may ask why I chose to use the example of commodities traders for a new risk mindset. The answer is rather simple: commodities traders view risk as a rapid change agent. That is to say, risk changes in likelihood, velocity, impact and exposure over time.
If one refocuses to look at the consequences or potential consequences of the ‘near miss’ event instead of trying to determine the cause (which is often masked by opacity), preventative measures can be undertaken. Controls are often reactive and not sufficiently proactive. Once we have changed the mindset, we can create a more proactive culture.
Businesses which respond to supply chain scandals with additional rules and regulations leave workers even more vulnerable, according to a new report published by the Institute of Risk Management (IRM).
‘Extended Enterprise: managing risk in complex 21st century organisations’ argues that the modern commercial obsession with systems and processes obscures the real problem: failure to understand and predict human behaviour and build trust. It urges companies to prioritise behavioural risk over ‘tick box compliance’ to tackle the ethical uncertainties in today’s complex delivery networks.
IRM states that the report marks the transition from risk management of a single organization to a coherent programme which meets the global and interdependent challenges of today’s joint endeavours. The report’s project group, made up of IRM practitioners together with academic experts, provides developed models, tools and techniques to help risk practitioners understand and manage risk across extended enterprises.
As well as supporting organizational performance, the report claims that a better understanding of risk across the extended enterprise is also vital in tackling wider problems including slavery, abuse, environmental damage and dangerous working conditions. The report argues that wilful blindness by organizations to these issues within their broader networks is unacceptable. Firms must ask themselves whether any claims that they make about their values hold true across their extended enterprise.
A recent announcement explained that cyber-security ‘big names’ McAfee and Symantec have agreed to share their threat data. It’s a development that should benefit customers of both vendors. Historically, IT vendors have swung back and forth between the multi-vendor approach (“we’ll handle the other vendor’s stuff for you”) and so-called coopetition, where two or more providers joined forces by agreeing to operate to a common standard, for instance. The McAfee-Symantec pact ranges from sharing malware signatures to exchanging information on real-time attacks. Who else might follow this apparently enlightened example?
As the Internet of Things comes online, it will almost certainly require changes to how IT manages data, according to Gartner analyst Joe Skorupa.
"The enormous number of devices, coupled with the sheer volume, velocity and structure of IoT data, creates challenges, particularly in the areas of security, data, storage management, servers and the data center network, as real-time business processes are at stake," Skorupa, vice president and distinguished analyst at Gartner, states. "Data center managers will need to deploy more forward-looking capacity management in these areas to be able to proactively meet the business priorities associated with IoT."
The highly distributed nature of the IoT will make it impractical to move all of the data to a central location for processing, Skorupa theorizes. Instead, data will be aggregated in “distributed mini data centers where initial processing can occur.” Only the business-relevant data would be sent to a central location for further processing, he added.
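Skorupa's point about distributed processing can be illustrated with a small sketch: raw readings are summarized at a hypothetical edge location, and only the summary plus any anomalous values would be forwarded to the central data center. The sample data, threshold and function name below are illustrative assumptions, not part of Gartner's guidance.

```python
import statistics

# Hypothetical sensor readings arriving at an edge ("mini") data center.
readings = [
    {"sensor": "temp-01", "value": 21.4},
    {"sensor": "temp-01", "value": 21.6},
    {"sensor": "temp-01", "value": 85.0},   # anomalous spike
    {"sensor": "temp-02", "value": 19.9},
]

def aggregate_at_edge(readings, anomaly_threshold=50.0):
    """Summarize raw readings locally; only the compact summary and
    anomalous readings would travel to the central data center."""
    by_sensor = {}
    for r in readings:
        by_sensor.setdefault(r["sensor"], []).append(r["value"])
    summary = {
        sensor: {"count": len(vals), "mean": statistics.mean(vals)}
        for sensor, vals in by_sensor.items()
    }
    anomalies = [r for r in readings if r["value"] > anomaly_threshold]
    return summary, anomalies

summary, anomalies = aggregate_at_edge(readings)
print(len(anomalies))  # 1
```

The design choice mirrors the article's argument: the edge keeps the raw volume, while the center receives only what is business-relevant.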
The Erosion Threat Assessment Reduction Team (ETART) is a multijurisdictional, interdisciplinary team formed jointly by FEMA and the State of Washington in response to the 2014 Central Washington wildfires to address the threat of flooding, mudslides, debris flows and other erosion over the approximately 415 square miles of burned lands. (For a landownership breakdown, see the following map and chart.)
In the summer of 2014, the Carlton Complex Fire burned more than 250,000 acres of land in Washington, the largest wildfire in state history. The fire burned private, federal, state and tribal lands, consumed 300 homes and destroyed critical infrastructure in its path. Then intense rainstorms over the scarred landscape caused more damage from flooding, mudslides and debris flow.
Fire suppression costs topped $68 million. But post-fire recovery costs have yet to be tallied.
Given the size and severity of the fire, President Obama issued a major disaster declaration on Aug. 11, which authorized the Federal Emergency Management Agency (FEMA) to coordinate federal disaster relief and to help state, tribal and local agencies recover from the disaster.
Once firefighters contained the Carlton fire on Aug. 25, the U.S. Forest Service (USFS) deployed its Burn Area Emergency Response (BAER) team to measure soil quality, assess watershed changes, identify downstream risks and develop recommendations to treat burned federal lands.
FEMA officials and the BAER team acted fast. They knew more floods may follow without vegetation to soak up rainwater. More silt and debris in the runoff can plug culverts and raise water levels, which may further threaten downstream communities and properties.
To reduce the vulnerability of those downstream communities, FEMA created ETART. Modeled after BAER, ETART would measure soil quality, assess watershed changes, identify downstream risks and develop recommendations to treat burned state, tribal and private lands.
FEMA and the State of Washington recruited biologists, engineers, hydrologists, mapping experts, range specialists, soil scientists and support staff from more than 17 entities.
SPIRIT OF COOPERATION
ETART participants include: Cascadia Conservation District, the Confederated Tribes of the Colville Reservation, FEMA, Methow Conservancy, National Weather Service (NWS), Okanogan Conservation District, Skagit Conservation District, Spokane Conservation District, U.S. Army Corps of Engineers, U.S. Bureau of Land Management (BLM), U.S. Department of Agriculture, U.S. Department of the Interior, USFS, Washington State Department of Natural Resources, Washington State Department of Fish and Wildlife, Whatcom Conservation District and Yakama Nation Fisheries.
Team members reaped the benefits of working together across jurisdictional boundaries and overlapping authorities right away. To start, they stitched their maps together and overlaid their findings to gain consistency and a better perspective. Field assessments relied on extensive soil sampling, and computer modeling showed the probability of debris flow and other hazards.
Standard fixes in their erosion control toolbox include seeding and other ground treatments, debris racks, ditch protection, temporary berms, low-water crossings and sediment retention basins. Suggested treatments were evaluated based on their practical and technical feasibility.
Regional conservation districts provided a vital and trusted link to private landowners. They:
• held public meetings and acted as the hub of communications
• posted helpful links on their websites
• collected information on damage to crops, wells, fences, livestock and irrigation systems
• secured necessary permits that grant state and federal workers access to private property to assess conditions.
Local residents shared up-to-the-minute information on road conditions and knew which seed mixtures worked best for their area. Residents proved key to the success of ETART.
Note: Teams found a few positive consequences of the wildfire. For instance, debris flow delivered more wood and gravel downstream, which may create a better fish habitat once the debris and sediment settle. The resultant bedload may enhance foraging, spawning and nesting for endangered species, such as Steelhead, Bull Trout and Spring Chinook Salmon.
STRENGTH OF COLLECTIVE ACTION
Final reports from BAER and ETART have helped several state agencies formulate and prioritize their projects, and leverage their budget requests for more erosion control funds.
Landowners and managers might share equipment, gain economies of scale and develop more cost-effective solutions. In the end, collaboration and collective action may avert future flooding.
CULTURE OF RESILIENCE
While public health and safety remain the top priority, other values at risk include property, natural resources, fish and wildlife habitats, as well as cultural and heritage sites.
Estimated costs for the emergency restoration and recovery recommendations on federal lands run $1.5 million. For short-term stabilization, USFS initiated funding requests for seeding and mulching urgent areas before the first snowfall. Other suggested treatments include bigger culverts, more warning signs and the improvement of road drainage systems.
For state and private lands, emergency restoration and recovery recommendations may cost in excess of $2.8 million. Erosion controls include seeding, invasive species removal and the construction of berms and barriers. In its final report, ETART also recommended better early warning systems, more warning signs on county roads and electronic message signs to aid residents evacuating via highways.
Landowners, managers and agencies continue to search for funding to pay for implementation. For instance, BLM regulations may allow it to seed its lands, as well as adjoining properties, after a wildfire. Select state agencies may provide seedlings, technical assistance on tree salvaging, or partial reimbursement for pruning, brush removal and weed control.
Knowing a short period of moderate rainfall on burned areas can lead to flash floods, the NWS placed seven real-time portable gauges in September to monitor rainfall in and around the area, and plans to place eight more rain gauges in the coming weeks. The NWS will issue advisory Outlooks, Watches and Warnings, which will be disseminated to the public and emergency management personnel through the NWS Advanced Weather Information Processing System.
Certain projects may qualify for FEMA Public Assistance funds. Under this disaster declaration, FEMA will reimburse eligible tribes, state agencies, local governments and certain private nonprofits in Kittitas and Okanogan counties for 75 percent of the cost of eligible emergency protective measures.
Successful ETARTs replicated in the future may formalize interagency memorandums of understanding, develop more comprehensive community wildfire protection plans and promote even greater coordination of restoration and recovery activities following major wildfires.
I have participated in a number of conversations where people argue what the basis for business continuity plans should be. Some people say you should have plans designed for specific threats inherent in your environment and others say that “what” happens is not important; plans should be based on the impacts of what happened and not the event itself. I say, they are both right, in a way.
Business continuity planning, I think, has evolved over time and has expanded in scope of what it tries to achieve. I’m not sure why we have gotten away from the term “contingency plans”, but I think Business Continuity Planning today includes both emergency response components and contingency planning components.
Considering these two components of the overall program, I think the Emergency Response part, the part that addresses how an organization responds to an incident, should, in fact, have scenario specific components for the known risks and threats in the area where you do business. If you have facilities in hurricane regions, you absolutely should have Hurricane Preparedness Plans. The same goes for facilities on fault lines; in flood plains; near active volcanoes; near nuclear power plants; etc. When specific threats arise, like pandemics, for example, your organization should develop a scenario specific plan with prevention and containment techniques for that exact threat.
(MCT) A few years ago a group of researchers used computer modeling to put California through a nightmare scenario: Seven decades of unrelenting mega-drought similar to those that dried out the state in past millennia.
"The results were surprising," said Jay Lund, one of the academics who conducted the study.
The California economy would not collapse. The state would not shrivel into a giant, abandoned dust bowl. Agriculture would shrink but by no means disappear.
Traumatic changes would occur as developed parts of the state shed an unsustainable gloss of green and dropped what many experts consider the profligate water ways of the 20th century. But overall, "California has a remarkable ability to weather extreme and prolonged droughts from an economic perspective," said Lund, director of the Center for Watershed Sciences at the University of California, Davis.
(MCT) — Gov. Dannel P. Malloy has declared Ebola a public health emergency and authorized officials to quarantine anyone who may have been exposed to or infected with the virus.
Though Ebola has not been reported anywhere near Connecticut, the order is a precautionary measure and just one of several actions being taken to guard against the disease in the state.
"Right now, we have no reason to think that anyone in the state is infected or at risk of infection," Malloy said in a news release. "But it is essential to be prepared, and we need to have the authorities in place that will allow us to move quickly to protect public health if and when that becomes necessary."
With more than 7,000 people sickened and more than 3,000 killed by the virus in West Africa, fears spiked last week with the announcement that Ebola was found in a man who had traveled from Liberia to Dallas.
By 2017, half of employers will require employees to provide their own mobile devices for work use, Gartner reports. There are many benefits to BYOD policies, from greater productivity on devices users are more comfortable with to lower corporate costs when businesses do not have to purchase mobile equipment or service plans. But these devices pose tremendous security risks that may not be worth the reward. According to data security firm Bitdefender, 33% of U.S. employees who use their own devices for work do not meet minimum security standards for protecting company data. In fact, 40% do not even activate the most basic layer of protection: lock-screen features. Further, while the majority of workers could access their employer’s secure network connection, only half do so.
Bitdefender reports that there are 5 core security functionalities a strong BYOD policy should check:
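As a sketch of how such policy functionality might be enforced in practice, the following hypothetical compliance check flags devices that fail basic protections. The specific checks here (lock screen, encryption, patch level, VPN) are illustrative assumptions, not Bitdefender's actual list.

```python
def byod_compliance(device):
    """Evaluate a device record against a set of illustrative
    BYOD policy checks; the check names are hypothetical."""
    checks = {
        "lock_screen_enabled": device.get("lock_screen", False),
        "storage_encrypted": device.get("encrypted", False),
        "os_up_to_date": device.get("os_patch_level", 0) >= 2014,
        "uses_vpn": device.get("vpn", False),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return {"compliant": not failed, "failed_checks": failed}

# Hypothetical device: everything in order except the VPN.
device = {"lock_screen": True, "encrypted": True,
          "os_patch_level": 2014, "vpn": False}
result = byod_compliance(device)
print(result["compliant"], result["failed_checks"])  # False ['uses_vpn']
```

A real mobile device management (MDM) agent would gather these attributes from the device itself; the point of the sketch is that policy only helps if each requirement is expressed as something that can actually be checked.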
To respond effectively during a disaster, it’s first vital to understand the demographics of residents and visitors. Most offices of emergency management maintain detailed inventories of critical infrastructure, their vulnerabilities, states of repair and hotspots around their jurisdictions frequently impacted such as roads that consistently flood or ice over. However, the same amount of critical information is rarely available about the community’s most valuable asset — its people.
Just as other significant storms have in the past, Hurricane Sandy served as a strong reminder of the importance of having access to critical information about the individuals who reside in or commute to an area. Nearly half the victims of the storm were age 65 or older, similar to that of Hurricane Katrina where 71 percent of those who died were 60 or older. Recent lawsuits brought against the cities of New York and Los Angeles (as well as Los Angeles County) have reinforced the importance of anticipating and preparing for the needs of some of the population who might require additional or specialized assistance during a disaster. It’s hard to say whether knowledge of the locations of older residents or those with other needs, particularly along coastal areas, would have reduced the death toll during Sandy, but having access to more information is always better when managing response to a disaster.
While low interest rates are likely to continue to present a challenge well into 2015, a stronger economy presents the property/casualty insurance industry’s best opportunity for growth, according to I.I.I. president Dr. Robert Hartwig.
Dr. Hartwig shared his thoughts on the industry’s growth outlook in his Commentary on 2014 First Half Results.
There are two principal drivers of premium growth in the P/C insurance industry he noted: exposure growth and rate activity.
Exposure growth—basically an increase in the number and/or value of insurable interests (such as property and liability risks)—is being fueled primarily by economic growth and development.
Although the nation’s real (inflation-adjusted) GDP in the first quarter of 2014 actually declined at an annual rate of 2.1 percent, economic growth snapped back in the second quarter, as real GDP surged by 4.6 percent.
There are very few more pressing issues in management today than cyber security. Notice that I didn’t say IT management; I said management. When the hacking of a major US retailer (Target) leads to the loss of billions of dollars in stock value and sales and the removal of not only the CSO, but the CIO and ultimately the CEO as well, stockholders, investors, and customers take notice.
Organizations worldwide depend increasingly on information and communications technology to operate and manage 24/7/365, and wireless devices, BYOD, social media, and the like all combine to make the jobs of those responsible for cyber security exponentially more difficult. Like the Dutch boy and the dike, security people worldwide have too many holes to plug and too few arms and fingers. Recently, I was watching a 1960s spy movie in which the agent had to find and access physical documents on site, take pictures of them, reduce the photos to microdots, paste the dots in place of periods in another document, and then smuggle those documents past the authorities. Today, an equivalent theft can be done remotely, often from another, hostile country, at light speed. And Edward Snowden’s 2013 disclosures about the doings of the US National Security Agency (NSA) amply demonstrate what a skilled technical organization with nearly unlimited resources can accomplish from half a world away.
The National Fire Protection Association (NFPA) reports that property losses at U.S. factories total nearly $1 billion annually. Between 2006 and 2010, about 42,800 industrial or manufacturing property fires in the utility, defense, agriculture, and mining industries were reported to U.S. fire departments each year, along with 22 deaths and 300 injuries annually, according to the NFPA.
“Fire is the No. 1 preventable disaster at manufacturing facilities,” Cindy Slubowski, vice president and head of manufacturing at Zurich, said in a statement. “Most fires are preventable, and the risks can be reduced dramatically.”
In recognition of National Fire Prevention Week (Oct. 5-11), Zurich recommends that factory owners implement a pre-fire plan, starting with these steps:
One of the intuitive responses to Bring Your Own Device (BYOD) concerns is that it is important for organizations to have prudent and well publicized policies in place to clarify necessary information for users; including mitigating dangers and ensuring that everybody knows who pays for services.
Of course, this makes sense, but it may be difficult to do. Respecting the rights of employees and organizations is a tough balancing act. Perhaps this is why BYOD policies are not being followed as much as they should be – or as much as they were in the past. TEKsystems recently released a survey that suggests a lot of the people who should be paying attention to policies aren’t, and that the number of workers bypassing policies is growing.
Even more troubling, the survey found that 64 percent of IT professionals said their organization has no official BYOD policy, up from 43 percent in 2013.
The steady stream of high-profile data breach incidents we’ve seen over the last few years makes one thing clear: cyber risk is a serious concern for virtually any enterprise. Disruption of day-to-day business operations and damage caused by the exposure of critical intellectual property or consumer information are just a couple of examples of potential fallout from an information security incident, not to mention a tide of expensive and embarrassing litigation and the possibility of damaging regulatory inquiries or compliance actions.
Federal agencies extend their reach into cybersecurity
Not convinced? One need only look at the breadth of publicly disclosed document requests from the Federal Trade Commission (FTC) in response to recent data breaches to get a sense of the entirely new level of scrutiny regulators are focusing on information security risk management practices following a serious breach incident. Other federal agencies like the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) are also extending their reach by issuing new guidance regarding cybersecurity. Even congressional committees are getting into the act.
How security policy orchestration software can help reduce downtime in hybrid environments.
By REUVEN HARRISON
In our global, 24/7, online world, the individuals and organizations we deal with increasingly expect – and often rely on – our systems and applications being available at all times. When disaster strikes and downtime hits (whether through error, misfortune or malice), it can damage both an organization’s reputation and its bottom line. The companies trusted to store and handle valuable information securely, or to provide access to applications and services, must do all they can to minimise the risk of breaches and downtime.
While stories about hackers and viruses breaking into (or bringing down) systems tend to prompt the biggest headlines, those of us in IT know that more downtime is due to network configuration errors than to security breaches. Because today’s networks are so complicated, and the pace and volume of changes is so great, it’s not uncommon for rushed-off-their-feet IT staff to make occasional configuration errors – and that could mean downtime for an application, service or even an entire business.
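The kind of automated pre-change check that policy orchestration tools perform can be illustrated with a minimal sketch. This is a hypothetical example: the policy rules, field names and zone names are invented for illustration, not taken from any particular product.

```python
# Hypothetical sketch: validate a proposed firewall rule change against a
# simple policy before it is pushed, catching the configuration errors
# that would otherwise cause downtime. All rule fields are invented.

FORBIDDEN_PORTS = {23, 135, 445}   # e.g. telnet, RPC, SMB from untrusted zones
TRUSTED_ZONES = {"dmz", "internal"}

def validate_rule(rule: dict) -> list[str]:
    """Return a list of policy violations for a proposed rule (empty if OK)."""
    violations = []
    if rule.get("action") == "allow":
        if rule.get("src_zone") not in TRUSTED_ZONES and rule.get("dst_port") in FORBIDDEN_PORTS:
            violations.append(
                f"port {rule['dst_port']} must not be opened from zone {rule['src_zone']!r}"
            )
        if rule.get("dst") == "0.0.0.0/0":
            violations.append("allow rules must not target any-destination")
    return violations

# A rushed change that a reviewer might miss, but a check catches:
proposed = {"action": "allow", "src_zone": "guest", "dst": "10.0.0.5", "dst_port": 445}
problems = validate_rule(proposed)
```

Real orchestration software applies far richer policy models, but the principle is the same: every change is checked against codified policy before it can take a network segment down.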
Entries are now being accepted for the BCI North America Awards 2015, which will be presented at the DRJ Spring World conference in Orlando.
This year's Award categories are:
- Business Continuity Consultant of the Year
- Business Continuity Manager of the Year
- Public Sector Business Continuity Manager of the Year
- Most Effective Recovery of the Year
- BCM Newcomer of the Year
- Business Continuity Team of the Year
- Business Continuity Provider of the Year (BCM Service)
- Business Continuity Provider of the Year (BCM Product)
- Business Continuity Innovation of the Year (Product/Service)
- Industry Personality of the Year.
The entry deadline is January 23rd 2015.
A new survey-based study conducted by IDG Research Services on behalf of Sungard Availability Services and EMC Corporation has looked at the cloud recovery market, amongst other areas.
The survey of 132 organizations found that faster recovery and reduced disaster recovery costs were reported as the top benefits of cloud recovery services (58 percent) followed by reduced downtime (44 percent) and improved reliability (38 percent).
Nearly half of respondents either have already invested in cloud recovery services or are planning to invest in the next one to two years; nearly an additional third have cloud recovery services on their radar but have no current investment plans.
Significantly, over three-fourths (78 percent) of those already investing in cloud recovery services acknowledge faster recovery as a benefit, compared with just 54 percent of organizations planning on investing and 57 percent of those with no plans to invest.
With regard to challenges specifically associated with cloud recovery services, those who are planning to invest (80 percent) and those who have no plans to invest (57 percent) are significantly more likely to have security concerns than those who are already investing (48 percent) in cloud recovery.
Organizations also wonder whether they will realize a return on their cloud spending, with 38 percent believing it will prove a challenge to realize an ROI on cloud recovery services.
The full results of the survey can be found after registration here.
When should you bring in new technology? When it does a better job of meeting your needs, of course. It’s the same for business continuity management. Migrating from in-house physical servers to cloud computing services should be properly justified by lower costs, higher reliability and better performance, for instance, without sacrificing data confidentiality, control or conformance. While cloud computing makes sense for many organisations, there are cases where it doesn’t (for example, cloud computing isn’t always cheaper). Looking at the following business criteria and then analysing what new generation technology has to offer may be the smarter way to do things.
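The cost criterion alone can be stated as simple arithmetic. The sketch below is hypothetical – every figure and cost category is invented for illustration – but it shows why "cloud is cheaper" has to be checked rather than assumed:

```python
# Hypothetical sketch: compare the annualised cost of in-house servers
# against a cloud service. All figures and cost categories are invented.

def annualised_inhouse(hw_cost: float, lifespan_years: float,
                       yearly_power: float, yearly_admin: float) -> float:
    """Amortise hardware over its lifespan and add recurring costs."""
    return hw_cost / lifespan_years + yearly_power + yearly_admin

def annualised_cloud(monthly_fee: float, yearly_egress: float) -> float:
    """Recurring subscription plus data-transfer costs."""
    return monthly_fee * 12 + yearly_egress

inhouse = annualised_inhouse(hw_cost=30_000, lifespan_years=5,
                             yearly_power=2_400, yearly_admin=15_000)
cloud = annualised_cloud(monthly_fee=1_800, yearly_egress=3_000)

# The numbers decide, not the trend:
cheaper = "cloud" if cloud < inhouse else "in-house"
```

With these invented figures the in-house option comes out ahead, which is exactly the point: the justification has to come from your own numbers, alongside the reliability, performance and confidentiality criteria above.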
Suppose your business suffers a temporary disruption. (The cause of the disruption doesn’t matter; neither, necessarily, does the length of the disruption.) A disruption that impacts customers, prospects or finances (and almost every disruption – even for a few minutes – will), may trigger compliance obligations. You may need to file an insurance claim. Or you may need to provide government or industry regulators with the details of how your organization dealt with the disruption.
Do your Business Continuity and Incident Management plans lay out the needs and requirements for documenting actions taken during a disaster or other disruption?
Any business disruption will generate a flurry of activity. Will you be able to recall all of those actions once order has been restored? Or will you have to spend countless hours reconstructing what happened, who did what and how long each action took? It is unlikely you’ll be able to capture every action by every participant. And the longer the disruption lasts, the longer that list of actions will be.
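Capturing actions the moment they happen is far easier than reconstructing them afterwards for an insurer or regulator. A minimal sketch of a timestamped incident action log follows; the record structure and field names are assumptions for illustration, not a prescribed format:

```python
# Minimal sketch of a timestamped incident action log, so that actions
# taken during a disruption can be reported later for an insurance claim
# or regulatory filing. The structure is invented for illustration.
from datetime import datetime, timezone

class IncidentLog:
    def __init__(self, incident_id: str):
        self.incident_id = incident_id
        self.entries: list[dict] = []

    def record(self, who: str, action: str) -> None:
        """Append an entry the moment the action is taken."""
        self.entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": who,
            "action": action,
        })

    def report(self) -> str:
        """Chronological summary, ready for a claim or compliance filing."""
        lines = [f"Incident {self.incident_id}: {len(self.entries)} actions"]
        lines += [f"{e['when']}  {e['who']}: {e['action']}" for e in self.entries]
        return "\n".join(lines)

log = IncidentLog("2014-10-06-outage")
log.record("ops-team", "Failed over primary database to DR site")
log.record("comms", "Notified customers via status page")
```

Whether the log lives in a script, a shared spreadsheet or an incident management tool matters less than the plan requiring that it be kept at all.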
Two surveys have been released recently that show the way consumers think about enterprise data breaches.
The first survey, conducted by HyTrust, isn’t surprising. It found that the majority of consumers will take their business elsewhere after discovering their information was compromised in a breach. And consumers aren’t patient on this matter. For approximately 45 percent of survey respondents, data security is a one strike and you’re out deal – they aren’t going to wait around for your company to get its act together and fix the security holes.
Also, that 45 percent wants to see companies found criminally negligent when a data breach occurs. Eric Chiu, president and co-founder of HyTrust, told eWeek that this may have been the most surprising statistic to come out of the survey, adding:
One of the primary benefits of the cloud is the ability to distribute data architectures across wide geographic areas. Not only does this protect against failure and loss of service, but it allows the enterprise to locate and provision the lowest-cost resources for any given data load.
But problems arise in the ability, or lack thereof, of managing and monitoring these disparate resources, particularly as Big Data and other emerging trends require all enterprise data capabilities to be marshalled into a cohesive whole.
When it comes to storage, many organizations are attempting to do this through global file management, which is essentially putting SAN and NAS capabilities on steroids. The idea, as Nasuni and other promoters point out, is to extend resource connectivity across broadly distributed architectures while maintaining centralized control. This is not as easy as it sounds, however. Traditional snapshot and replication techniques must now work across multiple platforms and be free to make multiple versions of data that would overwhelm standard storage architectures. They must also be flexible enough to accommodate numerous performance levels, but not so unwieldy as to drive up costs by endlessly copying data sets for each new cloud deployment.
Data can be a fundamental tool in disaster preparedness, but the insights aren’t always heeded. This was the observation of three emergency management experts from academia, government and the private sector in an exchange last week on natural disaster data.
The trio, who spoke about data use for city resilience at the Atlantic CityLab Summit in Los Angeles, Sept. 29, said that an analysis of data shows an overwhelming need for infrastructure improvements, but states and cities typically take short-term savings over long-term protections against catastrophe.
Lucy Jones, a seismologist at the U.S. Geological Survey (USGS), is collaborating with Los Angeles to draft a seismic-resilience plan. She said the city is a prime example of what happens when there’s an abundance of data and absence of investment in disaster preparation. About 85 percent of the city’s water supply is delivered by aqueducts across the southern San Andreas Fault — a fault line the USGS estimates will generate a major earthquake sometime in the next decade or so, according to its data. The danger centers on indications city aqueducts will break, leaving only a six-month supply of water reserves for residents, she said.
“What if there was a case of Ebola in my community?” With the growing outbreak in West Africa, public health preparedness planners across the country are mulling this question as news broke that the CDC confirmed a case of Ebola in Texas and concerns grow over the threat posed by Ebola to global health security. This question is inevitably followed up with, “Are we ready?”
These are the types of questions that keep public health preparedness planners up at night. The reason these questions are so pressing right now is not only because of the alarming symptoms and mortality rate of Ebola, but also because of the continuous funding cuts that local health departments have faced since 2007. The United States is not West Africa, and Ebola is unlikely to have sustained transmission here because of better infection control in healthcare facilities, cultural differences, and protocols put in place by the Centers for Disease Control and Prevention (CDC) to stop the spread of the disease. But while local health departments would do everything in their power to protect lives in the face of a public health emergency like Ebola, there are other consequences to a community tasked with responding to a public health emergency that are complicated by ongoing funding cuts. For example, even the containment, treatment, and contact investigation of a small number of Ebola patients would have the potential to quickly overwhelm local health department budgets, as per capita spending on public health preparedness has decreased by nearly 50 percent in just the past year. Administrative burdens often delay state and federal emergency response funding that supplements local budgets. Additionally, lack of funding has decreased the number of preparedness programs.
Business Continuity and IT Disaster Recovery planning tends to focus first on system and application recovery (Recovery Time Objective – RTO) and second on data recovery (Recovery Point Objective – RPO). That makes sense when you consider the order in which things are usually recovered, but does it really? Isn’t the data or the information the lifeblood of the company? Isn’t that why it is called Information Technology and not just technology?
Customer information, financial data, product specifications, research data, procedures, accounts payable, forms (the list could go on and on): this is what the company runs on.
I read two articles recently – Michael O’Dwyer’s “How snapshot recovery ensures business continuity” and Marc Staimer’s “Why Business Continuity Processes Fail and How To Recover Them.” Both share a lot of good information about improving data backup methods and timeliness. They explain how important the RPO is to disaster recovery planning and talk about backup and restore procedures, media, storage and locations. I would like to add some additional considerations for determining the RPO and developing recovery strategies that will meet the business need.
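One such consideration can be stated concretely: the interval between backups, plus the time a backup takes to complete, bounds how much data you can lose. A hypothetical sketch (figures invented for illustration):

```python
# Hypothetical sketch: check whether a backup schedule can meet a stated
# RPO. Worst case, data written just after a backup starts is only
# captured by the *next* backup, so the maximum data loss is roughly the
# backup interval plus the backup duration. All figures are invented.

def max_data_loss_hours(interval_hours: float, backup_duration_hours: float) -> float:
    """Worst-case window of unrecoverable data, in hours."""
    return interval_hours + backup_duration_hours

def meets_rpo(rpo_hours: float, interval_hours: float,
              backup_duration_hours: float) -> bool:
    return max_data_loss_hours(interval_hours, backup_duration_hours) <= rpo_hours

# A nightly backup taking 3 hours, against a 24-hour RPO, falls short:
ok = meets_rpo(rpo_hours=24, interval_hours=24, backup_duration_hours=3)   # False
# Tightening the interval to 20 hours brings it within the RPO:
ok2 = meets_rpo(rpo_hours=24, interval_hours=20, backup_duration_hours=3)  # True
```

The point of the sketch is simply that "we back up nightly, so our RPO is 24 hours" is not quite right; the recovery strategy has to account for how long the backup itself takes.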
New WatchGuard Firebox M440 UTM/NGFW makes it easy to apply the right policies to the correct network segment
WatchGuard Dimension™ provides an industry-first, real-time view into the performance of security policies across segmented networks
WatchGuard® Technologies has launched the WatchGuard Firebox® M440 UTM/NGFW appliance designed to further simplify network security. The WatchGuard Firebox® M440 features multiple independent ports, removing the need for complex configurations such as VLANs and simplifying the critical process of applying traffic-appropriate policies across multiple network segments – a process beyond the technical reach of many organisations. WatchGuard’s visibility solution, Dimension™, also provides the industry’s only real-time, single-pane-of-glass view to show the effect each policy is having on that segment’s traffic.
“Network security solutions are only good if they’re not too difficult for IT pros to use,” said Dave R. Taylor, vice president of corporate strategy and product management for WatchGuard. “The Firebox M440 makes it drop-dead easy to create segments, map the traffic, create custom policies based on what traffic is in each segment, and instantly see how it affects traffic. Applying the appropriate security policies to the correct traffic flows is what truly defines the success of your segmentation strategy and the Firebox M440 takes the configuration complexity out of the process.”
John Stengel, President of J Stengel Consulting, a network security, management and training firm, stresses that effective segmentation has never been more critical, stating, “The common misconception that strategies such as role-based authentication or basic VLAN switching and routing constitutes effective network segmentation, delivers a false sense of security. With the increased expectation for anytime employee access and advances around embedded Internet devices (IoT) and recent breaches like Target tied to a lack of proper segmentation, it has never been a better time for organisations to re-evaluate how they segment the network and ensure they have the right policies applied.”
The WatchGuard Firebox M440 delivers 25 1Gb Ethernet ports, eight that deliver Power over Ethernet (PoE), plus two 10 Gb SFP+ (fiber) ports. For more information click here: http://www.watchguard.com/wgrd-products/utm/firebox-m440/overview.
About WatchGuard Technologies, Inc.
WatchGuard® Technologies, Inc. is a global leader of integrated, multi-function business security solutions that intelligently combine industry standard hardware, best-of-breed security features, and policy-based management tools. WatchGuard provides easy-to-use, but enterprise-powerful protection to hundreds of thousands of businesses worldwide. WatchGuard products are backed by WatchGuard LiveSecurity® Service, an innovative support program. WatchGuard is headquartered in Seattle, Wash. with offices throughout North America, Europe, Asia Pacific, and Latin America. To learn more, visit WatchGuard.com.
WatchGuard is a registered trademark of WatchGuard Technologies, Inc. All other marks are property of their respective owners.
On Saturday, September 27, 2014, Mount Ontake – 200 km west of Tokyo – suddenly erupted, spewing ash and rock over a wide area and killing nearly 50 people (at last count). What’s strange is that this volcanic eruption occurred with no warning – at least that’s what the specialists are saying at this stage. I’m not so sure that’s true.
It’s always been said that Japan has one of the best early warning / monitoring systems in the world due to its location on the Pacific Ring of Fire. If the best monitoring system in the world didn’t catch this, then is the best system even worth it? I mean, these systems are developed to help save lives and provide early warnings to evacuate people and ensure life safety. Yet that didn’t happen, so are the monitoring systems we have in place any good? Are they providing any help at all?
What do we need to do to get to a point where we can predict – with sufficient notice – that something is (or could be) imminent? A few seconds won’t cut it and isn’t enough to allow for any communications or sufficient response – unless you’re a race car driver. Should we instead educate people to understand the risks of where they are – like climbing the side of a volcano, which accounts for the vast majority of those who died on Mount Ontake – or do we put trust in systems that can’t predict or measure potential dangers?
So I’m listening to the radio in the car on the way home from work and, not surprisingly, there are comments about the current Ebola crisis in West Africa – it is a major headline after all, and a serious matter. In fact, as I was listening, this particular broadcast was talking about the fact that Ebola had made its way to Dallas, Texas from Liberia via a male visitor.
Now, what surprised me was that commentators and experts were saying that people shouldn’t be panicked or scared of Ebola (in the Western world anyway), and I agree with them. But then they went on to kind of criticize people for being scared: taking their kids out of school, buying masks and disinfectants. They were saying that people were over-reacting and there was no need to do this sort of thing. Yet, when flu season is making the rounds – in schools, office buildings, subway systems and shopping malls – people are blamed for not taking the proper precautions to ensure they don’t catch the flu, get sick and get others sick (and for not getting a flu shot, of course). So what’s the difference?
There isn’t a pill people can take to proactively prevent themselves from catching Ebola, even though you can’t catch it just from walking past someone on the street. People will do what they can to protect themselves, to take themselves out of possible harm’s way; I don’t think that’s over-reacting. Yes, buying hazmat suits might be a bit overboard, but taking one’s loved ones out of school and avoiding areas where illnesses can spread – malls, subways, etc. – is natural for people. So which is it? Do we protect ourselves proactively or not? Do we ensure our safety and that of our loved ones, or do we continue as if nothing is happening?
A Washington-area hospital announced Friday that it had admitted a patient with symptoms and a travel history associated with Ebola. The case has not been confirmed, but the number of similar incidents around the country and a confirmed Ebola patient in Dallas have spurred concerns about whether U.S. hospitals are as prepared to deal with the virus as federal officials insist they are.
Since July, hospitals around the country have reported more than 100 cases involving Ebola-like symptoms to the federal Centers for Disease Control and Prevention, officials there said. Only one patient so far — Thomas Duncan in Dallas — has been diagnosed with Ebola.
But in addition to lapses at the Dallas hospital where Duncan is being treated, officials say they are fielding inquiries from hospitals and health workers that make it clear that serious questions remain about how to properly and safely care for potential Ebola patients.
A CDC official said the agency realized that many hospitals remain confused and unsure about how they are supposed to react when a suspected patient shows up. The agency sent additional guidance to health-care facilities around the country this week, just as it has numerous times in recent months, on everything from training personnel to spot the symptoms of Ebola to using protective gear.
California Gov. Jerry Brown signed legislation on Tuesday, Sept. 30, to kick-start adoption of next-generation emergency communications technology in the state. But while the law requires state leaders to develop a comprehensive rollout plan, questions remain on how to adequately fund the upgrades.
Senate Bill 1211 orders the Governor’s Office of Emergency Services (OES) to establish a transparent process for calculating how much next-gen 911 technology will cost to implement on an annual basis, including how it sets the statewide 911 customer fee on phone bills. But according to one expert, questions have surfaced across the U.S. about whether states are using their 911 funds appropriately.
Kim Robert Scovill, executive director of the NG9-1-1 Institute, a nonprofit organization that promotes the deployment of next-generation 911 services, explained that some states move 911 money over to their general fund for other purposes. And while that doesn’t indicate a state is ignoring public safety, he said increased fiscal transparency was a good move to ensure the money is being used properly.
No matter how complicated and unwieldy you think your data environment is, chances are you have nothing on the federal government.
The U.S. government is the single largest employer in the world, with more than 2 million civilian employees plus another 3.2 million military personnel around the world. That means it has had to build and maintain digital infrastructure of gargantuan size in order to keep all those people connected. Estimated at close to 9,000 data centers, the government IT footprint is clearly in need of a slimdown, not just to cut costs but to keep government processes in working order as mobile and cloud infrastructure take hold in the private sector.
To that end, government agencies have been working on a consolidation project for the past few years that, according to the Government Accountability Office (GAO), has shaved more than $1 billion off the U.S. government’s IT budget so far. The project has already led to the shuttering or planned closing of more than 1,100 data centers, while at the same time encouraging leading departments like the DoD to embrace the cloud and other advanced architectures to ensure that remaining resources can be distributed quickly and evenly to both critical and non-critical functions.
One of the challenges of developing a community that’s resilient to disaster is getting citizens to sign up for alert notifications. For example, a year after Itawamba County, Miss., deployed an emergency notification system, 25 percent of households had signed up to receive it. That’s considered good. Really good.
In fact, getting residents to sign up for any number of emergency services is difficult for a multitude of reasons. Some people are averse because of the privacy and security implications and are afraid to share personal information. And some of it is that people just tune out when it comes to the gruesome nature of preparing for a disaster.
But there are strategies to maximize the buy-in from residents. Ana-Marie Jones, executive director of the nonprofit agency Collaborating Agencies Responding to Disasters (CARD), shared her favorite ways for getting buy-in from the public:
(MCT) — USAA on Thursday became the first insurance company to seek federal permission to test ways drones could expedite claim processing in disaster areas.
The insurance and financial services company is seeking an exemption from the Federal Aviation Administration's Modernization and Reform Act of 2012 that would allow it to test unmanned aircraft systems on its San Antonio campus as well as on private, rural property nearby.
The FAA has largely limited commercial drone-use research to six test sites named in December, including a collection of Texas ranges managed by Texas A&M University-Corpus Christi.
Kathleen Swain, a USAA underwriter and FAA-rated commercial pilot and flight instructor, said USAA has already worked with A&M at the testing zone in College Station and was now ready to go further.
A second annual survey from Experian and the Ponemon Institute appears to show that more companies are prepared for a data breach, and that cyber insurance policies are becoming a more important part of those preparedness plans.
The study, which surveyed 567 executives in the United States, found that 73 percent of companies now have data breach response plans in place, up from 61 percent in 2013. Similarly, 72 percent of companies now have a data breach response team, up from 67 percent last year.
In the last year the purchase of cyber insurance by those companies has more than doubled, with 26 percent now saying they have a data breach or cyber policy, up from just 10 percent in 2013.
One of the monumental shifts in telecommunications and enterprise networking during the past century was the ascendancy of the Internet protocol. The reason it is so powerful is simple: everything is divisible to the same basic language. Instead of French, English, Russian and Turkish, the world’s networks all talk in Esperanto.
Myriad advantages come with this, but so does one big issue: video, voice and data are sent through the same network. Vital and incidental pieces of information – sales results and the menu in the cafeteria – are carried alongside each other. The comingling of so many applications and so much data has two implications: if the network goes down, all users lose connectivity, and the data that must be secured becomes more cumbersome to protect.
Customer data integration currently is the top barrier to adopting digital marketing technologies, according to a recent survey of senior marketers at global companies.
Teradata, an analytics platform vendor, released “Enterprise Priorities in Digital Marketing” this week. It’s based on a global survey conducted by Econsultancy US, which queried 402 senior marketing officers about their plans for digital marketing.
I find the term “digital marketing” to be a bit vague, but for the survey, it was defined as “the strategy of connecting large amounts of online data with traditional offline data, rapidly analyzing it and gaining cross-channel insights about customers.” The goal is much simpler: Deliver personalized content and messages to customers wherever — or however — they’re online.
It’s not hard to figure out why companies value this approach, but the findings fill in the gap between common sense and theory:
“The largest marketing organizations in the world have concluded that enhancing customer relationships via multiple digital channels best supports sustainable growth and reliable retention. This focus on thoroughly understanding the customer through data, and acting on insights found in data to design interactions, is driving an unprecedented demand for technology.”
With the amount of data that IT organizations are being asked to manage rising considerably, backing up all that data has become a significant challenge. Looking to provide IT organizations with some additional headroom, Symantec today introduced a NetBackup 5330 appliance that can store up to 229TB of data at throughput speeds that are four times faster than previous generations of the appliance using 10G Ethernet.
The end result, says Drew Meyer, director of marketing for integrated backup at Symantec, is backup that is now two times faster, data recovery that is three times faster, and data replication that is 4.8 times faster.
Meyer says the NetBackup 5330 appliance is a core element of the company’s overall approach to software-defined data protection. Rather than requiring IT organizations to acquire and manage separate backup and recovery systems to handle physical and virtual servers, Meyer says NetBackup provides a single platform for managing data protection across the data center.
It is clear that the Ebola virus outbreak has devastated Liberia, Guinea and Sierra Leone, killing more than 3,000 of the 7,000 individuals infected to date. Even more troubling, the BBC News reported that “five people are infected every hour” and the Centers for Disease Control and Prevention (CDC) stated that “cases in Liberia are currently doubling every 15-20 days, and those in Sierra Leone and Guinea are doubling every 30-40 days.” With the CDC providing confirmation of the first Ebola virus patient in the U.S., as well as projecting that the spread of the Ebola virus in 2015 could reach upward of a million cases in West Africa, now is the time for nations to step up their prevention efforts. Because Ebola virus transmission takes place through the exchange of blood and bodily fluids, and the virus is not spread by air or water, health-care personnel and close family caring for patients are at the greatest risk of contracting it.
Health-care providers, hospitals, long-term care agencies and primary and specialty care should use this threat to seize the opportunity to refine worker protection and infectious control plans and procedures. With the competing priorities of providing health care, emergency management is shockingly not always on the minds of health-care administration. As many of us know, emergency management planning is often a top priority only when it is desperately needed.
Many risk managers are struggling to get their arms around reputation risk. One challenge is that risk, a threat to a valued asset or desired outcome, is hard to discuss in modern terms without statistics. Statistics, on the other hand, can be mind-numbing.
First, the accountancies. Eisner & Amper reports that reputation risk has been the number one board concern for each of the past four years. Deloitte concurs on the ranking but emphasizes the strategic nature of reputation risk. E&Y finds reputation risk in international tax matters; PwC finds reputation risk in bribery, corruption and money laundering. Oliver Wyman, a human resource and strategy consultancy, reports that reputation risk is a rising C-suite imperative ranking fourth this year (and third among risk professionals). Reputation risk was fourth in Aon’s 2013 survey. Willis shared data showing that 95% of major companies experienced at least one major reputation event in the past 20 years.
Ace in 2013 reported that 81% of companies told the insurer that reputation was their most important asset. Allianz’s 2014 global survey ranked the risk sixth of the top 10. Rounding out the professions, the 2014 study written by the Economist Intelligence Unit and published by the law firm Clifford Chance reported that 74% of U.K. board members see reputation damage as the most worrying consequence of an incident or scandal, ranking it as more serious than the potential direct financial costs, loss of business contracts and even impact on share price.
Unified communications is an important trend but, when it comes to business continuity planning for critical communications systems, it may not be the best approach.
By Andrew Jones
Smart mobile devices have, by their very nature, brought voice and data convergence to a mass market. It’s easy to be convinced that they offer a panacea: a communications solution addressing all needs and offering the best value for money. However, when critical communications are a key requirement, the situation becomes much more complicated, and it may even emerge that separating voice and data systems is the better solution, contradicting the unified communications trend.
It is certainly possible to bring voice and data together when the move is planned carefully, with due consideration for the longer term. But one size may not fit all, and alternative designs and infrastructure may prove more effective.
One of the biggest benefits of using smartphones in an organization is the ability to use not only commercial cellular services but also private networks (either a private cellular/GSM network or even a wifi-enabled solution). Rightly so: this is the kind of flexibility that is highly useful and simply was not available in the past. Today we continue to build our onsite networks and links to the outside world to provide high-speed, rich data content to suit our needs. However, as each year passes, the content, the definition of graphics and the tolerance for delays all shift, requiring us to carefully manage and upgrade our onsite wifi and Internet connectivity so it serves our employees well for the foreseeable future. We continue this stepwise investment to keep abreast of the IT demands of our users, and as far as we know this trend is set to continue. So is introducing VoIP (Voice over IP) onto a wifi network that continually struggles to keep up counterproductive? While it makes use of an existing asset, upgrading that network for voice is not inexpensive.
By James Moore
Increasing reports of compromises by well-funded and well-resourced attackers are raising the profile of cyber security to such an extent that headlines about data breaches are becoming mainstream. Reports demonstrating the skill and persistence of attackers are released regularly. Advanced attacks such as spear phishing, watering holes booby-trapped with custom malware, zero-day exploits, and even entry via supplier links are all being reported on an almost weekly basis. And all of these attacks have one thing in common: they target individuals.
Generally, we still see that most organizations rely on traditional security controls in the form of technology such as anti-virus, firewalls and SIEM to protect their critical assets. However, the increasing importance of employee security awareness is often overlooked: only basic awareness training is given, while available resources are focused on deploying and testing traditional security controls.
The US National Fire Protection Association (NFPA) Standards Council has approved a request to establish a standard for community risk assessments and reduction plans.
The standard will provide a process for jurisdictions to follow in developing and implementing a community risk reduction plan, which helps identify a community risk profile and allocate resources to minimize risks.
The standard is expected to be completed in the next two years.
A new UK-based company which aims to demystify business continuity management and make it easier and more straightforward than ever before has opened its doors for business.
Ian Houghton has more than 15 years’ business continuity experience with RSA (Royal & SunAlliance), one of the UK’s leading general insurers and a FTSE 100 company. His trademark no-nonsense, down-to-earth approach will now be available to clients across the country with the launch of his own consultancy.
Called Easy BCM Ltd, Houghton’s new venture aims to make business continuity management easy to understand, implement and maintain for companies large and small.
“I’ve always believed that BCM should be approached in a sensible and straightforward way, to reflect the nature, scale and complexity of a business,” explains Houghton. “Too often plans are dictatorial and take no account of the industry, the size of the organization and the complexity of its operations.
“At Easy BCM we make business continuity management accessible and show clients that it can be a valuable asset for a company which can help drive improvements in many different areas.”
Ten new National Science Foundation projects will investigate how to keep complex, interdependent infrastructure available.
When critical infrastructure is resilient, it is able to bounce back after a disruption at an acceptable cost and speed. When resilient infrastructure is interdependent, cascading failures between infrastructure systems may be eased or possibly even avoided.
This ideal of resilience is far from the norm, particularly as critical infrastructure becomes more interconnected and complex.
To investigate innovative ways to bolster the resilience of the electrical grid, water systems and other critical infrastructure areas, the US National Science Foundation (NSF) has awarded grants totaling nearly $17 million through cross-disciplinary funding by its Directorates for Engineering and Computer and Information Science and Engineering.
During the next three years, more than 50 researchers at 16 institutions will pursue transformative research in the area of Resilient Interdependent Infrastructure Processes and Systems (RIPS).
It’s an unfortunate truth. The holes in your IT security are most likely to be where you neither see them nor expect them. That means they’ll be outside the basic security arrangements that most organisations make. Firewalls, up-to-date software versions and strong user passwords are all necessary, but not sufficient. Really testing security is akin to an exercise in lateral thinking or even method acting. You have to look at your systems and network from the outside to see how a hacker or cybercriminal might try to get through or round the mechanisms you’ve put in place. And there’s more still to this inside-out approach to protecting your organisation.
The government released 4.4 million medical payment records this week as part of the Open Payments database, and it’s already attracting national headlines and criticisms for being incomplete and slow.
It’s a major reminder that while open data may be free, it isn’t necessarily clean.
NPR, the Wall Street Journal and Forbes have all reported on the controversial data release, which is required under a provision of the Affordable Care Act. The records show $3.5 billion in payments made by pharmaceutical and device companies to doctors.
(MCT) — Tom Fuller could tell how well folks understood earthquake insurance once he mentioned that he has a policy for his damaged home in Napa.
The uninitiated responded, “Well, you’re lucky.” The more knowledgeable said, “I hope you didn’t hit your deductible.”
Fuller, a public relations consultant, said the repairs from last month’s magnitude-6.0 quake won’t come close to his $48,000 deductible — the amount of structural damage his home must suffer before the insurance company becomes liable for major repairs. That means he will cover virtually all the damage from the Aug. 24 temblor to his 1940s-era home south of downtown.
Even so, his insurance policy still gives him peace of mind that he could rebuild should a massive, 1906-type quake ever level his city.
(MCT) -- Under the blistering Central Valley sun, Filiberta Sanchez and her toddler granddaughter strolled down a Parkwood sidewalk lined with yellow weeds, dying grass and trees more fit for kindling than shade.
"It was very pretty here, very pretty," said Sanchez, 56, as little Jenny crunched a fistful of parched dirt and pine needles she grabbed from the ground. "Now everything's dry."
Parkwood's last well dried up in July. County officials, after much hand-wringing, made a deal with the city of Madera for a temporary water supply, but the arrangement prohibited Parkwood's 3,000 residents from using so much as a drop of water on their trees, shrubs or lawns. The county had to find a permanent water fix.
Risk assessment is, of course, the foundation of effective compliance measures. This has always been true as a matter of common sense. And, since the Federal Sentencing Guidelines for Organizations went into effect two years ago this November, this has been true as a matter of legal expectation.
Risk assessment is also, in my view, the most challenging aspect of C&E work – both conceptually and as a practical matter. Indeed, even though I’ve been writing this column for four years (the fruits of which are contained in this complimentary e-book issued by CCI), I can see no end of risk assessment topics in sight. So, to attempt to chip away at the backlog, this most recent installment will look at some of the recurring questions C&E officers have on risk assessment methodology.
This post by O’Dwyer’s announces that H+K Strategies (formerly Hill & Knowlton) has officially declared digital public relations and marketing communications to be the backbone of any organization’s communications. O’Dwyer’s is quite snarky in its comments about this “announcement” by H+K: it’s obvious, they say, and H+K is clearly outdated by even having to tout its digital savvy.
While it is true that some agencies, like Edelman, have long-established credibility in digital comms, what O’Dwyer’s ignores is that most organizations, even some of the most powerful and sophisticated in the world, still do not really get this. Almost any crisis communication plan I look at is still “media first”: the primary focus of the plan is preparing for and delivering information and messages to media outlets.
By John D’Ambrosia, chairman, Ethernet Alliance board of directors; chief Ethernet evangelist, CTO office, Dell Networking
Ethernet and its standards-based approach have been a fundamental pillar leveraged by the data center community from inception. CxOs and IT managers have embraced Ethernet and its strong history of seamless, multi-vendor interoperability. In today's data centers, Gigabit Ethernet for servers and 10 Gigabit Ethernet (10 GbE) for networking have been the proven workhorses – cloud-scale data centers are shifting to 10 GbE for servers, and 40 Gigabit Ethernet (40 GbE) for networking.
The introduction of 40 Gigabit Ethernet provided CxOs and IT managers with a cost-effective solution to deal with the never-ending traffic burden on their networks, while 100 GbE technology continues to evolve. The initial development of 40 GbE was intended as the next-generation solution for servers beyond 10 GbE, but its inherent architecture enabled high-density aggregation of 10 GbE server connections. This interconnect scheme enabled the cost efficiencies fueling the phenomenal growth rates being seen in today's cloud-scale data centers. The same inherent structure also exists at 100 GbE, and given the maturity in development of 25 Gb/s signaling to enable 100 GbE, industry forces are driving toward 25 GbE as the next high-volume deployment for servers. This will take today’s cloud-scale data centers to the next level of performance at the lowest cost per bit from a CAPEX and OPEX perspective.
Every once in a while, talk of the all-cloud data center starts to circulate throughout professional IT circles. While most people are quick to dismiss this notion, it’s important to note the distinction between fully cloud-based data architecture and the end of the traditional data center as we know it.
In short, many organizations will likely stick with in-house infrastructure for some time to come, but others could reap tremendous benefits by outsourcing their entire data environment, at least in the short term.
A case in point is Infor Inc., which built its software business entirely in the cloud and now specializes in application-centric business solutions that allow other organizations to do the same. The company claims its lack of a data center allows it to focus more of its energy on development and other business-facing concerns and gives it an edge against well-heeled competitors like SAP and Oracle. The company utilizes an open framework and public providers like Amazon, and is looking to port some of its Big Data needs onto Amazon’s RedShift platform or possibly the IBM cloud. Company executives say that manpower costs alone are enough to deter them from building their own facilities for the foreseeable future.
Mary Schoenfeldt is the public education coordinator for the Everett, Wash., Office of Emergency Management. She is a 2013 inductee into the International Network of Women in Emergency Management hall of fame and has written numerous books on school safety during her 30 years in the field.
Schoenfeldt is considered an expert in crisis management, helping communities assess response systems; writing crisis plans; conducting physical site safety audits; and designing school training exercises. She created the community preparedness campaign “Who Depends on You?” This interview has been edited for clarity and length.
Exercises are conducted to identify strengths and weaknesses; assess gaps and shortfalls in plans, policies and procedures; clarify roles and responsibilities among different entities; improve interagency coordination and communications; and identify needed resources and opportunities for improvement.
Do exercises achieve these goals? Probably not. Not because they can’t, but because the organizations planning and executing these exercises don’t use them as real tests. These organizations are engaging in “exercises in futility.” But organizations may be ready for a new kind of dynamic exercise, based on risk-reward principles.
The goal is to provide a deliverable: the after action report or improvement plan. What if we changed this deliverable to measurable improvement in actual policy, procedure, capability or technical assistance to support performance? This would change the conversation from planning exercises, to exercising plans or at least exercising the concepts in the plans. If there is no plan, consultants could help the organization by using dynamic exercises to develop hypotheses, reveal weakness, uncover strengths, innovate new approaches to problem-solving, and then support planning efforts to capture and implement improvements based on the exercise outcomes.
New model will help forecasters predict a storm’s path, timing and intensity better than ever
- This is a comparison of two weather forecast models looking six hours ahead for the New Jersey area. Image on left shows the forecast which doesn't distinguish localized hazardous weather. Image on right shows the new HRRR (High-Resolution Rapid Refresh) model that clearly depicts where local thunderstorms (yellow and red coloring) are likely. (Credit: NOAA)
Today, meteorologists at NOAA’s National Weather Service are using a new model that will help improve forecasts and warnings for severe weather events. Thanks to the High-Resolution Rapid Refresh (HRRR) model, forecasters will be able to pinpoint neighborhoods under threat of tornadoes and hail, heavy precipitation that could lead to flash flooding, or heavy snowfall, and warn residents hours before a storm hits. It will also help forecasters provide more information to air traffic managers and pilots about hazards such as air turbulence and thunderstorms.
Developed over the last five years by researchers at NOAA’s Earth System Research Laboratory, the HRRR is a NOAA research to operations success story. It provides forecasters more detailed, short-term information about a quickly developing small-scale storm by combining higher detail, more frequent radar input and an advanced representation of clouds and winds. The HRRR model forecasts are run in high resolution every hour using the most recent observations with forecasts extending out 15 hours, allowing forecasters to better monitor rapidly developing and evolving localized storms.
- VIDEO: NOAA launches new tool to improve weather forecasts. (Credit: NOAA)
“This is the first in a new generation of weather prediction models designed to better represent the atmosphere and mechanics that drive high-impact weather events,” said William Lapenta, Ph.D., director of the National Centers for Environmental Prediction, part of the National Weather Service. “The HRRR is a tool delivering forecasters a more accurate depiction of hazardous weather to help improve our public warnings and save lives.”
Hyperlocal forecasts are possible with the HRRR because of its higher resolution. The HRRR’s spatial resolution is four times finer than that of the current hourly updated NOAA models, offering a more precise prediction of a storm’s location, formation and structure. Using the HRRR, forecasters have an aerial view in which each pixel represents a neighborhood instead of a city. “This increase in resolution from eight to two miles is a game-changer,” added Lapenta.
What Goes In…
The HRRR starts with a full 3-D picture of the atmosphere one hour before the forecast and then brings in observations from surface stations, commercial aircraft, satellites, and weather balloons to create a more detailed and balanced starting point for the forecast. Another key innovation for the HRRR is adding in radar data every 15 minutes during that hour to help the model “know” where precipitation is ongoing. Integrating atmospheric data gathered before a model run, including radar data at a two mile resolution, provides a more accurate picture of what is happening in the atmosphere at the start of the forecast. This helps predict changes to storms and development of new storms faster than current models.
…And What Comes Out
The HRRR model’s hourly output includes more frequent snapshots of the atmosphere, at 15-minute intervals. With this information, forecasters can better anticipate and predict the onset of a storm and critical details of its evolution, allowing for earlier watches and warnings.
“The HRRR model will provide forecasters a powerful tool to help them inform communities about evolving severe weather,” said Stan Benjamin, Ph.D., a research meteorologist at NOAA’s Earth System Research Laboratory who led the research team that developed the model. "Being able to warn the public of weather hazards earlier and with greater detail is an outstanding return from NOAA's investment in research and observation systems."
Many NOAA scientists were involved with testing, optimizing, and implementing the model, including experts at NOAA’s National Weather Service and its National Centers for Environmental Prediction. NOAA’s partners at the Cooperative Institute for Research in Environmental Science at the University of Colorado at Boulder and the Cooperative Institute for Research in the Atmosphere at Colorado State University, Fort Collins, helped with development. NOAA researchers partnered with users such as the Federal Aviation Administration, the National Center for Atmospheric Research, and the Department of Energy to significantly improve forecasts for aviation and energy, among other industries, through the HRRR model.
“Implementation of the HRRR is just one of many model improvements made possible with NOAA’s boost in its supercomputing power for weather prediction,” said Louis Uccellini, Ph.D., director, National Weather Service. “With advances in our forecast models, like the HRRR, we’re moving toward building a Weather-Ready Nation by improving our forecasts, providing better information to decision makers, and helping communities become more weather-ready and resilient against severe weather events.”
NOAA's National Weather Service is the primary source of weather data, forecasts and warnings for the United States and its territories. NOAA’s National Weather Service operates the most advanced weather and flood warning and forecast system in the world, helping to protect lives and property and enhance the national economy. Working with partners, NOAA’s National Weather Service is building a Weather-Ready Nation to support community resilience in the face of increasing vulnerability to extreme weather. Visit us at weather.gov and join us on Facebook and Twitter.
NOAA's mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Twitter, Facebook, Instagram and our other social media channels.
EATONTOWN, N.J. -- September is National Preparedness Month, and the latter half of the year is an ideal time for people to review their insurance policies. Understanding the details of what specific policies cover and what the policyholder is responsible for after a disaster is important as both clients’ needs and insurance companies’ rules change.
Insurers’ decisions and legislative changes have the biggest effect on changes in policies. Consumers should make themselves aware of possible changes in these areas and know what to look for while reviewing their policies.
The first check is the most obvious: the actual coverage. Policyholders should look at the specifics of which property is covered and the type of damage that is covered. Property owners should know that floods are not covered by standard insurance policies and that separate flood insurance is available. Flood insurance is required for homes and buildings located in federally designated high risk areas with federally backed mortgages, referred to as Special Flood Hazard Areas (SFHAs). Residents of communities that participate in the National Flood Insurance Program (NFIP) are automatically eligible to buy flood insurance. According to www.floodsmart.gov, mortgage lenders can also require property owners in moderate to low-risk areas to purchase flood insurance.
There are two types of flood insurance coverage: Building Property and Personal Property. Building Property covers the structure, electrical, plumbing, and heating and air conditioning systems. Personal Property, which is purchased separately, covers furniture, portable kitchen appliances, food freezers, laundry equipment, and service vehicles such as tractors.
What’s Not Covered
Policy exclusions describe coverage limits or how coverage can be purchased separately, if possible. Property owners should know that not only is flood insurance separate from property (homeowners) insurance, but that standard policies may not cover personal items damaged by flooding. In these cases, additional contents insurance can be purchased as an add-on at an additional cost. Some policies may include coverage, but set coverage limits that will pay only a percentage of the entire loss or a specific dollar amount.
The Federal Emergency Management Agency’s Standard Flood Insurance Program (SFIP) “only covers direct physical loss to structures by flooding,” FEMA officials said. The SFIP has very specific definitions of what a flood is and what it considers flood damage. “Earth movement” caused by flooding, such as a landslide, sinkholes and destabilization of land, is not covered by SFIP.
Structures that are elevated must be built at least to the minimum Base Flood Elevation (BFE) standards as determined by the Flood Insurance Rate Maps (FIRMs). There may be coverage limitations regarding personal property in areas below the lowest elevated floor of an elevated building.
Cost Impact of Biggert-Waters
The Biggert-Waters Flood Insurance Reform Act of 2012 extends and reforms the NFIP for five years by adjusting rate subsidies and premium rates. Approximately 20 percent of NFIP policies pay subsidized premiums, and the 5 percent of those policyholders with subsidized policies for non-primary residences and businesses will see a 25 percent annual increase immediately. A Reserve Fund assessment charge will be added to the 80 percent of policies that pay full-risk premiums. Un-elevated properties constructed in a SFHA before a community adopted its initial FIRMs will be affected most by rate changes.
In March 2014, the Consolidated Appropriations Act of 2014 and the Homeowner Flood Insurance Affordability Act (HFIAA) of 2014 were signed into law, lowering rate increases on some policies, preventing rate increases on others, and delaying the implementation of Section 207 of Biggert-Waters, which was to ensure that certain properties’ flood insurance rates reflected their full risk after a mapping change or update. HFIAA also repeals a portion of Biggert-Waters that eliminated grandfathering properties into lower risk classes. Many of the changes have not yet been implemented because the necessary new programs and procedures have not been established.
The General Conditions section informs the consumer and the insurer of their responsibilities, including fraud, policy cancellation, subrogation (in this case, the insurer’s right to claim damages caused by a third party) and payment plans. Policies also have a section that offers guidance on the steps to take when damage or loss occurs. It includes notifying the insurer as soon as practically possible, notifying the police (if appropriate or necessary) and taking steps to protect property from further damage.
“FEMA’s top priority is to provide assistance to those in need as quickly as possible, while also meeting our requirements under the law,” FEMA press secretary Dan Watson said. “To do this, FEMA works with its private sector, write-your-own insurance (WYO) company partners who sell flood insurance under their own names and are responsible for the adjustment of their policy holders’ claims.”
Policyholders should speak with their insurance agent or representative if they have any questions about coverage. For further information and direction, call the NFIP Call Center at 1-800-427-4661 or the NFIP Referral Center at 1-888-379-9531. Comprehensive information about NFIP, Biggert-Waters, HFIAA and flood insurance in general can be found at the official NFIP website, www.floodsmart.gov.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at www.twitter.com/FEMASandy, www.twitter.com/fema, www.facebook.com/FEMASandy, www.facebook.com/fema, www.fema.gov/blog, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
Retail, by its very nature, is fast-moving: competition is intense and customers are increasingly demanding. In this cutthroat environment, the inability to do business can quickly damage a retailer, and making up lost ground is often extremely difficult, if it’s possible at all.
“All businesses need to have business continuity plans in place to avoid risks and minimise disaster, but retailers operate in a particularly competitive environment,” says Grant Minnaar, Business Continuity Management Advisor at ContinuitySA. “Retailers need to understand their risk profiles and make sure they have strategies in place to ensure they can stay trading, or they risk losing customers and damaging their brands.”
ContinuitySA has identified some of the top business continuity risks faced by retailers:
Craig Young overviews the recently identified Bash ‘Shellshock’ vulnerability and looks at whether it really is worse than Heartbleed, as has been widely claimed.
What is the vulnerability?
An Akamai researcher discovered that Bash, the dominant command-line interpreter on Unix/Linux based systems, improperly processes crafted variable definitions, allowing trailing bytes to be executed as OS commands. Bash allows users to define environment variables which contain function definitions, and a flaw in this parsing process means that commands specified after the function body are executed when the variable definition is passed to a Bash interpreter. The problem can easily be reproduced by logging into a Bash shell and defining a crafted variable with trailing commands, but in this scenario there is little risk, since the commands are limited to the permissions of the already logged-in user. Where this ‘Shellshock’ vulnerability really becomes a problem is in the many ways Bash is indirectly exposed to an adversary. The most prominent (and worrisome) example is web technologies which use the vulnerable command interpreter to generate responses to HTTP requests. Since various details from the request are stored in environment variables and passed to the command interpreter, a remote unauthenticated attacker can use these scripts to inject commands which will run in the context of the web server.
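The low-risk local reproduction described above can be sketched with the widely circulated one-line probe for this flaw (CVE-2014-6271); the variable name `x` and the echoed strings are arbitrary choices, not part of the vulnerability itself.

```shell
# Shellshock probe: export a variable whose value is a function
# definition followed by a trailing command. When the child bash
# imports the variable, a vulnerable version also executes the
# trailing command ("echo vulnerable"); a patched bash does not.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

On a vulnerable system the output begins with "vulnerable" before "this is a test"; a patched Bash prints only "this is a test", possibly with a warning that the function definition was ignored.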
The BCI’s Australasian Awards will be presented in Melbourne on October 17th 2014. The shortlist for the awards has now been published and is as follows:
Business Continuity Consultant of the Year
Steven Cvetkovic MBCI Managing Director Continuity & Compliance Management Services Pty Ltd
Ian Perry Director Chelmsford Consulting Limited
Oliver Pettit Client Director – Risk Services Deloitte Touche Tohmatsu
Ken Simpson MBCI Principal Consultant The VR Group
Paul Trebilcock MBCI Director JBTGlobal Corporate Advisory
Nalin Wijetilleke MBCI Director/Principal Consultant ContinuityNZ Limited
Business Continuity Manager of the Year
John Doble Business Continuity Manager NBN Co.
Sarah McDonald MBCI Senior Manager – Business Resilience Deloitte Touche Tohmatsu
Public Sector BC Manager of the Year
Ian Goldfinch MBCI Manager, ICT Continuity Planning eHealth Systems, SA Health
David Reason Senior Risk Manager EQC (Earthquake Commission)
BCM Newcomer of the Year
Dale Cochrane CBCI Business Continuity Consultant National Australia Bank
Mark Dossetor AMBCI Manager Business Continuity Department of Transport, Planning and Local Infrastructure (DTPLI)
Eddie Ramirez Business Continuity Coordinator Westpac Group
Business Continuity Team of the Year
Australian Taxation Office
Department of Justice, Victoria
Victorian Department of Transport, Planning and Local Infrastructure
Business Continuity Provider of the Year (Product)
Linus Information Security Solutions Pty Ltd
RiskLogic Pty Ltd
Business Continuity Provider of the Year (Service)
Continuity & Compliance Management Services Pty Ltd
Hewlett-Packard Australia Pty Ltd
Linus Information Security Solutions Pty Ltd
Plan B Limited
Business Continuity Innovation of the Year
Continuity & Compliance Management Services Pty Ltd
PAN Software Pty. Ltd.
RiskLogic Pty Ltd
Most Effective Recovery of the Year
Bank of New Zealand
Plan B Limited
Westpac Banking Corporation
Industry Personality of the Year
Steven Cvetkovic MBCI
Howard Kenny MBCI
To better understand IT-related risk, business continuity professionals should develop and test risk scenarios. A new guide and tool kit from ISACA provides 60 examples of IT-related risk scenarios covering 20 categories of risk that organizations can customize for their own use.
‘Risk Scenarios Using COBIT 5 for Risk’ provides an understanding of risk assessment and risk management concepts in business terms, based on the principles of the globally recognized COBIT framework. It also defines the following six steps to effectively using risk scenarios to improve risk management:
1. Use generic risk scenarios, such as those presented in the publication, to define a set that is tailored to your organization;
2. Validate the risk scenarios against the business objectives of the organization, ensuring that the scenarios address business impacts;
3. Refine the selected scenarios based on this validation and ensure their level of detail is in line with the business criticality;
4. Reduce the number of scenarios to a manageable set;
5. Keep all scenarios in a list so they can be reevaluated; and
6. Include in the scenarios an unspecified event (an incident not covered by other scenarios).
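As a sketch of steps 4–6 above, the trimmed scenario set could be kept as a simple structured list, with the catch-all unspecified event included explicitly. The field names here are illustrative, not taken from the ISACA tool kit:

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    category: str         # e.g. one of the 20 COBIT 5 risk categories
    business_impact: str  # validated against business objectives (step 2)

# Step 5: keep all scenarios in a single list so they can be re-evaluated.
register = [
    RiskScenario("Ransomware outbreak", "Malware", "Order processing halted"),
    RiskScenario("Data centre loss", "Infrastructure", "Core services offline"),
    # Step 6: always include an unspecified event as a catch-all.
    RiskScenario("Unspecified event", "Other", "Assessed when it occurs"),
]
```

Keeping the register as data rather than prose makes the periodic re-evaluation in step 5 a routine review of a short list.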
Risk Scenarios provides scenario examples across categories such as IT investment decision making, staff operations, infrastructure, software, regulatory compliance, geopolitical, malware, acts of nature and innovation. It also provides guidance on how to respond to a risk that exceeds the organization’s tolerance level and how to use COBIT 5 to accomplish key risk management activities.
Risk Scenarios is available at www.isaca.org/riskscenarios
Whenever a breach of some sort occurs, two things tend to happen. First, the general password warning is given: Change them now, change them regularly, and don’t repeat passwords for anything. Second, people experience angst over password use in general. They often feel that the password has come to the end of its usefulness and we need to move on to other sorts of authentication.
You know what we never talk about when news breaks about a data breach and stolen passwords? Usernames. If we look back at two major password-related breach stories from recent months, it’s obviously something that should be considered. When word went out about the Russian hackers who had stolen a billion passwords, it was also reported that usernames were stolen.
It was the same situation with the Gmail incident of earlier this month. But if we look closely at the way an eSecurity Planet story phrased the incident, we see what the real issue is:
The following day, however, Google published a blog post stating that less than 2 percent of the username and password combinations would have worked for Gmail.
Username and password. Not just password alone.
(MCT) -- With flu season approaching, public health officials hope a crowdsourcing app that tracks flu activity will gain additional traction.
Flu Near You, a disease detection app, helps predict outbreaks of the flu in real time. Users self-report symptoms in a weekly survey, which the app then analyzes and maps to show where pockets of influenza-like illness are located.
HealthMap, Boston Children’s Hospital, the Skoll Global Threats Fund and the American Public Health Association developed the app, which was launched in November 2011. It now has more than 120,000 subscribers.
“It engages the public directly,” said Jennifer Olsen, manager of pandemics for the Skoll Global Threats Fund, a San Francisco-based non-governmental organization that works to confront dangers around the world.
We recently received a low ranking by a major market research organization, ranking eBRP Suite among the “Niche Players” in their mystical rating chart. Then why are we smiling?
We have been told that eBRP Suite does not deliver what these industry “experts” expect in a BCM software product. In last year’s review, we were ranked among the top companies. What did we do wrong this year? We did what we always do: act on our Customer’s feedback to continue to improve our products. We also added a stream of new customers – including several Fortune 500 companies and international banks – all of whom found eBRP Suite to be exactly what they needed. So what happened to drop us so far in the rankings? The simple answer is: they changed the survey! We still offer the same great product. We still provide the same world class service. Just as we have for more than a decade.
What those market researchers got right is that eBRP Suite isn’t for everyone. For those looking for a tool to simply conduct a BIA and write plans, there are plenty of companies to choose from. That’s not what we are, or want to be – even if those market “gurus” think we’re wrong.
In 2010, just as the recession’s wave of fiscal calamity was peaking, George Gascón and Todd Foglesong, from Harvard’s Kennedy School of Government, published a report, Making Policing More Affordable. They pointed out that public expenditures on policing had more than quadrupled between 1982 and 2006. But with city budget shortfalls opening up across the country, police departments and their chiefs, once used to ever-growing budgets, were now facing a new reality of cutbacks, layoffs and even outright mergers and consolidations of entire police departments with others. With federal subsidies disappearing (federal support for criminal justice assistance grant programs shrank by 43 percent between 2011 and 2013), thanks to a frugal Congress, police had few options.
With funding spigots turning off, law enforcement agencies must find ways to operate more affordably, according to Gascón and Foglesong. One obvious way is to use technology more efficiently. Being more efficient with technology also means being smarter.
One example can be found in Camden, N.J., a poverty-ridden, high-crime city of 77,000, located on the banks of the Delaware River, across from Philadelphia. Desperate to cut costs, the city disbanded its entire police force. The Camden County Police Department rehired most of the laid-off officers, and hired another 100 at much lower salaries and benefits, to create a consolidated regional police force. The move is considered highly controversial and certainly radical. While police departments in other jurisdictions have merged or consolidated to cut costs, none have gone down the path that Camden has taken.
During the January 2014 winter storm that crippled the Atlanta metro area and left thousands stranded on the city’s highways, businesses stepped up to the plate to assist those with nowhere to turn. Home Depot opened 26 stores in Georgia and Alabama to shelter stranded travelers, and other local stores like Walgreens, Wal-Mart, and Target welcomed weary – and cold – drivers who abandoned their cars when it was obvious they were not going to make it home that night. These businesses provided the community with resources and services when people needed them most.
In planning for public health emergencies, communities are quickly learning that businesses are true partners in response and recovery efforts. The private sector has the expertise, resources, and systems that operate every day that can assist in a public health response, be it for a pandemic, terrorist event, or natural disaster. During Hurricane Sandy, for example, big businesses used their commercial supply chains to deliver water, food, and other supplies. As the U.S. Chamber of Commerce says, “when the going gets rough, businesses get moving.”
Staff at CDC’s Strategic National Stockpile – the largest global stockpile of pharmaceuticals and medical supplies for a public health emergency – are working to help state and local agencies forge these partnerships for both distribution and dispensing efforts and as a way to increase access to medicines in an event that affects that entire community. Partnering with public health is good business, too. These private partners are members of the community and when disaster strikes, they can help keep their employees safe and healthy and their businesses up and running.
“As a global manufacturer of computers and computer services, we have committed ourselves to providing our customers with quality products and services,” said a representative from Dell, the information technology powerhouse that has partnered with public health to assist in dispensing medicine to its employees during an emergency. “We are doing the same thing with our employees. We want them to feel good about coming to work and their company taking care of them. That’s why we have gotten very much involved in the points of dispensing program that is being offered by many of our health departments around the country.”
In addition to serving as closed points of dispensing, which allows businesses to provide medicine to their own employees, companies also are coordinating with CDC and their public health departments to provide volunteers, to assist in communications, and to serve the larger community as public dispensing sites. This type of collaboration and partnership between the private and public sector will augment and support a public health response and ultimately help keep Americans prepared, safe, and protected.
For more information on how businesses can partner for preparedness, visit http://www.cdc.gov/phpr/partnerships/.
There has been a “dirty little secret” in security that the risks associated with compliance violations, brand damage and remediation costs simply are not sufficient to encourage ubiquitous use of multi-factor authentication, encryption of sensitive data and other proven controls for preventing breaches. This has been a major contributing factor behind the data breach epidemic. (Why is ANY sensitive data unencrypted in this day and age?)
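As one illustration of such a proven control, the core of most multi-factor authentication schemes is a time-based one-time password (TOTP, RFC 6238), which can be sketched with nothing but the Python standard library. This is a minimal sketch of the algorithm, not a production implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of the last byte picks
    # a 4-byte window, whose top bit is masked off.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A server that verifies such a code alongside a password is far harder to compromise with stolen credentials alone, which is the point the survey’s respondents keep relearning after each breach.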
As the frequency of attacks increases and the nature of the threats changes, companies are playing a game of Russian roulette with hackers. By failing to deploy an encryption infrastructure, they risk an attack that will leave privileged customer information available for criminals to use.
In the first three months of 2014, 200 million records were stolen, according to the Breach Level Index. In 2013, we saw some of the biggest players in retail get hacked, and there seemed to be few negative financial consequences for these companies. Stock prices and company reputations rebounded to normal within a few months. Shoppers are comfortable patronizing these businesses again, even the customers whose information was hacked.
Properly assessing risk is critical to any business. Successful businesspeople understand that every decision they make must be weighed against the potential risk to the company. This risk assessment must not be limited solely to situations directly related to the business itself, however. They must also consider reputation risk, or the risk events will have a negative impact on one’s personal reputation and, by extension, the business.
Whether fair or not, the decisions made in someone’s personal life can have a substantial impact on the company they are connected to. This risk extends beyond just the owner or executives of a company; employees caught doing unscrupulous things can cause a public relations nightmare for the business, ultimately resulting in massive losses for the company itself.
Assessing Reputation Risk
Unlike business transactions, where there are countless models and historical examples of the likely risk and reward of most given situations, reputation risk is far harder to quantify and prepare for. It is nearly impossible to predict, for example, whether or not an executive will get belligerently intoxicated and assault a police officer. The executive can bring unwelcome attention to the company, which in turn can cause investors, advertisers, and partners to shy away in the short or even long-term.
Health officials from dozens of countries gathered Friday at the White House, seeking ways to strengthen international defenses against epidemics such as the Ebola outbreak raging in West Africa.
The Obama administration launched a global health security initiative in February to help other nations develop basic disease-detection and monitoring systems to contain and combat the spread of deadly illnesses. That push to develop a long-term strategy gained urgency in the wake of the Ebola epidemic.
“Now, the good news is today our nations have begun to answer the call,” President Obama told the Friday gathering. “With all the knowledge, all the medical talent, all the advanced technologies at our disposal, it is unacceptable if, because of lack of preparedness and planning and global coordination, people are dying when they don’t have to. So, we have to do better, especially when we know that outbreaks are going to keep happening.”
North America leads the way in Big Data, besting other regions when it comes to investing, according to a new market survey by Gartner. The research firm found that while Big Data experienced international growth last year, North America led with a 9.2 percent jump in the past year.
The survey also found that 73 percent of organizations have either already invested or plan to do so in the next two years. That’s another significant increase over 2013, when the number was 64 percent.
By comparison, InsideBigData quotes IDG’s 2014 Enterprise Big Data report, which showed lower numbers. IDG found that 49 percent were already in the process of implementing Big Data projects or in the process of doing so in the future.
That begs the question: Who are these Gartner respondents that are so gung-ho on Big Data? Well, if you’re familiar with Gartner, you know its clients tend to be established enterprises and larger government agencies, more so than, say, small businesses or startups. In this case, the survey responses came from 302 Gartner Research Circle members, who are “the voice of selected business decision makers,” according to the firm.
America’s PrepareAthon! Campaign Offers Simple, Specific Actions Americans Should Know and Practice to Prepare For a Disaster in their Community
WASHINGTON – Today, the Federal Emergency Management Agency (FEMA) encourages individuals, families, workplaces, schools and organizations across the nation to take part in America’s PrepareAthon!, a national day of action that will take place September 30. America’s PrepareAthon! is a community-based campaign to increase emergency preparedness and resilience through participation in hazard-specific drills, group discussions and exercises every fall and spring. To register, individuals and organizations can visit www.ready.gov/prepare.
According to a recent survey conducted by FEMA, 50 percent of Americans have not discussed or developed an emergency plan for family members about where to go and what to do in the event of a local disaster. Additionally, nearly 70 percent of Americans have not participated in a preparedness drill or exercise, aside from a fire drill at their workplace, school or home in the past two years.
“Disasters can strike anytime and anywhere,” FEMA Administrator Craig Fugate said. “America’s PrepareAthon! is about practicing what to do in an emergency with enough regularity so that it becomes second nature when the real disaster actually happens.”
To encourage more Americans to prepare and practice, the campaign offers easy-to-implement preparedness guides, checklists and resources. These tools help individuals, organizations and entire communities practice the simple, specific actions they can take for the emergencies and disasters relevant to their area. Examples include:
- Sign up for local text alerts and warnings and download weather apps to your smartphone. Stay aware of worsening weather conditions. Visit www.ready.gov/prepare and download Be Smart: Know Your Alerts and Warnings to learn how to search for local alerts and weather apps relevant for hazards that affect your area.
- Gather important documents and keep them in a safe place. Have all of your personal, medical, and legal papers in one place, so you can evacuate without worrying about gathering your family’s critical documents at the last minute. Visit www.ready.gov/prepare and download Be Smart: Protect Your Critical Documents and Valuables for a helpful checklist.
- Create an emergency supply kit. Bad weather can become dangerous very quickly. Be prepared by creating an emergency supply kit for each member of your family. Visit www.ready.gov/kit for more ideas of what to include in your kit.
- Develop an emergency communication plan for your family. It’s possible that your family will be in different locations when a disaster strikes. Come up with a plan so everyone knows how to reach each other and get back together if separated. Visit http://www.ready.gov/make-a-plan for communication plan resources.
Managed and sponsored by the Ready Campaign each September, National Preparedness Month is designed to raise awareness and encourage Americans to take steps to prepare for emergencies in their homes, schools, organizations, businesses and places of worship, culminating with the National Day of Action. America’s PrepareAthon! was established to provide a comprehensive campaign to build and sustain national preparedness as directed in Presidential Policy Directive-8. The campaign is coordinated by FEMA in collaboration with federal, state, local, tribal, and territorial governments, the private sector, and non-governmental organizations.
More information about America’s PrepareAthon!, including how to register, is available at ready.gov/prepare.
EATONTOWN, NJ -- Nearly two years after Hurricane Sandy, communities around New Jersey are still recovering from the damages inflicted by that historic storm.
The cost of cleaning up debris, clearing waterways and roads, repairing damaged sewer systems and other critical infrastructure, and rebuilding homes and businesses assaulted by wind and water is well into the tens of billions of dollars.
The idea that a storm like Sandy could happen again isn’t one we want to contemplate. But the fact is, not only could it happen again, chances are good that it will.
It’s just a matter of time.
The good news is that it’s possible to take steps now to reduce your community’s vulnerability to flooding and strengthen its resilience before another Sandy comes to town.
One way to accomplish that is to participate in the Community Rating System, a hazard mitigation program administered by the Federal Emergency Management Agency.
The goals of the CRS program are to reduce losses caused by flooding, facilitate accurate insurance ratings and promote awareness about flood insurance.
Residents of towns that participate in CRS pay reduced flood insurance premiums. The premiums are discounted in five percent increments based on the level of flood protection each community has achieved.
Communities raise their CRS rating via their achievements in four categories: Information, Mapping and Regulations, Flood Damage Reduction, and Flood Preparedness.
Sixty-one communities and the Meadowlands area in New Jersey are presently enrolled in the CRS program, saving more than $17 million combined on their flood insurance premiums.
Joining the CRS program is free, but it does require the commitment of the community. Mayors of towns that want to participate must send a letter of interest to the regional office of FEMA, which for New Jersey is:
Federal Emergency Management Agency
Region II office
26 Federal Plaza, 13th Floor
New York, N.Y. 10278
FEMA representatives will then arrange a visit to review the community’s floodplain management status and ensure that it meets federal regulations.
Once the community is granted a “letter of good standing,” it receives a verification visit from the Insurance Services Office, a FEMA contract agency, to verify the community’s eligibility for the program and to determine its rating.
Once accepted into the program, towns must file annual reports showing the measures they have taken to reduce their flood risks. Every five years, each town must undergo a complete audit to ensure that they remain in compliance with the CRS program.
Most communities enter the CRS at Level 9, which immediately entitles residents to a five percent reduction in their flood insurance bills. Communities achieve the maximum premium discount of 45 percent when they reach Level 1.
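Under the linear schedule described above (entry at Level 9, discounts in five percent increments, 45 percent at Level 1), the premium discount for a given CRS level can be sketched as follows. This is a simplification for illustration; the actual NFIP discount tables also vary by flood zone:

```python
def crs_discount(crs_level: int) -> int:
    """Flood insurance premium discount (percent) for a CRS level,
    assuming the linear five-percent-per-level schedule:
    Level 9 -> 5%, Level 1 -> 45%. Level 10 (non-participating) -> 0%."""
    if not 1 <= crs_level <= 10:
        raise ValueError("CRS level must be between 1 and 10")
    return (10 - crs_level) * 5
```

The Level 5 communities listed later in the article, for example, land at a 25 percent discount under this schedule.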
More importantly, they will have strengthened their ability to withstand the whims of Mother Nature when storm clouds gather and waters rise.
As of May 1, 2014, 11 communities in New Jersey had achieved a Level 5 in the CRS, earning property owners a 25 percent reduction in their flood insurance premiums. Those communities are: Avalon, Beach Haven, Long Beach Township, Longport, Mantoloking, Margate, Pompton Lakes, Sea Isle City, Stafford Township, Stone Harbor and Surf City.
With another hurricane season on the horizon, now is the perfect time to increase your town’s ability to weather a future storm. Learn more about NFIP’s CRS program online at http://www.fema.gov/national-flood-insurance-program-community-rating-system
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at www.twitter.com/FEMASandy, www.twitter.com/fema, www.facebook.com/FEMASandy, www.facebook.com/fema, www.fema.gov/blog, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
Drought continues to make the headlines, with the latest U.S. Drought Monitor showing moderate to exceptional drought covers 30.6 percent of the contiguous United States.
Its weekly update also shows that 82 percent of the state of California is in a state of extreme or exceptional drought. Reservoir levels in the state continued to decline, and groundwater wells continued to go dry, the U.S. Drought Monitor says.
The LA Times reports that California’s historic drought has 14 communities on the brink of waterlessness. It quotes Tim Quinn, executive director of the Association of California Water Agencies, saying that communities that have made the list are often small and isolated and have relied on a single source of water without backup sources.
(MCT) — President Obama and other leaders delivered a sobering message at the United Nations on Thursday, saying the world was not doing enough to contain the Ebola outbreak in West Africa and avert a “humanitarian catastrophe.”
“This is more than a health crisis,” Obama told leaders at a special gathering convened while the U.N. General Assembly was meeting in New York. “This is a growing threat to regional and global security.”
Faced with a caseload that is doubling every three weeks, U.N. Secretary-General Ban Ki-moon has called for a “twentyfold surge in care, tracking, transport and equipment” to get in front of the epidemic, which is believed to have killed more than 2,900 people.
Obama said last week that he would send as many as 3,000 military personnel to establish a coordination center in Liberia and work with partners to set up Ebola treatment facilities, train health workers and distribute medical supplies and prevention information.
Exams can be hard enough without having to sit them in a foreign language. Our Good Practice Guidelines are already available in several languages so why not the CBCI exam also? Good question! The Business Continuity Institute is pleased to say that you can now sit your exam in Spanish, French, Italian or Japanese at computer-based testing centres, or alternatively you can sit paper and pencil exams through our global network of training providers, currently in Arabic, French, German, Italian and Spanish. Our long term aim is to have many other languages available.
To book your computer-based exam simply purchase it from the BCI shop. Once payment is complete you will receive an email containing an individual ID number and link to the Prometric website. You will then be able to choose the location of the exam and the language you wish to sit the exam in.
Yet another example of the BCI improving accessibility! For further information on this please email the BCI Learning and Development Team.
I’ve recently written about my journey of taking a business through to ISO22301 certification and how I achieved it with virtually no prior experience while creating a management system completely from scratch. It was quite the adventure and I naively assumed the journey would end there…
The truth is there is no end point to this journey (unless you’re a consultant) as you begin to evidence the system’s continuing improvement and maturity over time. You will have to continually work with whatever you create during these audits and keep it alive long enough to pass those surveillance visits!
At this point in the system’s development I decided it would be worthwhile to undertake some additional training to prepare myself. A close colleague and mentor of mine suggested:
“The ISO 22301 Lead Auditor training is definitely the way forward for people at your stage, it’s quickly becoming a pre-requisite for most BC jobs”
‘Bash’ or ‘Shellshock’, a major new security vulnerability that could have greater impacts than Heartbleed, has been uncovered. In this article Continuity Central summarises the views of a number of information security professionals concerning this vulnerability.
Toyin Adelakun, VP of Products at Sestus:
Bash is a command interpreter (or ‘shell’) present on many Unix-based systems — such as Apple’s OS X, various flavours of Linux (such as Red Hat and Ubuntu), and other operating systems such as IBM’s AIX and HP’s HP-UX.
A command interpreter allows users to interact with the operating system, for the purposes of issuing low-level instructions and manipulating data.
On many Unix systems, users might be human, or software applications (apps).
Direct access to data and instructions potentially offers a means for attackers (malevolent users) to circumvent the protections built into a legitimate app in respect of the app’s data.
Therefore, the fact that many apps use Bash to invoke other apps or operating-system commands makes this vulnerability particularly potent.
Continuity Central is currently conducting a brief survey into whether there is a change in business terminology taking place: from business continuity management to organizational resilience. The survey is a follow up to an article in which Lyndon Bird, the technical director of the Business Continuity Institute, claims that such a development is under way.
The results of the survey so far show that just over half of respondents (56.76 percent) agree that a terminology change from business continuity management to organizational resilience is taking place. 33.76 percent of respondents disagree and 9.46 percent don't know.
Interestingly, when respondents were asked about their own organization, the situation was somewhat different, with only 29.73 percent of respondents stating that their organization was starting to use 'organizational resilience' rather than 'business continuity management' terminology. 68.92 percent said that their organization was still using business continuity management terminology; and 1.35 percent didn't know.
Finally the survey asked respondents whether 'organizational resilience' and 'business continuity management' are simply two names for the same process. A third (32.43 percent) think that they are two names for the same thing, while 67.57 percent believe that they are different processes. The implication being that if there is in fact a move in place away from business continuity management towards organizational resilience, this could have fundamental implications for organizations.
The survey will remain open for a further week: click here to take part.
CDC has developed a dynamic modeling tool called Ebola Response that allows for estimations of projected cases over time in Liberia and Sierra Leone. The Ebola Response modeling tool has been used to construct scenarios to illustrate how control and prevention interventions can slow and eventually stop the Ebola epidemic. Importantly, it can help planners make more informed decisions about emergency response resources to help bring the outbreak under control. It allows input of data reflective of the current situation on the ground in affected countries and communities. Ebola Response is intended to help local governments and international responders generate short-term estimates of the Ebola situations in countries, districts, and villages. The tool, in the form of a Microsoft Excel spreadsheet, will be made freely available online.
Ebola Response makes case projections, but also models the impact of key elements essential to controlling the outbreak: the number of sick individuals who are effectively isolated and other actions to control the spread of infection, such as safe burial practices. Currently, many healthy individuals are contracting Ebola from non-isolated individuals with the disease. Others are contracting Ebola because traditional burial practices can involve multiple family members being exposed to the bodily fluids of the deceased, which are highly contagious. Ebola Response modeling shows that with an increasing rate of isolation and measures to control the spread of infection, the rate of new Ebola cases declines rapidly.
CDC used the Ebola Response modeling tool to calculate Ebola cases through to mid-January in Sierra Leone and Liberia, providing an example of how this tool can be used. The MMWR estimates a range of between 550,000 and 1.4 million cases by January 20th, 2015. The top range of the case estimate, 1.4 million, is explained by the model’s assumption that cases are significantly underreported by a factor of 2.5. It is essential to note that these numbers reflect a moment in time based on scientific and epidemiological data available in August, which did not account for the ramping up of the Ebola relief effort which has occurred in September. Modeling suggests that extensive, immediate actions – such as those already started – can bring the epidemic to a tipping point to start a rapid decline in cases.
The most important part of the report describes the potential effect of public health actions. The news is encouraging. If we do nothing, things could become much worse. If the international community takes the actions that are planned Ebola can be brought under control. The model indicates that once a tipping point is reached, cases will decline about as rapidly as they had increased.
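The arithmetic behind the "do nothing" branch can be illustrated from the figures above: a caseload doubling roughly every three weeks, with reported counts scaled up by the MMWR's assumed 2.5 underreporting factor. This is only a toy exponential projection, not the CDC's Ebola Response spreadsheet, which models isolation rates and intervention effects explicitly:

```python
def project_cases(reported_now, days_ahead, doubling_days=21.0,
                  underreporting=2.5):
    """Toy uncorrected projection of total Ebola cases.

    'underreporting' scales reported cases up to estimated actual cases
    (the MMWR analysis assumed a factor of 2.5); 'doubling_days' reflects
    the roughly three-week doubling time of the uncontrolled epidemic.
    """
    estimated_now = reported_now * underreporting
    return estimated_now * 2 ** (days_ahead / doubling_days)
```

Compounding a three-week doubling over several months is what pushes the no-intervention estimates into the hundreds of thousands; the report's point is that raising the isolation rate breaks exactly this compounding.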
The National Science Foundation and the Semiconductor Research Corporation have given research awards to 10 universities to develop secure, trustworthy, assured and resilient semiconductors and systems.
The awards total $4 million and support research at the circuit, architecture and system levels on new strategies, methods and tools to decrease the likelihood of unintended behavior or access; increase resistance and resilience to tampering; and improve the ability to provide authentication throughout the supply chain and in the field.
"The processes and tools used to design and manufacture semiconductors ensure that the resulting product does what it is supposed to do. However, a key question that must also be addressed is whether the product does anything else, such as behaving in ways that are unintended or malicious," said Keith Marzullo, division director of NSF's Computer and Network Systems Division. "Through this partnership with SRC, we are pleased to focus on hardware and systems security research addressing this challenge and to provide a unique opportunity to facilitate the transition of this research into practical use."
SINGAPORE — On a sunny Saturday afternoon here, children scamper about on a broad green lawn, families lay mats down for picnics, and a man maneuvers a kite in the sky.
This is no ordinary lawn; it’s three floors up on the roof of a pump house next to Singapore’s first urban reservoir, Marina Bay.
“It’s an easy place to fly kites,” says Erich Chew, 45, whose day job is running a small IT business, but whose passion is aerial photography by kite (“Compared to a drone, there are more surprises”).
“It’s quite high,” he says, “and at this level the wind is usually quite good.”
Next to the pump house, a dam known as Marina Barrage stretches across the mouth of a wide channel. On one side of the dam is salt water, leading out to sea. On the other side is the fresh-water reservoir, a shimmering blue backdrop to some of the most expensive real estate in Singapore — tall office towers, a conference center, hotel and shopping complex and the popular Gardens by the Bay botanic garden, all built after the dam went up in 2008.
The deployment of 802.11ac is accelerating, according to ABI Research. The firm released research this week predicting that the standard will account for 11 percent of consumer gear – access points (APs), routers and gateways – this year. The total number of units shipped will be more than 176 million. About 32 million of those will be APs.
The firm says that D-Link and NETGEAR represented more than 20 percent of the consumer market during the first quarter of this year. Cisco and Aruba are the leading vendors on the enterprise side. The enterprise market, according to the firm, is expected to generate revenue of $8.1 billion by the end of 2019.
Network World prefaces a WildPackets-sponsored piece on how organizations should prepare for a smooth 802.11ac rollout with a warning that the suggestions may favor the vendor. In any case, the advice it offers is worth considering.
The Business Continuity Institute will be hosting a networking event following their annual general meeting on the eve of the BCI World Conference and Exhibition. The networking event, sponsored by EPC (formerly known as Emergency Planning College), will be starting at 7pm at the Hand and Flower pub in Hammersmith.
All delegates at the BCI World Conference are invited to attend what will be a sparkling night of entertainment, dancing, drinks and nibbles. The venue is directly opposite the Olympia, so it is conveniently located, and provides an informal environment to reacquaint yourself with BC colleagues from across the world.
Lynda Vongyer, Business Continuity Director at EPC said: "Communication is a vital element of resilience planning, implementation and recovery. It’s good to talk, so EPC are very happy to host this pre-conference evening for the BCI. A great way to relax and unwind after your travels, meet old and new colleagues. We look forward to being your hosts."
Clouds by definition are nebulous and vague. Their use in IT models and discussions goes back decades, long before the current cloud computing models. A ‘cloud’ was convenient shorthand for showing a link between a system on one side and a terminal or another system on the other. Today, however, the concept has evolved. Not only do such clouds link computers; increasingly they are the computer. Aspects of on-site IT security therefore apply to cloud computing too. For that reason alone, it’s time to firm up definitions about the type of computing that goes on in the cloud, and the IT security approaches suited to each one.
Let’s face it: For a long time, IT and legal compliance have been driving data governance. Even though the experts warned that businesses needed to own governance, that didn’t change the basic fact that many of the related tools — including master data management and data quality solutions — belonged to IT.
But a shift is happening, slowly but surely, that’s pushing data governance out of IT and into the hands of business users. One reason is that business users now see data as a key asset, according to “The Forrester Wave: Data Governance Tools, Q2 2014.”
“As organizations begin to exploit the value of data for strategy and operations, they recognize that data governance has to be about helping the business realize the value potential in data,” wrote Forrester analysts Henry Peyret and Michele Goetz. “As such, stakeholders in marketing, sales, customer service, and finance are becoming much more involved and accountable.”
At 2:49 p.m. on April 15, 2013, at the height of Boston’s annual Marathon, two bombs exploded near the finish line, killing three people and injuring more than 260 others. What followed was an extraordinary manhunt, which included a shelter-in-place request from the governor that virtually shut down the city, along with the use of social media by law enforcement as a key communications tool to keep the media and frightened citizens accurately informed about what was going on.
Within 10 minutes of the bombing, Boston Police Department (BPD) Commissioner Edward Davis told his department to start using social media and to let people know what had occurred. The importance of social media as a policing tool, in particular Twitter and Facebook, soon became apparent. Misinformation, spread by professional media outlets and social media itself, was quickly corrected by the BPD. It didn’t take long for the media to realize that the most accurate information about the bombing was coming from the official BPD Twitter account.
“The Boston Police Department was outstanding and it was so simple and effective,” said Lt. Zachary Perron, public information officer for the Palo Alto, Calif., Police Department. “They became the news source during the crisis. It was a watershed moment for law enforcement and social media.”
(MCT) — Can street flooding be crowdsourced?
Apparently so, as the Norfolk-based environmental group Wetlands Watch hones its Sea Level Rise app to enable the public to issue and receive real-time alerts about waterlogged streets.
When the app launches in a couple of weeks, Wetlands Watch Executive Director Skip Stiles says flood watchers — nicknamed "floodies" — can download it for free and join the effort to pinpoint trouble spots during a rain or storm event.
"Anyone can drop a pin and say, 'Boom, flooded,'" Stiles said.
The information will also be used by emergency managers and scientists to better understand flood patterns and prepare for them, he said.
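To see how such crowd-sourced reports could become useful to emergency managers, here is a minimal Python sketch that groups pin drops into coarse latitude/longitude cells so recurring trouble spots stand out. The field names and the 0.01-degree cell size are illustrative assumptions, not details of the Sea Level Rise app itself.

```python
from collections import Counter

def flood_hotspots(reports, cell=0.01):
    """Count pin-drop reports per rounded lat/lon grid cell, most-reported first."""
    cells = Counter(
        (round(r["lat"] / cell) * cell, round(r["lon"] / cell) * cell)
        for r in reports
    )
    return cells.most_common()

# Hypothetical reports from two "floodies" on the same Norfolk block,
# plus one elsewhere:
reports = [
    {"lat": 36.8508, "lon": -76.2859},
    {"lat": 36.8511, "lon": -76.2862},
    {"lat": 36.9312, "lon": -76.2453},
]
top_cell, count = flood_hotspots(reports)[0]
print(count)  # the most-reported cell has 2 reports
```

Rounding to a grid cell is a deliberately crude form of clustering; a production system would likely use proper geospatial indexing, but the aggregation idea is the same.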
The app comes as the Virginia Department of Emergency Management (VDEM) also unveils an interactive storm-surge map to allow users to see the maximum risk for specific locations.
Although it is good practice for organizations to have a business continuity plan, workplace flexibility is what really counts in a disaster: Victoria University of Wellington research.
Dr Noelle Donnelly and Dr Sarah Proctor-Thomson, researchers at the Centre for Labour, Employment and Work at Victoria University of Wellington’s School of Management, were commissioned by the New Zealand Public Service Association (PSA) and Inland Revenue to research the experiences of employees who worked from home following the February 2011 earthquakes in Christchurch.
This is the first study of its kind examining the experiences of flexible work arrangements in a post-disaster environment.
At the time of the February earthquake, Inland Revenue had just one central office, with over 800 staff members, in the centre of town.
“When the earthquake hit Christchurch at 12.51pm on Tuesday 22 February 2011, Inland Revenue immediately lost access to its main workplace in the CBD,” says Dr Donnelly. “In response, available senior managers met and began the work of assigning new roles and tasks to staff. One of their immediate challenges was making contact with their people to ensure that they were all safe.”
The Cloud Security Alliance (CSA) has released the results of a new survey that found a significant difference between the number of cloud-based applications IT and security professionals believe to be running in their environments, and the number reported by cloud application vendors.
The survey entitled ‘Cloud Usage: Risks and Opportunities’ included responses from IT and security professionals from around the globe representing a variety of industry verticals and enterprise sizes. The aim was to gain insight and understand the perceptions of how enterprises are using cloud apps, what kind of data is moving to and through those apps, and what that means in terms of risks.
Among other things, the survey found that 54 percent of IT and security professionals said they have 10 or fewer cloud-based applications running in their organization, with 87 percent indicating that they had 50 or fewer applications running in the cloud (with a weighted average of 23 apps per organization). These estimates are far lower than commonly reported by vendors and research reports, which count more than 500 cloud apps present, on average, per enterprise.
Software developers from around the world have been recognized at the UN Climate Summit for their ingenuity in devising life-saving apps for use in reducing the impact of extreme weather events on cities and coastal communities.
Entries to the Esri Global Disaster Resilience App Challenge included apps that allow communities to measure the impact of permafrost melt and storm water on vital infrastructure and to access sea-level rise and landslide forecasts, as well as an app that allows disaster-affected citizens to check evacuation routes, shelter locations, and much more.
Esri, a leader in geographic information system technology and mapping software, awarded prizes of $10,000 each for the best professional/scientific app and the best citizen/public-facing app. The winning apps will be made available for use by the 2,200 cities, towns and municipalities in the global Making Cities Resilient Campaign of the UN Office for Disaster Risk Reduction (UNISDR).
Crowdsourcing inevitably raises questions about data quality, but a number of companies and experts believe crowdsourcing can be used to improve data quality.
GigaOm recently profiled one of these companies, CrowdFlower, after it raised $12.5 million in its Series C round of venture capital — just under half of the $28 million it’s raised since its launch four years ago.
CrowdFlower doesn’t so much crowdsource its own work as rely on the crowd to do its clients’ work. For instance, Unilever hired CrowdFlower to extract sentiment, location, sex and other information from tweets, GigaOm reports. eBay used the company to clean up its product taxonomies.
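One common way a crowd is used to improve data quality is redundant labeling with a majority vote: give each item to several workers and keep only answers with clear consensus. The sketch below is illustrative only, not CrowdFlower’s actual method, and the labels and agreement threshold are assumptions.

```python
from collections import Counter

def majority_label(judgments, min_agreement=0.6):
    """Return the winning label if enough workers agree, else None."""
    label, votes = Counter(judgments).most_common(1)[0]
    return label if votes / len(judgments) >= min_agreement else None

# Three crowd workers each judged the sentiment of the same tweet:
print(majority_label(["positive", "positive", "neutral"]))  # positive
print(majority_label(["positive", "negative", "neutral"]))  # None (no consensus)
```

Discarding no-consensus items trades coverage for quality, which is often the right trade when the cleaned data feeds downstream analytics.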
As the Internet of Things (IoT) becomes a reality, the volume of data that will be generated by the multitude of connected devices, machines, and processes — in the consumer, business, and industrial worlds — is expected to be massive. In short, the more devices and machines that get connected, the more data that is going to be generated.
Achieving some kind of business value from this massive data reservoir will require the use of big data storage and analysis technologies that can scale to meet the constantly increasing demands placed on organizations. These include:
- NoSQL file systems
- NoSQL databases
- High-performance relational analytic and in-memory database appliances
- Hybrid relational databases with embedded MapReduce
- Streaming analytics systems
All of these technologies provide varying capabilities for managing and analyzing sensor and other data associated with IoT applications and services. That said, a key point to keep in mind is that none of them on its own currently offers an all-encompassing solution that can serve every IoT application requirement. Consequently, I recommend you consider these technologies as complementary.
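To make the "complementary" point concrete, here is a minimal, hypothetical Python sketch in which the same sensor reading feeds both a bulk raw archive (the role a NoSQL store or file system might play) and a rolling in-memory window (the role of a streaming-analytics system). The class and field names are illustrative, not from any specific product.

```python
from collections import defaultdict, deque

class IoTPipeline:
    """Toy pipeline: every reading is archived raw AND aggregated on a hot path."""

    def __init__(self, window=5):
        self.archive = []  # stand-in for bulk/NoSQL storage of raw records
        self.windows = defaultdict(lambda: deque(maxlen=window))  # hot path

    def ingest(self, device_id, value):
        self.archive.append((device_id, value))  # durable raw record
        self.windows[device_id].append(value)    # rolling window per device

    def rolling_avg(self, device_id):
        w = self.windows[device_id]
        return sum(w) / len(w) if w else None

p = IoTPipeline(window=3)
for v in (20.0, 21.0, 25.0, 30.0):
    p.ingest("thermostat-1", v)
print(p.rolling_avg("thermostat-1"))  # average of the last 3 readings
```

The archive answers retrospective questions that need every record, while the window answers "what is happening now" cheaply; neither structure alone covers both needs, which is the sense in which the listed technologies complement one another.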
Almost two years after it tore a deadly and costly path through the Northeast, Superstorm Sandy still stands as one of the most important events in the history of disaster preparedness. The desire to be more resilient in the face of these big and increasing storms kicked planning efforts by states and localities across the country into high gear. But it takes money to take action. And as governments are finding out, it’s hard to find money in today’s tight budgets.
If one of the biggest stumbling blocks to increasing a community’s sustainability and resilience is financing, then New Jersey’s in good shape. This summer, the Garden State created an energy resilience bank to “fund projects that will help prevent a reoccurrence of the energy disruptions and build energy resilience,” according to the state’s proposal for the bank. The idea essentially is to set up a dedicated source of funding for projects that will provide clean, more reliable energy at critical infrastructure such as water and wastewater treatment plants, hospitals, shelters, emergency response centers, schools, and transit systems.
Through revolving loans and grants, the bank will support projects that include installing microgrids, distributed generation (where electricity is generated from multiple small energy sources such as fuel cells or solar panels), smart grid technology and energy storage. Initially, the bank will be funded using $200 million from New Jersey’s Community Development Block Grant-Disaster Recovery allocation from the U.S. Department of Housing and Urban Development (HUD). When that runs out, says Greg Reinert, director of communications for the New Jersey Board of Public Utilities, the state will allocate funds. The ultimate goal, though, is to bring in private capital.
Yet another set of ominous projections about the Ebola epidemic in West Africa was released Tuesday, in a report from the Centers for Disease Control and Prevention that gave worst- and best-case estimates for Liberia and Sierra Leone based on computer modeling.
In the worst-case scenario, the two countries could have a total of 21,000 cases of Ebola by Sept. 30 and 1.4 million cases by Jan. 20 if the disease keeps spreading without effective methods to contain it. These figures take into account the fact that many cases go undetected, and estimate that there are actually 2.5 times as many as reported.
In the best-case model, the epidemic in both countries would be “almost ended” by Jan. 20, the report said. Success would require conducting safe funerals at which no one touches the bodies, and treating 70 percent of patients in settings that reduce the risk of transmission. The report said the proportion of patients now in such settings was about 18 percent in Liberia and 40 percent in Sierra Leone.
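The report’s underreporting correction is simple arithmetic: estimated actual cases are 2.5 times the reported count. Assuming a purely illustrative reported figure, the adjustment looks like this:

```python
# Per the CDC report, actual cases are estimated at 2.5x the reported count.
UNDERREPORTING_FACTOR = 2.5

def estimated_actual(reported_cases):
    """Adjust a reported case count for the model's underreporting factor."""
    return reported_cases * UNDERREPORTING_FACTOR

# Hypothetical example: 8,000 reported cases would imply 20,000 actual cases.
print(estimated_actual(8000))
```

This same factor is why the worst-case projections in the report run so far above the counts in official situation reports.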
SAN FRANCISCO – A staggering 43% of companies have experienced a data breach in the past year, an annual study on data breach preparedness finds.
The report, released Wednesday, was conducted by the Ponemon Institute, which does independent research on privacy, data protection and information security policy.
That's up 10% from the year before.
The absolute size of the breaches is increasing, said Michael Bruemmer, vice president of the credit information company Experian's data breach resolution group, which sponsored the report.
"Particularly beginning with last quarter in 2013, and now with all the retail breaches this year, the size had gone exponentially up," Bruemmer said.
He cited one large international breach few Americans have even heard about. In January, 40% of South Koreans—a total of 20 million people—had their personal data stolen and credit cards compromised.
Research conducted by Databarracks has revealed a significant disparity in organizations’ attitudes and approaches to business continuity and disaster recovery. The findings indicate that while medium and large organizations are confidently implementing business continuity plans, small organizations are putting themselves at risk by failing to follow suit.
The findings are part of Databarracks’ fifth annual Data Health Check report, which surveys over 400 IT professionals in the UK on the changing ways in which technology is used by businesses today.
The results revealed that only 30 percent of small organizations had a business continuity plan in place, compared with 54 percent of medium and 73 percent of large businesses. Perhaps even more concerning, when asked whether the organization intended to implement a BCP in the next 12 months, over 40 percent of small organizations said they had no intention of doing so.
Other key findings included:
- Hardware failure (21 percent), software failure (19 percent) and human error (18 percent) were reported as the top causes of data loss;
- Large organizations are more than twice as likely to have tested their disaster recovery plans in the last year compared to small organizations;
- ‘Lack of time’ was deemed to be the biggest factor for all organizations not testing their disaster recovery plans (35 percent); this was closely followed by ‘cost’ (18 percent) and ‘lack of skilled staff to carry out testing’ (18 percent).
IBM has announced the opening of its new Cloud Resiliency Center in Research Triangle Park (RTP), North Carolina. The new facility provides state-of-the-art business continuity capabilities in the cloud to protect companies from potential costly disruptions.
IBM’s new Resiliency Center integrates cloud and traditional disaster recovery capabilities with innovative physical security features. With cloud resiliency services, the recovery time of 24 to 48 hours that was once deemed the industry standard has shrunk dramatically to a matter of minutes.
Open 24 hours a day, seven days a week, the Resiliency Center team will monitor developing disaster events and then mobilize as needed to ensure that the infrastructure for all customers is configured to handle the latest threats to keep data, applications, people and transactions secure.
IBM has also announced that it will be opening two new cloud-based resiliency centers, in Mumbai, India, and Izmir, Turkey.
Technology helps organisations to get more done in less time. However, technology alone cannot guarantee business continuity. Solid business processes also contribute to resilience, but there’s another kind of ‘glue’ that can make the difference between enterprises that stand or fall when the going gets tough. It’s organisational culture, or “the way we do things round here”. This is an element that business continuity managers must factor into their planning, for at least two reasons. Firstly, as we’ve just said, because it is important – in fact, essential – to BC. Secondly, because someone whose support the BC manager must get is also likely to make organisational culture a top priority.
I was hardly surprised to see Home Depot-related emails showing up in my inbox over the weekend. After all, it may be the largest breach ever, with at least 56 million credit cards compromised.
It also now appears that Home Depot is the new poster child for what happens to a company, both in terms of data loss and of its reputation, when it ignores the warnings that it is at a high threat level.
According to a number of reports, Home Depot management had been warned for years – years – that its network was vulnerable to a serious cybersecurity attack. But it appears that upper management refused to take these warnings seriously. The New York Times reported:
In recent years, Home Depot relied on outdated software to protect its network and scanned systems that handled customer information irregularly, those people said. Some members of its security team left as managers dismissed their concerns. Others wondered how Home Depot met industry standards for protecting customer data. One went so far as to warn friends to use cash, rather than credit cards, at the company’s stores.
It’s referred to as the Big One, the cataclysmic earthquake that will devastate Los Angeles when the ground around the San Andreas Fault gives a dramatic heave.
Seismologists agree that it’s a matter of when, not if, it happens, and that the resulting damage will be incalculable in the city of more than 4 million residents and 400,000 businesses.
Emergency response will have to come on multiple fronts at once. Beyond the immediate imperative of saving lives, the emergency community will need to coordinate activities in the realms of transportation, health, finances and diverse other sectors to stabilize the city. Water will be a particular concern in an area that relies largely on outside sources for its supply.
(MCT) — Nobody knew what to call it in 1859, when the most dramatic solar storm on record shocked telegraph operators, set their paper ablaze and lit up the horizon with brilliant auroras.
Sky watchers now know the sun can belch out dozens of solar flares and related eruptions every year, including one that put electricity grid monitors on alert this month.
Bursts known as coronal mass ejections can especially destabilize the power grid by causing vibrations in the Earth's magnetic field, as NASA explains. Those vibrations cause invisible electric currents that can overwhelm circuitry and lead to prolonged shutdowns.
Solar researchers say their challenge is figuring out which bursts threaten disruption on the scale of the so-called Carrington Event, which bedeviled telegraph operators and crippled communication systems in 1859.
(MCT) — With canned peaches and tuna, marshmallows and Spam, professional chefs competed Saturday to show Houstonians that they can eat more than just peanut butter and jelly during a natural disaster.
Chef Kate McLean of Tony's won the 2nd annual Ready Houston Preparedness Kit Chef's Challenge at Market Square with a dish judge Albert Nurick said he "could see on the menu exactly as it is."
"The creativity is off the hook on this one," said Nurick, writer for the H-Town Chow Down blog.
On a fold-out table with a camp stove and average household cookware, McLean created a play on fish and chips. She and her competitors — David Grossman of Fusion Taco, Jonathan Jones of El Big Bad, Travis Lenig of Liberty Kitchen & Oysterette and Kevin Naderi of Roost — had 25 minutes to cook after lifting a tablecloth off a surprise stack of non-perishable items.
Why do we perform business continuity management (BCM)? Is it because we want to make sure that our organisations are able to respond to any future crisis? Probably yes! Is it because it’s just plain common sense that you would want your organisation to be prepared for any future eventuality? That would seem the sensible thing to do!
In many cases however, it is also because there is a legal obligation to do so. Many industries are tightly regulated, some more heavily than others, and therefore must have plans in place to deal with certain scenarios. There is also variation on an international scale with some countries having rules in place that others don’t. Legislation, regulations, standards and guidelines are being created and revised all the time and it is sometimes difficult to understand which ones are applicable to you. This is especially the case when you operate internationally.
There is a solution however. The Business Continuity Institute has published what it believes to be the most comprehensive list of legislation, regulations, standards and guidelines in the field of business continuity management. This list was put together based upon information provided by the members of the Institute from all across the world. Some of the items may only be indirectly related to BCM, and should not be interpreted as specifically designed for the industry, but rather they contain sections that could be useful to a BCM practitioner.
The ‘BCM Legislations, Regulations, Standards and Good Practice’ document breaks the list down by country and for each entry provides a brief summary of what the regulation entails, which industries it applies to, what its legal status is and who has authority for it; finally, it provides a link to the full document itself.
The BCI has done its best to check the validity of these details but takes no responsibility for their accuracy and currency at any particular time or in any particular circumstances. To download a copy of the document, click here.
Nearly all computing devices, even the processor itself, are composed of discrete elements that must be brought under a common architecture in order to produce productive, valuable outcomes. This is why we build operating systems for the PC, the server, the storage farm and even the network; otherwise, we would just have a collection of blinking boxes.
To date, this has sufficed because the data environment did not extend beyond the data center walls, and the needs of each type of device were unique enough that separate but interconnected operating systems afforded the greatest degree of flexibility and functionality.
Now, however, with the data center itself emerging as one component in a larger, distributed data ecosystem, some are starting to wonder if it should be treated like a giant, multi-user computer, with a single operating system to bind all its functions together.
EATONTOWN, N.J.– When an incident reaches the point that it’s unsafe for people to remain in the immediate area, getting everyone evacuated as safely and quickly as possible becomes crucial. One of the most important parts of an evacuation – if not the most important – is figuring out how to get out of the affected area.
Coastal Evacuation Routes exist in states that border the Atlantic Ocean and Gulf of Mexico. They are often denoted by signs featuring some combination of blue and white. In New Jersey, they are white signs with a blue circle on them, filled with white text. Because of New Jersey’s small size and its proximity to water on three sides, many of the state’s major highways also serve as coastal evacuation routes. Most of New Jersey’s routes come from the shore (south and west) and move inward, mainly westbound.
The Garden State Parkway in Cape May County, for example, is the main evacuation route out of the county to the north, along with Routes 47 and 50. Also in Cape May and Atlantic counties, the barrier islands have multiple access points connecting the towns on those islands with the Parkway and other roads headed inland.
The Atlantic City Expressway is the main east-west route through the southern part of New Jersey. When Hurricane Sandy arrived in New Jersey, state officials reversed traffic on the Atlantic City Expressway, forcing all traffic on the highway to go west, away from the coast.
Unlike the barrier islands in Cape May and Atlantic counties, there is only one way on and off of Long Beach Island – Route 72. Route 37 serves the southern half of the Barnegat Peninsula in Ocean County, and Route 35 provides access to inland roads in the northern half, including Routes 88 and 34, as well as Routes 36 and (indirectly) 18 in Monmouth County.
Getting to the main routes can sometimes involve traveling through residential areas and on lower-capacity streets and roads that can get crowded. www.ready.gov recommends keeping your car’s gas tank at least half full in case you have to leave immediately.
Once an evacuation order has been issued, leave as soon as possible to avoid traffic congestion and ensure access to routes. Have a battery-powered radio to listen for emergencies and road condition changes. During Sandy, not only was contraflow lane reversal (alteration of traffic patterns on a controlled-access highway so all vehicles travel in the same direction) implemented on the Atlantic City Expressway, but the southbound Garden State Parkway was closed to traffic.
During evacuations, people should follow instructions from local authorities on which roads to take to get to the main evacuation routes. Don’t take shortcuts, as they may be blocked. Know more than one nearby evacuation route in case the closest or most convenient one is blocked or otherwise impassable. Don’t drive into potentially hazardous areas, such as over or near other bodies of water during a hurricane or other flood event. Barrier island residents should take the quickest possible route to the mainland.
Emergency evacuations are stressful moments. But knowing where you’re going and how to get there can help make the whole experience a little easier to handle.
Evacuation routes for the state of New Jersey are posted on the New Jersey Office of Emergency Management website. Go to http://ready.nj.gov/plan/evacuation-routes.html to find the route for your region.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at www.twitter.com/FEMASandy, www.twitter.com/fema, www.facebook.com/FEMASandy, www.facebook.com/fema, www.fema.gov/blog, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.
On the 5th and 6th November, the Business Continuity Institute will be hosting its annual BCI World Conference and Exhibition at the Olympia in London, UK. Join us in our 20th year by participating in this annual event that brings together the global business continuity community.
This is a unique networking and learning experience for anyone working or interested in business continuity, risk management, emergency management, crisis and incident management, security, disaster recovery... anyone with an interest in building organisational resilience.
The programme has now been released and it is packed with an abundance of fascinating speakers and topics. Keynote speeches will be given by world-famous author and psychologist to the stars Professor Steve Peters, who explains how your inner chimp may be holding you back; by Martin Fenlon MBCI, from the Houses of Parliament, who will tell us how they prepare for the 5th November; and by the British Standards Institute, which will announce the new standard BS 65000.
The conference is split into three streams. In the Listen Stream you can hear practitioners share lessons learned, in the Learn Stream you will experience world class training based on the Good Practice Guidelines and in the Lead Stream there is an interactive thought leadership discussion and debate.
In addition to all of this, the BCI World Conference and Exhibition includes:
- Pre-conference training with expert instructors
- AGM – the must attend event for all BCI members
- Welcome networking event – join us for a night of live music, nibbles and drinks
- Live fully interactive game show
- Exhibition with a variety of attractions including demonstrations and product showcasing
- Guided tour with an experienced practitioner around the event for newcomers
- BCI clinic – visit the BCI stand with your BC related questions
- Exhibition Floor Complimentary Seminar Programme and Vendor Showcasing
- Gala dinner and global awards at the landmark Science Museum in London
Don't miss out on this great opportunity to learn and network with your colleagues from across the world. Book your place today by clicking here.
Actions that property owning organizations can take to better protect facilities, tenants and employees from civil unrest
Article provided by Preparis.
The recent killing of 18-year-old Michael Brown in Ferguson, MO, sparked a national response so powerful that frequent protests ignited throughout the United States, bringing greater awareness to injustices that are still prevalent in our modern society. These protests and demonstrations, when performed peacefully, can bring together a community in ways that few other actions can; however, as the events surrounding Ferguson show, protests have a way of spiraling out of control, causing catastrophic damage and loss of life.
From a property management perspective, the safety of your tenants and the protection of your properties depend on understanding the cultural dynamics within the communities adjacent to your business locations, staying abreast of events involving political discord that could permeate those locations, preparing for the worst scenario – civil disturbances involving your properties – and properly responding to instances of civil unrest. This article offers a guide to help you begin the process of achieving these goals in the event that civil unrest hits closer to home.
Christian Toon makes the case for a blended approach to backup and storage plans.
Data backup and storage is the IT equivalent of tidying up at the end of the day. Putting all your information away neatly so you know it is accounted for, secure and easy to find again. An unlikely topic, you would imagine, for strong opinions and lively debate. Yet that is exactly what it has become and for good reason.
Every day more data is handled by more employees who are spread across multiple locations and use a variety of devices. This increases the vulnerability of information. The solution for many organizations is to implement a centrally controlled data backup and storage plan from the range of options available. And this is where the debate can become heated. In the red corner are the cloud converts, those who are quick to point out that ultimately all hardware-based backups will fail, and that nothing offers the same storage capacity, flexibility and ease of access. Over in the blue corner, we find those who approach the cloud with more caution. They can point to a growing evidence base, such as the recent Symantec study that shows 68 percent of companies have been unable to recover data stored in the cloud, and to the fact that Forrester urges companies to back up all cloud-stored data.
The reality of the workplace is complex. IT departments need to prioritise limited budgets and work with legacy IT infrastructure as they build confidence in the security and benefits of an established cloud provider. In many cases this leads to a hybrid data backup and storage system that includes onsite servers for the most active, business critical or confidential information, and securely stored offsite tape and disc, as well as the cloud, for less essential or dormant data. The result is tidy, cost-effectively managed and protected information, and an IT team released to add more value elsewhere. At least, that is, until employees start asking for data they have lost or can’t access. The effort required to meet these requests has caught many IT professionals off-guard.
After many years with the aim of ‘promoting the art and science of business continuity’ around the world, the Business Continuity Institute (BCI) has now stated that its purpose is ‘to promote a more resilient world.’
This change of focus is supported by a new vision statement. Previously the BCI’s vision statement was: “To be the Institute of choice for business continuity professionals.” This has now been changed to: “To be the Professional Body of choice for resilience professionals.”
To support the above aims the Institute has set out three clear goals:
- To deliver a consistent “BCI experience” for members to develop and enhance their qualifications and expertise;
- To strengthen BCI’s role as “the global thought leader” for continuity and resilience;
- To increase BCI’s global influence within both mature and emerging markets which will be reflected by a growth in membership.
US Department of Housing and Urban Development (HUD) Secretary Julián Castro has launched a $1 billion National Disaster Resilience Competition. He was joined by Dr. Judith Rodin, President of The Rockefeller Foundation, in announcing that eligible states and localities can now begin applying for funds. Representatives from eligible communities will have the opportunity to attend Rockefeller-supported Resilience Academies across the country to strengthen their funding proposals.
"The National Disaster Resilience Competition is going to help communities that have been devastated by natural disasters build back stronger and better prepared for the future," said Secretary Julián Castro. "This competition will help spur innovation, creatively distribute limited federal resources, and help communities across the country cope with the reality of severe weather that is being made worse by climate change."
"The Rockefeller Foundation is committed to spurring innovation in resilience planning and design so that communities can build better, more resilient futures, particularly for their most vulnerable citizens" said Dr. Judith Rodin, President of The Rockefeller Foundation. "Building resilience will minimize the impact of the next shock, while also improving life in communities day-to-day, allowing them to yield a resilience dividend. Everyone wins."
The National Disaster Resilience Competition makes $1 billion available to communities that have been struck by natural disasters in recent years. The competition promotes risk assessment and planning and will fund the implementation of innovative resilience projects to better prepare communities for future storms and other extreme events. Funding for the competition is from the Community Development Block Grant disaster recovery (CDBG-DR) appropriation provided by the Disaster Relief Appropriations Act, 2013 (PL 113-2).
All successful applicants will need to tie their proposals to the eligible disaster from which they are recovering.
Given the complexity of the challenge, HUD will partner with The Rockefeller Foundation to help communities better understand the innovation, broad commitment, and multi-faceted approach that is required to build toward a more resilient future. As it did in HUD's Rebuild by Design competition, The Rockefeller Foundation will provide targeted technical assistance to eligible communities and support a stakeholder-driven process, informed by the best available data, to identify recovery needs and innovative solutions.
There are 67 eligible applicants for the $1 billion National Disaster Resilience Competition. All states with counties that experienced a Presidentially Declared Major Disaster in 2011, 2012 or 2013 are eligible to submit applications that address unmet needs as well as vulnerabilities to future extreme events, stresses, threats, hazards, or other shocks in areas that were most impacted and distressed as a result of the effects of the Qualified Disaster. This includes 48 of 50 states plus Puerto Rico and Washington, DC. In addition, 17 local governments that have received funding under PL 113-2 are also eligible.
Whether you already have one or are contemplating acquiring one, owning a standby power generator is not a ‘set it and forget it’ responsibility.
As a Business Continuity professional you should not rely on that generator to mitigate electrical disruption risks unless you ask – and get satisfactory answers to – four questions about the most important aspects of owning and using a backup generator:
The Weather Company, best known for The Weather Channel and weather.com, is getting into the emergency alert business — a natural fit given the company's focus and market saturation.
Using its large-scale distribution and weather expertise, the company is, in partnership with local officials, building a localized alerting platform for state, local and private authorities to manage and distribute emergency alerts via The Weather Channel properties and existing local distribution points.
“The U.S. offers its citizens some of the best emergency alerting capabilities in the world,” said Bryson Koehler, executive vice president and CIO of The Weather Company, noting that the National Weather Service and FEMA ensure national coverage through alerts and the Integrated Public Alert and Warning System (IPAWS) system. "But most communities currently do not have a local alerting system to integrate with IPAWS. As a result, many alerts cover large areas or do not provide the types of local details that can best serve the public.”
Are concerns about personal data a sign of privilege?
Daniel Castro argues that they are, especially as the Internet of Things (IoT) comes online and data constantly streams from high-tech, high-cost gadgets.
Poor people don’t own Fitbits. Rather inconveniently for data, they also are born, grow up and live in low-tech environments. In our data-driven society, the end effect is that these people disappear from the data, writes Castro in his paper, “The Rise of Data Poverty in America.” Castro is the director of the Center for Data Innovation, the think tank that published the paper. He’s also a senior analyst at the Information Technology and Innovation Foundation, qualifications that show in his thought-provoking, well-researched paper.
More than 500 Red Cross volunteers are helping people affected by Hurricane Odile in the Mexican state of Baja California Sur. The volunteers—120 of whom are paramedics—are providing basic medical check-ups and delivering food to people housed in shelters. The Red Cross has sent 2,000 food parcels to the city of Los Cabos. In addition, volunteers are carrying out damage assessments in Baja California Sur in order to determine the most urgent needs.
The storm has left roughly 82% of the population in Los Cabos and La Paz without electrical power, damaged roadways, and caused ports to close. People affected by the storm have evacuated to 164 shelters in Baja California Sur.
Mexican Red Cross volunteers participating in the response are specialists in collapsed structures, damage evaluations, pre-hospital care, and logistics support in shelters and collection centres. The Mexican Red Cross is working closely with federal authorities, Civil Protection, the Governor's Secretariat, and the Mexican Marines and Army to deliver aid to the people affected as quickly as possible.
Another storm—Hurricane Polo—is threatening the Mexican state of Guerrero, where at least 120 Mexican Red Cross volunteers are prepositioned to act if needed.
(MCT) — Among the many things the Bay Area learned from the recent shaker near Napa is that the University of California, Berkeley’s earthquake warning system does indeed work for the handful of people who receive its messages, but most folks find out about a tremor only after it knocks them out of bed.
Silicon Valley has made apps that tell people when their Uber ride is approaching, their air conditioning has broken or a thunderstorm is brewing. Yet despite being home to the most devastating earthquakes in the country, the region does not have a high-tech earthquake alert system for the public.
But since last month’s temblor, more tech companies are trying to solve that problem. A handful of startups are developing apps that would quickly broadcast warnings of upcoming quakes to users on their smartphones, tablets or other gadgets. Already, the much-joked-about messaging app Yo has rolled out “Earthquake Yo” to hundreds of users.
What is the scarcest IT resource today? Processor power, main memory and disk space all seem to grow unabated. Network bandwidth, on the other hand, is still comparatively expensive. Consequently, enterprises tend to have less of it, which in turn leaves them more exposed to possible outages. Luckily, other technology means that bandwidth can be made to do more, even if it’s not reasonable to have more of it. Routing voice and data over the same links is a prime example. This simplifies recovery and can also minimize outages. What’s missing in the equation is a simple explanation of the terms involved. Here are a few to help you mix and match for the configuration that suits you.
After reading several blogs and articles this week, I’ve learned that many small to midsize businesses (SMBs) tend to learn as they go—especially when it comes to technology. And often, those lessons can be costly.
In a LinkedIn blog post written by Boost IT CEO Russell Shulin, I found a list of six major technology issues often overlooked by SMBs that can bust budgets and deeply affect business. Shulin explains that each is a lesson he has experienced himself or seen others experience. Tips SMBs should consider include:
On the morning of Nov. 16, 2013, rural Ouray County, Colo., emergency responders were called to help miners in a nearby mine. Two were unconscious and 20 were suffering from oxygen deficiency. The two miners tragically died of carbon-monoxide poisoning, but a swift response got the other 20 to safety in a multiagency and regional effort.
The timing was uncanny. The coordinated response that ensued had been practiced in a Mass Casualty Incident Command System (MCICS) training just the day before the incident, when those same responders were trained using an active shooter model. The training was applied to the mine incident in a structure that can be generalized to almost any mass casualty incident.
At the Revenue-Virginius mine, the county established a transportation unit leader and group for the first time to accurately track who was coming and going during the emergency.
In total, 30 responders navigated snowy, narrow terrain to reach miners exposed to high levels of toxic carbon monoxide gas. The transportation leader and group helped especially to track and triage the miners and ensure quick treatment at three regional hospitals.
WINNIPEG, MANITOBA, Canada – After decades of working undercover for the Royal Canadian Mounted Police, the U.S. Drug Enforcement Administration and U.S. Customs Service, crime and risk expert Chris Mathers knows where companies are vulnerable and what it takes to protect them.
“In a world where popular culture tells us that the ends justify the means, crime is all about perception,” he said in a keynote address at the 2014 RIMS Canada Conference. “Young people are bombarded with it all the time, but we are in business, too. So the question is, how vulnerable is your business?”
Mathers, who joined the forensic division of KPMG and was later named president of corporate intelligence, shared his insight into how companies can best guard against “the business of crime, and crime in business.”
(MCT) — The San Antonio River Authority has announced the first nationwide implementation of software to help emergency responders react to dangerous floods.
SARA and the San Antonio Fire Department will hold a news conference Wednesday to discuss the FloodWorks system. It was developed in the United Kingdom and is operational via a “user-friendly, interactive website” at the San Antonio Emergency Operations Center at Brooks City-Base, officials said.
“We're doing the technology development; their role is the response,” Russell Persyn, SARA's watershed engineering manager, said of the joint project with the fire department.
The system, installed late last year and run through tests in the spring, uses historical flood data and weather forecasts to plan a day before a potential flood, with real-time radar updates from the National Weather Service helping responders track developments during a storm.
Reports are published almost daily about the gender pay gap in the UK. In 2013, women earned 19.7 percent less than men doing the same job. While in professional occupations, the pay gap is smaller (around 9 percent), at a senior level, the gender pay gap has not really decreased since 2005. Senior women earn 20.2 percent less than men in a similar role.
When examining the salaries for women in the resilience and governance sectors, recruitment agency BeecherMadden expected to see a similar trend.
However, surprisingly, salaries for women in resilience and governance roles buck the trend of women being paid less. Comparing recent appointments in the past year, women have been paid up to 30 percent more. This is for roles where men with comparable experience have been appointed at a similar time, entering similar organizations.
BeecherMadden also found several examples of women who, with less experience in their role than men, were earning around 10 percent more for a similar role. The difference is most notable for those going into their second jobs; candidates who have 3 - 5 years’ experience are the most in demand and show the biggest pay difference. At senior levels, the experience gap closes when looking at comparable commercial experience.
To address critical gaps in knowledge about data center fire prevention, the US Fire Protection Research Foundation, an affiliate of the National Fire Protection Association (NFPA), has announced the release of a new report, ‘Validation of Modeling Tools for Detection Design in High Air Flow Environments,’ as the result of a project in partnership with Hughes Associates and FM Global.
The report validates a model that provides reliable analysis of smoke detection in data centers and guidance to the technical committees for NFPA 75, Fire Protection of Informational Technology Equipment, and NFPA 76, Fire Protection of Telecommunications Facilities.
Fire prevention and detection is critical to safeguarding data centers, which hold critical business and organizational information around the world. Globally, spending on these facilities will be an estimated $149 billion this year, according to Gartner.
In the past few years, the equipment in data centers has changed significantly, which has placed increased demands on HVAC systems. As a result, airflow containment solutions are being introduced to increase energy efficiency. From a fire safety design perspective, the use of airflow containment creates a high airflow environment that dilutes smoke, which poses challenges for adequate smoke detection, and affects the dispersion of fire suppression agents.
“While data centers have become increasingly important in housing digital information, sufficient smoke detection is a challenge with data center cooling systems,” says Amanda Kimball, a research project manager for the Foundation. “This research included a series of simulations with various smoke detector spacing, types of fires, and air flows which gave us important guidance on smoke detection placement and installation.”
(MCT) — Cities across California are struggling with how to convince property owners to retrofit buildings at risk of collapse during a major earthquake.
San Francisco this week is using an unusual tactic: trying to publicly shame building owners into shoring up their structures to better withstand shaking.
The city will slap large signs — in multiple languages, with red letters and a drawing of a destroyed building — on hundreds of apartment complexes that violate San Francisco's seismic safety laws.
No California city has gone so far to inform the public about potentially dangerous buildings and pressure property owners to make fixes.
Los Angeles is considering a similar approach. Mayor Eric Garcetti has proposed what would be the nation's first letter grading system to alert the public about the seismic safety of buildings. He has also said he wants to require owners to retrofit buildings that are at risk but is still working out the details of his plan.
It seems that small to midsize businesses (SMBs) around the world should begin beefing up their cybersecurity initiatives. Cybertinel, an Israeli security company, verified the enigmatic Harkonnen Trojan on the network of one of its German clients in August, where attackers had taken full advantage of the often lax or lacking network security in place in many SMBs.
According to TechWorld, around 300 SMBs in Europe may have been used as “fronts” for stealing data for as long as a decade. TechWorld’s John E. Dunn reported:
From the details released to the press, this looks like a rare example of a professional hacking-for-hire attack of long standing that possibly also targeted firms beyond the known target list, including in the UK.
As if crisis and emergency communicators don’t have enough to worry about. In today’s instant news world, without the care journalists once showed to get it right, it’s becoming increasingly common for fake spokespersons to prank the media.
Imagine the nightmare: your organization is in the middle of a major news crisis. While you are working hard to get your authorized spokesperson prepared to go live on national or regional TV, your TV monitor shows a live report featuring someone posing as a spokesperson for your organization.
Think it won’t happen?
Nags Head, N.C., barely skims the ocean surface, a town of about 3,000 people built on sand just 10 feet above sea level. Over the decades, hurricanes have cut a rough path here, taking down homes, roads and piers.
As city planners look toward the inevitable next big blow, they’re thinking about infrastructure. What happens when emergency phone lines no longer function or when the data center goes down? To meet that challenge, Nags Head is teaming up with other municipalities to create inter-city backup arrangements.
“[If] we should have a storm and the area has to be evacuated, essential personnel generally would be required to stay here. But [if] we have a very severe storm, essential personnel would be evacuated, and this arrangement gives us a place to set up shop,” said Allen Massey, IT coordinator of Nags Head.
The arrangement he refers to involves Cary, a city of 146,000 people that’s much farther inland. For call services in particular, Cary is Nags Head’s fallback position.
(MCT) TOKYO — In a nondescript government building near the Imperial Palace, a team of Japanese seismologists stands ready to predict an earthquake.
All day, every day, they monitor data from dozens of tiltmeters, strain gauges and other instruments deployed along a stretch of coastline southwest of Tokyo. The region, called Tokai, was last rocked by a major quake in 1854. Scientists fear it’s overdue for a repeat.
Since 1979, federal scientists have been watching for ground motion that might herald an impending rupture on the fault zone. If their instruments ever detect an ominous bulge, Japanese law requires the prime minister to issue warnings that will shut down schools, hospitals, factories, roads and trains across one of the country’s most populous areas.
The Pacific Northwest is subject to the same type of seismic disaster that Japan hopes to predict, but neither the U.S. nor any other nation has such an ambitious program to nail down an earthquake before it happens. That’s because most experts are convinced it can’t be done.
(MCT) — As Clark County, Wash., families get ready to settle back into the routine of the school year, local officials are hoping residents are also preparing for something less expected: a disaster.
September is National Preparedness Month, and on Monday the Clark Regional Emergency Services Agency kicked off its annual disaster preparedness game, called the "30 Days, 30 Ways Preparedness Challenge."
The game, played over social media, assigns one readiness task for each day for the month of September.
After participants have completed the task, they are asked to post their results to Twitter, Facebook, Instagram, the game's blog or send in the result by email. More details can be found at the game's website, www.30days30ways.com.
A brutal snowstorm strikes at mid-day. Roads grow increasingly congested as commuters across the city scramble to get home before conditions worsen. Ice begins to jam roads, and resulting accidents turn interstates into parking lots and neighborhood roads into skating rinks. Some parents grow increasingly desperate to reach their children as roads become impassable, leaving students stranded on buses and at school. Other parents pick up their children only to become stuck in their cars.
Once safely reunited, families remain stuck indoors for days. Childhood excitement at the sight of snow quickly turns to cabin fever. Parents’ relief to have the family reunited turns to hope for the power to remain on and schools to reopen soon.
This scenario became reality for cities across the southeastern U.S. in January 2014, highlighting the importance of preparedness, especially for families. Natural disasters affect about 66 million children each year. Keeping children safe in emergency situations starts in the home, whatever the emergency may be.
Get a Kit
“If you could take one thing with you on a desert island, what would it be?” This popular children’s question game is not too far off the mark for putting together an emergency kit for your family. Maintaining a routine in an emergency will help your children cope.
Putting together a good kit is the first step. Include your children in the process and let them pick things that make them feel secure, such as a favorite book or food; they will enjoy helping create a kit of all the things they are sure they could not live without in an emergency. Make it a game, and they will find it fun!
Some basic items to include in your kit include:
- Radio (hand-crank or battery-powered with extra batteries)
- First-aid kit
- Can opener
- Canned goods
You should also know your child’s medications and keep a small supply in case of emergency. Consider a small identification card with information on key medications and emergency contacts for your child to keep at all times.
Think of your family’s specific needs. For example, if you have an infant, keep any special foods or extra diapers on hand.
Keep a similar kit in each car, along with a blanket, nonperishable food, and a charger for your phone or other essential electronics.
Make a Plan
Knowing what to do in an emergency is just as important as having a kit. Most important is ensuring you have a way to reunite your family if they are separated at the time of the emergency. Children do better in these situations when they are with their families. As a start, teach your children important names, phone numbers and addresses. Most children can memorize a phone number by age four or five. Make it a game—it could help keep your children safe.
Protecting your family will involve others, as well. Pick a family member out of town to be a common contact for everyone to call or text. Sometimes local telephone networks can be jammed. If someone else cares for your children during part of the day, always make sure they know what to do and who to contact in an emergency, too. Lastly, make sure you have a plan for what to do with your pets. They are part of the family, too!
Being informed of your family’s situation when everyone is separated during the day is important. Know the emergency plan in your children’s schools and keep your emergency contact information up to date. Delegate a close family friend as an alternate contact who could pick your children up if you or your spouse is not able to do so. Consider a code word that only you and your children know, and make sure your children know to leave only with someone who can tell them that word. It can be anything, like a favorite book character.
In an emergency, talk to your children about what is happening. Be honest and explain the situation; it’s better to learn about it from you than from the media, since information from the media may not be age-appropriate. Set an example with your own actions by maintaining a sense of calm, even when you are distressed. This will help your family cope in any emergency.
Events and information can change quickly in an emergency. Pay attention to local leaders, like your town’s mayor or police department, so you can make the best, most informed decisions for you and your family.
Earthquake exposure is one of the biggest risks to workers compensation insurers, so it’s interesting to read that the California State Compensation Insurance Fund (SCIF) is once again looking to the capital markets to provide reinsurance protection for workers comp losses resulting from earthquakes.
This is a repeat of the first catastrophe bond sponsored by the SCIF in 2011 – Golden State Re Ltd sized at $200 million — which is due to expire in January 2015.
Artemis blog says:
“The unique transaction, which has not been repeated by anyone else until now, links earthquake severity to workers compensation loss amounts, demonstrating a new use of the catastrophe bond structure.”
The Golden State Re II catastrophe bond issuance is expected to be sized at $150 million or more, and will cover the SCIF until January 2019.
The ongoing shortage of Big Data talent is a serious problem for companies whose business increasingly relies on data analytics to remain competitive. You can imagine how difficult it must be for IT staffing firms whose clients are clamoring for Big Data skills when this country’s colleges and universities simply aren’t churning out enough graduates to meet the demand. Where do you look to find those highly skilled people? Overseas? Perhaps. But what if you looked at the existing pool of IT workers who are already inside those companies?
That’s one of the approaches being taken by Collabera, an IT staffing firm based in Morristown, N.J. I discussed the shortage of Big Data talent in an interview earlier this week with Nixon Patel, senior vice president and head of the technology competency units at Collabera. When I asked him about the extent to which Collabera relies on foreign talent, like individuals here on H-1B visas, to fill these roles for its clients, I was blown away when Patel said Collabera has taken a different approach:
We are less than three-quarters of the way through 2014, and we have already seen a slew of regulatory changes and increased audit demands. First, we saw the Supreme Court significantly extend whistleblower provisions to include private companies. Then, we saw Walmart hit with $439 million in compliance enhancements and investigation costs due to its recent FCPA probe.
Needless to say, compliance officers have been dealt a tough hand – something that’s not expected to lighten up throughout the remaining months of 2014. Here are five challenges compliance officers can expect to face throughout the remainder of this year:
A new study relies on a complex systems modelling approach to analyse inter-dependent networks and improve their reliability in the event of failure.
Energy production systems are good examples of complex systems. Their infrastructure equipment requires ancillary sub-systems structured like a network: water for cooling, transport to supply fuel, and ICT systems for control and management. Every step in the network chain is interconnected with a wider network, and all are mutually dependent.
A team of UK-based scientists has studied various aspects of inter-network dependencies, not previously explored. The findings have been published in The European Physical Journal B by Gaihua Fu from Newcastle University, UK, and colleagues. These findings could have implications for maximising the reliability of such networks when facing natural and man-made hazards.
Previous research has focused on studying single, isolated systems, not interconnected ones. However, understanding inter-connectedness is key, since failure of a component in one network can cause problems across the entire system, which can result in a cascading failure across multiple sectors, as in the energy infrastructure example quoted above.
In this study, interdependent systems are modelled as a network of networks. The model characterises interdependencies in terms of direction, redundancy, and extent of inter-network connectivity.
Fu and colleagues found that the severity of cascading failure increases significantly when inter-network connections are one-directional. They also found that the degree of redundancy in inter-network connections, which is linked to the number of connections, can have a significant effect on the robustness of systems, depending on the direction of those connections.
The authors observed that the interdependencies between many real-world systems have characteristics that are consistent with the less reliable systems they tested, and therefore they are likely to operate near their critical thresholds. Finally, ways of cost-effectively reducing the vulnerability of inter-dependent networks are suggested.
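The cascade mechanism the study describes can be illustrated in a few lines. The sketch below is not the authors' model: the node names (`p1`, `w1`, …) and the failure rule (a node fails only once all the nodes it depends on have failed) are simplifying assumptions, chosen just to show why redundancy in one-directional inter-network dependencies matters.

```python
def cascade(deps, seed_failures):
    """Propagate failures through one-directional inter-network dependencies.

    deps: dict mapping a node to the set of nodes (in the other network)
          it depends on. A node fails once ALL of its suppliers have
          failed, so extra suppliers act as redundancy.
    seed_failures: the set of initially failed nodes.
    Returns the full set of failed nodes once the cascade settles.
    """
    failed = set(seed_failures)
    changed = True
    while changed:
        changed = False
        for node, suppliers in deps.items():
            if node not in failed and suppliers and suppliers <= failed:
                failed.add(node)  # every supplier is down: node fails too
                changed = True
    return failed

# Hypothetical example: power node p1 feeds water pumps w1 and w2;
# w2 also has a redundant backup supply from a second power node p2.
deps = {"w1": {"p1"}, "w2": {"p1", "p2"}}
print(sorted(cascade(deps, {"p1"})))  # ['p1', 'w1'] -- redundancy saves w2
```

Varying the number of suppliers per node in a sketch like this reproduces, in miniature, the redundancy effect the authors report: the pump with a single one-directional dependency is lost in the cascade, while its redundantly supplied twin survives.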
Reference: Fu, G. et al. (2014). Interdependent networks: Vulnerability analysis and strategies to limit cascading failure. European Physical Journal B.
Read the paper (PDF).
The World Health Organization (WHO) has identified six countries as being at high risk for the spread of the Ebola virus disease. It is working with these countries to ensure that full surveillance, preparedness and response plans are in place.
“The following countries share land borders or major transportation connections with the affected countries and are therefore at risk for spread of the Ebola outbreak: Benin, Burkina Faso, Côte d’Ivoire, Guinea-Bissau, Mali, and Senegal,” the agency said in the first in a series of regular updates on the Ebola response roadmap.
WHO’s Ebola Response Roadmap Situation Report 1 features up-to-date maps containing hotspots and hot zones, as well as epidemiological data showing how the outbreak is evolving over time. It also communicates what is known about the location of treatment facilities and laboratories.
It follows the release of an Ebola response roadmap that aims to stop the transmission of Ebola virus disease (EVD) within six to nine months.
The update noted that although the numbers of new cases reported in Guinea and Sierra Leone had been relatively stable, last week saw the highest weekly increase yet in Guinea, Sierra Leone and Liberia, highlighting ‘the urgent need to reinforce control measures and increase capacity for case management.’
Disaster recovery planners are often advised to take a holistic view of their IT organisation, working to deal with potential outcomes rather than possible causes. That approach certainly helps businesses achieve greater overall DR effectiveness and cost-efficiency. However, a number of practical details must also be respected; otherwise, the best-aligned DR plan may never get off the ground. As the old rhyme goes, “For want of a nail, a shoe was lost…” and ultimately the whole kingdom too. Here are a few such ‘nails’ that disaster recovery planning can take into account to get those mission-critical apps up and running again after an incident.
What is the BCI Diploma?
The BCI Diploma enables individuals to achieve a formal, internationally recognised academic qualification in business continuity and is delivered in partnership with Buckinghamshire New University as a distance learning programme.
This course has been developed in response to industry demand and is designed to meet the current and future needs of business continuity professionals working in the industry worldwide.
Students will be entitled to FREE Student membership for the duration of their studies, giving them full access to a wide range of high-quality business continuity resources through the BCI Members’ Area to support their learning as well as a wide range of other value-add benefits, including Member discounts on BCI products and services.
Successful completion of the Diploma leads to the post-nominal designation DBCI (Diploma of the Business Continuity Institute). Holders of the DBCI can apply via the Alternative Route to Membership for Statutory membership of the BCI (AMBCI or MBCI dependent on experience).
This course is delivered in an interactive eLearning environment over a period of eight weeks. Each session lasts two hours, with two sessions scheduled each week, giving you a total of 32 hours of training.
The BCI Good Practice Guidelines Live Online Training Course has been revised for 2014 and is fully aligned to the Good Practice Guidelines (GPG) 2013 and to ISO 22301:2012, the international standard for BCM.
This course offers a solid description of the methods, techniques and approaches used by BC professionals worldwide to develop, implement and maintain an effective BCM programme, as described in GPG 2013 and takes the student step by step through the BCM Lifecycle, which sits at the heart of good BC practice.
Infrastructure virtualization is a proven means of streamlining hardware footprints and increasing resource agility in order to better handle the demands of burgeoning data loads and wildly divergent user requirements.
But it turns out that what is good for infrastructure is also good for data itself, which is why many organizations are looking to augment existing virtual plans with data virtualization, particularly when it comes to massive volumes found in archiving and data warehousing environments.
The Data Warehousing Institute’s David Wells offers a good overview of data virtualization and how it can drive greater enterprise flexibility. In essence, the goal is to enable access to single copies of data across disparate entities, preferably in ways that make details like location, structure and even access language irrelevant to the user. For warehousing and analytics, then, this eliminates the need to move all related data to a newly created database, which gives infrastructure and particularly networking a break because data no longer has to move from site to site in order to reach the user. Couple this with semantic optimization and in-memory caching and suddenly Big Data starts to look a lot less menacing.
The big change has finally started to take effect, shaped by our historic perceptions of terrorism, the consequences of decades of mismanagement of the Middle East, intervention where it was not necessary and a lack of intervention where it was needed, the lack of political and public will to engage with the idea of ‘home-grown’ terrorism, and the enthusiasm of disaffected youth to belong to something that allows them to ‘matter’.
In the UK, the threat level from international terrorism has been raised to ‘Severe’. This is in recognition of the fact that there is stated intent to attack the UK ‘homeland’ and its people. There is known capability, and the potential adversaries are motivated and perhaps preparing their plans now – raising the threat level is a sensible precaution and allows some focus and thinking about what needs to be done to improve our protective and response capabilities. The reaction amongst our population varies from fear about a threat we don’t understand to perhaps understandable scepticism about the motives of the Government and the wish to impose a ‘police state’ regime.
Today, I conclude a three-part series on risk assessments in your Foreign Corrupt Practices Act (FCPA) or UK Bribery Act anti-corruption compliance program. I previously reviewed some of the risks that you need to assess and how you might go about assessing them. Today I want to consider some thoughts on how to use your risk assessment going forward.
Mike Volkov has advised that you should prepare a risk matrix detailing the specific risks you have identified and relevant mitigating controls. From this you can create a new control or prepare an enhanced control to remediate the gap between specific risk and control. Finally, through this risk matrix you should be able to assess relative remediation requirements.
A manner in which to put some of Volkov’s suggestions into practice was explored by Tammy Whitehouse in an article entitled “Improving Risk Assessments and Audit Operations”. Her article focused on how the Timken Company assesses and then evaluates its risks. Once risks are identified, they are rated according to their significance and likelihood of occurring, then plotted on a heat map to determine their priority. The most significant risks with the greatest likelihood of occurring are deemed the priority risks, which become the focus of the audit/monitoring plan, she said. A variety of solutions and tools can be used to manage these risks going forward, but the key step is to evaluate and rate them.
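As a rough illustration of the significance-times-likelihood rating Whitehouse describes, a heat-map priority ordering can be sketched as follows (the risk names and scores are invented for the example, not Timken's actual data):

```python
def prioritize(risks):
    """Order risks by a simple heat-map score: significance x likelihood.

    risks: list of (name, significance 1-5, likelihood 1-5) tuples.
    The highest-scoring risks come first and would anchor the audit plan.
    """
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

# Illustrative anti-corruption risks with made-up ratings.
risks = [
    ("third-party agents in high-risk countries", 5, 4),
    ("gifts and hospitality spend", 3, 5),
    ("facilitation payments", 4, 2),
]
for name, significance, likelihood in prioritize(risks):
    print(f"score {significance * likelihood:>2}: {name}")
```

In practice the scoring scale and the cut-off between "priority" and "monitor" risks are judgment calls; the multiplication simply makes the heat-map ordering explicit and repeatable.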
There’s no doubt that virtualisation has been a boon to many enterprises. Being able to rationalise the use of servers by spreading storage and applications evenly over a total pool of hardware resources leads to higher cost-efficiency, as well as improved disaster recovery and business continuity. Yet in practical terms, businesses are often still tied to one vendor for any effective storage strategy. To break free of that constraint, software-defined storage (SDS) lets IT departments mix and match the physical storage devices as they want. And there are further benefits too.
I was recently talking with a friend about—what else—Facebook and her thoughts on whether that would be too private to share.
“Oh, I don’t believe in privacy,” she said with a dismissive hand wave.
That stumped me, in large part because she’s a defense attorney.
“You don’t believe in privacy as a fact or you don’t believe in privacy as a law?” I asked.
“Oh - legal privacy is very important,” she said. “But privacy as a fact—I don’t believe in it. It doesn’t exist.”
It sounds like a distinction only a lawyer could make. Yet as Big Data becomes commonplace, CIOs must educate themselves about the legal risks and responsibilities of gathering and using data, advises Larry Cohen, global CTO of Capgemini.
"I think the CIO is already kind of taking on more of a role of a risk broker and risk orchestrator in the enterprise," Cohen told CIO.com. "I think this is a perfect example of how a role like that arises in a topic like Big Data."
A Case In Point
Not long ago I was talking with the long-time CIO of a large organization about disaster recovery. He told me they were all set, as their tapes were stored offsite. To him, that was all he needed to be concerned about when it came to DR.
When a fire broke out in the office next to their data center, I am certain that offsite tapes were the last thing on his mind. He learned a hard lesson about relying on backups though. Turns out that after the fire, they were able to physically relocate their entire office before IT was able to restore all their applications. Even more disturbing than that was the discovery that they had more than ten days’ worth of data loss due to old/bad tapes, skipped files, and incomplete backups. I would not have wanted to be him when he met with the COO in the aftermath and had to explain the situation and his lack of preparedness.
The Napa County earthquake will have political aftershocks on Capitol Hill. The big question is how long they’ll last.
Prompted by California’s weekend temblor, lawmakers are renewing their push for earthquake warning programs. The most recent quake could spur support for a long-debated early warning system. It also could reveal some partisan fault lines.
“What we need is the political resolve to deploy such a system,” Sen. Dianne Feinstein, D-Calif., said this week.
In April, underscoring the role of politics in earthquake matters, 25 House Democrats from California, Oregon and Washington endorsed a proposal to provide $16.1 million for an earthquake early warning system. No Republican signed the letter requesting the funds.
As government leaders in California wend their way through the management of the state's historic drought, real discussions about how the state should adapt to water scarcity are taking place. And if history is a guide, the decisions made in the Golden State will have their impact in other places where water scarcity is becoming the norm.
Make no mistake: California is moving forward into uncharted territory. Traditional engineered solutions, such as the California Aqueduct that channels water from the wetter regions in the north to the arid south, are being challenged by a host of factors beyond the drought, including environmental regulations and the capacity of the systems themselves. Such water-transfer projects made it possible for the drier Southland to grow and become the most populous region of the state. But government and private-sector leaders are rapidly realizing that other approaches will be needed to fulfill future statewide agriculture, business and residential water needs.
Natural catastrophe events in the United States accounted for three of the five most costly insured catastrophe losses in the first half of 2014, according to just-released Swiss Re sigma estimates.
In mid-May, a spate of severe storms and hail hit many parts of the U.S. over a five-day period, generating insured losses of $2.6 billion. Harsh spring weather also triggered thunderstorms and tornadoes, some of which caused insured claims of $1.1 billion.
The Polar Vortex in the U.S. in January also led to a long period of heavy snowfall and very cold temperatures in the east and southern states such as Mississippi and Georgia, resulting in combined insured losses of $1.7 billion.
Ed. Note-Today, I continue my three-part posts on risk assessments. Today I take a look at some different ideas on how you might go about assessing your risks.
One of the questions that I hear most often is: how does one actually perform a risk assessment? Mike Volkov has suggested a couple of different approaches in his article “Practical Suggestions for Conducting Risk Assessments.” In it, Volkov distinguishes smaller companies, which might use basic tools such as “personal or telephone interviews of key employees; surveys and questionnaires of employees; and review of historical compliance information such as due diligence files for third parties and mergers and acquisitions, as well as internal audits of key offices”, from larger companies. Such larger companies may use these basic techniques but may also include a deeper dive into high-risk countries or high-risk business areas. If your company’s sales model uses third-party representatives, you may also wish to visit with those parties or persons to help evaluate the risks for bribery and corruption that might well be attributed to your company.
Another noted compliance practitioner, William Athanas, in an article entitled “Rethinking FCPA Compliance Strategies in a New Era of Enforcement”, took a different look at risk assessments when he posited that companies assume that FCPA violations follow a “bell-curve distribution, where the majority of employees are responsible for the majority of violations.” Athanas believed, however, that the distribution pattern more closely follows a “hockey-stick distribution, where a select few…commit virtually all violations.” He therefore suggests focusing assessments on those individuals with the opportunity to interact with foreign officials, as they have the greatest chance to commit FCPA violations. Drilling down from that group, certain individuals also possess the necessary inclination, whether a personal financial incentive linked to the transaction or an inability to recognize the significant risks attendant to bribery.
There’s bad news for SAP’s HANA: The majority of SAP’s American User Group is skeptical that the Big Data platform is worth the costs.
ASUG recently surveyed its members on SAP HANA adoption. The survey received more than 500 responses, with 93 percent of respondents identifying themselves as ASUG members.
Three-fourths of SAP customers said they have not purchased any SAP HANA products because they can’t identify a business case that would justify the costs. Ranked well below this concern (at 40 percent) were concerns about skill sets, the product roadmap and upgrade issues.
ASUG membership can also include SAP partners, whose responses were separated out from customer survey results. Still, partner results share a similar concern. The top factor partners say could lead to more HANA purchases would be “better business case guidance.” (As one reader pointed out in the comments, the SAP Innovation Awards might help here, since the list provides nearly 30 use cases.)
WASHINGTON – The Federal Emergency Management Agency (FEMA), through its Regional Office in Oakland, California, is monitoring the situation following the U.S. Geological Survey report of a 6.0 magnitude earthquake that occurred this morning six miles south southwest of Napa, California. FEMA remains in close coordination with California officials, and its Regional Watch Center is at an enhanced watch to provide additional reporting and monitoring of the situation, including impacts of any additional aftershocks.
FEMA deployed liaison officers to the state emergency operations center in California and to the California coastal region emergency operations center to help coordinate any requests for federal assistance. FEMA also deployed a National Incident Management Assistance Team (IMAT West) to California to support response activities and ensure there are no unmet needs.
“I urge residents and visitors to follow the direction of state, tribal and local officials,” FEMA Administrator Craig Fugate said. “Aftershocks can be strong enough to cause additional damage to weakened structures and can occur in the first hours, days, weeks or even months after the quake.”
When disasters occur, the first responders are local emergency and public works personnel, volunteers, humanitarian organizations and numerous private interest groups who provide emergency assistance required to protect the public's health and safety and to meet immediate human needs.
Safety and Preparedness Tips
- Expect aftershocks. These secondary shockwaves are usually less violent than the main quake but can be strong enough to do additional damage to weakened structures and can occur in the first hours, days, weeks or even months after the quake.
- During an earthquake, drop, cover and hold on. Minimize movements to a few steps to a nearby safe place. If indoors, stay there until the shaking has stopped and exiting is safe.
- If it is safe to do so, check on neighbors who may require assistance.
- Use the telephone only for emergency calls. Cellular and land line phone systems may not be functioning properly. The use of text messages to contact family is the best option, when it is available.
- Check for gas leaks. If you know how to turn the gas off, do so and report the leak to your local fire department and gas company.
The enterprise must change if it is to take advantage of all the benefits that cloud and mobile technologies have to offer. This is nothing new, of course, as the enterprise has been changing to meet new challenges and opportunities since its inception.
But confronting challenges is always easier in hindsight, which leaves us non-time travelers in a quandary: What does the cloud future hold, and how can we best prepare for it?
According to the rising cadre of startups looking to capitalize on burgeoning cloud infrastructure, the biggest thing holding the enterprise back is its legacy infrastructure and its continued reliance on the old-guard vendors who created it. SolidFire’s Jeremiah Dooley, for example, claims leading platform providers are trying to delay the inevitable switch to the cloud as much as possible in order to prevent others from encroaching upon their territory. This may benefit their revenue streams, but it keeps the enterprise in the slow lane when it comes to provisioning services and driving operational efficiency. The message here is simple: the cloud is not the problem; static legacy infrastructure is.
Social media is now a standard communications tool for businesses, with many companies regularly using Facebook, Twitter and other social networks to engage with the public. More and more businesses are hiring social media specialists whose sole responsibility is to be the company’s “voice” on these platforms. But this activity comes with risk for both the organization and the individual. The potential for any posting to be retweeted, shared or even go viral underscores the need to be aware of the rising legal risks associated with your business’s social media accounts.
Potential Defamation Lawsuits
The first tip for anyone engaged in social media on behalf of their business or employer is obvious, but not always followed—think before you post. Even if the tweet or post contains an unintended error and is deleted immediately, postings can still be pulled and reposted or retweeted by others. Once something is out there on social media, however, you’ll need to deal with the consequences. Although the laws surrounding social media are still developing, it is possible for a business to be hit with an expensive defamation suit based on a single posting or comment.
The Business Continuity Institute is pleased to announce that the keynote speaker for the BCI World Conference and Exhibition will be Prof Steve Peters – consultant psychiatrist, bestselling author and Head of Sports Psychology at UK Athletics. In addition to his extraordinary success with British cycling, he has also worked on twelve other Olympic disciplines as well as English Premier League football and the English rugby and football teams.
Beginning his career as a maths teacher, Prof Peters then switched to medicine and specialised in patients with severe and dangerous personality disorders. His focus is now on how the mind can enable people to reach optimum performance in all walks of life. Working with sportspeople at the top of their game, he gives them the confidence to come back from defeat and out-perform the opposition.
Prof Peters has been described as a "genius" by Team GB cycling coach Dave Brailsford and many decorated Olympians such as Chris Hoy, Victoria Pendleton and Bradley Wiggins have all attributed their success to him.
In his keynote speech, Prof Peters will explain his method to help us understand and control what he describes as our 'inner chimp' – the irrational, impulsive, seemingly impossible part of our mind that often holds us back. Examining motivation, confidence and communication, he will show that competition is as much in the mind as it is in the field or on the track – or in the office.
Find out more about the BCI World Conference and Exhibition on the 5th and 6th November at the London Olympia by visiting the BCI website.
Yesterday, I blogged about the Desktop Risk Assessment. I received so many comments and views about the post that I was inspired to put together a longer piece on the topic of risk assessments more generally. Of course, I got carried away, so today I will begin a three-part series on risk assessments. In today’s post I will review the legal and conceptual underpinnings of a risk assessment. Over the next couple of days, I will review the techniques you can use to perform a risk assessment and end with a discussion of what to do with the information you have gleaned from a risk assessment for your compliance program going forward.
One cannot really say enough about risk assessments in the context of anti-corruption programs. Since at least 1999, in the Metcalf & Eddy enforcement action, the US Department of Justice (DOJ) has said that risk assessments that measure the likelihood and severity of possible Foreign Corrupt Practices Act (FCPA) violations identify how you should direct your resources to manage these risks. The FCPA Guidance stated it succinctly when it said, “Assessment of risk is fundamental to developing a strong compliance program, and is another factor DOJ and SEC evaluate when assessing a company’s compliance program.” The UK Bribery Act takes a similar view. Principle I of the Six Principles of an adequate compliance programme states, “The commercial organisation regularly and comprehensively assesses the nature and extent of the risks relating to bribery to which it is exposed.” In other words, risk assessments have been around, and have even been mandated, for a long time, and their use has not lessened in importance. The British have a way with words, even when discussing compliance, and that same Principle makes clear that your risk assessment should inform your compliance programme.
Your data backups are there to help you recover information, applications and files if required, hopefully both effectively and efficiently. But they and any archiving you do may also be there for external parties to use as a result of e-discovery. That’s the retrieval of electronically stored information (ESI) for use in legal proceedings involving your organisation. The US has led the way in this field, defining ESI as any information that is “created, stored, or best used with any kind of computer technology”. Now in Australia, all court dealings above a certain size must be conducted completely digitally. But is e-discovery good news or bad news for legal rulings and ultimately business continuity?
In our haste to cover all the high-level strategies that may be needed to respond to a business disruption, Business Continuity Plans often miss critical details that can mean the difference between success and failure – especially when time is a major factor.
Many BCPs have a strategy for “Loss of Building”. That strategy may include moving critical employees from the most crucial business processes to alternate sites – either internal (another of the organization’s facilities in a different geographical location) or external (a third-party “workspace” that can be made ready to accommodate those employees’ technology requirements).
All good; and logical – but perhaps missing some critical information.
A state of emergency was declared in California yesterday by Gov. Edmund G. Brown due to the effects of a 6.1 magnitude earthquake that rocked the Napa Valley area in northern California. The U.S. Geological Survey estimates that economic losses from the quake could top $1 billion and said there is a 54% likelihood of another large quake, magnitude 5 or higher, within the next week.
As of 4:15 p.m. Sunday, six aftershocks had been reported, four centered near Napa, ranging from 2.5 to 3.6 in magnitude. Two others, a 2.8 and a 2.6, were reported near American Canyon, according to the USGS.
The Napa quake is the largest in the Bay Area since the 1989 Loma Prieta quake, which was magnitude 6.9. That quake resulted in $1.8 billion in insured claims (in 2013 dollars) being paid to policyholders, said Robert Hartwig, Ph.D., president of the Insurance Information Institute.
(MCT) — Ten seconds before the earth rumbled in a UC Berkeley lab early Sunday morning, an alarm started blaring — and an ominous countdown warned that a temblor centered near Napa was moments away.
"Earthquake! Earthquake!" it cautioned, after a quick series of alarms. "Light shaking expected in three seconds."
The successful alert was the biggest test yet in the Bay Area for a type of earthquake early warning system that's not yet available to the public in the U.S. but already is providing precious seconds of notice before quakes hit in Mexico and Japan.
The ShakeAlert system — a collaboration between Cal, Caltech, the University of Washington and the U.S. Geological Survey — could one day stop elevators, control utilities and alert motorists of an impending natural disaster. But before it is reliable enough to launch throughout the West Coast, the system needs about $80 million in equipment, software and other seismic infrastructure upgrades.
(MCT) — City officials in Napa had long worried that the grand building on the corner of Second and Brown streets — with its brick walls and giant red-tiled cupolas — could be devastated by a major earthquake.
So city officials required brick structures such as the landmark Alexandria Square building to get seismic retrofitting — bolting brick walls to ceilings and floors to make them stronger. The work was completed years ago on the 104-year-old property.
But when a 6.0 earthquake struck Sunday morning, the walls on the top floors crumbled, showering brick and mortar onto the sidewalk and outdoor café.
The destruction highlights one of the greatest fears of seismic engineers — that the retrofitting of unreinforced masonry buildings still leaves weak joints between bricks. Whole chunks can fall, sending bricks crashing down.
One day after a magnitude 6.0 earthquake struck the San Francisco/Napa area of California, the Northern California Seismic System (NCSS) says there is a 29 percent probability of a strong and possibly damaging aftershock in the next seven days and a small chance (5 to 10 percent probability) of an earthquake of equal or larger magnitude.
The NCSS, operated by UC Berkeley and USGS, added that approximately 12 to 40 small aftershocks are expected in the same seven-day period and may be felt locally.
As a rule of thumb, a magnitude 6.0 quake may have aftershocks up to 10 to 20 miles away, the NCSS added.
In the European Union in the past year, a whole range of corporate risk and regulatory issues have been at the top of the agenda, but at the top of my list are data protection and information security.
In this report on risk issues for 2014, I will look at websites, privacy impact assessments, cloud computing and the EU Data Protection Regulation.
Focus on Websites in the EU
In the past five years or so, the European Commission and regulators that focus on consumer protection have carried out regular “sweeps” of websites in order to assess levels of compliance. This trend will continue, and businesses that sell or license content to consumers need to review their online terms and conditions as well as their compliance with other e-commerce rules such as the E-Privacy Directive, E-Commerce Regulations and Distance Selling Regulations.
For example, an EU-wide screening of 330 websites that sell digital content (such as books, music, films, videos and computer games) across the European Economic Area revealed some significant areas of non-compliance.
How many among you out there are sushi fans? Conversely, how many consider the idea of eating raw fish right up there with going to the dentist’s office for some long-overdue remedial work? One’s love or distaste for sushi was used as an interesting metaphor for leadership in this week’s Corner Office section of the New York Times (NYT) by Adam Bryant, in an article entitled “Eat Your Sushi, and Expand Your Horizon”, in which he profiled Julie Myers Wood, the Chief Executive Officer (CEO) of Guidepost Solutions, a security, compliance and risk management firm. Wood said her sushi experience relates to advice she gives college students now: “One thing I always say is ‘eat the sushi.’ When I had just graduated from college, I went with my mom to Japan. We had a wonderful time, but I refused to eat the sushi. Later, when I moved to New York, I tried some sushi and loved it. The point is to be willing to try things that are unfamiliar.”
I thought about sushi and trying something different in the context of risk assessments recently. I think that most compliance practitioners understand the need for risk assessments. The FCPA Guidance could not have been clearer when it stated, “Assessment of risk is fundamental to developing a strong compliance program, and is another factor DOJ and SEC evaluate when assessing a company’s compliance program.” Yet many compliance practitioners have difficulty getting their collective arms around what is required for a risk assessment and then how precisely to use it. The FCPA Guidance makes clear there is no ‘one size fits all’ for much of anything in an effective compliance program.
One type of risk assessment can consist of a full-blown, worldwide exercise, where teams of lawyers and fiscal consultants travel around the globe, interviewing and auditing. However, if there is one thing I learned as a lawyer that also applies to the compliance field, it is that you are only limited by your imagination. So, following the FCPA Guidance’s observation that there is ‘no one size fits all’, I would submit the same is true for risk assessments.
Napa, Calif., residents were awakened at 3:20 a.m. on Sunday, Aug. 24, by a magnitude 6.0 earthquake that struck six miles southwest of the Northern California city, sending as many as 160 to the hospital, and causing widespread damage, including dozens of broken water mains and triggering six major fires. One person was still in critical condition Sunday evening.
The fires destroyed several mobile homes, and firefighters struggled with water pressure issues since a significant amount of pressure was lost because of the cracked and broken water mains. Most of the damage occurred in downtown Napa where the buildings are older.
There was also significant damage to roads, but the California Highway Patrol and the California Department of Transportation found no damage to bridges. Transportation Department dive teams also checked local toll bridges and found none.
(MCT) — A predawn earthquake rattled Napa, Calif., early Sunday morning, critically injuring at least three people as the shaking ripped facades and shattered windows from historic downtown buildings, toppled chimneys and ignited gas fires at mobile home parks.
Countless residents fled into darkened streets after the quake, measured at magnitude 6.0 by the United States Geological Survey. It was the largest to hit the San Francisco Bay area since the devastating 6.9-magnitude Loma Prieta earthquake in 1989, prompting Gov. Jerry Brown to declare a state of emergency.
The Queen of the Valley Medical Center in Napa reported 120 people seeking treatment soon after the quake. They included a small child who was airlifted to UC Davis Medical Center with critical injuries authorities attributed to a collapsed chimney.
The buildup to fall is in full swing. Next come Labor Day parades and barbecues, and then the school buses will begin to roll.
IT and telecommunications never had a real summer slowdown this year, though. Much was done, lots of news was made, and the pace hasn’t slowed even during the latter half of August. Here is a look at some of the news and more interesting commentary.
"I always imagined a few people on the phones in a small office taking calls, not a big office with actual departments, and definitely not anyone thinking about business continuity and risks." Over the past year I have heard this line said to me in varying forms when I have explained that I give advice on corporate risk and business continuity in the non profit sector.
It is not an uncommon misconception. While the risks relevant to, say, the financial services industry are easy to list, applying that thinking to the non-profit sector, and identifying what matters most there, is not as immediately obvious.
Some challenges and observations:
The range of educational backgrounds within non-profit organisations is expansive, and the primary challenge is making business continuity accessible and relatable to all.
Attitudes such as "this would take too long", "it’s not required in our industry" and "delivering primary front-line services is more important". But has anyone thought about the supporting functions?
"This will never happen to us anyway." At first, it made me feel uneasy hearing this but this is the best challenge to promote business continuity in any industry. Using the "if we don’t comply, we will get fined" card almost shifts the desired affect from wanting to provide great assurance to an exhausting check box exercise. The appetite and denial factor is a tough barrier to get around.
Forgotten plans - in most cases contingency plans were in people’s minds but just not on paper. I have heard various stories of incidents that caused an instant panic before the swift realisation that "oh yes, we have a plan, we know what we need to do" kicked off a series of reactions to get things back to normal.
Planning vs. practicing - countless months were spent planning and writing, but practicing those BCPs was missing. In recent exercises the feedback I got was that no one had ever tested their plans before and they found the exercise really useful. Actions that were thought to take five minutes took twenty. This started a chain of actions which plan owners needed to implement in order to become more resilient in an incident. A friend once said to me that businesses don’t fail because of a bad business continuity plan, but because of bad choices. That stuck with me.
So what does BC look like in these industries?
We live in a robust and dynamic society, and whilst a generic approach to start off a plan is valuable, plans must be adaptable. I quickly realised that I was getting too hung up on wanting to make each team’s plan look the same, when what really mattered was that the plan absolutely has to work for the people invoking it; if it is clear and coherent, that is sufficient.
There is no doubt that non-physical threats such as reputational risks, loss of funding from a major donor and employee scandals can have serious impacts on your operation, especially when the majority of funding comes from public generosity. If an incident occurred, what would be the emergency funding protocol? It is things like this that need the most consideration. Yes, every industry needs to consider the building, IT/data and staff, but what about the intangible factors that can essentially spell disaster?
Making those threats relatable is key, and empowering people to shift away from the view that risk and business continuity relate only to IT and financial services is essential, especially because people with widely varying backgrounds often sit under one roof in these organisations.
What does this all mean?
All non-profits, charities for example, are run like businesses. Fact!
Non-profit or not, business continuity is on everyone’s mind; they just don’t know that this is what it is called. Yes, what constitutes a threat varies from industry to industry, but essentially what matters most is the resilience each organisation has to overcome any incident it faces.
RISKercizing until next time
It’s hard to have a conversation in the enterprise these days without the topic veering toward Big Data. What is it? Where does it come from? And what are we supposed to do with it?
But despite the fact that none of these questions have clear answers yet, IT is still tasked with preparing to accommodate Big Data and then figuring out how to derive real value from it.
Part of the problem is the term “Big Data” itself. While large data volumes are a facet of Big Data, that’s not where the challenge lies. Rather, says IBM’s Doug Balog, it’s the need to accommodate the ‘variety, velocity and veracity’ that advanced analytics require that will give most managers fits. This will require not only bigger, more scalable infrastructure, but entirely new ways to collect, analyze and store data, which, from IBM’s perspective, will require advanced Power8 architectures married to powerful third-party platforms like Canonical’s Ubuntu and the various Linux distributions.
Every organization should have an Emergency Action or Evacuation Plan. Even when it is not required (by the building owner, fire department or occupancy regulations), it is a ‘best practice’ for every organization to plan and practice evacuating all personnel from the workplace. Often, evacuation focuses on getting out quickly; surely that’s the most critical objective. While simple in principle, there are some considerations that should not be overlooked:
Too Close for Safety: The standard ‘rule of thumb’ for Assembly Points is at least 200 feet from the evacuated building. This is intended to ensure personnel will not be endangered if window glass or other debris falls. Keep in mind that taller buildings may have a wider potential debris pattern, so 200 feet should be treated as the minimum. Ensuring employee safety should be the priority.
Obstruction: When emergency services (fire, police, ambulance) arrive, will they have sufficient room to do their job? Crowds of evacuated personnel shouldn’t impede their work. Emergency services may need room to park and to turn their vehicles around. Make sure Assembly Points are a reasonable distance from entrances and drive paths, and ensure personnel won’t interfere.
(MCT) — For six weeks, Florida reeled under the assault of four hurricanes.
First Charley struck Port Charlotte Aug. 13, 2004, with 150-mph winds. Then Frances pounded Martin and Palm Beach counties, collapsing part of Interstate 95 near Lake Worth and sending gusts into Broward that left a quarter-million people without electricity. Ivan came ashore near Pensacola with 120-mile-per-hour winds and a storm surge that swamped coastal towns. Jeanne struck the same area as Frances, turning out the lights in most of Palm Beach County, ripping off roofs and flooding houses.
It came to be known as the Year of the Four Hurricanes.
Following that beating, and another the next year from Hurricanes Wilma and Katrina, there have been dramatic improvements to Florida’s electric grid, shelters, and forecasting and communication abilities. And while another season like 2004 still would be disastrous, residents would have more warning and stand a better chance of returning faster to normal life.
(MCT) — The good news is people are more alert to and educated about weather this time of year.
Husbands and wives on the Coast can carry on a conversation about how the amount of sand in the upper atmosphere along the Atlantic affects the chances a tropical storm will develop.
But the downside is that the array of information can be confusing, and social media sites, looking for clicks, tend to hype tropical activity.
Find a trusted source, local emergency managers say.
Here’s a tip that might take a little pressure off the data scientist talent search: A data scientist doesn’t necessarily need to be a math wizard with a PhD or other hard science background.
In fact, that type of person might actually prove disappointing if your goal is Big Data analytics for humans, according to data scientist Michael Li.
That may seem odd, given that Li’s work focuses on exactly the kind of credentials normally associated with the term “data scientist.” Li founded and runs The Data Incubator, a six-week bootcamp to prepare science and engineering PhDs for work as data scientists and quantitative analysts.
You can’t just wing it anymore. Many things have changed since you first said you wanted to become a fireman, an astronaut, a veterinarian or a nun. This is especially true in the field of business continuity.
Business continuity is not just concerned with IT recovery anymore. Supply chain management is critical to sustaining company operations. How do we determine what is or isn’t critical? Shouldn’t we bring these issues to the attention of our C-Level management?
These are just some of the issues confronting BCP managers, and most practitioners today had to learn how to handle them along the way. As time goes by, trying to cover all the bases of continuity has become more and more complicated. Instead of learning on the job, a little education at the start would go a long way toward getting ahead of what needs to be done.
The GlaxoSmithKline PLC (GSK) corruption matter in China continues to reverberate throughout the international business community, inside and outside China. The more I think about the related trial of Peter Humphrey and his wife, Yu Yingzeng, for violating China’s privacy laws in their investigation of who filmed the head of GSK’s China unit in flagrante delicto with his Chinese girlfriend, the more I ponder the issue of risk in the management of third parties under the Foreign Corrupt Practices Act (FCPA). In an article in the Wall Street Journal (WSJ), entitled “Chinese Case Lays Business Tripwires”, reporters James T. Areddy and Laurie Burkitt explored some of the problems brought about by the investigators’ convictions.
They quoted Manuel Maisog, chief China representative for the law firm Hunton & Williams LLP, who summed up the problem regarding background due diligence investigations as “How can I do that in China?” Maisog went on to say, “The verdict created new uncertainties for doing business in China since the case hinged on the couple’s admissions that they purchased personal information about Chinese citizens on behalf of clients. Companies in China may need to adjust how they assess future merger partners, supplier proposals or whether employees are involved in bribery.”
I had pondered what that meant for a company that wanted to do business in China through some type of third-party relationship, from a sales representative to a distributor to a joint venture (JV). What if you cannot get such information? How can you still have a best practices compliance program around third-party representatives if you cannot obtain information such as ultimate beneficial ownership? At a recent SCCE event, I put that question to a Department of Justice (DOJ) representative. Paraphrasing his response, he said that companies still need to ask the question in a due diligence questionnaire or other format. What if a third party refuses to answer, citing some national law against disclosure? His response was that a company needs to weigh very closely the risk of doing business with a party that refuses to identify its ownership.
It’s been said that Big Data and the cloud go together like chocolate and peanut butter, but it looks like more symbiosis is at work here than meets the eye.
While on the surface it may seem like the two developments appeared at the same time by mere coincidence, the more likely explanation is that they both emerged in response to each other – that without the cloud there would be no Big Data, and without Big Data there would be no real reason for the cloud.
Silicon Angle’s Maria Deutscher hit on this idea recently, noting that the two seem to be feeding off each other: As enterprises start to grapple with Big Data, they will naturally turn to the cloud to support the load, which in turn will generate more data and the need for additional cloud resources. In part, this is a continuation of the old paradigm that more computing power and capacity simply causes users to up their data requirements. Of course, the cloud comes with additional security and availability concerns, but in the end it is the only way for already stretched IT budgets to feasibly cope with the amount of data being generated on a daily basis.
An improving economy and updated business practices have contributed to companies sending more employees than ever on international business trips and expatriate assignments. Rising travel risks, however, require employers to take proactive measures to ensure the health and safety of their traveling employees. Yet many organizations fail to implement a company-wide travel risk management plan until it is too late, causing serious consequences that could easily have been avoided.
The most effective crisis planning requires company-wide education before employees take off for their destinations. Designing a well-executed response plan and holding mandatory training for both administrators and traveling employees will ensure that everyone understands both company protocol and their specific roles during an emergency situation.
Additionally, businesses must be aware that Duty of Care legislation has become an integral consideration for travel risk management plans, holding companies liable for the health and safety of their employees, extending to mobile and field employees as well. To fulfill their Duty of Care obligations, organizations should incorporate the following policies within their travel risk management plan:
Ian Kilpatrick looks at the risks involved with mobile devices and how to secure them.
Mobile devices, with their large data capacities, always-on capabilities and global communications access, can represent both a business application dream and a business risk nightmare.
For those in the security industry, the focus is mainly on deploying ‘solutions’ to provide protection. However, we are now at one of those key points of change which happen perhaps once in a generation, and that demand a new way of looking at things.
The convergence of communications, mobile devices and applications, high-speed wireless, and cloud access at a personal level is driving functionality demands on businesses at too fast a rate for many organizations.
Lockton report provides information to help protect companies' employees and operations from Ebola threats.
The current Ebola outbreak, deemed ‘an international public health emergency’ by the World Health Organization, has left many companies uncertain of how to properly protect themselves while ensuring the safety of their employees and operations.
"The situation on the ground is evolving quickly and poses a threat not only to companies with operations in the region, but to all companies who have employees that may come in contact with the Ebola virus while traveling internationally," said Logan Payne of Lockton's International Risk Management Team.
Most companies are concerned with two main areas when facing a threat like Ebola: personnel risk and an interruption of normal business operations leading to a loss of revenue.
The 2014 Business Continuity Institute Africa Awards took place on Tuesday 19th August at a ceremony to coincide with the SADC and ITWeb Business Resilience Conference in South Africa. The BCI Africa Awards are held each year to recognise the outstanding contribution of business continuity professionals and organizations living in or operating in Africa.
The Winners of the Awards were:
Business Continuity Manager of the Year
Sylvain Prefumo MBCI, Head of Business Continuity at the State Bank of Mauritius
Emmanuel Atta Hanson MBCI, Business Continuity Manager at Barclays Bank of Ghana Ltd, and Elnora Aryee-Quaynor, Director of Africa Risk and Quality at PricewaterhouseCoopers (Ghana) Ltd, were both Highly Commended
Business Continuity Public Sector Manager of the Year
Dr Clifford Ferguson, Business Continuity Manager at the Government Pensions Administration Agency
Business Continuity Consultant of the Year
Peter Frielinghaus MBCI, Senior BCM Advisor at ContinuitySA
Lynn Jackson MBCI, Senior Business Continuity Consultant at ContinuitySA, was Highly Commended
Business Continuity Team of the Year
Barclays Bank of Kenya
Deloitte was Highly Commended
BCM Newcomer of the Year
Darren Johnson AMBCI, BCM Advisor at ContinuitySA
Business Continuity Innovation of the Year
Business Continuity Provider of the Year (Service)
Most Effective Recovery of the Year
Barclays Bank of Kenya
Business Continuity Personality of the Year
Congratulations to all the winners and well done to all those who were nominated. All winners from the BCI Africa Awards 2014 will be automatically entered into the BCI Global Awards 2014 which take place in November during the BCI World Conference and Exhibition.
Computerworld - When Healthcare.gov was launched last October, it gave millions of Americans direct experience with a government IT failure on a massive scale. But the overall reliability of federal IT operations is being called into question by a survey that finds outages aren't uncommon in government.
Specifically, the survey found that 70% of federal agencies have experienced downtime of 30 minutes or more in a recent one-month period. Of that number, 42% of the outages were blamed on network or server problems and 29% on Internet connectivity loss.
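To put those figures in rough perspective, even 30 minutes of downtime in a one-month window still corresponds to about 99.93 percent availability. A quick back-of-the-envelope calculation (this sketch assumes a 30-day month, which the survey does not specify):

```python
# Availability implied by 30 minutes of downtime in one month.
# Assumption (not from the survey): a "one-month period" = 30 days.
minutes_per_month = 30 * 24 * 60   # 43,200 minutes in a 30-day month
downtime_minutes = 30

availability_pct = 100 * (1 - downtime_minutes / minutes_per_month)
print(f"{availability_pct:.2f}% availability")  # prints "99.93% availability"
```

The point of the survey is less the raw percentage than its breadth: even "three nines" of availability, spread across 70% of agencies, adds up to a systemic reliability problem.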
This rate of outage isn't anywhere near as severe or dramatic as what Healthcare.gov faced until it was fixed. But the report by MeriTalk, which provides a network for government IT professionals, suggests that downtime is a systemic issue. The research was sponsored by Symantec.
The report is interesting because it surveys two distinct government groups, 152 federal "field workers," or people who work outside the office, and 150 IT professionals.
For all the care and feeding we’ve given to the data center over the years, it must be remembered that all that technology and the skills to operate it are a means to an end. The real prize these days is application performance.
An increasingly mobile workforce is fostering dramatic changes in the way work and productivity are measured, and enterprise infrastructure needs to keep up with these trends in order to remain relevant in the years to come. That means issues like throughput and compute power are still important, but so are architectural flexibility and the need to become more responsive to user needs.
According to a recent survey from SolarWinds, 93 percent of business people say the performance and availability of apps like Exchange, SharePoint and NetSuite are crucial to their job performance, with nearly two-thirds describing them as critically important. At the same time, however, 36 percent say they have waited a full day for problems to be resolved in mission-critical apps, while 22 percent have experienced wait times of several days.
By Claire Phipps, MBCI
Businesses are usually in operation to make money and to deliver a service or provide a product. Success requires many traits, and ensuring your business is dynamic, adaptive, efficient and cost-effective is a good starting point. Who would want a business that is passive, rigid, ineffective and expensive?
The same is true when talking about good management disciplines and recognised international standards and best practice.
So why don’t we evolve these disciplines and channel our way of thinking to change the way in which we deploy them? Adapt the methods in which we operate to one of ‘organizational resiliency’ - an all-encompassing, comprehensive management discipline that ‘ticks all the right boxes’ and provides success, growth, strength, security and a return on our investment.
Within my industry there has long been discussion and debate about the future of business continuity and whether or not organizational resilience is the way forward. The fact that we still do not have a concrete answer could be the answer itself. Yet again I’m hearing the phrase more commonly discussed, so I thought I would consider my own opinions on the topic and open this up for further discussion.
Senior disaster management officials from APEC economies, meeting in Beijing in the aftermath of the Ludian Earthquake in Southwest China, have detailed new far-reaching measures to strengthen relief and risk reduction capabilities across the Asia-Pacific, the world’s most disaster-prone region.
After observing a moment of silence for the victims of the 6.5 magnitude quake, officials were briefed on efforts to help survivors and speed recovery, and sanctioned deeper cooperation to protect against future emergencies. Joint actions are being taken forward through technical capacity-building exchanges between APEC economies.
“The frequent occurrence of natural disasters poses a serious threat to lives and the economic health of the entire region,” cautioned Dou Yupei, China’s Vice Minister of Civil Affairs, in remarks to the 8th APEC Senior Disaster Management Officials’ Forum. “We must join hands to reduce disaster risk and guarantee the coordinated development of society, economy and the environment.”
IFMA, the US-based International Facility Management Association, has published an overarching guide to business continuity and emergency preparedness. It includes results from the IFMA 2014 Business Continuity Survey and research forums on emergency preparedness and business continuity.
‘High Stakes Business: People, Property and Services (Facility Management Perspectives on Emergency Preparedness and Business Continuity in North America)’ looks at the growing necessity of emergency and business continuity planning as a strategic priority; one which provides a unique opportunity for facility managers to establish valued partner status in ensuring organizational resiliency and longevity.
“Emergency preparedness and business continuity are critical and complex tasks that affect all facets of commercial and institutional facilities and are central to FM worldwide. This publication provides practical guidance to facility professionals in order to develop plans that will best equip their organizations to resume normal operations as quickly as possible after disaster strikes,” said Stephen Ballesty, IFMA Board of Directors, IFMA Research Committee Chair, Director, Head of Advisory, Head of Research.
The report is available at a cost of $180 for non-IFMA members and £90 for members.
The 2014 BCI Asia Awards took place on Thursday 14th August at the 12th Asia Business Continuity Conference in Singapore. The BCI Asia Awards are held each year to recognise the outstanding contribution of business continuity professionals and organizations living in or operating in the region.
The Winners of the Awards were:
Business Continuity Provider of the Year (Product)
Business Continuity Team of the Year
Business Continuity Innovation of the Year
BCM Manager of the Year
Khalid Ahmed Bahabri
BCM Newcomer of the Year
All winners from the BCI Asia Awards 2014 will be automatically entered into the BCI Global Awards 2014 which take place in November during the BCI World Conference and Exhibition 2014.
Maintaining a supply chain's resilience is a daunting challenge, especially considering the increasing scale and complexity of supply chains worldwide. To support business continuity professionals in helping to assess their supply chains, the Business Continuity Institute has just published its latest Working Paper which uses a series of statistical comparisons from previous studies to look at the influence the number of suppliers an organisation has on the frequency and cost of supply chain disruption.
The research concluded that supply chain complexity does influence the frequency and cost of disruption, which represents an important step towards a better understanding of supply chain disruption. Establishing the relationship between the complexity of supply chains and the frequency and cost of incidents will validate efforts by supply chain planners to work towards greater visibility of their supply chains. It also provides additional evidence that may be used to justify continued investment in further understanding an organisation’s supply chain.
The study does highlight, however, that given the implications of this research for organisational decision-making, further statistical analysis of other variables that affect supply chains is recommended.
The Supply Chain Resilience survey has been one of the most comprehensive studies of its kind, producing useful findings that have guided organisations in building resilience into their supply chains. A more thorough study therefore provides greater opportunities to refine this tool and make it even more helpful to organisations worldwide.
To download the full version of the BCI's 'Working Paper Series No. 2: A quantitative analysis of selected variables in the 2013 Supply Chain Resilience Survey', please click here.
To take part in the BCI's 2014 Supply Chain Resilience survey and help further this research, please click here.
You can contact the paper’s author – Patrick Alcantara of the BCI’s Research Department – with any feedback about this particular paper or with any suggestions for future topics.
The main challenges in properly implementing business continuity management in an organisation can be expressed in four words: engagement, understanding, appropriateness and assumptions. In other words: senior management needs to be involved and committed to BCM; business continuity managers need to understand the essentials about IT operations; BCM processes need to link business objectives to operational realities; and any assumptions in BC planning need to be closely scrutinized. If this sounds like IT governance, you’re right. IT governance gives some good hints about how to make business continuity a practical, valued reality.
Maintaining the state’s trend of taking a leading position on new technological and legal challenges, a California Court of Appeal ruled earlier this month that within the state:
“We hold that when employees must use their personal cell phones for work-related calls, Labor Code section 2802 requires the employer to reimburse them. Whether the employees have cell phone plans with unlimited minutes or limited minutes, the reimbursement owed is a reasonable percentage of their cell phone bills."
And with that, a fresh set of headaches for companies and IT departments managing or allowing employee-owned devices used for work purposes is born.
By Victoria Harp
CDC leads the nation in responding to public health emergencies, such as outbreaks and natural disasters. While the agency encourages the public to be aware of personal and family preparedness, not all CDC staff follow those guidelines. In an effort to increase personal preparedness as part of workforce culture, CDC created the Ready CDC initiative. Targeting the CDC workforce living in metropolitan Atlanta, this program recently completed a pilot within the organization and is currently being evaluated for measurable improvements in recommended personal preparedness actions. Ready CDC is co-branded with the Federal Emergency Management Agency’s (FEMA) Ready.gov program, which is designed for local entities to adopt in order to make personal preparedness more meaningful to their communities. Ready CDC has done just that; the program uses a Whole Community approach to put personal preparedness into practice.
FEMA’s Whole Community approach relies on community action and behavior change at the local community level to instill a culture of preparedness. To achieve this with Ready CDC, the CDC workforce receives the following:
- The support needed to participate from their employer
- Consistent messaging from a trusted, valued source
- Localized and meaningful personal preparedness tools and resources
- Expertise and guidance from local community preparedness leaders
- Personal preparedness education that goes beyond the basic awareness level to practicing actionable behaviors such as making an emergency kit and a family disaster plan
Are you Ready CDC?
When the Office of Public Health Preparedness and Response Learning Office conducted an environmental scan and literature review, as well as an inward look at the readiness and resiliency of the CDC workforce, the need for a program like Ready CDC emerged. Although CDC has highlighted personal preparedness nationally in its innovative preparedness campaigns, there have been no formal efforts to determine whether, or ensure that, the larger CDC workforce is prepared for an emergency. After all, thousands of people make up CDC’s workforce in Metro Atlanta, throughout the United States, and beyond.
The public relies upon those thousands of people to keep the life-saving, preventative work of CDC going 24/7. When the CDC workforce has their personal preparedness plans in place, they should be more willing and better able to work on behalf of CDC during a local emergency. Research has shown that individuals are more likely to respond to an event if they perceive that their family is prepared to function in their absence during an emergency*. Also, the National Health Security Strategy describes personal preparedness in its first strategic objective as a means to build community resilience.
Local Partnerships for the CDC
Ready CDC intends to move the dial by using CDC’s own workforce to understand behaviors associated with preparedness, including barriers to change. This is the most intriguing aspect of Ready CDC for the local community preparedness leaders involved. Most community-level preparedness education is currently conducted at the awareness level: classes are taught and headcounts are taken, but beyond that, there is no feedback or follow-up to determine whether those efforts lead to the desired behavior changes. CDC is currently measuring and studying the Ready CDC intervention, and that has local community preparedness leaders around metro Atlanta very interested in its outcomes.
While CDC has subject matter experts on many health-related topics, CDC looked to preparedness experts in and around the Metro Atlanta community to help make Ready CDC a locally-sustainable intervention. After all, the best interventions are active collaborations with community partners**. Key community partners from the American Red Cross; Atlanta-Fulton County, DeKalb County, and Gwinnett County Emergency Management Agencies; and the Georgia Emergency Management Agency played ongoing and significant roles in developing the program content, structure, and sustainability needed for CDC’s Metro Atlanta workforce. CDC gets the benefit of their time and expertise while partners have the satisfaction of knowing their efforts are making a difference in and contributing to the resilience of their communities. Also, because of these great partnerships, one lucky class participant wins a family disaster kit courtesy of The Home Depot and Georgia Emergency Management Agency.
Ready CDC is currently available to the CDC workforce in and around Metro Atlanta; however, efforts are underway to ensure that the broader CDC workforce is reached in 2015. For more information about Ready CDC, please email email@example.com.
Do you have a cybersecurity emergency plan in place? If you do, are you confident in your cybersecurity plan? If you answered both of these questions with a yes, pat yourself on the back for a job well done. And then volunteer some advice to your business peers because you are in the minority.
A new study by the SANS Institute, sponsored by AccessData, AlienVault, Arbor Networks, Bit9 + Carbon Black, HP and McAfee/Intel Security, found that 90 percent of American businesses don't have a very effective cybersecurity emergency plan. The top reasons why an effective plan isn't in place are a lack of time and a lack of budget, cited by 62 percent and 60 percent of respondents, respectively.
So, the companies that are already spending time and money on some sort of cybersecurity emergency plan don’t have one as good as they’d like. But these companies are also in the minority, as 43 percent don’t have any type of formal emergency response plan and 55 percent don’t have a response team. That could be a fatal mistake, especially considering that more than half claimed to have had at least one critical incident requiring a response over the past two years.
Banks may be undermining their own efforts at Big Data, according to a recent Information Week column.
“When faced with the requirements of a new big data initiative, banks too often only draw on prior experience and attempt to leverage familiar technologies and software-development-lifecycle (SDLC) methodologies for deployment,” writes Michael Flynn, managing director in AlixPartners' Information Management Services Community.
The problem: Those technologies enforce structure and focus on optimizing processing performance. That means the data is aggregated and normalized in an environment that works against Big Data sets in three ways, Flynn explains.
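The schema-on-write trap Flynn describes can be illustrated with a small, entirely hypothetical sketch (the records, field names, and schema below are invented for illustration): normalizing records into a fixed relational structure at ingest discards exactly the unstructured fields a big data initiative would want to analyze, whereas storing the raw record defers interpretation to query time.

```python
# Hypothetical illustration of schema-on-write vs. schema-on-read.
# A traditional SDLC-style pipeline normalizes records at ingest,
# discarding anything that doesn't fit the predefined schema.

raw_events = [
    {"account": "A-100", "amount": 250.0, "channel": "atm", "geo": "40.7,-74.0"},
    {"account": "A-101", "amount": 80.0, "note": "wire transfer, flagged"},
]

SCHEMA = ("account", "amount")  # the "familiar" relational structure

def normalize(event):
    """Schema-on-write: keep only the agreed columns; extra context is lost."""
    return {k: event[k] for k in SCHEMA}

warehouse = [normalize(e) for e in raw_events]
# The free-text "note" and "geo" fields (exactly the unstructured signals
# a big data initiative wants) never reach the warehouse.

def ingest_raw(event):
    """Schema-on-read: store the full record; interpret it at query time."""
    return dict(event)

data_lake = [ingest_raw(e) for e in raw_events]
# The raw store still supports ad hoc questions the schema never anticipated:
flagged = [e for e in data_lake if "flagged" in e.get("note", "")]
```

The point of the sketch is not the two helper functions but the asymmetry: once `normalize` has run, the `flagged` query is unanswerable from the warehouse.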
(MCT) — Dr. Diane Weems knew the virus was on their minds, so the acting director of the East Central Health District just launched into it at last week’s meeting of the Richmond County Board of Health.
“OK, does anyone have questions about Ebola?” she asked board members.
The lethal outbreak in Africa has prompted a lot of unneeded fear even among health care workers who might not understand that it takes more than casual contact to cause an infection, she said.
Augusta and Georgia have faced far bigger public health threats in the past and will likely face worse in the future, experts said.
The problem with the outbreak in West Africa, where nearly 2,000 people have been infected and more than 1,000 people have died, is that unlike past outbreaks in self-contained rural villages, this one is occurring in more populated areas, Weems said. These countries also lack a good public health infrastructure, and health workers might not be following common infection control precautions, such as wearing gloves, she said.
As the trend for larger and more frequent wildfires continues, a team of scientists, engineers, technologists, firefighters and government and industry professionals is working on a project, called WIFIRE, to build an end-to-end cyberinfrastructure for simulation, prediction and visualization of wildfire behavior.
The WIFIRE system will analyze wildfire dynamics with specific emphasis on the climate. The system will integrate heterogeneous satellite information and remote-sensor data using computational techniques such as signal processing, visualization, modeling, and data assimilation to develop a scalable method for monitoring weather patterns and predicting the spread of a wildfire.
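The article does not describe WIFIRE's actual model, but the class of spread simulation it builds on can be sketched as a simple cellular automaton in which a burning cell ignites its fuel neighbors with a probability biased by a wind vector fed in from sensor data. Everything below (grid encoding, wind representation, probabilities) is an illustrative assumption, not WIFIRE's implementation:

```python
# A minimal, hypothetical sketch of a wind-biased cellular-automaton
# fire-spread model. Cell states: 0 = unburned fuel, 1 = burning, 2 = burned.
import random

def step(grid, wind=(1, 0), p_base=0.3, p_wind=0.6, rng=random.random):
    """Advance the fire one time step on a 2-D grid of cell states."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 1:
                continue
            nxt[r][c] = 2  # a burning cell burns out after one step
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                        continue
                    if grid[nr][nc] == 0:
                        # The downwind neighbor ignites more readily.
                        p = p_wind if (dr, dc) == wind else p_base
                        if rng() < p:
                            nxt[nr][nc] = 1
    return nxt

grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1  # ignition point at the grid center
for _ in range(3):
    grid = step(grid)
```

In a real system the wind vector and ignition probabilities would be continually re-estimated from the assimilated sensor and satellite feeds rather than fixed constants, which is where the signal processing and data assimilation the article mentions come in.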
The project started with a three-year, $2.65 million grant to the University of California at San Diego in October 2013 when participants in the project began integration and cataloging of data from sensors, satellites and scientific models to create scalable wildfire models. Participants include the San Diego Supercomputer Center (SDSC), the California Institute for Telecommunications and Information Technology’s Qualcomm Institute and the University of Maryland.
Land Cover Atlas helps communities “see” vulnerabilities and craft stronger resilience plans
A new NOAA nationwide analysis shows that between 1996 and 2011, 64,975 square miles in coastal regions -- an area larger than the state of Wisconsin -- experienced changes in land cover, including a decline in wetlands and forest cover, with development a major contributing factor.
Overall, 8.2 percent of the nation’s ocean and Great Lakes coastal regions experienced these changes. In an analysis of the five-year period from 2001 to 2006, coastal areas accounted for 43 percent of all land cover change in the continental U.S. This report identifies a wide variety of land cover changes that can intensify climate change risks, such as loss of coastal barriers to sea level rise and storm surge, and includes environmental data that can help coastal managers improve community resilience.
"Land cover maps document what's happening on the ground. By showing how that land cover has changed over time, scientists can determine how these changes impact our plant’s environmental health," said Nate Herold, a NOAA physical scientist who directs the mapping effort at NOAA's Coastal Services Center in Charleston, South Carolina.
Among the significant changes were the loss of 1,536 square miles of wetlands, and a decline in total forest cover by 6.1 percent.
The findings mirror similar changes in coastal wetland land cover loss reported in the November 2013 report, Status and Trends of Wetlands in the Coastal Watersheds of the Conterminous United States 2004 to 2009, an interagency supported analysis published by the U.S. Fish and Wildlife Service and NOAA.
This new NOAA analysis adds to the 2013 report with more recent data and includes loss of forest cover in an overall larger land area survey. Both wetlands and forest cover are critical to the promotion and protection of coastal habitat for the nation’s multi-billion dollar commercial and recreational fishing industries.
Development was a major contributing factor in the decline of both categories of land cover. Wetland loss due to development totaled 642 square miles, a disappearance rate averaging 61 football fields lost daily. Forest changes overall totaled 27,515 square miles, an area equal to West Virginia, Rhode Island, and Delaware combined. This total impact, however, was partially offset by reforestation growth. Still, the net forest cover loss was 16,483 square miles.
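The football-fields-per-day figure can be sanity-checked with a little arithmetic. The field size used below (1.32 acres, a field of play plus end zones) is an assumption, since the article does not state what it counted as a field:

```python
# Back-of-the-envelope check of "61 football fields lost daily":
# 642 square miles of wetlands lost to development over the 15-year
# study window (1996-2011).
ACRES_PER_SQ_MILE = 640
FIELD_ACRES = 1.32              # assumption: field of play plus end zones

years = 2011 - 1996             # 15-year study window
days = years * 365.25

acres_lost = 642 * ACRES_PER_SQ_MILE          # 410,880 acres
fields_per_day = acres_lost / FIELD_ACRES / days
print(round(fields_per_day))    # → 57, in the same ballpark as the article's ~61
```

A smaller assumed field size (for example, the 1.1-acre field of play alone) pushes the figure closer to 68 per day, so the article's 61 is plausible for some choice of field definition.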
These findings, and many others, are viewable via the Land Cover Atlas program from the NOAA’s Coastal Change Analysis Program (C-CAP). Standardized NOAA maps allow scientists to compare maps from different regions and maps from the same place but from different years, providing easily accessible data that are critically important to scientists, managers, and city planners as the U.S. population along the coastline continues to grow.
“The ability to mitigate the growing evidence of climate change along our coasts, with rising sea levels already impacting coastlines in ways not imagined just a few years ago, makes the data available through the Land Cover Atlas program critically important to coastal resilience planning,” said Margaret Davidson, National Ocean Service senior advisor for coastal inundation and resilience science services.
C-CAP data identify a wide variety of land cover changes that can intensify climate change risks — for example, forest or wetland losses that threaten to worsen flooding and water quality issues or weaken the area’s fishing and forestry industries. The atlas’s visuals help make NOAA environmental data available to end users, enabling them to help the public better understand the importance of improving resilience.
“Seeing changes over five, 10, or even 15 years allows Land Cover Atlas users to focus on local hazard vulnerabilities and improve their resilience plans,” said Jeffrey L. Payne, Ph.D., acting director for NOAA’s Coastal Services Center. “For instance, the atlas has helped its users assess sea level rise hazards in Florida’s Miami-Dade County, high-risk areas for stormwater runoff in southern California, and the best habitat restoration sites in two watersheds of the Great Lakes.”
Selected Regional Findings – 1996 to 2011:
The Northeast region added more than 1,170 square miles of development, an area larger than Boston, New York City, Philadelphia, Baltimore, and the District of Columbia combined.
The West Coast region experienced a net loss of 3,200 square miles of forest (4,900 square miles of forests were cut while 1,700 square miles were regrown).
The Great Lakes was the only region to experience a net wetlands gain (69 square miles), chiefly because drought and lower lake levels changed water features into marsh or sandy beach.
The Southeast region lost 510 square miles of wetlands, with more than half this number replaced by development.
The Gulf Coast region lost 996 square miles of wetlands due to land subsidence and erosion, storms, man-made changes, sea level rise, and other factors.
On a positive note, local restoration activities, such as in Florida’s Everglades, and lake-level changes enabled some Gulf Coast and Southeast region communities to gain modest-sized wetland areas, although such gains did not make up for the larger regional wetland losses.
C-CAP moderate-resolution data on the Land Cover Atlas encompass the intertidal areas, wetlands, and adjacent uplands of 29 states fronting the oceans and Great Lakes. High-resolution data are available for select locations.
All C-CAP data sets are featured on the Digital Coast. Tools like the Digital Coast are important components of NOAA’s National Ocean Service’s efforts to protect coastal resources and keep communities safe from coastal hazards by providing data, tools, training, and technical assistance. Check out other products and services on Facebook or Twitter.
NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter and our other social media channels.