Industry Hot News (6930)
I’ve recently written about my journey of taking a business through to ISO22301 certification and how I achieved it with virtually no prior experience while creating a management system completely from scratch. It was quite the adventure and I naively assumed the journey would end there…
The truth is there is no end point to this journey (unless you’re a consultant) as you begin to evidence the system’s continuing improvement and maturity over time. You will have to continually work with whatever you create during these audits and keep it alive long enough to pass those surveillance visits!
At this point in the system’s development I decided it would be worthwhile to undertake some additional training to prepare myself. A close colleague and mentor of mine suggested:
“The ISO 22301 Lead Auditor training is definitely the way forward for people at your stage, it’s quickly becoming a pre-requisite for most BC jobs”
‘Shellshock’, also referred to simply as the ‘Bash bug’, is a major new security vulnerability in the Bash command shell that could have greater impacts than Heartbleed. In this article Continuity Central summarises the views of a number of information security professionals concerning this vulnerability.
Toyin Adelakun, VP of Products at Sestus:
Bash is a command interpreter (or ‘shell’) present on many Unix-based systems — such as Apple’s OS X, various flavours of Linux (such as Red Hat and Ubuntu), and other operating systems such as IBM’s AIX and HP’s HP-UX.
A command interpreter allows users to interact with the operating system, for the purposes of issuing low-level instructions and manipulating data.
On many Unix systems, users might be humans or software applications (apps).
Direct access to data and instructions potentially offers a means for attackers (malevolent users) to circumvent the protections built into a legitimate app in respect of the app’s data.
Therefore, the fact that many apps use Bash to invoke other apps or operating-system commands makes this vulnerability particularly potent.
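The mechanism Adelakun describes can be made concrete with a minimal probe for the original Shellshock flaw (CVE-2014-6271). This is an illustrative sketch, not an official scanner: it assumes a `bash` binary is on the PATH, and the environment-variable name `x` and the marker strings are arbitrary choices. A vulnerable Bash, when importing a function definition from the environment, also executes any commands appended after the function body.

```python
import shutil
import subprocess

def bash_is_vulnerable():
    """Probe the local bash for CVE-2014-6271 ('Shellshock').

    We export a function-style environment variable with a trailing
    command appended. A vulnerable bash executes the trailing
    `echo VULNERABLE` while importing the variable at startup; a
    patched bash ignores the function definition attempt.
    """
    if shutil.which("bash") is None:
        raise RuntimeError("bash not found on PATH")
    # The variable name and payload marker are arbitrary (illustrative).
    probe_env = {"x": "() { :; }; echo VULNERABLE", "PATH": "/usr/bin:/bin"}
    result = subprocess.run(
        ["bash", "-c", "echo probe-done"],
        env=probe_env,
        capture_output=True,
        text=True,
    )
    return "VULNERABLE" in result.stdout

print("vulnerable" if bash_is_vulnerable() else "patched")
```

On any system patched since late September 2014 this should report `patched`; the point of the sketch is how little an attacker needs to control — just the contents of one environment variable — to reach command execution on an unpatched system.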
Continuity Central is currently conducting a brief survey into whether there is a change in business terminology taking place: from business continuity management to organizational resilience. The survey is a follow up to an article in which Lyndon Bird, the technical director of the Business Continuity Institute, claims that such a development is under way.
The results of the survey so far show that just over half of respondents (56.76 percent) agree that a terminology change from business continuity management to organizational resilience is taking place. 33.76 percent of respondents disagree and 9.46 percent don't know.
Interestingly, when respondents were asked about their own organization, the situation was somewhat different, with only 29.73 percent of respondents stating that their organization was starting to use 'organizational resilience' rather than 'business continuity management' terminology. 68.92 percent said that their organization was still using business continuity management terminology; and 1.35 percent didn't know.
Finally the survey asked respondents whether 'organizational resilience' and 'business continuity management' are simply two names for the same process. A third (32.43 percent) think that they are two names for the same thing, while 67.57 percent believe that they are different processes. The implication is that if there is in fact a move away from business continuity management towards organizational resilience, it could have fundamental consequences for organizations.
The survey will remain open for a further week: click here to take part.
CDC has developed a dynamic modeling tool called Ebola Response that allows for estimations of projected cases over time in Liberia and Sierra Leone. The Ebola Response modeling tool has been used to construct scenarios to illustrate how control and prevention interventions can slow and eventually stop the Ebola epidemic. Importantly, it can help planners make more informed decisions about emergency response resources to help bring the outbreak under control. It allows input of data reflective of the current situation on the ground in affected countries and communities. Ebola Response is intended to help local governments and international responders generate short-term estimates of the Ebola situations in countries, districts, and villages. The tool, in the form of a Microsoft Excel spreadsheet, will be made freely available online.
Ebola Response makes case projections, but also models the impact of key elements essential to controlling the outbreak: the number of sick individuals who are effectively isolated and other actions to control for spread of infection, such as safe burial practices. Currently, many healthy individuals are contracting Ebola from non-isolated individuals with the disease. Others are contracting Ebola because traditional burial practices can involve multiple family members being exposed to the bodily fluids of the deceased body, which are highly contagious. Ebola Response modeling shows that with an increasing rate of isolation and measures to control the spread of infection, the rate of new Ebola cases declines rapidly.
CDC used the Ebola Response modeling tool to calculate Ebola cases through to mid-January in Sierra Leone and Liberia, providing an example of how this tool can be used. The report, published in CDC’s Morbidity and Mortality Weekly Report (MMWR), estimates a range of between 550,000 and 1.4 million cases by January 20th, 2015. The top of the range, 1.4 million, reflects the model’s assumption that cases are significantly underreported, by a factor of 2.5. It is essential to note that these numbers reflect a moment in time based on scientific and epidemiological data available in August, which did not account for the ramping up of the Ebola relief effort that occurred in September. Modeling suggests that extensive, immediate actions – such as those already started – can bring the epidemic to a tipping point and start a rapid decline in cases.
The most important part of the report describes the potential effect of public health actions. The news is encouraging. If we do nothing, things could become much worse. If the international community takes the actions that are planned Ebola can be brought under control. The model indicates that once a tipping point is reached, cases will decline about as rapidly as they had increased.
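The qualitative dynamic the report describes — that raising the fraction of effectively isolated cases tips the epidemic from growth into rapid decline — can be illustrated with a toy branching-process sketch. To be clear, this is not the CDC's actual Ebola Response spreadsheet: the reproduction number, the 70 percent isolation target and the generation counts below are illustrative assumptions chosen only to show the tipping-point effect.

```python
def project_cases(initial_cases, r0, isolated_fraction, generations):
    """Project new cases per disease generation (toy model).

    Effectively isolated cases are assumed to cause no onward
    infections, so the effective reproduction number is
    r0 * (1 - isolated_fraction). Cases grow when that product
    exceeds 1 and shrink when it falls below 1.
    """
    r_eff = r0 * (1 - isolated_fraction)
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * r_eff)
    return cases

# Illustrative r0 of 1.8; both runs start from 100 new cases.
no_isolation = project_cases(100, r0=1.8, isolated_fraction=0.0, generations=6)
with_isolation = project_cases(100, r0=1.8, isolated_fraction=0.7, generations=6)

print(round(no_isolation[-1]))    # cases grow without isolation
print(round(with_isolation[-1]))  # cases collapse with 70% isolation
```

With no isolation the per-generation multiplier stays at 1.8 and cases explode; isolating 70 percent of cases drives the effective reproduction number to 0.54, and the same six generations end with almost no new cases — the "rapid decline" the modeling predicts once the tipping point is crossed.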
The National Science Foundation and the Semiconductor Research Corporation have given research awards to 10 universities to develop secure, trustworthy, assured and resilient semiconductors and systems.
The awards total $4 million and support research at the circuit, architecture and system levels on new strategies, methods and tools to decrease the likelihood of unintended behavior or access; increase resistance and resilience to tampering; and improve the ability to provide authentication throughout the supply chain and in the field.
"The processes and tools used to design and manufacture semiconductors ensure that the resulting product does what it is supposed to do. However, a key question that must also be addressed is whether the product does anything else, such as behaving in ways that are unintended or malicious," said Keith Marzullo, division director of NSF's Computer and Network Systems Division. "Through this partnership with SRC, we are pleased to focus on hardware and systems security research addressing this challenge and to provide a unique opportunity to facilitate the transition of this research into practical use."
SINGAPORE — On a sunny Saturday afternoon here, children scamper about on a broad green lawn, families lay mats down for picnics, and a man maneuvers a kite in the sky.
This is no ordinary lawn; it’s three floors up on the roof of a pump house next to Singapore’s first urban reservoir, Marina Bay.
“It’s an easy place to fly kites,” says Erich Chew, 45, whose day job is running a small IT business, but whose passion is aerial photography by kite (“Compared to a drone, there are more surprises”).
“It’s quite high,” he says, “and at this level the wind is usually quite good.”
Next to the pump house, a dam known as Marina Barrage stretches across the mouth of a wide channel. On one side of the dam is salt water, leading out to sea. On the other side is the fresh-water reservoir, a shimmering blue backdrop to some of the most expensive real estate in Singapore — tall office towers, a conference center, hotel and shopping complex and the popular Gardens by the Bay botanic garden, all built after the dam went up in 2008.
The deployment of 802.11ac is accelerating, according to ABI Research. The firm released research this week predicting that 802.11ac will reach 11 percent of consumer gear – access points (APs), routers and gateways – this year. The total number of units shipped will be more than 176 million, about 32 million of which will be APs.
The firm says that D-Link and NETGEAR represented more than 20 percent of the consumer market during the first quarter of this year. Cisco and Aruba are the leading vendors on the enterprise side. The enterprise market, according to the firm, is expected to generate revenue of $8.1 billion by the end of 2019.
Network World prefaces a piece sponsored by WildPackets on the preparations organizations should take to ensure a smooth rollout of 802.11ac with the warning that the suggestions may favor the vendor. In any case, it offers advice that should be considered.
The Business Continuity Institute will be hosting a networking event following their annual general meeting on the eve of the BCI World Conference and Exhibition. The networking event, sponsored by EPC (formerly known as Emergency Planning College), will be starting at 7pm at the Hand and Flower pub in Hammersmith.
All delegates at the BCI World Conference are invited to attend what will be a sparkling night of entertainment, dancing, drinks and nibbles. The venue is directly opposite Olympia, so it is conveniently located, and it provides an informal environment in which to reacquaint yourself with BC colleagues from across the world.
Lynda Vongyer, Business Continuity Director at EPC said: "Communication is a vital element of resilience planning, implementation and recovery. It’s good to talk, so EPC are very happy to host this pre-conference evening for the BCI. A great way to relax and unwind after your travels, meet old and new colleagues. We look forward to being your hosts."
Clouds by definition are nebulous and vague. Their use in IT models and discussions goes back decades, long before the current cloud computing models. A ‘cloud’ was convenient shorthand for showing a link between a system on one side and a terminal or another system on the other. Today however, the concept has evolved. Not only do such clouds link computers, but increasingly they are the computer. Aspects of on-site IT security therefore apply to cloud computing too. For that reason alone, it’s time to firm up definitions about the type of computing that goes on in the cloud, and the IT security approaches suited to each one.
Let’s face it: For a long time, IT and legal compliance have been driving data governance. Even though the experts warned that businesses needed to own governance, that didn’t change the basic fact that many of the related tools — including master data management and data quality solutions — belonged to IT.
But a shift is happening, slowly but surely, that’s pushing data governance out of IT and into the hands of business users. One reason is that business users now see data as a key asset, according to “The Forrester Wave: Data Governance Tools, Q2 2014.”
“As organizations begin to exploit the value of data for strategy and operations, they recognize that data governance has to be about helping the business realize the value potential in data,” wrote Forrester analysts Henry Peyret and Michele Goetz. “As such, stakeholders in marketing, sales, customer service, and finance are becoming much more involved and accountable.”
At 2:49 p.m. on April 15, 2013, at the height of Boston’s annual Marathon, two bombs exploded near the finish line, killing three people and injuring more than 260 others. What followed was an extraordinary manhunt, which included a shelter-in-place request from the governor that virtually shut down the city, along with the use of social media by law enforcement as a key communications tool to keep the media and frightened citizens accurately informed about what was going on.
Within 10 minutes of the bombing, Boston Police Department (BPD) Commissioner Edward Davis told his department to start using social media and to let people know what had occurred. The importance of social media as a policing tool, in particular Twitter and Facebook, soon became apparent. Misinformation, spread by professional media outlets and social media itself, was quickly corrected by the BPD. It didn’t take long for the media to realize that the most accurate information about the bombing was coming from the official BPD Twitter account.
“The Boston Police Department was outstanding and it was so simple and effective,” said Lt. Zachary Perron, public information officer for the Palo Alto, Calif., Police Department. “They became the news source during the crisis. It was a watershed moment for law enforcement and social media.”
(MCT) — Can street flooding be crowdsourced?
Apparently so, as the Norfolk-based environmental group Wetlands Watch hones its Sea Level Rise app to enable the public to issue and receive real-time alerts about waterlogged streets.
When the app launches in a couple of weeks, Wetlands Watch Executive Director Skip Stiles says flood watchers — nicknamed "floodies" — can download it for free and join the effort to pinpoint trouble spots during a rain or storm event.
"Anyone can drop a pin and say, 'Boom, flooded,'" Stiles said.
The information will also be used by emergency managers and scientists to better understand flood patterns and prepare for them, he said.
The app comes as the Virginia Department of Emergency Management (VDEM) also unveils an interactive storm-surge map to allow users to see the maximum risk for specific locations.
Although it is good practice for organizations to have a business continuity plan, workplace flexibility is what really counts in a disaster, according to Victoria University of Wellington research.
Dr Noelle Donnelly and Dr Sarah Proctor-Thomson, researchers at the Centre for Labour, Employment and Work at Victoria University of Wellington’s School of Management, were commissioned by the New Zealand Public Service Association (PSA) and Inland Revenue to research the experiences of employees who worked from home following the February 2011 earthquakes in Christchurch.
This is the first study of its kind examining the experiences of flexible work arrangements in a post-disaster environment.
At the time of the February earthquake, Inland Revenue had just one central office, with over 800 staff members, in the centre of town.
“When the earthquake hit Christchurch at 12.51pm on Tuesday 22 February 2011, Inland Revenue immediately lost access to its main workplace in the CBD,” says Dr Donnelly. “In response, available senior managers met and began the work of assigning new roles and tasks to staff. One of their immediate challenges was making contact with their people to ensure that they were all safe.”
The Cloud Security Alliance (CSA) has released the results of a new survey that found a significant difference between the number of cloud-based applications IT and security professionals believe to be running in their environments, and the number reported by cloud application vendors.
The survey entitled ‘Cloud Usage: Risks and Opportunities’ included responses from IT and security professionals from around the globe representing a variety of industry verticals and enterprise sizes. The aim was to gain insight and understand the perceptions of how enterprises are using cloud apps, what kind of data is moving to and through those apps, and what that means in terms of risks.
Among other things, the survey found that 54 percent of IT and security professionals said they have 10 or fewer cloud-based applications running in their organization, with 87 percent indicating that they had 50 or fewer applications running in the cloud (with a weighted average of 23 apps per organization). These estimates are far lower than commonly reported by vendors and research reports, which count more than 500 cloud apps present, on average, per enterprise.
Software developers from around the world have been recognized at the UN Climate Summit for their ingenuity in devising life-saving apps for use in reducing the impact of extreme weather events on cities and coastal communities.
Entries to the Esri Global Disaster Resilience App Challenge included apps that allow communities to measure the impact of permafrost melt and storm water on vital infrastructure, apps that provide access to sea-level rise and landslide forecasts, and an app that allows disaster-affected citizens to check evacuation routes, shelter locations, and much more.
Esri, a leader in geographic information system technology and mapping software, awarded a $10,000 prize to each of the winners: one for the best professional/scientific app and one for the best citizen/public-facing app. The winning apps will be made available to the 2,200 cities, towns and municipalities in the global Making Cities Resilient Campaign of the UN Office for Disaster Risk Reduction (UNISDR).
Crowdsourcing inevitably raises questions about data quality, but a number of companies and experts believe crowdsourcing can be used to improve data quality.
GigaOm recently profiled one of these companies, CrowdFlower, after it raised $12.5 million in its Series C round of venture capital — just under half of the $28 million it’s raised since its launch four years ago.
CrowdFlower doesn’t so much crowdsource its own work as rely on the crowd to do work for its clients. For instance, Unilever hired CrowdFlower to extract sentiment, location, sex and other information from tweets, GigaOm reports, and eBay used the company to clean up its product taxonomies.
As the Internet of Things (IoT) becomes a reality, the volume of data that will be generated by the multitude of connected devices, machines, and processes — in the consumer, business, and industrial worlds — is expected to be massive. In short, the more devices and machines that get connected, the more data that is going to be generated.
Achieving some kind of business value from this massive data reservoir will require the use of big data storage and analysis technologies that can scale to meet the constantly increasing demands placed on organizations. These include:
- NoSQL file systems
- NoSQL databases
- High-performance relational analytic and in-memory database appliances
- Hybrid relational databases with embedded MapReduce
- Streaming analytics systems
All of these technologies provide varying capabilities for managing and analyzing sensor and other data associated with IoT applications and services. That said, a key point to keep in mind is that none of them on its own currently offers an all-encompassing solution that can serve every IoT application requirement. Consequently, I recommend you consider these technologies as complementary.
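The complementary split typically looks like this: a low-latency streaming path computes metrics over recent readings, while a scalable store archives the raw data for later batch analysis. The sketch below illustrates the pattern with plain-Python stand-ins — the dict-of-lists mimics a NoSQL document store and the fixed-length deque mimics a streaming engine's sliding window; the device ID and readings are invented for the example.

```python
from collections import defaultdict, deque

# Stand-ins for real systems (illustrative only):
store = defaultdict(list)   # mimics a NoSQL store keyed by device ID
window = deque(maxlen=5)    # mimics a streaming engine's sliding window

def ingest(device_id, reading):
    """Archive the raw reading and return a rolling-average metric.

    Every reading takes two paths: the full archive (served by NoSQL
    file systems/databases in practice) and the bounded window that a
    streaming analytics system would use for near-real-time alerting.
    """
    store[device_id].append(reading)   # batch/analytic path
    window.append(reading)             # low-latency streaming path
    return sum(window) / len(window)

# A hypothetical temperature sensor trending upward:
for temp in [20.1, 20.3, 25.9, 26.2, 26.4]:
    rolling = ingest("sensor-42", temp)

print(len(store["sensor-42"]))   # all 5 raw readings archived
print(round(rolling, 2))         # rolling average over the window
```

Swapping the stand-ins for real components — a message broker feeding a stream processor on one side and a document store on the other — preserves the same shape, which is why treating the listed technologies as complementary rather than competing is the practical approach.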
Almost two years after it tore a deadly and costly path through the Northeast, Superstorm Sandy still stands as one of the most important events in the history of disaster preparedness. The desire to be more resilient in the face of these big and increasing storms kicked planning efforts by states and localities across the country into high gear. But it takes money to take action. And as governments are finding out, it’s hard to find money in today’s tight budgets.
If one of the biggest stumbling blocks to increasing a community’s sustainability and resilience is financing, then New Jersey’s in good shape. This summer, the Garden State created an energy resilience bank to “fund projects that will help prevent a reoccurrence of the energy disruptions and build energy resilience,” according to the state’s proposal for the bank. The idea essentially is to set up a dedicated source of funding for projects that will provide clean, more reliable energy at critical infrastructure such as water and wastewater treatment plants, hospitals, shelters, emergency response centers, schools, and transit systems.
Through revolving loans and grants, the bank will support projects that include installing microgrids, distributed generation (where electricity is generated from multiple small energy sources such as fuel cells or solar panels), smart grid technology and energy storage. Initially, the bank will be funded using $200 million from New Jersey’s Community Development Block Grant-Disaster Recovery allocation from the U.S. Department of Housing and Urban Development (HUD). When that runs out, says Greg Reinert, director of communications for the New Jersey Board of Public Utilities, the state will allocate funds. The ultimate goal, though, is to bring in private capital.
Yet another set of ominous projections about the Ebola epidemic in West Africa was released Tuesday, in a report from the Centers for Disease Control and Prevention that gave worst- and best-case estimates for Liberia and Sierra Leone based on computer modeling.
In the worst-case scenario, the two countries could have a total of 21,000 cases of Ebola by Sept. 30 and 1.4 million cases by Jan. 20 if the disease keeps spreading without effective methods to contain it. These figures take into account the fact that many cases go undetected, and estimate that there are actually 2.5 times as many as reported.
In the best-case model, the epidemic in both countries would be “almost ended” by Jan. 20, the report said. Success would require conducting safe funerals at which no one touches the bodies, and treating 70 percent of patients in settings that reduce the risk of transmission. The report said the proportion of patients now in such settings was about 18 percent in Liberia and 40 percent in Sierra Leone.
SAN FRANCISCO – A staggering 43% of companies have experienced a data breach in the past year, an annual study on data breach preparedness finds.
The report, released Wednesday, was conducted by the Ponemon Institute, which does independent research on privacy, data protection and information security policy.
That's up 10% from the year before.
The absolute size of the breaches is increasing, said Michael Bruemmer, vice president of the credit information company Experian's data breach resolution group, which sponsored the report.
"Particularly beginning with last quarter in 2013, and now with all the retail breaches this year, the size had gone exponentially up," Bruemmer said.
He cited one large international breach few Americans have even heard about. In January, 40% of South Koreans—a total of 20 million people—had their personal data stolen and credit cards compromised.
Research conducted by Databarracks has revealed a significant disparity between organizations’ attitudes and approaches to business continuity and disaster recovery. The findings indicate that while medium and large organizations are confidently implementing business continuity plans, small organizations are putting themselves at risk by failing to follow suit.
The findings are part of Databarracks’ fifth annual Data Health Check report, which surveys over 400 IT professionals in the UK on the changing ways in which technology is used by businesses today.
The results revealed that only 30 percent of small organizations had a business continuity plan in place, compared with 54 percent of medium and 73 percent of large businesses. Perhaps even more concerning is that when asked if the organization intended to implement a BCP in the next 12 months, over 40 percent of small organizations had no intention to do so.
Other key findings included:
- Hardware failure (21 percent), software failure (19 percent) and human error (18 percent) were reported as the top causes of data loss;
- Large organizations are more than twice as likely to have tested their disaster recovery plans in the last year compared to small organizations;
- ‘Lack of time’ was deemed to be the biggest factor for all organizations not testing their disaster recovery plans (35 percent); this was closely followed by ‘cost’ (18 percent) and ‘lack of skilled staff to carry out testing’ (18 percent).
IBM has announced the opening of its new Cloud Resiliency Center in Research Triangle Park (RTP), North Carolina. The new facility provides state-of-the-art business continuity capabilities in the cloud to protect companies from potential costly disruptions.
IBM’s new Resiliency Center integrates cloud and traditional disaster recovery capabilities with innovative physical security features. With cloud resiliency services, the recovery time of 24 to 48 hours that was once deemed the industry standard has shrunk dramatically to a matter of minutes.
Open 24 hours a day, seven days a week, the Resiliency Center team will monitor developing disaster events and then mobilize as needed to ensure that the infrastructure for all customers is configured to handle the latest threats to keep data, applications, people and transactions secure.
IBM has also announced that it will be opening two new cloud-based resiliency centers, in Mumbai, India, and Izmir, Turkey.
Technology helps organisations to get more done in less time. However, technology alone cannot guarantee business continuity. Solid business processes also contribute to resilience, but there’s another kind of ‘glue’ that can make the difference between enterprises that stand or fall when the going gets tough. It’s organisational culture, or “the way we do things round here”. This is an element that business continuity managers must factor into their planning, for at least two reasons. Firstly, as we’ve just said, because it is important – in fact, essential – to BC. Secondly, because someone whose support the BC manager must secure is also likely to make organisational culture a top priority.
I was hardly surprised to see Home Depot-related emails showing up in my inbox over the weekend. After all, it may be the largest breach ever, with at least 56 million credit cards compromised.
It also now appears that Home Depot is the new poster child for what happens to a company, both in terms of data loss and of its reputation, when it ignores the warnings that it is at a high threat level.
According to a number of reports, Home Depot management had been warned for years – years – that its network was vulnerable to a serious cybersecurity attack. But it appears that upper management refused to take these warnings seriously. The New York Times reported:
In recent years, Home Depot relied on outdated software to protect its network and scanned systems that handled customer information irregularly, those people said. Some members of its security team left as managers dismissed their concerns. Others wondered how Home Depot met industry standards for protecting customer data. One went so far as to warn friends to use cash, rather than credit cards, at the company’s stores.
It’s referred to as the Big One, the cataclysmic earthquake that will devastate Los Angeles when the ground around the San Andreas Fault gives a dramatic heave.
Seismologists agree that it’s a matter of when, not if, it happens, and that the resulting damage will be incalculable in the city of more than 4 million residents and 400,000 businesses.
Emergency response will have to come on multiple fronts at once. Beyond the immediate imperative of saving lives, the emergency community will need to coordinate activities in the realms of transportation, health, finances and diverse other sectors to stabilize the city. Water will be a particular concern in an area that relies largely on outside sources for its supply.
(MCT) — Nobody knew what to call it in 1859, when the most dramatic solar storm on record shocked telegraph operators, set their paper ablaze and lit up the horizon with brilliant auroras.
Sky watchers now know the sun can belch out dozens of solar flares and related eruptions every year, including one that put electricity grid monitors on alert this month.
Bursts known as coronal mass ejections can especially destabilize the power grid by causing vibrations in the Earth's magnetic field, as NASA explains. Those vibrations cause invisible electric currents that can overwhelm circuitry and lead to prolonged shutdowns.
Solar researchers say their challenge is figuring out which bursts threaten disruption on the scale of the so-called Carrington Event, which bedeviled telegraph operators and crippled communication systems in 1859.
(MCT) — With canned peaches and tuna, marshmallows and Spam, professional chefs competed Saturday to show Houstonians that they can eat more than just peanut butter and jelly during a natural disaster.
Chef Kate McLean of Tony's won the 2nd annual Ready Houston Preparedness Kit Chef's Challenge at Market Square with a dish judge Albert Nurick said he "could see on the menu exactly as it is."
"The creativity is off the hook on this one," said Nurick, writer for the H-Town Chow Down blog.
On a fold-out table with a camp stove and average household cookware, McLean created a play on fish and chips. She and her competitors — David Grossman of Fusion Taco, Jonathan Jones of El Big Bad, Travis Lenig of Liberty Kitchen & Oysterette and Kevin Naderi of Roost — had 25 minutes to cook after lifting a tablecloth off a surprise stack of non-perishable items.
Why do we perform business continuity management (BCM)? Is it because we want to make sure that our organisations are able to respond to any future crisis? Probably yes! Is it because it’s just plain common sense that you would want your organisation to be prepared for any future eventuality? That would seem the sensible thing to do!
In many cases however, it is also because there is a legal obligation to do so. Many industries are tightly regulated, some more heavily than others, and therefore must have plans in place to deal with certain scenarios. There is also variation on an international scale with some countries having rules in place that others don’t. Legislation, regulations, standards and guidelines are being created and revised all the time and it is sometimes difficult to understand which ones are applicable to you. This is especially the case when you operate internationally.
There is a solution however. The Business Continuity Institute has published what it believes to be the most comprehensive list of legislation, regulations, standards and guidelines in the field of business continuity management. This list was put together based upon information provided by the members of the Institute from all across the world. Some of the items may only be indirectly related to BCM, and should not be interpreted as specifically designed for the industry, but rather they contain sections that could be useful to a BCM practitioner.
The ‘BCM Legislations, Regulations, Standards and Good Practice’ document breaks the list down by country and for each entry provides a brief summary of what the regulation entails, which industries it applies to, what the legal status of it is, who has authority for it and, finally, it provides a link to the full document itself.
The BCI has done its best to check the validity of these details but takes no responsibility for their accuracy and currency at any particular time or in any particular circumstances. To download a copy of the document, click here.
Nearly all computing devices, even the processor itself, are composed of discrete elements that must be brought under a common architecture in order to produce productive, valuable outcomes. This is why we build operating systems for the PC, the server, the storage farm and even the network; otherwise, we would just have a collection of blinking boxes.
To date, this has sufficed because the data environment did not extend beyond the data center walls, and the needs of each type of device were unique enough that separate but interconnected operating systems afforded the greatest degree of flexibility and functionality.
Now, however, with the data center itself emerging as one component in a larger, distributed data ecosystem, some are starting to wonder if it should be treated like a giant, multi-user computer, with a single operating system to bind all its functions together.
EATONTOWN, N.J.– When an incident reaches the point that it’s unsafe for people to remain in the immediate area, getting everyone evacuated as safely and quickly as possible becomes crucial. One of the most important parts of an evacuation – if not the most important – is figuring out how to get out of the affected area.
Coastal Evacuation Routes exist in states that border the Atlantic Ocean and Gulf of Mexico. They are often denoted by signs featuring some combination of blue and white. In New Jersey, they are white signs with a blue circle on them, filled with white text. Because of New Jersey’s small size and its proximity to water on three sides, many of the state’s major highways also serve as coastal evacuation routes. Most of New Jersey’s routes begin at the shore (south and east) and move inland, mainly westbound.
The Garden State Parkway in Cape May County, for example, is the main evacuation route out of the county to the north, along with Routes 47 and 50. Also in Cape May and Atlantic counties, the barrier islands have multiple access points connecting the towns on those islands with the Parkway and other roads headed inland.
The Atlantic City Expressway is the main east-west route through the southern part of New Jersey. When Hurricane Sandy arrived in New Jersey, state officials reversed traffic on the Atlantic City Expressway, forcing all traffic on the highway to go west, away from the coast.
Unlike the barrier islands in Cape May and Atlantic counties, there is only one way on and off Long Beach Island – Route 72. Route 37 serves the southern half of the Barnegat Peninsula in Ocean County, and Route 35 provides access to inland roads in the northern half, including Routes 88 and 34, as well as Routes 36 and (indirectly) 18 in Monmouth County.
Getting to the main routes can sometimes involve traveling through residential areas and on lower-capacity streets and roads that can get crowded. www.ready.gov recommends keeping your car’s gas tank at least half full in case you have to leave immediately.
Once an evacuation order has been issued, leave as soon as possible to avoid traffic congestion and ensure access to routes. Keep a battery-powered radio on hand to listen for emergency broadcasts and road condition updates. During Sandy, not only was contraflow lane reversal (altering traffic patterns on a controlled-access highway so that all vehicles travel in the same direction) implemented on the Atlantic City Expressway, but the southbound Garden State Parkway was closed to traffic.
During evacuations, people should follow instructions from local authorities on which roads to take to get to the main evacuation routes. Don’t take shortcuts, as they may be blocked. Know more than one nearby evacuation route in case the closest or most convenient one is blocked or otherwise impassable. Don’t drive into potentially hazardous areas, such as over or near other bodies of water during a hurricane or other flood event. Barrier island residents should take the quickest possible route to the mainland.
Emergency evacuations are stressful moments. But knowing where you’re going and how to get there can help make the whole experience a little easier to handle.
Evacuation routes for the state of New Jersey are posted on the New Jersey Office of Emergency Management website. Go to http://ready.nj.gov/plan/evacuation-routes.html to find the route for your region.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at www.twitter.com/FEMASandy, www.twitter.com/fema, www.facebook.com/FEMASandy, www.facebook.com/fema, www.fema.gov/blog, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.
On the 5th and 6th November, the Business Continuity Institute will be hosting its annual BCI World Conference and Exhibition at the Olympia in London, UK. Join us in our 20th year by participating in this annual event that brings together the global business continuity community.
This is a unique networking and learning experience for anyone working or interested in business continuity, risk management, emergency management, crisis and incident management, security, disaster recovery... anyone with an interest in building organisational resilience.
The programme has now been released and it is packed with an abundance of fascinating speakers and topics. Keynote speeches will be given by world-famous author and psychologist Professor Steve Peters, who explains how your inner chimp may be holding you back; Martin Fenlon MBCI, from the Houses of Parliament, who will tell us how they prepare for the 5th November; and the British Standards Institution, which will announce the new standard BS 65000.
The conference is split into three streams. In the Listen Stream you can hear practitioners share lessons learned, in the Learn Stream you will experience world class training based on the Good Practice Guidelines and in the Lead Stream there is an interactive thought leadership discussion and debate.
In addition to all of this, the BCI World Conference and Exhibition includes:
- Pre-conference training with expert instructors
- AGM – the must attend event for all BCI members
- Welcome networking event – join us for a night of live music, nibbles and drinks
- Live fully interactive game show
- Exhibition with a variety of attractions including demonstrations and product showcasing
- Guided tour with an experienced practitioner around the event for newcomers
- BCI clinic – visit the BCI stand with your BC related questions
- Exhibition Floor Complimentary Seminar Programme and Vendor Showcasing
- Gala dinner and global awards at the landmark Science Museum in London
Don't miss out on this great opportunity to learn and network with your colleagues from across the world. Book your place today by clicking here.
Actions that property owning organizations can take to better protect facilities, tenants and employees from civil unrest
Article provided by Preparis.
The recent killing of 18-year-old Michael Brown in Ferguson, MO, sparked a national response so powerful that frequent protests ignited throughout the United States, bringing greater awareness to injustices that are still prevalent in our modern society. These protests and demonstrations, when performed peacefully, can bring together a community in ways that few other actions can; however, as the events surrounding Ferguson show, protests have a way of spiraling out of control, causing catastrophic damage and loss of life.
From a property management perspective, the safety of your tenants and the protection of your properties depend on understanding the cultural dynamics of the communities adjacent to your business locations; staying abreast of events involving political discord that could reach those locations; preparing for the worst-case scenario (civil disturbances involving your properties); and responding properly when civil unrest occurs. This article offers a guide to help you begin working toward these goals should civil unrest hit closer to home.
Christian Toon makes the case for a blended approach to backup and storage plans.
Data backup and storage is the IT equivalent of tidying up at the end of the day. Putting all your information away neatly so you know it is accounted for, secure and easy to find again. An unlikely topic, you would imagine, for strong opinions and lively debate. Yet that is exactly what it has become and for good reason.
Every day more data is handled by more employees who are spread across multiple locations and use a variety of devices. This increases the vulnerability of information. The solution for many organizations is to implement a centrally controlled data back-up and storage plan from the range of options available. And this is where the debate can become heated. In the red corner are the cloud converts, those who are quick to point out that ultimately all hardware-based back-ups will fail, and that nothing offers the same storage capacity, flexibility and ease of access. Over in the blue corner, we find those who approach the cloud with more caution. They can point to a growing evidence base, such as a recent Symantec study showing that 68 percent of companies have been unable to recover data stored in the cloud, and to the fact that Forrester urges companies to back up all cloud-stored data.
The reality of the workplace is complex. IT departments need to prioritise limited budgets and work with legacy IT infrastructure as they build confidence in the security and benefits of an established cloud provider. In many cases this leads to a hybrid data back-up and storage system that includes onsite servers for the most active, business-critical or confidential information, and securely stored offsite tape and disc, as well as the cloud, for less essential or dormant data. The result is tidy, cost-effectively managed and protected information and an IT team released to add more value elsewhere. At least, that is, until employees start asking for data they have lost or can’t access. The effort required to meet these requests has caught many IT professionals off-guard.
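The hybrid approach described above amounts to a tiering policy: route each data set to onsite storage, the cloud, or offsite tape based on how active and how sensitive it is. A minimal sketch of such a policy follows; the class, function names and thresholds are illustrative assumptions, not any vendor's product or API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical tiers for a hybrid back-up plan: onsite servers for
# active or confidential data, the cloud for less essential data,
# and offsite tape for dormant records.
ONSITE, CLOUD, OFFSITE_TAPE = "onsite", "cloud", "offsite_tape"

@dataclass
class DataSet:
    name: str
    confidential: bool        # business-critical or sensitive?
    last_accessed: datetime   # drives the active-vs-dormant split

def assign_tier(ds: DataSet, now: datetime) -> str:
    """Pick a back-up tier from a simple activity/criticality rule."""
    age = now - ds.last_accessed
    if ds.confidential or age < timedelta(days=30):
        return ONSITE         # most active or confidential information
    if age < timedelta(days=365):
        return CLOUD          # still needed occasionally
    return OFFSITE_TAPE       # dormant data on the cheapest medium

now = datetime(2014, 9, 1)
print(assign_tier(DataSet("payroll", True, now), now))
print(assign_tier(DataSet("old-archive", False, now - timedelta(days=800)), now))
```

In practice the 30- and 365-day cutoffs would come from an organization's own retention and recovery-time requirements rather than fixed constants.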
After many years with the aim of ‘promoting the art and science of business continuity’ around the world, the Business Continuity Institute (BCI) has now stated that its purpose is ‘to promote a more resilient world.’
This change of focus is supported by a new vision statement. Previously the BCI’s vision statement was: “To be the Institute of choice for business continuity professionals.” This has now been changed to: “To be the Professional Body of choice for resilience professionals.”
To support the above aims the Institute has set out three clear goals:
- To deliver a consistent “BCI experience” for members to develop and enhance their qualifications and expertise;
- To strengthen BCI’s role as “the global thought leader” for continuity and resilience;
- To increase BCI’s global influence within both mature and emerging markets which will be reflected by a growth in membership.
US Department of Housing and Urban Development (HUD) Secretary Julián Castro has launched a $1 billion National Disaster Resilience Competition. He was joined by Dr. Judith Rodin, President of The Rockefeller Foundation, in announcing that eligible states and localities can now begin applying for funds. Representatives from eligible communities will have the opportunity to attend Rockefeller-supported Resilience Academies across the country to strengthen their funding proposals.
"The National Disaster Resilience Competition is going to help communities that have been devastated by natural disasters build back stronger and better prepared for the future," said Secretary Julián Castro. "This competition will help spur innovation, creatively distribute limited federal resources, and help communities across the country cope with the reality of severe weather that is being made worse by climate change."
"The Rockefeller Foundation is committed to spurring innovation in resilience planning and design so that communities can build better, more resilient futures, particularly for their most vulnerable citizens" said Dr. Judith Rodin, President of The Rockefeller Foundation. "Building resilience will minimize the impact of the next shock, while also improving life in communities day-to-day, allowing them to yield a resilience dividend. Everyone wins."
The National Disaster Resilience Competition makes $1 billion available to communities that have been struck by natural disasters in recent years. The competition promotes risk assessment and planning and will fund the implementation of innovative resilience projects to better prepare communities for future storms and other extreme events. Funding for the competition is from the Community Development Block Grant disaster recovery (CDBG-DR) appropriation provided by the Disaster Relief Appropriations Act, 2013 (PL 113-2).
All successful applicants will need to tie their proposals to the eligible disaster from which they are recovering.
Given the complexity of the challenge HUD will partner with The Rockefeller Foundation to help communities better understand the innovation, broad commitment, and multi-faceted approach that is required to build toward a more resilient future. As they did in HUD's Rebuild by Design competition, The Rockefeller Foundation will provide targeted technical assistance to eligible communities and support a stakeholder-driven process, informed by the best available data, to identify recovery needs and innovative solutions.
There are 67 eligible applicants for the $1 billion National Disaster Resilience Competition. All states with counties that experienced a Presidentially Declared Major Disaster in 2011, 2012 or 2013 are eligible to submit applications that address unmet needs as well as vulnerabilities to future extreme events, stresses, threats, hazards, or other shocks in areas that were most impacted and distressed as a result of the effects of the Qualified Disaster. This includes 48 of 50 states plus Puerto Rico and Washington, DC. In addition, 17 local governments that have received funding under PL 113-2 are also eligible.
Whether you already have one or are contemplating acquiring one, having a Standby Power Generator is not a ‘set it and forget it’ responsibility.
As a Business Continuity professional you should not rely on that generator to mitigate electrical disruption risks unless you ask – and get satisfactory answers to – four questions about the most important aspects of owning and using a backup generator:
The Weather Company, best known for The Weather Channel and weather.com, is getting into the emergency alert business — a natural fit given the company's focus and market saturation.
Using its large-scale distribution and weather expertise, the company is, in partnership with local officials, building a localized alerting platform for state, local and private authorities to manage and distribute emergency alerts via The Weather Channel properties and existing local distribution points.
“The U.S. offers its citizens some of the best emergency alerting capabilities in the world,” said Bryson Koehler, executive vice president and CIO of The Weather Company, noting that the National Weather Service and FEMA ensure national coverage through alerts and the Integrated Public Alert and Warning System (IPAWS) system. "But most communities currently do not have a local alerting system to integrate with IPAWS. As a result, many alerts cover large areas or do not provide the types of local details that can best serve the public.”
Are concerns about personal data a sign of privilege?
Daniel Castro argues that they are, especially as the Internet of Things (IoT) comes online and data constantly streams from high-tech, high-cost gadgets.
Poor people don’t own Fitbits. Rather inconveniently for data collection, they are also born, grow up and live in low-tech environments. In our data-driven society, the net effect is that these people disappear from the data, writes Castro in his paper, “The Rise of Data Poverty in America.” Castro is the director of the Center for Data Innovation, the think-tank that published the paper. He’s also a senior analyst at the Information Technology and Innovation Foundation — qualifications that show in his thought-provoking, well-researched paper.
More than 500 Red Cross volunteers are helping people affected by Hurricane Odile in the Mexican state of Baja California Sur. The volunteers—120 of whom are paramedics—are providing basic medical check-ups and delivering food to people housed in shelters. The Red Cross has sent 2,000 food parcels to the city of Los Cabos. In addition, volunteers are carrying out damage assessments in Baja California Sur in order to determine the most urgent needs.
The storm has left roughly 82% of the population in Los Cabos and La Paz without electrical power, damaged roadways, and caused ports to close. People affected by the storm have evacuated to 164 shelters in Baja California Sur.
Mexican Red Cross volunteers participating in the response are specialists in collapsed structures, damage evaluation, pre-hospital care, and logistics support in shelters and collection centres. The Mexican Red Cross is working closely with federal authorities, Civil Protection, the Governor's Secretariat, and the Mexican Marines and Army to deliver aid to the people affected as quickly as possible.
Another storm—Hurricane Polo—is threatening the Mexican state of Guerrero, where at least 120 Mexican Red Cross volunteers are prepositioned to act if needed.
(MCT) — Among the many things the Bay Area learned from the recent shaker near Napa is that the University of California, Berkeley’s earthquake warning system does indeed work for the handful of people who receive its messages, but most folks find out about a tremor only after it knocks them out of bed.
Silicon Valley has made apps that tell people when their Uber ride is approaching, their air conditioning has broken or a thunderstorm is brewing. Yet despite being home to the most devastating earthquakes in the country, the region does not have a high-tech earthquake alert system for the public.
But since last month’s temblor, more tech companies are trying to solve that problem. A handful of startups are developing apps that would quickly broadcast warnings of upcoming quakes to users on their smartphones, tablets or other gadgets. Already, the much-joked-about messaging app Yo has rolled out “Earthquake Yo” to hundreds of users.
What is the scarcest IT resource today? Processor power, main memory and disk space all seem to grow unabated. Network bandwidth, on the other hand, is still comparatively expensive. Consequently, enterprises tend to have less of it, which in turn leaves them more exposed to possible outages. Luckily, other technology means that bandwidth can be made to do more, even if it’s not practical to have more of it. Routing voice and data over the same links is a prime example. This simplifies recovery and can also minimize outages. What’s missing from the equation is a simple explanation of the terms involved. Here are a few to help you mix and match for the configuration that suits you.
After reading several blogs and articles this week, I’ve learned that many small to midsize businesses (SMBs) tend to learn as they go—especially when it comes to technology. And often, those lessons can be costly.
In a LinkedIn blog post written by Boost IT CEO Russell Shulin, I found a list of six major technology issues often overlooked by SMBs that can bust budgets and deeply affect business. Shulin explains that each is a lesson he has experienced himself or seen others experience. Tips SMBs should consider include:
On the morning of Nov. 16, 2013, rural Ouray County, Colo., emergency responders were called to help miners in a nearby mine. Two were unconscious and 20 were suffering from oxygen deficiency. The two miners tragically died of carbon monoxide poisoning, but a swift response got the other 20 to safety in a multiagency and regional effort.
The timing was uncanny. The coordinated response that ensued was practiced in a Mass Casualty Incident Command System (MCICS) training just the day prior to the incident, when those same responders were educated using an active shooter model. The training was applied to the mine incident in a structure that can be generalized to almost any mass casualty incident.
At the Revenue-Virginius mine, the county established a transportation unit leader and group for the first time to accurately track who was coming and going during the emergency.
In total, 30 responders navigated snowy, narrow terrain to reach miners exposed to high levels of toxic carbon monoxide gases. The transportation leader and group helped especially to track and triage the miners and ensure quick treatment at three regional hospitals.
WINNIPEG, MANITOBA, Canada – After decades of working undercover for the Royal Canadian Mounted Police, the U.S. Drug Enforcement Administration and U.S. Customs Service, crime and risk expert Chris Mathers knows where companies are vulnerable and what it takes to protect them.
“In a world where popular culture tells us that the ends justify the means, crime is all about perception,” he said in a keynote address at the 2014 RIMS Canada Conference. “Young people are bombarded with it all the time, but we are in business, too. So the question is, how vulnerable is your business?”
Mathers, who joined the forensic division of KPMG and was later named president of corporate intelligence, shared his insight into how companies can best guard against “the business of crime, and crime in business.”
(MCT) — The San Antonio River Authority has announced the first nationwide implementation of software to help emergency responders react to dangerous floods.
SARA and the San Antonio Fire Department will hold a news conference Wednesday to discuss the FloodWorks system. It was developed in the United Kingdom and is operational via a “user-friendly, interactive website” at the San Antonio Emergency Operations Center at Brooks City-Base, officials said.
“We're doing the technology development; their role is the response,” Russell Persyn, SARA's watershed engineering manager, said of the joint project with the fire department.
The system, installed late last year and run through tests in the spring, uses historical flood data and weather forecasts to plan a day before a potential flood, with real-time radar updates from the National Weather Service helping responders track developments during a storm.
Reports are published almost daily about the gender pay gap in the UK. In 2013, women earned 19.7 percent less than men doing the same job. While in professional occupations, the pay gap is smaller (around 9 percent), at a senior level, the gender pay gap has not really decreased since 2005. Senior women earn 20.2 percent less than men in a similar role.
When examining the salaries for women in the resilience and governance sectors, recruitment agency BeecherMadden expected to see a similar trend.
However, surprisingly, salaries for women in resilience and governance roles buck the trend of women being paid less. Comparing appointments in the past year, women have been paid up to 30 percent more. This is for roles where men with comparable experience have been appointed at a similar time, entering similar organizations.
BeecherMadden also found several examples of women who, despite having less experience in their role than male counterparts, were earning around 10 percent more for a similar role. The difference is most notable for those going into their second jobs; candidates with 3-5 years’ experience are the most in demand and show the biggest pay difference. At senior levels, the experience gap closes when looking at comparable commercial experience.
To address critical gaps in knowledge about data center fire prevention, the US Fire Protection Research Foundation, an affiliate of the National Fire Protection Association (NFPA), has announced the release of a new report, ‘Validation of Modeling Tools for Detection Design in High Air Flow Environments,’ as the result of a project in partnership with Hughes Associates and FM Global.
The report validates a model that provides reliable analysis of smoke detection in data centers and guidance to the technical committees for NFPA 75, Fire Protection of Informational Technology Equipment, and NFPA 76, Fire Protection of Telecommunications Facilities.
Fire prevention and detection is critical to safeguarding data centers, which hold critical business and organizational information around the world. Globally, spending on these facilities will reach an estimated $149 billion this year, according to Gartner Group.
In the past few years, the equipment in data centers has changed significantly, which has placed increased demands on HVAC systems. As a result, airflow containment solutions are being introduced to increase energy efficiency. From a fire safety design perspective, the use of airflow containment creates a high airflow environment that dilutes smoke, which poses challenges for adequate smoke detection, and affects the dispersion of fire suppression agents.
“While data centers have become increasingly important in housing digital information, sufficient smoke detection is a challenge with data center cooling systems,” says Amanda Kimball, a research project manager for the Foundation. “This research included a series of simulations with various smoke detector spacing, types of fires, and air flows which gave us important guidance on smoke detection placement and installation.”
(MCT) — Cities across California are struggling with how to convince property owners to retrofit buildings at risk of collapse during a major earthquake.
San Francisco this week is using an unusual tactic: trying to publicly shame building owners into shoring up their structures to better withstand shaking.
The city will slap large signs — in multiple languages, with red letters and a drawing of a destroyed building — on hundreds of apartment complexes that violate San Francisco's seismic safety laws.
No California city has gone so far to inform the public about potentially dangerous buildings and pressure property owners to make fixes.
Los Angeles is considering a similar approach. Mayor Eric Garcetti has proposed what would be the nation's first letter grading system to alert the public about the seismic safety of buildings. He has also said he wants to require owners to retrofit buildings that are at risk but is still working out the details of his plan.
It seems that small to midsize businesses (SMBs) around the world should begin beefing up their cybersecurity initiatives. In August, Cybertinel, an Israeli security company, identified the enigmatic Harkonnen Trojan on the network of one of its German clients; the attackers had taken full advantage of the often lax or absent network security in place at many SMBs.
According to TechWorld, around 300 SMBs in Europe may have been used as “fronts” for stealing data for as long as a decade. TechWorld’s John E. Dunn reported:
From the details released to the press, this looks like a rare example of a professional hacking-for-hire attack of long standing that possibly also targeted firms beyond the known target list, including in the UK.
As if crisis and emergency communicators don’t have enough to worry about. In today’s instant-news world, where the care journalists once took to get it right has faded, it’s becoming increasingly common for fake spokespersons to prank the media.
Imagine the nightmare: your organization is in the middle of a major news crisis. While you are working hard to prepare your authorized spokesperson to go live on national or regional TV, your TV monitor shows a live report featuring someone posing as a spokesperson for your organization.
Think it won’t happen?
Nags Head, N.C., barely skims the ocean surface, a town of about 3,000 people built on sand just 10 feet above sea level. Over the decades, hurricanes have cut a rough path here, taking down homes, roads and piers.
As city planners look toward the inevitable next big blow, they’re thinking about infrastructure. What happens when emergency phone lines no longer function or when the data center goes down? To meet that challenge, Nags Head is teaming up with other municipalities to create inter-city backup arrangements.
“[If] we should have a storm and the area has to be evacuated, essential personnel generally would be required to stay here. But [if] we have a very severe storm, essential personnel would be evacuated, and this arrangement gives us a place to set up shop,” said Allen Massey, IT coordinator of Nags Head.
The arrangement he refers to involves Cary, a city of 146,000 people that’s much farther inland. For call services in particular, Cary is Nags Head’s fallback position.
(MCT) TOKYO — In a nondescript government building near the Imperial Palace, a team of Japanese seismologists stands ready to predict an earthquake.
All day, every day, they monitor data from dozens of tiltmeters, strain gauges and other instruments deployed along a stretch of coastline southwest of Tokyo. The region, called Tokai, was last rocked by a major quake in 1854. Scientists fear it’s overdue for a repeat.
Since 1979, federal scientists have been watching for ground motion that might herald an impending rupture on the fault zone. If their instruments ever detect an ominous bulge, Japanese law requires the prime minister to issue warnings that will shut down schools, hospitals, factories, roads and trains across one of the country’s most populous areas.
The Pacific Northwest is subject to the same type of seismic disaster that Japan hopes to predict, but neither the U.S. nor any other nation has such an ambitious program to nail down an earthquake before it happens. That’s because most experts are convinced it can’t be done.
(MCT) — As Clark County, Wash., families get ready to settle back into the routine of the school year, local officials are hoping residents are also preparing for something less expected: a disaster.
September is National Preparedness Month, and on Monday the Clark Regional Emergency Services Agency kicked off its annual disaster preparedness game, called the "30 Days, 30 Ways Preparedness Challenge."
The game, played over social media, assigns one readiness task for each day for the month of September.
After participants have completed the task, they are asked to post their results to Twitter, Facebook, Instagram, the game's blog or send in the result by email. More details can be found at the game's website, www.30days30ways.com.
A brutal snowstorm strikes at mid-day. Roads grow increasingly congested as commuters across the city scramble to get home before conditions worsen. Ice begins to jam roads, and resulting accidents turn interstates into parking lots and neighborhood roads into skating rinks. Some parents grow increasingly desperate to reach their children as roads become impassable, leaving students stranded on buses and at school. Other parents pick up their children only to become stuck in their cars.
Once safely reunited, families remain stuck indoors for days. Childhood excitement at the sight of snow quickly turns to cabin fever. Parents’ relief to have the family reunited turns to hope for the power to remain on and schools to reopen soon.
This scenario became reality for cities across the southeastern U.S. in January 2014, highlighting the importance of preparedness, especially for families. Natural disasters affect about 66 million children each year. Keeping children safe in emergency situations starts in the home, whatever the emergency may be.
Get a Kit
“If you could take one thing with you on a desert island, what would it be?” This popular children’s question game is not too far off the mark for putting together an emergency kit for your family. Maintaining a routine in an emergency will help your children cope.
Putting together a good kit is the first step in helping you do that. Be sure to include your children in the process: let them pick things that make them feel secure, such as a favorite book or food. They will enjoy helping create a kit of everything they are sure they could not live without in an emergency. Make it a game, and they will find it fun!
Basic items to include in your kit:
- Radio (hand-crank or battery-powered with extra batteries)
- First-aid kit
- Can opener
- Canned goods
You should also know your child’s medications and keep a small supply in case of emergency. Consider a small identification card with information on key medications and emergency contacts for your child to keep at all times.
Think of your family’s specific needs. For example, if you have an infant, keep any special foods or extra diapers on hand.
Keep a similar kit in each car, along with a blanket, nonperishable food, and a charger for your phone or other essential electronics.
Make a Plan
Knowing what to do in an emergency is just as important as having a kit. Most important is ensuring you have a way to reunite your family if they are separated at the time of the emergency. Children do better in these situations when they are with their families. As a start, teach your children important names, phone numbers and addresses. Most children can memorize a phone number by age four or five. Make it a game—it could help keep your children safe.
Protecting your family will involve others, as well. Pick a family member out of town to be a common contact for everyone to call or text. Sometimes local telephone networks can be jammed. If someone else cares for your children during part of the day, always make sure they know what to do and who to contact in an emergency, too. Lastly, make sure you have a plan for what to do with your pets. They are part of the family, too!
Being informed of your family’s situation when everyone is separated during the day is important. Know the emergency plan in your children’s schools and keep your emergency contact information up to date. Delegate a close family friend as an alternate contact who could pick your children up if you or your spouse is not able to do so. Consider choosing a code word that only you and your children know; it can be anything, such as a favorite book character. Make sure your children know to leave only with someone who can tell them the code word.
In an emergency, talk to your children about what is happening. Be honest and explain the situation; it’s better to learn about it from you than from the media, since information from the media may not be age-appropriate. Set an example with your own actions by maintaining a sense of calm, even when you are distressed. This will help your family cope in any emergency.
Events and information can change quickly in an emergency. Pay attention to local leaders, like your town’s mayor or police department, so you can make the best, most informed decisions for you and your family.
Earthquake exposure is one of the biggest risks to workers compensation insurers, so it’s interesting to read that the California State Compensation Insurance Fund (SCIF) is once again looking to the capital markets to provide reinsurance protection for workers comp losses resulting from earthquakes.
This is a repeat of the first catastrophe bond sponsored by the SCIF in 2011, the $200 million Golden State Re Ltd, which is due to expire in January 2015.
Artemis blog says:
“The unique transaction, which has not been repeated by anyone else until now, links earthquake severity to workers compensation loss amounts, demonstrating a new use of the catastrophe bond structure.”
The Golden State Re II catastrophe bond issuance is expected to be sized at $150 million or more, and will cover the SCIF until January 2019.
The ongoing shortage of Big Data talent is a serious problem for companies whose business increasingly relies on data analytics to remain competitive. You can imagine how difficult it must be for IT staffing firms whose clients are clamoring for Big Data skills when this country’s colleges and universities simply aren’t churning out enough graduates to meet the demand. Where do you look to find those highly skilled people? Overseas? Perhaps. But what if you looked at the existing pool of IT workers who are already inside those companies?
That’s one of the approaches being taken by Collabera, an IT staffing firm based in Morristown, N.J. I discussed the shortage of Big Data talent in an interview earlier this week with Nixon Patel, senior vice president and head of the technology competency units at Collabera. When I asked him about the extent to which Collabera relies on foreign talent, like individuals here on H-1B visas, to fill these roles for its clients, I was blown away when Patel said Collabera has taken a different approach:
We are less than three-quarters of the way through 2014, and we have already seen a slew of regulatory changes and increased audit demands. First, we saw the Supreme Court significantly extend whistleblower provisions to include private companies. Then, we saw Walmart hit with $439 million in compliance enhancements and investigation costs due to its recent FCPA probe.
Needless to say, compliance officers have been dealt a tough hand – something that’s not expected to lighten up throughout the remaining months of 2014. Here are five challenges compliance officers can expect to face throughout the remainder of this year:
A new study relies on a complex systems modelling approach to analyse inter-dependent networks and improve their reliability in the event of failure.
Energy production systems are good examples of complex systems. Their infrastructure requires ancillary sub-systems structured like a network: water for cooling, transport to supply fuel, and ICT systems for control and management. Every step in the chain is interconnected with a wider network, and all are mutually dependent.
A team of UK-based scientists has studied various aspects of inter-network dependencies, not previously explored. The findings have been published in The European Physical Journal B by Gaihua Fu from Newcastle University, UK, and colleagues. These findings could have implications for maximising the reliability of such networks when facing natural and man-made hazards.
Previous research has focused on studying single, isolated systems, not interconnected ones. However, understanding inter-connectedness is key, since failure of a component in one network can cause problems across the entire system, which can result in a cascading failure across multiple sectors, as in the energy infrastructure example quoted above.
In this study, interdependent systems are modelled as a network of networks. The model characterises interdependencies in terms of direction, redundancy, and extent of inter-network connectivity.
Fu and colleagues found that the severity of cascading failure increases significantly when inter-network connections are one-directional. They also found that the degree of redundancy in inter-network connections, which is linked to the number of connections, can have a significant effect on the robustness of systems, depending on the direction of those connections.
The authors observed that the interdependencies between many real-world systems have characteristics that are consistent with the less reliable systems they tested, and therefore they are likely to operate near their critical thresholds. Finally, ways of cost-effectively reducing the vulnerability of inter-dependent networks are suggested.
Reference: Fu, G. et al. (2014). Interdependent networks: Vulnerability analysis and strategies to limit cascading failure. European Physical Journal B.
Read the paper (PDF).
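The failure-propagation idea behind the study can be sketched in a few lines. The tiny network below, the node names, and the "fail when all providers have failed" rule are invented for illustration; they are not taken from Fu et al.'s actual model:

```python
def cascade(deps, seed):
    """Propagate failure through one-directional inter-network
    dependency links.

    deps maps a node to the set of provider nodes it depends on.
    A node fails only once *every* one of its providers has failed,
    so redundant providers limit the cascade.
    """
    failed = {seed}
    changed = True
    while changed:
        changed = False
        for node, providers in deps.items():
            if node not in failed and providers and providers <= failed:
                failed.add(node)  # lost every redundant provider
                changed = True
    return failed

# Two coupled networks: power nodes p1/p2 and control (ICT) nodes c1/c2.
deps = {
    "c1": {"p1"},        # single provider: no redundancy
    "c2": {"p1", "p2"},  # redundant providers
    "p2": {"c1"},        # reverse dependency closes a loop
}
print(sorted(cascade(deps, "p1")))  # ['c1', 'c2', 'p1', 'p2']
```

Dropping the `"p2": {"c1"}` entry leaves c2 running after p1 fails, echoing the paper's finding that redundancy in inter-network connections can make systems markedly more robust.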
The World Health Organization (WHO) has identified six countries as being at high risk for the spread of the Ebola virus disease. It is working with these countries to ensure that full surveillance, preparedness and response plans are in place.
“The following countries share land borders or major transportation connections with the affected countries and are therefore at risk for spread of the Ebola outbreak: Benin, Burkina Faso, Côte d’Ivoire, Guinea-Bissau, Mali, and Senegal,” the agency said in the first in a series of regular updates on the Ebola response roadmap.
WHO’s Ebola Response Roadmap Situation Report 1 features up-to-date maps containing hotspots and hot zones, as well as epidemiological data showing how the outbreak is evolving over time. It also communicates what is known about the location of treatment facilities and laboratories.
It follows the release of an Ebola response roadmap that aims to stop the transmission of Ebola virus disease (EVD) within six to nine months.
The update noted that although the numbers of new cases reported in Guinea and Sierra Leone had been relatively stable, last week saw the highest weekly increase yet in Guinea, Sierra Leone and Liberia, highlighting ‘the urgent need to reinforce control measures and increase capacity for case management.’
Disaster recovery planners are often advised to take a holistic view of their IT organisation, dealing with potential outcomes rather than possible causes. That certainly helps businesses achieve greater overall DR effectiveness and cost-efficiency. However, there’s no denying that a number of practical details must also be respected; otherwise, the best-aligned DR plan may never get off the ground. As the old rhyme has it, “For want of a nail, a shoe was lost…” and finally the whole kingdom too. Here are a few such ‘nails’ that disaster recovery planning can take into account to get those mission-critical apps up and running again after an incident.
What is the BCI Diploma?
The BCI Diploma enables individuals to achieve a formal, internationally recognised academic qualification in business continuity and is delivered in partnership with Buckinghamshire New University as a distance learning programme.
This course has been developed in response to industry demand and is designed to meet the current and future needs of business continuity professionals working in the industry worldwide.
Students will be entitled to FREE Student membership for the duration of their studies, giving them full access to a wide range of high-quality business continuity resources through the BCI Members’ Area to support their learning as well as a wide range of other value-add benefits, including Member discounts on BCI products and services.
Successful completion of the Diploma leads to the post-nominal designation DBCI (Diploma of the Business Continuity Institute). Holders of the DBCI can apply via the Alternative Route to Membership for Statutory membership of the BCI (AMBCI or MBCI dependent on experience).
This course is delivered in an interactive eLearning environment over a period of eight weeks. Each session lasts two hours, with two sessions scheduled for each of the eight weeks, giving you a total of 32 hours of training.
The BCI Good Practice Guidelines Live Online Training Course has been revised for 2014 and is fully aligned to the Good Practice Guidelines (GPG) 2013 and to ISO 22301:2012, the international standard for BCM.
This course offers a solid description of the methods, techniques and approaches used by BC professionals worldwide to develop, implement and maintain an effective BCM programme, as described in GPG 2013 and takes the student step by step through the BCM Lifecycle, which sits at the heart of good BC practice.
Infrastructure virtualization is a proven means of streamlining hardware footprints and increasing resource agility in order to better handle the demands of burgeoning data loads and wildly divergent user requirements.
But it turns out that what is good for infrastructure is also good for data itself, which is why many organizations are looking to augment existing virtual plans with data virtualization, particularly when it comes to massive volumes found in archiving and data warehousing environments.
The Data Warehousing Institute’s David Wells offers a good overview of data virtualization and how it can drive greater enterprise flexibility. In essence, the goal is to enable access to single copies of data across disparate entities, preferably in ways that make details like location, structure and even access language irrelevant to the user. For warehousing and analytics, then, this eliminates the need to move all related data to a newly created database, which gives infrastructure and particularly networking a break because data no longer has to move from site to site in order to reach the user. Couple this with semantic optimization and in-memory caching and suddenly Big Data starts to look a lot less menacing.
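The single-copy idea can be sketched in miniature: one logical view federates two disparate sources at query time, so nothing is copied into a new database. The table names, fields, and rows below are invented for illustration and are not drawn from any particular virtualization product:

```python
# Toy data-virtualization layer: one logical "customers" view over two
# disparate sources. All names and rows here are hypothetical.

crm_rows = [{"cust": "acme", "region": "EMEA"}]          # source 1: a CRM
warehouse_rows = [{"customer_id": "acme", "rev": 1200}]  # source 2: a warehouse

def virtual_customers():
    """Yield one merged logical record per customer; each source keeps
    its own location and structure, invisible to the consumer."""
    by_id = {r["customer_id"]: r for r in warehouse_rows}
    for row in crm_rows:
        merged = dict(row)                          # start from the CRM record
        merged.update(by_id.get(row["cust"], {}))   # enrich from the warehouse
        yield merged

print(list(virtual_customers()))
```

Real products add the semantic mapping, caching, and optimization layers the article mentions, but the consumer-facing contract is the same: one view, many sources, no bulk data movement.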
The big change has finally started to take effect. It is driven by our historic perceptions of terrorism; the consequences of decades of mismanagement of the Middle East; intervention where it was not necessary and a lack of intervention where it was needed; the lack of political and public will to engage with the idea of ‘home-grown’ terrorism; and the enthusiasm of disaffected youth to belong to something that allows them to ‘matter’.
In the UK, we have raised our threat level from International Terrorism to ‘Severe’. This is in recognition of the fact that there is stated intent to attack the UK ‘homeland’ and its people. There is known capability and the potential adversaries are motivated and perhaps preparing their plans now – raising the threat level is a sensible caution and allows some focus and thinking about what needs to be done to improve our protective and response capabilities. The result amongst our population varies from fear about a threat we don’t understand to perhaps understandable scepticism about the motives of the Government and the wish to impose a ‘police state’ regime.
Today, I conclude a three-part series on risk assessments in your Foreign Corrupt Practices Act (FCPA) or UK Bribery Act anti-corruption compliance program. I previously reviewed some of the risks that you need to assess and how you might go about assessing them. Today I want to consider some thoughts on how to use your risk assessment going forward.
Mike Volkov has advised that you should prepare a risk matrix detailing the specific risks you have identified and relevant mitigating controls. From this you can create a new control or prepare an enhanced control to remediate the gap between specific risk and control. Finally, through this risk matrix you should be able to assess relative remediation requirements.
A manner in which to put some of Volkov’s suggestions into practice was explored by Tammy Whitehouse in an article entitled “Improving Risk Assessments and Audit Operations”. Her article focused on how the Timken Company assesses and then evaluates the risks it has identified. Once risks are identified, they are rated according to their significance and likelihood of occurring, and then plotted on a heat map to determine their priority. The most significant risks with the greatest likelihood of occurring are deemed the priority risks, which become the focus of the audit/monitoring plan, she said. A variety of solutions and tools can be used to manage these risks going forward, but the key step is to evaluate and rate them.
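The rate-and-rank step can be sketched as follows. The risk register entries and the 1-to-5 scales below are hypothetical, not Timken's actual ratings:

```python
# Hypothetical anti-corruption risk register:
# (risk, likelihood 1-5, significance 1-5). Entries are illustrative.
risks = [
    ("Third-party agents in high-risk country", 4, 5),
    ("Gifts and hospitality to officials",      3, 3),
    ("Charitable donations",                    2, 2),
    ("Facilitation payments",                   5, 4),
]

def prioritise(register):
    """Rate each risk as likelihood x significance and sort descending,
    so the highest-rated risks head the audit/monitoring plan."""
    return sorted(register, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, significance in prioritise(risks):
    print(f"{likelihood * significance:>2}  {name}")
```

A heat map is just this same table plotted on a likelihood-versus-significance grid; the scoring and ranking is what turns the assessment into an actionable plan.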
There’s no doubt that virtualisation has been a boon to many enterprises. Being able to rationalise the use of servers by spreading storage and applications evenly over a total pool of hardware resources leads to higher cost-efficiency, as well as improved disaster recovery and business continuity. Yet in practical terms, businesses are often still tied to one vendor for any effective storage strategy. To break free of that constraint, software-defined storage (SDS) lets IT departments mix and match the physical storage devices as they want. And there are further benefits too.
I was recently talking with a friend about—what else—Facebook and her thoughts on whether that would be too private to share.
“Oh, I don’t believe in privacy,” she said with a dismissive hand wave.
That stumped me, in large part because she’s a defense attorney.
“You don’t believe in privacy as a fact or you don’t believe in privacy as a law?” I asked.
“Oh - legal privacy is very important,” she said. “But privacy as a fact—I don’t believe in it. It doesn’t exist.”
It sounds like a distinction only a lawyer could make. Yet as Big Data becomes commonplace, CIOs must educate themselves about the legal risks and responsibilities of gathering and using data, advises Larry Cohen, global CTO of Capgemini.
"I think the CIO is already kind of taking on more of a role of a risk broker and risk orchestrator in the enterprise," Cohen told CIO.com. "I think this is a perfect example of how a role like that arises in a topic like Big Data."
A Case In Point
Not long ago I was talking with a long-term CIO of a large organization about Disaster Recovery. He proceeded to tell me they are all set, as their tapes are stored offsite. To him, that was all he needed to be concerned about as it related to DR.
When a fire broke out in the office next to their data center, I am certain that offsite tapes were the last thing on his mind. He learned a hard lesson about relying on backups though. Turns out that after the fire, they were able to physically relocate their entire office before IT was able to restore all their applications. Even more disturbing than that was the discovery that they had more than ten days’ worth of data loss due to old/bad tapes, skipped files, and incomplete backups. I would not have wanted to be him when he met with the COO in the aftermath and had to explain the situation and his lack of preparedness.
The Napa County earthquake will have political aftershocks on Capitol Hill. The big question is how long they’ll last.
Prompted by California’s weekend temblor, lawmakers are renewing their push for earthquake warning programs. The most recent quake could spur support for a long-debated early warning system. It also could reveal some partisan fault lines.
“What we need is the political resolve to deploy such a system,” Sen. Dianne Feinstein, D-Calif., said this week.
In April, underscoring the role of politics in earthquake matters, 25 House Democrats from California, Oregon and Washington endorsed a proposal to provide $16.1 million for an earthquake early warning system. No Republican signed the letter requesting the funds.
As government leaders in California wend their way through the management of the state's historic drought, real discussions about how the state should adapt to water scarcity are taking place. And if history is a guide, the decisions made in the Golden State will have their impact in other places where water scarcity is becoming the norm.
Make no mistake: California is moving forward into uncharted territory. Traditional engineered solutions, such as the California Aqueduct that channels water from the wetter regions in the north to the arid south, are being challenged by a host of factors beyond the drought, including environmental regulations and the capacity of the systems themselves. Such water-transfer projects made it possible for the drier Southland to grow and become the most populous region of the state. But government and private-sector leaders are rapidly realizing that other approaches will be needed to fulfill future statewide agriculture, business and residential water needs.
Natural catastrophe events in the United States accounted for three of the five most costly insured catastrophe losses in the first half of 2014, according to just-released Swiss Re sigma estimates.
In mid-May, a spate of severe storms and hail hit many parts of the U.S. over a five-day period, generating insured losses of $2.6 billion. Harsh spring weather also triggered thunderstorms and tornadoes, some of which caused insured claims of $1.1 billion.
The Polar Vortex in the U.S. in January also led to a long period of heavy snowfall and very cold temperatures in the east and southern states such as Mississippi and Georgia, resulting in combined insured losses of $1.7 billion.
Ed. Note: Today I continue my three-part series on risk assessments, with a look at some different ideas on how you might go about assessing your risks.
One of the questions that I hear most often is: how does one actually perform a risk assessment? Mike Volkov has suggested a couple of different approaches in his article “Practical Suggestions for Conducting Risk Assessments.” In it, Volkov differentiates smaller companies, which might use basic tools such as “personal or telephone interviews of key employees; surveys and questionnaires of employees; and review of historical compliance information such as due diligence files for third parties and mergers and acquisitions, as well as internal audits of key offices”, from larger companies. Larger companies may use these basic techniques but may also take a deeper dive into high-risk countries or high-risk business areas. If your company’s sales model uses third party representatives, you may also wish to visit with those parties to help evaluate their risks for bribery and corruption that might well be attributed to your company.
Another noted compliance practitioner, William Athanas, in an article entitled “Rethinking FCPA Compliance Strategies in a New Era of Enforcement”, took a different look at risk assessments when he posited that companies assume that FCPA violations follow a “bell-curve distribution, where the majority of employees are responsible for the majority of violations.” However, Athanas believed that the distribution pattern more closely follows a “hockey-stick distribution, where a select few…commit virtually all violations.” Athanas suggests that those individuals with the opportunity to interact with foreign officials have the greatest chance to commit FCPA violations, and that assessments should therefore focus on them. Within that group, certain individuals also possess the necessary inclination, whether a personal financial incentive linked to the transaction or an inability to recognize the significant risks attendant to bribery.
There’s bad news for SAP’s HANA: the majority of the Americas’ SAP Users’ Group (ASUG) is skeptical that the Big Data platform is worth the costs.
ASUG recently surveyed its members on SAP HANA adoption. The survey drew more than 500 respondents, 93 percent of whom identified themselves as ASUG members.
Three-fourths of SAP customers said they have not purchased any SAP HANA products because they can’t identify a business case that will justify its costs. Ranked well below this concern (at 40 percent) were concerns about skill set, a roadmap and upgrade issues.
ASUG membership can also include SAP partners, whose responses were separated out from customer survey results. Still, partner results share a similar concern. The top factor partners say could lead to more HANA purchases would be “better business case guidance.” (As one reader pointed out in the comments, the SAP Innovation Awards might help here, since the list provides nearly 30 use cases.)
WASHINGTON – The Federal Emergency Management Agency (FEMA), through its Regional Office in Oakland, California, is monitoring the situation following the U.S. Geological Survey report of a 6.0 magnitude earthquake that occurred this morning six miles south southwest of Napa, California. FEMA remains in close coordination with California officials, and its Regional Watch Center is at an enhanced watch to provide additional reporting and monitoring of the situation, including impacts of any additional aftershocks.
FEMA deployed liaison officers to the state emergency operations center in California and to the California coastal region emergency operations center to help coordinate any requests for federal assistance. FEMA also deployed a National Incident Management Assistance Team (IMAT West) to California to support response activities and ensure there are no unmet needs.
“I urge residents and visitors to follow the direction of state, tribal and local officials,” FEMA Administrator Craig Fugate said. “Aftershocks can be strong enough to cause additional damage to weakened structures and can occur in the first hours, days, weeks or even months after the quake.”
When disasters occur, the first responders are local emergency and public works personnel, volunteers, humanitarian organizations and numerous private interest groups who provide emergency assistance required to protect the public's health and safety and to meet immediate human needs.
Safety and Preparedness Tips
- Expect aftershocks. These secondary shockwaves are usually less violent than the main quake but can be strong enough to do additional damage to weakened structures and can occur in the first hours, days, weeks or even months after the quake.
- During an earthquake, drop, cover and hold on. Minimize movements to a few steps to a nearby safe place. If indoors, stay there until the shaking has stopped and exiting is safe.
- If it is safe to do so, check on neighbors who may require assistance.
- Use the telephone only for emergency calls. Cellular and land line phone systems may not be functioning properly. The use of text messages to contact family is the best option, when it is available.
- Check for gas leaks. If you know how to turn the gas off, do so and report the leak to your local fire department and gas company.
The enterprise must change if it is to take advantage of all the benefits that cloud and mobile technologies have to offer. This is nothing new, of course, as the enterprise has been changing to meet new challenges and opportunities since its inception.
But confronting challenges is always easier in hindsight, which leaves us non-time travelers in a quandary: What does the cloud future hold, and how can we best prepare for it?
According to the rising cadre of startups looking to capitalize on burgeoning cloud infrastructure, the biggest thing holding the enterprise back is its legacy infrastructure and its continued reliance on the old-guard vendors who created it. SolidFire’s Jeremiah Dooley, for example, claims leading platform providers are trying to delay the inevitable switch to the cloud as much as possible in order to prevent others from encroaching upon their territory. This may benefit their revenue streams, but it keeps the enterprise in the slow lane when it comes to provisioning services and driving operational efficiency. The message here is simple: the cloud is not the problem; static legacy infrastructure is.
Social media is now a standard communications tool for businesses, with many companies regularly using Facebook, Twitter and other social networks to engage with the public. More and more businesses are hiring social media specialists whose sole responsibility is to be the company’s “voice” on these platforms. But this activity comes with risk for both the organization and the individual. The potential for any posting to be retweeted, shared or even go viral underscores the need to be aware of the rising legal risks associated with your business’s social media accounts.
Potential Defamation Lawsuits
The first tip for anyone engaged in social media on behalf of their business or employer is obvious, but not always followed: think before you post. Even if a tweet or post contains an unintended error and is deleted immediately, it can still be pulled and reposted or retweeted by others. Once something is out there on social media, you’ll need to deal with the consequences. Although the laws surrounding social media are still developing, it is possible for a business to be hit with an expensive defamation suit based on a single posting or comment.
The Business Continuity Institute is pleased to announce that the keynote speaker for the BCI World Conference and Exhibition will be Prof Steve Peters – consultant psychiatrist, bestselling author and Head of Sports Psychology at UK Athletics. In addition to his extraordinary success with British cycling, he has also worked on twelve other Olympic disciplines as well as English Premier League football and the English rugby and football teams.
Beginning his career as a maths teacher, Prof Peters then switched to medicine and specialised in patients with severe and dangerous personality disorders. His focus is now on how the mind can enable people to reach optimum performance in all walks of life. Working with sportspeople at the top of their game, he gives them the confidence to come back from defeat and out-perform the opposition.
Prof Peters has been described as a "genius" by Team GB cycling coach Dave Brailsford, and decorated Olympians such as Chris Hoy, Victoria Pendleton and Bradley Wiggins have all attributed their success to him.
In his keynote speech, Prof Peters will explain his method to help us understand and control what he describes as our 'inner chimp' – the irrational, impulsive, seemingly impossible part of our mind that often holds us back. Examining motivation, confidence and communication, he will show that competition is as much in the mind as it is in the field or on the track – or in the office.
Find out more about the BCI World Conference and Exhibition on the 5th and 6th November at the London Olympia by visiting the BCI website.
Yesterday, I blogged about the Desktop Risk Assessment. I received so many comments and views about the post, I was inspired to put together a longer post on the topic of risk assessments more generally. Of course I got carried away so today, I will begin a three-part series on risk assessments. In today’s post I will review the legal and conceptual underpinnings of a risk assessment. Over the next couple of days, I will review the techniques you can use to perform a risk assessment and end with a discussion of what to do with the information that you have gleaned in a risk assessment for your compliance program going forward.
One cannot really say enough about risk assessments in the context of anti-corruption programs. Since at least 1999, in the Metcalf & Eddy enforcement action, the US Department of Justice (DOJ) has said that a risk assessment measuring the likelihood and severity of possible Foreign Corrupt Practices Act (FCPA) violations identifies how you should direct your resources to manage those risks. The FCPA Guidance stated it succinctly: “Assessment of risk is fundamental to developing a strong compliance program, and is another factor DOJ and SEC evaluate when assessing a company’s compliance program.” The UK Bribery Act takes a similar view. Principle 1 of its Six Principles of an adequate compliance programme states, “The commercial organisation regularly and comprehensively assesses the nature and extent of the risks relating to bribery to which it is exposed.” In other words, risk assessments have been around, and even been mandated, for a long time, and their importance has not lessened. In short, your risk assessment should inform your compliance program.
Your data backups are there to help you recover information, applications and files if required, hopefully both effectively and efficiently. But they and any archiving you do may also be there for external parties to use as a result of e-discovery. That’s the retrieval of electronically stored information (ESI) for use in legal proceedings involving your organisation. The US has led the way in this field, defining ESI as any information that is “created, stored, or best used with any kind of computer technology”. Now in Australia, all court dealings above a certain size must be conducted completely digitally. But is e-discovery good news or bad news for legal rulings and ultimately business continuity?
In our haste to cover all the high-level strategies that may be needed to respond to a business disruption, Business Continuity Plans often miss critical details that can mean the difference between success and failure – especially when time is a major factor.
Many BCPs include a strategy for “Loss of Building”. That strategy may involve moving critical employees from the most crucial business processes to alternate sites – either internal (another of the organization’s facilities in a different geographical location) or external (a third-party “workspace” that can be made ready to accommodate those employees’ technology requirements).
All good, and logical – but perhaps missing some critical information.
A state of emergency was declared in California yesterday by Gov. Edmund G. Brown due to the effects of a 6.1 magnitude earthquake that rocked the Napa Valley area in northern California. The U.S. Geological Survey estimates that economic losses from the quake could top $1 billion and said there is a 54% likelihood of another large quake, magnitude 5 or higher, within the next week.
As of 4:15 p.m. Sunday, six aftershocks had been reported, four centered near Napa and ranging from magnitude 2.5 to 3.6. Two others, a 2.8 and a 2.6, were reported near American Canyon, according to the USGS.
The Napa quake is the largest in the Bay Area since the 1989 Loma Prieta quake, which was magnitude 6.9. That quake resulted in $1.8 billion in insured claims (in 2013 dollars) being paid to policyholders, said Robert Hartwig, Ph.D., president of the Insurance Information Institute.
(MCT) — Ten seconds before the earth rumbled in a UC Berkeley lab early Sunday morning, an alarm started blaring — and an ominous countdown warned that a temblor centered near Napa was moments away.
"Earthquake! Earthquake!" it cautioned, after a quick series of alarms. "Light shaking expected in three seconds."
The successful alert was the biggest test yet in the Bay Area for a type of earthquake early warning system that's not yet available to the public in the U.S. but already is providing precious seconds of notice before quakes hit in Mexico and Japan.
The ShakeAlert system — a collaboration between Cal, Caltech, the University of Washington and the U.S. Geological Survey — could one day stop elevators, control utilities and alert motorists of an impending natural disaster. But before it is reliable enough to launch throughout the West Coast, the system needs about $80 million in equipment, software and other seismic infrastructure upgrades.
(MCT) — City officials in Napa had long worried that the grand building on the corner of Second and Brown streets — with its brick walls and giant red-tiled cupolas — could be devastated by a major earthquake.
So city officials required brick structures such as the landmark Alexandria Square building to get seismic retrofitting — bolting brick walls to ceilings and floors to make them stronger. The work was completed years ago on the 104-year-old property.
But when a 6.0 earthquake struck Sunday morning, the walls on the top floors crumbled, showering brick and mortar onto the sidewalk and outdoor café.
The destruction highlights one of the greatest fears of seismic engineers — that the retrofitting of unreinforced masonry buildings still leaves weak joints between bricks. Whole chunks can fall, sending bricks crashing down.
One day after a magnitude 6.0 earthquake struck the San Francisco/Napa area of California, the Northern California Seismic System (NCSS) says there is a 29 percent probability of a strong and possibly damaging aftershock in the next seven days and a small chance (5 to 10 percent probability) of an earthquake of equal or larger magnitude.
The NCSS, operated by UC Berkeley and USGS, added that approximately 12 to 40 small aftershocks are expected in the same seven-day period and may be felt locally.
As a rule of thumb, a magnitude 6.0 quake may have aftershocks up to 10 to 20 miles away, the NCSS added.
In the European Union in the past year, a whole range of corporate risk and regulatory issues have been at the top of the agenda, but at the top of my list are data protection and information security.
In this report on risk issues for 2014, I will look at websites, privacy impact assessments, cloud computing and the EU Data Protection Regulation.
Focus on Websites in the EU
In the past five years or so, the European Commission and regulators that focus on consumer protection have carried out regular “sweeps” of websites in order to assess levels of compliance. This trend will continue, and businesses that sell or license content to consumers need to review their online terms and conditions as well as their compliance with other e-commerce rules such as the E-Privacy Directive, E-Commerce Regulations and Distance Selling Regulations.
For example, an EU-wide screening of 330 websites that sell digital content (such as books, music, films, videos and computer games) across the European Economic Area revealed some significant areas of non-compliance.
How many among you out there are sushi fans? Conversely, how many out there consider the idea of eating raw fish right up there with going to the dentist’s office for some long overdue remedial work? One’s love or distaste for sushi was used as an interesting metaphor for leadership in this week’s Corner Office section of the New York Times (NYT) by Adam Bryant, in an article entitled “Eat Your Sushi, and Expand Your Horizon”, where he profiled Julie Myers Wood, the Chief Executive Officer (CEO) of Guidepost Solutions, a security, compliance and risk management firm. Wood said her sushi experience relates to advice she gives college students now: “One thing I always say is ‘eat the sushi.’ When I had just graduated from college, I went with my mom to Japan. We had a wonderful time, but I refused to eat the sushi. Later, when I moved to New York, I tried some sushi and loved it. The point is to be willing to try things that are unfamiliar.”
I thought about sushi and trying something different in the context of risk assessments recently. I think that most compliance practitioners understand the need for risk assessments. The FCPA Guidance could not have been clearer when it stated, “Assessment of risk is fundamental to developing a strong compliance program, and is another factor DOJ and SEC evaluate when assessing a company’s compliance program.” Yet many compliance practitioners have difficulty getting their collective arms around what is required for a risk assessment and then how precisely to use it. The FCPA Guidance makes clear there is no ‘one size fits all’ for almost anything in an effective compliance program.
One type of risk assessment can consist of a full-blown, worldwide exercise, where teams of lawyers and fiscal consultants travel around the globe, interviewing and auditing. However, if there is one thing I learned as a lawyer that also applies to the compliance field, it is that you are only limited by your imagination. So, following the FCPA Guidance’s ‘no one size fits all’ proscription, I would submit that the same is true for risk assessments.
Napa, Calif., residents were awakened at 3:20 a.m. on Sunday, Aug. 24, by a magnitude 6.0 earthquake that struck six miles southwest of the Northern California city, sending as many as 160 to the hospital, and causing widespread damage, including dozens of broken water mains and triggering six major fires. One person was still in critical condition Sunday evening.
The fires destroyed several mobile homes, and firefighters struggled with water pressure issues since a significant amount of pressure was lost because of the cracked and broken water mains. Most of the damage occurred in downtown Napa where the buildings are older.
There was also significant damage to roads, but the California Highway Patrol and the California Department of Transportation found no damage to bridges. The Transportation Department also had dive teams checking local toll bridges but found no damage.
(MCT) — A predawn earthquake rattled Napa, Calif., early Sunday morning, critically injuring at least three people as the shaking ripped facades and shattered windows from historic downtown buildings, toppled chimneys and ignited gas fires at mobile home parks.
Countless residents fled into darkened streets as the result of the quake, measured at magnitude 6.0 by the United States Geological Survey. It was the largest to hit the San Francisco Bay area since the devastating 6.9-magnitude Loma Prieta earthquake in 1989, prompting Gov. Jerry Brown to declare a state of emergency.
The Queen of the Valley Medical Center in Napa reported 120 people seeking treatment soon after the quake. They included a small child who was airlifted to UC Davis Medical Center with critical injuries authorities attributed to a collapsed chimney.
The buildup to fall is in full swing. Next come the Labor Day parades and barbecues, and then the school buses will begin to roll.
IT and telecommunications never had a real summer slowdown this year, though. Much was done, lots of news was made, and the pace hasn’t slowed even during the latter half of August. Here is a look at some of the news and more interesting commentary.
"I always imagined a few people on the phones in a small office taking calls, not a big office with actual departments, and definitely not anyone thinking about business continuity and risks." Over the past year I have heard this line said to me in varying forms when I have explained that I give advice on corporate risk and business continuity in the non profit sector.
It is not an uncommon misconception. And while the risks relevant to the financial services industry, for example, can be listed easily, applying that thinking to the non-profit sector – along with an understanding of what is important there – is not as immediately obvious.
Some challenges and observations:
Non-profit organisations encompass widely varying levels of academic background, and the primary challenge is making business continuity accessible and relatable to all.
Then there are the attitudes: this would take too long, it’s not required in our industry, and focusing on delivering primary front-line services is more important. But has anyone thought about those supporting functions?
"This will never happen to us anyway." At first, hearing this made me feel uneasy, but it is actually the best challenge for promoting business continuity in any industry. Playing the "if we don’t comply, we will get fined" card shifts the desired effect from wanting to provide genuine assurance to an exhausting check-box exercise. The appetite and denial factor is a tough barrier to get around.
Forgotten plans - in most cases contingency plans existed in people’s minds, just not on paper. I heard various stories of incidents that triggered instant panic before the swift realisation that "oh yes, we have a plan, we know what we need to do" kicked off a series of reactions to get things back to normal.
Planning vs. practicing - countless months were spent planning and writing, but those BCPs were never practiced. In recent exercises, some plan owners told me they had never tested their plans before and found the exercise really useful. Actions that were thought to take five minutes took twenty. This started a chain of actions that plan owners needed to implement in order to become more resilient in an incident. A friend once said to me that businesses don’t fail because of bad business continuity plans, but because of bad choices. That stuck with me.
So what does BC look like in these industries?
We live in a robust and dynamic society, and whilst a generic approach to starting a plan is valuable, plans must remain adaptable. I quickly realised that I was getting too hung up on wanting every team’s plan to look the same, when what really mattered was that it absolutely has to work for the people invoking it; if it is clear and coherent, that is sufficient.
Without a doubt, non-physical threats such as reputational risk, the loss of funding from a major donor and employee scandals can have serious impacts on your operation, especially when the majority of funding comes from public generosity. If an incident occurred, what would the emergency funding protocol be? It is things like this that need the most consideration. Yes, every industry needs to consider buildings, IT/data and staff, but what about the intangible factors that can essentially spell disaster?
Making those threats relatable is key, as is empowering people to shift away from the view that risk and business continuity relate only to IT and financial services – not least because people with widely varying backgrounds in these organisations often sit under one roof.
What does this all mean?
All non-profits, charities for example, are run like businesses. Fact!
Non-profit or not, business continuity is on everyone’s mind; they just don’t know that this is what it is called. Yes, what constitutes a threat varies from industry to industry, but essentially what matters most is the resilience each organisation has to overcome any incident it faces.
RISKercizing until next time
It’s hard to have a conversation in the enterprise these days without the topic veering toward Big Data. What is it? Where does it come from? And what are we supposed to do with it?
But despite the fact that none of these questions have clear answers yet, IT is still tasked with preparing to accommodate Big Data and then figuring out how to derive real value from it.
Part of the problem is the term “Big Data” itself. While large data volumes are a facet of Big Data, that’s not where the challenge lies. Rather, says IBM’s Doug Balog, it’s the need to accommodate the ‘variety, velocity and veracity’ that advanced analytics require that will give most managers fits. This will require not only bigger, more scalable infrastructure, but entirely new ways to collect, analyze and store data, which, from IBM’s perspective, will require advanced Power8 architectures married to powerful third-party platforms like Canonical and the various Linux distributions.
Every organization should have an Emergency Action or Evacuation Plan. Even when it is not required (by the building owner, fire department or occupancy regulations), it is a best practice for every organization to plan and practice evacuating all personnel from the workplace. Often, evacuation focuses on getting out quickly – surely the most critical objective. While simple in principle, there are some considerations that should not be overlooked:
Too Close for Safety: The standard rule of thumb for assembly points is at least 200 feet from the evacuated building. This is intended to ensure personnel will not be endangered if window glass or other debris falls. Keep in mind that taller buildings may have a wider potential debris pattern. Two hundred feet should be treated as the minimum; assuring employee safety should be the priority.
Obstruction: When emergency services (fire, police, ambulance) arrive, will they have sufficient room to do their job? Crowds of evacuated personnel shouldn’t impede their work. Emergency services may need room to park and to turn their vehicles around. Make sure assembly points are a reasonable distance from entrances and drive paths – and ensure personnel won’t interfere.
(MCT) — For six weeks, Florida reeled under the assault of four hurricanes.
First Charley struck Port Charlotte Aug. 13, 2004, with 150-mph winds. Then Frances pounded Martin and Palm Beach counties, collapsing part of Interstate 95 near Lake Worth and sending gusts into Broward that left a quarter-million people without electricity. Ivan came ashore near Pensacola with 120-mile-per-hour winds and a storm surge that swamped coastal towns. Jeanne struck the same area as Frances, turning out the lights in most of Palm Beach County, ripping off roofs and flooding houses.
It came to be known as the Year of the Four Hurricanes.
Following that beating, and another one the next year with Hurricanes Wilma and Katrina, there have been dramatic improvements to Florida’s electric grid, shelters, forecasting abilities and ability to communicate. And while another season like 2004 still would be disastrous, residents would have more warning and stand a better chance of returning faster to normal life.
(MCT) — The good news is people are more alert to and educated about weather this time of year.
Husbands and wives on the Coast can carry on a conversation about how the amount of sand in the upper atmosphere along the Atlantic affects the chances a tropical storm will develop.
But the down side is the array of information can be confusing and the social media sites, looking for clicks, tend to hype tropical activity.
Find a trusted source, local emergency managers say.
Here’s a tip that might take a little pressure off the data scientist talent search: A data scientist doesn’t necessarily need to be a math wizard with a PhD or other hard science background.
In fact, that type of person might actually prove disappointing if your goal is Big Data analytics for humans, according to data scientist Michael Li.
That may seem odd, given that Li’s work focuses on exactly the kind of credentials normally associated with the term “data scientist.” Li founded and runs The Data Incubator, a six-week bootcamp to prepare science and engineering PhDs for work as data scientists and quantitative analysts.
You can’t just wing it anymore. Many things have changed since you first said you wanted to become a fireman, an astronaut, a veterinarian or a nun. This is especially true in the field of business continuity.
Business continuity is not just concerned with IT recovery anymore. Supply chain management is critical to sustaining company operations. How do we determine what is or isn’t critical? Shouldn’t we bring these issues to the attention of our C-Level management?
These are just some of the issues confronting BCP managers, and most practitioners today had to learn how to handle them along the way. As time goes by, trying to cover all the bases of continuity has become more and more complicated. Instead of learning on the job, a little education at the start would go a long way towards getting ahead of what needs to be done.
The GlaxoSmithKline PLC (GSK) corruption matter in China continues to reverberate throughout the international business community, inside and outside China. The more I think about the related trial of Peter Humphrey and his wife, Yu Yingzeng, for violating China’s privacy laws in their investigation of who filmed the head of GSK’s China unit in flagrante delicto with his Chinese girlfriend, the more I ponder the issue of risk in the management of third parties under the Foreign Corrupt Practices Act (FCPA). In an article in the Wall Street Journal (WSJ), entitled “Chinese Case Lays Business Tripwires”, reporters James T. Areddy and Laurie Burkitt explored some of the problems brought about by the investigators’ convictions.
They quoted Manuel Maisog, chief China representative for the law firm Hunton & Williams LLP, who summed up the problem regarding background due diligence investigations as “How can I do that in China?” Maisog went on to say, “The verdict created new uncertainties for doing business in China since the case hinged on the couple’s admissions that they purchased personal information about Chinese citizens on behalf of clients. Companies in China may need to adjust how they assess future merger partners, supplier proposals or whether employees are involved in bribery.”
I had pondered what that meant for a company wanting to do business in China through some type of third-party relationship, from a sales representative to a distributor to a joint venture (JV). What if you cannot get such information? How can you still have a best-practices compliance program around third-party representatives if you cannot get information such as ultimate beneficial ownership? At a recent SCCE event, I put that question to a Department of Justice (DOJ) representative. Paraphrasing his response, he said that companies still need to ask the question in a due diligence questionnaire or other format. What if a third party refuses to answer, citing some national law against disclosure? His response was that a company needs to weigh very closely the risk of doing business with a party that refuses to identify its ownership.
It’s been said that Big Data and the cloud go together like chocolate and peanut butter, but it looks like more symbiosis is at work here than meets the eye.
While on the surface it may seem like the two developments appeared at the same time by mere coincidence, the more likely explanation is that they both emerged in response to each other – that without the cloud there would be no Big Data, and without Big Data there would be no real reason for the cloud.
Silicon Angle’s Maria Deutscher hit on this idea recently, noting that the two seem to be feeding off each other: As enterprises start to grapple with Big Data, they will naturally turn to the cloud to support the load, which in turn will generate more data and the need for additional cloud resources. In part, this is a continuation of the old paradigm that more computing power and capacity simply causes users to up their data requirements. Of course, the cloud comes with additional security and availability concerns, but in the end it is the only way for already stretched IT budgets to feasibly cope with the amount of data being generated on a daily basis.
An improving economy and updated business practices have contributed to companies sending more employees than ever on international business trips and expatriate assignments. Rising travel risks, however, require employers to take proactive measures to ensure the health and safety of their traveling employees. Yet many organizations fail to implement a company-wide travel risk management plan until it is too late, causing serious consequences that could easily have been avoided.
The most effective crisis planning requires company-wide education before employees take off for their destinations. Designing a well-executed response plan and holding mandatory training for both administrators and traveling employees will ensure that everyone understands both company protocol and their specific roles during an emergency situation.
Additionally, businesses must be aware that Duty of Care legislation has become an integral consideration for travel risk management plans, holding companies liable for the health and safety of their employees, extending to mobile and field employees as well. To fulfill their Duty of Care obligations, organizations should incorporate the following policies within their travel risk management plan:
Ian Kilpatrick looks at the risks involved with mobile devices and how to secure them.
Mobile devices, with their large data capacities, always-on capabilities and global communications access, can represent both a business applications dream and a business-risk nightmare.
For those in the security industry, the focus is mainly on deploying ‘solutions’ to provide protection. However, we are now at one of those key points of change which happen perhaps once in a generation, and that demand a new way of looking at things.
The convergence of communications, mobile devices and applications, high-speed wireless, and cloud access at a personal level is driving functionality demands on businesses at too fast a rate for many organizations.
Lockton report provides information to help protect companies' employees and operations from Ebola threats.
The current Ebola outbreak, deemed ‘an international public health emergency’ by the World Health Organization, has left many companies uncertain of how to properly protect themselves while ensuring the safety of their employees and operations.
"The situation on the ground is evolving quickly and poses a threat not only to companies with operations in the region, but to all companies who have employees that may come in contact with the Ebola virus while traveling internationally," said Logan Payne of Lockton's International Risk Management Team.
Most companies are concerned with two main areas when facing a threat like Ebola: personnel risk and an interruption of normal business operations leading to a loss of revenue.
The 2014 Business Continuity Institute Africa Awards took place on Tuesday 19th August at a ceremony to coincide with the SADC and ITWeb Business Resilience Conference in South Africa. The BCI Africa Awards are held each year to recognise the outstanding contribution of business continuity professionals and organizations living in or operating in Africa.
The Winners of the Awards were:
Business Continuity Manager of the Year
Sylvain Prefumo MBCI, Head of Business Continuity at the State Bank of Mauritius
Emmanuel Atta Hanson MBCI, Business Continuity Manager at Barclays Bank of Ghana Ltd, and Elnora Aryee-Quaynor, Director of Africa Risk and Quality at PricewaterhouseCoopers (Ghana) Ltd, were both Highly Commended
Business Continuity Public Sector Manager of the Year
Dr Clifford Ferguson, Business Continuity Manager at the Government Pensions Administration Agency
Business Continuity Consultant of the Year
Peter Frielinghaus MBCI, Senior BCM Advisor at ContinuitySA
Lynn Jackson MBCI, Senior Business Continuity Consultant at ContinuitySA, was Highly Commended
Business Continuity Team of the Year
Barclays Bank of Kenya
Deloitte was Highly Commended
BCM Newcomer of the Year
Darren Johnson AMBCI, BCM Advisor at ContinuitySA
Business Continuity Innovation of the Year
Business Continuity Provider of the Year (Service)
Most Effective Recovery of the Year
Barclays Bank of Kenya
Business Continuity Personality of the Year
Congratulations to all the winners and well done to all those who were nominated. All winners from the BCI Africa Awards 2014 will be automatically entered into the BCI Global Awards 2014 which take place in November during the BCI World Conference and Exhibition.
Computerworld - When Healthcare.gov was launched last October, it gave millions of Americans direct experience with a government IT failure on a massive scale. But the overall reliability of federal IT operations is being called into question by a survey that finds outages aren't uncommon in government.
Specifically, the survey found that 70% of federal agencies have experienced downtime of 30 minutes or more in a recent one-month period. Of that number, 42% of the outages were blamed on network or server problems and 29% on Internet connectivity loss.
This rate of outage isn't anywhere near as severe or dramatic as what Healthcare.gov faced until it was fixed. But the report by MeriTalk, which provides a network for government IT professionals, suggests that downtime is a systemic issue. The research was sponsored by Symantec.
The report is interesting because it surveys two distinct government groups: 152 federal "field workers," or people who work outside the office, and 150 IT professionals.
For all the care and feeding we’ve given to the data center over the years, it must be remembered that all that technology and the skills to operate it are a means to an end. The real prize these days is application performance.
An increasingly mobile workforce is fostering dramatic changes in the way work and productivity are measured, and enterprise infrastructure needs to keep up with these trends in order to remain relevant in the years to come. That means issues like throughput and compute power are still important, but so are architectural flexibility and the need to become more responsive to user needs.
According to a recent survey from SolarWinds, 93 percent of business people say the performance and availability of apps like Exchange, Sharepoint and NetSuite are crucial to their job performance, with nearly two-thirds describing them as critically important. At the same time, however, 36 percent say they have waited a full day for problems to be resolved in mission-critical apps, while 22 percent have experienced wait times of several days.
By Claire Phipps, MBCI
Businesses are usually in operation to make money and to deliver a service or provide a product. Success requires many traits, and ensuring your business is dynamic, adaptive, efficient and cost-effective is a good starting point. Who would want a business that is passive, rigid, ineffective and expensive?
The same is true when talking about good management disciplines and recognised international standards and best practice.
So why don’t we evolve these disciplines and channel our way of thinking to change the way in which we deploy them? Adapt the methods in which we operate to one of ‘organizational resiliency’ – an all-encompassing, comprehensive management discipline that ‘ticks all the right boxes’ and provides success, growth, strength, security and a return on our investment.
Within my industry there has long been an ongoing debate about the future of business continuity and whether or not organizational resilience is the way forward. The fact that we still do not have a concrete answer could be the answer itself. Yet again I’m hearing the phrase discussed more commonly, so I thought I would consider my own opinions on the topic and open this up for further discussion.
Senior disaster management officials from APEC economies, meeting in Beijing in the aftermath of the Ludian Earthquake in Southwest China, have detailed new far-reaching measures to strengthen relief and risk reduction capabilities across the Asia-Pacific, the world’s most disaster-prone region.
Upon observing a moment of silence for the victims of the 6.5 magnitude quake, officials were briefed on efforts to help survivors and speed recovery, and sanctioned deeper cooperation to protect against future emergencies. Joint actions are being taken forward in technical capacity building exchanges between APEC economies.
“The frequent occurrence of natural disasters poses a serious threat to lives and the economic health of the entire region,” cautioned Dou Yupei, China’s Vice Minister of Civil Affairs, in remarks to the 8th APEC Senior Disaster Management Officials’ Forum. “We must join hands to reduce disaster risk and guarantee the coordinated development of society, economy and the environment.”
IFMA, the US-based International Facility Management Association, has published an overarching guide to business continuity and emergency preparedness. It includes results from the IFMA 2014 Business Continuity Survey and research forums on emergency preparedness and business continuity.
‘High Stakes Business: People, Property and Services (Facility Management Perspectives on Emergency Preparedness and Business Continuity in North America)’ looks at the growing necessity of emergency and business continuity planning as a strategic priority; one which provides a unique opportunity for facility managers to establish valued partner status in ensuring organizational resiliency and longevity.
“Emergency preparedness and business continuity are critical and complex tasks that affect all facets of commercial and institutional facilities and are central to FM worldwide. This publication provides practical guidance to facility professionals in order to develop plans that will best equip their organizations to resume normal operations as quickly as possible after disaster strikes,” said Stephen Ballesty, IFMA Board of Directors, IFMA Research Committee Chair, Director, Head of Advisory, Head of Research.
The report is available at a cost of $180 for non-IFMA members and £90 for members.
The 2014 BCI Asia Awards took place on Thursday 14th August at the 12th Asia Business Continuity Conference in Singapore. The BCI Asia Awards are held each year to recognise the outstanding contribution of business continuity professionals and organizations living in or operating in the region.
The Winners of the Awards were:
Business Continuity Provider of the Year (Product)
Business Continuity Team of the Year
Business Continuity Innovation of the Year
BCM Manager of the Year
Khalid Ahmed Bahabri
BCM Newcomer of the Year
All winners from the BCI Asia Awards 2014 will be automatically entered into the BCI Global Awards 2014 which take place in November during the BCI World Conference and Exhibition 2014.
Maintaining a supply chain's resilience is a daunting challenge, especially considering the increasing scale and complexity of supply chains worldwide. To support business continuity professionals in assessing their supply chains, the Business Continuity Institute has just published its latest Working Paper, which uses a series of statistical comparisons from previous studies to look at the influence the number of suppliers an organisation has on the frequency and cost of supply chain disruption.
The research concluded that supply chain complexity does influence the frequency and cost of disruption, which represents an important step towards a better understanding of supply chain disruption. Establishing the relationship between the complexity of supply chains and the frequency and cost of incidents will validate efforts by supply chain planners to work towards greater visibility of their supply chains. It also provides additional evidence that may be used to justify continued investment in further understanding an organisation’s supply chain.
The study does highlight, however, that given the implications of this research for decisions made by organisations, further statistical analysis of other variables that affect supply chains is recommended.
The Supply Chain Resilience survey has been one of the most comprehensive studies of its kind. It has produced useful findings that have guided organisations in building resilience into their supply chains. A more thorough study therefore provides greater opportunities to refine this tool and make it even more helpful to organisations worldwide.
To download the full version of the BCI's 'Working Paper Series No. 2: A quantitative analysis of selected variables in the 2013 Supply Chain Resilience Survey', please click here.
To take part in the BCI's 2014 Supply Chain Resilience survey and help further this research, please click here.
You can contact the paper’s author – Patrick Alcantara of the BCI’s Research Department – with any feedback about this particular paper or with any suggestions for future topics.
The main challenges in properly implementing business continuity management in an organisation can be expressed in four words: engagement, understanding, appropriateness and assumptions. In other words: senior management needs to be involved and committed to BCM; business continuity managers need to understand the essentials about IT operations; BCM processes need to link business objectives to operational realities; and any assumptions in BC planning need to be closely scrutinized. If this sounds like IT governance, you’re right. IT governance gives some good hints about how to make business continuity a practical, valued reality.
Maintaining the state’s trend of taking a leading position on new technological and legal challenges, a California Court of Appeals ruled earlier this month that within the state,
“We hold that when employees must use their personal cell phones for work-related calls, Labor Code section 2802 requires the employer to reimburse them. Whether the employees have cell phone plans with unlimited minutes or limited minutes, the reimbursement owed is a reasonable percentage of their cell phone bills.”
And with that, a fresh set of headaches for companies and IT departments managing or allowing employee-owned devices used for work purposes is born.
By Victoria Harp
CDC leads the nation in responding to public health emergencies, such as outbreaks and natural disasters. While the agency encourages the public to be aware of personal and family preparedness, not all CDC staff follow those guidelines. In an effort to make personal preparedness part of workforce culture, CDC created the Ready CDC initiative. Targeting the CDC workforce living in metropolitan Atlanta, this program recently completed a pilot within the organization and is currently being evaluated for measurable improvements in recommended personal preparedness actions. Ready CDC is co-branded with the Federal Emergency Management Agency’s (FEMA) Ready.gov program, which is designed for local entities to adopt and make personal preparedness more meaningful to their communities. Ready CDC has done just that; the program uses a Whole Community approach to put personal preparedness into practice.
FEMA’s Whole Community approach relies on community action and behavior change at the local community level to instill a culture of preparedness. To achieve this with Ready CDC, the CDC workforce receives the following:
- The support needed to participate from their employer
- Consistent messaging from a trusted, valued source
- Localized and meaningful personal preparedness tools and resources
- Expertise and guidance from local community preparedness leaders
- Personal preparedness education that goes beyond the basic awareness level to practicing actionable behaviors such as making an emergency kit and a family disaster plan
Are you Ready CDC?
When the Office of Public Health Preparedness and Response Learning Office conducted an environmental scan and literature review, as well as an inward look at the readiness and resiliency of the CDC workforce, the need for a program like Ready CDC emerged. Although CDC has highlighted personal preparedness nationally in its innovative preparedness campaigns, there have been no formal efforts to determine whether, or to ensure that, the larger CDC workforce is prepared for an emergency. After all, thousands of people make up CDC’s workforce in Metro Atlanta, throughout the United States, and beyond.
The public relies upon those thousands of people to keep the life-saving, preventative work of CDC going 24/7. When the CDC workforce has their personal preparedness plans in place, they should be more willing and better able to work on behalf of CDC during a local emergency. Research has shown that individuals are more likely to respond to an event if they perceive that their family is prepared to function in their absence during an emergency*. Also, the National Health Security Strategy describes personal preparedness in its first strategic objective as a means to build community resilience.
Local Partnerships for the CDC
Ready CDC intends to move the dial by using its own workforce to understand behaviors associated with preparedness, including barriers to change. This is the most intriguing aspect of Ready CDC for the local community preparedness leaders involved. Most community-level preparedness education is currently conducted at the awareness level. Classes are taught and headcounts are taken, but beyond that, there is no feedback or follow-up to determine whether those efforts are leading to desired behavior changes. The Ready CDC intervention is currently being measured and studied, and that has local community preparedness leaders around metro Atlanta very interested in its outcomes.
While CDC has subject matter experts on many health-related topics, CDC looked to preparedness experts in and around the Metro Atlanta community to help make Ready CDC a locally-sustainable intervention. After all, the best interventions are active collaborations with community partners**. Key community partners from the American Red Cross; Atlanta-Fulton County, DeKalb County, and Gwinnett County Emergency Management Agencies; and the Georgia Emergency Management Agency played ongoing and significant roles in developing the program content, structure, and sustainability needed for CDC’s Metro Atlanta workforce. CDC gets the benefit of their time and expertise while partners have the satisfaction of knowing their efforts are making a difference in and contributing to the resilience of their communities. Also, because of these great partnerships, one lucky class participant wins a family disaster kit courtesy of The Home Depot and Georgia Emergency Management Agency.
Do you have a cybersecurity emergency plan in place? If you do, are you confident in your cybersecurity plan? If you answered both of these questions with a yes, pat yourself on the back for a job well done. And then volunteer some advice to your business peers because you are in the minority.
A new study by the SANS Institute, sponsored by AccessData, AlienVault, Arbor Networks, Bit9 + Carbon Black, HP and McAfee/Intel Security, found that 90 percent of American businesses don’t have a very effective cybersecurity emergency plan. The top reasons cited for the lack of an effective plan were lack of time and lack of budget, at 62 percent and 60 percent, respectively.
So, the companies that are already spending time and money on some sort of cybersecurity emergency plan don’t have one as good as they’d like. But these companies are also in the minority, as 43 percent don’t have any type of formal emergency response plan and 55 percent don’t have a response team. That could be a fatal mistake, especially considering that more than half claimed to have had at least one critical incident requiring a response over the past two years.
Banks may be undermining their own efforts at Big Data, according to a recent Information Week column.
“When faced with the requirements of a new big data initiative, banks too often only draw on prior experience and attempt to leverage familiar technologies and software-development-lifecycle (SDLC) methodologies for deployment,” writes Michael Flynn, managing director in AlixPartners' Information Management Services Community.
The problem: Those technologies enforce structure and focus on optimizing processing performance. That means the data is aggregated and normalized in an environment that works against Big Data sets in three ways, Flynn explains:
(MCT) — Dr. Diane Weems knew the virus was on their minds, so the acting director of the East Central Health District just launched into it at last week’s meeting of the Richmond County Board of Health.
“OK, does anyone have questions about Ebola?” she asked board members.
The lethal outbreak in Africa has prompted a lot of unneeded fear even among health care workers who might not understand that it takes more than casual contact to cause an infection, she said.
Augusta and Georgia have faced far bigger public health threats in the past and will likely face worse in the future, experts said.
The problem with the outbreak in West Africa, where nearly 2,000 people have been infected and more than 1,000 people have died, is that unlike past outbreaks in self-contained rural villages, this one is occurring in more populated areas, Weems said. These countries also lack a good public health infrastructure, and health workers might not be following common infection control precautions, such as wearing gloves, she said.
As the trend for larger and more frequent wildfires continues, a team of scientists, engineers, technologists, firefighters and government and industry professionals is working on a project, called WIFIRE, to build an end-to-end cyberinfrastructure for simulation, prediction and visualization of wildfire behavior.
The WIFIRE system will analyze wildfire dynamics with specific emphasis on the climate. The system will integrate heterogeneous satellite information and remote sensor data using computational techniques such as signal processing, visualization, modeling and data assimilation to develop a scalable method to monitor weather patterns and predict the spread of a wildfire.
The project started with a three-year, $2.65 million grant to the University of California at San Diego in October 2013 when participants in the project began integration and cataloging of data from sensors, satellites and scientific models to create scalable wildfire models. Participants include the San Diego Supercomputer Center (SDSC), the California Institute for Telecommunications and Information Technology’s Qualcomm Institute and the University of Maryland.
Land Cover Atlas helps communities “see” vulnerabilities and craft stronger resilience plans
A new NOAA nationwide analysis shows that between 1996 and 2011, 64,975 square miles in coastal regions — an area larger than the state of Wisconsin — experienced changes in land cover, including a decline in wetlands and forest cover, with development a major contributing factor.
Overall, 8.2 percent of the nation’s ocean and Great Lakes coastal regions experienced these changes. In an analysis of the five-year period between 2001 and 2006, coastal areas accounted for 43 percent of all land cover change in the continental U.S. This report identifies a wide variety of land cover changes that can intensify climate change risks, such as loss of coastal barriers to sea level rise and storm surge, and includes environmental data that can help coastal managers improve community resilience.
"Land cover maps document what's happening on the ground. By showing how that land cover has changed over time, scientists can determine how these changes impact our planet’s environmental health," said Nate Herold, a NOAA physical scientist who directs the mapping effort at NOAA's Coastal Services Center in Charleston, South Carolina.
Among the significant changes were the loss of 1,536 square miles of wetlands, and a decline in total forest cover by 6.1 percent.
The findings mirror similar changes in coastal wetland land cover loss reported in the November 2013 report, Status and Trends of Wetlands in the Coastal Watersheds of the Conterminous United States 2004 to 2009, an interagency supported analysis published by the U.S. Fish and Wildlife Service and NOAA.
This new NOAA analysis adds to the 2013 report with more recent data and includes loss of forest cover in an overall larger land area survey. Both wetlands and forest cover are critical to the promotion and protection of coastal habitat for the nation’s multi-billion dollar commercial and recreational fishing industries.
Development was a major contributing factor in the decline of both categories of land cover. Wetland loss due to development totaled 642 square miles, a disappearance rate averaging 61 football fields lost daily. Forest changes overall totaled 27,515 square miles, an area equal to West Virginia, Rhode Island and Delaware combined. This total impact, however, was partially offset by reforestation growth. Still, the net forest cover loss was 16,483 square miles.
These findings, and many others, are viewable via the Land Cover Atlas program from the NOAA’s Coastal Change Analysis Program (C-CAP). Standardized NOAA maps allow scientists to compare maps from different regions and maps from the same place but from different years, providing easily accessible data that are critically important to scientists, managers, and city planners as the U.S. population along the coastline continues to grow.
“The ability to mitigate the growing evidence of climate change along our coasts, with rising sea levels already impacting coastlines in ways not imagined just a few years ago, makes the data available through the Land Cover Atlas program critically important to coastal resilience planning,” said Margaret Davidson, National Ocean Service senior advisor for coastal inundation and resilience science services.
C-CAP data identify a wide variety of land cover changes that can intensify climate change risks — for example, forest or wetland losses that threaten to worsen flooding and water quality issues or weaken the area’s fishing and forestry industries. The atlas’s visuals help make NOAA environmental data available to end users, enabling them to help the public better understand the importance of improving resilience.
“Seeing changes over five, 10, or even 15 years allows Land Cover Atlas users to focus on local hazard vulnerabilities and improve their resilience plans,” said Jeffrey L. Payne, Ph.D., acting director for NOAA’s Coastal Services Center. “For instance, the atlas has helped its users assess sea level rise hazards in Florida’s Miami-Dade County, high-risk areas for stormwater runoff in southern California, and the best habitat restoration sites in two watersheds of the Great Lakes.”
Selected Regional Findings – 1996 to 2011:
The Northeast region added more than 1,170 square miles of development, an area larger than Boston, New York City, Philadelphia, Baltimore, and the District of Columbia combined.
The West Coast region experienced a net loss of 3,200 square miles of forest (4,900 square miles of forests were cut while 1,700 square miles were regrown).
The Great Lakes was the only region to experience a net wetlands gain (69 square miles), chiefly because drought and lower lake levels changed water features into marsh or sandy beach.
The Southeast region lost 510 square miles of wetlands, with more than half this number replaced by development.
The Gulf Coast region lost 996 square miles of wetlands due to land subsidence and erosion, storms, man-made changes, sea level rise, and other factors.
On a positive note, local restoration activities, such as in Florida’s Everglades, and lake-level changes enabled some Gulf Coast and Southeast region communities to gain modest-sized wetland areas, although such gains did not make up for the larger regional wetland losses.
C-CAP moderate-resolution data on the Land Cover Atlas encompasses the intertidal areas, wetlands, and adjacent uplands of 29 states fronting the oceans and Great Lakes. High-resolution data are available for select locations.
All C-CAP data sets are featured on the Digital Coast. Tools like the Digital Coast are important components of NOAA’s National Ocean Service’s efforts to protect coastal resources and keep communities safe from coastal hazards by providing data, tools, training, and technical assistance. Check out other products and services on Facebook or Twitter.
NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter and our other social media channels.
What if there was only a single BCM/DR methodology that all organizations would follow? Would it be able to address the specific concerns of particular industries or generalize to the point where it adds no value? Would it be able to address all situations, all possible scenarios and all industries in all countries? How could any single methodology address every situation and every minute detail; taking into account language interpretation, definitions and culture? Could it be done?
If everything were the same and the same perspectives applied, it would make sense for what satisfies the needs of a manufacturer to follow the same rationale that suits an insurance company. But that is impossible, isn’t it? There are concerns a manufacturer has that an insurance company wouldn’t. That’s like saying what is good for one person is good for another. Well, we know that’s not correct because we are all individuals with our own wants, needs, desires… and dislikes.
There are thousands upon thousands of organizations in the world, so how can we ever expect that each of these will need the exact same BCM/DR solution or framework? We can’t.
Hospitals nationwide are hustling to prepare for the first traveler from West Africa who arrives in the emergency room with symptoms of infection with the Ebola virus.
Dr. Thomas R. Frieden, director of the Centers for Disease Control and Prevention, has said such a case is inevitable in the United States, and the agency this month issued the first extensive guidelines for hospitals on how to recognize and treat Ebola patients.
The recommendations touch on everything from the safe handling of lab specimens to effective isolation of suspected Ebola patients. But one piece of advice in particular has roused opposition from worried hospital administrators.
The C.D.C. says that health care workers treating Ebola patients need only wear gloves, a fluid-resistant gown, eye protection and a face mask to prevent becoming infected with the virus. That is a far cry from the head-to-toe “moon suits” doctors, nurses and aides have been seen wearing in television reports about the outbreak.
Big Data isn’t just Hadoop and in-memory anymore. Big data technologies and tools have grown significantly over the past few years — so much so that it’s hard to keep up with them.
If you’d like to get up to speed and are primarily interested in open source solutions, I recommend this CIOL.com column by Virenda Gupta, senior vice president at Huawei Technologies India.
He discusses new open source solutions in the areas of Big Data processing, analytics and mining. He also addresses Big Data virtualization, where he sees a shortage of comprehensive platforms.
(MCT) — Ebola is not the virus that keeps Marc Lipsitch up at night.
Lipsitch, a Harvard epidemiologist who grew up in Atlanta, is on a mission to eradicate human-engineered strains of deadly pathogens such as the H5N1 “bird flu.” Those strains exist only in a handful of labs, where they have been genetically altered to make the virus more contagious.
H5N1, which first infected humans in 1997 in China, has killed about 60 percent of the almost 700 people who have been diagnosed with it. Nearly all of them got sick through contact with infected birds; in nature, H5N1 does not pass easily from person to person.
If it acquires that ability in the wild before scientists have developed effective vaccines and treatments, many millions of people are likely to die.
(MCT) — The mean season has arrived.
During the 10-week stretch from mid-August through October, the most powerful storms tend to form in the Atlantic, Caribbean and Gulf of Mexico. It's also when South Florida is most likely to be struck.
"Almost all South Florida hurricanes and the vast majority of tropical storms have struck our area during these months," said meteorologist Robert Molleda of the National Weather Service in Miami.
Normally, ocean waters heat up and wind shear eases during the heart of the season, allowing tropical systems to form and grow.
The good news this year: Tropical waters should remain cooler than normal and wind shear stronger than normal over the entire Atlantic basin for the remainder of the hurricane season.
The Business Continuity Institute is pleased to welcome its first Associate Fellow (AFBCI) since the new grade was created. Having completed a rigorous assessment process, Johannes Muellenberg now has the honour of being able to call himself an AFBCI and gain extra recognition through the use of those letters after his name.
Earlier in the year, the BCI launched its AFBCI grade in order to meet the growing demand from its members, many of whom have contributed significantly to the industry and the Institute but are not yet eligible to become a Fellow. The AFBCI grade sits between MBCI and FBCI, and successful candidates must have demonstrated their commitment to the industry through their years of experience, and a commitment to ongoing learning through their participation in a continuous professional development (CPD) scheme.
To find out more information on BCI membership grades, please click here.
No two disasters are ever the same and business continuity practitioners should never base their plans directly on an individual experience, but case studies still provide an extremely helpful tool when it comes to thinking about what organisational disruptions may occur and how they can be dealt with. That is the purpose of a new book titled ‘In hindsight: a compendium of business continuity case studies’ launched in July at Missenden Abbey in Buckinghamshire, UK, a tribute to the venue where the idea for the book was originally conceived.
In hindsight was edited by Robert Clark MBCI and authored by several people from the field of resilience who all (with one exception) came together when studying at Buckinghamshire New University under the tutelage of Philip Wood AMBCI who provided the preface for the book. In his preface he states "I have found it to be an interesting, thought provoking and stimulating collection of studies and I have learned a great deal from reading it. Learning is key to understanding, and understanding allows us to make the right decisions.”
This compendium of business continuity case studies contains fascinating examples showing the diverse range of issues that organisations could have to deal with. With stories ranging from financial crises (collapse of Barings Bank) to industrial disasters (Piper Alpha), from disease outbreaks (SARS) to natural disasters (UK flooding of 2007), from product recalls (Toyota’s 8 million cars in 2009/10) to crowd management (Dusseldorf Love Parade in 2010), this book is packed with case studies of various incidents demonstrating what happened, how it was dealt with and an additional focus on what went well and what didn’t go well.
In explaining why ‘hindsight' is perhaps the perfect theme for a book, Robert Clark highlighted that “we tend not to look back enough on what has happened in the past in order to learn from it. That's why this book is not just about theory, it is about looking at past incidents and identifying how an effective business continuity management system could have made the situation better.”
Disasters will always happen but if we can learn from each one then we can improve on the outcome the next time something similar happens. To find out more about this book, please click here.
(MCT) — Nearly 24 hours after witnessing the devastation himself, Gov. Rick Snyder today declared a disaster for metro Detroit counties in the wake of a historic flood that left a huge path of destruction across the region.
Thousands of flooded basements and raw sewage spills. Wrecked cars. A massive sinkhole. Ongoing traffic nightmares.
Metro Detroit is dealing with all of this — and more. Adding to the chaos, scavengers are now going through water-logged debris that people are putting out on the curb for trash. Where that ends up is uncertain, triggering yet more public health concerns.
The devastation has left local officials exasperated and pleading for help, saying there is no way their communities can handle this on their own. They are in dire need of state and federal aid, they say. And it needs to come fast.
With the Northern Hemisphere now in the midst of hurricane, typhoon and cyclone season, many businesses have emergency plans in place, plywood to board the windows, and generators at the ready. But a new study from economists Solomon M. Hsiang of Berkeley and Amir S. Jina of Columbia, “The Causal Effect of Environmental Catastrophe on Long-Run Economic Growth,” found it is far more difficult for the overall economy to weather the storm.
As Rebecca J. Rosen explained in The Atlantic, economists previously had four competing hypotheses about the impact of destructive storms: “Such a disaster might permanently set a country back; it might temporarily derail growth only to get back on course down the road; it might lead to even greater growth, as new investment pours in to replace destroyed assets; or, possibly, it might get even better, not only stimulating growth but also ridding the country of whatever outdated infrastructure was holding it back.”
After looking at 6,712 cyclones, typhoons, and hurricanes that occurred between 1950 and 2008 and the subsequent economic outcomes of the countries they struck, Hsiang and Jina were able to decisively strike down most of these hypotheses. “There is no creative destruction,” Jina said. “These disasters hit us and [their effects] sit around for a couple of decades.”
In 2012, when Superstorm Sandy struck the East Coast, thousands of residents were displaced from their homes. In wake of the panic and chaos, Airbnb, an online platform where people list and book accommodations around the world, saw an opportunity to leverage its existing services for neighbors to help neighbors. During the disaster, 1,400 Airbnb hosts — who typically collect payment for accommodations — opened their homes and cooked meals for those left stranded.
After Sandy, Airbnb reached out to the San Francisco Department of Emergency Management to share what it learned and discuss how it could reach a broader audience during an emergency. Simultaneously, the company was in discussions with officials in Portland, Ore., about an initiative to help civic leaders and community members work together to create a more shareable and livable city.
Over a series of articles, Hilary Estall, Director of Perpetual Solutions, will be discussing subject areas aimed at those managing a business continuity management system (BCMS) and in particular, those systems certified to ISO 22301. With her pragmatic approach to management systems and auditing in particular, Hilary will offer an insight into areas not widely discussed but still important for the ongoing success of a BCMS.
In the second article of the series, Hilary Estall looks at what’s involved when a certified BCMS reaches its recertification point. What does this mean and what’s involved?
In this article I demystify the process of recertification; the procedure undertaken by certification bodies every third year in the cycle of management system certification. I identify how an organization should prepare and the process of recertification itself. Is it just another audit or is there more to it?
If your organization has a certified business continuity management system (BCMS) you will know that in order to retain it, your certification body will carry out periodical audits. You will also know that when you first achieved certification and were issued with your certificate, it had an expiry date on it, three years hence*. What are the implications of this expiry date and how should you prepare for ‘renewal’?
When it comes to data restoration, addressing deleted mailboxes or emails is the most common request of IT administrators, according to new survey data from Kroll Ontrack.
When asked how often they receive requests for data restoration, 61 percent of the nearly 200 Ontrack PowerControls customers surveyed across EMEA, North America and APAC report they receive up to five email related restoration requests a month, with an additional 11 percent claiming up to 10 times a month.
In Europe, the second most common data restoration need was disaster recovery (16 percent), followed by missing data (12 percent). In the US, the second most common data restoration need was collection of electronic data for ediscovery (21 percent), followed by consolidating data from older to new applications to eliminate legacy servers (15 percent).
Requests for data restoration came from all departments across an organization, with 24 percent stemming from the internal legal department, 22 percent coming from IT security and 15 percent originating from sales and marketing. Why do these people need their email and documents back? 45 percent of IT administrator respondents note that employees request their email and documents back because they were accidentally deleted. Internal investigations (17 percent) ranked as the second most common source of restoration requests.
Historically, vendor solutions for disaster recovery have been created for on-site use for individual enterprises. The client company concerned was the sole owner of the user data involved, and disaster recovery could be implemented without having to worry about anybody else. The cloud computing model changes that situation. It’s possible to use cloud services to have your own dedicated servers and instances of applications, or to share physical space but still have your own application (as in multi-instance setups). However, multi-tenancy (perhaps the defining feature of cloud architectures) makes the application of disaster recovery solutions rather more delicate.
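The delicacy is one of scope: a restore in a multi-tenant system must roll back one tenant's data without touching anyone else's. A toy Python sketch of per-tenant snapshot and restore (the data model here is a hypothetical illustration, not any vendor's implementation):

```python
from collections import defaultdict
import copy

class MultiTenantStore:
    """Toy multi-tenant key-value store with per-tenant snapshot/restore."""

    def __init__(self):
        self._data = defaultdict(dict)       # tenant_id -> {key: value}
        self._snapshots = defaultdict(dict)  # tenant_id -> last snapshot

    def put(self, tenant_id, key, value):
        self._data[tenant_id][key] = value

    def snapshot(self, tenant_id):
        # Back up one tenant only; other tenants are untouched.
        self._snapshots[tenant_id] = copy.deepcopy(self._data[tenant_id])

    def restore(self, tenant_id):
        # Disaster recovery scoped to a single tenant.
        self._data[tenant_id] = copy.deepcopy(self._snapshots[tenant_id])

store = MultiTenantStore()
store.put("acme", "orders", 42)
store.put("globex", "orders", 7)
store.snapshot("acme")
store.put("acme", "orders", 0)   # a "disaster" corrupts acme's data
store.restore("acme")            # acme rolls back; globex is untouched
```

In a dedicated or multi-instance setup this scoping comes for free; in a shared multi-tenant store, isolating the restore path is exactly the part that becomes delicate.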
We talk about Big Data and, now, Small Data as if it’s always clear which one you’re dealing with: Big Data means volume, variety or velocity (or all three), and Small Data is the structured everything-else.
Of course, the reality isn’t always so binary, according to a panel of medical and pharmaceutical experts at the recent MIT Chief Data Officer and Information Quality Symposium.
SearchCIO.com covered the event, and, in a recent article, shared a few lessons from the panel’s trial-and-error approach to dealing with data variety. Mark Schreiber’s experience is a perfect example.
Codenomicon's discovery of OpenSSL's "Heartbleed" flaw this past spring highlighted the increasing importance of source code assurance and quality control as software grows in prominence in daily life. The Heartbleed memory leak opened the door for infiltrators to obtain passwords and security keys to decode encrypted data — a vulnerability that allegedly still threatens enterprise systems months after its discovery, according to a recent report.
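Heartbleed itself came down to trusting an attacker-supplied length field in a heartbeat request. A minimal Python sketch of the idea (an illustration of the class of bug, not OpenSSL's actual C code):

```python
def heartbeat_vulnerable(payload: bytes, claimed_len: int, memory: bytes) -> bytes:
    # Trusts the claimed length: if it exceeds the real payload,
    # adjacent "memory" leaks back to the requester (the Heartbleed bug).
    return (payload + memory)[:claimed_len]

def heartbeat_patched(payload: bytes, claimed_len: int, memory: bytes) -> bytes:
    # The fix: reject any request whose claimed length exceeds
    # the payload actually received.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload; request dropped")
    return payload[:claimed_len]
```

Asking the vulnerable version to echo 12 bytes after sending only a 2-byte payload hands back 10 bytes of adjacent data; the patched version drops the request instead.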
(MCT) — Karen Windon still gets chills when she thinks back on Hurricane Charley.
"We were right in the cross-hairs for a long time as Charley barreled up the Gulf of Mexico," Windon recalled Tuesday.
Windon, now a deputy administrator for Manatee County, Fla., was the county's public safety director in 2004.
"For me, it was a mixture of tense moments, and swelling pride, knowing we had such a committed team at the emergency operations center at that time," Windon said.
Although Manatee County escaped much of Charley's fury when a historic right turn on Aug. 13, 2004 directed the storm northeast through Punta Gorda and Arcadia, it proved to be a game changer.
It changed the local public perception of hurricanes from something to ride out to knowing there could be a dangerous killer on the loose. And Charley put emergency managers on notice that they needed to step up their games.
Manatee County officials got serious about building a stand-alone, hardened emergency operations center that could withstand such natural disasters as hurricanes. Officials moved ahead with plans for a new Public Safety Center that might otherwise have languished on a wish list for years.
Recently I did a remarkably silly thing. Something I hadn’t done in almost seventeen years as the proverbial travelling consultant.
I went to London. No, that’s not the silly thing – I go to London quite often and honestly it’s really not that bad there. Even for a country bumpkin like me. No, the silly thing came to light after I’d boarded the train and it was pulling out of the station. I opened my bag to take out my laptop and some papers so that I could start work and my laptop wasn’t there. I checked again. And again. But it still wasn’t there. After checking for a fourth time the penny finally dropped – I’d left my laptop at home. I was a couple of minutes into a two-hour train journey, all ready to get stuck in to some quality report writing time and my laptop, one of the main tools of my trade – if not the main tool – was sitting at home, rather than on the table in front of me.
After the initial panic attack subsided I remembered that I wasn’t presenting today, so at least I didn’t need my laptop for any of my meetings. And I had my phone, and lots of people tell me that’s all they need to be able to work. “I can just work from wherever I am, as long as I have my mobile phone and an internet connection” is an assertion I hear all the time. Well this was a perfect opportunity for me to put that theory to the test.
Luckily I had a charger with me, otherwise I’d have been in trouble from the off. Because the second thing I didn’t do last night – the first being to not spot the absence of a laptop when I checked the contents of my bag (yes I did actually check, or at least I thought I did – it was late) – was to charge my ‘phone. I have one of those ‘phones that you have to charge about every three and a half hours (you know the ones) so the 20% remaining battery life probably wouldn’t have got me halfway to London, let alone seen me through the day.
So I plugged in and off I went. I couldn’t work on the report that I’d planned to because, whilst I synchronise files between my desktop and laptop, I don’t store all of my data in the cloud as a matter of course. In fact I don’t store much there at all, particularly if it’s confidential. Call me old fashioned but I haven’t yet developed the same blind faith in 'the cloud' that many others have. I’m with one of my information security colleagues on this one – he recently said “I wish people would stop calling it ‘the cloud’ and start calling it ‘putting my data on someone else’s computers’.” Don’t get me wrong, I’m not saying 'the cloud' is all bad. And yes, I do use it. But I’m extremely selective about what I choose to put there. There are, after all, some significant advantages if it’s used properly. But the cloud is a big and often dimly-lit place and not every cloud is created equal. Call me a cynic but I largely think of 'the cloud', particularly the free bits of it, as a really convenient way of letting someone else delete, corrupt, leak, sell, give away, deny me access to or otherwise compromise my data so that I don’t have to do it myself. Which I personally think is a healthy attitude that others would do well to adopt.
But I digress. In any case, trying to write a proper report on a phone, as opposed to making a few notes, isn’t the easiest thing in the world to do. For a start, typing large amounts of text on a phone isn’t as easy as on a real keyboard, at least for anyone with normal sized fingers. Let alone the fact that my phone is constantly correcting what I type, which means I spend an inordinate amount of time correcting it back again. Then there are the compatibility issues (which I won’t go into here as it’ll probably just turn into a rant against Microsoft and Apple), which mean that you’re pretty much restricted to text only, without too much formatting and certainly nothing as weird and wonderful as a table.
But I digress again. At least I could start by sending a few e-mails. Except there was no network connection. On-board wifi hasn’t made much of an appearance on the trains from Evesham to London yet, at least not the peak time trains (for some reason you can get it at 2 o’clock in the afternoon, which is really useful for the majority of business travellers who actually have to get up in the morning). And the mobile phone signal is somewhat patchy for the first part of the journey. Funny how I can get a mobile signal at the top of a ski slope but not in the Cotswolds, despite the claims of 99% UK coverage by the mobile ‘phone companies (second rant suppressed).
So I read a couple of (paper) documents, wrote a bit of my blog, corrected the corrections, finally managed to send and receive some e-mails, did a bit of web browsing (albeit looking at stuff on a very small screen), popped a couple of headache tablets and arrived in London for my meetings.
Shortly before I got on the train home, my phone started bleating “low battery” at me again. “No matter”, I thought, “I’ll just charge it on the train”. Except the electrical sockets on this particular train weren’t working. So I had about twenty minutes of trying to access my e-mails (and failing, due to a glitch at my internet service provider – good old Sod’s Law!) and writing a few notes for later processing before my phone gave up the ghost. At which point I gave up too and read the paper instead.
So, how effective was my plan to “just work from wherever I am using my mobile ‘phone”? Well, I suppose I managed to do a bit, and significantly more than in the pre-smartphone days. But how effective was it really? Well I think the answer to that is fairly evident. I reckon I probably achieved fifteen to twenty percent of what I’d have been able to do had I had my laptop to hand.
Yes, remote working is eminently possible – I do it all the time – but its effectiveness is hugely dependent on the tools available and the type of work that you’re trying to do remotely. Even working at home can be problematical and far less efficient than working in an office, if that’s what you normally do. And if you’re a laptop user and you don’t have it with you (which is a distinct possibility if you’re one of the many, many people who leave their laptops in the office when they go home) remote working can be trickier still.
And yes, there are all sorts of things that can be done with a smartphone (aside from checking Facebook or tweeting), particularly if your job largely involves phoning and emailing people and making a few notes. But in my experience their usefulness is limited and they’re really no substitute for a proper computer if you have things like reports to write (or read) or large, complicated spreadsheets to deal with, amongst other things. And, whilst they may be OK for a short period, I challenge anyone to work effectively for anything more than a very short time using just their smartphone.
So next time someone says to you “I can just work from wherever I am, as long as I have my mobile phone and an internet connection,” I strongly suggest you challenge them to prove it. Because some things are a lot easier said than done.
Andy Osborne is the Consultancy Director at Acumen, and author of Practical Business Continuity Management. You can follow him on Twitter and his blog or link up with him on Linked In.
I’m really very grateful that Education Month has given me an opportunity to focus on what interests me. Primarily, I’m interested in business continuity (as an element of a wider interest in Organisational Resilience). I’m also interested in improvement; that means self, organisation, personal and professional. And of course, I’m interested in education. I like to learn from my peers, colleagues, students and business partners, as well as by studying and maintaining focus on what is going on around us every day. And because it’s my job, I want to share that enthusiasm and interest with others; in this case, you.
Business continuity is one of those industries/professions/sectors that is on a growth trajectory. It needs to be, as it works in an environment rife with influences that may engender or initiate change and thus shape the risk and impact landscapes: from globalized business activity to changes in national and international power balances, and from political reorientations to the emergence of technology-enabled ‘people power’. There is much speculation, theorising and pontificating about what is coming, how it should be influenced or could be controlled, and how we deal with impacts. And while an immense amount of opinion and theory is put forward daily from all quarters concerning human behaviour and its effects on others (political, economic, social and technological impacts, by implication), it is also worth considering ideas, theories and opinions on the less easily quantifiable and controllable. These are all areas for thought, concern and, yes, education.
So, if we are aware of the potential problems, what’s the problem? Well, there are thousands of business continuity professionals (that is what you are: professionals) out there who are undereducated, or perhaps miseducated, or maybe even not specifically educated at all. You may have been trained; but ‘educated’ is a different thing. Of course, you will know things, processes, functions, problems and issues and you will be adept in your role, and if that’s OK with you, then that’s OK. The sector abounds with professionals who are working hard, mainly successfully, to do what needs to be done and in general, we don’t equate ourselves with reticence, lack of confidence or indecision; or indeed lack of self-awareness.
However, there are very many people who do hesitate when it comes to education. It is interesting. Maybe this hesitancy is not about cost; nor is it usually about obtaining support from employers. Usually, there is a fear of being overcome by the difficulties and challenges of learning, perhaps because they have been away from formal education for many years, or simply because they are familiar with training rather than the academic rigour of university programmes.
Well, simply put, there is nothing to be afraid of or worried about. If you decide to undertake an academic programme you can expect to be provided with advice, support, guidance and resources to allow you to grow into the mysteries of higher educational learning. In fact, here’s a little secret – there are no mysteries at all! Learning takes time; skills take practice, correction and amendment to perfect. It can be done and in fact, it is not intimidating or difficult at all. It does take hard work and application – but so does life.
Most importantly higher education learning doesn’t turn you into an academic; it enhances your professional capabilities. In fact, unless you are steeped in study on a daily basis, you are not an academic or a scholar – in reality, for those who undertake professional and academic courses as part of their CPD (continuing professional development), the clue is in the acronym - ‘CPD’! And importantly, it is not all about theory; education in the modern world and in the BC world should be about practical application.
So, in Education Month, perhaps it is worthwhile taking pause from your busy and demanding life and thinking about what you would like to be.
- Better paid? Education helps whether you study for a certificate, diploma, bachelors or master’s degree.
- More competitive? Education helps you to think about and analyse the world around you.
- Better at your job? Education helps you to learn and understand what you do and why – and what you should be doing and why.
- A thought leader? Education helps you to become a more effective thinker as well as an effective practitioner; win/win!
Education will not necessarily make you any better than anyone else. Just holding an award is meaningless if you are unable to make it work for you and if you cannot use and develop the skills and knowledge gained from your learning. But - if you’ve taken the time and trouble to read the Education Month blogs and other publicity then you must be interested and it may be time to transform your interest into reality.
Phil Wood is the Head of Enterprise, Security and Resilience within the Faculty of Design, Media and Management at Buckinghamshire New University in the UK.
Ultimate responsibility for ERM starts at the top. However, everyone who matters within an organization should participate in the ERM process.
While several executives have significant responsibilities for ERM, including the Chief Risk Officer, Chief Financial Officer, Chief Legal Officer and Chief Audit Executive, the ERM process works best when all key managers of the organization contribute. The COSO ERM framework states that managers of the organization “support the entity’s risk management philosophy, promote compliance with its risk appetite and manage risks within their [respective] spheres of responsibility consistent with risk tolerances.” Therefore, identifying leaders throughout the organization and gaining their support is critical to successful implementation of ERM.
A goal of ERM is to incorporate risk considerations into the organization’s agenda and decision-making processes. This means that ultimately, every manager is responsible, which can only happen when performance goals, including the related risk tolerances, are clearly articulated, and the appropriate individuals are held accountable for results.
One question often posed to me is how to think through some of the relationships a company has with its various third parties in order to reasonably risk rank them. Initially I would break this down into sales and supply chain to begin any such analysis. Anecdotally, it is said that over 95 percent of all Foreign Corrupt Practices Act (FCPA) enforcement actions involve third parties, so this is one area where companies need to give some thoughtful consideration. The key, however, is that a “check-the-box” approach may be not only inefficient but, more importantly, ineffective, because each compliance program should be tailored to an organization’s specific needs, risks and challenges. The information provided below should not be considered a substitute for a company’s own assessment of the corporate compliance program most appropriate for that particular business organization. In the end, if designed carefully, implemented earnestly, and enforced fairly, a company’s compliance program, no matter how large or small the organization, will allow the company, generally, to prevent violations, detect those that do occur, and remediate them promptly and appropriately.
By now, you’ve heard all the hoopla over IBM’s new brain-like chip. There’s little doubt that this is significant chip innovation, but what interests me is what this new development means for data.
Most of the news has focused on the similarities between SyNapse's TrueNorth and the human brain. Actually, as revealed last week, the technology represents 16 million neurons, which is a good deal short of the 100 billion neurons in the human brain, according to the UK’s University of Manchester Computer Engineering Professor Steve Furber.
Furber is a co-designer of the original ARM processor chip in the 1980s. For the past three years, he has worked on a project that would model 1 billion neurons, according to the UK Register.
During the process of developing a Business Continuity Plan or strategy it is easiest to focus on the larger picture; to understand the major impacts and potential roadblocks. But when putting that Plan on paper (figuratively or literally) it is time to think about more granular logistical needs and issues. One that is often overlooked is where – and how – the money will come from to pay for that recovery strategy. A good plan must document that process, or create one if it doesn’t already exist.
Even if one assumes that the organization will pay any price to recover its business operations in the most timely manner possible, questions remain:
- Who has the authority to approve expenditures?
- What are the limitations of that authority?
- What is the process needed to gain approval of expenditures?
- How will expenses be documented?
- How will vendors and suppliers be paid?
If the Business Continuity Plan calls for moving personnel to another office many miles away, how will their transportation costs (airline or train tickets, fuel reimbursement) and lodging be paid?
CHRISTCHURCH, New Zealand — You don’t see it, but you certainly know when it’s not there: infrastructure, the miles of underground pipes carrying drinking water, stormwater and wastewater, utilities such as gas and electricity, and fiber-optics and communications cables that spread like veins and arteries under the streets of a city.
No showers, no cups of tea or coffee, no flushing toilets, no lights, no heating, and no traffic lights — a modern bustling city immediately shuts down. Factor in damaged roads, bridges, and retaining walls above ground, and the situation is dire.
That calamity hit Christchurch, New Zealand, in a series of earthquakes that devastated the city in 2010 and 2011.
Most people here don’t see the extent of repair work going on underground. They just notice roadworks and seemingly millions of orange cones that have sprouted up all over the city. Yet the organization created to manage Christchurch’s infrastructure rebuild has a vital role, and it’s become something of a global model for how to put the guts of a city back together again quickly and efficiently after a disaster.
By Charlie Maclean-Bristol
The first death caused by Ebola (officially Ebola virus disease (EVD)) outside Africa caught my eye this week: the victim was a Saudi national who had been visiting Sierra Leone.
Over the last few months the illness has spread among people in Guinea, Sierra Leone and Liberia, and the number of deaths has been growing.
At the time of writing there have been 932 deaths and over 1500 cases.
Apart from the first death outside Africa, the illness has recently spread to Nigeria, with one death and a number of other cases.
Nigeria's large population and strong links to Europe make it more likely that the illness could spread further.
By Tom Salkield
2014 started badly, severely testing the UK’s flood defences. Information security professionals have a similarly precarious feel as they work to continuously hold back a flood of ever more sophisticated attacks and protect their information assets. Cybercrime, like the weather, is often unpredictable, but organizations can gain a competitive advantage by making risk-based decisions and investments to focus resources and get the best return on investment in preventing costly breaches to their defences.
The coverage of the flood damage to many areas of the UK dominated the news earlier this year. The debate still rages between those who argue that more should have been invested in planning and delivering effective defences, and those who claim that the volume of rain meant there was little more that could have been done to prevent the devastation.
Tripwire, Inc., has published the results of a survey of 215 attendees at the Black Hat USA 2014 security conference in Las Vegas, Nevada.
Industry research shows most breaches go undiscovered for weeks, months or even longer. Despite this evidence, 51 percent of respondents said their organization could detect a data breach on critical systems in 24 to 48 hours, 18 percent said it would take three days and 11 percent said within a week.
According to the Mandiant 2014 Threat Report, the average time required to detect breaches is 229 days. The report also states that the number of firms that detected their own breaches dropped from 37 percent in 2012 to 33 percent in 2013.
“I think the survey respondents are either fooling themselves or are naively optimistic,” said Dwayne Melancon, chief technology officer for Tripwire. “A majority of the respondents said they could detect a breach in less than a week, but historical data says it is likely to be months before they notice.”
Agile project methodologies have their roots in the software industry, but the overall principle of staying close to market requirements can be applied in any sector. When risk management becomes difficult because of uncertainties like the weather or the economy, short agile cycles encourage a focus on objectives. This may make more sense than detailed planning that tries to put everything in place for the mid to long term. Efficiency and business continuity can be improved, on condition that communications remain open and productive with all stakeholders. So with these advantages, why don’t all organisations and projects jump on the agile bandwagon?
(MCT) — It’s been a little more than three months since the April 28 tornadoes ravaged a portion of Limestone County, and efforts continue to get residents back on their feet.
United Way of Athens-Limestone County has played an integral role in those efforts.
After the tornadoes, the nonprofit organization took on 75 long-term recovery cases, but that doesn’t include those who were provided other services, according to United Way Executive Director Kaye Young McFarlen.
Some needed quick, easy help on the front end. Others were more long-term and more involved.
Behind the media sensationalism and hyping of the story overall, there is little to dispute the fact that the Ebola virus is a regional emergency with the potential to become much more. Of course, this is the problem with every transmissible disease, and especially so in our age of international travel for business and pleasure. The symptoms and effects of this disease are particularly unpleasant and, if you have an interest, there is no shortage of descriptions of them available to you. Nor is there any shortage of warnings and pronouncements from governments and agencies such as the World Health Organisation about the spread and effects of Ebola.
How much of a risk is there? For me, the prime risk is the lack of awareness of the disease while other things are going on. Ebola can be added to the list of big things that are happening in 2014 (Ukraine, Syria, Iraq, Libya, Malaysian Airlines, natural disasters) that probably desensitise us to the gravity of each situation individually and their collective impact. Besides, Ebola is happening in Africa, mainly, and attracts the usual international public response – relative disinterest. Although the outbreak is in the news, the threat appears to be downplayed, certainly here in the UK, as being of remote concern.
Going mobile with your data? Don’t think you can forget data quality. In fact, data quality takes on a new importance when you’re dealing with enterprise mobility, warns David Akka, head of Magic Software’s UK branch.
“In an enterprise mobility project, we typically have the same challenge of presenting information from multiple systems to the user on a single screen, but mobile brings other challenges as well,” writes Akka in a recent Enterprise Apps Tech column. “For example, typing on a small touchscreen increases the chance that critical data may be misspelled (increasing the chance of duplicating customer records); and users are also far less likely to search multiple records, as they get frustrated faster on mobile.”
What surprised Akka, and prompted his blog post, is that he found that a major automotive industry company is outsourcing data quality to an external agency — despite the fact that data quality could easily be added into the integration workflow.
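Akka's point about touchscreen typos creating duplicate customer records can be addressed inside the integration workflow with a simple fuzzy match before a new record is created. A minimal Python sketch (the similarity threshold and sample names are illustrative assumptions, not Magic Software's actual approach):

```python
from difflib import SequenceMatcher

def likely_duplicates(new_name: str, existing: list, threshold: float = 0.85) -> list:
    """Return existing customer names similar enough to the new entry
    to be probable touchscreen misspellings of the same record."""
    new_norm = new_name.strip().lower()
    return [
        name for name in existing
        if SequenceMatcher(None, new_norm, name.strip().lower()).ratio() >= threshold
    ]

existing = ["Jonathan Smithers", "Acme Logistics Ltd", "Beatrice Okafor"]
# "Jonathon Smthers" is a plausible small-screen typo of the first record.
matches = likely_duplicates("Jonathon Smthers", existing)
```

A check like this, run at record-creation time, is the kind of data quality step that can live in the integration layer rather than being outsourced wholesale.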
Forecasters with NOAA’s Climate Prediction Center now say the chances of a below-normal Atlantic hurricane season have increased to 70 percent, up from 50 percent in May.
In its updated outlook, NOAA said overall atmospheric and oceanic conditions that are not favorable for storm development will persist through the season.
Check out the revised numbers in this NOAA graphic:
However, coastal residents may want to heed the words of NOAA lead forecaster Dr. Gerry Bell:
By now, you’ve heard about the Russian gang of hackers who allegedly gathered more than a billion user names and passwords and a lot of other information. How did you react to the news? I kind of shrugged my shoulders about it. It’s news, sure, but as someone who reads about breaches daily and gets regular updates about what’s happening in the state of cybersecurity, my reaction was this: What user names and passwords could they have that haven’t already been breached at some point?
I’m not the only one who said this. Shortly after I told some friends on Facebook that they shouldn’t panic, I got this comment in an email from John Prisco, CEO with Triumfant:
This issue reminds me of an iceberg, where 90 percent of it is actually underwater. That’s what is going on here with the news of 1.2 billion credentials exposed. So many cyber breaches today are not actually reported, often times because companies are losing information and they are not even aware of it. Today, we have learned of a huge issue where it seems like a billion passwords were stolen overnight, but in reality the iceberg has been mostly submerged for years – crime rings have been stealing information for years, they’ve just been doing it undetected because there hasn’t been a concerted effort on the part of companies entrusted with this information to protect it.
Over a series of articles, Hilary Estall, Director of Perpetual Solutions, will be discussing subject areas aimed at those managing a business continuity management system (BCMS) and in particular, those systems certified to ISO 22301. With her pragmatic approach to management systems and auditing in particular, Hilary will offer an insight into areas not widely discussed but still important for the ongoing success of a BCMS.
In her first article, Hilary Estall shares her thoughts on becoming a BCMS Lead Auditor and explores why people sometimes mistakenly opt for this particular auditor classification when more appropriate options may be available:
In this article I consider the role of the Lead Auditor and why so many individuals opt for this route for their auditor training. It’s a subject close to my heart and one which, in my opinion, is misrepresented and therefore misunderstood by those seeking auditor training. Whilst it’s not limited to business continuity management system standards, this is the context in which I have written the article.
Dr. Steven Goldman identifies ten business continuity and disaster recovery trends that are emerging, highlighting actions that business continuity managers can take in response to each item.
10: There has been an overall worldwide increase in the number of natural disasters
As a trend, the incidence of natural disasters worldwide has steadily increased, especially since the 1970s, according to reports from the New England Journal of Medicine (NEJM) and from global insurer Munich Re.
Climate-related disasters include hydrological events such as floods, storm surge, and coastal flooding, while meteorological events include storms, tropical cyclones, local storms, heat/cold waves, drought, and wildfires. There were three times as many natural disasters between 2000 and 2009 as between 1980 and 1989, and the NEJM notes that a vast majority (80 percent) of this growth is due to climate-related events. As a result, the economic damage caused by these natural disasters has seen a steady upturn, which in turn means that companies and organizations need to be prepared for them.
The number of geophysical disasters has remained fairly stable since the 1970s. Geophysical disasters include earthquakes, volcanoes, dry rock falls, landslides, and avalanches.
What does this mean to you?
The conventional wisdom is that if you fight Mother Nature, she always wins. However, this does not mean you surrender! It means that companies and organizations need to be prepared for whatever Mother Nature can dish out. Remember Hurricane Sandy? Many companies in the northeast USA were battered, but several not only survived but also continued operations. How? Planning, preparation, and execution.
The world around us is constantly changing. Some say we now live and work in a VUCA environment, characterised by volatility, uncertainty, complexity and ambiguity.
So how do businesses survive (and thrive) when nothing ever stands still? Perhaps part of the answer is in continuous learning and development, which can enable individuals to be agile and responsive to each and every challenge.
GALVESTON — Nearly six years after Hurricane Ike, one of the nation’s deadliest hurricanes, struck this city, boarded-up and dilapidated houses and empty lots still punctuate the streets. Many houses that remain are decorated with “for sale” signs.
“This was a thriving neighborhood,” Tina Kolunga said as she drove down a street lined with abandoned houses. On another street, she pointed out large patches of grass where homes and public housing used to sit. Kolunga still lives in Galveston, though she struggled for years after Ike to rebuild her home.
“This used to be one of the busiest restaurants in town,” Kolunga said, pointing out a rundown white building still worn from water damage.
As Kolunga toured the damage that remains years after Ike, recounting the ongoing recovery struggles of her neighbors, state lawmakers across town worried about the future of this coastal town and the surrounding region. At a hearing of the Joint Interim Committee to Study a Coastal Barrier System, held Monday on Texas A&M University's Galveston campus just a few miles from Kolunga's neighborhood, experts told legislators that the coast is still not adequately prepared for a hurricane like Ike, which in September 2008 left billions of dollars of damage and at least 100 people dead in its wake.
Over the past several years, most (but not all) states made strides in reducing their inventory of bridges in poor condition.
Friday marked the seven-year anniversary of the I-35 bridge collapse in Minneapolis. The tragedy and subsequent bridge failures have helped focus public attention on the issue, leading some lawmakers to support additional investment in infrastructure.
A Governing story published in June examined how some states managed to significantly cut their tallies of structurally deficient bridges.
In the six years following the Minneapolis collapse, the number of structurally deficient bridges declined 14 percent nationwide.
Presidential Policy Directive 8: National Preparedness requires an annual National Preparedness Report (NPR) that summarizes national progress in building, sustaining and delivering the 31 core capabilities outlined in the National Preparedness Goal (the Goal). The intent of the NPR is to provide the Nation—not just the federal government—with practical insights on core capabilities that can inform decisions about program priorities, resource allocation, and community actions. This report marks the third annual NPR, updating and expanding upon findings from the previous two years. The 2014 NPR highlights accomplishments achieved or reported during 2013.
In 2014 the Nation faced a range of incidents that challenged our collective security and resilience and confirmed the need to enhance preparedness across the whole community. Incidents such as the Boston Marathon bombings, wildfires, drought, and mass shootings, along with the ongoing management of several long-term recovery efforts, required activating capabilities across the five mission areas outlined in the Goal—Prevention, Protection, Mitigation, Response and Recovery.
Overarching Findings on National Issues
In addition to key findings for each of the 31 core capabilities, the 2014 NPR outlines cross-cutting findings that involve multiple mission areas:
- Embracing a new approach to disaster recovery: Major events, such as Hurricane Sandy and the severe 2012-2013 drought, have served as catalysts for change in national preparedness programs, drawing clearer links between post-disaster recovery and pre-disaster mitigation activities.
- Launching major national initiatives: The Federal Government has initiated several national-level policy and planning initiatives that bring unity of effort to preparedness areas, including critical infrastructure security and resilience, cybersecurity, recovery capabilities, and climate change.
- Managing resource uncertainties: Budget uncertainties have created preparedness challenges at state and local levels of government, resulting in increased ingenuity, emphasis on preparedness innovations, and whole community engagement.
- Partnering with tribal nations: Tribal partners are now more systematically integrated into preparedness activities. However, opportunities remain for Federal agencies and tribal nations to increase engagement and expand training opportunities on relevant policies.
The Nation Continues to Make Progress
The 2014 NPR identifies five core capabilities that require ongoing sustainment to meet expected future needs: Interdiction and Disruption, On-scene Security and Protection, Operational Communications, Public and Private Services and Resources, and Public Health and Medical Services.
Opportunities for Improvement
The 2014 NPR identifies the following core capabilities as national areas for improvement: Cybersecurity, Health and Social Services, Housing, Infrastructure Systems, and Long-term Vulnerability Reduction. Cybersecurity, Health and Social Services, and Housing have been areas for improvement for three consecutive years. Several ongoing initiatives, including implementation of Executive Order 13636 on Improving Critical Infrastructure Cybersecurity, Presidential Policy Directive 21 on Critical Infrastructure Security and Resilience, and the Hurricane Sandy Rebuilding Strategy will enable continued progress in these areas.
Key Factors for Future Progress
The 2014 NPR represents the third opportunity for the Nation to reflect on progress in strengthening national preparedness and to identify where preparedness gaps remain. Looking across all 31 core capabilities outlined in the Goal, the NPR provides a national perspective on critical preparedness trends for whole community partners to use to inform program priorities, to allocate resources, and to communicate with stakeholders about issues of shared concern.
Ebola is the big news story of the moment, all the media are covering it and they seem to be competing with each other to raise the fear level. ‘Out of control’, ‘deadly’, ‘terror’, all those words appear in even the more restrained media publications.
Many of us in the business continuity field will have had someone ask what planning we should do; hopefully this article will help you with that and also give you ammunition to combat some of the media excesses!
The ‘not invented here’ syndrome was something that forward-looking corporations set out to beat about 20 years ago. If a different product or service could be more cost-effectively bought in rather than being designed and manufactured in-house, then it was bought in. The challenge was to overcome misplaced pride and internal turf wars, where being asked to give up control over development could be construed as an attack on credibility, status or both. Some departments resisted by refusing to work with something that was ‘not invented here’. Now, Disaster Recovery as a Service (DRaaS) may be plagued with a similar issue, where companies cannot look outside what they already have – but for a different reason.
Master data management (MDM) solutions are used for much more than customer and product master data. According to a recent Information Difference report, MDM is also used for asset, location, supplier, finance and personnel data.
“Indeed it has become quite common for MDM efforts to begin in a relatively low-key area such as maintaining relatively stable reference data (country codes, etc.) as a toe in the water before broadening the initiative out to deal with more volatile master data domains,” the “MDM Landscape Q2 2014” report states. You can read the full report on The Information Difference’s site.
That doesn’t mean it’s a good idea to broadly apply MDM technology. As both Forrester and Gartner analysts as well as several IT Business Edge readers have pointed out, MDM is often misunderstood and misapplied within organizations.
Hawaii Residents and Visitors Urged to Follow Direction of Local Officials
WASHINGTON – The Federal Emergency Management Agency (FEMA), through its National Watch Center in Washington and its Pacific Area Office in Oahu, is continuing to monitor Hurricanes Iselle and Julio in the Pacific Ocean. FEMA is in close contact with emergency management partners in Hawaii.
According to the National Weather Service, Hurricane Iselle is about 900 miles east southeast of Honolulu with sustained winds of 85 MPH, and Hurricane Julio is about 1,650 miles east of Hilo, Hawaii, with sustained winds of 75 MPH. Tropical storm conditions are possible on the Big Island of Hawaii on Thursday. These adverse weather conditions may spread to Maui County and Oahu Thursday night or Friday. A tropical storm warning is in effect for Hawaii County, and tropical storm watches are in effect for Maui County and Oahu.
“I urge residents and visitors to follow the direction of state and local officials,” FEMA Administrator Craig Fugate said. “Be prepared and stay tuned to local media – weather conditions can change quickly as these storms approach.”
When disasters occur, the first responders are local emergency and public works personnel, volunteers, humanitarian organizations and numerous private interest groups who provide emergency assistance required to protect the public's health and safety and to meet immediate human needs.
Although there have been no requests for federal disaster assistance at this time, FEMA has personnel on the ground who are positioned in the Pacific Area Office year round. An Incident Management Assistance Team has also been deployed to Hawaii to coordinate with state and local officials, should support be requested or needed.
At all times, FEMA maintains commodities, including millions of liters of water, millions of meals and hundreds of thousands of blankets, strategically located at distribution centers throughout the United States and its territories.
Safety and Preparedness Tips
- Residents and visitors in potentially affected areas should be familiar with evacuation routes, have a communications plan, keep a battery-powered radio handy and have a plan for their pets.
- Storm surge can be the greatest threat to life and property from a tropical storm or hurricane. It poses a significant threat for drowning and can occur before, during, or after the center of a storm passes through an area. Storm surge can sometimes cut off evacuation routes, so do not delay leaving if an evacuation is ordered for your area.
- Driving through a flooded area can be extremely hazardous and almost half of all flash flood deaths happen in vehicles. When in your car, look out for flooding in low lying areas, at bridges and at highway dips. As little as six inches of water may cause you to lose control of your vehicle.
- If you encounter flood waters, remember – turn around, don’t drown.
- Get to know the terms that are used to identify severe weather and discuss with your family what to do if a watch or warning is issued.
For a Tropical Storm:
- A Tropical Storm Watch is issued when a tropical cyclone with sustained winds of 39 MPH or higher poses a possible threat, generally within 48 hours.
- A Tropical Storm Warning is issued when sustained winds of 39 MPH or higher associated with a tropical cyclone are expected in 36 hours or less.
For Flash Flooding:
- A Flash Flood Watch is issued when conditions are favorable for flash flooding.
- A Flash Flood Warning is issued when flash flooding is imminent or occurring.
- A Flash Flood Emergency is issued when a severe threat to human life and catastrophic damage from a flash flood are imminent or ongoing.
More safety tips on hurricanes and tropical storms can be found at www.ready.gov/hurricanes.
Everyone likes to get new stuff. Heck, that’s what Christmas is all about, and why it has emerged as a primary driver of the world economy.
In the data center, new stuff comes in the form of hardware and/or software, which lately have formed the underpinnings of entirely new data architectures. But while capital spending decisions almost always focus on improving performance, reducing costs or both, how successful has the IT industry been in achieving these goals over the years?
According to infrastructure consulting firm Bigstep, the answer is not very. The group recently released an admittedly controversial study that claims most organizations would see a 60 percent performance boost by running their data centers on bare metal infrastructure. Using common benchmarks like Linpack, SysBench and TPC-DC, the group contends that multiple layers of hardware and software actually hamper system performance and diminish the investment that enterprises make in raw server, storage and network resources. Even such basic choices as the operating system and dual-core vs. single-core processing can affect performance by as much as 20 percent, and then the problem is compounded through advanced techniques like hyperthreading and shared memory access.
(MCT) — While Anniston, Ala., schools have not been the scene of the sort of firearm violence that has struck other schools around the country in recent years, district officials and others across the state are taking steps to permit a safer outcome if such a situation develops.
The tactic: To let all first responders know the layout of the school before an emergency arises.
During the summer, detailed 3-D virtual maps were created revealing the nooks and crannies inside each of Anniston City Schools’ seven school buildings, at a cost to the district of between $2,000 and $3,000 per school, said Superintendent Darren Douthitt.
Most of us can’t imagine conducting day-to-day business without email. Our dependence has only increased because of smart devices that keep us connected to our email 24/7.
How would your business operate if suddenly, unexpectedly, no one had access to their email?
More importantly, what would happen if – while that email outage was taking place – all incoming emails were irretrievably lost? Would you miss business opportunities? Could your lack of access make prospects, customers and vendors feel like you are ignoring them, don’t care about their needs (or worse)? Do you fully understand all regulatory implications that may apply to missed communications?
Corporations spend a lot of time and money to ensure their employee- and customer-facing technologies are compliant with all local and regional data privacy laws. However, this task is made challenging by the patchwork of data privacy legislation around the world, with countries ranging from holding no restrictions on the use of personal data to countries with highly restrictive frameworks. To help our clients address these challenges, Forrester developed a research and planning tool called the Data Privacy Heat Map (try the demo version here). Originally published in 2010, the tool leverages in-depth analyses of the privacy-related laws and cultures of 54 countries around the world, helping our clients better strategize their own global privacy and data protection approaches.
The most recent update to the tool, which published today, highlights two opposing trends affecting data privacy over the past 12 months:
Companies large and small appear to have been targeted in what is being described as the largest known data breach to date.
As first reported by The New York Times, a Russian crime ring amassed billions of stolen Internet credentials, including 1.2 billion user name and password combinations and more than 500 million email addresses.
The NYT said it had a security expert not affiliated with Hold Security analyze the database of stolen credentials and confirm its authenticity.
One of the challenges with Big Data is how to find value hidden in all that volume. Experts generally recommend approaching it as an explorer rather than simply querying the data to find specific answers.
As an astrophysicist, Dr. Kirk Borne knows a thing or two about probing the unknown. Borne, professor of Astrophysics and Computational Science at George Mason University, began tinkering with large data sets because of science, but soon became an advocate for Big Data. Now, in addition to his work as a professor and astrophysicist, Borne is a transdisciplinary data scientist.
According to Borne’s May post for the MapR.blog, he has identified four major types of Big Data discoveries (which he terms data-to-discovery):
The changes to storage technology have been well-documented over the years. From tape to disk to solid state, not to mention DAS, SAN/NAS and StaaS, the only constant in the storage industry has been change.
Lately, however, these technological changes are starting to coalesce in the data center to produce not only bigger and better storage, but entirely new architectures designed to address increasingly specialized workloads. This has given the enterprise unprecedented ability to craft their own storage environments, rather than simply upgrade their legacy vendor solutions.
Naturally, this is producing a fair amount of turmoil in the traditionally staid storage industry. As Redmond Magazine’s Jeffrey Schwartz notes, established firms like EMC, HP and NetApp are under increasing pressure from start-ups like Nasuni and Pure Storage who are turning to advanced Flash and memory solutions aimed specifically at mobile and cloud-based data loads. Even companies like Microsoft are moving into storage hardware as they ramp up their cloud offerings in the race to beat Amazon to the highly lucrative enterprise storage market.
Among the already scary global state of affairs, cybersecurity and critical infrastructure are also areas that have become increasingly tense. In July, The Economist ran a special section on cybersecurity, and one of the stories focused on critical infrastructure attacks. One passage explains perhaps the key issue driving the underlying threat to the world’s critical infrastructure, and it involves the way in which supervisory control and data acquisition (SCADA) systems, which control network operations, have evolved:
Many of these were designed to work in obscurity on closed networks, so have only lightweight security defences. But utilities and other companies have been hooking them up to the web in order to improve efficiency. This has made them visible to search engines such as SHODAN, which trawls the internet looking for devices that have been connected to it. SHODAN was designed for security researchers, but a malicious hacker could use it to find a target.
Security is, I believe, a major contributor to organisational resilience. It is about protecting assets from loss and damage, risk analysis and management, and alignment with organisational needs. It’s not about criminals and criminality. If you want to be adept and capable as a security professional, knowing about what motivates criminals is not actually of much practical utility. Why should you be interested in ‘rational choice’ when what you need to know about are the methods required to protect your assets? Why study the nuances of criminal investigation when you are looking into the security breach that has already occurred? Obviously, if you want to inform methods of limiting future damage then that is useful, but for me not the driving focus of security.
The functions of security have moved on rapidly from alignment with policing activities to a much wider embedded and linked function. The security professional should be as comfortable blending his or her functions with those of crisis and continuity management as in conducting risk analyses. The security professional should be less concerned with crime rates and more with the ability to identify and manage their own vulnerabilities to all types of threat, some malicious and criminal, but many not. The growth in security these days is of course around IT, information and cyber; and there are adversaries out there who are deeply criminal. They no doubt hit all the spots for criminological theories; but it doesn’t matter – the cyber security professional’s role is to limit the penetration and damage whether the adversary is a kid in his bedroom or a nation-state. Or even the insider who does not understand the damage that their IT use can cause.
Traditional data backup happens once every so often – once an hour, once a day, once a week, for example, depending on the recovery requirements associated with the data. It’s typically the recovery point objective or RPO that determines the frequency of the backup. If you cannot afford to lose more than the last 30 minutes’ worth of data, then your RPO will be 30 minutes and backups will happen at least every half an hour. Continuous replication on the other hand changes the model by backing up your data every time you make a change. But what does that do to RPO, disk space requirements and network capacity (assuming you’re backing up to storage in a different physical location)?
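The trade-offs described above can be roughed out numerically. The sketch below is illustrative only, using made-up function names and simplifying assumptions (uniform change rate, no compression or deduplication): periodic backups must ship everything changed since the last run within a backup window, while continuous replication must sustain at least the data change rate.

```python
def replication_bandwidth_mbps(change_rate_mb_per_hour: float) -> float:
    """Continuous replication must keep up with the data change rate.

    Returns the minimum sustained network bandwidth in megabits/second.
    """
    return change_rate_mb_per_hour * 8 / 3600  # MB/hr -> Mbps


def periodic_backup_bandwidth_mbps(change_rate_mb_per_hour: float,
                                   interval_minutes: float,
                                   window_minutes: float) -> float:
    """A periodic backup ships all data changed since the last run,
    squeezed into the backup window.

    With a 30-minute RPO, backups must run at least every 30 minutes,
    and the worst-case data loss is one full interval of changes.
    """
    changed_mb = change_rate_mb_per_hour * interval_minutes / 60
    return changed_mb * 8 / (window_minutes * 60)


# Example: 3,600 MB of changes per hour.
# Continuous replication needs a steady 8 Mbps, while half-hourly
# backups squeezed into a 10-minute window need a 24 Mbps burst.
print(replication_bandwidth_mbps(3600))            # steady-state load
print(periodic_backup_bandwidth_mbps(3600, 30, 10))  # burst load
```

The point the numbers make: continuous replication trades a lower, steadier network load (and a near-zero RPO) for the need to handle every change as it happens, whereas periodic backups concentrate the same data into bursts and accept a worst-case loss of one full interval.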
As the health of two Ebola-stricken American missionaries deteriorated late last month, an international relief organization backing them hunted for a medical miracle. The clock was ticking, and a sobering fact remained: Most people battling the disease do not survive.
Leaders at Samaritan’s Purse, a North Carolina-based Christian humanitarian group, asked officials at the Centers for Disease Control and Prevention whether any treatment existed — tested or untested — that might help save the lives of Kent Brantly and Nancy Writebol, both of whom had contracted Ebola while helping patients in Liberia.
The CDC put the group in touch with National Institutes of Health workers in West Africa, where an employee knew about promising research the U.S. government had funded on a serum that had been tested only in monkeys.
What should a business continuity plan contain? It's important to keep it concise and manageable, but I'm sure we all have our own ideas as to what the 'must have' items are. Charlie Maclean-Bristol of PlanB Consulting takes us through what he thinks the top ten features of a good plan are:
1. Scope. In many of the plans I see, it is not clear what the scope of the plan is. The name of the department may be on the front of the plan, but it is not always obvious whether this covers the whole of the department, which may span many sites, or just the department based in one location. It should also be clear within strategic and tactical plans what part of the organisation the plan covers, or whether it covers the whole of the organisation. Where large organisations have several entities and subsidiaries, it should be clear whether the tactical and strategic plans cover these.
(MCT) — With dozens of local doctors and medical staff among the dead, U.S. and foreign experts are preparing to flood into West Africa to help fight the deadliest Ebola outbreak on record.
Although two Americans, Dr. Kent Brantly and health worker Nancy Writebol, have contracted the disease, health experts say foreigners taking careful precautions should not be at serious risk.
But more than 60 local medical staff, about 8 percent of the fatalities, have died in Sierra Leone, Liberia and Guinea — poor countries with weak, overloaded health-care systems that are ill-equipped to handle the outbreak.
Ebola expert G. Richards Olds, dean of medicine at UC Riverside, compared local health-care workers there to doctors who donned beaked masks, leather boots and long, waxed gowns to fight the plague in medieval Europe.
The harmful toxin found in Lake Erie that caused a water crisis in Ohio's fourth-largest city this weekend has raised concerns nationally. That's because no states — including Texas — require testing for such toxins, which are caused by algal blooms. And there are no federal or state standards for acceptable levels of the toxins, even though they can be lethal.
In Toledo, Ohio, where voluntary tests at a water treatment plant found elevated levels of the toxin microcystin, which is produced by blue-green algae, the city is urging residents and the several hundred thousand people served by its water utility not to drink tap water, even if they boil it. Exposure to high levels of microcystin can cause abdominal pain, vomiting and diarrhea, liver inflammation, pneumonia and other symptoms, some of which are life-threatening. Restaurants have closed and there are shortages of bottled water as far as 100 miles away.
In Texas, which has battled blue-green algae problems at several of its lakes, Terry Clawson, the spokesman for the state's Commission on Environmental Quality, said surface water data has "not demonstrated levels of algal toxins that show any cause for alarm."
(MCT) — Hotshot Hollywood directors make movies about machines that can predict the future and software programs that can peer ahead in time. Silver screen villains plot to use the predictive power for evil; heroes fight for good.
The drama makes for great movies, but it's not all science fiction: the Tennessee Highway Patrol is already using that kind of technology every day.
It's called predictive analytic software. And it could be the start of a whole new generation of traffic safety, a new tool as revolutionary as seat belts or radar.
"It's the coming thing," said Tennessee Highway Patrol Colonel Tracy Trott.
Tennessee Highway Patrol analysts plug all sorts of factors into the software — like weather patterns, special events, home football schedules, festivals and historic crash data — and the program spits out predictions of when and where serious or fatal traffic accidents are most likely to happen.
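The underlying idea can be illustrated in miniature. This is not the Highway Patrol's actual software; it is a hypothetical sketch showing the simplest form of the approach, scoring location/time combinations by their share of historical crash records so that patrols can be directed to the highest-risk spots.

```python
from collections import Counter

def crash_risk_scores(historical_crashes):
    """Score each (road_segment, hour_of_day) pair by its share of
    all recorded crashes. Input is a list of tuples drawn from
    historical incident reports (hypothetical field names).
    """
    counts = Counter(historical_crashes)
    total = sum(counts.values())
    return {key: n / total for key, n in counts.items()}

def riskiest(scores, top_n=3):
    """Return the top_n highest-risk segment/hour combinations."""
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy history: three crashes at one spot during the evening rush,
# one elsewhere in the morning.
crashes = [("I-40 mile 120", 17)] * 3 + [("US-70 mile 5", 8)]
scores = crash_risk_scores(crashes)
print(riskiest(scores, top_n=1))  # the segment/hour to patrol first
```

A production system layers far more signal on top of this (weather, event calendars, seasonality) and uses statistical models rather than raw frequencies, but the output is the same in kind: a ranked map of when and where trouble is most likely.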
ABUJA, Nigeria — In an ominous warning as fatalities mounted in West Africa from the worst known outbreak of the Ebola virus, the head of the World Health Organization said on Friday that the disease was moving faster than efforts to curb it, with potentially catastrophic consequences, including a “high risk” that it will spread.
The assessment was among the most dire since the outbreak was identified in March. The outbreak has been blamed for the deaths of 729 people, according to W.H.O. figures, and has left over 1,300 people with confirmed or suspected infections.
Dr. Margaret Chan, the W.H.O. director general, was speaking as she met with the leaders of the three most affected countries — Guinea, Liberia and Sierra Leone — in Conakry, the Guinean capital, for the introduction of a $100 million plan to deploy hundreds more medical professionals in support of overstretched regional and international health workers.
“This meeting must mark a turning point in the outbreak response,” Dr. Chan said, according to a W.H.O. transcript of her remarks. “If the situation continues to deteriorate, the consequences can be catastrophic in terms of lost lives but also severe socioeconomic disruption and a high risk of spread to other countries.”
Summer vacation: Isn’t it great? Except it is not what it used to be. We are either expected by our employers and clients to somehow remain accessible and productive 24/7 while we’re “off,” or we put that pressure on ourselves. Or we’re in the middle of a job search and don’t want to lose precious momentum or appear not to be serious.
Taking needed vacation time in order to relax and recharge can be especially difficult for those working in IT. A Computerworld piece that is filled with seriously depressing anecdotes about IT folks working through vacation cites a 2014 TEKsystems survey that “found that 47% of senior IT professionals are expected to be available 24x7 while on vacation (up from 44% in 2013), compared to 18% of entry- to mid-level IT professionals (a decrease from 20% in 2013).”
Here are ideas from IT Business Edge and elsewhere for how to manage the expectations, stress, extra duties and communication challenges that your wonderful vacation now brings.
The debate has been going on for a long time. Is it Business Continuity for business processes and Disaster Recovery for IT? Is Business Continuity just the current term for any preparedness planning going on in the organization? Does it depend on who is the driving force behind the need to create a plan? Was it IT, a business line, Audit or Risk Management that got it started? One thing for sure is that in most companies the people on either side of the fence don’t often talk to each other. And it has been that way for years.
When I did an internet search on the topic of Business Continuity vs Disaster Recovery, I found posts going back many years. Just last year (August 27, 2013) Jim Mitchell posted a blog that said, “Unless and until IT and ‘the business’ work together as equal partners in the development of comprehensive Business Continuity, we haven’t moved into a truly ‘post-DR’ world. As long as the two extremes see themselves as adversaries, they are unlikely to reach true Business Continuity objectives. As long as they fight separately over the same budget dollars (and we all know who usually wins that battle), they will never truly be partners in organization recoverability.” A year later this is still true.
The Director-General of WHO and presidents of west African nations impacted by the Ebola virus disease outbreak will meet Friday in Guinea to launch a new joint US$100m response plan as part of an intensified international, regional and national campaign to bring the outbreak under control.
The scale of the ongoing outbreak is unprecedented, with approximately 1,323 confirmed and suspected cases reported, and 729 deaths in Guinea, Liberia and Sierra Leone since March 2014.
“The scale of the Ebola outbreak, and the persistent threat it poses, requires WHO and Guinea, Liberia and Sierra Leone to take the response to a new level, and this will require increased resources, in-country medical expertise, regional preparedness and coordination,” says Dr Chan. “The countries have identified what they need, and WHO is reaching out to the international community to drive the response plan forward.”
When it comes to business continuity and disaster recovery planning, hope is not a strategy. IT departments, however, are too often surprised by the inevitable when a disaster they could have seen coming changes everything. Even companies that have a good disaster recovery or even disaster recovery as a service (DRaaS) plan in place aren't immune to significant business disruptions; they may think their company is fully protected, but Logicalis US warns that having a disaster recovery plan alone may be putting the proverbial cart before the horse.
The horse, in this case, is developing a solid business continuity strategy first.
"Disaster recovery – even DR as a Service – is technology based. The technology will save whatever data you tell it to, but the success of your business depends as much – if not more – on the effectiveness and efficiencies of your processes and procedures," says David Kinlaw, Practice Manager, Data Protection and Availability Services, Logicalis US. "Critically reviewing, evaluating and improving those processes and procedures is therefore essential to ensuring the success of your business."
That's because the true value of business continuity planning is not limited to technology. Done correctly, the exercise of developing and implementing a thorough business continuity plan opens ongoing conversations between IT and business units, empowering them as a team to face whatever challenges lie ahead. Combine a well-implemented disaster recovery or DRaaS plan with a strong business continuity strategy and the organization will have a winning combination for long-term sustainability.
For a while, the general assumption was that Ethernet would supplant all things Fibre Channel in the data center. But the rise of cloud computing and virtualization has created demand for more storage bandwidth than ever.
Rising to the challenge, Cisco this week made additions to its storage area network (SAN) lineup that not only provide 16G of bandwidth, but are also much simpler to manage by both automating the provisioning process and providing tools for detecting network congestion and recovery logic that helps ensure application performance requirements are continuously met.
Nitin Garg, senior manager for product management in the data center switching group at Cisco, says it is now much simpler to provision the Cisco MDS 9148S 16G Fabric Switch, the Cisco MDS 9706 Storage Director, and the Cisco MDS 9700 FCoE Module for multi-protocol networking fabrics.
In its fifth annual board of directors survey, “Concerns About Risks Confronting Boards,” EisnerAmper surveyed directors serving on the boards of more than 250 publicly traded, private, not-for-profit, and private equity-owned companies to find out what is being discussed in American boardrooms and, in turn, what those boards are accomplishing as a result.
According to the report, reputation remains the top concern across a range of industries:
(MCT) — When a major hurricane strikes the Gulf Coast again — as it inevitably will — the federal government will undoubtedly respond in some manner, just as it did after hurricanes Rita and Ike. But the key word in that sentence is "after." The damage will have been done, and coastal residents will bear the brunt of the recovery.
A new study by the National Research Council reinforces that reality. It encourages state and local governments to do all they can now to minimize devastation from hurricanes instead of hoping that Washington will ride to the rescue afterward.
That makes sense. Congress is usually slow to act when disasters strike, and the Federal Emergency Management Agency has a spotty record — even if it has improved in recent years. Responsibility for hurricane risk is scattered among many governmental agencies, the study says, yet collectively they are doing little about protecting coasts before storms strike.
We all rely on USB to interconnect our digital lives, but new research first reported by Wired reveals that there's a fundamental security flaw in the very way that the humble Universal Serial Bus functions, and it could be exploited to wreak havoc on any computer.
Wired reports that security researchers Karsten Nohl and Jakob Lell have reverse engineered the firmware that controls the basic communication functions of USB. Not only that, they've also written a piece of malware, called BadUSB, that can "be installed on a USB device to completely take over a PC, invisibly alter files installed from the memory stick, or even redirect the user's internet traffic."
Embedded within USB devices—from thumb drives through keyboards to smartphones—is a controller chip that lets the device and the computer it's connected to send information back and forth. It's this that Nohl and Lell have targeted, which means their malware doesn't sit in flash memory, but rather is hidden away in firmware, undeletable by all but the most technically knowledgeable. Lell explained to Wired:
As the deadliest outbreak of Ebola in recorded history continues to devastate Western Africa, the American Red Cross is supporting efforts through both financial and staffing support.
While the Sierra Leone Red Cross is taking the lead in promoting awareness through social mobilization campaigns, the American Red Cross, along with the global Red Cross network, is helping amplify efforts and strengthen capacity. An American Red Cross specialist has been deployed to provide telecommunications support and internet to the health team in country, and follows another IT specialist that had been in Sierra Leone for the past month.
The American Red Cross has also assisted with remote mapping and information management in the region and has contributed $100,000 to strengthen the capacities of both the Liberia Red Cross and Guinea Red Cross. These funds will help manage the Ebola outbreak response and increase public awareness of the virus.
Red Cross volunteers in the region are working to assist with Ebola awareness efforts. In total, more than 1,200 volunteers have been mobilized in Sierra Leone, Liberia and Guinea to date.
Since March 2014, some 1,200 cases have been reported and more than 670 deaths have been linked to the virus in Sierra Leone, Liberia, Guinea and most recently, Nigeria.
Currently outbreaks are centered in the cities of Kailahun and Kenema in Sierra Leone, and the counties of Lofa and Montserrado in Liberia.
Recognizing the severity of the issue, Liberian President Ellen Johnson Sirleaf has announced the closure of most of Liberia’s borders, with stringent medical checks being stepped up at airports and major trade routes. The government has also banned public gatherings of any kind, including events and demonstrations.
Difficulties remain in identifying cases, tracing contacts, and raising public awareness about the disease and how to reduce the risk of transmission. These difficulties, including widespread misconception, resistance, denial and occasional hostility, are considerably complicating the humanitarian response to containing the outbreak.
For more information on the Ebola outbreak and response, visit http://www.ifrc.org.
One of the more frustrating things about IT is that in the wake of the consumerization of IT, no matter how hard internal IT departments try, they can’t wean end users off shadow IT services. Much of that has to do with the user experience those services provide. Designed for consumers, they tend to be a lot simpler to use than applications delivered by the enterprise IT organization. The simple fact is that in order for internal IT organizations to win that battle, they have to deliver an application that provides a much better customer experience than the consumer application they are trying to replace.
With that goal in mind, EMC Syncplicity has delivered an upgrade to its file transfer and synchronization software for Apple iOS devices that makes it easier to not only surface the most relevant and pertinent content, but also predicts which content an end user is likely to want to access next.
By Deborah Ritchie
A report from the Information Commissioner’s Office sets out how the law applies when big data uses personal information. It details which aspects of the law organisations need to particularly consider. Big data is a way of analysing data that typically uses massive datasets, brings together data from different sources and can analyse the data in real time. It often uses personal data, be that looking at broad trends in aggregated sets of data or creating detailed profiles in relation to individuals, for example lending or insurance decisions.
Some commentators have argued that existing data protection law can’t keep up with the rise of big data and its new and innovative approaches to personal data. That is not the view of the ICO, which stresses the basic data protection principles already established in UK and EU law are flexible enough to cover big data. “Applying those principles involves asking all the questions that anyone undertaking big data ought to be asking,” the report reads. “Big data is not a game that is played by different rules.”
(MCT) — During severe weather, Carla Kerr, her daughter and her mother bunker down in their 10-foot-long bathroom on the first floor. With blankets, a flashlight and a weather radio, it’s a bit of a tight fit.
As residents of Guinotte Manor, a public housing complex in Kansas City, they don’t have basements where they can take cover from tornadoes.
At the end of next summer, Kerr will have a safer solution across the street.
The Garrison Community Center will start construction on a safe room this summer, said Bob Lawler, project manager of Kansas City Parks & Recreation Department. The safe room will be able to withstand the highest-rated tornadoes while holding 1,300 occupants, close to the estimated number of residents within a half-mile radius.
A survey into cyber security in the retail sector suggests that a number of organisations don't realise that the goal of PCI compliance is the protection of cardholder data alone – not the security of the business as a whole.
Conducted by Dimensional and Atomik Research and sponsored by Tripwire, the survey evaluated the attitudes of 407 retail and financial services organisations in the US and the UK on a variety of cyber security topics.
Despite industry data to the contrary, Tripwire’s retail cybersecurity survey indicates that organisations that rely on PCI compliance as the core of their information security program were twice as confident that they could detect rogue applications, such as those used to exfiltrate data. These respondents were also significantly more confident that they would be able to detect misconfigured or unauthorised network shares, which was a key attack vector exploited in the Target data breach.
Ensuring employee safety by rapidly disseminating the right information, and keeping communication lines open in a time of crisis are both priorities for businesses. Traditional solutions for this have relied on the manual ‘call tree’ or ‘phone tree’. Key employees are contacted first to inform them of whatever situation or crisis has arisen, with remaining staff to be contacted as soon as possible afterwards. However, even for smaller organisations of, say, 100 people, the manual call tree rapidly demonstrates its limitations. For larger enterprises, there is no doubt – a better solution is required.
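To see why the manual approach breaks down as headcount grows, consider a back-of-the-envelope model. The function names, the branching factor and the three-minutes-per-call figure below are illustrative assumptions, not figures from any vendor or from the article:

```python
def tree_levels(staff: int, branching: int = 2) -> int:
    """Number of call-tree levels needed to reach every one of `staff` people,
    assuming each person who is informed then phones `branching` colleagues."""
    reached, frontier, levels = 1, 1, 0
    while reached < staff:
        frontier *= branching      # everyone just reached makes their calls
        reached += frontier
        levels += 1
    return levels

def elapsed_minutes(staff: int, branching: int = 2, minutes_per_call: int = 3) -> int:
    """Rough elapsed time: each level waits for `branching` sequential calls."""
    return tree_levels(staff, branching) * branching * minutes_per_call

print(tree_levels(100))      # 6 levels before all 100 staff are reached
print(elapsed_minutes(100))  # 36 minutes, assuming nobody is unreachable
```

Even under these generous assumptions (nobody misses a call, every call takes three minutes), a 100-person tree needs six sequential levels; an automated notification platform contacts everyone in a single parallel broadcast.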
MONTGOMERY, Ala. – Community Emergency Response Teams prepare for the worst, then when disaster strikes, they help themselves, their families, their neighborhoods and their communities.
Begun in Los Angeles in 1985, the CERT program consists of specially trained volunteers who are called into action during and immediately following major disasters before first responders can reach the affected areas. They work closely with fire and emergency management departments in their communities.
More than 2,200 CERT programs are available in the United States. In Alabama, 10 counties offer CERT training and maintain teams. During a disaster, Alabama CERT members may self-deploy in their neighborhoods, be mobilized by a sheriff’s office or report to a pre-determined location.
“CERT groups provide immediate assistance to people in their areas and lead spontaneous volunteers before we can get to the area and inform emergency management of what the needs are,” said Art Faulkner, director of Alabama Emergency Management.
Billy Green, Deputy Director of Emergency Management for Tuscaloosa County, had just finished a training class for Hispanic CERT volunteers the week before the tornado outbreak of April 2011 in Alabama.
“We finished on the Saturday before the tornadoes hit,” he said. “These Spanish speakers took exactly what they learned and put it out in the field. The City of Holt has a high Hispanic population, and this team was able to go out there and do search and rescues.”
Holy Spirit Catholic Church set up its own shelter for the Hispanic population, he added. “Those guys were in that shelter helping and making sure everyone was all right.”
This April’s severe weather and flooding caught many Mobile County residents by surprise, said Mike Evans, Deputy Director of Mobile County Emergency Management Agency.
“Mobile gets the most rainfall of anywhere in the continental United States with 67 inches per year,” he said. “This wasn’t like during hurricane season; getting a lot of rain and thunderstorms is pretty common. But areas that normally flood didn’t; it was urban areas that did.”
Since the ground was already saturated, the rain had nowhere to go so roads that were low flooded, he said.
“People tried to drive through and we had to get them out,” Evans said.
CERTs distributed commodities and one team knocked on doors asking who was going to leave the area and who was going to stay, he said. After the storm, his teams notified people who left the area of the status of their property.
CERTs can also work with crowd and traffic control, work at water stations at large events, help community members prepare for emergencies, and assist with fire suppression and medical operations as well as search and rescue operations.
Initially, CERT members take training classes that cover components of disaster activities, including disaster preparedness, fire suppression, medical operations, search and rescue and disaster psychology and team organization. Additional training occurs twice a year with mock disasters. Refresher courses are also held. The Federal Emergency Management Agency supports CERT by conducting or sponsoring train-the-trainer and program manager courses for members of the fire, medical and emergency management community, who then train individual CERTs.
CERTs are organized in the Alabama counties of Dale, DeKalb, Shelby, Morgan, Tallapoosa, Jefferson, Colbert, Calhoun, Russell and Coffee.
To join an existing CERT program in your community, go online to www.fema.gov/community-emergency-response-teams. Click on the “find nearby CERT programs” link and enter your zip code. If there is a team near you, you will see the name and phone number of a contact person as well as pertinent information about the local program.
That site can also provide information on how to build and train your own community CERT, the curriculum for training members as well as how to register the program with FEMA.
Aside from providing a vital community service, CERT members receive professionally recognized training and continue to increase their skills.
“CERTs complement and enhance first-response capabilities by ensuring safety of themselves and their families, working outward to the neighborhood and beyond until first responders arrive,” said FEMA’s Federal Coordinating Officer Albie Lewis. “They are one of the many volunteer organizations that we rely on during a disaster.”
The industry is so focused right now on Big Data and the Internet of Things that it’s hard to write about anything else. But it’s important to remember that some organizations are still struggling with more basic data problems.
Government Technology recently published a contributed piece about Lodi, California, a town of about 60,000 people and a $350 million wine industry.
Jay Mishra, VP of development at Astera Software, wrote the piece, and it’s pretty obvious he’s promoting the company’s own ETL solution.
More than 175 million records were compromised between April and June due to 237 data breaches, bringing the 2014 total to 375 million records affected and 559 data breaches. That’s a lot of records illegally accessed for less than 1,000 breaches worldwide. What this tells me is that even SMBs store a lot more records than they may realize, and a single data breach can result in a huge payoff for a hacker.
These numbers are from SafeNet’s Breach Level Index second quarter report. The report found that retail was the hardest hit industry, with more than 145 million records stolen, or 83 percent of all data records breached, according to a release.
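A quick sanity check puts those figures in perspective. The per-breach average below is derived from the quoted totals, not a number published in the Breach Level Index report itself:

```python
# Year-to-date figures as quoted from SafeNet's Breach Level Index report.
ytd_records = 375_000_000
ytd_breaches = 559

# Derived average; the report does not publish this number directly.
avg_records_per_breach = ytd_records // ytd_breaches
print(avg_records_per_breach)  # 670840 — roughly 670,000 records per breach
```

In other words, the average incident exposed well over half a million records, which is why even SMB-sized data stores are an attractive target.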
Here is an important finding in the report: Less than 1 percent of all of the data breaches in the second quarter happened to networks that used encryption or strong security platforms to protect the data. So, no, not every security system is foolproof, but you greatly improve your chances of avoiding a breach if you put strong security practices in place. At the same time, it is a little scary to think how many businesses are still lacking when it comes to network security. Good security is vital to any company’s success, and a second report from SafeNet shows why. Once a customer discovers a company has been breached, he or she is not likely returning. As Yahoo Finance reported:
This story was originally published by Data-Smart City Solutions.
Data science and big data are hot topics in today’s business and academic environments. Corporations in a variety of industries are building teams of data scientists. Universities can barely keep up with student demand for courses. The hope is that new analytic methods, combined with more data and computational power, will uncover insights that would otherwise remain undiscovered. In the private sector, these new insights lead to new revenue opportunities and more targeted investments.
This week I’m back at the National Emergency Training Center (NETC) in Emmitsburg, MD. If you’ve read some of my past blogs, you’ll know that this is “home base” for National Community Emergency Response Team (CERT) training. Even though this isn’t my “first rodeo” at the NETC, I still find it an honor whenever I get the opportunity to teach here. There’s so much history in this region of the United States as well as on the campus that houses the NETC. Throughout the week, I hope to share a few of the stories and sites that make this such a special place to come to.
The NETC is home to both the National Fire Academy (NFA) and the Emergency Management Institute (EMI). The 107-acre campus was the original site of Saint Joseph’s Academy, a Catholic school for girls from 1809 until 1973. It was purchased by the U.S. Government in 1979 for use as the NETC.
The National Fire Academy (NFA) is one of two schools in the United States operated by the Federal Emergency Management Agency (FEMA) at the NETC. Operated and governed by the United States Fire Administration (USFA) as part of the U.S. Department of Homeland Security (DHS), the NFA is the country’s pre-eminent federal fire training and education institution. The original purpose of the NFA as detailed in a 1973 report to Congress was to “function as the core of the Nation’s efforts in fire service education—feeding out model programs, curricula, and information.”
Apache’s open source Storm may be the big buzz in Big Data streaming analytics, but according to a recent Forrester report, the commercial vendors are the ones who have “got the goods.”
While Storm is used by a number of high-profile companies, including the Weather Channel, Spotify and Twitter, the research firm writes that it’s nonetheless “a very technical platform that lacks the higher order tools and streaming operators that are provided by the vendor platforms evaluated in this Forrester Wave …”
In its July report on Big Data Streaming Analytics Platforms, the research firm reviewed seven platforms: IBM, Informatica, SAP, Software AG, SQLstream, TIBCO and Vitria. Forrester assessed each on 50 criteria, including business application and platform integration, data sources, development tools, ability to execute, partnerships and pricing.
Ask a roomful of IT experts what the future holds for the data center and you’re likely to get a roomful of different opinions. This is doubly true during periods of revolutionary change like we are seeing now.
With outlooks ranging from all-cloud, all-software constructs to massive hyperscale infrastructure tailored toward specific web-facing or Big Data workloads, it seems that the enterprise has a range of options when it comes to building next-generation infrastructure.
Even during times of heady change, however, it is still useful to anticipate the future by analyzing the past. TechNavio, for example, has noticed that rack units have nearly doubled in size over the past decade, leading the firm to conclude that future data centers will feature higher ceilings and taller equipment racks. A key driver in this is the rising cost of property, which is causing designers to build up rather than out. But it also has to do with the need for increased densities and the prevalence of wireless connectivity, which reduces the need for bulky cables.
KPMG’s UK and global lead in KPMG’s cyber security practice, Malcolm Marshall, is warning organizations about the impact that international political disputes can have on the ability to conduct ‘business as usual’. He suggests that, “whilst attention is focused on the search for resolutions in the ‘corridors of power’, businesses need to be ready to defend themselves, as the cyberspace in which they operate increasingly becomes the new battleground.”
Mr. Marshall says: “Businesses are so focused on cyber-attacks by organized crime that it is easy for them to ignore the possibility of being targeted by groups wanting to make a political point, possibly even with backing from a hostile government.
The International Federation of Risk and Insurance Management Associations (IFRIMA) has established a working group to define ‘the core knowledge and competencies that lie at the heart of risk management in whatever context it is practiced’.
FERMA, RIMS, Alarys and the Institute of Risk Management (IRM) are among the organizations taking part.
The aim of the working group is to produce a short document that any risk management body can use as the foundation of a risk management education and /or certification process.
Publication is planned for sometime in 2015.
Employing a third party to store and deliver assets critical to Disaster Recovery or Business Continuity Plans can be invaluable. But offsite storage should never be “dump it and forget it”. Despite everything your storage provider may promise, it’s what you don’t know that could become a problem when you need to retrieve your data backup, ‘go box’ or other essential recovery assets.
First, there’s the hand-off process. If your IT team ships physical backups offsite on a regular basis, that process can become routine. Over time, routine can slip into neglect. Neglect can result in outcomes that may be a problem – or a disaster – when it’s time to recall those backups. And if you are using internal means to store vital assets, understanding the process and its security is just as critical – perhaps more so.
What is the process? Is it documented? Is it verified with the vendor/provider periodically? Take the time to visit the provider (or even follow their pickup agent) to see exactly how the process works. Ask to see your stored materials, the vendor’s logs and their entry procedures in action.
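One lightweight way to make that verification concrete is to keep a hash manifest of everything handed to the provider and re-check it on retrieval. The sketch below is a minimal illustration under stated assumptions, not part of any vendor's tooling; the function names are hypothetical:

```python
import hashlib
from pathlib import Path

def manifest(backup_dir: str) -> dict:
    """SHA-256 digest of every file under `backup_dir`, keyed by relative path."""
    root = Path(backup_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify(before: dict, after: dict) -> list:
    """Paths that are missing or altered between hand-off and retrieval."""
    return [path for path, digest in before.items() if after.get(path) != digest]
```

Taking a manifest at hand-off and comparing it against one taken at retrieval immediately surfaces any file that went missing or was altered while in storage – exactly the “what you don’t know” risk described above.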
Civic technologist Matt Stempeck makes an unusual proposal in a recent Harvard Business Review post (registration required): Businesses, especially in the tech sector, should consider donating data over dollars.
Stempeck draws the idea from the International Charter on Space and Major Disasters, a 1999 act that required satellite companies to provide imagery to public agencies in times of crisis. Stempeck points out that under that act, DMC International Imaging has provided valuable imagery on:
- Flooding in the UK and Zimbabwe
- The spread of locusts in Algeria
- Fires in India
- Snow in South Korea
Study looks at more than 60 years of coastal water level and local elevation data changes
Annapolis, Maryland, pictured here in 2012, saw the greatest increase in nuisance flooding in a recent NOAA study. (Credit: With permission from Amy McGovern.)
Eight of the top 10 U.S. cities that have seen an increase in so-called “nuisance flooding”--which causes such public inconveniences as frequent road closures, overwhelmed storm drains and compromised infrastructure--are on the East Coast, according to a new NOAA technical report.
This nuisance flooding, caused by rising sea levels, has increased on all three U.S. coasts by between 300 and 925 percent since the 1960s.
The report, Sea Level Rise and Nuisance Flood Frequency Changes around the United States, also finds Annapolis and Baltimore, Maryland, lead the list with an increase in number of flood days of more than 920 percent since 1960. Port Isabel, Texas, along the Gulf coast, showed an increase of 547 percent, and nuisance flood days in San Francisco, California increased 364 percent.
"Achieving resilience requires understanding environmental threats and vulnerabilities to combat issues like sea level rise," says Holly Bamford, Ph.D., NOAA assistant administrator of the National Ocean Service. "The nuisance flood study provides the kind of actionable environmental intelligence that can guide coastal resilience efforts."
“As relative sea level increases, it no longer takes a strong storm or a hurricane to cause flooding,” said William Sweet, Ph.D., oceanographer at NOAA’s Center for Operational Oceanographic Products and Services (CO-OPS) and the report’s lead author. “Flooding now occurs with high tides in many locations due to climate-related sea level rise, land subsidence and the loss of natural barriers. The effects of rising sea levels along most of the continental U.S. coastline are only going to become more noticeable and much more severe in the coming decades, probably more so than any other climate-change related factor.”
The study was conducted by scientists at CO-OPS, who looked at data from 45 NOAA water level gauges with long data records around the country and compared that to reports of number of days of nuisance floods.
Nuisance flooding events have increased around the U.S., but especially off the East Coast. (Credit: NOAA)
The extent of nuisance flooding depends on multiple factors, including topography and land cover. The study defines nuisance flooding as a daily rise in water level above the minor flooding threshold set locally by NOAA’s National Weather Service, and focused on coastal areas at or below these levels that are especially susceptible to flooding.
The report concludes that any acceleration in sea level rise that is predicted to occur this century will further intensify nuisance flooding impacts over time, and will further reduce the time between flood events.
The report provides critical NOAA environmental data that can help coastal communities assess flooding risk, develop ways to mitigate and adapt to the effects of sea level rise, and improve coastal resiliency in the face of climate- and weather-induced changes.
NOAA's mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources.
Top ten U.S. areas with an increase in nuisance flooding*
For each location, the report lists the local nuisance-flood threshold in meters above the mean higher high water mark, the average number of nuisance flood days for 1957-1963, and the average for 2007-2013. Locations on the list include Atlantic City, N.J.; Sandy Hook, N.J.; Port Isabel, Texas; and San Francisco, Calif.
* More than one flood on average between 1957-1963, and for nuisance levels above 0.25 meters.
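The report's headline percentages can be reproduced from exactly the kind of data tabulated above: count the days a gauge tops the local minor-flood threshold, then compare the two eras. A minimal sketch using illustrative values, not NOAA's actual gauge data:

```python
def nuisance_flood_days(daily_max_levels, threshold):
    """Count days whose highest water level tops the local minor-flood threshold."""
    return sum(1 for level in daily_max_levels if level > threshold)

def percent_increase(days_then, days_now):
    """Percent change in average annual nuisance flood days between two periods."""
    return (days_now - days_then) / days_then * 100

# A hypothetical gauge averaging 4 flood days a year in 1957-1963 and 41 a year
# in 2007-2013 would show the 925 percent upper bound cited in the report.
print(percent_increase(4, 41))  # 925.0
```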
As the enterprise delves ever deeper into virtual and cloud infrastructure, one salient fact is becoming clearer: Attributes like scalability and flexibility are not part and parcel to the technology. They must be developed and integrated into the environment so that they can truly provide the benefits that users expect.
Even at this early stage of the cloud transition, providers are already feeling the blowback that comes from overpromising and under-delivering. According to a recent study by Enterprise Management Associates (EMA), one third of IT executives say they found scaling either up or down to be not as easy as they were led to believe. With data loads ebbing and flowing in a continual and often chaotic fashion, just trying to match loads with available resources is a challenge even with modern automation and orchestration software.
(MCT) — The coast's susceptibility to big storms is clearly no secret, but ever wonder what the shoreline looked like 100 years ago? Or about the rate at which sea level is changing? The U.S. Geological Survey released an interactive website last week that will allow Coastians to easily research coastal changes.
The tool, called the USGS Coastal Change Hazards Portal, shows changing sea levels, retreating shorelines and vulnerability to extreme coastal storms. A link to the site can be found at sunherald.com.
USGS research geologist Robert Thieler said a large driver behind the portal, which became available July 16, was to bring the three research themes together into one easy-to-use website. He said the functionality of the site and the value of the information make it a useful tool for the general public as well as city and county officials.