Industry Hot News (6945)
Andrew Waite gives an overview of the Heartbleed vulnerability.
This week has been an interesting and busy one for those on both sides of the information security fence: a critical vulnerability, dubbed Heartbleed, was publicly disclosed in the widely used library OpenSSL, which forms the core of many SSL/HTTPS provisions.
What is it?
Without getting too technical, the Heartbleed flaw allows a malicious, unauthorised third party to access protected data in memory. The exact data exposed is random, but there have been corroborated reports that it can include clear-text passwords, private SSL keys and other sensitive data whose disclosure would negatively impact the security of your systems, users and clients.
How to determine if you’re vulnerable
The vulnerability affects any service using OpenSSL version 1.0.1 through 1.0.1f. If you (or your in-house sysadmin) can confirm that your SSL implementation isn’t running any of the affected versions, you’re safe from this particular weakness. Unfortunately, OpenSSL is widely used and is embedded in many other appliances and application stacks.
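For admins comfortable with scripting, the version check can be automated. The helper below is a hypothetical sketch that classifies an OpenSSL version string against the affected range; note it will not catch statically linked or embedded copies of the library:

```python
import re

# OpenSSL 1.0.1 through 1.0.1f shipped the vulnerable heartbeat code;
# 1.0.1g and later (and the 0.9.x / 1.0.0 branches) are not affected.
VULNERABLE_SUFFIXES = ("", "a", "b", "c", "d", "e", "f")

def is_heartbleed_vulnerable(version_string):
    """Return True if a string like 'OpenSSL 1.0.1e 11 Feb 2013'
    falls in the affected 1.0.1 .. 1.0.1f range."""
    match = re.search(r"OpenSSL\s+1\.0\.1([a-z]?)", version_string)
    if not match:
        return False  # a different branch entirely
    return match.group(1) in VULNERABLE_SUFFIXES

print(is_heartbleed_vulnerable("OpenSSL 1.0.1e 11 Feb 2013"))  # True
print(is_heartbleed_vulnerable("OpenSSL 1.0.1g 7 Apr 2014"))   # False
```

On most systems the input string would come from running `openssl version`, but a clean result only rules out the system copy, not bundled ones.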
Since the disclosure, a number of websites have appeared that let you enter your system name/IP address and will check it for you. However, what a third party might do with the information once it determines your system is vulnerable could be a risk in its own right…
Tamiflu (the antiviral drug oseltamivir) shortens symptoms of influenza by half a day, but there is no good evidence to support claims that it reduces admissions to hospital or complications of influenza. This is according to the updated Cochrane evidence review, published today (10th April 2014) by The Cochrane Collaboration, the independent, global healthcare research network and The BMJ.
Evidence from treatment trials confirms increased risk of suffering from nausea and vomiting. And when Tamiflu was used in prevention trials there was an increased risk of headaches, psychiatric disturbances, and renal events.
Although when used as a preventative treatment, the drug can reduce the risk of people suffering symptomatic influenza, it is unproven that it can stop people carrying the influenza virus and spreading it to others.
CIO — In 1998, when Paul Rogers started at GE, implementing optimization software at a coal-fired power plant was easier said than done. Management understood and worked with GE to develop the software. Within the plant itself, though, the vast majority of employees didn't know how to use a computer, let alone software, and were very suspicious of the system.
These days, says Rogers, now GE's chief development officer, the tables have turned. Smartphone-toting plant employees know firsthand how technology changes their lives as consumers — and they want to know why the industrial environment isn't like their home environment.
"They want to optimize equipment, and that's a sign that the world is ready," Rogers says. Put another way: "My daughter has radically different experiences about how the world works."
CIO — The past two weeks brought big news in the public cloud computing market. In the course of four days, three technology giants made bold statements about their intent to be one of the most important public cloud providers — and, indeed, position themselves to be the No. 1 cloud company on the planet.
For anyone using cloud computing, what happened last week indicates how critical the biggest companies in technology consider it to be, and how cloud adopters need to evaluate their strategy in light of the ongoing price competition upon which the leaders have embarked.
Here's the high-level overview of what was announced:
Business continuity is often about reinforcing existing infrastructure or eliminating sources of business disruption. Bringing in techniques to accelerate or multiply results thanks to good business continuity may not be so frequent, but here’s one that may well do that. It’s version control, which is used when several knowledge workers need to simultaneously work on the same computer files to create advantage for the organisation – but without stepping on each other’s toes. Version control technology started in software development. However, it can be used for projects to create web content, coordinated product rollouts, corporate business plans and more.
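As a rough sketch of the core mechanism, the hypothetical class below shows the optimistic-concurrency idea behind version control: a save based on a stale copy is rejected rather than silently overwriting a colleague’s work. Real systems such as Git or Subversion add merging, full history and distribution on top of this:

```python
class VersionConflict(Exception):
    """Raised when a writer's edit is based on an out-of-date copy."""

class VersionedFile:
    # Minimal sketch: every successful save bumps the version number,
    # and a save based on an older version is refused.
    def __init__(self, content=""):
        self.content = content
        self.version = 0

    def read(self):
        return self.content, self.version

    def write(self, new_content, based_on_version):
        if based_on_version != self.version:
            raise VersionConflict("file changed since you read it")
        self.content = new_content
        self.version += 1
        return self.version

plan = VersionedFile("Q3 business plan, draft 1")
text, v = plan.read()                              # two colleagues read version 0
plan.write("draft 2 (Alice)", based_on_version=v)  # Alice saves first
try:
    plan.write("draft 2 (Bob)", based_on_version=v)  # Bob's copy is now stale
except VersionConflict:
    print("Bob must merge before saving")  # no work is silently lost
```

The point for the knowledge workers in the paragraph above is the `VersionConflict` branch: simultaneous editing becomes safe because collisions are detected instead of overwritten.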
PC World — By now you've likely heard about the Heartbleed bug, a critical vulnerability that exposes potentially millions of passwords to attack and undermines the very security of the Internet. Because the flaw exists in OpenSSL--which is an open source implementation of SSL encryption--many will question whether the nature of open source development is in some way at fault. I touched base with security experts to get their thoughts.
Closed vs. Open Source
First, let's explain the distinction between closed source and open source. Source refers to the source code of a program--the actual text commands that make the application do whatever it does.
Closed source applications don't share the source code with the general public. It is unique, proprietary code created and maintained by internal developers. Commercial, off-the-shelf software like Microsoft Office and Adobe Photoshop are examples of closed source.
A new report from application specialists Camwood reveals that, in the wake of recent migrations following the conclusion of support for the Windows XP operating system, and with the accelerating pace of change in the IT department, IT directors and managers now see near constant change and migration projects as the new norm. Coping with this change has now become a primary concern for IT departments.
According to the report, 90% of IT decision makers believe that the pace of change in IT is accelerating, and that this presents a significant challenge. 72% find the pace of change in IT ‘unsettling’. 93% also agree that, in the new IT environment, a flexible IT infrastructure is key to their organisation’s success, with 79% believing that IT departments that don’t adapt risk demise.
BALTIMORE—The Food and Drug Administration is increasingly harnessing data-driven, risk-based targeting to examine food processors and suppliers under the Food Safety Modernization Act. At this week’s Food Safety Summit, the FDA’s Roberta Wagner, director of compliance at the Center for Food Safety and Applied Nutrition, emphasized the risk-based, preventative public health focus of FSMA.
While it has long collected extensive data, the agency is now expanding and streamlining analysis from inspections to systematically identify chronic bad actors. FSMA regulations and reporting are transforming how the FDA meets many of its challenges, but so is technology. According to Wagner, whole genome sequencing (WGS) in particular has tremendous potential to change how authorities and professionals throughout the food chain look at pathogens. WGS offers rapid identification of the sources of foodborne pathogens that cause illness, and can help identify these pathogens as resident or transient. In other words, by sequencing pathogens (and sharing the sequences in Genome Trakr, a coordinated state and federal database), scientists can track where contamination occurs during or after production.
Hurricane forecasters are sounding a warning bell for the U.S. East Coast in their latest predictions for the 2014 hurricane season, even as overall tropical storm activity is predicted to be much less than normal.
WeatherBell Analytics says the very warm water off of the Eastern Seaboard is a concern, along with the oncoming El Niño conditions.
In its latest commentary, forecaster Joe Bastardi and the WeatherBell team note:
“We think this is a challenging year, one that has a greater threat of higher-intensity storms closer to the coast, and where, like 2012, warnings will frequently be issued with the first official NHC advisory.”
WeatherBell Analytics is calling for a total of 8 to 10 named storms, with 3-5 hurricanes and 1-2 major hurricanes.
The London Assembly Environment Committee has published a summary of the flood risks facing the UK capital.
Some 24,000 properties in London are at significant risk of river flooding, and the Environment Agency estimates that plans currently under development could protect 10,000 of these.
The Committee warns that the risks of flooding may be increasing. The effects of climate change in southern England could mean drier summers and wetter winters. More heavy rain in the Thames region would increase surface water risk and may lead to more river flooding in London.
Ways to reduce flood risk include sustainable drainage and river restoration, which create space for flood waters to be held higher in the river catchment and soak back into the ground. Allowing low-lying areas to flood safely at times of high water flow should protect homes, roads and businesses.
Murad Qureshi AM, Chair of the Environment Committee says:
“London needs to bring back its rivers to protect itself from inevitable flooding in the future. The more we can restore natural banks to London’s rivers, the less likely heavy rain will cause the degree of flooding we saw in the early part of this year.”
“Heavy or prolonged rain locally or upstream can cause rivers to flood. Tens of thousands of properties are at high or medium risk of river flooding. This is not just from the Thames, but also from the many smaller rivers that flow into it. A lot of people don’t know where their local rivers are, until they escape their channels.”
Read Flood Risks in London Summary of Findings (PDF).
Computerworld — My son is a chief technology officer. Some companies have a chief digital officer. Can chief data wrangler be far behind?
There seems to be a trend to come up with a title to replace "CIO" that encompasses the latest direction of the profession. Titles are reflecting an emphasis on big data, social networking and data analytics.
This doesn't happen with other titles. Take the chief financial officer. I have yet to hear of a CFO becoming the chief mergers officer when the company contemplates its first merger or acquisition. The CFO's role changes to encompass some new duties but that officer remains in charge of finance. And I suspect that most CFOs would not appreciate a change in title every time their role was redefined. And yet, add big data to IT's functions and someone says we need a new title to reflect that. But we really don't. The CIO remains in charge of the enterprise's information and data, big or otherwise.
CSO — Symantec has declared 2013 the year of the "mega-breach," placing security pros on notice that they stand to lose big from phishing, spear-phishing and watering-hole attacks.
The company released Tuesday its Internet Security Threat Report for 2013, which found that eight breaches exposed the personal information of more than 10 million identities each. By comparison, 2012 had only one breach that size and in 2011 there were five.
The number of massive data breaches in 2013 made it the "year of the mega-breach," Symantec said. Information stolen included credit card information, government ID numbers, medical records, passwords and other personal data.
Federal regulators on Tuesday approved a simple rule that could do more to rein in Wall Street than most other parts of a sweeping overhaul that has descended on the biggest banks since the financial crisis.
The rule increases to 5 percent, from roughly 3 percent, a threshold called the leverage ratio, which measures the amount of capital that a bank holds against its assets. The requirement — more stringent than that for Wall Street’s rivals in Europe and Asia — could force the eight biggest banks in the United States to find as much as an additional $68 billion to put their operations on firmer financial footing, according to regulators’ estimates.
Faced with that potentially onerous bill, Wall Street titans are expected to pare back some of their riskiest activities, including trading in credit-default swaps, the financial instruments that destabilized the system during the financial crisis.
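The leverage-ratio arithmetic itself is straightforward; the sketch below uses entirely hypothetical balance-sheet figures to show how a capital shortfall against the new 5 percent threshold might be estimated (the real supervisory calculation uses detailed exposure definitions, not a single assets number):

```python
def additional_capital_needed(assets, current_capital, required_ratio=0.05):
    """Extra capital needed to reach the required leverage ratio
    (capital / assets). Illustrative only."""
    shortfall = required_ratio * assets - current_capital
    return max(shortfall, 0.0)

# Hypothetical bank: $2 trillion of assets, capital equal to 3% of them.
assets = 2_000_000_000_000
capital = 0.03 * assets
print(additional_capital_needed(assets, capital))  # 40000000000.0, i.e. $40bn
```

Scaled across eight large institutions, figures of this size are how estimates like the regulators’ $68 billion aggregate arise.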
Mistrust of the public cloud is driving many enterprises toward the pursuit of private clouds. For critical data and applications, this may seem like a no-brainer as it is wiser to keep the important stuff on trusted infrastructure.
Not all private clouds are the same, however, and unless you happen to be a platform developer, you’ll end up placing your trust in someone else’s technology, just as you do with physical and virtual infrastructure.
At the moment, it seems the private cloud is shaping up to be a battle between VMware and the OpenStack community, says cloud broker RightScale. And according to the firm’s latest survey, nearly a third of enterprises are looking to turn legacy vSphere and vCenter environments into private clouds. But that doesn’t mean the market is a lock for VMware. OpenStack deployments are on the rise, driven largely by a desire to avoid vendor lock-in, even as vCloud Director adoption is starting to flag.
LINCROFT, N.J. – In the weeks after a federally declared disaster, emergency teams from government agencies, nonprofits and volunteer organizations work together to help survivors make their way out of danger and find food, clothing and shelter.
After the immediate emergency is over, the long work of recovery begins.
And as New Jersey survivors of Hurricane Sandy have learned over the past 18 months, full recovery from a devastating event like Sandy may take years.
Communities throughout New Jersey have been working hard to repair, rebuild and protect against future storms. In many cases, the challenges they face are formidable.
At the invitation of individual communities and in partnership with the state, FEMA’s office of Federal Disaster Recovery Coordination works with residents and municipal officials in impacted municipalities to develop a strategy for full recovery.
For communities that require assistance, the FDRC can provide a team of recovery specialists with a broad array of skills. Among them: civil engineering, architecture, land-use planning, economic development, environmental science and disabilities integration.
The FDRC is activated under the National Disaster Recovery Framework, which provides a structure for effective collaboration between impacted communities, federal, state, tribal and local governments, the private sector, and voluntary, faith-based and community organizations during the recovery phase of a disaster.
Federal Disaster Recovery Coordinators consult with impacted municipalities and assist with long-term planning, helping these communities determine what their priorities are and what resources they will need to achieve a full recovery.
In major disasters or catastrophic events, the FDRC is empowered to activate six key areas of assistance known as Recovery Support Functions.
The RSFs are led by designated federal coordinating agencies: Housing (U.S. Department of Housing and Urban Development); Infrastructure Systems (U.S. Army Corps of Engineers); Economic (U.S. Department of Commerce); Health and Social Services (U.S. Department of Health and Human Services); Natural and Cultural Resources (U.S. Department of Interior); and Community Planning and Capacity Building (FEMA).
Working in partnership with a State Disaster Recovery Coordinator and a Hazard Mitigation Adviser, the FDRC oversees an assessment of impacted communities and helps to develop a recovery support strategy. That strategy helps these hard-hit communities gain easier access to federal funding, bridge gaps in assistance, and establish goals for recovery that are measurable, achievable and affordable.
Here in New Jersey, approximately 12 communities have partnered with FDRC to prioritize their goals for recovery, locate the resources needed to achieve those goals and rebuild with resiliency.
In the Borough of Highlands, FDRC has assisted this severely impacted community in developing a plan for a direct storm water piping system that will decrease flooding in the low-lying downtown area. FDRC has also collaborated with the community on designing a more resilient, attractive and commercially viable central business district called the Bay Avenue Renaissance Project. The U.S. Army Corps of Engineers has initiated a feasibility study on their plan to protect the town from future flooding via a mitigation effort that includes installing floodwalls, raising bulkheads and building dune barriers.
In the devastated Monmouth County town of Sea Bright, FDRC worked with the community to create a plan for the construction of a beach pavilion that will serve as a year-round community center, library, lifeguard facility and beach badge concession. FDRC is also working with Sea Bright officials to develop a grant application to fund streetscape improvements in the downtown area of this beachfront municipality.
In Tuckerton, FDRC worked with municipal officials on a plan to relocate its heavily damaged police station and borough facilities to a former school building that is much less vulnerable to flooding.
In partner communities throughout the state, FDRC subject matter experts are working to help residents envision a future that incorporates a strong infrastructure, increased storm protection and an enhanced environment that reflects the vision of the community.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
DENVER - Crisis counseling services will continue over the next nine months for survivors of the Colorado flooding disaster in September 2013 because of a $4 million federal grant. FEMA and the Substance Abuse and Mental Health Services Administration have awarded the $4,058,060 grant to the Colorado Department of Public Health and Environment through the 2014 Crisis Counseling Assistance and Training Program (CCP).
The new grant will allow counselors to continue door-to-door services and community outreach counseling programs. Since the disaster, Colorado Spirit crisis counselors have:
- Talked directly with 18,178 people and provided referrals and other helpful information to more than 88,000;
- Met with nearly 1,200 individuals or families in their homes.
CCP was established by the Stafford Disaster Relief and Emergency Assistance Act to provide mental health assistance and training activities in designated disaster areas. The program provides the following services:
- Individual crisis counseling and group crisis counseling to help survivors understand their reactions and improve coping strategies, review their options and connect with other individuals and agencies that may assist them;
- Development and distribution of education materials such as flyers, brochures and website information on disaster-related topics and resources;
- Relationship building with community organizations, faith-based groups and local agencies.
They say that age is only a number, so with that in mind, IBM set out to prove that the 50-year-old mainframe still has what it takes to dominate enterprise computing.
As part of its celebration of the 50th birthday of the mainframe, IBM today unveiled a slew of products and initiatives intended to make sure the mainframe stays relevant through at least the first half of the 21st Century.
The new offerings include the zDoop implementation of Hadoop for mainframes that IBM worked with Veristorm to develop, and an IBM DS8870 flash storage system that IBM says is four times faster than traditional solid-state disk (SSD) technology.
In addition, IBM unveiled an IBM Enterprise Cloud System based on mainframes that has been configured with IBM cloud orchestration and monitoring software.
CSO — In large-scale organizations, implementing mobile device management (MDM) is typically a given. After all, with so many employees using mobile devices that either contain or connect to sources of sensitive information, there needs to be some way to keep everything in check. But what about those companies that aren't big enough to afford an MDM implementation and a full-sized IT department to manage it? Without a means to centralize the control of mobile devices, how can these smaller companies protect their data?
Some SMBs have found ways to help mitigate risk without traditional MDM, but it isn't always easy. Right off the bat, things are tricky given that smaller companies often implement BYOD since they can't afford to provide employees with devices.
I’m excited about the Internet of Things (IoT), and I expect it to create incredible opportunities for companies in almost every industry. But I’m also concerned that the issues of security, data privacy, and our expectations of a right to privacy, in general — unless suitably addressed — could hinder the adoption of the IoT by consumers and businesses and possibly slow innovation. So, with all the hype of the IoT, I’m going to play devil’s advocate, because these issues tend to receive limited coverage when considering the impact of new technology developments on society.
First of all, I am amazed at all the connected products and services that are starting to appear. These include, for example: those for connected buildings and homes, like heating and air conditioning, thermostats, smoke detectors, and so on; entertainment systems; and sensor-enabled pill boxes and remote healthcare monitoring devices. There are also a lot of consumer devices (in addition to smartphones and tablets), such as smart watches and Internet-enabled eye glasses, connected kitchen appliances like crock pots and refrigerators, wearable exercise trackers and pet trackers, and too many more to practically list.
From the title of this post, some people might immediately think of intuition: that vague and rather flaky resource used when that’s all you have. However, we’re actually thinking of something a little more structured in this context. In the coming age of Big Data and associated worldwide online resources, analytical techniques like those used in business intelligence can be used to detect trends and tipping points. They can give individuals and organisations meaningful information about how likely certain disasters will be: for example, "there is a 90 percent chance currently that your factory will be flooded out to a depth of eighteen inches of water."
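As an illustration of the simplest form of such analytics, the sketch below derives a flood likelihood as an empirical frequency; the historical readings are entirely made up:

```python
# Hypothetical example: one worst-flood depth reading per year, in cm.
flood_depths_cm = [0, 12, 0, 0, 48, 5, 0, 30, 62, 0, 18, 44]

def chance_of_flood_deeper_than(depth_cm, history):
    """Empirical probability that a year's worst flood exceeds depth_cm."""
    hits = sum(1 for d in history if d > depth_cm)
    return hits / len(history)

print(f"{chance_of_flood_deeper_than(45, flood_depths_cm):.0%}")  # 17%
```

Real business-intelligence pipelines would of course fold in far richer inputs (rainfall forecasts, catchment models, upstream sensors), but the output has the same shape: a stated probability attached to a concrete loss scenario.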
You got a call from a reporter asking for your comment about an issue you were afraid might see the light of day. So, you know they’re onto it and going to run something.
This is a fairly common situation and unfortunately for PR and crisis comms consultants, this is often when you get the call from the client. No time to lose, but what is the strategy?
My thoughts on this were prompted by PR Daily’s post today on “Five Ways to Respond to Bad Press Before the Story Runs.” I have great regard for Brad Phillips, who wrote the post and the book: “The Media Training Bible.”
Without doubt, cloud computing is the future of the enterprise. But clouds come in many varieties – some light and fluffy, others dark and ominous – so the question for CIOs today is what kind of cloud is appropriate, and are there ways to ensure that today’s cloud does not become tomorrow’s storm?
According to IHS Technology, cloud spending is on pace to jump by more than a third over the next three years to $235 billion. Key drivers run the gamut from lower operating costs and more flexible data environments to support for advanced business applications like collaboration and Big Data analytics. As the market matures, then, organizations across multiple industries are likely to shed their concerns about security and management as they strive to turn IT infrastructure from a cost center to a competitive advantage.
PC World — Why should you use open source software? The fact that it's usually free can be an attractive selling point, but that's not the reason most companies choose to use it. Instead, security and quality are the most commonly cited reasons, according to new research.
In fact, a full 72 percent of respondents to the eighth annual Future of Open Source Survey said that they use open source because it provides stronger security than proprietary software does. A full 80 percent reported choosing open source because of its quality over proprietary alternatives.
Sixty-eight percent of respondents said that open source helped improve efficiency and lower costs, while 55 percent also indicated that the software helped create new products and services. A full 50 percent of respondents reported openly contributing to and adopting open source.
Computerworld — A couple of weeks into his job as lead Qt developer at software development consultancy Opensoft, Louis Meadows heard a knock on his door sometime after midnight. On his doorstep was a colleague, cellphone and laptop in hand, ready to launch a Web session with the company CEO and a Japan-based technology partner to kick off the next project.
"It was a little bit of a surprise because I had to immediately get into the conversation, but I had no problem with it because midnight here is work time in Tokyo," says Meadows, who adds that after more than three decades as a developer, he has accepted that being available 24/7 goes with the territory of IT. "It doesn't bother me -- it's like living next to the train tracks. After a while, you forget the train is there."
Not every IT professional is as accepting as Meadows of the growing demand for around-the-clock accessibility, whether the commitment is as simple as fielding emails on weekends or as extreme as attending an impromptu meeting in the middle of the night. With smartphones and Web access pretty much standard fare among business professionals, people in a broad range of IT positions -- not just on-call roles like help desk technician or network administrator -- are expected to be an email or text message away, even during nontraditional working hours.
Computerworld — As the economy continues to rebound and the competition for qualified IT professionals reaches new heights, employers seeking to attract or retain staffers are increasingly becoming like anxious suitors, desperate to figure out how to please their dates: "What do you want? What will make you stay? What really matters in our relationship?"
According to Computerworld's 2014 IT Salary Survey, tech workers are looking for many traditional benefits of a good partnership: financial security, stability and reliability -- all represented by salary and benefits. But this year's results confirm a growing trend: IT professionals are placing increasing importance on "softer" factors in the workplace, which have less to do with dollars and cents and more to do with corporate culture, personal growth and affirmation.
Read the full report: Computerworld IT Salary Survey 2014
It must be the human condition that does it; the certainty with which we approach the issues that may affect us. Risk assessment incorporates a requirement to analyse probability or likelihood; we can attach mathematical process to this and I have attached an example – not to critique it – but to illustrate the concept of what I term ‘buffering’. Buffering is something which protects us from actuality, and allows us to distance ourselves from the realities of issues. In the example, the mathematics are quite simple but convincing to the layman; I term myself a layman in mathematics and I have colleagues who can do this type of thing to a very significant and complicated level indeed. However, the problem that I have with this is that buffering allows us to interpret what we see and orientate it to our needs.
Risk and uncertainty are not about rolling dice; of course they are linked aspects, and the loss risks associated with the activities of some dice rollers can be extreme. Maths allows calculation of probability - but the die will roll a different way every time due to other unmeasured variables such as who is throwing, where and with what degree of energy. There is therefore uncertainty that is additional even to the study and assessment of random variables.
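The gap between a calculated probability and the outcomes actually realized can be made concrete with a small simulation; the sketch below is illustrative only:

```python
import random

# The calculated probability of rolling a six is exactly 1/6, but any
# finite run of rolls deviates from it -- the residual uncertainty that
# sits on top of the probability model itself.
def observed_six_rate(rolls, seed=None):
    rng = random.Random(seed)  # seed makes a run reproducible
    sixes = sum(1 for _ in range(rolls) if rng.randint(1, 6) == 6)
    return sixes / rolls

print(1 / 6)                       # theoretical: 0.1666...
print(observed_six_rate(100))      # varies noticeably run to run
print(observed_six_rate(100_000))  # closer to 1/6, never guaranteed equal
```

Even with a perfect model, 100 rolls can easily show a six rate of 10% or 25%; the model quantifies the randomness it knows about, not the variability of any single trial.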
The shooting rampage at Fort Hood has once again focused attention on the military’s mental-health system, which, despite improvement efforts, has struggled to address a tide of psychological problems brought on by more than a decade of war.
Military leaders have tried to understand and deal with mounting troop suicides, worrying psychological disorders among returning soldiers, and high-profile violent incidents on military installations such as the one that left four people dead and more than 16 injured at the Army post in Texas on Wednesday.
But experts say problems persist. A nationwide shortage of mental-health providers has made it difficult for the military to hire enough psychiatrists and counselors. The technology and science for reliably identifying people at risk of doing harm to themselves or others are lacking.
A discussion is going on right now about the role of the enterprise service bus in cloud integration. Does it matter?
I’m not convinced it does. Most of the discussion seems to be coming from vendors, and while it’s probably good thought fodder for architects, I’m unconvinced there’s much of a strategic case for caring here.
One recent example, “Why Buses Don't Fly in the Cloud: Thoughts on ESBs,” appeared on Wired Innovation Insights and was written by Maneesh Joshi, the senior director of Product Marketing at SnapLogic.
One of the reasons energy conservation is such a hot button issue in the data center these days is that no one has a clear idea how to assess the situation.
To be sure, metrics like PUE (Power Usage Effectiveness) are a step in the right direction, but even its backers will admit that it is not a perfect solution and should not even be used to compare one facility against another. And as I pointed out last month, newer metrics like Data Center Energy Productivity (DCeP) provide a deeper dive into data operations but ultimately rely largely on subjective analysis in order to gauge the extent that energy is being put to good use.
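For reference, PUE is simply total facility energy divided by the energy delivered to IT equipment; the sketch below, with hypothetical meter readings, shows the calculation:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy divided by the
    energy delivered to IT equipment. 1.0 is the theoretical ideal;
    everything above it is cooling, power-distribution loss, lighting."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month of meter readings for one facility.
print(pue(total_facility_kwh=1_800_000, it_equipment_kwh=1_000_000))  # 1.8
```

A single number like 1.8 says nothing about whether the IT load itself is doing useful work, which is exactly the gap metrics such as DCeP try, imperfectly, to fill.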
Did you get a boatload of World Backup Day pledge messages through Facebook and Twitter last week? This independent global initiative encourages everyone to back up important data on all computing devices — and spread the word. As they say, “friends don’t let friends go without a backup.” Absolutely right.
As people around the globe were taking the World Backup Day pledge, I was presenting at DRJ Spring World 2014, the world’s largest BC/DR conference. As I reported, the vast majority of organizations are NOT prepared to respond to intentional or accidental threats to IT systems.
- 73% failing in terms of disaster readiness (scored a D or F)
- 60% do not have a documented DR plan
- For 68%, plans don’t exist or proved not very useful
The news is not much better for the minority of organizations who have a DR plan in place. Again, the 2014 annual report documents that where they exist, DR plans are largely gathering dust:
By Rakesh Shah
Distributed denial of service (DDoS) is no longer just a service provider problem: far from it. It can be a very real business continuity issue for many organizations.
DDoS attacks are what some would consider an epidemic today, affecting all sorts of organizations. Why? The stakes continue to skyrocket. The spotlight continues to shine brightly, attracting attackers looking for attention for many reasons and motivations.
In recent times, attacks have often been politically or ideologically motivated. Attackers want to make a statement and to make headlines (and to cause many headaches along the way) – quite similarly to the effect a sit-in or a strike would have in the ‘offline’ world.
This new breed of attacker targets high profile organizations in order to ensure his or her grievances will be heard. Few targets are as high profile or mission critical to the economy as financial services.
Avere Systems has released the findings of its ongoing original study into cloud adoption conducted at the recent Cloud Expo Europe 2014.
Like their US counterparts at the AWS Summit in Vegas last November, the majority of the attendees surveyed in London indicated that they currently use, or plan to use within the next two to five years, cloud for compute (71 percent), storage (76 percent) and applications (80 percent).
One major difference in response was that 53 percent of US respondents cited organizational resistance as a major barrier to cloud use, compared to just 11 percent in Europe, indicating a potentially less conservative approach in the region.
Today ends my review of what I believe to be the five steps in the management of a third party under an anti-bribery regime such as the Foreign Corrupt Practices Act (FCPA) or UK Bribery Act. On Monday, I reviewed Step 1 – the Business Justification, which should kick off your process with any third party relationship. On Tuesday, I looked at Step 2 – the questionnaire that you should send to a third party and what information you should elicit. On Wednesday, I discussed Step 3 – the due diligence that you should perform based upon the information that you have received from and ascertained on the third party. On Thursday, I examined Step 4 – how you should use the information you obtain in the due diligence process and the compliance terms and conditions which you should place in any commercial agreement with a third party. Today, I will conclude this series by reviewing how you should manage the relationship after the contract is signed.
I often say that after you complete Steps 1-4 in the life cycle management of a third party, the real work begins, and that work is found in Step 5 – the Management of the Relationship. While the work done in Steps 1-4 is absolutely critical, if you do not manage the relationship it can all go downhill very quickly and you might find yourself with a potential FCPA or UK Bribery Act violation. There are several different ways that you should manage your post-contract relationship. This post will explore some of the tools which you can use to help make sure that all the work you have done in Steps 1-4 will not be for naught and that you will have a compliant anti-corruption relationship with your third party going forward.
Computerworld — Although Apple isn't the sole focus of Microsoft's Enterprise Mobility Suite (EMS) or of Satya Nadella's new "mobile-first cloud-first" vision for the company, its iOS devices dominate enterprise mobility, meaning that Apple will play a major role in Microsoft's mobility strategy. In pursuing this strategy, Microsoft is, in a way, copying Apple's approach to business and enterprise iOS customers, albeit from a different perspective.
Microsoft began adding the ability to manage iOS and Android devices to its cloud-based Intune management suite last year. Although initial support for iOS device management was very basic, the company updated Microsoft Intune's iOS capabilities in January. While Microsoft has a ways to go before it catches up to the feature sets of the major mobile device management and enterprise mobility management vendors, the company looks committed to advancing its mobile management tools quickly.
Computerworld — The challenge: Justify to the senior management committee the expense of business relationship management (BRM) within the IT function.
Now, there are many ways to do that. All the tools for assessing value can be drawn upon. There's the balanced scorecard, ROI, maturity models (with key performance indicators) and assessments against them, surveys, IT investment ratios, IT productivity over time. All very plausible, given the right circumstances.
But as CIO, I knew that I had to do more than show that BRM made compelling sense from a stockholder perspective. I also had to show how its success would be measured over time.
Do you think your anti-virus software is doing an adequate job in detecting malware and keeping your computers and network safe?
Unfortunately, you may need to re-think your attitudes toward AV software. According to a new report from Solutionary and the NTT Group, AV fails to spot 54 percent of new malware that is collected by honeypots. Also, 71 percent of new malware collected from sandboxes was undetected by over 40 different AV solutions.
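One structural reason signature-based AV misses new malware is that an exact-hash lookup can only flag samples it has already catalogued. The toy sketch below (the “signature database” and “malware” bytes are invented for illustration) shows why even a one-byte variant slips past an exact-hash signature:

```python
import hashlib

# Toy "signature database" of known-bad SHA-256 hashes (invented sample).
known_bad = {hashlib.sha256(b"EVIL_PAYLOAD_v1").hexdigest()}

def flagged(sample: bytes) -> bool:
    """Exact-hash signature check, a stand-in for signature-based AV."""
    return hashlib.sha256(sample).hexdigest() in known_bad

print(flagged(b"EVIL_PAYLOAD_v1"))  # True: the known sample is caught
print(flagged(b"EVIL_PAYLOAD_v2"))  # False: a trivial variant slips through
```

Real AV products add heuristics and behavioural analysis on top of signatures, but the report’s honeypot and sandbox numbers suggest those layers still lag well behind the rate at which new variants appear.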
The report also found that even a minor SQL injection could result in financial losses upwards of $200,000 – the kind of dollar amount that could cripple a small business.
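The standard defence against SQL injection is parameterized queries, which keep user input out of the SQL text entirely. A minimal sketch, using Python’s built-in sqlite3 purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

attacker_input = "x' OR '1'='1"  # a classic injection attempt

# Parameterized query: the driver binds the value, so the quote characters
# in attacker_input are treated as data, not as SQL syntax.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(rows)  # [] -- the injection matches nothing
```

Had the input been concatenated directly into the SQL string, the `OR '1'='1'` clause would have returned every row; binding the value as a parameter is what closes that hole.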
Everything in IT these days is rapidly moving to be defined by software, including now backup and recovery.
EMC today launched a Data Protection Suite spanning its Avamar, NetWorker, Data Protection Advisor, Mozy and SourceOne products that not only makes them easier to acquire, but also sets the stage for managing them as an integrated set of processes.
Rob Emsley, senior director of product marketing for EMC, says that just like the rest of the enterprise, data protection is moving toward a software-defined model that promises to make it easier to manage backup and recovery, compliance and archiving.
As part of that exercise, Emsley says EMC is moving toward enabling a self-service model under which end users would be able to directly invoke EMC products and services within the policy guideline set by the internal IT organization across both structured and unstructured data sets.
This week, a new report from the United Nations’ Intergovernmental Panel on Climate Change summarized the ways climate change is already impacting individuals and ecosystems worldwide and strongly cautioned that conditions are getting worse. Focusing on impacts, adaptation and vulnerability, the panel’s latest work offers insight on economic loss and prospective supply chain interruptions that should be of particular note for risk managers—and repeatedly highlights principles of the discipline as critical approaches going forward.
Key risks that the report identified with high confidence, spanning sectors and regions, include:
The second earthquake to strike the Los Angeles area on March 28 is a wake-up call and reminder of the risk to commercial and residential properties in Southern California, according to catastrophe modeling firm EQECAT.
(The M5.1 quake located 1 mile south of La Habra follows the M4.4 earthquake near Beverly Hills (30 miles to the northwest) on March 17.)
In its report on the latest quake, EQECAT notes that most homeowners do not carry earthquake insurance (only about 12 percent of Californians have earthquake coverage, according to I.I.I. stats), and those that do typically carry deductibles ranging from 10 percent to 15 percent of the replacement value of the home. Commercial insurance, meanwhile, often carries large deductibles and strict limits on coverage.
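To see what a percentage deductible means in practice (illustrative figures, not from EQECAT’s report): on a home with a $500,000 replacement value, a 10 percent deductible means the homeowner absorbs the first $50,000 of any loss before coverage applies.

```python
def out_of_pocket(loss: float, replacement_value: float, deductible_rate: float) -> float:
    """Policyholder's share of a loss under a percentage-of-value deductible."""
    deductible = replacement_value * deductible_rate
    return min(loss, deductible)

# Illustrative: $200,000 of damage to a $500,000 home with a 10% deductible.
print(out_of_pocket(200_000, 500_000, 0.10))  # 50000.0 paid by the homeowner
```

Note that the deductible is keyed to the home’s replacement value, not to the size of the loss, which is why moderate quake damage frequently falls entirely below the deductible.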
CSO — Hacking is no longer just a game for tech-savvy teens looking for bragging rights. It is a for-profit business -- a very big business. Yes, it is employed for corporate and political espionage, activism ("hacktivism") or even acts of cyberwar, but the majority of those in it are in it for the money.
So, security experts say, one good way for enterprises to lower their risk is to lower the return on investment (ROI) of hackers by making themselves more expensive and time-consuming to hack, and therefore a less tempting target. It's a bit like the joke about the two guys fleeing from a hungry lion. "I don't have to outrun him," one says to the other. "I just have to outrun you."
Of course, this only applies to broad-based attacks seeking targets of opportunity -- not an attack focused on a specific enterprise. But, in those cases, being a bit more secure than others is generally enough.
Let’s proceed by elimination. Servers? Those are the things that fall over when your data centre is hit by lightning and for which you do your disaster recovery planning anyway. Desktop PCs? They’re practically nailed to your desk, so they won’t be going with you as you run for the exit. Laptops? Maybe, although battery power and hard drive fragility may be issues. Smartphone? Compact, highly portable, runs tons of apps but has such a tiny screen. So finally, is the tablet computer the best compromise for IT on the run while you’re trying to get everything else back to normal?
CIO — The concept of a "data lake," sometimes called an "enterprise data hub," is a seductive one.
The data lake is the landing zone for all the data in your organization — structured, unstructured and semi-structured — a central repository where all data is ingested and stored at its original fidelity. All your enterprise workloads, from batch processing and interactive SQL to enterprise search and advanced analytics, then draw upon that data substrate.
Generally, the idea is to use HDFS (Hadoop Distributed File System) to store all your data in one large repository. But building out such a next-generation data infrastructure requires more than simply deploying Hadoop; there's a whole ecosystem of related technologies that need to integrate with Hadoop to make it happen. And while Hadoop itself is open source, many of the other technologies that can help you build that infrastructure are open core or fully proprietary.
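As a conceptual sketch only (a local directory stands in for HDFS here, since a real lake needs a running cluster): the point of the landing zone is that structured, semi-structured and unstructured data all land in one place at original fidelity, with no upfront schema, and each workload parses only what it needs.

```python
import json
import tempfile
from pathlib import Path

# Local directory standing in for an HDFS landing zone.
lake = Path(tempfile.mkdtemp())

# Ingest heterogeneous data at its original fidelity -- no upfront schema.
(lake / "orders.csv").write_text("id,amount\n1,9.99\n2,4.50\n")               # structured
(lake / "clickstream.json").write_text(json.dumps({"page": "/", "ms": 31}))   # semi-structured
(lake / "support_email.txt").write_text("Subject: login problem ...")         # unstructured

# One workload draws on the same substrate, parsing only the slice it needs.
total = sum(float(line.split(",")[1])
            for line in (lake / "orders.csv").read_text().splitlines()[1:])
print(round(total, 2))  # 14.49
```

Schema is applied at read time by each consumer, which is the essential contrast with a traditional warehouse, where schema is enforced at ingest.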
It seems the more the enterprise becomes steeped in cloud computing, the more we hear of the end of local infrastructure in favor of utility-style “mega-data centers.” This would constitute a very dramatic change to a long-standing industry that, despite its ups and downs, has functioned primarily as an owned-and-operated resource for many decades.
So naturally, this raises the question: Is this real? And if so, how should the enterprise prepare for the migration?
Earlier this week, I highlighted a recent post from Wikibon CTO David Floyer touting the need for software-defined infrastructure in the development of these mega centers. Floyer’s contention is that “megaDs” are not merely an option for the enterprise, but the inevitable future, in that they will take over virtually all processing, storage and other data functions across the entire data ecosystem. The key driver, of course, is cost, which can be distributed across multiple users to provide a much lower TCO than traditional on-premise infrastructure. At the same time, high-speed networking, 100 Gbps or more, has dramatically reduced latency of distributed operations and is now available at a fraction of the cost of only a few years ago.
By Michael Bratton
Even though plans represent just one component of a larger business continuity management system, they are what guide the organization through all phases of response and recovery following the onset of a disruptive incident – from the initial response and assessment to the eventual return to normal operations. Effective planning is meant to ensure that response and recovery efforts align to the expectations of all interested parties and provide a repeatable approach to minimize downtime.
This article explores different types of plans and examines their purpose within a wider business continuity strategy.
Sungard Availability Services has announced that it is now a standalone company, following its split-off from SunGard Data Systems Inc. The new company, with annual revenues of approximately $1.4 billion and operations in 11 countries, will remain headquartered in Wayne, PA.
As a result of the split-off, Sungard AS now has its own board of directors and a new brand.
"Now that we are an independent firm, we have the flexibility to evolve our culture, our industry relationships and our investments to maximize our business and best serve customers," said CEO Andrew A. Stern.
"Today's announcement is the next step towards creating a highly-focused IT services business that's dedicated to providing world-class managed / availability services to our customer base," Stern noted. "All of us here at Sungard AS are very excited about the prospects to accelerate our growth, and we look forward to continue partnering with our customers to deliver the business outcomes they need."
Sungard AS today revealed its new brand identity, which includes a new logo. The company, which pioneered the concept of shared IT disaster recovery infrastructure more than 30 years ago, will continue to leverage its ‘always on, always available’ brand positioning. Its new logo represents strength and dynamism. A forward-leaning angle in the logo conveys progression and growth, while a triangle in the logo represents stability and the support that the company will continue to provide its customers.
Sungard AS leverages its scale and global reach to address its approximately 7,000 customers' cloud, managed hosting and recovery-services needs. "Our company will continue to focus investments in our newer service offerings, which include Enterprise Managed Services, Enterprise Cloud Services, Recovery as a Service and Assurance, our next-generation business continuity management software offering," Stern said.
CIO — The perennial data center quest to beat the heat has sparked a wave of innovation in enterprise computing.
Densely packed computing facilities produce a lot of heat. Getting rid of it is a must for boosting the reliability of computing and communications gear. The trick is keeping things cool without running up utility bills and expanding the carbon footprint.
To that end, IT managers have an expanding list of options and measures to consider. Data centers may combine straightforward approaches (such as organizing centers into cold and hot aisles) with more elaborate components (such as cooling towers). Even water-cooled computers, once a staple of the mainframe world, appear to be making a comeback. Immersion cooling, in which servers are bathed in a nonconductive cooling fluid, has made an appearance in a few data centers.
Since the March 22 landslide, the Red Cross has mobilized five response vehicles and more than 300 trained workers – more than half of them from Washington State.
Through Monday (March 31), the Red Cross has served 15,000 meals and snacks in partnership with Southern Baptist Disaster Relief, handed out hundreds of comfort and relief items, and provided nearly 2,400 mental health or health-related contacts. In addition, our shelters have provided more than 130 overnight stays.
- Red Cross mental health and spiritual care volunteers are caring for families who have lost loved ones or are waiting for word on the missing.
- Red Cross workers are meeting one-on-one with people affected to create recovery plans, navigate paperwork and locate help from other agencies. In some situations, the Red Cross may also provide direct financial support to people who need extra help, including assistance with funeral expenses and mental health counseling.
- Red Cross Family Care Centers that are open in Darrington and Arlington are places where affected family members can receive emotional and spiritual support, mental health assistance, and care for children after they receive notification of loss of a loved one.
- Red Cross workers are also providing emotional support and help with creating individual recovery plans at Joint Resource Centers in Darrington and Arlington.
With eight confirmed cases of Ebola reported in the Guinea capital, Conakry, Médecins Sans Frontières (MSF) says that the country is 'facing an unprecedented epidemic in terms of the distribution of cases.'
“We are facing an epidemic of a magnitude never before seen in terms of the distribution of cases in the country: Gueckedou, Macenta Kissidougou, Nzerekore, and now Conakry,” said Mariano Lugli, coordinator of MSF's project in Conakry.
To date, Guinean health authorities have recorded 122 suspected patients and 78 deaths. Other cases, suspected or diagnosed, were found in Sierra Leone and Liberia.
MSF continues to strengthen its teams on the ground in Guinea. By the end of the week, there will be around 60 international fieldworkers who have experience in working on haemorrhagic fever. The group will be divided between Conakry and the other locations in the south-east of the country.
Just got back from Orlando where I helped kick off the largest BC/DR conference in the world yesterday, Spring World 2014.
I previewed my Sunday talk in Orlando with an online webinar last week. If you were able to participate in last Wednesday’s webinar, The State of Disaster Recovery Preparedness (which is archived on the Disaster Recovery Journal’s website), you may recall this excellent question posed by one of the attendees:
“How do we convince upper management to fund disaster recovery?”
Getting the executive team on your side is a foundational step toward developing and implementing a sound DR plan. Like most things in life, I think communications is key — both what you say and how you say it.
The 2014 BCI North America Awards took place on Sunday March 30th as part of the Disaster Recovery Journal (DRJ) Spring World 2014. The awards recognise the outstanding contribution of business continuity professionals and organizations based or operating in the North America region, including the USA and Canada.
The winners were:
Business Continuity Industry Personality of the Year
Frank Perlmutter MBCI
BCM Newcomer of the Year
Leanne Metz AMBCI, Associate Director, Mead Johnson Nutrition
Business Continuity Innovation of the Year
Public Sector Manager of the Year
Brian Gray MBCI Chief, Business Continuity Management, United Nations
Business Continuity Manager of the Year
Dave Morgan MBCI, Senior BCP Manager, Delta Dental
Business Continuity Team of the Year
Franklin Templeton Investments
Most Effective Recovery of the Year
Business Continuity Consultant of the Year
Skip Williams, Owner, Kingsbridge Disaster Recovery
Business Continuity Provider of the Year (Product)
ResilienceONE® BCM Software
Business Continuity Provider of the Year (Service)
The Intergovernmental Panel on Climate Change (IPCC) has issued a new report that says the effects of climate change are already occurring on all continents and across the oceans. The world, in many cases, is ill-prepared for risks from a changing climate. The report also concludes that there are opportunities to respond to such risks, though the risks will be difficult to manage with high levels of warming.
The report, entitled ‘Climate Change 2014: Impacts, Adaptation, and Vulnerability’, from Working Group II of the IPCC, details the impacts of climate change to date, the future risks from a changing climate, and the opportunities for effective action to reduce risks. A total of 309 coordinating lead authors, lead authors, and review editors, drawn from 70 countries, were selected to produce the report. They enlisted the help of 436 contributing authors, and a total of 1,729 expert and government reviewers.
CloudEndure has published the results of a benchmark survey, entitled ‘2014 State of public cloud disaster recovery’. This presents best practices and success metrics reported by companies that host web applications in the public cloud.
The highlights of the survey report are:
- When it comes to service availability, there is a clear gap between how organizations perceive their track record and the reality of their capabilities. While almost all respondents claim they meet their availability goals consistently (43 percent) or most of the time (49 percent), 26 percent of the organizations surveyed don’t measure service availability at all. It is hard to see how these organizations can claim to meet goals they are unable to measure.
- While the vast majority of the organizations surveyed (79 percent) have a service availability goal of 99.9 percent or better, over half of the companies (54 percent) had at least one outage in the past 3 months.
- The top challenges in meeting availability goals are insufficient IT resources, budget limitations, and limited ability to prevent software bugs.
- Load balancing and local (single region/zone) storage backup are the leading strategies to ensure system availability and data protection cited by 59 percent and 51 percent of the respondents respectively.
- There is a strong correlation between the cost of downtime and the average hours per week invested in backup / disaster recovery.
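A 99.9 percent goal is less forgiving than it sounds. As a quick back-of-the-envelope calculation (standard availability arithmetic, not a figure from the survey), ‘three nines’ allows roughly 43 minutes of downtime per month, so a single multi-hour outage in a quarter already blows the budget:

```python
def downtime_budget_minutes(availability: float, period_hours: float) -> float:
    """Maximum downtime (in minutes) consistent with an availability goal."""
    return (1 - availability) * period_hours * 60

print(round(downtime_budget_minutes(0.999, 30 * 24), 1))  # 43.2 min per 30-day month
print(round(downtime_budget_minutes(0.999, 90 * 24), 1))  # 129.6 min per 90-day quarter
```

Seen this way, the survey’s finding that 54 percent of companies had at least one outage in the past three months sits uneasily beside the 79 percent who claim a three-nines-or-better goal.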
Complimentary copies of the report are available for download after free registration.
Avalution Consulting has announced the release of a new feature, ‘Catalyst Insights’, for its Catalyst business continuity software suite.
Catalyst Insights provides automatic business continuity metrics that enable business continuity and IT disaster recovery managers to quickly identify and address preparedness gaps and report on their organization's level of preparedness.
With Catalyst Insights users can:
- View granular business continuity dashboards, ratings, relationships, and dependencies for each element of the planning lifecycle by department, location, application, IT infrastructure, products and services, or the program as a whole;
- Examine individual elements of the organization to understand upstream and downstream dependencies, identify and address gaps, and report on their current level of preparedness;
- Visually map directional relationship dependencies for individual departments, locations, applications, IT infrastructure, products and services, or across the entire organization.
The Catalyst business continuity software suite can be trialled for 30 days before buying.
LOS ANGELES — It has been 20 years since Southern California experienced a major earthquake, a powerful 6.7-magnitude temblor that rolled through Northridge, killing 57 people. But this stretch of seismic calm, though welcome in obvious ways, has undermined efforts to force Los Angeles to deal with what officials describe as potentially lethal deficiencies in earthquake preparation.
That may be changing. Since two back-to-back earthquakes Friday evening — a relatively small one with a magnitude of 3.6, followed by a long and rolling 5.1 quake — Los Angeles has been shaken by nearly 175 smaller aftershocks. It is the first time this area has suffered an earthquake in excess of magnitude 5 since 1997, and it comes two weeks after a 4.4 earthquake jolted residents awake.
None of these quakes caused injuries or widespread damage, other than broken water pipes and some homes that have been declared at least temporarily uninhabitable. But geologists see them as the predictable end of a cycle: a return to what might be an uncomfortable normal in which 5-magnitude earthquakes become routine events.
YOKOHAMA, Japan — Climate change is already having sweeping effects on every continent and throughout the world’s oceans, scientists reported on Monday, and they warned that the problem was likely to grow substantially worse unless greenhouse emissions are brought under control.
The report by the Intergovernmental Panel on Climate Change, a United Nations group that periodically summarizes climate science, concluded that ice caps are melting, sea ice in the Arctic is collapsing, water supplies are coming under stress, heat waves and heavy rains are intensifying, coral reefs are dying, and fish and many other creatures are migrating toward the poles or in some cases going extinct.
Risk levels and uncertainty change significantly over time. Competitors make new and sometimes unexpected moves on the board, new regulatory mandates complicate the picture, economies fluctuate, disruptive technologies emerge and nations start new conflicts that can escalate quickly and broadly. Not to mention that, quite simply, stuff happens, meaning tsunamis, hurricanes, floods and other catastrophic events can hit at any time. Indeed, the world is a risky place in which to do business.
Yet like everything else, there is always the other side of the equation. Companies and organizations either grow or face inevitable difficulties in sustaining the business. Value creation is a goal many managers seek, and rightfully so, as no one doubts that successful organizations must take risk to create enterprise value and grow. The question is, how much risk should they take? A balanced approach to value creation means the enterprise accepts only those risks that are prudent to undertake and that it can reasonably expect to manage successfully in pursuing its value creation objectives.
Among a whirlwind of course leadership, business development, teaching, writing, and course validations, I found time to present to Thames Valley Chamber of Commerce’s Windsor Debate on ‘The Changing Face of National Security’. My presentation – Cyber Security: Mission Impossible? – was part of a wider programme of discussions by senior military and industry influencers and analysts about the dynamic changes that affect policy and capability.
The event was held at Windsor Castle, a suitable backdrop for discussions concerning the defence and maintenance of the UK’s values and priorities in the face of historic challenges, and looking forward to an uncertain and unpredictable future. The debate was a fantastic opportunity to contribute to and learn from the knowledge and ideas surrounding our resilience for the future. Delegates discussed everything from international stability to aviation security, and from state intelligence to cyber security. As we at Bucks have these subjects firmly in our portfolio, the debates allowed me to contextualise what we think we know about some of these areas – and of course how much we don’t know.
Gavin Butler attended the fourth Future of Cyber Security 2014 conference in London on 20th March 2014 and here’s what he thought about it:
The series of presentations gave a useful overview of the current state of play in cyber security thinking and predictions for the future. Retired Colonel John Doody hosted proceedings and introduced talks from Lord Erroll (Merlin) and Chris Gibson of CERT-UK, as well as speakers from IBM, Palo Alto Networks, Barclays Bank, Encode, Allianz and AirWatch. Chris Gibson’s presentation in particular provided confidence that the future of cyber security is indeed in ‘safe hands’: CERT-UK seeks to reinforce links with industry and academia and to promote information sharing, such as through CISP, which will enable organisations to take a more ‘resilient’ outlook towards developing their own effective cyber security controls. There is also further scope and recognition for SMEs to adopt concepts from the ‘Cyber Security Strategy’, perhaps because they are now seen as vital to the UK economy and hence part of the ‘critical national infrastructure’.
After disasters like the Oso landslide in Washington State, a common question is why people are allowed to live in such dangerous places. On the website of Scientific American, for example, the blogger Dana Hunter wrote, “It infuriates me when officials know an area is unsafe, and allow people to build there anyway.”
But things are rarely simple when government power meets property rights. The government has broad authority to regulate safety in decisions about where and how to build, but it can count on trouble when it tries to restrict the right to build. “Often, it ends up in court,” said Lynn Highland, a geographer with the United States Geological Survey’s landslide program in Golden, Colo.
Her agency provides scientific information about geologic features and risks, but it has no regulatory authority, and state and local regulations are a patchwork, she said. When disaster strikes, people find that their insurance policies do not cover landslides without special riders that can be ruinously expensive.
How odd that even though we are this deep into the cloud transition, people are still debating the merits of public vs. private vs. hybrid.
If the latest research is to be believed, however, most enterprises have already moved beyond this debate and are actively seeking a variety of cloud-based solutions that will combine the best of the cloud as well as legacy virtual and even physical infrastructure.
Take, for example, CTERA Networks’ recent Cloud Storage Report, which holds that 63 percent of enterprises prefer internal or hosted virtual private cloud solutions over SaaS offerings like Dropbox for their storage and collaboration needs. This is actually a no-brainer – in fact, I’m surprised the number is that low – considering the advantage of keeping critical data safely tucked behind the firewall rather than on a public service. Public services will have their role to play going forward, but they are not likely to house mission-critical data and applications, at least not for long.
By Samuel Greengard
In recent years, as organizations have embraced cloud computing, CIOs and other executives have witnessed significant gains. In many cases, their enterprises have boosted IT availability, reduced demands on internal infrastructure and notched productivity improvements along with cost savings. Last October, Gartner reported that cloud computing will emerge as the bulk of IT spend by 2016 and half of all cloud services will take a hybrid cloud approach by 2017.
But as more and more organizations drift into the cloud, one fact is perfectly clear: the risk of an outage or outright failure is real, with significant repercussions both during and after the event. Already, a number of high-profile cloud providers have endured episodic outages and failures, including Amazon Web Services, Google Drive, Dropbox and Microsoft Azure. In some instances, companies using these products and services haven't just endured downtime, they've also lost data.
By Nathaniel Forbes, MBCI, CBCP
Late in 2013 the head of BCM for one of Asia’s largest banks voluntarily transferred within his bank to a job entirely unrelated to BCM. He is the most experienced, most knowledgeable and highest-paid non-expatriate BCM professional I know in Asia.
I wondered why anyone with eleven years of full-time BCM experience and a compensation package the envy of his peers would make such a move. He agreed to answer my questions on the record if I didn’t use his name or identify his employer.
By Paul Kirvan, FBCI
In March 2014 the business continuity profession lost one of its founding fathers, Ron Ginn, (Hon) FBCI. Although Ron was in his 80s, he lived a vigorous life and never lost his passion for the profession he helped create. As a fitting tribute to Ron’s memory, I have compiled thoughts and remembrances from several of Ron’s friends and colleagues, including myself.
As one of the few ‘foreigners’ in the early days of the business continuity profession in the UK and Europe, I became involved in an organization many of you will remember, called Survive! This was instrumental in the growth of the profession in Europe and North America and also in the founding of the Business Continuity Institute. During my many trips to the UK I had the pleasure of meeting Ron Ginn on several occasions. Ron was one of my early mentors and inspirations for my continued involvement in the profession. His enthusiasm was infectious; he really understood the direction that the profession needed to go and was a constant source of encouragement and challenge for all of us who were there in the ‘early days’. I last spoke to Ron during the 2012 BCI World Conference in London, and even in his 80s, Ron was still challenging me to do more in the profession. He was a true inspiration to me, and will be greatly missed.
Teon Rosandic, VP EMEA, xMatters, gives a vendor’s view of the developments which are improving the capabilities of emergency notification systems and why traditional one-way mass notification is on the way out.
Many of the mass notification systems that businesses utilise today haven’t changed or evolved since they were originally designed many years ago. It’s the same old thing – put your message in the message box and broadcast it out to everyone in your database. This type of archaic communication system just doesn’t cut it today, with more and more incidents and crises requiring immediate attention and two-way communication at every step of the way.
However, there is new technology available, and there are things that the business continuity and risk manager should consider when looking for a mass notification approach.
This article delves into the ins and outs of what effective mass communication technology can deliver and what the old systems lack.
US statesman Benjamin Franklin was famous for many things and for one in particular: his proclamation that “in this world nothing can be said to be certain, except death and taxes”. Well, Benjamin, it seems modern technology and inflation have conspired to add a couple more items: server crashes and data security breaches. In other words, it’s not a matter of if these events will occur; it’s a matter of when. It’s true that robust, quality IT products can push the ‘when’ so far out that it seems to disappear into the distant future. However, smart organisations assume that both will happen and take appropriate precautions.
When Spiceworks surveyed IT professionals recently about their attitudes toward certifications, one of the most interesting data points was that about half of the respondents will be paying for their continuing IT education themselves this year. Only 56 percent said that their employers would pay for training in 2014. But half of the IT pros said they think that certifications are very valuable or extremely valuable to their careers. And 80 percent of them said they plan to complete some training or certification this year. Since having to pay for continuing education yourself often really means you’ll need to find some free or lower-cost training, let’s take a look at a range of vendor-specific, higher education, online and free online training resources. We’ll begin with one of the hottest IT skill sets for 2014: Big Data, aka analytics and/or business intelligence.
Even as rescue teams search for more bodies in the aftermath of the March 22 mud slide in Washington, records show that while the area is prone to these disasters, homes were allowed to be built there anyway.
The slide, triggered by excessive rain, has claimed 24 lives so far and 176 are still unaccounted for, the Associated Press reports.
Snohomish County Emergency Management Director John Pennington said during a news conference on March 24 that the slide was “completely unforeseen” and that it “came out of nowhere.”
In a 1999 report filed with the U.S. Army Corps of Engineers, however, geomorphologist Daniel J. Miller and his wife, Lynne Rodgers Miller, warned of “the potential for a large catastrophic failure” in the area, according to the Seattle Times.
Computerworld — The mobile market is moving on. Traditional smartphones and tablets are maturing. The next phase is coming, and it consists of the Internet of Things, a descriptive phrase that includes all manner of smart (and barely smart) devices, often connected wirelessly.
While smartwatches, fitness bands and connected appliances are important, the current focus on consumer products overshadows the fact that the greatest impact this category may have will be on the enterprise. Consumer experimentation will lead the market, but enterprise adaptation will not be far behind. For this reason, I use the term "Enterprise of Things" (or EoT) to describe this next wave that enterprises will need to deal with, even as most still try to adequately cope with the more mature mobile devices already impacting their users, networks and applications.
Computerworld — Business groups in a growing number of companies appear to be plowing ahead on data analytics projects with little input or help from their own IT organizations.
Rather than leveraging in-house IT skills and technology, many business groups are using their own data and department-level analysts to cobble together analytics strategies, according to a survey by IDC.
Business managers and IT managers appear to have different assessments of the value enterprise IT organizations bring to big data and data analytics projects. While IT groups see themselves as enablers, business leaders tend to view IT as a stumbling block.
For the study, IDC surveyed 578 line-of-business managers, IT managers, data analysts and business executives.
IDG News Service (Boston Bureau) — SAP is continuing to merge its HANA in-memory database platform with its Business Warehouse data warehousing software, with the latest update adding support for HANA's real-time data loading services.
Companies with large data warehouses often load information sets at off-peak times, such as in overnight batch jobs. But with the general availability of Business Warehouse 7.4, HANA's "smart data access" services can tap any source within or outside a company as it's needed. SAP is calling the approach an "in-memory data fabric."
The services don't actually physically move data into Business Warehouse; rather, the target sources are viewed as virtual tables. These services provide broader access to data sets, as well as the ability to keep frequently accessed information sets inside the core data warehouse while reaching out to those that are needed only occasionally.
James Leavesley outlines why risk managers need to be up to speed with the social media revolution.
Social media is no longer just the latest buzz word or an experiment for creative marketing teams. Organizations are fast recognising the importance of social media from a customer, employee and business partnership perspective. Companies are using blogs, videos, Facebook and Twitter to connect with ‘communities’. However, it only takes one disgruntled customer to take to Twitter, YouTube or Facebook and the results can be costly. Even worse damage can be done by a rogue employee with access to corporate social media accounts and a determination to discredit the company.
So here are five reasons why risk managers should get up to speed with social media and how to control it:
In its latest Bulletin, APEC (Asia-Pacific Economic Cooperation) has provided details of what it is doing to assist regional SMEs to develop business continuity plans.
The Bulletin focuses on a multi-year project launched in 2011 by APEC to enhance the capacity of SMEs to prepare for disasters and to ensure “minimal and tolerable disruption to business operations and supply chains”.
“The main goal of the APEC project is to promote SMEs to establish business continuity plans for sustainable global supply chains,” Johnny Yeh, executive director of the APEC SME Crisis Management Center in Chinese Taipei, told the APEC Bulletin. Mr. Yeh is overseeing the APEC project.
“This is accomplished by training related government, non-profit and private sector organizations in APEC member economies, so they, in turn can train SMEs in their respective economies,” Mr Yeh continued.
As part of the project, experts have developed a simple step-by-step APEC Business Continuity Planning Guidebook for SMEs.
Network World — Cisco this week is unveiling two new configurations of its recently-launched Nexus 9000 switches, plus a new 40G Nexus switch. In addition, Cisco is celebrating the fifth anniversary of its UCS server.
Cisco also announced certification programs for its new Application Centric Infrastructure (ACI) programmable networking product line, which includes the Nexus 9000 switches. ACI is Cisco's non-SDN response to the software-defined networking trend sweeping the industry.
The 16-slot Nexus 9516 and four-slot Nexus 9504 had been expected, and they join the existing eight-slot Nexus 9508. The Nexus 9516 is positioned as an aggregation layer switch for service provider or high-demand deployments, offering 576 wire-speed 40Gbps Ethernet ports and 60Tbps of throughput. It takes up 21 RUs, supports 2,304 10G ports, consumes 11 watts per 40G port, and uses two to four Cisco and/or Broadcom ASICs per line card.
CIO — Few deny that the healthcare industry in the U.S. faces tremendous pressure to change. Few deny the role that technology will play in stimulating this change, either.
Uncertainty creeps in, though, when healthcare organizations try to address their technology needs. This is especially true of healthcare providers — the hospitals, medical offices, clinics and myriad long-term care facilities that account for roughly 70 percent of healthcare spending and that have spent much of the 21st century rushing to catch up to other vertical industries.
Most providers, says Skip Snow, a senior analyst with Forrester, are "very new to the idea that they have all this structured data in clinical systems." That's largely because, until recently, the mission of the healthcare CIO was ancillary to a provider's core mission. IT often fell under the CFO's domain, Snow says, since it focused so much on business systems.
It was recently revealed that the personal details of 10,000 asylum-seekers housed in Australia were accidentally leaked via the Department of Immigration and Border Protection’s website. This has damaged asylum-seekers’ trust in the Australian government and, according to Greens Senator Sarah Hanson-Young, potentially put lives at risk. Such incidents represent significant breaches of local regulations and can result in heavy penalties.
Recent amendments to existing privacy laws in Australia and Hong Kong allow each country’s privacy commissioner to enforce significant penalties for repeated or serious data breaches. Countries like Japan and Taiwan, where new privacy laws have been passed and/or existing ones are being enforced more strictly, also assess penalties for noncompliance.
It’s funny how some myths continue to be believed, even by hard-nosed business people. The notion that virtualisation will save a company’s data is such a myth. Although it can be valuable in optimising an organisation’s use of IT resources and reacting quickly to changing IT needs, virtual environments are not inherently safer than independent physical servers. Yet data recovery provider Kroll Ontrack found that 80 percent of companies believe that storing data virtually carries less risk, or no more risk, than physical storage. Beliefs are one thing, statistics another: 40 percent of companies using virtual storage were hit with data loss in 2012–2013. What’s going on?
Computerworld — Driven by a very strong belief in the future of software-defined data center technology, Bank of America is steering its IT to almost total virtualization, from the data center to desktop.
The technology does for the entirety of a data center what virtualization did for servers: It decouples hardware from the computing resources. Its goal is to enable users to create, expand and contract computing capability virtually, quickly and efficiently.
The software-defined data center is not yet a reality. But there are enough parts of the technology in place to convince David Reilly, Bank of America's global infrastructure executive, that it is the future.
"The software-defined data center is going to dramatically change how we provide services to our organizations," said Reilly. "It provides an opportunity for, in effect, the hardware to disappear.
"We think it's irresistible, this trend," said Reilly.
Dell yet again signaled its intentions to compete more aggressively in the analytics space with the acquisition today of StatSoft.
StatSoft, which has 1,500 customers, is the second major analytics acquisition that Dell has made since acquiring Quest Software. In 2012, just prior to being acquired by Dell, Quest Software acquired Kitenga, a provider of high-end analytics software that usually gets applied to Big Data problems.
In contrast, John Whittaker, director of product marketing for Dell Information Management, says StatSoft represents a more mainstream play in the realm of predictive analytics. With the line between analytics applications definitely blurring these days, Whittaker says customers should expect to see Dell Software become significantly more aggressive in delivering analytics capabilities to the midmarket.
About a month ago, I reported on a study from Ponemon Institute and AccessData that revealed that most companies are doing a poor job when it comes to detecting and effectively responding to a cyberattack. As Dr. Larry Ponemon, chairman and founder of the Ponemon Institute, said in a statement when the report was released:
“When a cyber-attack happens, immediate reaction is needed in the minutes that follow, not hours or days. It’s readily clear from the survey that IR processes need to incorporate powerful, intuitive technology that helps teams act quickly, effectively and with key evidence so their companies’ and clients’ time, resources and money are not lost in the immediate aftermath of the event.”
AccessData’s Chief Cybersecurity Strategist, Craig Carpenter, has been looking at this problem in some depth. We aren’t totally clueless on why these attacks are able to cause tremendous amounts of damage, both financial and reputational, to companies. For example, as information about the Target breach continues to trickle out, we have a pretty good idea of how and why the incident occurred. Our concern now, Carpenter said in a blog post, is fixing these problems. The key, he said, is prioritization and improved integration. In an email to me, Carpenter provided a few steps every company should take to prevent a “Target-like” breach in the future:
InfoWorld — Apache Cassandra is a free, open source NoSQL database designed to manage very large data sets (think petabytes) across large clusters of commodity servers. Among many distinguishing features, Cassandra excels at scaling writes as well as reads, and its "master-less" architecture makes creating and expanding clusters relatively straightforward. For organizations seeking a data store that can support rapid and massive growth, Cassandra should be high on the list of options to consider.
Cassandra comes from an auspicious lineage. It was influenced not only by Google's Bigtable, from which it inherits its data architecture, but also Amazon's Dynamo, from which it borrows its distribution mechanisms. Like Dynamo, nodes in a Cassandra cluster are completely symmetrical, all having identical responsibilities. Cassandra also employs Dynamo-style consistent hashing to partition and replicate data. (Dynamo is Amazon's highly available key-value storage system, on which DynamoDB is based.)
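The Dynamo-style consistent hashing described above can be sketched in a few lines of Python. This is an illustrative toy under stated assumptions (hypothetical node names; real Cassandra uses Murmur3 partitioning, 128-bit tokens and virtual nodes), showing how keys map onto a ring of symmetrical nodes and how replicas are chosen by walking the ring clockwise:

```python
import bisect
import hashlib

def token(key: str) -> int:
    """Hash a key onto the ring (Cassandra uses Murmur3; MD5 here for brevity)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    """Toy consistent-hash ring: each node owns the arc ending at its token."""
    def __init__(self, nodes):
        self.tokens = sorted((token(n), n) for n in nodes)

    def replicas(self, key: str, rf: int = 3):
        """Walk clockwise from the key's position to find rf distinct owner nodes."""
        start = bisect.bisect(self.tokens, (token(key), chr(0x10FFFF)))
        owners = []
        for i in range(len(self.tokens)):
            node = self.tokens[(start + i) % len(self.tokens)][1]
            if node not in owners:
                owners.append(node)
            if len(owners) == rf:
                break
        return owners

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
print(ring.replicas("user:42", rf=3))  # three distinct replica nodes for the key
```

Because each node owns only the arc up to its token, adding a node relocates just the keys on that arc, which is why clusters built this way can expand without reshuffling all data.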
“If you’re not paranoid, you’re not paying attention.” It’s an old joke, but one that rings true as I finish my presentation for this Wednesday’s online webinar with The Disaster Recovery Journal. Here are just three of the danger signals from the 2014 Annual Report on the State of Disaster Recovery Preparedness that I’ll describe during the webinar.
DANGER SIGNAL 1: 3 out of 4 companies worldwide are failing in terms of disaster readiness. Having lots of company will be no consolation for organizations that have failed to respond to the alarming rise in intentional and accidental threats to IT systems.
DANGER SIGNAL 2: More than half of companies worldwide report having lost critical applications or most/all datacenter functionality for hours or even days. Once again, more evidence that business is at risk of crippling losses.
DANGER SIGNAL 3: Human error is the #2 cause of outages and data loss, reported by 43.5% of companies surveyed. How does your disaster recovery plan address this key vulnerability?
The good news? There are specific actions you can take right now to be better prepared to recover your systems in the event of an outage.
The Terrorism Risk Insurance Program, a public/private risk-sharing partnership which is set to expire at the end of 2014, is absolutely critical to maintaining the health of the American economy, according to an updated white paper just released by the Insurance Information Institute (I.I.I.).
The I.I.I.’s Terrorism Risk: A Constant Threat, Impacts for Property/Casualty Insurers explains that should the federal Terrorism Risk Insurance Program Reauthorization Act (TRIPRA) be allowed to expire at year-end 2014, this would have a detrimental impact on the availability and affordability of terrorism insurance for businesses.
Nothing is more important to developing and maintaining an effective C&E program than risk assessment, and effective risk assessment is, as a general matter, perhaps the most daunting task a C&E officer is likely to face. The challenges are both conceptual (a surprising lack of consensus on what the point of a risk assessment is) and practical (getting business people and others to be candid and thoughtful about what they may view as unpleasant and unnecessary topics).
But C&E risk assessment has been an expectation of the U.S. government since the 2004 amendments to the Federal Sentencing Guidelines for Organizations, and anti-corruption compliance standards of other countries are turning these expectations into something of a global mandate. Beyond this, many companies’ C&E programs are in desperate need of some sort of refreshment – and, as much as any program function, a risk assessment can provide a powerful foundation for this.
CIO — IT security is a tricky issue: Too much security -- or too little -- could bankrupt your company. The key is to strike the right balance. These three IT executives share their advice.
Determine Your Investment Best Bets
Martin Gomberg, global director of security, governance and business protection, A&E Networks: Security is a slide switch. Slide it all the way to the right, and nothing will get in, nothing will get out -- and nothing will get done. Slide it all the way to the left, and we will all have a party, it will be a great day -- but we'll only have one of them. My approach is to find the setting where risk is not too high, nor is risk mitigation an impediment to innovation.
In our industry, the threats are increasing and becoming more targeted, and our ability to protect ourselves is diminishing. Meanwhile, the technologies required for protection are getting more complicated and expensive, capable security staff are more difficult to find, and new laws and regulations are more likely to impose severe penalties for breaches.
Computerworld — A few weeks ago, I was happy to hear that Target CIO Beth Jacobs had resigned. This wasn't only because falling on her sword was the right thing to do after her company's massive data breach. The fact that just days earlier I had realized that I was caught up in this mess had something to do with it.
My credit card was used several dozen times at a Mumbai shopping site, and I am convinced that it was compromised in the Target breach. But why didn't my credit card issuer's security algorithms pick up this obvious anomaly? Because I am a frequent traveler, I was told, the charges didn't seem out of the ordinary.
Really? Forty purchases from the same online shopping site didn't seem just a bit suspicious -- even though, in all my travels, I've never been to India?
Think Target and the hit it took when hackers stole the private information of millions, requiring many to update credit cards and the like. It’s a disaster that most executives believe will happen to them: not if, but when. That makes it even more amazing to find out that, according to a study published in the Economist, two-thirds of CEOs think a good response to such an attack will enhance their reputation.
PRNewser from mediabistro reporting on the Economist story notes that while 66% think they will come out of such an event smelling like a rose, only 17% surveyed say they are “fully prepared.”
Hootsuite, perhaps the best social media management and monitoring tool that I know of, today experienced a hack attack in the form of a denial of service attack. One client emailed me Ryan Holmes’ response. The CEO of Hootsuite was fast, empathetic, transparent and almost completely on target. (The only thing missing, in my mind, was an apology, but perhaps he felt there was nothing to apologize for, and he may be right.)
I’ve seen some hefty price tags associated with poor data quality, but I have to say, last year’s figure from the Ministry of Defence may take the prize. The UK agency was told “it was at risk of squandering £1 billion in investments in IT because of dire data quality” last year, according to Martin Doyle, the Data Quality Improvement Evangelist for DQ Global.
This year, another UK agency, the National Health Service (NHS), is under scrutiny for sharing data without consent. Names and addresses may have been taken from the database and sold for studies, which meant it was uploaded to third-party cloud storage services, according to Doyle.
As if that weren’t bad enough, the NHS is also working on a project called Care.data, which is a centralized hub for patient care records. The NHS has “problems recalling exactly who has all of this patient information already, suggesting it has bigger problems to solve,” he writes. This issue has triggered a backlog in patient care.
The cloud is the latest juggernaut to sweep the enterprise IT industry, and if you ask most experts, the expectation is that the entire data universe will one day reside on distributed virtual architecture.
At the moment, however, the vision has not been completely sold to the people who build and maintain enterprise corporate environments.
According to new data from 451 Research at the behest of Microsoft, more than 45 percent of IT executives consider their organizations to be beyond the pilot phase of cloud computing, with at least half of that group saying they are “heavy” cloud users. However, only 6 percent have labeled the cloud as the default platform for new applications, while only 18 percent turn to the cloud regularly for new projects. All of this suggests that while the enterprise has embraced the cloud with open arms, the vast majority are using it for low-value or non-critical functions – hardly the new data paradigm that has been touted so far.
In a white paper entitled ‘Are public agencies better prepared to deal with crises in 2014?’ Noggin IT has released findings from a survey of US organizations.
The survey, conducted in late 2013, reveals an increasingly complex environment for those in crisis management due to greater regulatory compliance, Internet-connected stakeholders, more unpredictable weather events and political and financial volatility, where technology is key to improving organizational resilience and business continuity.
James Boddam-Whetham, managing director Noggin IT says “We are seeing a situation where public agencies are being required to do more with less. Some of the interesting pain points that came out of this survey were that actual crisis management team activation was still a struggle for many organizations; as was the broader issue of employee communications during a crisis. Both point to a perhaps overlooked consideration for a crisis management software solution: can it actually assist you manage your internal people affairs during a crisis. Much of the emphasis for crisis management systems has been on informing the public, or alerts and notifications, rather than necessarily getting the internal ship in order. An ability to organise internal stakeholders would therefore seem to be a logical consideration for any crisis management solution.”
The Business Continuity Institute has announced the creation of a new ‘Associate Fellow’ (AFBCI) senior membership grade for those people who have reached a senior level in the business continuity profession but have concentrated more on developing their practical working experience rather than specifically contributing to the development of the Institute or the discipline.
The AFBCI grade sits between MBCI and FBCI. Applicants must meet either of the following criteria:
- A current MBCI held for a minimum of 3 years;
- A current MBCP credential held for at least 3 years with the DRII.
The applicant must also:
- Be currently working in business continuity management;
- Have a minimum of seven years’ working experience within the discipline and knowledge across all six BCI Professional Practices;
- Have three years of CPD completed using the BCI’s CPD system, or CPEs through the DRII system if using MBCP to apply (these must be the three years immediately preceding the year of application);
- Complete a full scored assessment application process.
PC World — Each time there's a high-profile data breach, security experts exhort the same best practices: Create unique logins for every service you use, use complex passwords, vigilantly comb your credit card statements for anomalies. The advice is sound. Unfortunately, it obscures the fact that the safety of your personal information is ultimately in the hands of companies you share it with.
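The “unique logins, complex passwords” advice above is straightforward to follow in code. Here is a minimal sketch using only Python’s standard library (the service names are hypothetical, and in practice you would let a password manager do this):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """One random password; secrets (not random) is the right module for security use."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A distinct credential per service, so one breached database exposes one account
credentials = {service: generate_password() for service in ("shop", "bank", "mail")}
for service, password in credentials.items():
    print(service, len(password))
```

The point of the per-service dictionary is exactly the expert advice in the paragraph above: a credential stolen from one breached provider is useless everywhere else.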
Identity theft is changing. Customer databases are a treasure trove of personal information and much more efficient for hackers to target than individuals. In this new landscape, the guidelines security experts--and journalists like me--espouse are really just damage-control measures that minimize the impact of a successful attack after the fact, but do absolutely nothing to protect your personal data or financial information from the attack itself.
Look back on some of the major data breach incidents of 2013. Adobe was hacked, and attackers gained access to customer account information for nearly 150 million users, as well as credit-card information from nearly three million customers. Target was hacked, and the credit- or debit-card details for 40 million customers were exposed. In those cases, there was little any individual consumer could have done to prevent being affected by those data breaches.
Computerworld UK — Big data analytics tools will be crucial to enterprise security as criminals deploy faster and more sophisticated methods to steal valuable data, according to security firm RSA.
"We are really at the beginning of intelligence-driven security: it is just the tip of the iceberg. Looking forward we are going to have to be smarter [to deal with threats], and we are going to be looking at better data science," said RSA's head of knowledge delivery and business development, Daniel Cohen.
"It's not 'if' we are going to be breached, but 'when' we are going to be breached, so there is a need to focus more on detection. We saw with the Target breach it was the human factor that slipped there, so we have to be able to bring in more automation."
The number of successful attacks against high-profile businesses have clearly increased in recent years, with the compromise of Target's point of sale systems just one example of the variety of methods that cyber criminals are using to steal data on a large scale.
IBM moved today to take a bigger bite out of fraud by combining various pieces of software and services into a common framework that is simpler to deploy.
Rick Hoene, worldwide fraud solutions leader for IBM Global Services, says that while IBM has been delivering technologies to fight fraud for over 20 years, the scope of criminal fraud activity now requires a more integrated approach. To that end, IBM is launching a Smarter Counter Fraud initiative, which is based on IBM Counter Fraud Management Software and existing assets. This combination creates a single offering that is simpler to both acquire and install.
Based on IBM’s Big Data analytics technologies, the IBM software is designed to aggregate data from external and internal sources and apply analytics in ways to prevent, identify and investigate suspicious activity. It includes analytics that identify non-obvious relationships between entities, visualization technology that identifies patterns of fraud, and machine-learning software to help prevent future occurrences of fraud based on previous discoveries.
While Hadoop may make Big Data more accessible, the setting up of a Hadoop cluster on commodity servers is not particularly simple.
To help IT organizations automate that process, Continuuity today announced that it is contributing Loom, cluster management software that automates the provisioning of a Hadoop cluster, to the open source community.
Continuuity CEO Jonathan Gray says Loom is a byproduct of the company’s effort to provide an application development environment for Hadoop that can be deployed on a private or public cloud. As customers began to build applications on the Continuuity platform-as-a-service (PaaS) environment, it became apparent that they needed help with the DevOps elements of Hadoop.
Network World — Venture capital firms continue to funnel big sums of money to big data startups.
Most recently, Cloudera raised $160 million in new financing from investors including T. Rowe Price and Google Ventures. The latest round for Cloudera (which offers its own distribution of Hadoop plus integrated tools) brings its total funding to $300 million.
On the same day Cloudera announced its venture capital windfall, analytics startup Platfora announced funding of its own. Platfora, based in San Mateo, closed a $38 million round from investors including Tenaya Capital, Citi Ventures, Cisco and Allegis Capital. The latest round brings Platfora's total financing to $65 million.
Platfora's analytics and visualization software is designed to run on top of Hadoop; existing customers include DirecTV, Disney, and The Washington Post.
CIO — HR professionals and recruiters continue to rely on big data to refine the application and hiring process. They are tapping data analytics to predict ROI, performance and likely behavior. However, with so much valuable data available, it's easy to gloss over one of the most important parts of the recruiting process: the human element.
Focusing on "small data" can not only improve the speed and efficiency of your hiring process and pinpoint obstacles in your organization, it can make it easier to find passive talent candidates.
"It's been so exciting over the last few years to see the number of data collection and analysis tools growing, and I have no problem with using those tools," says Jason Berkowitz, vice president of client services at Seven Step Recruiting Process Outsourcing (RPO).
Companies invest in enterprise risk management to identify, analyze, respond to and monitor risks and opportunities in their internal and external environments. These investments maximize opportunities, help avoid nasty surprises and provide reasonable assurance on the achievement of the organization’s objectives.
Established risk assessment processes can suffer from stale thinking in identifying and evaluating risks, especially risks that are ever-changing. Here are five ways to refresh your process, push thinking beyond the “known knowns” and improve the quality of thinking in your risk assessment process.
Business Continuity specialist Vocal has announced that it has been shortlisted for a BCI North America Award in the ‘Business Continuity Innovation of the Year’ category, after nominating its product ‘Command’ for the accolade.
“Command has spent many months in development, and years in its realisation, so this recognition is very important to us” says Vocal’s Trevor Wheatley-Perry. “It’s a truly unique and highly revolutionary solution, because it means that for the first time, any organisation, of any size, anywhere in the world can precisely replicate and manage its operations, systems, and processes during an incident.”
Built on Vocal’s award-winning iModus platform, Command is an easy-to-use tool which offers Vocal’s clients a comprehensive overview of their business continuity plans and fixed assets, a multi-faceted communications system to relay processes to people instantly, and a full audit of decisions made and actions taken during an incident. With hopes to implement the solution across a diverse range of new industries in 2014, the shortlist for the prestigious BCI Award has come at precisely the right time.
“The BCI North America Awards recognise the outstanding contributions of organisations and individual professionals working in the USA and Canada’s business continuity industry,” Trevor continues.
“It’s fantastic to have been shortlisted for such a prestigious award, and as all winners are automatically entered into the BCI Global Awards, being considered for this accolade might turn into some even more exciting opportunities in the future.”
The BCI North America Awards event will be held as part of the Disaster Recovery Journal Spring World Show 2014, a four-day industry gathering which begins on 30 March in Orlando, USA.
Vocal is recognised throughout the world as a trusted innovator of multi award-winning and proven business continuity and communication solutions. In 2007, Vocal launched iModus, the first fully integrated business continuity suite encompassing Notification, Planning, Mapping, Alerting, Staff Safety and Incident Management modules. A multi award-winning solution, iModus was selected as the emergency messaging system for incident management during the London 2012 Olympic and Paralympic Games.
On March 12th, 2014, Everbridge, the leader in Unified Critical Communications, acquired Vocal. The strategic acquisition further elevates Everbridge’s status as the world’s largest provider of emergency notification and critical communication solutions. With the addition of Vocal, Everbridge now offers the broadest product family in the industry, delivered through 12 distributed datacenters, and supported by employees in seven offices in North America, Europe, and Asia. The combined entity will serve more than 2,500 global clients who use the solution to communicate with over 50 million unique end-users every year.
COMMAND is all about actions; the ability to plan ahead and see what will actually happen, in any scenario, at any time. Built on the iModus platform, COMMAND has communications at its core; relaying processes according to the people who need to know, and manipulating data according to exacting operations and processes. We believe that COMMAND is the most exciting innovation the industry has seen in recent years, a new language that will transform the way incidents are managed across the world.
While it is highly innovative, COMMAND is beautiful in its apparent simplicity. It replicates a flow chart of any organisation’s process mapping systems; actions and priorities for varying scenarios are pre-set, one action at a time, so that even in the worst set of circumstances, an untrained member of staff could use the system to confidently and effectively respond to a business interruption.
COMMAND will eliminate the compromises involved in paper processes. It will allow any organisation to build a resilient framework, step-by-step and layer-by-layer according to its processes. The added incentive here is being able to test, measure and evaluate as part of the process, leaving no stone unturned.
Key Features of Command:
- Automatic Workflow at the touch of a button
- Bespoke Software
- Precision Management
- Double Edged: Joint role of director and recorder
CIO — The best practices and technologies involved with data loss prevention (DLP) on mobile devices aim to protect data that leaves the security of the corporate network. Data can be compromised or leaked for a variety of reasons: Device theft, accidental sharing by an authorized user or outright pilferage via malware or malicious apps.
The problems associated with mobile data loss have been compounded by the uptick in employees bringing their own devices to work, whether they have permission from IT or not. In a BYOD situation, the user, not the organization, owns the device, which makes security somewhat trickier for IT to establish and maintain.
At a minimum, any mobile device that accesses or stores business information should be configured for user identification and strong authentication, should run current anti-malware software and must use virtual private networking (VPN) links to access the corporate network.
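That minimum baseline can be expressed as a simple posture check. The sketch below is purely illustrative: the field names and the policy record are invented for this example, not taken from any specific MDM product.

```python
# Hypothetical sketch: checking a device posture record against the minimum
# baseline described above (user identification, strong authentication,
# current anti-malware, VPN-only corporate access). Field names are
# illustrative, not from any real MDM API.

REQUIRED_CHECKS = {
    "user_identified": "device must be enrolled to a named user",
    "strong_auth": "strong authentication enabled",
    "antimalware_current": "anti-malware present and up to date",
    "vpn_enforced": "corporate access only over VPN",
}

def posture_violations(device: dict) -> list:
    """Return the baseline requirements this device fails."""
    return [reason for check, reason in REQUIRED_CHECKS.items()
            if not device.get(check, False)]

device = {"user_identified": True, "strong_auth": True,
          "antimalware_current": False, "vpn_enforced": True}
print(posture_violations(device))  # -> ['anti-malware present and up to date']
```

A real deployment would pull the posture record from an MDM agent rather than a hand-built dictionary, but the compliance logic stays this simple: every unmet requirement is a reason to block access.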
IDG News Service (Bangalore Bureau) — Brazil's lawmakers have agreed to withdraw a provision in a proposed Internet law, which would have required foreign Internet companies to host data of Brazilians in the country.
The provision was backed by the government in the wake of reports last year of spying by the U.S. National Security Agency, including on communications by the country's President Dilma Rousseff.
The legislation, known as the "Marco Civil da Internet," will be modified to remove the requirement for foreign companies to hold data in data centers in Brazil, according to a report on a website of the Brazilian parliament.
Businesses can’t function if they don’t have customers. When customers find other solutions and move away, it’s therefore a threat to business continuity. Conventional banks may be at risk if a new development in online-only banking takes off. Startup ‘Simple’ (that’s the company’s name) for instance is giving clients an innovative alternative. Its solution is to eliminate fees, move all the banking activity to the Internet and offer online apps to help track budgets and finances. It makes its money from interest charges and internetwork payments, but can work with lower margins than conventional bricks-and-mortar banks that must pay for the operation of high street branches. Is this the end of the old-style banks?
Rudyard Kipling once said, “If history were told in the forms of stories, it would never be forgotten.” Could the same be true for your data?
Mike Cavaretta argues that it is true. Cavaretta is a veteran data scientist, as well as a manager at the Ford Motor Company in Dearborn, Michigan. In a recent GigaOm column, he says telling a good story is key to helping others understand your data.
“Many analytics presentations crash and burn because no one answered the question, ‘So what?’” he writes. “Almost as bad are the presentations with dense formulas and a single R2 value. Take your audience on a data journey.”
Despite 77 percent of companies suffering an incident in the past two years, over a third of firms (38 percent) still have no incident response plan in place should an incident occur.
Arbor Networks, Inc., has published the results of an Economist Intelligence Unit survey on the issue of incident response preparedness that it sponsored. The Economist Intelligence Unit surveyed 360 senior business leaders, the majority of whom (73 percent) were C-level management or board members from across the world, with 31 percent based in North America, 36 percent in Europe and 29 percent in Asia-Pacific.
The report entitled ‘Cyber incident response: Are business leaders ready?’ shows that despite 77 percent of companies suffering an incident in the past two years, over a third of firms (38 percent) still have no incident response plan in place should an incident occur. Only 17 percent of businesses globally are fully prepared for an online security incident.
The global energy sector is increasingly vulnerable to cyber-attacks and hacking, due to the widespread adoption of Internet-based, or ‘open’, industrial control systems (ICS) to reduce costs, improve efficiency and streamline operations in next-generation infrastructure developments.
According to the Marsh Risk Management Research paper, ‘Advanced Cyber Attacks on Global Energy Facilities’, energy firms are being disproportionately targeted by increasingly sophisticated hacker networks that are motivated by commercial and political gain.
Releasing the paper at Marsh’s bi-annual National Oil Companies (NOC) conference being held in Dubai, Andrew George, Chairman of Marsh’s Global Energy Practice, commented:
By Vali Hawkins Mitchell
I live in Seattle. I listen to KOMO news daily. The helicopter traffic guys keep us moving every day. Their copter crashed this morning. 2 dead. More injured. And as I watch the news on KOMO TV…the news employees are reporting on the event with their emotions tucked back as deep as possible to do their jobs. This was their company, their co-workers, they saw the crash out their window. My heart goes out to them. And I am again humbled by the topic of my book about people in their companies having big emotions, disasters, and hoping they have a plan in place. Wishing I could run down there to help. Hoping they have a protocol in place for the immediate situation, knowing counselors will be volunteering, HR and EAP providers will be called in, and the day will progress to the next news story. The staff will be expected to just move forward. The costs will be numerated into small boxes in accounting books. There will be funerals. There will be memorials. I wonder if the person driving the car to work who was missed by the ball of flame by only a few feet will have a good day at work or go home and get drunk. I wonder. I care. Emotional preparations for unexpected incidents cannot be ignored. I am weary of trying to “sell a damn book” in order to get people to consider the long term ramifications of emotional impact on companies. But I know the costs, because I have done the math and provided companies the format to do so. I know the costs and emotionally charged long term influence of the IMPACT of this helicopter next to the Seattle Space needle will involve KOMO employees, locals, tourists who can’t go on the monorail today, first responders, and much more.
With planes disappearing, nations unhinged, and helicopters falling out of the sky, I admit I wish this morning that I didn’t know that each event was not only an emotional strain, but a fiscal devastation to all parties. Once again, I encourage you and your company to plan for the expected and the unexpected. Sigh.
"The Cost of Emotions in the Workplace" www.improvizion.com
Computerworld UK — Most companies are spurning the chance to improve their anti-fraud and anti-bribery efforts by not taking full advantage of big data analysis, according to research from business consulting firm EY.
EY found that 63 percent of senior executives surveyed at leading companies around the world agreed that they need to do more to improve their anti-fraud and anti-bribery procedures, including the use of forensic data analytics (FDA).
The survey polled more than 450 executives in 11 countries, including finance professionals, heads of internal auditing and executives in compliance and legal areas. They were asked about their use of FDA in anti-fraud and anti-bribery compliance programs.
CIO — Project managers are in short supply, and that will leave many organizations woefully disadvantaged as the economy rebounds, according to a recent study by project management training company ESI International.
The ESI 2013 Project Manager Salary and Development Survey, based on data from 1,800 project managers in 12 different industries across the U.S., reports that as projects continue to increase in complexity and size, many organizations find themselves both understaffed and with underdeveloped project management professionals. And that's putting them at a competitive disadvantage.
"Budget constraints, an aging base of professionals and a looming talent war all contribute to a talent crisis that should be addressed from the highest levels of the organization," says Mark Bashrum, vice president of corporate marketing and open enrollment at ESI International.
Big data frightens me sometimes. Seeing this headline from Information Week, “IBM: We'll Stand Up To NSA,” gave me heart palpitations.
It sounds noble and all, but really, when a large corporation is willing to stand up to the NSA over data about you or your company … is there any chance you’re winning? It reminds me of Tolkien’s “The Hobbit,” when the trolls are arguing over how to cook the dwarves. Roasting, boiling or jelly — it’s all the same to the dwarves in the end, right?
David J. Walton, a litigator who specializes in technology issues, took a look at how companies are really using Big Data. He’s an attorney, so this isn’t about business cases, ROI or any of that stuff — it’s about law, and when viewed through that lens, this is Brave New World stuff.
It has been a decade since VoIP became a standard telecommunications tool. Its age has not slowed the development of the technology, however. For instance, Twilio this week announced a VoIP advancement that it says could improve ease of use of enterprise-based systems.
According to GigaOm, Twilio will use an approach called Global Low Latency (GLL), which repurposes the approach used by the public switched telephone networks (PSTN) that VoIP is displacing.
A call on the PSTN offers great quality because a circuit is guaranteed. VoIP, to this point, has cut costs by sending packets via the best available path. Though cheaper, this approach introduces imperfections. Twilio’s idea is to limit the extremes of the traditional VoIP approach:
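The core idea behind latency-aware routing of this kind can be sketched in a few lines: measure round-trip latency from the caller to a set of candidate relay points and anchor the media path at the nearest one, rather than letting packets take whatever path happens to be available. The relay names and latency figures below are invented for illustration; this is not Twilio's actual GLL implementation.

```python
# Illustrative sketch of latency-aware relay selection: anchor the call
# at the candidate relay with the lowest measured round-trip latency.
# Relay names and measurements are made up for this example.

def pick_relay(latencies_ms: dict) -> str:
    """Choose the relay with the lowest measured round-trip latency."""
    return min(latencies_ms, key=latencies_ms.get)

measured = {"us-east": 48, "eu-west": 121, "ap-southeast": 204}
print(pick_relay(measured))  # -> us-east
```

In practice the measurements would be refreshed continuously and the choice re-evaluated per call, but the selection rule itself is just this minimum.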
CSO — "Data Lake" is a proprietary term. "We have built a series of big data platforms that enable clients to inject any type of data and to secure access to individual elements of data inside the platform. We call that architecture the data lake," says Peter Guerra, Principal, Booze, Allen, Hamilton. Yet, these methods are not exclusive to Booze, Allen, Hamilton.
"I have read what's available about it," says Dr. Stefan Deutscher, Principal, IT Practice, Boston Consulting Group, speaking of the data lake; "I don't see what's new. To me, it seems like re-vetting available security concepts with a name that is more appealing." Still, the approach is gaining exposure under that name.
In fact, enterprises are showing enough interest that vendors are slapping the moniker on competing solutions. Such is the case with the Capgemini / Pivotal collaboration on the "business data lake" where the vendors are using the name to highlight the differences between the offerings.
Shadow IT is a fact of life for nearly every IT department across the board. But does that mean it’s time to throw in the towel? Not exactly, but it does mean that things will have to change, both for users and managers of data infrastructure.
First, some numbers. According to CA Technologies, more than a third of IT spending is now heading to outside IT resources, and this is expected to climb to nearly half within three years. The figures are shocking, but keep two things in mind: First, they come from CA, which makes its living building systems that help organizations keep track of their data infrastructure, and second, they represent all outsourcing activity, not just what is termed “shadow IT.”
In this article Charlie Maclean-Bristol, a highly experienced business continuity consultant, lists ten areas where many business continuity plans can be improved. How does your plan stack up?
Charlie’s list is as follows:
1. Scope. On many of the business continuity plans that I see it is not clear what the scope of the plan is. The name of the department may be on the front of the plan but it is not always obvious whether this is the whole of the department, which may cover many sites, or just the department based in one location. It should also be clear within strategic and tactical plans what part of the organization the plan covers. Where large organizations have several entities and subsidiaries it should be clear whether the tactical and strategic plans cover these.
2. Invocation criteria. I believe it should be clear what sort of incidents should cause the business continuity plan to be invoked. I also believe that these invocation criteria should be ‘SMART’ (specific, measurable, attainable, realistic and timely), so as not to be open to misinterpretation. The criteria should be easy to understand, so that if you get a call at 3am to inform you of an incident it is obvious whether to invoke or not. Focus should be on the loss of an asset such as a building or an IT system, not on the cause of the loss. There also needs to be a ‘catch-all’ in the invocation criteria which says 'and anything else which could have a major impact on our operations’, so that the criteria are not too rigid if you need to invoke for an incident you have not yet thought of.
Costs and benefits of BCM: let us ask the right questions, not answer the wrong ones
By Matthias Rosenberg
The costs and benefits of BCM: I have dealt with this issue for almost 20 years now and it always goes back to one question: Why would a company invest in something that does not provide a contribution to revenue and that is meant to protect the company against something that hopefully never happens? This question is quite understandable from a business perspective and therefore justified as a basic question. Those who cannot give a plausible answer to this question will fall at the first hurdle. This issue is fundamental to our profession and at the same time underrepresented in the BCM literature. Even the Good Practice Guide (GPG) 2013 does not see the task of selling BCM as a central task of a BC manager; but in reality the sale and presentation of the business continuity topic are critical for our success.
Soft skills are as important in BCM as in any other management discipline.
Let me give you some examples: BCM professionals need strong presentation skills and they need strong training skills (e.g. to train BCM coordinators). These are specific skills that can be described. Analytical skills (e.g. to prepare BIA results for top management) and communication skills are equally important. In the end it is not enough to read another BCM standard, to take part in a training course or to buy BCM software and hope to run a BCM programme successfully. A BCM professional needs experience and one of the most important skills to implement a BCM programme successfully: patience.
By Jayne Howe
The costs associated with developing and implementing a business continuity program in your organization can vary greatly. Most of the cost variables are going to be dependent on two factors: what you already have in place and what components still need to be addressed; and whether your organization has internal business continuity expertise.
It’s likely that any organization successfully operating in this century will have at least a few basic components in place. They may be components that are necessary to be eligible for insurance coverage; to meet the criteria of regulatory bodies in your organization’s industry; or to comply with basic building fire codes. But even if you don’t have internal BC expertise, you don’t need to start with a blank piece of paper to try to configure the other components that are necessary for a complete and robust business continuity program.
Using a business continuity standard as a base guideline for your own internal development can assist in identifying those modules that are necessary to develop an all-inclusive and comprehensive BC program. This can be extremely helpful in preventing you from travelling down an incorrect or incomplete path, and therefore saving wasted resource time and costs.
Managing vulnerabilities in a business context.
By Paul Clark
Network security can be both an organization’s saviour and its nemesis: how often does security slow down the business? But security is something you can’t run away from. Today’s cyber-attacks have a direct impact on the bottom line, yet many organizations lack the visibility to manage risk from the perspective of the business.
Traditionally, network security revolves around scanning the servers for vulnerabilities, reviewing them and the risk to the server by drilling down through the reporting to assess how vulnerabilities could be exploited, and then looking at how those risks can be remediated. Looking at vulnerabilities in this technical context leaves a lot to be desired in terms of actual impact on the business.
These risks can be put into two groups. There is the security risk, which is about compromise. How can the network be compromised and what would happen if the vulnerability was exploited? What damage would be done, and what information could be lost? Assessing these types of risk is usually the domain of the information security team.
On March 13th BATS Global Markets (BATS), a leading operator of securities markets in the US and Europe, successfully conducted a full-scale business continuity test of its US equities exchanges BZX and BYX, and BATS Options. These operations were switched to BATS’ disaster recovery site and the company’s global headquarters was disconnected from all outside network access for the entire day.
All of BATS’ Kansas City-area employees reported to the disaster recovery site and conducted their daily routines from the secure and remote location. The BATS offices in New York City, Jersey City, and London continued normal operations.
Turn Around Don’t Drown
Turn Around Don’t Drown, or TADD for short, is a NOAA National Weather Service campaign used to educate people about the hazards of driving a vehicle or walking through flood waters.
This year is the 10th anniversary of the TADD program. Hundreds of signs depicting the message have been erected at low water crossings during the past decade. The phrase “Turn Around Don’t Drown” has become a catchphrase in the media, classroom, and even at home. It’s one thing to see or hear the phrase, and another to put it into practice.
Flooding is the second leading cause of weather-related fatalities in the U.S. (behind heat). On average, flooding claims the lives of 89 people each year. Most of these deaths occur in motor vehicles when people attempt to drive through flooded roadways. Many other lives are lost when people walk into flood waters. This happens because people underestimate the force and power of water, especially when it is moving. The good news is most flooding deaths are preventable with the right knowledge.
Just six inches of fast-moving water can knock over an adult. Only eighteen inches of flowing water can carry away most vehicles, including large SUVs. It is impossible to tell the exact depth of water covering a roadway or the condition of the road below the water. This is especially true at night when your vision is more limited. It is never safe to drive or walk through flood waters. Any time you come to a flooded road, walkway, or path, follow this simple rule: Turn Around Don’t Drown.
For more information on the TADD program, visit http://tadd.weather.gov
For flood safety tips, visit the newly redesigned website at www.floodsafety.noaa.gov or http://emergency.cdc.gov/disasters/floods/index.asp
Essentially the Non-Executive Director's role is to provide a creative contribution to the board by providing objective criticism. So I recommend that all Non-Executive Directors consider challenging the board to count the costs involved in deploying business continuity management and balancing these costs against quantifiable benefits gained from its Business Continuity Management System and Programme.
The Good Practice Guidelines suggest that embedding BCM is hard to measure, but secretly I believe that Executive Directors deep down in their hearts and minds know full well if they are merely trying to be compliant.
In the busy world of the Executive, maybe they only have time to ask if the business is adequately covered from a risk and business continuity perspective. Is it the difference between plausible deniability and culpable liability? To paraphrase a well-known political interviewer: “Did you know there was a problem, in which case you are culpable or did you genuinely not know in which case you were incompetent, which is it?”
Before I start I feel I should make two important points :
1) If you’re expecting a serious, academic blog containing a reasoned argument backed up by empirical evidence, you’ve come to the wrong place;
2) I was asked to write 500 words, which I understand is what proper bloggers do. I’ve exceeded that ever so slightly so if you have a short attention span, you might want to leave now.
Assuming you’re still with me…
Well, it’s Business Continuity Awareness Week, and as an initiative aimed at raising the profile and understanding of the whole discipline, from multiple perspectives, it is a fantastic idea. I wonder how many BC ‘professionals’ will be participating and become more aware. Really aware.
Being an educator is a privilege and to develop knowledge, capability and understanding along with confidence and perhaps earning power is a fantastic motivator. From the learning perspective, education should develop enthusiasm, knowledge and understanding in learners. To understand learners you need to understand what potentially limits their own capability; particularly when they are at the start of the higher education journey.
Here’s the thing from my perspective: there are a significant number of practitioners and consultants in the BC education system (not training – education!), but not enough. Of course, I would say that, wouldn’t I? But the point is this: of the thousands of practitioners, highly experienced perhaps, with professional memberships and in good positions in their businesses, not nearly enough make the time, effort or commitment to become educated in their profession.
CIO — Imagine you're working for a big financial services company and you stupidly left your BYOD smartphone on the seat in a commuter train, yet you're not really sure where you've misplaced it. So you search high and low, in your home and car, at restaurants and coffee shops.
In the back of your mind, you know that the company requires you to contact those robotic IT folks within 24 hours of losing your phone so that they can remotely wipe it, but you don't want that to happen. They'll delete precious notes that you need for a client, maybe even personal photos that you forgot to back up. Besides, you haven't searched everywhere yet.
You miss the 24-hour window, and the company promptly fires you.
Techworld — Most large organisations now make advance plans to bring in external security consultancies should they suffer data and security breaches, a new survey for Arbor Networks has found.
The Economist Intelligence Unit (EIU) study (registration required) of 360 global senior business executives, backed up by interviews with a dozen security executives, found that around two thirds of firms had formal incident response plans in place for serious security incidents, with the same number complementing this with a dedicated in-house response team.
Despite this apparent readiness, 80 percent of larger organisations had made advance arrangements with external experts, mainly in computer forensics, to supplement the initial response by an internal IT team.
The data snooping debate has quietened down a little recently, even if Edward Snowden’s name still crops up here and there. Whether or not the revelations about intelligence activities have changed much in terms of governmental attitude and behaviour remains to be seen. Pressure can still be applied to Internet, cloud and telecommunications service providers to provide data about users, and the only safe data encryption may be the one you do yourself. Indeed, increasingly large quantities of information are generated every day and are available for analysis by government agencies. But who decides what to do with all the data?
For decades, businesses have used ‘outsourcing’ (obtaining goods or services through a 3rd party, rather than from an internal source) as a means of reducing expenses, eliminating overhead and reducing risks.
As a Business Continuity professional, I’ve always been leery of the risk reduction angle. While outsourcing may shift the burden of risk onto the outsourced party, it doesn’t eliminate the consequences of the risk, should it occur. It’s easy to dismiss the potential impact of a disruption that occurs to an outsourced process, function or service. But – like every other risk – the internal ‘ripple effect’ can still be felt, even though the actual disruption happens to that 3rd party.
Most outsourcing contracts require that the 3rd party have a Business Continuity and/or IT Disaster Recovery Plan in place. Too often, that Plan’s existence is never verified. You should know how often it is updated and tested. You should get a copy and read it (even if you have to visit the 3rd party to view it). Perform your own audit: is the plan adequate when compared to your own BCM standards? If not, make suggestions for improvements, and follow-up to assure those improvements occur.
CIO — We're all familiar with the Target payment card breach late last year. Up to 110 million payment card numbers were stolen through a huge hole in the company's network, right down to the security of the PIN pads. The breach cost Target CIO Beth Jacob her job; it was, and still is, a serious matter.
Target is obviously a public company, so this situation garnered a lot of attention. As a CIO or member of the executive technical staff, though, there are some observations about the situation that can apply to your company.
Here are four key lessons from Target's very public example of a data breach.
Network World — When Microsoft stops supporting Windows XP next month businesses that have to comply with payment card industry (PCI) data security standards as well as healthcare and financial standards may find themselves out of compliance unless they call in some creative fixes, experts say.
Strictly interpreted, the PCI Security Standards Council requires that all software have the latest vendor-supplied security patches installed, so when Microsoft stops issuing security patches April 8, businesses processing credit cards on machines using XP should fall out of PCI compliance, says Dan Collins, president of 360advanced, which performs security audits for businesses.
But that black-and-white interpretation is tempered by provisions that allow for compensating controls: supplementary procedures and technology that help make up for whatever vulnerabilities an unsupported operating system introduces, he says.
Celebrating this St. Patrick’s Day, I’m reminded that luck has very little to do with being prepared to recover your systems in the event of an outage. In fact, one of the most important lessons from the 2014 Annual Report on the State of Disaster Recovery Preparedness from the Disaster Recovery Preparedness Council involves a commitment to taking action—and not accepting the status quo. Based on hundreds of responses from organizations worldwide, the Annual Report offers a few key suggestions for implementing DR best practices so that companies can be much better prepared to recover from outages or disasters.
You can download the report for free at http://drbenchmark.org/
Here are three of the Annual Report’s major recommendations:
- Build a DR plan for everything you need to recover, including applications, networks and document repositories, business services such as the entire order processing system, or even your entire site in the event of an outage or disaster. It’s an important exercise that will force you to prioritize your DR planning efforts
- Define Recovery Time Objectives (RTO) & Recovery Point Objectives (RPO) for critical applications. Without these important metrics, you cannot set proper expectations and assumptions from management, employees, and customers about your DR capabilities and how to improve them. You need to set the playing field before a disaster or outage happens. There is a free tool, for example, that can help you test your own Recovery Time Actuals or RTAs in VMware environments.
- Test critical applications as frequently as possible to validate they will recover within RTOs/RPOs. For DR preparedness to improve, companies around the world must begin to automate these processes and get beyond the high cost in time and money of verifying and testing their DR plans. If you don’t test, you simply can’t know what will happen.
As both intentional and accidental threats to IT systems continue to grow and accelerate, we at the DR Preparedness Council have dedicated our efforts to increasing awareness of the need for DR preparedness. At the same time, we will continue to identify and share best practices as they evolve so that we can help organizations worldwide feel more secure and confident about their own ability to recover systems when outages and disasters strike.
To get you started, you can do a few things right now to improve your own DR preparedness:
- Take time out to fill out the online benchmark survey to see how you are doing compared to others.
- Get a free trial of PHD Virtual ReliableDR and see how you can affordably test your recovery capabilities every week, every day or every hour if you want—a breakthrough in DR planning and preparedness.
To learn more, tune in to an online webinar on Wednesday, March 26 with The Disaster Recovery Journal. At this webinar, I’ll be giving a sneak preview of my presentation at Disaster Recovery Journal’s Spring World 2014, the industry’s largest business continuity conference and exhibition, taking place March 30 – April 2, 2014 in Orlando, Florida. By attending this webinar you will learn:
- The Findings of the 2013 DR Preparedness Survey
- What Downtime Costs in Real Dollar Terms
- Top Causes of Outages
- Best Practices from the Best Prepared Organizations
- How to Increase Your DR Preparedness
Emerging risks that risk managers expect to have the greatest impact on business in the coming years could be on the cusp of a changing of the guard, according to an annual survey released by the Society of Actuaries.
It found that the risk of cyber attacks and rapidly changing regulations are of growing concern to risk managers around the world, and may be slowly replacing the risk of oil price shock and other economic risks which were of major concern just six years ago.
Some 47 percent of risk managers saw cyber security as a significant emerging risk in 2013, up seven points from 40 percent in 2012.
Another Business Continuity Awareness Week has arrived.
In this time zone that tends to mean late nights if you want to catch the webinar program live, rather than on the replay. Replays are fine if you only want to passively consume the material, but if you also plan to ask questions and engage with the presenter then the live broadcast is the only option.
Network World — We sometimes focus more on the wireless side of the network when it comes to security because Wi-Fi has no physical fences. After all, a war-driver can detect your SSID and launch an attack while sitting out in the parking lot.
But in a world of insider threats, targeted attacks from outside, as well as hackers who use social engineering to gain physical access to corporate networks, the security of the wired portion of the network should also be top of mind.
So, here are some basic security precautions you can take for the wired side of the network, whether you're a small business or a large enterprise.
During Flood Safety Awareness Week, March 16 to 22, the National Oceanic and Atmospheric Administration (NOAA) and the Federal Emergency Management Agency (FEMA) are calling on individuals across the country to Be a Force of Nature: Take the Next Step by preparing for floods and encourage others to do the same.
Floods are the most common — and costliest — natural disaster in the nation, affecting every state and territory. A flood occurs somewhere in the United States or its territories nearly every day of the year. Flood Safety Awareness Week is an opportunity to learn about flood risk and take action to prepare your home and family.
"Many people needlessly pass away each year because they underestimate the risk of driving through a flooded roadway,” said Louis Uccellini, Ph.D., director of NOAA's National Weather Service. "Survive the storm: Turn Around Don't Drown at flooded roadways."
“Floods can happen anytime and anywhere,” said FEMA Administrator Craig Fugate. “Take steps now to make sure your family is prepared, including financial protection for your home or business through flood insurance. Find out how your community can take action in America’s PrepareAthon! with drills, group discussions and community exercises at www.ready.gov/prepare.”
Our flood safety awareness message is simple: know your risk, take action, and be an example. The best way to stay safe during a flood and recover quickly once the water recedes is to prepare for a variety of situations long before the water starts to rise.
• Know Your Risk: The first step to becoming weather-ready is to understand that flooding can happen anywhere and affect where you live and work, and how the weather could impact you and your family. Sign up for weather alerts and check the weather forecast regularly at weather.gov. Now is the time to be prepared by ensuring you have real-time access to flood warnings via mobile devices, weather radio and local media, and avoiding areas that are under these warnings. Visit ready.gov/alerts to learn about public safety alerts and visit floodsmart.gov to learn about your flood risk and flood insurance available.
• Take Action: Make sure you and your family members are prepared for floods. You may not be together when weather strikes, so plan how you will contact one another by developing your family communication plan. Flood insurance is also an important consideration: just a few inches of water inside a home can cost tens of thousands of dollars in damage that typically will not be covered by a standard homeowner’s insurance policy. Visit Ready.gov/prepare and NOAA to learn more actions you can take to be better prepared and important safety and weather information.
• Be an Example: Once you have taken action, tell family, friends, and co-workers to do the same. Technology today makes it easier than ever to be a good example and to share the steps you took to become weather-ready.
NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter and our other social media channels.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. http://www.ready.gov/
CIO — WASHINGTON — CEOs at some of the nation's leading tech companies see boundless potential for big data and smarter, integrated systems to address major social challenges in areas ranging from medicine to education to transportation — but at the same time, they worry that policymakers at home and abroad could stand in the way of that vision.
Top executives at firms such as Dell, IBM and Xerox gathered in the nation's capital this week under the auspices of the Technology CEO Council, bringing with them a message that the data economy is imperiled by concerns about security and privacy and protectionist policies that could limit the growth of cloud computing and balkanize the Internet.
"The biggest barriers I think that we see are not around the engineering. It's around regulation. It's around protectionism. It's around trust, or lack thereof. It's around policies and procedures," says Xerox Chairman and CEO Ursula Burns, who also chairs the CEO council.
CIO — CIOs who toiled in obscurity while keeping back-office operations running are about to be thrust into the spotlight and will be staring eyeball to eyeball with the all-important customer. At least, this is the main finding in an IBM survey of more than 1,600 CIOs.
"CIOs are increasingly being called upon to help their companies build new products and services and transform front-office capabilities," says Linda Ban, global C-suite study director at IBM's institute for business value, adding, "We're starting to see CIOs move outside of IT, maybe doing a stint in marketing."
ERP Gives Way to ROI
Times have certainly changed from the ERP days, when a CIO's career hung on awesome integration skills and deep knowledge of complex software. Today's CIO needs to be well-versed in newfangled technology areas that drive sales, such as mobile, which has become a touch point for reaching emerging digital customers.
By Jacquelyn Lickness
When a hospital in South Carolina spotted bats flying through its facility, officials sprang into action, launching an investigation to prevent a possible rabies outbreak. Because bats are commonly infected with the virus, any contact with the flying mammals is taken very seriously. The hospital quickly involved state public health officials, who then reached out to CDC to help investigate any possible exposure to the rabies virus.
Rabies is a disease typically acquired through the bite of a rabid animal, and can be deadly if the exposure (e.g., bite) is not recognized early enough. Across the globe there are more than 55,000 human deaths from rabies each year. However, in the U.S. human cases are extremely rare, with approximately two human deaths annually. Most exposures to the rabies virus in the U.S. occur through contact with animals that are commonly infected with the virus, including bats, raccoons, skunks, and foxes.
Participation in the response effort
The response effort in South Carolina is ongoing and has involved collaboration among hospital staff, state public health officials, and CDC rabies experts and volunteers. Because hundreds of patients and hospital staff might have come in contact with bats, it was important to assess each individual’s risk of exposure.
In this event, it was critical to understand any interaction with a bat. It is possible that bat bites can go unnoticed if the person is sleeping or sedated, thus placing a person at risk for rabies. As a result, the investigation team asked about certain activities such as bat handling and touching, heavy sleeping or sedation, and other medical history that may indicate exposure.
Rabies expert and CDC Epidemic Intelligence Service (EIS) Officer Dr. Neil Vora orchestrated a response that included the administration of hundreds of phone-based surveys to hospital patients and staff. This large-scale investigation was managed through the CDC Emergency Operations Center. EIS officers, veterinary and medical students, and public health students from nearby Emory University eagerly offered their support for the data-gathering activities. The Student Outbreak and Response Team (SORT), a public health organization from Emory University that assists in outbreak responses, organized a contingency of nearly 20 students to assist the efforts. In the span of four days, a total of 55 volunteers made 817 calls.
The investigation wasn’t just limited to patient questionnaires. Other activities included the distribution of letters and flyers to patients and visitors to warn of bat exposure, mapping and creation of a timeline of bat sightings, and testing of bats for rabies. A quick response was made possible through collaboration between the hospital, South Carolina public health officials, a local pest control company, and all participants at CDC.
Determining the extent of exposure
In total, 53 bats have been sighted in the hospital, of which 12 were tested and have results available, all of which were negative. That said, other bats in the colony that have not been tested could still have had rabies. After the removal of the bats and other interventions to prevent their re-entry, the bat sightings have decreased. As a result of the collaborative effort among CDC, the state public health department, and the affected hospital during this response, partnerships were strengthened and new public health tools and practices were developed. Most importantly, all involved continue taking measures to understand best practices in rabies prevention and treatment to ensure the safety of the public’s health.
DENVER – Flooding is the most common natural disaster in the United States. Recent years have seen more frequent severe weather events, like Hurricane Sandy, which ravaged the East Coast. The Federal Emergency Management Agency (FEMA) manages the National Flood Insurance Program (NFIP), whose flood insurance policies give millions of Americans their first line of defense against flooding. But those flood insurance policies are only one component of the program and just part of the protection NFIP provides to individuals and the American public at large.
For anyone to be able to purchase an NFIP policy, the only requirement is that they live in a participating community. A participating community can be a town or city or a larger jurisdiction like a township or county that includes unincorporated areas. It is up to the community to opt into the NFIP for the benefit of its citizens. When joining the program, the community agrees to assess flood risks and to establish floodplain management ordinances. In return for taking these actions, residents are able to purchase federally backed flood insurance policies.
One of the cornerstones of the NFIP is the flood mapping program. FEMA works with states and local communities to conduct studies on flood risks and develop maps that show the level of risk for that area, called a Flood Insurance Rate Map (FIRM). The FIRM provides useful information that can assist communities in planning development. The area that has the highest risk of flooding is the Special Flood Hazard Area (SFHA), commonly called the floodplain. The SFHA has a one percent chance of being flooded in any given year. Because of the greater risk, premiums for flood insurance policies for properties in the SFHA are greater than for those for properties outside of it.
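That one-percent annual figure understates the cumulative exposure. Assuming an independent one-percent chance each year (a simplification of how flood risk actually behaves), the odds of at least one such flood during a typical 30-year mortgage work out to roughly one in four:

```python
# Chance of at least one "100-year" flood over a 30-year mortgage,
# assuming independent 1%-per-year odds (the SFHA definition above).
annual_risk = 0.01
years = 30
at_least_one = 1 - (1 - annual_risk) ** years
print(f"{at_least_one:.0%}")  # → 26%
```

That 26 percent is why lenders generally require flood insurance for mortgaged properties in the SFHA.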
Equally important to knowing the risks of flooding is having a game plan to address those risks. This is the role of floodplain management. Local communities must comply with minimum national standards established by FEMA, but are free to develop stricter codes and ordinances should they choose to do so. Key elements of floodplain management include building codes for construction in the floodplain and limitations on development in high risk areas. Floodplain management is an ongoing process, with communities continually reassessing their needs as new data becomes available and the flood risk for areas may change.
The NFIP brings all levels of government together with insurers and private citizens to protect against the threat of flooding. Federally sponsored flood maps and locally developed floodplain regulations give property owners the picture of their risk and ensure building practices are in place to minimize that risk. As a property owner, purchasing a flood insurance policy is a measure you can take to further protect yourself. To find out more about your individual risk contact your local floodplain administrator. For more information on flood insurance policies or to find an agent, visit www.floodsmart.gov or call 1-800-427-2419.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
DENVER – There’s a hidden threat that strikes countless unprepared Americans each year – flooding. Unlike fire, wind, hail or most other perils, flood damage is not covered by a homeowners’ policy. An uninsured flood loss can undo a lifetime’s worth of effort and create a mountain of bills. Fortunately, a National Flood Insurance Program (NFIP) policy provides the defense against such losses and can ensure that a flood doesn’t bring financial ruin.
Flooding is an ever-present threat; it can happen at any time and in virtually any location. While certain areas may be more prone to flooding – especially those in coastal areas or riverine environments – history has shown that almost no place is immune to flooding. Flooding can have many causes: a quick heavy rainfall or rapid snowmelt can cause flash flooding, a blocked culvert or storm sewer drain can create flooding in a city neighborhood, or prolonged wet weather can swell streams and rivers. Even dry conditions can pose a threat, as minimal rainfall in wildfire burn areas or drought stricken regions can create flash flooding when soils are unable to absorb even slight precipitation.
Flood insurance is easy to get; the only requirement is that you live in a participating community (which might be a county or other jurisdiction for those living in unincorporated areas). That’s right; you don’t need to live in a floodplain to purchase a policy. In fact, if you live outside a floodplain you may be eligible for a preferred risk policy that has a much lower premium than a policy in a higher flood risk area. And in most cases you can purchase an NFIP policy with the insurance agent you already deal with for other insurance needs. When that isn’t possible, NFIP can put you in touch with another agent who can get you a flood insurance policy.
One key difference between an NFIP policy and other insurance policies is the 30-day waiting period before the policy goes into effect. But that doesn’t mean anyone should view a policy like a lottery ticket, something purchased only if flooding appears imminent. A policy should be viewed as protection against a continuing threat rather than a hedge against a singular event such as anticipated spring flooding or following a wildfire.
The average flood insurance premium nationwide is about $700 a year – less than $2 a day for financial protection from what could be devastating effects of a flood to one’s home or business. By purchasing a policy now, or keeping your existing policy, you have peace of mind. As with any insurance, be sure to talk with your agent about the specifics of your policy – how much coverage you need, coverage of contents as well as structure and any other questions you might have.
Find out more about your risk and flood insurance at www.floodsmart.gov. To purchase flood insurance or find an agent, call 1-800-427-2419.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
If you wear the CIO hat of a very large retail company, what could be worse than to have your site broken into and tens of millions of customers’ information records stolen and … right at the peak of the holiday season? Well, I suppose it could be worse if your organization had recently spent millions to buy the latest in security equipment and software and set up a large, 24×7 monitoring center halfway around the world to monitor the critical alerts from security software … and then when someone 12 time zones away did notice that the organization’s networks had been breached and sent a notice to their overlords in the US, nothing much happened for nearly three weeks while the bad guys were stealing millions of customers’ credit card information and passwords.
Of course, that could be a really big problem. In fact, it might get a CIO, along with a number of underlings, fired after having to testify on nationwide TV before Congress, and after launching a huge internal review to see what really happened and placing blame somewhere other than at the top. And all this might cause any company to lose hundreds of millions in sales and frighten away millions of loyal customers … and three months later it might be on the front cover of one of the US’s leading business journals (see “Missed Alarms and 40 Million Stolen Credit Card Numbers: How Target Blew It“).
Tomorrow, March 15, is enshrined as one of the most famous days of all time, the “Ides of March”. On this day in 44 BC, the “Dictator for Life” Julius Caesar was assassinated by a group of Roman noblemen who did not want Caesar alone to hold power in Rome. It was this event, however, that sealed the doom of the Roman Republic, as his adopted son Octavian first defeated the Republic’s supporters and then his rival Mark Antony, becoming the first Emperor of the new Roman Empire and taking the name Augustus.
One of the more interesting questions in any anti-corruption compliance regime is to what extent your policies and procedures might apply in your dealings with customers. Clearly, customers are third parties in the sales chain, but most compliance programs do not focus their efforts on customers. However, some businesses only want to engage with reputable and ethical counterparties, so some companies do put such an analysis into their compliance decision calculus.
However, companies in the US, UK and other countries who do not consider the corruption risk with a customer may need to rethink their position after the recent announcements made by Citigroup Inc. regarding its Mexico operations.
Network World — Making the leap to SDN? Don't jump in blind. It helps to know what software-defined networking is, first off, and then what it can do for you.
Then it's smart to know all the inner workings of an SDN controller, the differences between products offered by established vendors and start-ups, and whether open source and bare metal switching might be an option. Lastly, learn your own network -- will it even support SDN or require a wholesale rip-and-replace? -- and then learn from your peers about their experiences. Here's an 11-tip guide on how to prep for SDNs:
You would think Big Data would be important to any financial services firm, but it turns out, data integration and management are more pressing problems, particularly for buy-side companies, according to a recent FierceFinanceIT article.
Buy-side companies typically sell investment services such as private life insurance, hedge funds, equity funds, pension funds and mutual funds. Sell-side companies are registered members of the stock exchange and handle direct investments, often for the buy-side companies.
The article quotes executives from DataArt, which builds custom software solutions for financial services and other industries. DataArt executives say only a few buy-side companies are dabbling in Big Data as a way to learn more from social media data. Instead, the real focus for asset managers and midsize firms is preparing data for compliance reports.
CSO — It was 2010, and the featured speaker at the monthly ISSA meeting was Major Gen. Dale Meyerrose, VP of Harris information assurance at the time. Dale asked if we should teach being a responsible cyber citizen in our schools. Back then I had just started working in a large public school district that had never before had an information security analyst. I had lots to share about information security and lots more to learn about educating users in the business of education!
I think this is a very appropriate term. So how long have you been a responsible cyber citizen? Where did you learn to become one? We all learned how to drive a car, and hopefully we are responsible drivers; at least there is training and a test for drivers of automobiles. What about being a responsible cyber citizen? There is no official curriculum in our schools for it. Can you actually cause your country and yourself significant monetary losses, or worse, just by not being aware of the dangers that lurk on the internet? The point is that, over time, malware has become quite sophisticated: what started as a prank in the 1980s is now a multi-billion dollar cyber-crime industry.
In a new study, the Workplace Bullying Institute found that 27% of Americans have suffered abusive conduct at work and another 21% have witnessed it. Overall, 72% are aware that workplace bullying happens. Bullying was defined as either repeated mistreatment or “abusive conduct.” Only 4% of workers responded that they did not believe workplace bullying occurred.
The study found that 69% of the bullies were men and they targeted women 57% of the time. The 31% of bullies who are female, however, overwhelmingly bullied other women—68% compared to 32% who mistreated men in the workplace. Identifying the perpetrators also shed light on how corporate power dynamics play a role in abusive workplace behavior. The majority of bullying came from the top (56%), while only a third came from other coworkers.
The recent flooding episode has highlighted shortcomings in the UK government’s approach to risk events says Chairman of the Institute of Risk Management, Richard Anderson.
“The terrible flooding in Somerset and the Thames has brought into sharp focus the ‘fingers crossed’ and ‘touching wood’ approach to risk management strategy that is so often adopted by government. It is regrettable that this seems to be the default mechanism to approaching all manner of risks. It is an appalling state of affairs because we understand how to manage risk better now than we ever have in the past. Since the flooding we have seen lots of frenetic activity from government officials which is unproductive and the government would be better served by seeking the advice of the increasing cadre of expert risk professionals who are largely being ignored at the moment.
“Routine risk thinking tends to be handled at a very junior level in government. Much of it is no more than painting by numbers as committees consider whether a risk should be red, amber or green. Most risks are considered in isolation of other risks materialising. That is not what happens in real life: in real life as one thing hits, another does straight after, and another and another. The interdependence of multiple impact risks needs to be managed far more professionally.
The US National Institute of Standards and Technology (NIST) will host the first of six workshops devoted to developing a comprehensive, community-based disaster resilience framework, a national initiative carried out under the President's Climate Action Plan. The workshop will be held at the NIST laboratories in Gaithersburg, Md., on Monday, April 7, 2014.
Focusing on buildings and critical infrastructure, the planned framework will aid communities in efforts to protect people and property and to recover more rapidly from natural and man-made disasters. Hurricanes Katrina and Sandy, and other recent disasters, have highlighted the interconnected nature of buildings and infrastructure systems and their vulnerabilities.
The six workshops will focus on the roles that buildings and infrastructure systems play in ensuring community resilience. NIST will use workshop inputs as it drafts the disaster resilience framework. To be released for public comment in April 2015, the framework will establish overall performance goals; assess existing standards, codes, and practices; and identify gaps that must be addressed to bolster community resilience.
NIST seeks input from a broad array of stakeholders, including planners, designers, facility owners and users, government officials, utility owners, regulators, standards and model code developers, insurers, trade and professional associations, disaster response and recovery groups, and researchers.
All workshops will focus on resilience needs, which, in part, will reflect hazard risks common to geographic regions.
The NIST-hosted event will begin at 8 a.m. and is open to all interested parties. The registration fee for the inaugural workshop is $55. Space is limited. To learn more and to register, go to: www.nist.gov/el/building_materials/resilience/disreswksp.cfm.
Registration closes on March 31, 2014.
More information on the disaster resilience framework can be found at www.nist.gov/el/building_materials/resilience/framework.cfm
The UN Office for Disaster Risk Reduction (UNISDR) is working with IBM and AECOM to measure cities’ resilience to disasters.
The first output of the partnership is a Disaster Resilience Scorecard created for use by members of UNISDR’s ‘Making Cities Resilient’ campaign which has been running now for almost four years.
The scorecard is based on the Campaign’s Ten Essentials – UNISDR’s list of top priorities for building urban resilience to disasters — and has been developed by IBM and AECOM. A list of potential cities is being developed to test the scorecard and to support their disaster resilience planning.
The Disaster Resilience Scorecard reviews policy and planning, engineering, informational, organizational, financial, social and environmental aspects of disaster resilience. Each of the criteria has a measurement scale of 0 to 5, whereby 5 is regarded as ‘good practice.’
The scorecard will be available at no cost through UNISDR, AECOM or IBM.
Both IBM and AECOM are part of UNISDR’s Private Sector Advisory Group and the Making Cities Resilient Steering Committee.
CSO — Healthcare organizations see an expanding landscape of uncertainty that has raised concerns among security pros and points to the need for more thorough threat analyses, a study showed.
Risks posed by health insurance and information exchanges, employee negligence, cloud services and mobile device usage have dampened confidence in protecting patient data, the Fourth Annual Benchmark Study on Patient Privacy & Data Security found. The study, released Wednesday, was conducted by the Ponemon Institute and sponsored by data breach prevention company ID Experts.
Despite the concerns, the study showed progress on the security front. The average cost of data breaches for organizations represented in the study fell to $2 million over a two-year period, compared to $2.4 million in last year's report.
Data deduplication – the elimination of repeated data to save storage space and speed transmission over the network – sounds good, right? ‘Data deduping’ is currently in the spotlight as a technique to help organisations boost efficiency and save money, although it’s not new. PC utilities like WinZip have been compressing files for some time. The new angle is doing this systematically across vast swathes of data. By reducing the storage volume required, enterprises may be able to keep more data on disk or even in flash memory, rather than in tape archives. Vendor estimates indicate customers might store up to 30 terabytes of digital data in a physical space of just one terabyte.
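To make the idea concrete, here is a minimal sketch of block-level deduplication: split data into fixed-size blocks, fingerprint each block with SHA-256, and store each unique block only once. (Production deduplication systems typically use variable-size chunking and much larger blocks; the tiny 8-byte block here is purely for illustration.)

```python
import hashlib

def dedupe(data: bytes, block_size: int = 8):
    """Split data into fixed-size blocks, storing each unique block once.

    Returns (store, index): `store` maps a block's SHA-256 digest to its
    bytes; `index` is the ordered list of digests needed to rebuild `data`.
    """
    store = {}
    index = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks are stored only once
        index.append(digest)
    return store, index

def rehydrate(store, index):
    """Reassemble the original data from the digest index."""
    return b"".join(store[d] for d in index)

# 100 identical blocks collapse to a single stored copy.
data = b"abcdefgh" * 100
store, index = dedupe(data)
print(len(store), len(index))  # → 1 100
```

The savings come from exactly this collapse: highly repetitive data (backups, VM images, mail stores) needs only one stored copy per unique block plus a compact index, which is how vendors arrive at claims like 30 terabytes in one.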
In my previous post, I shared the ongoing debate about the most effective way to approach Big Data so that it will yield meaningful, useful and, hopefully, profitable findings.
The top two options are approaching data as an explorer versus Tom Davenport’s contention that you need to start with a hypothesis, which I translate as using a more scientific-method-based approach.
Explorer advocates say Big Data is too big for the typical reports-driven approach, and what’s worked for early adopters has been tinkering with the data to see what it reveals. Davenport and others contend that is a great way to waste time, spend money and create unhappy business leaders.
According to a recent Entrepreneur article, small businesses should find effective ways to analyze data in order to give their customers what they need without pushing too hard to gather more data from those same customers. Sounds simple, yet complex.
And when you also consider that data is increasing exponentially, and the way Big Data has been multiplying, it’s no wonder small to midsize businesses (SMBs) have become quite overwhelmed about how to collect, sort, and use Big Data in any effective manner.
But what SMBs need to realize is that the key to using data is “refinement.” In his Entrepreneur article, Suhail Doshi explains:
Archiving has always been one of those functions that pulls the enterprise in two different directions. Increased data volumes, of course, require more storage capacity, but as data sits in the archives for longer periods of time, it loses its value. So in the end, the enterprise must devote more resources to constantly diminishing assets.
Of course, this is the lifeblood of the archival management industry as numerous companies work up sophisticated algorithms and other tools to analyze data and then shift it from one set of resources to another based on its intrinsic value. The real purpose behind Big Data management, after all, is not to accommodate increasing volumes but to mine existing stores for gold and then store the rest at the lowest possible cost—or discard it altogether.
Naturally, part of this process requires the development of low-cost media, such as tape, which offers the benefit of stable, long-term storage for data that is accessed infrequently. Disk-based archiving is also gaining in popularity, although this is primarily in tiered solutions, considering the disk’s relatively weak long-term reliability.
Ever have one of those days? Blargggg. The good news is that it is normal and human and okay to be “off” a bit from time to time, which is a lot different than having an emotional spin event. In that context, since I’m having one of those blarggh days and don’t feel very creative, I thought I’d just share a few Emotional Continuity Management definitions today.
Emotional: All human feelings, those defined as positive and negative.
Spinning: Normal emotions that, for some reason, escalate and continue to develop an additional energy beyond the emotions of the original event. Emotional spinning occurs when a person, or several people, join forces with someone else to form a mutual or collective energy spin. The increasing collective emotional dynamic created by rampant, unmanaged, or poorly managed feelings.
Big Data is a bit of a problem for businesses. The fact is that data is growing enormously, both in its volume and importance. Also, we’ll soon see a big push on usable open data and its value. So, many organizations must move on Big Data.
Yet, I haven’t found a use case that will deliver for any and every company. McKinsey recently asked eight executives from companies with leading data analytics programs about their experiences. According to the McKinsey report, “[t]he reality of where and how data analytics can improve performance varies dramatically by company and industry.”
One problem may be that Big Data requires a paradigm shift in how businesses approach data. Typically, business is goal-oriented with data: You run a report because you need a specific set of data on a specific topic.
CIO — On the day of Facebook's IPO, a concurrency bug that lay hidden in the code used by Nasdaq suddenly reared its ugly head. A race condition prevented the delivery of order confirmations, so those orders were re-submitted repeatedly.
UBS, which backed the Facebook IPO, reportedly lost $350 million. The bug cost Nasdaq $10 million in SEC fines and more than $40 million in compensation claims — not to mention immeasurable reputational damage.
So why was this bug not discovered during testing? In fact, how did it never manifest itself at all before that fateful day in 2012?
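Bugs like this are notoriously hard to catch in testing because they only surface under particular thread timings. The sketch below is a hypothetical Python illustration (not Nasdaq's actual code): a sleep artificially widens the race window so that two threads doing an unsynchronized read-modify-write reliably lose an update, while the lock-protected version counts correctly.

```python
import threading
import time

confirmations = 0
lock = threading.Lock()

def record_unsafe():
    """Read-modify-write with no synchronization: both threads can
    read the same stale value, and one update is silently lost."""
    global confirmations
    current = confirmations      # read
    time.sleep(0.2)              # artificially widen the race window
    confirmations = current + 1  # write back a possibly stale result

def record_safe():
    """The same update guarded by a lock: updates serialize correctly."""
    global confirmations
    with lock:
        current = confirmations
        time.sleep(0.2)
        confirmations = current + 1

def run(worker):
    """Reset the counter, run two worker threads, return the final count."""
    global confirmations
    confirmations = 0
    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return confirmations

print(run(record_unsafe))  # usually 1: one confirmation is lost
print(run(record_safe))    # 2: both confirmations recorded
```

Without the artificial sleep, the unsafe version would pass almost every test run, which is exactly why such a bug can lie dormant until a high-load day like an IPO.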
IDG News Service (Brussels Bureau) — European politicians voted overwhelmingly on Wednesday in favor of new laws safeguarding citizens' data.
The new Data Protection Regulation was approved with 621 votes for, 10 against and 22 abstentions.
"The message the European Parliament is sending is unequivocal: This reform is a necessity, and now it is irreversible," said Justice Commissioner Viviane Reding, who first proposed the law.
"Strong data protection rules must be Europe's trade mark. Following the U.S. data spying scandals, data protection is more than ever a competitive advantage," she said in a statement.
[Updated Wednesday, March 12, 2014; new copy added at bottom.]
Rescue operations have been launched from a number of countries, combing the seas for Malaysia Airlines Flight 370.
All the talking heads are claiming the Boeing 777-200 went into the water.
But maybe not.
According to the TV talking heads, Flight 370 set off from Kuala Lumpur headed north-northeast toward its Beijing destination. But it diverted from its flight plan and turned westward, crossing over Malaysia or southern Thailand on a mostly westerly course, where it dropped off the radar and contact was lost.
This question is as old as Business Continuity Best Practices. But there is a logical answer that many organizations (and most BCM Auditors) fail to recognize.
That simple answer: No.
But this would be a very short blog if some explanation didn’t accompany that short answer. So let’s see if I can make the logic clear…
The chief purpose of a BIA is to gain an understanding of what’s important to the enterprise. An enterprise-wide BIA enables an organization to rank its Business Processes and IT Applications in order of criticality to the delivery of the organization’s Products and Services. That ranking enables the organization to prioritize which Processes and Applications – if impacted by a disruption – should be restored first (or which Recovery Plans should be activated first).
CSO — Given last year's revelations about the National Security Agency's (NSA) massive surveillance and data analytics conducted on Americans, along with continuing stories about local police scanning thousands of license plates per day, it might sound absurd to say that government lags behind the private sector in the use of Big Data analytics.
But those examples tend to be outliers among the nation's sprawling bureaucracies, especially at the state and local levels. In general, the private sector is well ahead of the public sector in the use of Big Data analytics, according to a recent report titled "Realizing the promise of Big Data," sponsored by the IBM Center for the Business of Government.
While the report's author, Kevin Desouza, an associate dean for research at Arizona State University, cited multiple examples of it being used in government, he found that the overall promise of Big Data analytics is largely unrealized so far in the public sector. He called it, "a new frontier" for government at all levels.
(CNN) -- New York police and fire officials were responding to reports of a massive explosion in Manhattan's East Harlem, authorities said Wednesday.
There were at least 11 minor injuries as clouds of dark smoke rose over the residential neighborhood of red-brick tenements, fire officials said.
Metro North commuter rail service, which runs along the site of the blast on Park Avenue, was suspended, officials said.
"Two buildings have collapsed. I hope there is no one in there. It's just rubble," a worker at a nearby flea market said.
Computerworld — Marketing executives salivate at the thought of being able to track shoppers via their mobile devices. The only problem: How to get consumers to sign on to that? MasterCard might have the answer. By spinning it as a global payment convenience, MasterCard has put a happy face on a major potential information grab.
Here's the deal. MasterCard and its partner Syniverse, a global mobile telecom firm, want you to opt in to let them track your mobile geolocation data. MasterCard says that cardholders who opt in and then travel to other countries will have fewer transactions denied. You see, cardholders are supposed to call their issuer before leaving the country so that their itineraries can be fed to the issuer's antifraud systems. When the cardholders don't do that, they are more likely to have their purchases denied.
So, says MasterCard, let's make this easier for everyone. Just register your phone with us, and then when a transaction request for you comes in from, say, Greece, our system will be able to check to see if your phone is in Greece too. If it is, the transaction is more likely to go through.
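At its core, the check MasterCard describes is a simple cross-reference of transaction location against phone location. A toy sketch of that decision logic (hypothetical function and country codes; real antifraud systems weigh many more signals):

```python
from typing import Optional

def geolocation_risk(transaction_country: str,
                     home_country: str,
                     phone_country: Optional[str]) -> str:
    """Return a coarse risk signal for a card transaction.

    phone_country is None when the cardholder has not opted in
    to geolocation sharing, so the check falls back to the old,
    more cautious behavior.
    """
    if transaction_country == home_country:
        return "low"        # domestic purchase, nothing unusual
    if phone_country is None:
        return "elevated"   # abroad, with no phone data to corroborate
    if phone_country == transaction_country:
        return "low"        # phone and card are in the same country
    return "high"           # card used abroad, but phone is elsewhere

# A U.S. cardholder buying in Greece, with their phone also in Greece:
print(geolocation_risk("GR", "US", "GR"))  # low
```

The opt-in matters because it converts the "abroad with no data" case, which tends to trigger declines, into a corroborated low-risk case.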
Techworld — The attack that planted malware on Target's point of sale (POS) terminals in November's huge data breach used inside knowledge of the network rather than a vulnerability in its retail software, McAfee has said in its latest quarterly analysis.
Snippets of information on the attack's engineering have been trickling out steadily since Target made the incident public in January, but this one suggests if not complexity then at least a degree of planning.
As has been widely discussed, the Target attack deployed the off-the-shelf BlackPOS, a generic but hugely popular toolkit used by criminals to capture data on retail computers connected to the card readers used by customers.
Hilary Estall takes a look at how organizations are faring with their BCMS audits and what, if any, trends are appearing.
ISO 22301 has been in circulation for nearly two years, but the uptake of third party certification remains at a steady crawl. Why is this? As with many other management system standards, there will be some organizations keen to be amongst the first to obtain certification and maximise the associated benefits; but for most, an external factor will be needed to influence the decision whether to seek formal certification. ISO 22301 is no different.
That said, a number of organizations have taken the initiative and now benefit from a business continuity management system (BCMS) which not only stands up to the scrutiny of an independent auditor (which, let’s face it, can vary in its worth) but, more importantly, offers assurance that should the worst happen, the business (or the part covered by business continuity arrangements) stands in good stead for riding out the storm.
So, what can we learn from those who have already dipped their corporate toes into the water, otherwise known as ISO 22301? This article draws on my personal experience both as an auditor (one of the tough ones!) and a BCMS consultant; and tries to get underneath what might be holding your BCMS back.
SWIFT has launched a new business continuity solution to support global payment systems. The Market Infrastructure Resiliency Service (MIRS) is a backup service for Real Time Gross Settlement (RTGS) systems - electronic platforms used for the continuous settlement of high value and multi-currency cash payments between banks.
Central banks and financial market infrastructures operate RTGS systems to ensure effective settlement of high value payment transactions. As a backup platform, MIRS provides a third line of support to RTGS operators experiencing problems with first and second line backup systems. Once active, MIRS provides the essential functions required to achieve final settlement in real-time on a transaction by transaction basis. Once MIRS is deployed, RTGS operators remain in full control of the service while SWIFT manages the technical operations.
Juliette Kennel, head of market infrastructures, SWIFT, says: "Given the prominent role that RTGS systems play in the world economy, it is vital to safeguard effectively against operational disruptions and manage related risks. MIRS provides market infrastructures with the necessary tools to maintain business as usual operations even in the very unlikely but high impact event that their first and second lines of defence were to fail."
Since July 2011, SWIFT has been working with a group of central banks, including the Bank of England, to identify the necessary requirements to enable RTGS functions to operate normally in the case of disruptions at their existing sites. At the end of 2013, the Bank of England completed a pilot and successfully tested MIRS with the Clearing House Automated Payment System (CHAPS) community. CHAPS is a UK payments scheme that processes and settles both systemically important and time-dependent payments in sterling. On 24 February 2014, the Bank of England went live with MIRS, further increasing the resiliency of the Bank's RTGS service, the UK's High Value Payments System.
Toby Davies, head of market services at the Bank of England, says: "With two live operational sites, our current RTGS systems are highly resilient. However, we wanted to establish an additional contingency solution that was both robust and cost effective. MIRS will allow us to continue operating at full business volumes in the unlikely event of an outage affecting both our existing sites simultaneously."
MIRS is available to all HVPS market infrastructures, including those not currently on SWIFT.
When it comes time to build a new data center or modernize an old one, the movement of applications from one set of systems to another can be a painful process that usually takes weeks to accomplish.
To address that particular challenge, Delphix launched the Delphix Modernization Engine, which automatically creates, manages and archives virtual copies of applications, databases and files.
The Delphix Modernization Engine is based on data virtualization technology that Delphix has been using to allow IT organizations to copy databases. Rick Caccia, Delphix’ vice president of strategy and marketing, says the company is now extending the reach of that technology to include applications and files. This helps reduce a process that once took weeks to complete down to a couple of days.
Organizations tend to develop far-reaching plans to describe their strategic ambitions, tactics, goals, milestones, and budgets. However, these plans in and of themselves do not create value. Instead, they merely describe the path and the prize. Value can be realized only through the unremitting, collective actions of the hundreds or thousands of employees who are ultimately responsible for designing, executing, and living with the changed environment.
Unless an organization successfully aligns its culture, values, people, and behaviors to encourage the desired results, failure is highly predictable.
This challenge becomes even more acute when considering transformation efforts that are enabled through the introduction of enterprise resource planning (ERP) or other technology-enabled solutions. As is frequently the case in these deployments, companies often pay a lot of attention to new processes and technologies. However, they limit their focus on the essential resource — people — and how they must work and behave in the “future state.” Though deployment success demands that employees adopt new business processes, ways of working, new behaviors, communication channels, software tools, and so on, many initiatives frequently focus the dominant portion of a change budget on how to operate the new tool and, as a consequence, underachieve or fail.
CIO — Today's businesses generate more data than ever before. Not coincidentally, IT has never been more critical to the success of a small business. Luckily, the per-gigabyte cost of hard disk drives and associated storage technologies has never been lower, while the advent of technology such as cloud storage offers even greater opportunities to do more with less.
For many small businesses, though, their backup and storage strategy hasn't caught up with their more pervasive use of computers. This could be due to confusion about the various storage options, or a failure to understand that the old paradigm of the occasional batch backup is no longer adequate.
A storage vendor representative will have you believe that their product offers the perfect backup hardware for your business. However, backup is more than hardware, and storage needs invariably differ from one organization to the next. This means a one-size-fits-all mentality is doomed to offer a mediocre fit in terms of either budget or functionality.
Think you need advanced computer skills to set up a phoney bank website and fool people into giving you their money? Think again. DIY phishing is now on offer in kit form. Someone who knows how to set up a personal website or even a Facebook page probably has the level of knowhow required to get started in fraud and identity theft. For business continuity, the threats are multiplied. Instead of having to deal (only) with specialised cybercriminals, organisations and their employees must now be wary of almost anyone and everyone. But is that such a bad thing?
IDG News Service (Boston Bureau) — Oracle is planning to make significant investments in its ERP software for higher education institutions, with an eye on keeping the installed base happy and fending off challenges from the likes of Workday.
A new Oracle Student Cloud service will be configurable to manage "a variety of traditional and non-traditional educational offerings," Oracle said. The first incarnation of the product will be released sometime in 2015 and will support student enrollment, payment and assessment.
In addition, Oracle will release new features for higher education in its HCM (human capital management) and ERP (enterprise resource planning) cloud services during this year and next, according to Monday's announcement. The capabilities will target areas such as union contracts and grant management, and will be tied into Oracle Student Cloud.
Everybody wants to explain technology in terms that business leaders can understand. Generally, that’s a good thing, but it can have a downside.
When you oversimplify the technology, it can help sell in the short term, but in the long run, it leads to unpleasant surprises, scope creep and skeptical business leaders.
That’s what seems to be happening with Big Data analytics, according to eight executives from companies heavily vested in data and analytics.
One of the problems with developing entirely new data architectures like the cloud is that no one has a clear idea of the end game. Just about everyone these days wants to be on the cloud, but we are still struggling to define what, exactly, “the cloud” is and how to implement it.
Indeed, the schism between the public and private camps is as strong as ever, with the former describing private clouds as nothing more than automated virtualization, while the latter describes over-reliance on public resources as a recipe for disaster. And if you prefer hybrids? Well, you must be completely hopeless.
Lately, however, some voices are raising the possibility of a compromise. Rather than a simple black-and-white view of the cloud, perhaps there could be numerous shades of gray.
Symantec Challenges Financial Services Security
In this age of the customer, there is nothing more important than the effective and safe operation of our financial system. Trillions of dollars move around the world because of a well-oiled financial services system. Most consumers take our financial services system for granted. They get paid, have the money direct deposited into their account, pay bills, use their ATM card to get cash, and put family valuables in the safety deposit box. The consumer’s assumption is that their cash, investments and valuables are safe.
Symantec’s 2014 CyberWar Games set out to prove or disprove these assumptions. Symantec’s cyberwar event is the brainchild of Samir Kapuria, a Symantec vice president within the Information Security Group. Symantec structures the event as a series of playoff events. Teams form and compete, earning points for creating and discovering exploits. Out of this process, the ten best teams travel to Symantec’s Mountain View, California headquarters to compete in the finals.
PEACH BOTTOM, Pa. — Stored near the twin nuclear reactors here, safely above the flood level of the Susquehanna River, is a gleaming new six-wheel pickup truck with a metal blade on the front that can plow away debris from an earthquake or other disaster. Attached to the back is a trailer that carries a giant diesel-powered pump that can deliver 500 gallons of water a minute.
If the operators at the Fukushima Daiichi plant in Japan had owned such equipment when the tsunami struck three years ago Tuesday, they might have staved off disaster, plant operators say.
Now, here at the Peach Bottom nuclear plant, which has the same design as Fukushima Daiichi, engineers and technicians are busy applying such lessons, preparing for a worst-case scenario even worse than the plant’s designers envisioned in the 1970s.
“After Fukushima, we have to ask, what if we were wrong?” said Michael Pacilio, Exelon’s chief nuclear officer, showing off the truck and other purchases.
This week I want to continue examining the good news coming out of the 2014 Annual Report on the State of Disaster Recovery Preparedness from the Disaster Recovery Preparedness Council. Based on hundreds of responses from organizations worldwide, the Annual Report offers several insights into the best practices of companies that are better prepared to recover from outages or disasters.
You can download the report for free at http://drbenchmark.org/
Specifically, I want to explore what organizations are doing to set specific DR metrics for RTOs and RPOs so they can measure and test their DR performance—and hopefully enhance their ability to manage recovery faster and more effectively.
Results from the survey indicate that more prepared organizations set specific DR metrics for RTOs and RPOs. These organizations, for example, define specific Recovery Time Objectives and Recovery Point Objectives for each of their mission critical business services such as Customer Orders, Finance, and Email communications.
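Defining metrics like these only pays off if you then measure against them. As a concrete illustration, here is a minimal sketch (hypothetical service names and targets, in Python) that checks measured DR test results against defined RTO/RPO objectives:

```python
# Defined objectives per business service: (RTO minutes, RPO minutes).
objectives = {
    "customer_orders": (60, 15),
    "finance":         (240, 60),
    "email":           (480, 120),
}

# Measured results from the latest DR test:
# (actual recovery time in minutes, actual data loss window in minutes).
test_results = {
    "customer_orders": (45, 20),
    "finance":         (300, 30),
    "email":           (120, 10),
}

def evaluate(objectives, results):
    """Compare each service's measured recovery against its objectives."""
    report = {}
    for service, (rto, rpo) in objectives.items():
        actual_rto, actual_rpo = results[service]
        report[service] = {
            "rto_met": actual_rto <= rto,
            "rpo_met": actual_rpo <= rpo,
        }
    return report

for service, status in evaluate(objectives, test_results).items():
    print(service, status)
```

A report like this makes the gap visible: a service can beat its RTO while still missing its RPO, and only an explicit per-service comparison will show that.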
Computerworld — IT executives at Splunk faced a challenge. They needed to provide training materials for employees who would be using a new security program. The $268 million San Francisco company makes an application that collects machine data on everything from servers to elevators and heating systems.
"A lot of our employees have Ph.D.s and are IT geniuses," says CIO Doug Harr. Rather than lay down the law with these folks about what they can and can't load on their desktop computers, IT gives them administrative powers and a few security guidelines. So when it was time to train users, Harr knew a run-of-the-mill how-to would be a bad idea. "We looked long and hard for training materials that would be acceptable to them," he says.
Asia Pacific (AP) organizations have historically been slower to outsource critical information security functions, largely due to concerns that letting external parties access internal networks and manage IT security operations exposes them to too much risk. They have also not fully understood the real business benefits of outsourcing partnerships from a security perspective. However, this trend has recently started to reverse. I have just published a report that outlines the key factors contributing to this change:
- Skill shortages are leading to higher risk exposure. Scarce internal security skills and a dearth of deep technical specialists in the labor pool are ongoing challenges for organizations around the world. This not only raises the cost of staffing and severely restricts efficiency, it may also increase the costs of security breaches by giving cybercriminals more time to carry out attacks undetected; at least one study indicates that the majority of reported breaches are not discovered for months or even years. The early adopters of managed security services in AP tell us that external service providers’ staff have more technical knowledge and skill than their internal employees.
DENVER – In the past six months, more than $284 million in federal funds has been provided to Coloradans as they recover from last September’s devastating floods.
More than $222 million has come in the form of disaster grants to individuals and families, flood insurance payments and low-interest loans to renters, homeowners and businesses. More than $62 million has been obligated to state and local governments’ response and recovery work.
At the same time, long-term recovery efforts are underway, staffed and funded by federal, state and local governments, and by volunteer agencies dedicated to helping those most in need.
The $284.9 million breaks down this way: (All figures are as of COB March 3, 2014.)
- $60,418,419 in FEMA grants to more than 16,000 individuals and families for emergency home repairs, repair or replacement of essential personal property, rental assistance, and help with medical, dental, legal and other disaster-related expenses;
- $98,750,000 in U.S. Small Business Administration low-interest disaster loans to more than 2,440 homeowners, renters and businesses;
- $63,641,332 in National Flood Insurance Program payments on 2,071 claims, and
- $62,055,973 in FEMA Public Assistance reimbursements to state and local governments for emergency response efforts, debris cleanup, repairs or rebuilding of roads, bridges and other infrastructure, and restoration of critical services.
“The flooding disrupted the lives of thousands, changed the course of streams, isolated mountain communities, and left major roadways impassable in many places,” said Tom McCool, federal coordinating officer for the disaster. “More than 1,200 men and women from FEMA were mobilized from all over the country to this disaster. We’re proud to be part of the team as Coloradans recover, rebuild and renew their lives.”
Over a five-day period last September, historic rainfall swept through the Front Range, with some areas receiving more than 17 inches of rain. The flooding killed 10 people, forced more than 18,000 from their homes and destroyed 1,882 structures, damaging at least 16,000 others. Some of the hardest hit communities included Jamestown, Lyons, Longmont, Glen Haven, Estes Park and Evans.
At the request of Gov. John Hickenlooper, President Obama signed a major disaster declaration for Colorado on Sept. 14, 2013.
The 11 counties designated for Individual Assistance under the major disaster declaration are Adams, Arapahoe, Boulder, Clear Creek, El Paso, Fremont, Jefferson, Larimer, Logan, Morgan and Weld.
The 18 counties designated for Public Assistance are Adams, Arapahoe, Boulder, Clear Creek, Crowley, Denver, El Paso, Fremont, Gilpin, Jefferson, Lake, Larimer, Lincoln, Logan, Morgan, Sedgwick, Washington and Weld.
Other federal recovery activities and programs include:
- Approximately 50 percent of Public Assistance permanent repair work and nearly 65 percent of large (more than $67,500) Public Assistance projects contain mitigation measures to lessen the impact of similar disasters on publicly owned infrastructure. These mitigation measures have been approved for 123 projects with a cost of $3,439,200.
- FEMA hazard mitigation specialists have provided county and local officials with technical assistance and reviews of existing flood control measures and challenges, helping revise hazard mitigation plans, and providing advice and counsel on numerous mitigation and flood insurance issues.
- FEMA flood insurance inspectors assisted county officials in assessing substantial damage at identified sites.
- National Flood Insurance Program specialists as well as the state NFIP coordinator and state mapping coordinator met with the City of Evans to discuss floodplain management and the city’s recent adoption of the Weld County preliminary maps. The State and FEMA will continue to work with city officials by providing additional training and technical assistance to support their floodplain management program.
Disaster Case Management Program
- FEMA has awarded a Disaster Case Management Grant of $2,667,963 to the State of Colorado. Under this state-administered program, case managers will meet one-on-one with survivors to assess unmet disaster-related needs that have not been covered by other resources.
Disaster Unemployment Assistance
- $302,795 has been disbursed to 151 applicants in this federally funded, state-administered program.
Crisis Counseling Grant Program
- Colorado Spirit crisis counselors have talked directly with 18,178 people and provided referrals and other helpful information to more than 88,000. Counselors met with nearly 1,200 individuals or families in their homes. The counselors are continuing door-to-door services and community outreach counseling programs. In mid-March, the longer-term Crisis Counseling Regular Services Program grant will be awarded to the State to continue the program.
- The grant will provide an additional nine months of crisis counseling outreach services to survivors.
- At the height of the disaster there were 53 agencies that ultimately provided a total of 275,784 volunteer hours. Survivors received shelter, food, water and snacks, as well as muck-out and debris removal services.
- Long Term Recovery Groups have been established in Larimer, Weld and Boulder counties, and Longmont and Lyons.
- El Paso and Fremont counties are offering case management through El Paso County Voluntary Organizations Active in Disasters.
Disaster Legal Services Program
- Through the Colorado Bar Association/American Bar Association program, 284 State Bar-Licensed volunteer attorneys assisted 619 survivors with disaster-related legal issues. The program completed operations at the end of February.
Federal Disaster Recovery Coordination
- The Federal Disaster Recovery Coordination group has brought together federal and state subject-matter experts to advise local and state decision-makers on the best methods to achieve an effective recovery. The FDRC focuses on how best to restore, redevelop and revitalize the health, social, economic, natural and environmental fabric of the community.
- The group’s recently released Mission Scoping Assessment lists recovery-related impacts and the breadth of support needed, as well as evaluates gaps between recovery needs and capabilities. Its soon-to-be-released Recovery Support Strategies document outlines state recovery priorities and discusses how federal agencies can support those efforts.
- The State of Colorado, FDRC and other federal agencies are:
- assisting Lyons and Jamestown with long-term community planning and recovery organization;
- facilitating a survey to gauge impacts of flooding on business communities;
- helping identify housing options for disaster survivors, and
- helping local governments identify stream channel choke points so local communities can prioritize limited hazard reduction in streams.
- By clicking the “like” button on the COEmergency Facebook page, Coloradans can get detailed posts with useful information and photos. The Colorado Division of Homeland Security and Emergency Management’s (DHSEM) Twitter account COEmergency has more than 23,000 followers and offers disaster recovery information, links to news products and other information that disaster survivors may still find useful.
- More than 1,000 tweets have provided response and recovery information. Since the September floods began, more than 1,200 new participants have started following FEMA Region 8.
As today’s business environment requires greater levels of business continuity than ever before, a new survey commissioned by Avaya demonstrates that traditional network vulnerabilities are causing more business impact than most realize, resulting in revenue and job losses.
The survey of mid-to-large companies in the United States, Canada, and United Kingdom found that 82 percent of those surveyed experienced some type of network downtime caused by IT personnel making errors when configuring changes to the core of the network. In fact, the survey found that one-fifth of all network downtime in 2013 was caused by core errors. Even more troubling is the fact that 80 percent of companies experiencing downtime from core errors in 2013 lost revenue, with the average company losing $140,003 per incident. The financial sector lost an average of $540,358 per incident.
The resulting impact on a career can be significant: 1 in 5 companies fired an IT employee when a network downtime incident occurred. The figure was more dramatic in some industries: respondents said that 1 in 3 companies in the natural resources, utilities & telecoms sector sacked IT staff due to downtime caused by change errors.
Avaya surveyed 210 IT professionals in large organizations (250+ employees) within the United States, Canada and United Kingdom to understand how much revenue was lost in total as a result of all the downtime incidents caused by core network changes in 2013. The surveys were completed in January 2014 in coordination with Dynamic Markets (UK).
Disasters both natural and human-caused can damage or destroy data and communications networks. Presentations at the 2014 OFC Conference and Exposition, being held March 9th-13th in San Francisco, Calif., USA, will offer new information on strategies that can mitigate the impacts of these disasters:
New algorithm finds safe refuge for cloud data
Much of our computing these days, from browsing websites and watching online videos to checking email and following social networks, relies on the cloud. The cloud lives in data centers and disasters such as earthquakes, tornadoes, or even terrorist attacks, can damage the data centers and the communication links between them, causing massive losses in data and costly disruptions.
To mitigate such potential damage, researchers from the University of California, Davis (UC Davis), Sakarya University in Turkey, and Politecnico di Milano in Italy first analyzed the risk that a disaster may pose to a communications network, based on the possible damage to a data center or to the links that connect it to users. Then, they created an algorithm that keeps data safe by moving or copying it from data centers in peril to more secure locations away from the disaster. The algorithm assesses the risk of damage and users' demands on the network to determine, in real time, which locations would provide the safest refuge from a disaster.
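The researchers' actual algorithm is not reproduced in the announcement, but the core idea can be sketched as a greedy, risk-aware placement: evacuate the most exposed data first, to the safest reachable center that still has capacity. Everything below (center names, the 0.5 risk threshold, the data model) is an illustrative assumption, not the published method:

```python
# Illustrative sketch only -- not the researchers' published algorithm.
# Greedy evacuation: for each dataset in an at-risk center, copy it to the
# lowest-risk center that still has room, handling the most exposed data first.

def evacuate(datasets, centers, risk, capacity, threshold=0.5):
    """datasets: {name: (size, current_center)}
    centers: list of data center names
    risk: {center: disaster-risk score in [0, 1]}
    capacity: {center: free capacity}
    Returns an evacuation plan {dataset: target_center}."""
    plan = {}
    # Handle datasets whose home center is most at risk first.
    for name, (size, home) in sorted(
            datasets.items(), key=lambda kv: -risk[kv[1][1]]):
        if risk[home] < threshold:
            continue  # home center is considered safe; leave data in place
        # Candidate refuges: safer centers with room, safest first.
        for target in sorted(centers, key=lambda c: risk[c]):
            if target != home and risk[target] < threshold and capacity[target] >= size:
                capacity[target] -= size
                plan[name] = target
                break
    return plan

datasets = {"logs": (40, "dc_west"), "billing": (10, "dc_west"), "cache": (5, "dc_east")}
risk = {"dc_west": 0.9, "dc_east": 0.2, "dc_north": 0.1}
capacity = {"dc_west": 0, "dc_east": 30, "dc_north": 50}
print(evacuate(datasets, ["dc_west", "dc_east", "dc_north"], risk, capacity))
# → {'logs': 'dc_north', 'billing': 'dc_north'}
```

A real system would add the network and bandwidth constraints the paper models; this sketch only captures the "safest refuge with capacity" decision.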
CIO — The adoption of virtualization in recent years has laid the groundwork for many IT organizations to move from on-premise data centers to co-located environments and the cloud, says Craig Wright, principal at IT and outsourcing consultancy Pace Harmon. The increased acceptance of high-density platforms that require much smaller physical locations encourages portability as well.
Cloud implementation continues to grow, whether public cloud for standardized situations or private clouds for solutions that are differentiating or have increased security or regulatory requirements. That's driving more focus on orchestrating and aggregating infrastructure services, Wright says.
And automation is starting to shake things up with the promise of the software-defined data center. "In this scenario, everything in the data center is virtualized -- applications, databases, networks -- and an automation layer extends across all virtualization layers to create a unified platform," says Wright. This emerging approach requires a high level of virtualization maturity and orchestration sophistication to put all the pieces together efficiently.
CIO — This week's HP Industry Analyst Summit is the company's first company-wide, analyst-only event. That means it sets the bar the others will attempt to beat this year.
Five areas define a good analyst event:
- Executive preparation: Did they take the event seriously?
- Demonstrated loyalty and collaboration: Is this a company — or a bunch of combatants?
- Dogfooding: Does the firm use its own products?
- Customers as vendor advocates: Does the customer have a voice?
- Entertainment: Are the analysts in the crowd checking email?
Analysts are a leveraged resource. If excited, they drive business to the vendor. If not, this value won't materialize. If alienated, they drive business from the vendor at a multiple based on the number of IT buyers or investors they touch.
IDG News Service (Tokyo Bureau) — The market for external disk storage systems has recovered from a slump, with factory revenues up 2.4 percent to US$6.9 billion in the fourth quarter of 2013, according to an IDC study.
Internal plus external disk storage systems produced $8.8 billion in revenue, up 1.3 percent from the last quarter of 2012 and jumping 17.2 percent from 2013's third quarter, which was seasonally slow.
IDC defines a disk storage system as a set of storage elements either inside or outside a server, including controllers, cables and (in some instances) host bus adapters, associated with three or more disks. It said total capacity of such systems shipped in the fourth quarter topped 10.2 exabytes (10.2 billion gigabytes), an increase of 26.2 percent from a year before.
Remember last year, when we were all talking about the coming “data tsunami?” Heck, even CNBC wrote about it.
The data tsunami metaphor has always struck me as odd, particularly after 2011’s very real tsunami devastated parts of Japan. I gathered that it meant big, but it was hard to envision data creating tsunami-level chaos and destruction. I’m starting to rethink that.
Gartner recently came out with the rather startling statement that 33 percent of Fortune 100 organizations will face an information management crisis within the next three years. Think about that: a third of the top companies in the United States manage information so poorly that they soon won't be able to value, govern or even trust their own information.
Although most would think that project management is bound by specific rules and technologies to get the job done, at least one person sees how creativity can bring about innovation and assist in overcoming obstacles that crop up while managing projects.
Author Ralph L. Kliem’s book Creative, Efficient, and Effective Project Management reveals the benefits of injecting creativity into the project management realm. The type of project management detailed in this book applies to creatively driven companies and other companies that rely on innovation and agility to achieve product success.
Kliem breaks the book into sections that include:
- Benefits of Creativity
- Opening Minds
- Misperceptions about Creativity
- Downsides of Creativity
- What Is the Relationship between Creativity and Projects
In our IT Downloads section, you can read an excerpt from this book, Chapter 7, Creativity Life Cycle Models. In this chapter, Kliem discusses the various models that can be used alongside traditional project management techniques and tools to achieve a more effective method of management. According to the author:
IDG News Service (New York Bureau) — Organizations can now add machine-generated data to their palette of information sources that can be aggregated and analyzed, thanks to a new connector jointly developed by Tableau Software, a provider of business intelligence software, and Splunk, which sells a log-file search engine.
"You can do data mashups between marketing data from structured systems and machine data that comes from the actual interactions, and get insights on product analytics or customer experience," said Tapan Bhatt, Splunk vice president of business analytics.
Splunk Enterprise software gathers data from server and other device log files, which can hold copious amounts of information about what visitors do when they visit a Web page, or use a connected mobile application. Such data can be used to better understand how people are using these products, information that can aid in marketing efforts or to refine site design or operations.
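To make the "machine data" idea concrete, here is a deliberately tiny illustration of the kind of analysis described: counting page visits from web server log lines. Splunk and the Tableau connector do this at vastly larger scale with richer search languages; the log format and field names here are assumed for the example:

```python
# Toy illustration of log-file analytics -- not Splunk's or Tableau's API.
# Count requests per page path from common-format access log lines.

import re
from collections import Counter

LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP')

def page_counts(lines):
    """Return a Counter of request paths seen in the given log lines."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            counts[m.group("path")] += 1
    return counts

sample = [
    '10.0.0.1 - - [01/Mar/2014] "GET /pricing HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Mar/2014] "GET /pricing HTTP/1.1" 200 512',
    '10.0.0.3 - - [01/Mar/2014] "POST /signup HTTP/1.1" 302 128',
]
print(page_counts(sample))  # Counter({'/pricing': 2, '/signup': 1})
```

Joining counts like these against structured marketing data is exactly the "mashup" Bhatt describes.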
PC World — Cloud storage services such as Dropbox, Google Drive, and SugarSync are convenient and efficient, but notoriously insecure. Files are rarely encrypted, data transfer is typically not protected, and the companies themselves are usually able to access your files (even if they state they won't, they may be legally compelled to do so).
Documents such as business plans or other sensitive files (say, a copy of your birth certificate) should be protected. You can utilize a special, ultra-secure provider such as Wuala or Tresorit, or you can encrypt files yourself before uploading them to larger storage services, such as Dropbox.
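The encrypt-it-yourself approach can be as simple as a small script run before anything lands in the synced folder. The sketch below assumes the third-party `cryptography` package (`pip install cryptography`) and uses its Fernet recipe (AES-based authenticated encryption); filenames are placeholders:

```python
# Sketch: encrypt a file locally before uploading it to a cloud-sync folder.
# Requires the third-party `cryptography` package; filenames are placeholders.

from cryptography.fernet import Fernet

def encrypt_file(src, dst, key):
    """Read src, write its encrypted form to dst (the file you upload)."""
    with open(src, "rb") as f:
        data = f.read()
    with open(dst, "wb") as f:
        f.write(Fernet(key).encrypt(data))

def decrypt_file(src, dst, key):
    """Recover the plaintext from an encrypted file pulled back from the cloud."""
    with open(src, "rb") as f:
        token = f.read()
    with open(dst, "wb") as f:
        f.write(Fernet(key).decrypt(token))

# Generate once and store somewhere safe -- NOT in the cloud-synced folder.
key = Fernet.generate_key()
```

Only the encrypted copy ever leaves your machine; without the key, the provider (or anyone who compels it) holds ciphertext.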
Data doesn’t usually start flowing in one direction or another of its own accord; some action needs to be taken that enables that movement of data to occur.
With the immutable law of physics in mind, iboss Network Security has created a Secure Web Gateway that makes use of behavioral analytics to identify anomalies in the normal flow of data traffic in the enterprise that would signal that a particular system or application has been compromised.
Company CEO Paul Martini says that while trying to prevent all security breaches is next to impossible, limiting the amount of damage they cause needs to be a top IT priority. All too often, breaches are not discovered for months and yet, when they are discovered, it’s more than apparent that sensitive data was flowing between systems and applications in a way that was clearly abnormal.
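The behavioral-baseline idea can be illustrated in a few lines: learn what "normal" outbound volume looks like per host, then flag hosts that suddenly deviate far beyond it. This is a toy statistical version, not iboss's actual analytics; the host names, sample data and three-sigma cutoff are all assumptions:

```python
# Toy behavioral-anomaly check -- not any vendor's actual model.
# Flag hosts whose outbound bytes in the latest interval exceed their
# historical mean by more than `sigmas` standard deviations.

from statistics import mean, stdev

def anomalous_hosts(history, current, sigmas=3.0):
    """history: {host: [bytes per past interval, ...]} (needs >= 2 samples)
    current: {host: bytes in the latest interval}"""
    flagged = []
    for host, samples in history.items():
        mu, sd = mean(samples), stdev(samples)
        if current.get(host, 0) > mu + sigmas * sd:
            flagged.append(host)
    return flagged

history = {
    "db01":  [120, 130, 125, 118, 122],   # KB out per interval, historically
    "web01": [900, 950, 920, 910, 940],
}
current = {"db01": 5_000, "web01": 955}   # db01 suddenly pushing 5 MB out
print(anomalous_hosts(history, current))  # ['db01']
```

The point Martini makes is precisely this: exfiltration that looks wildly abnormal against a baseline should surface in hours, not months.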
Once an issue is discovered, it usually doesn’t take the average IT organization very long to resolve that particular problem. What can take forever, though, is actually discovering the real source of the problem.
Given all the interdependencies that exist between the components of an IT ecosystem, the root cause of a particular issue is usually not immediately apparent. To help IT organizations discover the true source of an IT problem, Boundary has updated its IT operations monitoring software that is available as a service in the cloud.
Scott Fingerhut, vice president of marketing for Boundary, says the upgrades to the monitoring software not only help reduce the number of IT outages, but also can shorten the mean time to discovery of a core issue by identifying “patient zero” as the actual source of a problem.
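Boundary's implementation is proprietary, but the "patient zero" idea can be sketched: given which components raised alerts and when, plus a map of who depends on whom, the likely origin is an alerting component whose own dependencies are all healthy. The component names, timestamps and dependency map below are invented for illustration:

```python
# Simplified "patient zero" hunt -- an illustration of the concept, not
# Boundary's algorithm. Among alerting components, pick one that inherits
# its failure from no other alerting component (i.e., is most upstream).

def patient_zero(alerts, depends_on):
    """alerts: {component: alert timestamp}
    depends_on: {component: set of components it depends on}"""
    candidates = []
    for comp in alerts:
        upstream = depends_on.get(comp, set())
        # If none of this component's dependencies alerted, the fault
        # likely originated here rather than being inherited downstream.
        if not (upstream & alerts.keys()):
            candidates.append(comp)
    # Break ties with the earliest alert time.
    return min(candidates, key=lambda c: alerts[c]) if candidates else None

alerts = {"web": 105, "api": 102, "db": 100}          # db alerted first
depends_on = {"web": {"api"}, "api": {"db"}, "db": set()}
print(patient_zero(alerts, depends_on))  # db
```

Here "web" and "api" are only symptoms: both sit downstream of "db", which alerted first and depends on nothing that alerted.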
CSO — In today's network environments, malware that evades legacy defenses is pervasive, with communication and activity occurring up to once every three minutes. Unfortunately, most of this activity is inconsequential to the business. You would think that would be good news, right? The problem is that incident responders have no good way of distinguishing inconsequential malware from (potentially) highly damaging malware. As a result, they spend far too much time and resources chasing red herrings while truly malicious activity slips past.
Add into the mix the sleepless nights that result from compulsive viewing of malware alert dashboards showing hundreds to thousands of malicious activity alerts. With a daunting list of malware to analyze and only so many hours in the day, it's no huge surprise that headline-making breaches are increasingly becoming the norm.
The reality is that advanced malware defense is a complex undertaking, one that requires not only the ability to detect malware, which in complex network environments is already difficult, but also the ability to prioritize action where it will have the best security outcome. Reducing the lifecycle of an active attack by even a few days can reduce the economic impact of an attack by millions.
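One common way to prioritize (no vendor's actual formula, just an illustrative triage scheme) is to weight each alert's detection severity by the criticality of the asset involved, so a quiet alert on a crown-jewel system outranks a noisy one on a kiosk. All alert IDs, asset names and weights below are assumptions:

```python
# Illustrative alert triage -- not any product's real scoring formula.
# Rank malware alerts by severity weighted by asset criticality, so
# responders work the highest-business-impact alerts first.

def triage(alerts, asset_criticality):
    """alerts: list of (alert_id, asset, severity 0-10)
    asset_criticality: {asset: weight}, defaulting to 1.0
    Returns alert IDs ordered from most to least urgent."""
    scored = [(sev * asset_criticality.get(asset, 1.0), aid)
              for aid, asset, sev in alerts]
    return [aid for score, aid in sorted(scored, reverse=True)]

alerts = [
    ("A-17", "marketing-kiosk", 9),    # noisy adware on a low-value box
    ("A-18", "payments-db", 4),        # quieter alert on a crown-jewel asset
    ("A-19", "laptop-042", 6),
]
criticality = {"payments-db": 5.0, "marketing-kiosk": 0.5}
print(triage(alerts, criticality))  # ['A-18', 'A-19', 'A-17']
```

Note that the payments-database alert wins despite its lower raw severity; that is the whole argument for prioritizing by outcome rather than by alert volume.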
Today is the anniversary of one of the most historic days in the history of the great state of Texas: the date of the fall of the Alamo. While March 2, Texas Independence Day, when Texas declared its independence from Mexico, and April 21, San Jacinto Day, when Texas won that independence, probably both have more lasting significance, if there is one word Texas is known for around the world, it is the Alamo. The Alamo was a crumbling Catholic mission in San Antonio where 189 men held out for 13 days against the Mexican army of General Santa Anna, which numbered approximately 1,800. But on this date in 1836, Santa Anna unleashed his forces, which overran the mission and killed all the fighting men. Those who did not die in the attack were executed, and the bodies of the dead were unceremoniously burned. Proving he was not without chivalry, Santa Anna spared the lives of the Alamo’s women, children and their slaves. But for Texans across the globe, this is our day to Remember the Alamo.
While Thermopylae will always go down as the greatest ‘Last Stand’ battle in history, the Alamo is right up there in contention for number two. As with all such battles, sometimes the myth becomes the legend and the legend becomes the reality. At Thermopylae, the myth is that 300 Spartans stood alone against the entire 10,000-man Persian army. However, there was also a force of 700 Thespians (not actors, but citizens of the city-state of Thespiae) and a contingent of 400 Thebans who fought and died alongside the 300 Spartans. Somehow, their sacrifice has been lost to history.
Not everybody chooses the cloud as the first option for backing up data. Despite the advantages of practically limitless storage area, pay-as-you-go pricing and resilience, a weak point for the cloud is the network speed for uploading or downloading all those gigabytes (terabytes, petabytes…). The alternative for organisations is to put their own solution in place, something that will let them blast large amounts of data backwards and forwards at high speed. In the old days of IT, an IT team would have been tasked with assembling the requisite components and tweaking them to make them work properly together. But now IT vendors have spotted the need and produced the PBBA (purpose-built backup appliance), a solution whose popularity is growing steadily.
CHICAGO – The U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) today released $707,507 in Hazard Mitigation Grant Program (HMGP) funds to the City of Carmi, Ill., for the acquisition and demolition of 22 residential structures and the purchase of seven flood-prone vacant lots located in the Little Wabash River floodplain. Following demolition, these properties will be maintained as permanent open space in the community.
“The Hazard Mitigation Grant Program enables communities to implement critical mitigation measures to reduce the risk of loss of life and property,” said FEMA Region V Administrator Andrew Velasquez III. “The acquisition and demolition of these homes permanently removes the structures from the floodplain and greatly reduces the financial impact on individuals and the community when future flooding occurs in this area.”
"This grant will enable us to build on our previous flood mitigation efforts in Carmi, which removed more than three dozen homes from the floodplain," said Illinois Emergency Management Director Jonathon Monken. "With these additional property acquisitions, even more families can avoid the emotional and financial costs from future floods."
HMGP provides grants to state and local governments to implement long-term hazard mitigation measures. Through HMGP, FEMA will pay $707,507, or 75 percent of the project’s total cost. The City of Carmi will contribute the remaining 25 percent, or $235,836.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
CHICAGO - Understanding severe weather watches and warnings will help to keep you and your family safe during a disaster. FEMA and the National Weather Service (NWS) encourage everyone to learn this life-saving information and act if extreme weather threatens their area.
NWS alerts that are used to warn of severe weather, flood and tornado hazards include:
• Severe Thunderstorm Watch - Tells you when and where severe thunderstorms are likely to occur. Watch the sky and stay tuned to NOAA Weather Radio, commercial radio or television for information.
• Severe Thunderstorm Warning - Issued when severe weather has been reported by spotters or indicated by radar. Warnings indicate imminent danger to life and property to those in the path of the storm. Gather family members and pets and take shelter immediately. Have your emergency supply kit ready and continue to monitor your NOAA Weather Radio, commercial radio or television for more information.
• Tornado Watch - Tornadoes are possible. Remain alert for approaching storms. Watch the sky and stay tuned to NOAA Weather Radio, commercial radio or television for information.
• Tornado Warning - A tornado has been sighted or indicated by weather radar. Take shelter immediately.
• Flood Watch - Flooding is possible. Tune in to NOAA Weather Radio, commercial radio or television for information.
• Flash Flood Watch - Flash flooding is possible. Be prepared to move to higher ground; listen to NOAA Weather Radio, commercial radio or television for information.
• Flood Warning - Flooding is occurring or will occur soon; if advised to evacuate, do so immediately.
• Flash Flood Warning - A flash flood is occurring; seek higher ground on foot immediately. Do not attempt to drive into flooded areas or walk through moving water.
Be aware that sirens are designed as an outdoor warning system only to alert those who are outside that something dangerous is approaching. A NOAA Weather Radio can be critical to ensure you’re alerted to dangerous weather when indoors.
“The National Weather Service provides accurate and timely warnings and advisories, but they are only effective if people receive them, understand their risk, and take the correct action to protect themselves,” said Teri Schwein, Acting Central Region Director, National Weather Service. “Everyone should make time to prepare themselves before severe weather strikes by signing up for local weather emergency alerts, understanding NWS warnings and developing an emergency action plan.”
“Wireless Emergency Alerts (WEAs) sent to a mobile device are also used to notify individuals of potentially dangerous weather conditions,” said Andrew Velasquez, regional administrator, FEMA Region V. “If you have a WEA-capable phone and your wireless carrier participates in the program, this will enable you to be immediately aware of potentially life-threatening emergencies.”
You can find more information about WEA at www.fema.gov/wireless-emergency-alerts, and for valuable tips to help you prepare for severe weather visit www.ready.gov/severe-weather or download the free FEMA app, available for your Android, Apple or Blackberry device.
Happy U.S. National Severe Weather Preparedness Week! I’d have sent a card, but I couldn’t get to the post office due to the icy road conditions and five-foot snow drifts. Let’s hope the awful winter weather is behind us. In any case, winter is followed by spring, summer and fall, and each of these seasons also has the potential to cause weather-related stress and hardship.
Preparing is important, but is also a bit of a hit-and-miss game: Severe weather is selective in its destruction, and precisely what type of weather will cause the damage is impossible to predict. As with many business continuity/disaster recovery (BC/DR) issues, the best approach is to prepare as well as possible from a structural point of view and hope that if an emergency occurs, those steps will help ameliorate the problem.
Providing power is one of the most vital generic steps. Continuity Central has reposted a long list of power contingency suggestions from the Diesel Technology Forum. The top five ideas: Assess the risk, install a standby generator, store a sufficient amount of fuel, maintain the equipment, and consider using a contractor to reserve a generator if an event occurs.
CIO — Target CIO Beth Jacob has apparently fallen on her sword in the wake of the massive security breach in mid-December that compromised 40 million debit and credit cards and dominated national headlines. She resigned this week, effective immediately.
Fair or not, Jacob's resignation wasn't entirely surprising.
"If you look at the history of other large data breaches, turnover at the top of the IT shop is not unusual," says retail IT consultant Cathy Hotka.
Target CEO Gregg Steinhafel says the retailer is now looking outside the company for a CIO to succeed Jacob and help overhaul its network security, according to the Wall Street Journal.
Ironically, Jacob, who has a sterling reputation among retail CIOs, was thought of as a great hire by Target in 2008, Hotka says.
Network World — We were last in Cisco's new data center in Allen, Texas, in the fall of 2010 when the company was just putting the finishing touches on the 160,000 square foot building with 35,000 square feet of "raised floor" (they still use that lingo even though this facility doesn't use raised floors).
This data center, the crown jewel in the company's far-reaching Global Data Center Strategy to consolidate and modernize core facilities, was brought online July 7, 2011, and we recently stopped back for an update (see our in-depth tour of the site under construction, or in pictures).
The Allen data center plays a critical role in the company's Cisco IT Elastic Infrastructure Services (CITEIS) private cloud, and is paired with a data center in Richardson, Texas, using Cisco's Metro Virtual Data Center architecture. MVDC enables each center to provide coverage for the other's key applications, a fail-safe approach Cisco is using to safeguard critical applications.
The $375 billion shipping industry, which carries 90% of world trade, is next in line for drones to take over—at least, that’s what Rolls-Royce Holdings is betting on. The London-based engine manufacturer’s Blue Ocean development team has already set up a virtual-reality prototype in its Norwegian office that simulates 360-degree views from a vessel’s bridge. The company hopes these advanced camera systems will eventually allow captains in control centers on land to direct crewless ships. The E.U. is funding a $4.8 million study on the technology, and researchers are preparing a prototype for simulated sea trials next year.
“A growing number of vessels are already equipped with cameras that can see at night and through fog and snow—better than the human eye, and more ships are fitted with systems to transmit large volumes of data,” said one Rolls-Royce spokesperson. “Given that the technology is in place, is now the time to move some operations ashore? Is it better to have a crew of 20 sailing in a gale in the North Sea, or say five people in a control room on shore?”
CIO — WASHINGTON — Federal CIOs, who consistently list cybersecurity as one of their top concerns, aren't likely to sleep any better after listening to Dave Aucsmith.
Aucsmith, senior director of Microsoft's Institute for Advanced Technology in Governments, offered a sobering assessment of the current state of play in information security Tuesday at a conference for federal IT professionals hosted by the software giant.
"I do not believe you can create secure computer systems," Aucsmith says. "So where does that leave you? Systems have to adapt and change in the presence of your adversaries, and you have to understand your adversary in order to adapt and change those systems."
Do you remember the car you were driving 20 years ago? How about the TV set you watched? These and other products were perfectly suited to that era, and with the proper upkeep would probably be fully functional today – although by now you likely would have moved on to newer, better things.
So why do we continue to populate our data centers with seriously aging technology, particularly now that we are on the cusp of a brave new computing world?
According to a recent survey by Brocade, a good number of facilities operate with technologies and architectural designs that date back 20 years or more. While it’s true that much of that infrastructure has been, or is in the process of being, revamped with virtualization and other techniques, the fact is that much of the hardware and software infrastructure is simply not up to the task of handling the diverse and dynamic data loads of a mobile, software-defined data ecosystem. Clearly, the data center is in need of substantial modernization, and sooner rather than later.
DENVER – FEMA, in conjunction with the State of Colorado, announced on Tuesday that Colorado will receive a Disaster Case Management Grant in the amount of $2,667,963. The money will be used for the Disaster Case Management Program for survivors of the devastating floods in Colorado last September.
“The State is excited to receive this FEMA program. It will provide the necessary funding for local case managers to assist individuals with the greatest or most challenging unmet needs,” said Emergency Management Director Dave Hard of the Colorado Division of Homeland Security and Emergency Management.
Case managers meet one-on-one with survivors to assess their unmet needs as a result of the disaster. Unmet needs are items, support, or assistance that have been assessed and verified by representatives from local, state, tribal and federal governments and/or voluntary and faith-based organizations and that have not been covered by other resources.
Case managers can:
- Qualify clients for long term recovery services;
- Assist clients with disaster recovery plans; and
- Refer clients to agencies for services that match their needs. Needs might include:
- Volunteers to help in repairing or rebuilding a house;
- Building supplies; and
- Furniture, appliances and household goods.
President Obama signed a major-disaster declaration for Colorado on Sept. 14, 2013. Colorado Governor John Hickenlooper requested the Disaster Case Management Program, a federally funded program administered by the State.
The Disaster Case Management Program augments state and local capacity to provide services in the event of a major disaster declaration that includes Individual Assistance.
“This is another step in the recovery process. We recognize that people are still rebuilding their lives and this program is designed to link people who have unmet needs with organizations that may be able to help them,” said Federal Coordinating Officer Thomas McCool.
Question from a client:
Dr. Vali: Why do some people at work turn everything into a negative emotional spin?
Hmmm…Well…. Some human beings are just difficult and obnoxious. Others have real problems and struggle every day. Still others think emotional upheaval is a functional way to communicate due to their imprinting, life experience, family-of-origin training, health, or maybe even karma. Others have been victims of trauma and are seeking serenity through their storms. Or maybe it’s you. I have seen wonderful people who, for reasons unknown, set each other off into spins that were unexpected…almost like an allergy attack! I was once called to consult for a company where two top players who had never worked together were put on a project which led them both to become uncharacteristically violent. It was totally without precedent. They just could not be in the same room together. As a consultant, a first question I ask is whether the spin is acute (like a sneeze: recent and unexpected) or chronic (a pattern of disorder that repeats itself). Then I start looking for reasons.
By Luke Bird
I was recently watching the Sochi Winter Olympic Games and hasn’t it been amazing? The speed and adrenaline of the race and jump events are enough to raise the blood pressure of the calmest person! These finely tuned athletes from around the world dedicate years of their life training day after day as they try to maximise their performance and it got me thinking...
Everybody knows the age-old saying “practice makes perfect”. The idea being that you can become progressively better at something the more times you do it. This certainly is the case with business continuity. If we anticipate an issue or problem before it occurs and we practice how to fix it in advance of it happening then we can reduce the overall impact or even prevent it happening in the first place: but in my experience I’d have to say it’s not as simple as that.
How do you practice responding to an incident that can be caused by any number of different reasons, at any time, and may also result in different impacts occurring depending on its magnitude? The truth is we couldn’t possibly prepare and practice our response for every conceivable business disruption even if we trained until the next Olympics! We have to be generally prepared for everything!
Too many organizations are unwilling to face the facts when it comes to their information security risks and protective status. To move forward, an honest assessment is required…
By Dr. Jim Kennedy
Industry and government continue to spend tremendous amounts of money on information security processes, technology and people. Despite this expenditure, the breaches continue to happen and the costs of those breaches continue to grow as well.
A prudent person would ask why. Then we see blogs entitled ‘CFOs don’t want to get it when it comes to risk and security’ or magazine articles entitled ‘Senior managers cause far more security headaches than the workers they outrank’, and some of the answers become clear. Senior management and board-level people simply do not perform their fiduciary responsibilities well, or at all, in this area. C-levels are too high up in the food chain to be bothered with the day-to-day tribulations of information security.