Industry Hot News (6930)
By now, cloud computing is a familiar resource at most enterprises. But like any data infrastructure or architecture, good enough won’t do, which is why many organizations are looking beyond mere deployment strategies and into full-blown optimization.
However, optimizing the cloud will not proceed along the same track as optimization of traditional data technology. For one thing, nearly all of the functionality in the cloud, at least as far as the enterprise is concerned, happens at the virtual layer or above. So rather than creating optimal environments through advanced technology, the play here is tighter integration of services and applications. At the same time, optimized platforms are no longer focused solely on enhancing PC or desktop productivity, but also on mobile devices and both wired and wireless infrastructure.
In the time it took me to write this sentence, approximately 20 networks were hit with a cyberattack. No, it did not take me very long to write that sentence—it’s just that, according to the 2013 threat report from FireEye, a cyberattack is happening every 1.5 seconds.
Or, at least, that’s what happened in 2013, the period the report covered. Attacks could be even more frequent now. After all, in FireEye’s 2012 Advanced Threat Report, companies experienced a malware attack "every three minutes."
Look at that change over the course of a single year: enterprise networks went from being subjected to an attack every three minutes to one every second and a half. For those who think the high-profile attacks we’ve seen over the past few months are an anomaly, think again. The enterprise is under attack, pure and simple. As the bad guys become more sophisticated and create even trickier ways to sneak onto a network, next year’s FireEye report will likely declare numbers that seem unimaginable right now.
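The scale of that jump is easy to quantify. A quick back-of-the-envelope calculation shows what the two FireEye rates mean in attacks per day:

```python
# Back-of-the-envelope comparison of the two FireEye attack rates.
SECONDS_PER_DAY = 24 * 60 * 60

attacks_per_day_2012 = SECONDS_PER_DAY / (3 * 60)   # one attack every 3 minutes
attacks_per_day_2013 = SECONDS_PER_DAY / 1.5        # one attack every 1.5 seconds

print(attacks_per_day_2012)  # 480.0
print(attacks_per_day_2013)  # 57600.0
print(attacks_per_day_2013 / attacks_per_day_2012)  # 120.0
```

In other words, the 2013 rate works out to 120 times as many attacks per day as the 2012 rate.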
The steady drip of data breaches on the news and in consumers' lives isn't doing anything to build confidence in the state of today's business environment. At the heart of the matter: data privacy, or perhaps more accurately, the lack of it.
A new report from PwC, "10 Minutes on Data Privacy," points out that privacy is evolving beyond a risk and regulatory issue. Winning consumer trust is essential, and privacy polices directly correlate with brand image. How businesses manage data privacy and communicate with customers says everything about public perceptions of trust.
According to the report, 89 percent of consumers surveyed said they avoid doing business with companies they believe do not protect their privacy online, and 85 percent of investors said boards should be involved in overseeing the risk of compromising customer data.
This week I want to examine in more detail the good news coming out of the 2014 Annual Report on the State of Disaster Recovery Preparedness from the Disaster Recovery Preparedness Council. Based on hundreds of responses from organizations worldwide, the Annual Report provides several insights into the best practices of companies that are better prepared to recover from outages or disasters.
You can download the report for free at http://drbenchmark.org/
I want to examine why some companies appear to be doing much better at preparing for outages by implementing more detailed DR plans.
TUCSON, Ariz. – On his 50th birthday, John Halamka, the CIO of Beth Israel Deaconess Medical Center in Boston, was surrounded by his senior staff having cake. Then his second-in-command came in with "some" news.
A physician had gone to the Apple store and returned with a MacBook, downloaded email, and then left the office. When he returned, the new MacBook was gone. On it was a spreadsheet embedded in a PowerPoint with information on 3,900 patients, data for which the hospital was responsible.
The hospital issued a news release, in which Halamka pointed out that the incident was being treated "extremely seriously" but also being used to bring about change: in this case, accelerating implementation of a program to help employees protect devices they purchase personally.
Network World — Cisco's Application Centric Infrastructure (ACI) is a revolutionary re-thinking of how to provision and manage data center networks. While the early version we looked at has some rough edges, and Cisco still has some hard problems to solve, ACI has the potential to completely change the way that large, highly virtualized data center networks are configured and built.
Just so there's no confusion, ACI is not Cisco's version of Software Defined Networking (SDN). While SDN, for many network managers, is a solution in search of a problem, ACI is something entirely different. It's Cisco's attempt to solve the most significant and important problems facing data center managers: how to more closely link the provisioning of data center networks with the applications running over those networks.
The goal is to reduce human error, shorten application deployment times, and minimize the confusion that can occur when application managers and network managers speak very different vocabularies.
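The core idea is that administrators describe an application's tiers as policy and let the controller render the underlying network configuration. As a rough, hypothetical sketch of what such a policy payload might look like (the tenant-free application profile below uses ACI-style object names such as `fvAp` and `fvAEPg`, but the names and structure here are illustrative and should be checked against Cisco's APIC documentation, not taken as the real API contract):

```python
import json

# Hypothetical sketch of an ACI-style application profile: instead of
# configuring VLANs and ports directly, the administrator describes the
# application's tiers (endpoint groups) and the controller renders the
# network configuration. All names below are illustrative only.
app_profile = {
    "fvAp": {
        "attributes": {"name": "webshop"},
        "children": [
            {"fvAEPg": {"attributes": {"name": "web-tier"}}},
            {"fvAEPg": {"attributes": {"name": "app-tier"}}},
            {"fvAEPg": {"attributes": {"name": "db-tier"}}},
        ],
    }
}

payload = json.dumps(app_profile, indent=2)
print(payload)
# In a real deployment, a payload along these lines would be POSTed to
# the controller's REST endpoint after authenticating.
```

The point of the model is that "web tier may talk to app tier" is stated once, in application terms, rather than scattered across per-switch configuration.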
Don Thomas Jacob provides BYOD risk management advice.
BYOD adoption in the enterprise has increased significantly over the last couple of years and the trend is here to stay. While BYOD has been incorporated into some enterprises’ organizational strategy, there are numerous organizations where BYOD has been initiated by the employees themselves and many network administrators are still working out how best to manage the trend.
It is only with practical experience that network administrators can fully understand the problems associated with BYOD and the best methods to solve them. Many organizations are looking for immediate answers and most IT and network admins do not have the time to experiment with various technologies and solutions or research for the right tool to use in the network for BYOD monitoring or management.
Enterprises often begin implementing BYOD strategies by having additional authentication mechanisms, a separate VLAN and a wireless network for handhelds. While this may seem to be the quickest method to adopt BYOD, it also brings with it numerous problems. In addition to the everyday upkeep and maintenance of the enterprise network, IT admins have to take care of mobile device management, bandwidth issues and most importantly keep an eye on possible security issues. In fact, BYOD leaves the network open to a plethora of security issues.
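A minimal illustration of the visibility problem: before anything can be managed, admins need to know which devices on the network are not corporate assets. The sketch below flags devices seen on the network that are absent from the asset inventory (all MAC addresses are invented for illustration):

```python
# Minimal sketch: flag devices seen on the network that are not in the
# corporate asset inventory -- a first step toward BYOD visibility.
# All MAC addresses here are invented examples.
corporate_inventory = {
    "00:1a:2b:3c:4d:5e",  # office desktop
    "00:1a:2b:3c:4d:5f",  # conference-room AP
}

seen_on_network = [
    "00:1a:2b:3c:4d:5e",
    "a4:5e:60:aa:bb:cc",  # unknown -- likely an employee's personal device
]

byod_candidates = [mac for mac in seen_on_network if mac not in corporate_inventory]
print(byod_candidates)  # ['a4:5e:60:aa:bb:cc']
```

In practice the "seen on network" list would come from DHCP leases, wireless controller logs or switch MAC tables, but the triage logic is the same.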
Poor disaster recovery practices have led to losses of up to $5M
Hundreds of thousands of religious extremists are set to march on Jerusalem. Whether or not "hundreds of thousands" will actually descend on the Israeli capital remains to be seen. My guess is that the turnout will be less than expected, but there will still be a sea of black hats.
In preparation for this event, Israel has ordered streets closed, trains to stop running, and buses to stay at the bus station.
Whether or not the mass demonstration occurs - in any volume - the risk is already impacting the capital.
The impact is that people won't be able to:
* Go to work
* Go to school
* Go shopping for essentials (bread, milk, etc.)
* Get to a hospital or clinic if necessary
Essentially they are trapped in their neighborhoods, if not their homes.
IDG News Service — Sears Holdings said a review of its systems does not yet show evidence of a data breach, as retailers continue to stay on guard in light of the payment card terminal hacking at Target and Neiman Marcus.
The department store chain, with 2,500 stores in the U.S. and Canada, is the latest company to say it is investigating a possible breach, following the hotel management company White Lodging Services and the arts and crafts chain Michaels.
"There have been rumors and reports throughout the retail industry of security incidents at various retailers, and we are actively reviewing our systems to determine if we have been a victim of a breach," wrote Howard Riefs, director of corporate communications at Sears Holdings, in an email.
With data from 15,000 customers and over 100 insurance executives, consulting firm Capgemini and Efma found that enhancing customer experiences directly impacted insurers’ profitability. “Given the increasing demand of internet and mobile channels in insurance, digital transformation is an effective approach to create positive experiences, secure customer loyalty, and ultimately improve insurers’ profitability,” the report states.
While many insurers say they are working to improve the user experience, ratings have only increased by about 2% worldwide, with only 32% saying they had positive experiences with their provider. Further, nearly 70% of customers reported that they are considering switching carriers. Digital presence is increasingly important in making customers happy, according to the study. For example, while internet-mobile is the least likely channel to offer a good experience, it has the greatest impact when successful. Overall, as Capgemini and the MIT Center for Digital Business found in 2012, firms with a strong digital presence and customer focus are 26% more profitable.
In addition to the new report, Capgemini released the following infographic with their findings:
The last time you were in a pharmacy did you notice advertisements for the flu vaccine? Signs like these will become more common as pharmacists take on an important role in administering vaccines to the general public. Have you also noticed how pharmacies seem to be everywhere? The ubiquity of pharmacies plus their extended hours of operation and streamlined access to preventative treatments makes them perfect for helping respond to emergencies, by distributing vaccines, medications, or protective masks. It’s encouraging to know that pharmacists in all 50 states can now administer vaccines and many are involved in emergency response training.
Research Supports the Role of Pharmacists
The Immunization Systems Project of the Emory Preparedness and Emergency Response Research Center (PERRC) conducts research to determine how immunization systems could combat public health emergencies such as vaccine shortages or pandemic flu. Some of our recent findings highlight the importance of incorporating pharmacies into emergency planning as a valuable resource for reaching the public with important health measures.
During our research we explored differences in providers’ experiences administering vaccines during the H1N1 pandemic. We surveyed vaccine providers (e.g., pediatricians, obstetricians, hospital providers, pharmacists) in Washington State to examine topics such as vaccine administration, participation in preparedness activities and communication with public health agencies.
Based on our results, pharmacists:
- Saw more patients on a daily basis than any other vaccine provider group
- Reported lower coverage rates of their staff receiving seasonal and H1N1 influenza vaccines
Compared with other providers, pharmacists were:
- Less likely to rely on local health departments for information about emergencies
- Less likely to have participated in emergency training or response activities in the past
- More inclined to rely on federal sources, corporate headquarters and professional organizations for information about public health emergencies
- Willing to work with health departments in future vaccine-related public health emergencies
Our research suggests that, given their broad reach and high patient volume, pharmacists could become key first responders, improving the capability and reach of future emergency response. Encouraging pharmacists’ participation in emergency preparedness training, as well as building connections between pharmacies and public health agencies, are ways to strengthen emergency response. Public health entities are actively taking steps to add pharmacies to the pool of emergency responders. Doing so leverages the extensive community reach of pharmacists and the high level of trust people place in them.
Preparedness and Emergency Response Research Centers are funded by CDC’s Office of Public Health Preparedness and Response. To find more information about PERRC programs across the U.S. visit http://www.cdc.gov/phpr/science/erp_PERRCs.htm.
CIO — Anthony Bradley spends his days preparing for the worst-case scenarios that could occur on one of the seven campuses of Miami Dade College. Most of the school's 164,000 students -- and a large majority of its more than 3,000 employees -- spend their days thinking about anything but that.
As director of emergency preparedness, Bradley has conducted vulnerability assessments to determine the likelihood of various crisis situations and created detailed response plans for everything from fires and hurricanes to bomb threats and active shooters. Keeping students, faculty and staff informed of potential emergencies and disasters that they might encounter and what to do when they occur is a key ingredient of the program.
In the past, that would be accomplished through handouts, pocket brochures, and in-person briefings. But "handouts and brochures wind up in the trash or at home in a drawer," says Bradley, "and people forget the briefings over time."
But what they almost always carry with them is a smartphone or tablet.
CIO — The obsolescence of enterprise security was at the core of McAfee's talk this week at the RSA Conference in San Francisco. The Target breach clearly showcased that you simply can't secure a company by trying to prevent unauthorized access, malware or any other internal or external security breach.
You have to step back and recognize that someone is going to break in and you must therefore focus on catching them before they can do any damage. This is a very different approach to security, and the lessons apply to both home and business and both electronic and physical security approaches. As an older woman who lives near me discovered this week when armed men pushed into her house and stole her safe, a perimeter approach to security is no longer adequate.
McAfee's presentation was so compelling it actually held my wife's interest because she could see how the lessons learned could be applied more broadly to personal defense.
McAfee argued it is in a war-like arms race, and its lead offering, which I spoke about last week (Threat Intelligence Exchange), is only the start of the first battle.
For a variety of reasons, backup and recovery over the years has not only become more complex, it’s become a lot more expensive. With the addition of multiple types of new platforms across the enterprise, backup and recovery offerings for each platform have proliferated.
Acronis wants to simplify backup and recovery with this week’s release of software based on what the company dubs AnyData technology, which not only supports any platform but also includes universal restore, de-duplication, and application support as part of the base offering. As an extension of that capability, Acronis also announced Acronis Backup-as-a-Service, a cloud-based backup and recovery service built on the same AnyData technology, which the company’s partners can deliver through a variety of cloud service providers that Acronis has partnered with to create the service.
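De-duplication, one of the features bundled into the base offering, is conceptually simple: split the backup stream into chunks, hash each chunk, and store only chunks not seen before. A toy sketch of the general idea (not Acronis's implementation, which is proprietary):

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4096):
    """Toy fixed-size-chunk de-duplication: returns the unique chunk
    store and the ordered list of hashes needed to rebuild the data."""
    store = {}      # chunk hash -> chunk bytes (stored once)
    recipe = []     # ordered hashes to reassemble the original stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # keep only first occurrence
        recipe.append(digest)
    return store, recipe

# Highly repetitive data dedupes well: 100 identical chunks -> 1 stored.
data = b"A" * (4096 * 100)
store, recipe = deduplicate(data)
print(len(recipe), len(store))  # 100 1
```

Production systems typically use variable-size, content-defined chunking so that inserting a byte doesn't shift every subsequent chunk boundary, but the store/recipe split is the same.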
Managing third-party suppliers presents significant compliance challenges that often span an organization, raising legal, insurance, human resources and technology concerns, to name just a few. Corporations will continue to wrestle with these risks in the year ahead, but the convergence of external threats, abundance of valuable corporate data and the current regulatory environment has highlighted the importance of corporate cybersecurity practices. Cybersecurity is perhaps one of the hottest topics being discussed in boardrooms today. The Cybersecurity Framework, anticipated legislation and litany of high-profile data breaches have resulted in even more heightened scrutiny.
The landscape for corporate cybersecurity is rapidly changing and outsourced services, including IT and business process services, all stand to be impacted. Corporate stakeholders, particularly in the legal, information security and information technology departments, should be keenly focused on the current cybersecurity climate and the state of cybersecurity across third-party outsourcing agreements.
Findings from the eighth annual survey of chief audit executives in power and utilities, January 2014
How Utility IA Organizations Plan to Bolster Their Relevance and Response to Risks
Utilities are navigating dramatic and pronounced change. Demand management, smart grids, big data, shifting regulatory needs and growing capital investments are forcing utilities to change how they manage their businesses. At the same time, the growth of distributed generation, new sources of fossil fuel and the advent of shale gas and tight oil supplies are changing the industry’s economics and demanding new strategies. Utility company internal audit (IA) groups are pivotal to their company’s ability to navigate the risks inherent in these pervasive changes.
However, PwC’s eighth annual survey of Power and Utilities Chief Audit Executives (CAEs) found that IA groups are facing significant challenges in maintaining a central role. For example, respondents fear their groups won’t have the required skills to keep pace with a growing portfolio of capital projects, increasing regulatory complexity and new technologies. In addition, CAEs feel there is an opportunity to achieve closer alignment with the expectations of their stakeholders—from the critical risks that should be IA’s focus to advanced technologies that strengthen IA’s efficiency and efficacy.
I am currently studying Medieval England, including the reign of Alfred the Great. As you might expect of someone monikered ‘The Great’, he is considered right up there with the greatest Kings of England. Not only did he largely drive the Viking invaders out of his country, but he also set the stage for the unification of England under one crown for the first time since the days of Roman Britain under the Caesars. One of the innovations he developed was the fortified town, called a burh, from which to resist Viking raids and incursions. But these were more than simply walled cities for defense: within each fortified town ran a wide road down the middle called the ‘High Street’, and a street situated next to the town’s walls appropriately called ‘Wall Street’. These streets were wider than the others in the town to facilitate the movement of troops in times of crisis, such as a Viking raid. In other words, Alfred evaluated the risk to his kingdom and put multiple layers of measures into place to manage those risks.
In the Foreign Corrupt Practices Act (FCPA) compliance world, one of the key components the Department of Justice (DOJ) wants to see is a risk assessment, and a company managing its risks based upon that assessment. One company’s response to a risk or set of risks does not necessarily mean that another company must follow it. The DOJ’s Ten Hallmarks of an Effective Compliance Program are broad enough to allow companies to manage their own risks, hopefully effectively. I thought about this concept while listening to a presentation on GE’s third party risk management by Flora Francis and Andrew Baird of GE Oil & Gas at the 2014 SCCE Utility and Energy Conference in Houston this week. First of all, if you have the chance to hear a couple of nuts-and-bolts compliance practitioners from GE like these two speak, run, don’t walk, to their presentation. GE’s commitment to compliance is well known, and the company’s willingness to share details of its compliance program is a great boon to the compliance community. Lastly, there is the gold-standard nature of the GE compliance program: while it may be more than your company needs to manage its own risks, the GE compliance regime shines a light that we can all aspire to in our own programs.
Despite the publicity given to Big Data and (to a lesser extent) the Internet of Things, their practical advantage has yet to be clarified. It’s difficult to think of them in terms of business continuity when they don’t influence the fortunes of an enterprise, unless you count the negative impact of money spent investigating them. A few companies cite gains in marketing effectiveness, for example by analysing huge amounts of online data from customer interactions, but Big Data is not mainstream – or not yet. Similarly, the Internet of Things, in which phones, PCs, cars, fridges and more are all web-enabled, is a conversation starter rather than a reality. Things would change if either one acquired a killer app.
Reflecting on some of the most recent crises I’ve been involved in as an advisor, I asked: what am I really contributing?
I concluded by far the most valuable contribution was an outside perspective. Looking at the event and issues from the viewpoint of the customer, the stakeholder, the reporter, the victim, the detached observer. It is often very difficult for even the best communicators who are deeply embroiled in a problem to maintain that outside perspective. It’s the main reason why I think it is probably essential that your crisis communication plan include a qualified person completely outside your organization.
I worked on a plan for a major oil company a few years ago and saw in their plan the role of a Communications Advisor. In their case, it was intended for a specific PR expert who had a strong relationship with the President. But it struck me as such a good idea that I have built the role into almost every plan I have worked on since. The responsibility of that person is to maintain a 30,000-foot view, maintain contact with stakeholders outside the organization, and represent an honest, objective and independent perspective.
While many California farmers are taking a wait-and-see approach regarding future rainfall, some almond growers are moving ahead with the removal of mature trees. But much more is at risk, including jobs and agricultural products for the rest of the country.
California grows about half of all U.S. fruits and vegetables, mostly in the Central Valley region. It also ranks as the top farm state by annual value of agricultural products. Crops exclusive to California are almonds, dates, figs, grapes for raisins, pomegranates, olives, peaches, pistachios, plums, rice, walnuts, kiwi fruit and clover seed.
In January, Gov. Jerry Brown declared a drought emergency, and this month President Obama announced relief aid for California farmers and ranchers. Because of the severity of the ongoing drought, the U.S. Bureau of Reclamation as well as the State Water Project said there would be no water for Central Valley farmers and ranchers. According to the California Farm Water Coalition, it is expected that about 2 million acres in the San Joaquin Valley will receive no water this year.
When it comes to succeeding with data quality, you might gain an edge by avoiding a centralized approach, argues one data governance director.
Alan D. Duncan is the director of data governance at the University of New South Wales, Australia. In a recent MIKE 2.0 blog post, Duncan reacts to a survey finding that a “lack of centralized approach” is linked with inaccurate data. He questions whether it’s really lack of centralization or actually a complete lack of any structure.
Duncan’s premise, as he explains in some detail for InformationAction, is this: the social and cultural character of your organization should shape how you handle data governance. That means there will be many different ways to structure governance, but broadly speaking, he identified three:
CIO — In the years since the HITECH Act, the number of reported healthcare data breaches has been on the rise — partly because organizations have been required to disclose breaches that, in the past, would have gone unreported and partly because healthcare IT security remains a challenge.
Recent research from Experian suggests that 2014 may be the worst year yet for healthcare data breaches, due in part to the vulnerability of the poorly assembled Healthcare.gov.
Hacks and other acts of thievery get the attention, but the root cause of most healthcare data breaches is carelessness: Lost or stolen hardware that no one bothered to encrypt, protected health information emailed or otherwise exposed on the Internet, paper records left on the subway and so on.
What will it take for healthcare to take data security seriously?
A lot of coverage has been dedicated to BYOD and security from the employer’s side of things. Now an interesting new study out from AdaptiveMobile shows what employees don’t know about BYOD, which is mostly how much control employers have over those personally owned devices.
According to FierceMobileIT:
The study of 1,000 IT decision makers and 1,000 employees, conducted by Harris Interactive, found that 83 percent of staff would stop using their own device or still use it with deep concern, if they knew their employer could see what they were doing at all times. With 61 percent of enterprises already having this level of access in place, and with a need to increase control to address growing security threats, organizations could face a backlash in their employees' willingness to adopt BYOD.
Business Continuity Awareness Week takes place from 17th to 21st March and Continuity Central’s BCAW update page will provide all the information you need to make the most of this annual event.
Business Continuity Awareness Week is available to all organizations to make use of and this year two main themes have emerged:
- The Business Continuity Institute is building its BCAW activities around the theme of ‘Counting the cost’. The BCI says that this is designed to demonstrate the potential cost of not having an effective business continuity management system.
- Various Canadian organizations have grouped together to promote BCAW in that country. The theme chosen is ‘Business Continuity: Helping Protect Business Value.'
The Continuity Central BCAW update page will provide updates from both the above initiatives as well as looking at what individual businesses and organizations are doing during that week.
The update page can be visited in two ways: either using the full URL http://www.continuitycentral.com/businesscontinuityawarenessweek2014.html or the shortened version http://www.businesscontinuityawarenessweek.com
The London Risk Register was approved in early February and provides an annual assessment of the likelihood and potential impact of a range of different threats to London’s businesses and communities.
The updated Risk Register identifies 67 risks, categorised as:
- 4 Very High risks
- 33 High risks
- 24 Medium risks
- 6 Low risks.
The four ‘Very High’ risks are:
- Influenza Pandemic
- Severe inland flooding
- Fluvial or surface run-off
- Telecommunication failure.
The updated London Risk Register can be viewed here (PDF).
The London Resilience Team has also developed a number of short presentations providing an overview of the main risk areas. These can be viewed here.
Almost half of organizations are operating under the assumption that their network has already been compromised, according to a survey conducted by the SANS Institute on behalf of Guidance Software. When the limitations of perimeter security are exposed, endpoints and critical servers rife with sensitive information are rendered vulnerable. With many high profile breaches in 2013 occurring on endpoints, interest in improving endpoint security is top-of-mind for many information security professionals.
In the first-ever SANS Endpoint Security Survey, SANS surveyed 948 IT Security professionals in the United States to determine how they monitor, assess, protect and investigate their endpoints, including servers. The largest group of respondents encompassed security administrators and security analysts. More than one-third of those respondents (34 percent) work in IT management (e.g., CIO or related duties) or security management (e.g., CISO or similar responsibilities).
The overall results of the survey indicate that the topic speaks to the strategic concerns of management while also addressing the technical concerns of those ‘in the trenches’.
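If you operate on the assumption that the network is already compromised, endpoint monitoring becomes less about blocking entry and more about detecting change after the fact. One minimal illustration of the idea is file-integrity checking: baseline the hashes of critical files, then periodically re-hash and compare. A sketch (file names and contents are invented):

```python
import hashlib
import tempfile
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(baseline_hashes: dict) -> list:
    """Return paths whose current hash no longer matches the baseline."""
    return [p for p, h in baseline_hashes.items() if hash_file(Path(p)) != h]

# Demo in a throwaway directory: baseline a "critical" file, tamper
# with it, and detect the change by re-hashing.
with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "critical.cfg"
    target.write_text("setting = safe\n")
    snapshot = {str(target): hash_file(target)}   # the baseline
    target.write_text("setting = tampered\n")     # simulated compromise
    tampered = changed_files(snapshot)
    print([Path(p).name for p in tampered])  # ['critical.cfg']
```

Commercial endpoint tools add kernel-level telemetry, process monitoring and central reporting on top, but hash-and-compare is the kernel of the detect-after-breach posture.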
By Craig Garner
“A truth that’s told with bad intent
Beats all the lies you can invent.”
- William Blake
Formed through legislation signed by President Gerald Ford in 1976, the Office of the Inspector General (OIG) is one federal agency that should never be underestimated by those in the health care industry. In its pursuit to protect the integrity of health care programs and the welfare of their beneficiaries, the OIG boasts the power to determine the fate of most health care providers through standards both objective (42 U.S.C. § 1320a-7(a) – Mandatory Exclusions) and subjective (42 U.S.C. § 1320a-7(b) – Permissive Exclusions). While those unfortunate enough to find themselves on the List of Excluded Individuals and Entities (LEIE) may at times disagree, the pellucidity with which the OIG enforces its statutory directive is in perfect alignment with the transparency through which the agency insists providers conduct their business.
The recent examples of compliance program credits for Morgan Stanley and Ralph Lauren have demonstrated that, more than ever, an effective compliance program can protect a company from criminal indictment and generate bottom line benefits by helping a company avoid or reduce fines and penalties. Much of the recent enforcement action has been focused on liability for bribery and corruption actions performed by third parties on behalf of another company. When it comes to third party corruption, many compliance program leaders worry that they don’t know where to start on a third party compliance program and that they cannot afford the elaborate, richly funded programs that are so often profiled in the news.
Luckily, you don’t have to have a legion of compliance personnel and an unlimited budget to meet the standards recently outlined in A Resource Guide to the U.S. Foreign Corrupt Practices Act (FCPA Guidance) provided by the United States Department of Justice (DOJ) and Securities and Exchange Commission (SEC).
January 28th was the anniversary of the Space Shuttle Challenger disaster. The Rogers Commission detailed the official account of the disaster, laying bare all of the failures that led to the loss of a shuttle and its crew. Officially known as the Report of the Presidential Commission on the Space Shuttle Challenger Accident - The Tragedy of Mission 51-L, the report is five volumes long and covers every possible angle, starting with how NASA chose its vendor and continuing through the psychological traps that plagued the decision making that led to that fateful morning. There are many lessons to be learned in those five volumes, and here I am going to share the ones that made the greatest impact on my approach to risk management. The first is the lesson of overconfidence.
In the late 1970s, NASA was assessing the likelihood and risk associated with the catastrophic loss of its new, reusable orbiter. NASA commissioned a study which concluded that, based on NASA’s prior launches, a catastrophic failure could be expected approximately once every 24 launches. NASA, which was planning to use several shuttles flying paying payloads to help fund the program, decided that the number was too conservative. It then asked the United States Air Force (USAF) to re-perform the study. The USAF concluded that the likelihood was once every 52 launches.
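The gap between those two estimates matters more than it might look, because per-launch risk compounds over a flight program. Under the simplifying assumption of independent launches with per-launch failure probability p, the chance of at least one catastrophic failure in n launches is 1 - (1 - p)^n:

```python
# Cumulative program risk under the simplifying assumption that
# launches are independent with per-launch failure probability p.
def prob_at_least_one_failure(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Compare the two historical estimates over a 50-launch program.
for label, p in [("1-in-24 estimate", 1 / 24), ("1-in-52 estimate", 1 / 52)]:
    risk = prob_at_least_one_failure(p, 50)
    print(f"{label}: {risk:.0%} chance of losing an orbiter in 50 launches")
```

Even the "optimistic" 1-in-52 figure implies better-than-even odds of losing an orbiter over a 50-launch program, which is exactly the kind of compounding that overconfidence obscures.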
Experts have long talked about the 360-degree view of customers in near-mythical terms, as a generally worthwhile, if not actually achievable, goal. Now a new business imperative could up the ante for integrating data about customers, according to Gartner.
In the past, what that’s really meant is that they want to align channels, such as in-store, online and customer service. Now, the goal is to improve customer engagement across business divisions as well. Basically, what that means is that they’ve added marketing and sales into the mix.
That’s going to be a big job, too. A Scribe Software survey released in October found that only 16 percent of companies support full integration between CRM and other business systems. And I can’t swear by this data because it’s a few years old, but back in 2012, Scribe found that 35 percent of businesses planned to handle CRM integration by manually re-entering the data.
COMPUTERWORLD — WASHINGTON - From ocean sensors to orbiting satellites, the National Oceanic and Atmospheric Administration (NOAA) collects about 30 petabytes of environmental data annually. But only about 10% of the data is made public, something the agency now wants to change.
NOAA wants to move its vast amount of untapped data into a public cloud, but without having to pay a whopping cloud services bill.
The agency believes the data has a lot of value, and is now seeking partnerships with commercial entities, universities and others. An ideal partner might be one that can apply advanced analytics to the data to create new products and value-added services that also generate new jobs.
CIO — The demands of big data applications can put a lot of strain on a data center. Traditional IT seeks to operate in a steady state, with maximum uptime and continuous equilibrium. After all, most applications tend to have a fairly light compute load—they operate inside a virtual machine and use just some of its resources.
Big data applications, on the other hand, tend to suck up massive amounts of compute load. They also tend to feature spikes of activity—they start and end at a particular point in time.
"Big data is really changing the way data centers are operating and some of the needs they have," says Rob Clyde, CEO of Adaptive Computing, a specialist in private/hybrid cloud and technical computing environments. "The traditional data center is very much about achieving equilibrium and uptime."
IDG News Service (Boston Bureau) — A former Microsoft architect has founded a startup called Azuqua aimed at tackling the problem of joining together and automating business processes from multiple SaaS (software-as-a-service) applications.
The proliferation of SaaS and the "API [application programming interface] economy" provides a vast opportunity for a service that can easily pull together processes from multiple applications to serve various scenarios, CEO Nikhil Hasija said in an interview prior to Tuesday's launch of the company's platform.
There's also a need for a tool that can make doing this extremely easy for an average user, he said. While there are a wide range of cloud integration options, such as Dell Boomi and Informatica Cloud, "it requires a computer science degree to do something with them," Hasija claimed. "We're solving this for the business user and making IT look good for being able to deliver this."
IDG News Service (Boston Bureau) — Dell and NetSuite are broadening their relationship, with Dell becoming a global reseller and IT systems integrator for NetSuite's cloud ERP (enterprise resource planning) software.
NetSuite and Dell had already partnered around Dell's Boomi cloud integration technology, and signed off on the expanded agreement a couple of weeks ago, NetSuite CEO Zach Nelson said in an interview prior to Tuesday's announcement.
The deal has benefits for both companies. NetSuite will gain from Dell's vast global sales and service organizations, as well as the latter's specialization in industries such as health care and financial services.
Business Continuity Awareness Week takes place from 17th to 21st March 2014 and this year includes an opportunity to take part in the first business continuity ‘Flashblog’.
The Flashblog is basically a collection of short articles written around the same theme and published on the same date.
The topic which has been set is “Counting the cost, and benefits, for business continuity” and 500-word articles are being sought from the perspective of as many different types of authors as possible.
Articles will be published on various platforms (including Continuity Central), depending on the author’s preference, and will go live at 11am GMT on Tuesday 18th March using the hashtags #countingthecost and #bcFlashBlog.
For more details of how to take part go to http://bcflashblog.postach.io/join-in-the-bc-flashmob
The NFPA Technical Committee on Emergency Management and Business Continuity will meet from March 25th to 27th, 2014 to discuss progress on the 2016 edition of NFPA 1600.
The agenda for the First Draft Meeting, which will take place at Hilton St. Petersburg Carillon Park, St. Petersburg, FL, is as follows:
1. Starting time: 8:30 a.m., March 25, 2014.
2. Welcome (Don Schmidt, Chair)
3. Self-introduction of members and guests
4. Approval of Minutes of Pre-First Draft Meeting, Salt Lake City, 2013 Oct 22-23
5. Approval of agenda
6. NFPA staff liaison report (Orlando Hernandez)
Committee membership update
Distribution of sign-in sheets
7. Organizational reports/News related to NFPA 1600
8. Task group reports
9. Act on Public Comments to NFPA 1600. Take any other actions necessary to complete the ROC for NFPA 1600.
10. Old business.
11. New business
To read the minutes of the October 22nd-23rd meeting click here (PDF).
Risk levels and uncertainty change significantly over time. Competitors make new and sometimes unexpected moves on the board, new regulatory mandates complicate the picture, economies fluctuate, disruptive technologies emerge and nations start new conflicts that can escalate quickly and broadly. Not to mention that, quite simply, stuff happens, meaning tsunamis, hurricanes, floods and other catastrophic events can hit at any time. Indeed, the world is a risky place in which to do business.
Yet like everything else, there is always the other side of the equation. Companies and organizations either grow or face inevitable difficulties in sustaining the business. Value creation is a goal many managers seek, and rightfully so, as no one doubts that successful organizations must take risk to create enterprise value and grow. The question is, how much risk should they take? A balanced approach to value creation means the enterprise accepts only those risks that are prudent to undertake and that it can reasonably expect to manage successfully in pursuing its value creation objectives.
Computerworld — Now, here's a noble goal. U.K. telecom giant Orange on Friday (Feb. 21) launched a campaign to encourage companies to be much more transparent about the data they are collecting with their mobile apps, as well as helping consumers to better control how such data is used. Laudable, really -- and terribly unrealistic.
I'm not even talking about the fact that most companies would rather not be transparent about why they retain consumer data. ("We're trying to get you to buy expensive stuff that you don't need and probably don't even really want. Why do you ask?") The real problem is that you can't disclose what you don't know.
There is no question that technology today forms the core of business. In their role of facilitating transactions and storing sensitive data—the data of both the staff of the company and the stored data of the clients—the systems and networks of companies are increasingly under siege. This makes data both the most precious asset to the corporation, and the most vulnerable. Losing it may cause irrevocable damage to the reputation of a business, and thereby also the trust of shareholders. Logically, then, network security should be a key focal point in the disaster recovery plan of any business that wishes to stay afloat.
How, then, do we prepare our businesses to deal with threats to network security?
InfoWorld — Advanced persistent threats have garnered a lot of attention of late, deservedly so. APTs are arguably the most dangerous security concern for business organizations today, given their targeted nature.
An APT attack is typically launched by a professional organization based in a different country than the victim organization, thereby complicating law enforcement. These hacking organizations are often broken into specialized teams that work together to infiltrate corporate networks and systems and extract as much valuable information as possible. Illegally hacking other companies is their day job. And most are very good at it.
By all expert opinion, APTs have compromised the information infrastructure of nearly every relevant company. The question isn't whether you've been compromised by an APT, but whether you've noticed it.
Resiliency is generally defined as the ability of an organization to (a) withstand threats that could have significant impact and (b) recover from any disruption within the thresholds set by the business. Resiliency is often, mistakenly, considered the responsibility of IT. Technological resiliency is of paramount importance, but cannot alone assure the resilience of an organization.
One of the ways to become more resilient is to reduce risk exposure and thereby increase the organization’s ability to withstand threats. How can this be achieved?
A Ground-Up Approach to Risk Reduction
Understand that risks are inherent in the assets (sites, people, processes, IT services and subsystems, suppliers, equipment, etc.) that vital operations rely on. Risk reduction efforts should focus on decreasing the risk exposure of those critical assets. Decreasing risks at this granular level can, with their cumulative effect, reduce the organization’s overall risk exposure.
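The cumulative effect described above can be sketched numerically: reduce exposure asset by asset, and the organizational total falls with it. A minimal illustration, with entirely made-up assets and likelihood/impact figures:

```python
# Illustrative sketch: model overall risk exposure as the sum of
# per-asset exposures (likelihood x impact), and show how a mitigation
# applied to one critical asset lowers the organizational total.
# All assets and figures here are invented for illustration.
assets = {
    "primary data centre": {"likelihood": 0.10, "impact": 500_000},
    "key supplier":        {"likelihood": 0.20, "impact": 200_000},
    "payroll process":     {"likelihood": 0.05, "impact": 300_000},
}

def total_exposure(assets: dict) -> float:
    return sum(a["likelihood"] * a["impact"] for a in assets.values())

before = total_exposure(assets)

# Asset-level mitigation: e.g. qualifying a second supplier halves the
# likelihood of a disruptive supplier failure.
assets["key supplier"]["likelihood"] = 0.10
after = total_exposure(assets)

print(f"exposure reduced from {before:,.0f} to {after:,.0f}")
```

The point of the sketch is the granularity: no single mitigation fixes the organization, but each one shaves a measurable slice off the total.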
In reviewing the results of the new 2014 Annual Report on the State of Disaster Recovery Preparedness from the Disaster Recovery Preparedness Council in this blog, I’ve focused on the bad news so far. Based on hundreds of responses from organizations worldwide, the Annual Report provides several insights into the best practices of companies that are better prepared to recover from outages or disasters.
You can download the report for free at http://drbenchmark.org/
OK, so here’s the good news. Some companies seem to be doing much better at preparing for outages and they exhibit certain traits that distinguish them from others who are not doing so well.
CHICAGO – Just a few inches of water can cause tens of thousands of dollars in damage to your home. A flood insurance policy could protect you from the devastating out-of-pocket expenses caused by flooding.
Don’t wait until it’s too late. A policy takes 30 days from application and payment to go into effect. And a typical homeowner’s insurance policy does not cover floods.
“Snow thaw and the potential for heavy spring rains heighten the flood risk throughout our area in the coming months,” said FEMA Region V Administrator Andrew Velasquez III. “A flood insurance policy is the best option to protect your home from the costly damage floodwaters can cause.”
Historically, flooding has resulted in millions of dollars in damages throughout the state of Wisconsin. In 2010, heavy rains dumped nearly 8 inches of water in a two-hour period over the city of Milwaukee, resulting in more than 23,000 reports of damage from local residents. Last June, severe thunderstorms dumped a total of 8-13 inches of rain over northwestern, southwestern, and south central Wisconsin, causing significant damage. Some areas received 1-2 inches of rainfall per hour, resulting in flash flooding and mudslides.
FEMA recommends that all Wisconsin residents visit FloodSmart.gov or call 1-800-427-2419 to learn how to prepare for floods, how to purchase a flood insurance policy and the benefits of protecting your home or property investment against flooding. You can also contact your insurance agent for more information.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
IDG News Service (Washington, D.C., Bureau) — The U.S. Congress should pass a law requiring businesses that have lost customer information in cyberattacks to notify those affected, U.S. Attorney General Eric Holder said Monday.
In light of recent data breaches, including at Target and Neiman Marcus, a data-breach notification law would help the U.S. Department of Justice combat crime, protect privacy and prevent identity theft, Holder said in a video message.
"As we've seen -- especially in recent years -- these crimes are becoming all too common," Holder said. "And although Justice Department officials are working closely with the FBI and prosecutors across the country to bring cybercriminals to justice, it's time for leaders in Washington to provide the tools we need to do even more: by requiring businesses to notify American consumers and law enforcement in the wake of significant data breaches."
Tripwire has released the results of an extensive analysis of security vulnerabilities in small office/home office (SOHO) wireless routers. As part of the research, Tripwire sponsored a study of 653 IT and security professionals and 1,009 employees who work remotely in the US and UK.
Collectively, this research strongly shows that critical security vulnerabilities are endemic across the entire SOHO wireless router market, and a surprising number of IT professionals and employees who work remotely do not use basic security controls to protect their wireless routers.
SOHO wireless router security vulnerabilities present significant cyber security risks to employees and enterprise networks.
SSE Telecoms has launched the third and final eBook in its data centre sins series. ‘The definitive buyer’s guide for de-risking co-location projects’ includes a checklist of requirements for organizations to compare the different data centre tiers with their organization’s risk profile.
Numerous risks are inherent in data centre design and as a result, business decision makers tasked with selecting an appropriate facility to house their critical information should be aware of all the potential pitfalls and how to avoid them.
The new eBook builds on the knowledge readers will have gained in the first two eBooks – ‘7 deadly data centre sins: how to recognise them’ and ‘7 deadly data centre sins: how to mitigate them’ – offering impartial advice on how best to compare and contrast commercial data centre facilities, and to determine which approach and tier level is most appropriate to their business’s needs.
To download any of the above eBooks go to http://www.ssetelecoms.com/library/
The Committee of Sponsoring Organizations of the Treadway Commission (COSO) has published a new thought paper, ‘Improving Organizational Performance and Governance: How the COSO Frameworks Can Help’, developed to illustrate how the enterprise risk management and internal control frameworks can contribute to enhancing organizational performance and governance for sustainable success.
The paper was co-authored by Protiviti Managing Director James DeLoach and IMA (Institute of Management Accountants) President and CEO Jeffrey C. Thomson, CMA, CAE.
Since its inception in 1985, COSO has provided thought leadership and guidance on internal control, ERM, and fraud deterrence. Its landmark frameworks, Internal Control – Integrated Framework (2013) and Enterprise Risk Management – Integrated Framework (2004), offer a blueprint for helping organizations ensure effective controls and proficient risk management. The new thought paper provides a holistic approach to relating these frameworks to governance, strategy setting, and management processes.
Read the document (PDF).
The everyday consumer assumes that when they make a purchase, either online or in the checkout line, their card data is handed off to a trusted source, with security in place to protect them. They don’t see the complicated ecosystem that exists to process that transaction, nor fully understand the security mechanisms that may or may not be in place. To them, a transaction is a swipe of a card, a signing of a receipt (or entry of a PIN) and the swift deduction of funds from their account. It’s clean, simple and efficient.
The rotating door of data breaches at large retailers is proof that security in the payment ecosystem is anything but simple. Retailers not only understand the potential harm a breach poses to their own business, but also invest heavily in security mechanisms to prevent breaches from happening. Yet with an estimated 110 million customer records stolen in one breach alone, it’s clear that the security strategy retailers are following is ineffective.
CIO — Security can be an acute pain point for CIOs. There might be nothing that causes more sleepless nights than ensuring the security of an organization's data and systems. Specialists fortify the network perimeter with firewalls and IDPSs, segment the network and perform regular audits and rigorous assessments. They also classify data and isolate critical files, and follow best practices regarding least privilege and security policies.
Unfortunately, these efforts are vulnerable to the actions of undereducated or malicious users. In its 2013 global study, the Ponemon Institute estimates that the average total cost of a data breach in the United States is just over $5.4 million. Roughly 67 percent of the incidents resulted from a malicious or criminal attack or a system glitch, but 33 percent were attributed to the human factor, such as a negligent employee or contractor. It can all start with a single click on the wrong link in an email or trusting an imposter.
It’s kind of like the old question: ‘If a tree falls in the forest and no one is there to hear it, does it make a sound?’ A disaster isn’t a disaster if there’s no measurable impact. No impact to people’s perception of the situation. No impact to people’s lives. If there is a large fire but no people, property (facilities, IT equipment etc.) or processes are involved – either in fighting the fire or being impacted by it – is it still a disaster? There are no firefighters and no burning buildings, so no people are affected; is it still a fire worth tracking and assessing for impact and disaster level? No, because there is no measurable impact.
Some will argue that yes, it is still a disaster because of the damage it can cause (to the environment, for instance), but if no one is involved, how do you know it’s a disaster? There’s nothing that tells you it’s one; nothing to point to and say ‘this’ is the reason the fire is a disaster, because when the large fire is discovered its impact isn’t known…yet.
A new IDC study titled “U.S. 2014 SMB Corporate-Owned and BYOD Mobile Device Survey” confirmed that small to midsize businesses (SMBs) are now the driving force behind the rise in BYOD adoptions. The study predicts that BYOD will continue its strong presence in the workplace, with SMBs leading the way.
IDC Analyst Chris Chute, who co-authored the study, also sees SMBs introducing good BYOD management programs in a short amount of time:
“Small businesses have seen the most growth in BYOD device uptake and have responded by implementing policies that govern how those devices are used. This is a marked change from only a year ago when close to half of small firms cited having a zero-access BYOD stance. Now, with the availability of hosted software and easy-to-implement mobile solutions, SMB IT managers feel much more comfortable allowing personal devices access to internal IT resources.”
The other day I attended a meeting of a local business continuity forum. It was a very well run, very interesting meeting – the latter despite the fact that one of the topics was business interruption insurance, living proof that any subject can be made interesting by an engaging speaker. There was, however, one small glitch in proceedings that I thought was worthy of note. Or that at least gave me an excuse to write a blog.
The second item on the agenda involved a live link-up, via Skype, to a presenter in some far flung, desolate location – Reading, I think. At the appropriate time, the chairman initiated the call. And then… nothing happened, apart from a deafening silence. The technology didn’t work. Now, before you say anything, yes, of course it had been tested beforehand. This was, after all, a group of consummate business continuity professionals. It had, however, been tested on the previous Friday afternoon, whereas the live event was on a Monday morning, when the volume of traffic on the network is, apparently, much greater. To the extent that there wasn’t enough room left in the pipe for a teeny weeny little Skype call.
Target, Neiman Marcus and nearly 100 million of their customers whose personal information was stolen this past holiday season learned the hard way what companies of all sizes must: cybercrime is becoming more pervasive, its perpetrators more sophisticated and the harm it causes (individuals and companies) harder to calculate.
As cyber attacks become more common, companies are adopting policies to prevent and respond to them. Unfortunately, cyber attacks are like viruses: they are not static, but rather always evolving and adapting in order to infect as many people as possible. In most cases, before companies or industries can agree and implement defensive measures or best practices, those perpetrating cyber attacks are diligently working to circumvent the defensive measures and expand into completely new areas. Thus, companies must keep a vigilant eye on both yesterday’s attack and the emerging threat that may not materialize for another six months to a year.
Two months after Target announced a massive data breach in which hackers stole 40 million debit and credit card accounts from stores nationwide, the rising costs related to the incident are becoming clear.
Costs associated with the Target data breach have reached more than $200 million for financial institutions, according to data collected by the Consumer Bankers Association (CBA) and the Credit Union National Association (CUNA).
Breaking out the numbers, CBA estimates the cost of card replacements for its members has reached $172 million, up from an initial finding of $153 million. CUNA has said the cost to credit unions has increased to $30.6 million, up from an original estimate of $25 million.
So far, cards replaced by CBA members and credit unions account for more than half (54.5 percent) of all affected cards.
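The pieces of the cost picture above can be reconciled with a bit of arithmetic (the per-card figure is a rough back-of-the-envelope estimate derived here, not a number published by CBA or CUNA):

```python
# Reconciling the reported Target breach figures: the CBA and CUNA
# replacement costs sum to the "more than $200 million" headline, and
# with 54.5% of the 40 million affected cards replaced so far, a rough
# average replacement cost per card follows.
cba_cost = 172_000_000           # CBA members' card-replacement cost
cuna_cost = 30_600_000           # credit unions' cost per CUNA
total_cost = cba_cost + cuna_cost

cards_replaced = 0.545 * 40_000_000   # 54.5% of affected cards

print(f"total cost: ${total_cost / 1e6:.1f}M")
print(f"rough cost per replaced card: ${total_cost / cards_replaced:.2f}")
```

The per-card figure only covers replacement costs borne by the issuers; fraud losses and remediation on Target’s side are additional.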
NETWORK WORLD — Imagine this in your data center: A swath of compute, networking and storage hardware from a variety of different vendors that are all controlled not individually but by software that overlays the entire operation.
Sound like a fantasy? It's the idea behind the software defined data center (SDDC) and research firm Enterprise Management Associates has declared that 2014 is the year for enterprises to seriously take a look at it.
But how do you get there? EMA analyst and blogger Torsten Volk has outlined three key priorities for adopting an SDDC strategy.
CSO — Security pros should reevaluate their use of technology and policies to bolster defenses against insider threats that many organizations downplay, a new study shows.
The threat of employees causing a data breach through ignorance or malicious intent ranked behind viruses, data loss and hacking among the top security risks listed by 500 IT decision makers polled by IS Decisions, which specializes in securing Windows infrastructure. The respondents worked in organizations ranging from 50 to 10,000 employees in the U.S. and the U.K.
Only 21 percent of the respondents listed insider threats in the top three, demonstrating a lack of awareness of the seriousness of the risk, according to the survey. A separate study conducted by Forrester Research last year found that insiders were the top source of breaches, with 36 percent of such incidents stemming from inadvertent misuse of data by employees.
IDG NEWS SERVICE (Boston Bureau) — Companies that move the bulk of their IT operations to cloud services can end up realizing significant overall cost savings, according to a study by analyst firm Computer Economics.
The study looked specifically at companies that had moved mostly to the cloud and compared their spending habits to those of "more typical organizations," report author and Computer Economics President Frank Scavo wrote.
Computer Economics surveyed seven organizations with revenue ranging from US$50 million to $550 million. While acknowledging the sample size is small, the respondents' relative size is crucial, Scavo said in an interview.
There are critical differences in cloud storage according to backup size and priority. SMBs – including education and small government agencies – primarily require acceptable backup and restore performance plus security and compliance reporting. The enterprise needs these things plus additional solutions for backing up larger data sets across multiple remote sites and/or storage systems and applications.
Note that no one is talking about backing up the corporate data center’s petabyte-sized storage to the cloud, not yet anyway. At its present level of development, online backup is best done for smaller scale systems. But even with this limited approach, it can have real advantages for business backup.
Cloud storage is not the be-all and end-all of data protection, but it does have real benefits for some environments. One of its biggest advantages is replacing extensive off-site tape vaults. Tape libraries for active archives and massive on-site backup can be quite valuable in big data environments. But traditional off-site vaults require users to change tapes, label them, track usage, and order the truck to take them to the off-site vault; then go through another multi-step process to recover the tapes. In this respect online backup is far easier and less prone to manual error.
It’s the end of the world as we know it,
It’s the end of the world as we know it
It’s the end of the world as we know it, and I feel fine
The above lyrics came from R.E.M. and they reflect how I generally feel about law firm and lawyer pronouncements about Foreign Corrupt Practices Act (FCPA) enforcement because [SPOILER ALERT] I am a lawyer, I do practice law and I do work for a law firm, the venerable TomFoxLaw. The FCPA Professor regularly chides FCPA Inc. for their scaremongering tactics, usually monikered as ‘Client Alerts’. Mike Volkov is even more derisive when he calls them the FCPA Paparazzi and cites examples from his days in Big Law, where law firm marketing campaigns are centered around doomsday scenarios about soon-to-occur FCPA; UK Bribery Act; or [fill in the anti-corruption law here] prosecutions and enforcement actions. I usually take such law firm scaremongering and blatherings to be worth about as much as the paper they are printed on. Indeed I chide the FCPA Professor and Monsieur Volkov for their protestations. In other words, I feel fine.
How many passwords do you have? How many can you remember – and what do you do about the others? Business and consumer life is controlled to a significant degree by passwords. It’s a balancing act between making them memorable (for their rightful owners) without opening the door to password abuse or theft. The business continuity challenges that organisations face include weeding out passwords like ‘secret’, ‘1234’ or even just ‘password’, restricting password knowledge to only those who should know, and dealing with passwords that have been forgotten.
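The “weeding out” step described above is straightforward to sketch. A minimal illustration, assuming a hypothetical blocklist and a minimum-length policy (real deployments screen against large breached-password lists, not a handful of strings):

```python
# Minimal sketch of weeding out weak passwords: reject candidates that
# are too short or appear on a blocklist of known-bad choices.
# BLOCKLIST here is a tiny illustrative sample, not a real corpus.
BLOCKLIST = {"password", "secret", "1234", "123456", "qwerty"}

def is_weak(password: str, min_length: int = 12) -> bool:
    """Return True if the password fails the length floor or is blocklisted."""
    return len(password) < min_length or password.lower() in BLOCKLIST

for pw in ["secret", "1234", "correct-horse-battery-staple"]:
    print(pw, "-> weak" if is_weak(pw) else "-> acceptable")
```

The same check can run at password-set time, which catches ‘secret’ and ‘1234’ before they ever reach the directory, rather than after an audit finds them.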
Organizations are dealing with more data coming in and out from all sorts of directions these days, without a doubt. Dealing strategically with that data, from integration to analysis, is a huge part of this blog’s goal.
Sometimes, however, you have to stop and smell the tactical. And a recent study conducted by the government IT site MeriTalk raises some BIG red flags about whether federal, state and local governments can manage the influx of data we’re about to see.
The report identifies five factors, which it calls the Big Five of IT, that will significantly affect the flow of data into and out of organizations: Big Data, data center consolidation, mobility, security and cloud computing.
Most IT professionals these days are well aware of the coming changes in data center infrastructure – perhaps not on an intimate level just yet, but many of the basic concepts behind cloud computing and software-defined infrastructure seem clear enough.
Last week, I highlighted some of the thinking around the advent of enterprise-class ARM infrastructure in the data center, with the note that ARMs are primarily suited toward large-volume, small-packet workloads characteristic of mobile and web-facing applications. But while much of the trade press has focused on the ARM ultimately “taking over” the data center, knocking the x86 off its 30-year perch, the reality is a bit more nuanced.
The thing is, web/mobile applications are not the only thing coming the enterprise’s way. There are also things like Big Data, enterprise application processing, and even desktop video conferencing and surveillance data to take into consideration. These functions typically involve lower-volume, large-packet workloads, which are better suited to the x86.
InfoWorld — "We shall fight on the beaches. We shall fight on the landing grounds. We shall fight in the fields and in the streets. We shall fight in the hills. We shall never surrender," said Winston Churchill in his famous June 1940 speech in the face of Nazi attacks on England. His earlier commitment to the goal of victory, "however long and hard the road may be," is an apt analogy to the security battles that enterprises face.
The bad guys are persistent and sophisticated, and they're making inroads. It is hard to be optimistic when customers, investors, and regulators expect us to totally protect precious assets and preserve privacy, while some governments and vendors on whom we depend are themselves compromising our data, software, and networks.
The fight for security is harder than ever. Most organizations are fighting today's war with yesterday's tools and approaches -- such as protecting perimeters with passwords and firewalls -- and losing. There is too much emphasis on walling off our data and systems, and a misplaced belief that the secured-perimeter approach is adequate.
CSO — For years enterprises have battled to prevent and manage data breaches, yet the costs associated with data breaches keep climbing higher -- especially for organizations in highly regulated industries. According to the Ponemon Institute, the average cost of a breach today is $188 per record in the U.S., with the total cost of a data breach reaching upwards of $5.4 million. Also according to Ponemon, average losses are up 18% from the same survey in the prior year.
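Those two Ponemon figures imply an average breach size, which makes a useful sanity check on the numbers (the implied record count is derived here, not a figure from the report):

```python
# Cross-checking the Ponemon figures: if the average cost per compromised
# record is $188 and the average total breach cost is $5.4 million, the
# implied average breach size follows by simple division.
cost_per_record = 188
avg_total_cost = 5_400_000

implied_records = avg_total_cost / cost_per_record
print(f"implied average breach size: ~{implied_records:,.0f} records")
```

That works out to a breach of under 30,000 records on average, a reminder that the headline-grabbing multimillion-record incidents sit far out on the tail of the distribution.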
Our own Global Information Security Survey finds that breach costs are rising, as well, especially for those organizations with less mature security programs.
Is there anything organizations can do to curb rising breach costs? It turns out there is plenty. And most of it consists of things enterprises should already be doing.
WASHINGTON – The U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) is requesting individuals who are interested in serving on the National Advisory Council (NAC) to apply for appointment. The NAC is an advisory council established to ensure effective and ongoing coordination of federal preparedness, protection, response, recovery, and mitigation for natural disasters, acts of terrorism, and other man-made disasters.
The NAC advises the FEMA Administrator on all aspects of emergency management while incorporating the whole community’s input through appointed council members.
The NAC consists of up to 35 members, all of whom are experts and leaders in their respective fields. Members are appointed by the FEMA Administrator and include federal, state, tribal, local, private sector, and non-profit leaders and subject matter experts in a wide range of disciplines.
Appointments are for a three-year term. The Administrator may appoint additional candidates to serve as a FEMA Administrator Selection. The NAC will have one position open for applications and nominations in each of the following disciplines:
- Emergency Management
- Emergency Response
- Non-Elected Local Government Officials
- Elected Tribal Government Officials
- Non-Elected Tribal Government Officials
- Health Scientist *
- Communications *
- Infrastructure Protection *
- Standards Setting and Accrediting
Individuals interested in serving on the NAC are invited to apply for appointment by submitting a Cover Letter and a Resume or Curriculum Vitae (CV) to the Office of the National Advisory Council by email, fax, or mail. The Cover Letter must include, at a minimum: the discipline area(s) being applied for; current position title and organization; mailing address; a current telephone number; and email address. Letters of recommendation may also be provided, but are not required. A complete application must be submitted to be considered for appointment; application criteria, submission information, and contact information can be found on the NAC webpage. Applications will be accepted until Friday, March 14, 2014, 11:59 p.m. EST.
The NAC meets in person approximately two times a year. Members selected for the NAC serve without compensation from the federal government; however, consistent with the charter, members receive travel reimbursement and per diem under applicable federal travel regulations. Registered lobbyists, current FEMA employees, Reservists, FEMA Contractors, and potential FEMA Contractors will not be considered for NAC Membership.
* Note: Individuals appointed for these positions will serve as a Special Government Employee. For more information on requirements, please visit www.oge.gov/Topics/Selected-Employee-Categories/Advisory-Committee-Members/.
For more information on the NAC visit: www.fema.gov/national-advisory-council.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Having spent 17 years of my career in Asia, I’ve long encouraged IT professionals to consider relocating outside of the United States, not just to advance their careers, but to enable them and their families to reap the many benefits of that experience. So when someone with more than 35 years in high-profile leadership positions who’s a lot smarter than I am says the same thing, I want his voice heard.
Ritch Eich, a management consultant and author of the book, “Leadership Requires Extra Innings: Lessons on Leading from a Life in the Trenches,” strongly encourages young people to expand their global outlook. Eich is a keen advocate of considering overseas relocation, so in an interview last week, I asked him to elaborate on the reasons for his advocacy. He said it’s one of the most important things people can do:
COMPUTERWORLD — California is facing its worst drought in more than 100 years, and one with no end in sight. Conserving water has never been more important, and Silicon Valley has an opportunity to offer technological solutions to the problem.
Consider, for example, the approach the East Bay Municipal Utility District took to encouraging customers to reduce water consumption.
Using technologies not available in earlier droughts, the Oakland-based agency issued report cards on water usage to 10,000 of its 650,000 customers in a year-long pilot program. For instance, EBMUD would put worried-looking smiley faces on the statements it sent to people in two-person households who used more than 127 gallons per day -- the average for a household that size. The statements disclosed each household's actual water usage and urged the customers to "take action" -- and many did.
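The logic behind such a report card is simple to sketch. The example below is a hypothetical illustration, not EBMUD's actual system; the function names and benchmark table are assumptions, with only the 127-gallon two-person average taken from the program described above.

```python
# Hypothetical sketch of a water-usage report card.
# Only the two-person benchmark (127 gal/day) comes from the EBMUD pilot;
# everything else here is an illustrative assumption.
AVERAGE_GALLONS_PER_DAY = {2: 127}  # average daily use by household size

def report_card(household_size, gallons_per_day):
    """Compare a household's metered usage to the average for its size."""
    avg = AVERAGE_GALLONS_PER_DAY.get(household_size)
    if avg is None:
        return "no benchmark available"
    if gallons_per_day > avg:
        return f"above average ({gallons_per_day} vs {avg} gal/day) -- take action"
    return f"at or below average ({gallons_per_day} vs {avg} gal/day)"
```

A statement generator would run this per customer and attach the appropriate message (or worried-looking face) to each bill.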
Did you develop Big Data in a silo?
It’s okay. You can be honest here. You’re among friends. In fact, it’s a safe bet you’re not alone, since experts were predicting this might happen back in 2012. All the signs suggested organizations were developing Big Data in a sandbox; by default that means Big Data often became yet another data silo.
So you’re in good company if you developed your Big Data analytics in a silo, beyond your regular systems.
If past is truly prologue, then it shouldn’t come as a surprise to anyone who has studied the history of data infrastructure that virtualization, advanced cloud architectures and open, distributed computing models are starting to look a lot like the mainframe of old—albeit on a larger scale.
Everywhere you look, in fact, people are talking about pooled resources, higher utilization rates, integrated systems and a rash of other mainframe-like features intended to help the enterprise cope with the rising tide of digital information. Put another way: If the network is the new PC, then the data center is the new “mainframe.”
Of course, this new mainframe data center will differ from the old in a number of ways, most notably in the skill sets and development environments needed to run it. At the recent OCP Summit, for instance, there was no shortage of speakers highlighting the need for organizations to ramp up their knowledge of next-generation virtual and cloud technologies that will pull workaday infrastructure management tasks from physical layer infrastructure to more flexible software-defined constructs. It’s worth noting, however, that the virtualization and resource utilization techniques that ushered in the cloud were not created out of whole cloth during the client-server period, but were in fact carried over from earlier mainframe environments.
BMC Software wants to eliminate the whole notion of a level-one job ticket when it comes to IT support. BMC today unveiled a series of updates to its IT support software portfolio, including version 2.0 of BMC MyIT, the company’s self-service IT support application that makes use of a social media construct to deliver IT support.
Jason Frye, senior director of the office of the CTO at BMC Software, says with the latest version of BMC MyIT, it’s now possible for IT organizations to collaboratively address most routine IT support issues without ever generating a help desk support ticket. Not only will that make the internal IT support staff more productive, Frye says most end users will have a much higher level of satisfaction because they will be able to resolve most issues on their own.
There are many elements to a successful business continuity and life safety program. The most resilient organizations make sure that their people, teams and response efforts are aligned and resourced. This article will help you take the right steps to begin your journey to preparedness.
Conduct a Risk Assessment
To be prepared, it is vital for your organization to understand the threats that your locations could face. There are four key perspectives to consider for each of your organization’s locations:
Good business continuity training helps managers and enterprises prepare business continuity plans. However, they’ll also need to deal with a further factor – human error. This element is a cause of anything from small business failure to nuclear power plant meltdowns. A little information on the subject can help make business continuity that much more robust. Although sophisticated analytical techniques exist to assess human reliability, in the first instance we’ll take a common sense approach. This also makes it easier to apply error-prevention measures to your organisation and boost your business continuity still further. Compare them also with the theory and principles of business continuity from your training classes, and exercises you do to test BC plans.
The new 2014 Annual Report on the State of Disaster Recovery Preparedness from the Disaster Recovery Preparedness Council is an eye opener for IT professionals responsible for backup and recovery of their IT systems. Based on hundreds of responses from organizations worldwide, the Annual Report provides a wealth of information about how prepared companies are in recovering from outages based on the results of our benchmark survey launched last year.
You can download the report for free at http://drbenchmark.org/
My last blog highlighted the “bad news” from the report: three out of four companies fail to properly prepare for recovering their IT systems. One-third reported that critical applications were lost for hours and sometimes multiple days—and one in four said they had lost most, if not all of their datacenter for hours and even days.
No matter what advances take place in enterprise infrastructure in the coming years, the largest cost center is likely to be storage. Even as infrastructure becomes more software defined, relentlessly increasing data volumes will require organizations to either buy or lease storage capacity in ever larger amounts.
The question, then, isn’t how to cut back on storage, as much as it is how to make more efficient use of available storage. As I’ve mentioned in earlier posts, even cloud infrastructure can start to cost dearly as time passes and data loads mount.
A new study from Ponemon and AccessData reveals a disturbing trend in cybersecurity. When hit with some sort of cybersecurity attack, most companies have no idea how to respond or resolve the crisis.
“Threat Intelligence & Incident Response: A Study of U.S. & EMEA Organizations” (registration required to download) surveyed 1,083 CISOs and security technicians to find out how they deal with a data security event. The survey also wanted to know what these security professionals need to better detect such security problems, as well as what tools are needed to remediate problems after an attack.
Over the past several years, a lot of organizations have done a great job of dramatically reducing their IT costs by going to the cloud and adopting an on-demand computing model. As significant as that is, however, the fact remains that technology isn’t a company’s largest expense, not by a long shot—labor is. So why not dramatically reduce labor costs by adopting an on-demand labor model?
That’s the question I discussed earlier this week with Jeffrey Wald, co-founder and COO of Work Market, a provider of cloud-based contract labor management services, and a company that’s positioning itself to capitalize on what it sees as an inevitable shift to on-demand labor. I asked Wald to what extent he thinks the savings generated by on-demand computing is leading businesses to ask themselves, why not extend this model to the work force and implement on-demand labor? Wald said companies are making that connection:
Computerworld — There's a school of thought that IT departments -- and CIOs -- are disappearing. As more and more businesses buy cloud-based services, and turn to self-service and bring-your-own-device models, IT decision making is spreading throughout an organization, some experts say.
A new study by Forrester illuminates the changing IT landscape. It found that the share of IT projects primarily or exclusively run by the IT department will decline from 55% in 2009 to 47% in 2015.
The study did find a rise in the number of IT projects handled jointly by CIO-led teams and business groups. More than a third of IT projects today are collaborative ventures, handled at all stages by multiple parties in an organization, Forrester says.
Only a little over 7% of IT purchases are now done without involvement by the CIO, and they are mostly smaller tech procurements. Clearly, the Forrester study doesn't suggest that the CIO's job is headed for extinction, but its conclusions about how the CIO's role is changing are telling.
Data management isn’t enough anymore — it’s time to think more broadly about data and how it’s managed, experts say. It’s time to shift to enterprise information management.
Why? (I feel like a broken record just saying it. But if you insist, I’ve found some new data to back me up.)
Ventana Research just released a benchmark research report on information optimization, according to Information Management. It includes this finding: While 97 percent of organizations say it’s important or very important to make information available to the business and customers, only 25 percent are satisfied with the technology they’re using to provide access to that data.
Here in the UK we are suffering from some quite serious coastal and inland flooding which is causing infrastructure damage and danger to life, and will have significant long- and short-term effects. The British are sometimes thought of as arrogant (I don’t think we are) but the arrogance of failing to change, and to accept that the failure to change will have an impact, is quite staggering when looking at what has happened here.
Failure to change 1: regardless of the cause, it is quite clear that weather patterns are changing; we have had more cases of flooding in the past 10 years than in recorded memory. So why have our infrastructure management systems been unable to cope with the effects of these floods? Because there has been little effective contingency planning. Such planning, to be effective, needs to include the self and wider analysis that truly recognises what happened previously and then allocates time, effort, money and personnel to the preparation of a flexible and deliverable civil protection and resilience plan. Our emergency response systems appear to be unable to manage and cope with the overall effect of these floods. Clarity of hindsight is a luxury; however the current planning processes and structures will need to change to manage the inevitable ‘next time’.
I’m relatively new to business continuity management, with only a little over ten years’ experience in this industry that is said to be made up of the 'Men in Grey' - bearded and grey-suited men. Someone said this to me at last year’s BCI World Conference; I then looked in the mirror and sure enough that was me already.
So in my short time what changes have I seen, what incenses me and what gives me hope that as an Institute we are making progress?
Like many when they start out in this industry, I was volunteered as opposed to being a volunteer. It was in the days of PAS56 (Publicly Available Specification 56), the forerunner to BS25999 and now ultimately ISO22301.
Ron Hale is acting CEO of ISACA, as well as the association’s chief knowledge officer. Hale has more than 20 years of experience in the security field. Prior to joining ISACA, he was manager of security services for Northrop Corporation Defense Systems Division and a research manager for the Bank Administration Institute. He has also provided consulting services as a practice director in the Enterprise Risk Management division within Deloitte & Touche. He has a master’s degree in criminal justice from the University of Illinois and a doctorate in public policy from the Walden University School of Public Policy and Administration. In recognition of his accomplishments at ISACA, Hale was named to the NACD’s 2013 Directorship 100, a distinction given to 100 individuals who exemplify knowledge, leadership and excellence in corporate governance.
What changes have you seen in IT audit in the past few years and what changes do you anticipate going forward?
The IT audit profession has experienced a significant transition in the last few years. First and most important, the concept of IT audit has been replaced by information systems (IS) audit due to the expanding nature of information systems within the enterprise and the critical reliance on information as a business enabler. Technology is no longer the primary focus. The work of auditors proficient in computing and communications technologies – as well as how these technologies are implemented, managed and integrated into business processes – is an essential part of providing assurance that risks are identified and effectively managed, and that business processes involving technology solutions are in compliance with enterprise policies.
Residents Urged to Continue Following Guidance from Local Officials
WASHINGTON – The Federal Emergency Management Agency (FEMA) continues to closely coordinate with impacted and potentially impacted states in the path of a severe winter storm, through its National Response Coordination Center in Washington D.C. and its regional offices in Atlanta, Boston, New York City and Philadelphia.
Today, President Obama declared an emergency for all counties in the State of South Carolina, at the request of Governor Nikki Haley, authorizing FEMA to support the state in its efforts to respond to the storm. The declaration comes in addition to the President’s Emergency Declaration for 91 counties in the State of Georgia yesterday, at the request of Governor Nathan Deal.
FEMA has deployed an Incident Management Assistance Team to the Georgia Emergency Operations Center in Atlanta, along with liaisons to the state emergency operations centers in Georgia, Maryland, Pennsylvania, South Carolina, and Virginia to facilitate close coordination with the states. FEMA has activated its Regional Response Coordination Centers in Atlanta and Philadelphia, and continues to be in close contact with state, tribal and local partners in impacted and potentially impacted areas and stands ready to support its partners, if requested and needed.
FEMA has also established an Incident Support Base in Augusta, Georgia where additional federal teams are on the ground. Commodities including generators, meals, water, blankets, and cots are being moved to that location. At all times, FEMA maintains commodities, including millions of liters of water, millions of meals and hundreds of thousands of blankets strategically located at distribution centers throughout the United States and its territories, including Atlanta, Ga. and Frederick, Md., if needed and requested.
The U.S. Department of Transportation’s Federal Highway Administration is helping facilitate the expedited movement of utility trucks and personnel in Florida, Georgia, Mississippi, and South Carolina which includes bypassing weigh stations as long as they are under the legal weight requirements.
According to the National Weather Service, dangerous ice and snow are expected to intensify this evening as the storm moves up the Eastern Seaboard, affecting locations across the mid-Atlantic and Northeast. More than one inch of ice accumulation is possible from central Georgia into South Carolina through Thursday morning. Residents along the path of the storm can find their local forecast at www.weather.gov.
When natural disasters like severe weather strike, the first responders are local emergency and public works personnel, volunteers, humanitarian organizations, and private organizations who provide emergency assistance required to protect the public's health and safety and to meet immediate human needs.
FEMA encourages residents and visitors in the track of the storms to follow the instructions of state, local and tribal officials, and monitor NOAA Weather Radio and their local news for updates and directions provided by local officials. Residents can find trusted sources for weather and preparedness information via Twitter on FEMA’s Social Hub here: http://www.fema.gov/social-hub
Wireless Emergency Alerts are currently being sent directly to many cell phones on participating wireless carrier networks. These alerts are sent by public safety officials such as the National Weather Service about imminent threats like severe weather. They look like a text message and show the type and time of the alert, any action you should take, and the agency issuing the alert. More information on Wireless Emergency Alerts is available at http://www.ready.gov/alerts. Individuals can check with their cellular carrier to determine if their phone or wireless device is WEA-enabled.
Carbon monoxide (CO) is a colorless and odorless gas that is emitted from fuel-burning appliances, like generators, or machines that are not working or venting properly. Breathing in high levels of carbon monoxide can be fatal, killing more than 150 Americans annually. FEMA recommends the following steps to protect your family from the dangers of carbon monoxide:
- Install and maintain CO alarms inside your home to provide early warning
- Install CO alarms in a central location outside each separate sleeping area and on every level of your home
- Use portable generators outdoors in well-ventilated areas away from all doors, windows and vents
- Make sure vents for the dryer, furnace, stove and fireplace are clear of snow and other debris, and
- Remove vehicles from the garage immediately after starting.
For more information and winter preparedness tips, please visit www.usfa.fema.gov to find out more about carbon monoxide and fire safety.
Preparing for Severe Winter Weather
Get to know the terms that are used to identify winter storm hazards and discuss with your family what to do if a winter storm watch or warning is issued.
- A Winter Weather Advisory means cold, ice and snow are expected.
- A Winter Storm Watch means severe weather such as heavy snow or ice is possible in the next day or two.
- A Winter Storm Warning means severe winter conditions have begun or will begin very soon.
- An Ice Storm Warning is when freezing rain produces a significant and possibly damaging accumulation of ice.
- Freezing Rain creates a coating of ice on roads and walkways.
- Sleet is rain that turns to ice pellets before reaching the ground. Sleet also causes roads to freeze and become slippery.
Avoid traveling by car, but if you must, make sure you have an emergency supply kit in the trunk of your car. FEMA encourages families to maintain an emergency supply kit both at home and in the car to help prepare for winter power outages and icy or impassable roads.
An emergency supply kit should include a three-day supply of food and water, a battery-powered or hand-crank radio and extra flashlights and batteries. Thoroughly check and update your family's emergency supply kit and add the following supplies in preparation for winter weather:
- Rock salt to melt ice on walkways;
- Sand to improve traction;
- Snow shovels and other snow removal equipment; and
- Adequate clothing and blankets to help keep you warm.
Ensure your family preparedness plan and contacts are up to date. Learn about the emergency plans that have been established in your area by your state and local government, and ensure your home and car are prepared for the winter weather.
Ask people where the next surprise will be in disaster recovery and they may well point to technology, the weather or legislation. While all of these areas should be taken into consideration, there’s another one that is vital to good DR management. It’s people. Perhaps because it’s so obvious, disaster recovery plans sometimes gloss over the human resources factor. ‘Get everybody back to work’ is frequently all that’s said, after a detailed discussion of phased computer and network recovery. However, it may take more than snapping your fingers to bring productivity back in a timely way.
People within a business are considered the center of Business Continuity Planning, where areas of concern and actions to be taken include:
Computerworld — A recent edition of the Computerworld Security Daily Newsletter contained no fewer than four articles discussing the data breach at Target, which was first disclosed way back in December. What exactly happened to Target remains a matter of great interest.
What's being said about the hack is that it was enabled by a single point of failure. The blame is pinned on unstoppable malware on the point-of-sale (POS) systems or, alternatively, on a compromise of an HVAC contractor's credentials. Either way, Target wants you to believe that the chain was exactly what its name implies: the target of a highly sophisticated attacker.
But the truth is that systematic failures, and not a single point of failure, led to the Target hack. No single vulnerability was exploited. There were vulnerabilities throughout Target's security architecture that led to the theft of 110 million payment card numbers, along with the personally identifiable information of most of the affected cardholders.
Spreadsheets should be banned from the risk management process says Keith Ricketts.
Spreadsheets are universally loved. Why? Because they give everyone their own version of the truth, with complete autonomy to update and amend them as often as they like, without interference from anyone else. However, while spreadsheets might be a great tool at an individual level, they are completely unscalable, and therefore totally unsuitable for compiling and analysing information enterprise-wide, or even for individual projects.
When applied to a risk management scenario, the potential horrors magnify. Who knows what undiscovered risks are lurking in a spreadsheet, while all around think they have ‘ticked the box’ and that risk is managed. Using spreadsheets and emails to manage risk is a very risky approach.
Here are the main reasons that the spreadsheet approach doesn’t work:
Marsh has published a new document which aims to help organizations ensure business continuity during severe winter weather.
Winter weather events mixed with a lack of preparation can lead to building damage, freeze-up, flood, and business interruption losses says Marsh. Advance preparation can help to mitigate winter weather impacts on your operations and business continuity.
The document provides a useful checklist for mitigating winter weather impacts. Read it here (PDF).
Companies are no longer tolerant of security-and-compliance teams telling them they cannot go to the cloud; instead, risk teams must learn how to adapt to the cloud environment. This is the view of John Overbaugh, managing director of Security Services at Caliber Security Partners.
Writing for http://www.isaca.org, Mr. Overbaugh suggests four steps for organizational risk leaders to follow to help their companies adopt cloud technologies while minimizing overall risk:
Computerworld — The headlines about the storm approaching Georgia include a tinge of panic and wonder, but the view from Monty Hamilton's Atlanta office is of streets calm and empty.
Hamilton is the CEO of Rural Sourcing Inc., a domestic IT services company based in downtown Atlanta. He reported Tuesday afternoon that it was raining, but the streets were mostly deserted as weather reports forecast freezing rain and power outages.
"It's pretty vacant right now," said Hamilton, who said the city and state were doing a lot to prepare for the storm. That's in contrast to two weeks ago when a storm paralyzed the city with several inches of snow, leaving many stranded, including Hamilton.
The PC is dead. The PC is not dead. The PC is sort of dead, but that’s OK because the new client devices are much cooler.
By now, just about every theory on the PC’s future in the enterprise has been thoroughly consumed and digested by the technorati. And while the term “dead” gets thrown around a lot, it is clear that although the PC is no longer the primary means of data access in the enterprise, neither is it headed for the scrap heap.
A more likely scenario is that the PC will change in both form and function as the enterprise heads into the cloudy, mobility-driven future. The key question, then, is how.
There is a 75% chance of an El Niño event in 2014, according to an early warning report published in Proceedings of the National Academy of Sciences (PNAS). The researchers used a new method that applies network analysis to predict weather systems up to a year ahead, instead of the usual six-month maximum of other approaches. The model successfully predicted the absence of El Niño in 2012 and 2013.
El Niño events are characterized by a warmer Pacific Ocean, which results in a disruption to the ocean-atmosphere system. This can lead to warmer temperatures worldwide, droughts in Australia and Southeast Asia, and heavy rain and flooding in parts of the U.S. and South America. If such an event occurred toward the end of 2014, the increased temperatures and drought conditions could persist through 2015.
The researchers suggested that their work might help farmers and government agencies by giving them more time to prepare and to consider investing in flood- or drought-resistant crops.
The Target data breach is the gift that keeps on giving. It continues to capture attention with new revelations and insights.
The real opportunity for security professionals is to side-step speculation and use the coverage to spark productive conversations. The kinds of discussions that help others understand your value and set the stage for necessary changes.
The latest development was the potential compromise through a third party HVAC contractor.
Now the details around Target, an ongoing investigation, are still a bit murky. Brian Krebs is on the case and providing a valuable service to the industry. Let’s leave the investigation to Brian and take the opportunity to build on his work to improve our organizations.
Mark Kedgley examines the importance of real-time file integrity monitoring in a constantly and quickly evolving threat landscape.
Few experts would argue against the importance of real-time file integrity monitoring (FIM) in an era of fast-changing and sophisticated security threats. It is impossible to second-guess the method of a breach, and therefore the ‘last line of defence’ detection offered by FIM has never been more critical. The worldwide coverage of the recent breach at Target shows how vital cybersecurity is, and how high the stakes are if your defences are breached. Little wonder that leaders in security best practices such as NIST, the PCI Security Standards Council and the SANS organisation all advocate FIM as an essential security defence.
That said, many would also challenge the actual value and quality of some FIM deployments over the past decade. From highly complex, multimillion-dollar software investments all the way down to freeware, far too many deployments are actually increasing, rather than reducing, business risk by creating a deluge of unmanaged and unmanageable alerts. Put simply: too much information and not enough context to provide an effective solution.
Protiviti recently partnered with North Carolina State University’s ERM Initiative to conduct its second annual ‘Executive Perspectives on Top Risks Survey’. This obtained the views of more than 370 United States-based board members and C-suite executives about risks that are likely to affect their organization in 2014.
Key findings included:
- The overall survey responses suggest a business environment in 2014 that is slightly less risky for organizations than it was a year ago; board members, however, view it as more risky this year than in 2013.
- Regulatory change and heightened regulatory scrutiny represent the top overall risk for the second consecutive year.
- Cyber threats and privacy/identity management are seen as an increasing threat.
The top 10 risks as perceived by executives are:
According to the Philadelphia Business Journal and other internet sources, hackers apparently accessed Target's data base via a subcontractor's data credentials.
The Wall Street Journal reports that a Pittsburgh, PA refrigeration contractor began working with Target in 2006, installing and maintaining refrigeration systems in stores as the discounter expanded its fresh food offerings. Through that relationship, the contractor was linked remotely to Target's computer systems for "electronic billing, contract submission and project management."
Target's liability comes from its IT security advisors' failure to ask the important "What if" questions.
Of course, there’s a personal impact too.
The just-released 2014 Identity Fraud Report by Javelin Strategy & Research reveals that data breaches are now the greatest risk factor for identity fraud.
In 2013, one in three consumers who received notification of a data breach became a victim of fraud, up from one in four in 2012, the report found.
Some 46 percent of consumers with breached debit cards in 2013 became fraud victims in the same year, compared to only 16 percent of consumers with a social security number breached.
National Business Ethics Survey by Ethics Resource Center Reveals Decline in Workplace Misdeeds, Improvement in Ethics Culture in Past Six Years
ARLINGTON, Va. — Research released today by the Ethics Resource Center (ERC), America’s oldest nonprofit advancing high ethical standards and practices in public and private institutions, reveals that workplace misconduct is at an historic low, having steadily and significantly declined since 2007.
The eighth National Business Ethics Survey (NBES) shows that 41 percent of more than 6,400 workers surveyed said they have observed misconduct on the job, down from 55 percent in 2007. In addition, the report found that fewer employees felt pressure to compromise their standards, down to nine percent from 13 percent in 2011.
Noted Michael G. Oxley, ERC Chairman of the Board, former Congressman and House co-sponsor of the Sarbanes-Oxley Act of 2002, “Companies are working harder to build strong cultures and implement increasingly sophisticated ethics and compliance programs. The results of the survey are encouraging and show that companies are doing a better job of holding workers accountable, imposing discipline for misconduct and letting it be known publicly that bad behavior will be punished.”
Whether based on a whistleblower complaint or because you are subject to an inquiry from a governmental agency, a company faced with potential employee misconduct must perform an internal investigation. The goals of an internal investigation are to understand the nature and scope of the issue(s) and to take necessary remedial action promptly. To be truly effective, an organization should aim to achieve these goals while minimizing the impact on the company’s routine business operations.
Unfortunately, companies often inadvertently overlook certain issues in this process, which can result in an ineffective investigation and may pose additional litigation risks for the company.
Here is a list of five factors often overlooked when conducting an internal investigation:
It started with IT server virtualisation and then continued with cloud computing. Instead of physical machines running a company’s own software applications, we now simply have interfaces to virtual instances of these things. Computing resources are no longer located in a specific piece of equipment on a company’s premises. They are ‘somewhere’ in the cluster of virtualised servers, or on the network, or in the cloud. Software as a Service (SaaS) takes it all a step further: now not only are businesses relieved of the need to buy and run their own hardware, but there’s someone else to look after the software too. The potential advantages of budget flexibility, resilience and scalability are clear. But that doesn’t change the need to continually verify solid business continuity management, from one end right through to the other.
By Geary Sikich
If we agree on the basic premise that business continuity can be defined as sustaining what is critical to the enterprise’s survivability during periods of discontinuity, then we must recognize that the activity known as the business impact assessment/analysis (BIA) needs to be redefined.
The BIA, as currently practiced, does not necessarily achieve the following:
- Define what is critical to the organization;
- Develop strategies to recover/sustain during times of discontinuity.
I posit a two-phase BIA framework consisting of a pre-event general analysis and a post-event identification and assessment of business impacts and potential consequences for the enterprise.
Events are nonlinear and therefore carry uncertain outcomes. As a result, traditional pre-event BIAs are of little value when conducted using concepts such as mission criticality, recovery time objectives, and recovery point objectives. Events evolve; the elements of randomness and nonlinearity create an opacity (a quality of being difficult to understand or explain) that a traditional BIA underestimates.
By Mark Kraynak
Gartner predicts that global spending on public cloud services will grow from $155 billion this year to $210 billion in 2016. The forces driving enterprise IT to the cloud are faster deployment and easier management, which ultimately translate to lower cost. But at the same time, cloud deployment is significantly increasing security and compliance risk because security solutions have not kept up, leaving high-value assets seriously exposed.
So what are some of the security gaps exposed by this ‘cloudification’ of the data center? They include:
The subject of cloud costs keeps popping up in IT circles, most likely the result of more than two years’ worth of experience in shifting enterprise workloads off of traditional data center infrastructure. Increasingly, though, it seems that the cloud is not always the best choice for the pocketbook, particularly when long-term, scale-out architectures are needed.
I touched on this last month when I discussed a number of new analyses that claim internal enterprise resources can be delivered quite efficiently and at broad scale provided they are housed on the same virtual, federated infrastructure that powers most cloud services. Rob Enderle, for example, pointed out that private clouds can come in at half the cost of leading public services depending on the type of workload and the amount of data involved. A key factor in this disparity turns out to be rogue cloud deployments, which can often lead to redundancy and data duplication.
IDG News Service (Boston Bureau) — CIOs still have the last word over most IT spending but over time they will work more closely with business units on buying decisions, a Forrester Research survey finds.
Only 6.3 percent of new technology purchases in the U.S. were made and implemented solely by business units in 2013, according to the report's author, Forrester vice president and principal analyst Andrew Bartels. Some 9 percent of spending involved technology the business unit chose but the CIO's team implemented and managed.
However, "the ideal tech-buying process is one in which the business and the CIO's team work together to identify a need, find and fund a solution, choose the right vendor or vendors, implement it, and manage it," Bartels wrote in the report. "We estimate that more than a third of tech purchases will fit that profile by 2015."
Security is the No. 1 impediment to cloud service adoption; Forrester’s research has shown this over the last three years. Cloud Service Providers (CSPs) are responding to this issue. AWS has built an impressive catalog of security controls as part of the company’s IaaS/PaaS offerings. If you are currently using, or considering using, AWS as a CSP, you should check out the following new research.
AWS Cloud Security - AWS Takes Important Steps For Securing Cloud Workloads
As chairman of the Disaster Recovery Preparedness Council, I’m proud to announce that we’ve issued our first annual report on The State of Disaster Recovery Preparedness. Based on hundreds of responses from organizations worldwide, the 20-page 2014 Annual Report provides a close look at how companies are doing when it comes to disaster recovery best practices, based on our ground-breaking benchmark survey launched in 2013.
You can download the report for free at http://drbenchmark.org/
First, the bad news: For some it may come as a shock that three out of four companies taking the survey are at risk, failing to properly prepare for recovering their IT systems in the event of an outage or disaster. Others may not be so surprised. The report, however, does highlight some sobering statistics when it comes to the damage companies are suffering when they are unprepared.
CIO — Red Hat and Hortonworks, provider of one of the most popular Apache Hadoop distributions, expanded their existing strategic alliance on Monday as part of an effort to make it easier than ever to bring Hadoop into the enterprise in production environments.
Under the expanded alliance, the partners will integrate their product lines and enable joint go-to-market initiatives and seamless collaborative customer support. Additionally, the partners announced the availability of a beta of a Hortonworks Data Platform (HDP) plug-in for Red Hat Storage that allows Hortonworks' Hadoop distribution to run natively on top of Red Hat's storage offering.
Richard Chambers, CIA, CGAP, CCSA, CRMA, shares his personal reflections and insights on the internal audit profession.
Internal auditors are right to be concerned about third-party risks. The days of a company’s suppliers or partners being well-known and trusted businesses on the same street or town are a distant memory.
In the interconnected, global economy of the 21st century, you are apt to be purchasing raw materials, components, or services from business entities halfway around the world. In turn, these unfamiliar partners may be acquiring subcomponents from other businesses whose very existence may be unknown to us. Third parties can create extraordinary risks for an enterprise, as we have seen played out repeatedly on the global stage.
Hiring practices, working conditions, conflict minerals, carbon footprint, political conflict, data security, financial stability, intellectual property — the list goes on. No brand is immune; no partner too pure. Third-party relationships can reside in any part of an organization, with one contract often having little bearing on another.
COMPUTERWORLD — The think-tankers on the Executive Leadership Council at AIIM systematically use a four-box matrix to reduce uncertainty, allocate investments and calibrate new product/service initiatives. This simple tool -- with "important and difficult" in the upper right and "unimportant and easy" in the lower left -- produces surprisingly powerful insights.
During year-end discussions with 40 executives in 20 vertical markets, I discovered that they all now place big data in that upper-right quadrant. Similarly, readers of Booz & Co.'s Strategy+Business blog designated big data the 2013 Strategy of the Year, and the co-directors of Cognizant's Center for the Future of Work, in a masterful white paper, placed big-data-enabled "meaning making" at the pinnacle of strategic endeavor.
That was enough to prompt me to roll up my sleeves and systematically examine, vertical market by vertical market, how organizations are organizing their path to big data mastery.
This week, we reached the inevitable point in the controversy over the credit and debit card breaches where grim-faced retail executives from Target and Neiman Marcus, industry experts and consumer advocates turned up in Washington. They raised their hands and delivered well-rehearsed statements to our elected representatives.
It’s a familiar bit of theater, but their messages about the security of our personal data when we pay using plastic were startling.
“The innovations that are driving the industry forward and presenting consumers with exciting new methods of making purchases is also rapidly expanding beyond the bounds of our existing regulatory and consumer protection regimes,” went the written testimony of James A. Reuter, speaking on behalf of the American Bankers Association. “And, as has historically been the case, the criminals are often one step ahead as the marketplace searches for consensus.”
TECHWORLD — Extreme Networks has unveiled an ASIC-based big data analytics system that marries network data with application data to make it easier to manage large networks and cloud deployments.
The Purview offering provides visibility into application use across the network, helping organisations in four ways, said Extreme.
The product can improve the experience of connected users, enhance organisations' understanding of user engagement, optimise application performance, and protect against malicious or unapproved system use.
Network World — Tech salaries saw a nearly 3% bump last year, and IT pros with expertise in big data-related languages, databases and skills enjoyed some of the largest paychecks.
Average U.S. tech salaries climbed to $87,811 in 2013, up from $85,619 the previous year, according to Dice's newly released 2013-2014 Salary Survey. Significantly, nine of the top 10 highest paying IT salaries are for skills related to big data, says the tech career site.
At the top of the list is R, a software environment for statistical computing and graphics. Here's the full list of the top 10 highest paying IT salaries:
1. R: $115,531
2. NoSQL: $114,796
3. MapReduce: $114,396
4. PMBOK: $112,382
5. Cassandra: $112,382
6. OmniGraffle: $111,039
7. Pig: $109,561
8. Service-Oriented Architecture: $108,997
9. Hadoop: $108,669
10. MongoDB: $107,825
Executives from Target and Neiman Marcus still don’t know how they could have better protected their customers from cybercriminals, they said at a congressional hearing Wednesday.
Asked exactly how recent attacks occurred, Target’s John Mulligan answered: “We don’t understand that today.’’ The company is still investigating, said Mulligan, the company’s chief financial officer and executive vice president, and “certainly from that there will be learnings.”
Michael Kingston, the chief information officer of the Neiman Marcus Group, said, “We’ve not yet found any evidence of how hackers were able to infiltrate our network.’’ The attack was “customized to evade detection’’ and occurred “in real time, when the card was swiped” just milliseconds before being encrypted. The breaches prompted several congressional hearings and briefings; last week, Attorney General Eric H. Holder Jr. told the Senate Judiciary Committee that his agency is investigating them.
Wednesday’s House hearing, “Can data breaches be prevented?,” ran 3½ hours, but the short answer was: no. That’s despite the “hundreds of millions” Target spent trying, and the “tens of millions” Neiman’s spent.
The Committee of Permanent Representatives has endorsed an agreement between the Hellenic Presidency of the Council and European Parliament representatives with a view to establishing a European surveillance and tracking service. This will have the aim of enhancing the security of space infrastructures and the safety of satellite operations by reducing collision risks and helping to monitor space debris.
Space infrastructure is increasingly threatened by collision risks due to the growing population of satellites and the amount of space debris. In order to mitigate the risk of collision, it is necessary to identify and monitor satellites and space debris, catalogue their positions, and track their movements. When a potential risk of collision has been identified, satellite operators can then be alerted in time to move their satellites.
This activity is known as space surveillance and tracking (SST) and operational SST services do not currently exist at a European level.
The new SST support framework will foster the networking of national SST assets to provide SST services for the benefit of both public and private operators of critical space-based infrastructures.
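The screening step described above (predict positions, compute the miss distance, alert the operator) can be sketched in toy form. Straight-line motion and the 5 km threshold below are simplifying assumptions for illustration only; real SST services use full orbital propagation and probabilistic conjunction assessment.

```python
import math


def closest_approach(p1, v1, p2, v2):
    """Minimum future distance (km) between two objects moving in straight
    lines, given positions p (km) and velocities v (km/s)."""
    # Relative position and velocity
    dp = [a - b for a, b in zip(p1, p2)]
    dv = [a - b for a, b in zip(v1, v2)]
    dv2 = sum(c * c for c in dv)
    # Time at which separation is minimised, clamped to the future
    t = 0.0 if dv2 == 0 else max(0.0, -sum(a * b for a, b in zip(dp, dv)) / dv2)
    return math.dist([a + c * t for a, c in zip(p1, v1)],
                     [a + c * t for a, c in zip(p2, v2)])


def conjunction_alert(p1, v1, p2, v2, threshold_km=5.0):
    """Flag a conjunction if the predicted miss distance is below threshold."""
    return closest_approach(p1, v1, p2, v2) < threshold_km
```

For example, two objects approaching head-on along the same line would trigger an alert, while the same pair offset by 100 km would not.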
Here’s a humbling prediction for IT: By 2018, the CMO’s IT budget could “outstrip” the CIO’s budget, according to Gartner.
And that’s fine with CMOs, who now see marketing as the natural home for Big Data projects, according to a recent Harvard Business Review Blog post written by Jesko Perrey and Matt Ariker of McKinsey & Company.
Predictably enough, CIOs see the situation a bit differently. But the naked truth is that both CMOs and CIOs “are on the hook for turning all that data into above-market growth,” Perrey and Ariker note.
In publishing its “Security Research Cyber Risk Report 2013,” an annual update, HP has delved into a number of the most vexing contradictions in security and risk management. The report’s goal, states HP, is “to provide security information that can be used to understand the vulnerability landscape and best deploy resources to minimize security risk.”
Key findings included these:
“Research gains attention, but vulnerability disclosures stabilize and decrease in severity.” The number of publicly disclosed vulnerabilities remained stable in 2013, as the number of high-severity vulnerabilities dropped for the fourth year in a row. Asks HP, “Is this a good indication of the improving awareness of security in software development or does this indicate a more nefarious trend – the increased price of vulnerabilities on the black market for APTs resulting in less public disclosures?”
CIO — Last year, Yahoo made headlines for rescinding its once-liberal work-from-home policies in the interests of "productivity" and "accountability." But not having a plan in place for keeping the business running if your employees physically cannot get to the office -- in the event of a winter storm, hurricane or even day-to-day concerns like a family illness or car trouble -- could put you at a significant disadvantage.
Here's how you can prepare your workforce - and your business - for the inevitability of employees working from home.
Business As (Un)Usual
The good news is that most organizations already embrace technologies like the cloud that ease employees' capability to connect and collaborate from almost anywhere.
CIO — How can CIOs and IT executives help their teams be more productive (besides providing them with free food)? Here are the top 11 tips -- from CIOs, IT executives, productivity and leadership experts and project managers -- for getting the most out of your IT team.
1. Set goals -- and be "Agile." "Be Agile in your goal setting," says Zubin Irani, cofounder & CEO, cPrime, a project management consulting company. "Have the team set goals for the quarter -- and break the work into smaller chunks that they can then self-assign and manage."
2. Communicate goals, expectations and roles from the get-go. "Provide your team with background information and the strategic vision behind [each] project, activity, task, etc.," says Hussein Yahfoufi, vice president, Technology & Corporate Services, OneRoof Energy, a solar finance provider. "Not only does providing more background and information motivate employees more, [it makes them] feel more engaged."
At a time when several large companies are being investigated for bribery in China, organizations doing business there would do well to have strong policies and training programs in place, experts advise. They also caution that using a “cookie cutter” approach for compliance is not enough.
“There are several ongoing investigations right now for hiring of relatives of foreign officials,” Michael Volkov, chief executive officer of the Volkov Law Group, LLC said in a webinar, “Navigating the Waters of Anti-Corruption Compliance in China.”
He pointed out that Qualcomm, a wireless technology company, “is under investigation for hiring relatives of foreign officials and giving them jobs strategically. This is a serious investigation, and Qualcomm is a reputable company with a sophisticated compliance program.”
CIO — All readers have their share of successful and failed software projects. Everyone has a favorite war story. But for software project managers, either in a company or in a consulting organization, there's surprisingly little up-to-date information about what causes budget overruns and schedule slips.
Of course, management consultants worth their name will claim that their methodology will fix the problem — and they'll almost certainly have a two-dimensional graph showing how their expertise will take your organization up and to the right. Reductio ad Gartner Group.
Things aren't that simple. The Standish Group's Chaos Reports — a sort of CSI for IT murders — provide solid evidence that the success of software projects depends upon dozens of factors.
Network World — The growing number of natural disasters and the rise in data loss has increased the significance of having an effective disaster recovery (DR) strategy. Thankfully new capabilities are helping smaller companies keep pace. Here's a look at the prominent trends shaping disaster recovery today:
* Cloud Services: As the adoption of cloud services increases, enterprises are realizing the cloud can become part of their disaster recovery plan. Instead of buying dedicated resources in case of a disaster, cloud computing allows companies to pay for long-term data storage on a pay-per-use basis, and to only pay for servers if they have a need to run them for an actual disaster or test.
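The pay-per-use arithmetic can be sketched with a toy cost model. All rates and durations below are illustrative assumptions, not figures from the article: the point is only that dedicated standby capacity is paid for every month, while cloud DR pays for storage continuously but for servers only during an actual disaster or test.

```python
def dedicated_dr_cost(monthly_site_cost: float, months: int) -> float:
    """Dedicated DR: a standby site is paid for every month, used or not."""
    return monthly_site_cost * months


def cloud_dr_cost(storage_gb: float, gb_month_rate: float, months: int,
                  server_hour_rate: float, hours_active: float) -> float:
    """Pay-per-use DR: continuous storage replication, plus servers billed
    only for the hours they actually run (disasters and tests)."""
    return storage_gb * gb_month_rate * months + server_hour_rate * hours_active


# Illustrative figures only: 500 GB replicated for a year at $0.10/GB-month,
# servers at $1.50/hour spun up for two 24-hour DR tests.
cloud = cloud_dr_cost(500, 0.10, 12, 1.50, 48)
dedicated = dedicated_dr_cost(2000, 12)
```

Under these assumed numbers the pay-per-use model comes in far cheaper, which is the economic argument the trend rests on; heavy, sustained server usage would narrow or reverse the gap.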
Network World — A few years ago the only cloud game in town was the public cloud, but today private and hybrid clouds are also true contenders. In fact, private cloud implementations address a prevalent set of challenges and issues that public clouds cannot and can help speed up and smooth the way of cloud adoption.
Here are five core tenets you should assess when weighing private cloud against public cloud options:
CDC is responsible for protecting the public from a host of health threats, including some pretty scary pathogens such as Ebola virus and anthrax. One way we do this is through our Select Agents Program, which governs and regulates the use of certain pathogens by research facilities and labs around the world. At the beginning of December I had the remarkable opportunity to accompany the inspection team that helps regulate the Select Agents Program on one of its routine lab inspections. I was invited to an inspection of a laboratory in the Southeast region of the U.S. that handles rare and dangerous pathogens, to get a glimpse of how the inspection team operates, what they look for, and what they do to protect us.
Laboratory inspections are an important aspect of the Select Agents Program since they ensure that labs and research facilities are complying with guidelines and regulations specific to biological research. In order to improve our understanding of human health and disease, some laboratories handle rare and potentially dangerous biologic agents and toxins, which are known to cause severe infection, illness, and sometimes death in humans. Laboratories that possess and use these types of biologic agents and toxins for manufacturing purposes, research use, or diagnostics must be registered through this program. When they register with the program, they agree to follow all requirements in the regulation (42 CFR Part 73 – Possession, Use and Transfer of Select Agents and Toxins) including, safety, incident response, security, and having appropriate training in place. CDC’s job is to ensure that all precautions are being taken at laboratories so that the public remain unexposed and unharmed by these potential health threats.
The inspection that I joined actually began one week prior to the inspection date, when I met with the Inspection Team to prepare a folder with all of the Southeast facility’s biosafety plans, incident response plans, and security plans. The following week, I flew to the site to meet with the inspection team. I was set to be with the team for the first and most active day of their inspection.
The inspection started with introductions and a briefing among the group. Then there was a visitors’ training to instruct all personnel on potential hazards as well as actions to take in the event of an emergency. To avoid workplace injuries and hazards, personnel must meet all occupational health qualifications. In this laboratory, personnel must perform an exercise test to confirm adequate fitness to wear a respirator. There are two types of respirators at this facility: one that is simply a facemask and another that is a full-body suit. The team thought that I would opt for the full-body respirator because it did not require that I shave my beard. However, I gladly accepted the challenge to don the facemask respirator (and shave my beard!) to earn my place as a member of the team.
Suited up in gowns, gloves, shoe covers, masks and other inspector accessories, we were ready to begin our inspection. Our goal was to go through all of their laboratory space to check that the facility was adhering to appropriate biosafety measures. We checked biological safety cabinets and animal cages, catalogued inventory, and performed other tasks associated with laboratory compliance. Lab personnel graciously halted their work during our visit.
The devoted team sought to conduct as much of the laboratory-based inspection as possible on the first day. We were successful. After seven hours of tireless work and a brief break for lunch, we had canvassed the entire facility. The personnel at the Southeast facility were pleasant, welcoming, and grateful for the visit, remarking that they looked forward to an external perspective. Having thoroughly inspected the lab, we finally retired for the day.
A Day’s Work Is Never Done
Though I remained for only the first day, the team continued diligently throughout the week. They reviewed all of the Southeast facility’s documents, checked its security, and evaluated its waste, storage, and laboratory maintenance procedures. The team is then responsible for generating a report that lists observations that deviate from regulatory requirements. After much collaboration between the Select Agents Program and the Southeast facility, the Southeast facility is expected to implement changes to receive standard renewal.
I was incredibly impressed with the Select Agents Program’s laboratory inspection. I know that because of them, we can rest assured that high containment facilities operate at the toughest standards. Thanks to this program, the biosafety measures in place consistently enhance the safety and security that the CDC promises to uphold to the American people.
I admit it took me a while to finally get it. I have long wondered what could have caused the explosion in Department of Justice (DOJ) and Securities and Exchange Commission (SEC) enforcement of the Foreign Corrupt Practices Act (FCPA). Starting in about 2004, FCPA enforcement not only increased relative to the previous 25 years of the statute’s existence but literally exploded. Of course, I had heard Dick Cassin and Dan Chapman, most prominently among others, talk and write about FCPA enforcement as an anti-terrorism security issue post 9/11, but I never quite bought into it because I did not understand the theoretical underpinnings of such an analysis.
I recently finished listening to the Teaching Company’s “Masters of War: History’s Greatest Strategic Thinkers” by Professor Andrew Wilson of the Naval War College. It is a 24-lecture series on the content and historical context of the world’s greatest war strategists. In his lecture on “Terrorism as Strategy,” Professor Wilson explained that corruption is both a part of the strategy of terrorism and a cause of terrorism. After listening to his lecture and reflecting on some of the world events which invoked both parts of his explanation, it became clear to me why FCPA enforcement exploded and, more importantly, why the US government needs to continue aggressive enforcement of the FCPA and encourage other countries across the globe to enact and enforce strong international and domestic anti-corruption and anti-bribery laws.
At the start of each year, there’s always a long list of IT offerings vying for attention. With many solutions still looking for a problem, it pays to take a moment to consider the business impact rather than being seduced by the high-tech glitter. Here’s a quick rundown of what might affect business continuity in 2014.
Experts generally see Big Data as a disruptive technology. Of course, you never know with these things: Sometimes you think something is amazing and it turns out to be more evolutionary than revolutionary.
But if the tech analysts are right and Big Data is a disruptive technology, then it would follow that it could also change the structure of organizations. We saw this happen a few decades ago when the proliferation of enterprise apps and personal computers led to the elevation of the CIO.
It raises the question: will Big Data elevate data management to a CXO level?
HP has published its Cyber Risk Report 2013, identifying top enterprise security vulnerabilities and providing analysis of the expanding threat landscape.
Developed by HP Security Research, the annual report provides in-depth data and analysis around the most pressing security issues plaguing enterprises. This year’s report details factors that contributed most to the growing attack surface in 2013 — increased reliance on mobile devices, proliferation of insecure software and the growing use of Java—and outlines recommendations for organizations to minimize security risk and the overall impact of attacks.
LINCROFT, N.J. -- With a new year upon us, now is an ideal time for people to review their insurance policies. Understanding the details of what specific policies cover and what the policyholder is responsible for after a disaster is important as both clients’ needs and insurance companies’ rules change.
Insurers’ decisions and legislative changes have the biggest effect on changes in policies. Consumers should make themselves aware of possible changes in these areas and know what to look for while reviewing their policies.
The first check is the most obvious: the actual coverage. Policyholders should look at the specifics of which property is covered and the type of damage that is covered. Property owners should know that floods are not covered by standard insurance policies and that separate flood insurance is available. Flood insurance is required for homes and buildings located in federally designated high risk areas with federally backed mortgages, referred to as Special Flood Hazard Areas (SFHAs). Residents of communities that participate in the National Flood Insurance Program (NFIP) are automatically eligible to buy flood insurance. According to www.floodsmart.gov, mortgage lenders can also require property owners in moderate to low-risk areas to purchase flood insurance.
There are two types of flood insurance coverage: Building Property and Personal Property. Building Property covers the structure, electrical, plumbing, and heating and air conditioning systems. Personal Property, which is purchased separately, covers furniture, portable kitchen appliances, food freezers, laundry equipment, and service vehicles such as tractors.
What’s Not Covered
Policy exclusions describe coverage limits or how coverage can be purchased separately, if possible. Property owners should know that not only is flood insurance separate from property (homeowners) insurance, but that standard policies may not cover personal items damaged by flooding. In these cases, additional contents insurance can be purchased as an add-on at an additional cost. Some policies may include coverage, but set coverage limits that will pay only a percentage of the entire loss or a specific dollar amount.
The Federal Emergency Management Agency’s Standard Flood Insurance Program (SFIP) “only covers direct physical loss to structures by flooding,” FEMA officials said. The SFIP has very specific definitions of what a flood is and what it considers flood damage. “Earth movement” caused by flooding, such as a landslide, sinkholes and destabilization of land, is not covered by SFIP.
Structures that are elevated must be built up to Base Flood Elevation (BFE) standards as determined by the Flood Insurance Rate Maps (FIRMs). There may be coverage limitations regarding personal property in areas below the lowest elevated floor of an elevated building.
Cost Impact of Biggert-Waters
The Biggert-Waters Flood Insurance Reform Act of 2012 extends and reforms the NFIP for five years by adjusting rate subsidies and premium rates. Approximately 20 percent of NFIP policies pay subsidized premiums, and the 5 percent of those policyholders with subsidized policies for non-primary residences and businesses will see a 25 percent annual increase immediately. A Reserve Fund assessment charge will be added to the 80 percent of policies that pay full-risk premiums. Un-elevated properties constructed in an SFHA before a community adopted its initial FIRMs will be affected most by rate changes. Congress is still debating the implementation of Biggert-Waters.
The General Conditions section informs the consumer and the insurer of their responsibilities, including fraud, policy cancellation, subrogation (in this case, the insurer’s right to claim damages caused by a third party) and payment plans. Policies also have a section that offers guidance on the steps to take when damage or loss occurs. It includes notifying the insurer as soon as practically possible, notifying the police (if appropriate or necessary) and taking steps to protect property from further damage.
“FEMA’s top priority is to provide assistance to those in need as quickly as possible, while also meeting our requirements under the law,” FEMA press secretary Dan Watson said. “To do this, FEMA works with its private sector, write-your-own insurance (WYO) company partners who sell flood insurance under their own names and are responsible for the adjustment of their policy holders’ claims.”
Policyholders should speak with their insurance agent or representative if they have any questions about coverage. For further assistance with Sandy-related flood insurance cases in New Jersey and New York, call the NFIP hotline at 1-877-287-9804. Comprehensive information about NFIP, Biggert-Waters and flood insurance in general can be found at www.floodsmart.gov.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
The idea of cars that communicate with each other to enhance safety and that drive themselves is counterintuitive. Airplanes, of course, have had autopilot functions for years. But Boeing 757s don’t have to pull into a parking space at Kmart or ease into traffic on the highway.
The reality is that advanced communications is playing a big role in getting from here to there. Indeed, the trend is accelerating. PCWorld and other sites report that the United States Department of Transportation (DoT) is taking steps to implement vehicle-to-vehicle (V2V) communications. The idea is straightforward:
Vehicle-to-vehicle communications refers to the emergence of Wi-Fi-like radios that could be mounted in cars and communicate with one another. Also known as Dedicated Short-Range Communications, V2V car-mounted radios would constantly communicate with other vehicles within range, providing speed and directional data to other cars' safety and navigation systems. The idea is that a car racing around a blind curve would "know" that a car was heading in the opposite direction, or a car would receive warnings that cars ahead were coming to an unexpected stop.
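The broadcast-and-project idea behind V2V can be sketched in a few lines of code. The message fields, function names and thresholds below are illustrative stand-ins, not the actual DSRC/SAE J2735 message format: each car periodically shares its position, heading and speed, and a receiving car projects both trajectories forward to decide whether to warn the driver.

```python
import math
from dataclasses import dataclass

@dataclass
class SafetyMessage:
    """Simplified stand-in for the basic safety message a V2V radio
    might broadcast several times per second (fields are illustrative)."""
    vehicle_id: str
    x: float            # position east of a shared origin, meters
    y: float            # position north of a shared origin, meters
    heading_deg: float  # direction of travel, degrees clockwise from north
    speed_mps: float    # speed, meters per second

def velocity(msg: SafetyMessage) -> tuple:
    """Convert heading/speed into an (east, north) velocity vector."""
    rad = math.radians(msg.heading_deg)
    return (msg.speed_mps * math.sin(rad), msg.speed_mps * math.cos(rad))

def closing_warning(own: SafetyMessage, other: SafetyMessage,
                    horizon_s: float = 3.0, radius_m: float = 10.0) -> bool:
    """Warn if, projecting both vehicles forward at constant velocity,
    they pass within radius_m of each other inside horizon_s seconds."""
    ovx, ovy = velocity(own)
    tvx, tvy = velocity(other)
    # Check projected separation at small time steps over the horizon.
    t = 0.0
    while t <= horizon_s:
        dx = (other.x + tvx * t) - (own.x + ovx * t)
        dy = (other.y + tvy * t) - (own.y + ovy * t)
        if math.hypot(dx, dy) < radius_m:
            return True
        t += 0.1
    return False

# Two cars approaching each other head-on around a blind curve.
me = SafetyMessage("car-A", x=0.0, y=0.0, heading_deg=0.0, speed_mps=15.0)
oncoming = SafetyMessage("car-B", x=0.0, y=80.0, heading_deg=180.0, speed_mps=15.0)
print(closing_warning(me, oncoming))  # -> True
```

In this toy scenario the two cars are closing at a combined 30 m/s from 80 meters apart, so the projection crosses the warning radius within the three-second horizon, well before either driver could see the other around the curve.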
While every organization has its risks to deal with, mining companies—local or international—must consider myriad risks from every angle in every location. There are the risks that any company should consider, such as return on capital, supply chain and natural catastrophes, but there are others that mining operations must also pay careful attention to, which can vary by location. These include political risks, corruption, weather and even piracy and kidnapping.
A new report by Willis, “Mining Risk Review: Spring 2014,” found that a top concern for a mining operation is its capital. The mining sector continues to face low commodity prices, combined with rising operational costs and supply and demand imbalances. Here are the top 10 risks reported by mining operations:
By Michael Vizard
Becoming a truly digital business involves leveraging data to create a sustainable business advantage. Clearly, there is an incredible amount of interest in creating a new generation of IT systems that leverage big data from different sources. However, while IT has never had more tools available for deriving business value from its IT investments, the biggest impediment may well be the fact that business executives often don't trust the data that IT has collected.
A recent survey of 442 business executives conducted by Harvard Business Review Analytics Services at the behest of QlikTech, provider of QlikView business intelligence software, finds that only 16 percent of the executives surveyed were confident in the accuracy of the data they used to make business decisions. Another 42 percent said they were not confident in their decisions simply because they couldn't get access to all the relevant data they needed.
Companies that create a culture of resilience throughout their organization are likely to be more successful in the long term, according to research by Cranfield School of Management and Airmic.
In the ‘Roads to Resilience’ report, published last week, the Cranfield authors urge boards and business leaders to challenge prevailing attitudes towards risk management and recognize that it should be a strategic priority and not just an operational or compliance issue.
Keith Goffin, Professor of Innovation at Cranfield School of Management, who co-authored the report, commented: “All industries are now facing unprecedented levels of risk that have real potential for harming their reputations and balance sheets. By bringing together the insights and experiences of those who have succeeded, this report challenges businesses to take the necessary actions to achieve resilience.”
Roads to Resilience examines eight leading organizations that have had to deal with significant uncertainty. Cranfield researchers interviewed senior staff with risk management responsibilities, including CEOs, at AIG; Drax Power; InterContinental Hotels Group; Jaguar Land Rover; Olympic Delivery Authority; The Technology Partnership; Virgin Atlantic and Zurich Insurance.
SIFMA has issued the following statement from Randy Snook, executive vice president, business policies and practices, in response to the Summary of Key Findings of the 2013 Pandemic Accord tabletop exercise that was held November 18-21, 2013, and sponsored by FEMA Region II, DHHS Region II, Federal Executive Board New York City, Federal Executive Board Northern New Jersey, Clearing House Association and SIFMA:
"Business Continuity Planning (BCP) is essential for ensuring a resilient financial sector that can effectively respond to any disaster or significant emergency situation. A pandemic scenario, such as a widespread influenza outbreak, is one of the most serious threats to the financial industry as it would impact the industry's most important resource - the employees that keep the financial system running smoothly. The Pandemic Tabletop exercise is an important component of resiliency planning as it enables industry and government participants to collaboratively examine how they would respond to a widespread influenza outbreak and identify best practices that will enhance pandemic response planning across the sector. Further, the findings identified by this tabletop will support the development of a full-scale pandemic exercise to take place in late 2014.”
The Pandemic Accord 2013: Continuity of Operations Pandemic Tabletop Exercise - Summary of Findings report summarizes the key findings and observations from the exercise and highlights the major themes that emerged across the four days of exercises, with a focus on business continuity planning.
TwinStrata has published the results of its ‘Industry Trends: Data Backup in 2014’ survey. Conducted between December 2013 and January 2014, the report analyzes responses from 209 IT personnel.
The results indicate an urgent need for organizations to make significant improvements to their backup strategies, with one in five organizations experiencing backup failures at least monthly and one in ten at least weekly. As a result, 53 percent of organizations plan to make changes to their backup strategy this year. Incorporating cloud storage was the remedy most often cited by these respondents.
Disaster recovery was the area where backup strategies were most under stress:
- Just 12 percent of respondents predict that they can recover from a site disaster within a couple of hours. Cloud storage users were twice as likely to recover in that timeframe (20 percent) as non-cloud storage users (9 percent).
- 63 percent of organizations measure site recovery time in days, with 29 percent requiring four days or more.
- More than half of organizations experience backup failure multiple times a year due to a host of issues, including connectivity failure (25 percent), equipment failure (21 percent) and file corruption (18 percent).
The data breach at Target Corp, the US retail chain, was a shock for many. The personal information of at least 70 million customers was stolen by hackers who intercepted the information as buyers used credit and debit cards at the company’s points of sale. The reputational damage seems to have quickly spilled over into an impact on the bottom line: Target cut its profit forecasts for the fourth quarter of 2013 by about 20 percent. However, this high-profile case (third biggest US retailer) may just be a taste of the problems in line for other enterprises using the same kind of point of sale (PoS) systems.
Contrary to popular belief, meetings can be a positive experience in the workplace. Although many reasons come to mind when you consider how meetings can be ineffectual, proper planning can keep meetings on track, on time and on point.
According to the Garbuz blog, one reason many people don’t like attending meetings is because they feel there is not a clear expected objective. Attendees are most frustrated when it seems like a meeting is wasting their valuable work time.
To host an effective meeting, an event or meeting leader should do a lot of upfront planning. The IT Download “Effective Meeting Checklist” provides an extensive list of meeting essentials. It starts with a list of preparatory items, continues with a meeting execution list, and finishes with a follow-up list of items to check off after the meeting concludes.
Criminals love credit cards. As a new white paper from Symantec pointed out, credit card-related theft is one of the earliest types of cybercrime, and as we’ve seen by the recent retail breaches, credit and debit cards remain a prime target. The white paper added that Point of Sale (POS), the point at which the retailer first gathers credit card data, has become a favorite way for the bad guys to steal the data. The reason they like it so much is simple: Security hasn’t kept up with technology. These gaps make it easier than ever for thieves to take aim at retail credit card data by using POS malware.
In a Symantec blog post, Orla Cox explained:
POS malware exploits a gap in the security of how card data is handled. While card data is encrypted as it’s sent for payment authorization, it’s not encrypted while the payment is actually being processed, i.e. the moment when you swipe the card at the POS to pay for your goods. . . . Most POS systems are Windows-based, making it relatively easy to create malware to run on them.
CIO — When a company gets a bad customer review on Yelp, Facebook, Twitter or any other social network, emotions can run high, because real damage to its reputation and sales can result.
The business owner usually has a knee-jerk reaction and responds in kind by attacking the offending customer with an emotionally charged online response.
Some businesses might take the opposite approach and choose the other extreme -- no response at all. By simply ignoring the bad review, a company hopes it will dissipate into the Internet ether, whereas a response might ignite a social media storm and cripple the company publicly.
How weird will the enterprise become in the cloud? Pretty weird, by the sound of some of the discussions taking place today.
We all know that the cloud will be extremely disruptive for existing data infrastructure. Concepts like the all-virtual, all-cloud data center were considered distant possibilities just a few short years ago, but now seem to be looming on the horizon as organizations seek to cut costs and increase data agility.
But even these notions of an ethereal data environment floating around the cybersphere are starting to look quaint compared to the ideas that some forward thinkers are coming up with now.
London-based Aon Risk Solutions, the global risk management business of Aon plc (NYSE: AON), just released its annual Terrorism and Political Violence Map to help organizations assess terrorism and political violence risk levels across the globe. The map is produced in collaboration with global risk management consultancy the Risk Advisory Group plc.
The good news:
- 80 countries with terrorism perils indicated in 2014, 12% fewer than 2013
- Europe sees notable improvement with 11 countries having civil commotion perils removed
NOTE: Canada, Mexico, and the United States were not mentioned in the report, and the map
InfoWorld — In today's threatscape, antivirus software provides little peace of mind. In fact, antimalware scanners on the whole are horrifically inaccurate, especially with exploits less than 24 hours old. After all, malicious hackers and malware can change their tactics at will. Swap a few bytes around, and a previously recognized malware program becomes unrecognizable.
To combat this, many antimalware programs monitor program behaviors, often called heuristics, to catch previously unrecognized malware. Other programs use virtualized environments, system monitoring, network traffic detection, and all of the above at once in order to be more accurate. And still they fail us on a regular basis.
Here are 11 sure signs you've been hacked and what to do in the event of compromise. Note that in all cases, the No. 1 recommendation is to completely restore your system to a known good state before proceeding. In the early days, this meant formatting the computer and restoring all programs and data. Today, depending on your operating system, it might simply mean clicking on a Restore button. Either way, a compromised computer can never be fully trusted again. The recovery steps listed in each category below are the recommendations to follow if you don't want to do a full restore -- but again, a full restore is always a better option, risk-wise.
An article in The New York Times over the weekend gave a frightening account of the ongoing severe drought across California that is now threatening the state’s water supply.
As farmers, ranchers and homeowners brace for what could be the state’s worst drought in 500 years, The NYT reports that the snowpack in the Sierra Nevada, which supplies much of California with water during the dry season, was at just 12 percent of normal last week, reflecting the lack of rain or snow in December and January.
The NYT quotes Tim Quinn, executive director of the Association of California Water Agencies, saying:
SAN FRANCISCO — In the latest in a spate of online attacks affecting American businesses, White Lodging, which manages hotel franchises for chains like Marriott, Hilton and Starwood Hotels, is investigating a potential security breach involving customers’ payment information.
White Lodging Services Corporation, which works with 168 hotels in 21 states, confirmed that it was examining the data breach.
The intrusion into its systems was first posted by Brian Krebs, a security blogger, on Friday, when he reported that the breach might have resulted in the fraudulent use of hundreds of credit and debit cards used for payment at Marriott hotels between March 2013 and the end of the year.
CSO — Data privacy has gotten its fair share of attention these days, what with the high-profile data breaches that have taken place in recent months. Fittingly, PricewaterhouseCoopers released the results of its 2013 data privacy survey late last year, in which the 370 participants represented both board level members responsible for oversight of privacy programs within their organization and practitioners involved in day to day operations.
While some of the statistics were reassuring and showed that data privacy is growing in importance, it would appear that there's still a ways to go before it gets the amount of attention it deserves.
For instance, one of the many statistics indicated that the majority of respondents considered consumer privacy a "medium priority." By PwC's definition, this means that it's a business concern that gets "some attention."
Among the tech workers who anticipate changing employers in 2014, 68 percent listed more compensation as their reason for leaving. Other factors include improved working conditions (48 percent), more responsibility (35 percent) and the possibility of losing their job (20 percent). The poll, conducted online between Oct. 14 and Nov. 29 last year, surveyed 17,236 tech professionals.
Fifty-four percent of the workers polled weren't content with their compensation. This figure is down from 2012's survey, when 57 percent of respondents were displeased with their pay.
In many organizations, executives and employees – and even auditors – will ask Business Continuity Management (BCM) / Disaster Recovery (DR) practitioners if they have plans for every possible situation: every potential risk and every potential impact to the organization. The number of risks that exist in the world today is essentially infinite; once you calculate all the various potential impacts to an organization from a single event, there will be communication, restoration and recovery plans that simply can’t be developed, documented, implemented, communicated, validated or maintained. It is impossible to have a response to every situation; the secret is to be able to adapt to the situation and leverage the response plans you do have to address the disaster at hand.
Still, the questions will come about these plans and why a response isn’t captured for a particular situation and its resulting scenarios. A BCM/DR practitioner must be able to address these questions and be able to respond with reasons as to why specific plans don’t – and can’t – exist.
There are a few key reasons that practitioners must be able to communicate to those asking the questions and they are noted below.
CSO — Target's disclosure that credentials stolen from a vendor were used to break into its network and steal 40 million credit- and debit-card numbers highlights the fact that a company's security is only as strong as the weakest link in its supply chain.
No matter how strong Target's internal security was, if the breach started with a third-party vendor, then the weakness was in how the retailer managed the security risk all large companies face when partners and suppliers interact with their networks, experts say.
"Hackers have reached a new level of mastery and companies are really struggling," Torsten George, vice president of marketing and products at risk management vendor Agiliance, said. "They're putting a lot of effort in protecting their own networks, but how do you really go after your suppliers and vendors? How do you assess the risk in doing business with them?"
Enterprise Risk Management, ERM, is simple and straightforward.
In plain and simple English, it is the management of all risks across the organization that can disrupt "business as usual".
Unlike Business Continuity (BC) which, as I understand it, is concerned with "the usual suspects" of environmental events, human error, and technology error or malfunction, ERM is concerned with ALL threats, including those not directly under the auspices or control of the organization.
NETWORK WORLD — This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Today's IT security teams are faced with rapidly mutating threats at every possible point of entry from the perimeter to the desktop; from mobile to the cloud. Fueled by the fast evolution of the threat landscape and changes in network and security architectures, network security management is far more challenging and complex than just a few years ago.
Security teams must support internal and external compliance mandates, enable new services, optimize performance, ensure availability, and support the ability to troubleshoot efficiently on demand--with no room for error. That's a lot to balance when managing network security.
What many would call a “dusting,” we Atlantans would call a “snowpocalypse,” as evidenced by this week’s 2 inches of snow, which crippled the city, causing severe gridlock across the metro area and stranding school children and commuters who were forced to abandon cars on the highway. The mayor of Atlanta and Governor Deal have been making the media circuit, trying to explain what caused the city to grind to a halt, but regardless of whose fault it was, it’s time to take a look at the situation and see what we can learn from a preparedness perspective. Here are our top 5 lessons learned, which don’t just apply to folks in the Deep South, but to everyone who might be caught in an emergency situation.
- You can always count on…yourself. We’d like to be able to tell you that someone from your local, state, or federal government will always be available 24/7 to help everyone during an emergency, but that’s just not realistic. First responders are there to help the people in the most need; it’s important that everyone else be self-sufficient until emergency response crews have time to get the situation under control. That means you need to be prepared for the worst, with supplies, plans, and knowledge to make sure you can care for yourself and your family until the situation returns to normal.
- Keep emergency supplies in your car. So much of our lives revolve around our vehicles. For most of us that’s how we get to and from work every day, shuttle our kids, and buy groceries. And in places like Atlanta many of us have long commutes, during which time anything could happen. You have emergency supplies in your house, why not in your car? Many motorists were stranded on the highways for 10 hours or more. You need to make sure you have a blanket, water, food, and other emergency supplies stored away in your trunk just in case.
- Make a family emergency plan. If you can’t pick up your kids, who will? Many parents were stranded on the interstate and unable to get to their children’s schools. Sit down with your family and go over what you would do in different emergency situations. Is there a neighbor or relative in the area who can help out if you aren’t able to get to your kids? Let them know you’d like to include them in your plan. Make sure you also come up with a communication plan that includes giving everyone a list of important phone numbers, not just to save in your cellphone but to keep in your wallet or your kids’ backpack. Many commuters’ cell phones died while they were sitting on the roadways for hours. If all your important phone numbers are saved to your device and it dies, would you be able to remember your neighbor’s number to ask them to check in on the kids when a Good Samaritan loans you their phone?
- Keep your gas tanks full. This is important to remember in other emergencies like hurricanes, when people are trying to evacuate. If there’s a chance you’re going to need your car, or your ability to get gas is going to be restricted (due to road closures or shortages), make sure you fill up your tank as soon as you hear the first warning. Many of the motorists trying to get home this week ran out of gas, worsening the clogged roads and delaying first responders from getting to people who really needed their help.
- Listen to warnings. The City of Atlanta and the surrounding metro area were under a winter storm warning within 12 hours of the first flakes, but residents and area leaders were slow to listen; most people didn’t start taking action until the snow began to fall, which led to a mass exodus from the city. While no one likes to “cry wolf” in situations like these, it’s better to be safe than sorry. Learn the difference between a watch and a warning, and start taking action as soon as you hear the inclement forecast.
Earlier this week, I wrote about the challenges of data illiteracy. I think it’s particularly a problem in fields where data has been collected, but maybe is not seen as a way to guide strategy or output.
Education is one such field (they hate being called an industry, even though, let’s face it, they are). While education as a whole is data-heavy, its main focus is not on managing data or information, but on student output. And while data has been used to produce change, it’s not often used in a particularly strategic way. When test scores go down, that data triggers policy and sometimes theory change, but seldom is the data used to inform that change.
Data Privacy Day was earlier this week. I can’t think of a time when data privacy was more discussed among businesses and individuals than right now, and yet, this day to focus on privacy went largely unnoticed. At least, I had no idea it was coming until a couple of people alerted me. Now I know it falls every January 28.
Of course, data privacy isn’t something we should be thinking about only one day a year. Nor should data privacy be seen only in relation to NSA spying and Edward Snowden. It is something that should be practiced regularly and improved upon whenever possible in order to keep information from getting into the wrong hands (and I don’t mean the government).
As Guidance Software’s Anthony Di Bello pointed out in a blog post, data privacy and security need to be practiced everywhere to be effective. The best practices used at work should extend to home. The trick is making sure employees understand why instituting best practices for privacy is so important. Di Bello provided an example from a chief information security officer (CISO) with whom he works, and I think this advice should be shared:
NETWORK WORLD — This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
If you've built something yourself rather than buy it, like a book shelf or a bird house, you know the satisfaction of shaping something to your needs. And as long as nothing goes wrong, you're in good shape. But if it breaks you can't return it to the store for an exchange; you have to fix it yourself. And while repairing a bookshelf is one thing, recovering applications in a data center when they fail is something else entirely.
Linux is an excellent tool for creating the IT environment you want. Its flexibility and open-source architecture mean you can use it to support nearly any need, running mission-critical systems effectively while keeping costs low. This flexibility, however, means that if something does go wrong, it's up to you to ensure your business operations can continue without disruption. And while many disaster recovery solutions focus on recovering data in case of an outage, leaving it at that is leaving the job half done. Having the information itself will be useless if the applications that use it don't function and you are unable to meet SLAs.
Network World — An MIT research team next month will show off a networked system of flash storage devices they say beats relying on DRAM and networked hard disks to handle Big Data demands.
The copious amounts of data now collected for analysis by organizations overtax computers' main memory, but linking hard disks across an Ethernet network to solve the problem proves too slow, according to the researchers.
Their Blue Database Machine, or BlueDBM (sounds like an IBM product!), consists of flash devices controlled by serially networked field-programmable gate array chips that can also process data. The researchers say flash systems can find random pieces of information from within large data sets in microseconds, whereas the seek time for hard disks can be more than double that.
CSO — Unless you've been living under a rock in North America, it's pretty hard to have missed news of recent high-profile data breaches.
I'd venture to say these stories have made their way into the wider, global purview (note: as I write this, another report regarding a massive data breach in South Korea affecting 20 million cardholders was released). While the number of retailers and account holders impacted by these events continues to increase and make headlines, issuers and merchants alike must address ways to instill confidence in their customers in short order.
Upon hearing this type of news, cardholders immediately think "Was I impacted? What do I need to do? Will my account be closed? Will I get a new account number and new debit or credit card?" These and many more questions likely flood the support lines as customers want to understand their real-life implications and steps they need to take to protect themselves.
ATLANTA — There are bad commutes, and then there is what happened here this week.
When a light snow started falling early Tuesday afternoon, Saquana Bonaparte, 31, left her factory job and headed out to get her daughter from school in one of the city’s northern suburbs.
She ended up inching along in her car for almost 12 hours and survived on a half bag of beef jerky and a small bottle of Mountain Dew. Unable to get to a bathroom, she did what she had to do as she drove. Twice.
Ms. Bonaparte spent the night on jammed roads with tens of thousands of other desperate Atlanta-area drivers who had never seen anything like the sheet of ice that coated the city.
CIO — Marketing organizations are gearing up to increase their budgets for big data marketing initiatives in 2014, but is their focus in the right place?
A report by data-driven marketing specialist Infogroup Targeting Solutions found that companies are continuing to ramp up their spending on big data marketing initiatives in 2014 (62 percent of companies expect their big data marketing budgets to increase). However, most of those companies are focusing on technology, not people—57 percent of companies say they do not plan to hire new employees for their data efforts in 2014.
That may be a costly error in the long run, says David McRae, president of Infogroup Targeting Solutions.
NETWORK WORLD — This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
As pressure mounts to deliver value with ever-increasing speed, lines of business (LOB) are often drawn to cloud computing's ease of use, flexibility and rapid time-to-value. The resultant Shadow IT created by use of consumer-grade cloud computing resources usually raises questions about enterprise security, but the real risk is the potential for downtime due to inadequate availability.
Any interruption that impacts the customer experience will affect the bottom line -- and a company's reputation -- faster than you can say "temporarily unavailable."
So what can IT leaders do about it? With the cloud movement a foregone conclusion, how can they ensure the requisite availability standards are met -- and that their investments in availability are the right ones? There are five key success factors for addressing Shadow IT while ensuring availability in the cloud:
PC World — The year's barely started, and we've already had enough data breaches at major retailers to make a barter economy seem like a good idea. Unfortunately there are yet more security threats to look forward to in 2014. Here are the biggest ones we anticipate.
Mobile malware: The absence of any notoriously successful mobile exploit has lulled users into a false sense of confidence about the level of danger they face. Meanwhile, attackers have had a few years to test the best ways to spread mobile malware.
James Lyne, global head of security research for Sophos, notes that mobile malware is adapting and evolving to evade detection faster than security tools can learn to identify the threats. Variants are adopting tactics from PC malware--employing encrypted command-and-control servers and polymorphism, among other techniques. The perfect storm is on its way.
The world turns, things change and new security risks continue to appear on the scene. Some organisations bury their head in the sand or cross their fingers. ‘It wouldn’t happen to us’ is their motto. Others make plans using different approaches, some better than others. Then they leave the plan untouched without updating it and expect it to hold good. Is such a policy ever justified? Do new threats mean that traditional security principles should be revised? And where should you start if you want to improve your own security risk management?
There seems to be sort of a broad agreement that social data is valuable—in theory.
It’s easy to see why social media data attracts interest. A recent report by the Pew Research Center’s Internet Project found 70 percent of adults online now use Facebook. Forty-two percent of online adults are on multiple social networking sites, reports Information Management.
It’s not easy to generate those kinds of numbers. While I’m not sure how many adults are actually online in the U.S., 70 percent of them must be quite a few eyeballs—especially since 63 percent of Facebook users visit the site every day.
CIO — Tired of waiting for lengthy approval processes, CMOs have been doing end-runs around the IT department for years. In turn, scorned CIOs would rip out the marketing department's rogue tech.
CMOs responded in kind by running to their buddy in the corner office -- the CMO and CEO are often cut from the same personality cloth -- and complaining that those techies are at it again, slowing down business decisions they don't understand and letting competitors beat them to the punch.
"I can tell you horror stories," says Kevin Cochrane, a tech industry veteran who has held top marketing positions since the mid-1990s and is currently CMO at OpenText, an enterprise information management software company.
For years, the CIO and CMO have faced off in one of the rockiest executive relationships. As the two odd stepchildren in the C-level suite, they constantly must prove their worth, which often pits them against each other as they try to curry favor among their peers. Both need new technology to be successful and they must compete for scarce dollars. Making matters worse, their jobs tend to reward opposite personality traits; clashes can get ugly.
Facility Management should play a crucial role in Business Continuity – it manages the second-largest and most consequential business “assets” (after IT) on which day-to-day business operations rely.
Yet many Facilities Management (FM) departments are excluded from the planning process, either because BIA surveys skew the focus toward IT dependencies and financial impacts, or because recovery strategies lean toward alternate-site configurations (under the assumption that a damaged facility will be a total loss). Both of these perspectives ignore the fact that ‘total loss’ of a facility almost never occurs.
Then there are Facilities Managers who perceive little value in planning for potential disruptions – either under the assumption that response and recovery are part of their existing job duties (and don’t require planning), or that they can’t plan for what they can’t anticipate. Both are short-sighted.
IDG News Service — Target said Wednesday that intruders accessed its systems by using credentials "stolen" from a vendor, one of the first details the retailer has revealed about how hackers got inside.
The vendor was not identified. A Target spokeswoman said she had no further details to share.
As the forensic investigation continues, the spokeswoman said Target has taken measures to secure its network, such as updating access controls and in some cases, limiting access to its platforms.
During this winter’s extreme cold spells, caused by a polar vortex creating frigid temperatures, workers are at added risk of cold stress. Increased wind speeds can cause air temperature to feel even colder. This increases the risk of cold stress for those working outdoors—including snow cleanup crews, construction workers, postal workers, police officers, recreational workers, firefighters, miners, baggage handlers, landscapers and support workers for the oil and gas industry.
The U.S. Department of Labor notes that what constitutes extreme cold and its effects can vary across the country. In regions unaccustomed to winter weather, for example, near-freezing temperatures are considered “extreme cold.” A cold environment forces the body to work harder to maintain its temperature; as temperatures drop below normal and wind speeds increase, heat leaves the body more rapidly.
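The wind-speed effect described above is quantified by the U.S. National Weather Service wind chill index. As a rough illustration (the formula applies for air temperatures at or below 50°F and wind speeds above 3 mph):

```python
def wind_chill_f(temp_f: float, wind_mph: float) -> float:
    """NWS wind chill index (valid for temp_f <= 50 and wind_mph >= 3)."""
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

# At 0 degrees F with a 15 mph wind, exposed skin feels like roughly -19 F.
print(round(wind_chill_f(0, 15)))   # → -19
```

This is why OSHA guidance on cold stress is framed around the combination of temperature and wind, not air temperature alone.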
It’s obvious that data is making major headways in terms of its role in our lives. That’s good news for data management workers, but as I discussed in my previous post, the push to become data-driven also raises some serious questions about our ability to use data in responsible, appropriate ways.
You may not think that’s IT’s problem, but I disagree. As decisions become more data-driven, I think data modelers, data managers, CIOs and other IT data workers have a professional and perhaps moral obligation to help guide its use, at least in terms of ensuring that the findings remain valid.
Frankly, I’m worried that data illiteracy might be a major barrier to embracing data-driven leadership.
BOTHELL, Wash. – Why is there so much activity right now at the FEMA Region 10 office in Bothell?
Partners from the American Red Cross to the Bonneville Power Administration to the U.S. Army, and many others, are joining FEMA for what is known as a table-top exercise, planning for a larger full-scale exercise in March.
A table-top is an exercise in which field and logistics movements are “simulated” – not actually performed – while planning and decision-making proceed as if they are. A similar scenario will play out in late March when many of the same partners participate in a full-scale exercise with real field and logistical activity.
The table-top brings more than 100 people to the Region 10 Response Coordination Center in Bothell through Thursday.
The scenario involves a magnitude 9.2 earthquake and resulting tsunami. Such a quake would be the second strongest in known history, and the largest in known U.S. history. In fact, that largest-ever U.S. quake inspired the scenario; the upcoming full-scale “Alaska Shield” exercise coincides with the 50th anniversary of the Great Alaska Earthquake of 1964.
The scenario projects the loss of hundreds of lives. It also has thousands displaced in an Alaska winter with no power or heat, and possibly tens of thousands of buildings damaged. Other problems would include loss of communications and the challenge of moving relief commodities to survivors despite destroyed roads and bridges.
Region 10 Administrator Ken Murphy said of the table-top, “This exercise is important for all of us to work with all of our partners leading up to Alaska Shield, and to make sure that all of our systems are working together smoothly and seamlessly.”
FEMA regularly tests procedures and practices in this way, together with local, state, tribal, and other federal agencies.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow the Alaska Shield Exercise at #Akshield and FEMA online at twitter.com/femaregion10, www.facebook.com/fema, and www.youtube.com/fema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
Each January 28, Data Privacy Day is observed, with business owners and managers, vendors and concerned citizens taking time to raise their awareness of the most up-to-date approaches to keeping their companies’ and their own data safe. It’s an education effort that feels especially urgent this year, given the public’s focus on how their data is handled by the companies and vendors they have dealings with, not to mention the government and their own employers.
Today, with all of that being the case, I spoke with Jay Livens, director of product and solutions marketing for Iron Mountain, about the current state of data protection and IT’s priorities for the coming year. Iron Mountain recently conducted a survey of IT professionals that found that “with 68 percent probability, … data loss and privacy breaches are the most prevalent concern for IT leaders over the next 12-18 months.”
By now, everyone has heard some variation on the statistic about the data scientist shortage: By 2018, there will be a shortage of up to 190,000 qualified data scientists, to cite one version from the McKinsey Global Institute.
Organizations around the globe are trying to figure that one out, and the consensus seems to be that many will have to rely on a team approach when it comes to the tasks of mining Big Data.
Fair enough. But I’m beginning to think we have an even bigger problem ahead of us.
Ever since the first virtual server went into production more than a dozen years ago, speculation has been rampant that enterprise hardware is doomed. But even though it is clear (to me, at least) that hardware will still play a vital role in the enterprise going forward, that role is changing. So the question for enterprise executives is not whether to give up on hardware altogether, but what sort of functionality it should provide and how to achieve that functionality at the lowest price point.
To some, developments like the cloud represent a threat not just to enterprise hardware, but software as well. Last year closed out with some pretty stark reports indicating that more money spent in the cloud translates directly into diminished revenue from enterprise users. The message to IT vendors is clear: Adapt to the new cloud reality, and fast, or face obsolescence within two years.
Network World — Businesses with large data centers stand to net big savings in capital, power, deployment and maintenance costs if they follow server blueprints being made public by Microsoft.
The company plans to open-source a cloud server design that it says uses 15% less power than traditional enterprise servers and delivers a 40% cost savings versus those commercial alternatives.
The company announced today that it is joining the Open Compute Project Foundation (OCP) and revealing specs and documentation for Microsoft's most advanced data-center hardware, which supports its Windows Azure, Office 365 and Bing cloud services.
It’s amazing what some companies will do to get your attention so they can sell you stuff. Take, for example, those companies that are spending $4 million for a 30-second ad to get your attention during Sunday’s Super Bowl. Did you ever wonder how much those ads boost the companies’ sales? Well, in the cases of four out of five of the ads, the increase is exactly zero. Absolutely nothing. What might that $4 million have bought if it had been invested in Big Data rather than a big ad?
According to a recent article on AdAge.com, a study by advertising research firm Communicus found that 80 percent of Super Bowl ads fail to actually sell anything. The problem with Super Bowl ads is that they tend to focus more on creativity, and less on the brand. That means we all may be talking Monday morning about the hilarious commercial we saw during the Super Bowl, but chances are we have no idea what it was trying to sell. In other words, despite the obscene amount of money the company shelled out, it failed to make a connection between its brand and the consumer. Enter Lisbeth McNabb.
Coca-Cola has admitted falling prey to a bizarre slow-motion data breach in which an employee apparently stole dozens of laptops over several years containing the sensitive data of 74,000 people, without anyone noticing.
The unnamed former worker, said to have been in charge of equipment disposal, reportedly removed a total of 55 laptops over a six-year period from its Atlanta offices, including some that belonged to a bottling company acquired by the fizzy-drinks giant in 2010.
Only after recovering these during November and December did Coca-Cola realise that they contained 18,000 personal records that included social security numbers plus a further 56,000 covering other types of sensitive data. All but a few thousand were Coca-Cola employees or otherwise connected to the firm.
People are often cited as the most valuable resource of an organisation. The more capable an employee is and the better trained, the more an enterprise stands to profit – up to a point. Difficulties may begin when a person becomes indispensable because of unique expertise that is essential to the smooth running of the company. Those difficulties are then compounded if the expert tries to force the company to stay within that perimeter of expertise; perhaps for fear of being pushed to one side and even being made redundant. A situation like this runs counter to what business continuity is all about. What is the best way to handle it?
It’s once again time to tear open the GRC platform market and uncover all its amazing technical innovations, vendor successes, and impact on customer organizations. This afternoon, we published our latest iteration of the Forrester Wave: Governance, Risk, and Compliance Platforms.
My esteemed colleagues Renee Murphy and Nick Hayes joined me in a fully collaborative, marathon evaluation of 19 of the most relevant GRC platform vendors; we diligently pored over vendor briefings, online demos, customer reference surveys and interviews, our own demo environment of each vendor’s product, and, as per Forrester policy, multiple rounds of fact checking and review. The sheer amount of data we collected is incredible.
Emergency and Incident Mass Notification Services (ENS) provide the secure, automated distribution and management of important alerts and critical messages to multiple recipients on multiple devices, activated via browser (PC or mobile device) or phone. Emergency notification has become an integral and mission-critical component of organizations' communication strategies. On both a routine and emergency basis, notifications to affected stakeholders before, during and after an incident or crisis dramatically increase an organization’s ability to quickly restore productivity to normal levels.
Practical reality demonstrates that timely, effective and efficient communications with employees and stakeholders during crisis situations provide fiscal and operational stability that protects the bottom line, can reduce damage to property, and can save lives. The lack of effective and appropriate communications to organizational stakeholders can negatively impact all business aspects, including finance, operations, IT and human resources.
The cloud is cheaper than standard IT infrastructure. This has been a given for so long that hardly anyone questions its veracity. And after all, who would argue the cost advantages of leasing resources from an outside provider vs. building and maintaining complex internal infrastructure?
Well, some very bright minds in the IT industry are starting to do just that.
Rob Enderle, writing for CIO.com, for one, notes that speed and flexibility, while important, do not necessarily translate into lower costs. The hard truth, of course, is that spending on IT is dropping while spending on cloud services is increasing, but this has more to do with timing and availability rather than simple economics. Indeed, recent analyses show that once internal infrastructure begins deploying cloud services of its own, it can meet enterprise needs for about $100 per user while Amazon and other providers come in at around $200 per user when purchased individually or in small groups, as is the practice with many business units. And a proprietary platform like Oracle can run as much as $500 per user.
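Taken at face value, the per-user figures cited in those analyses make for a simple back-of-the-envelope comparison (the figures are as reported above; the option labels here are just shorthand for the example):

```python
# Per-user cost figures cited in the analyses above.
PER_USER_COST = {"internal cloud": 100, "Amazon (small-group purchase)": 200, "Oracle platform": 500}

def total_cost(users: int) -> dict:
    """Total spend for each option at a given headcount."""
    return {option: rate * users for option, rate in PER_USER_COST.items()}

for option, cost in total_cost(1000).items():
    # Internal cloud comes in at half the Amazon figure and a fifth of Oracle's.
    print(f"{option}: ${cost:,}")
```

At 1,000 users, the gap between $100,000 and $500,000 is what makes the "cloud is always cheaper" assumption worth questioning.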
Risk management is the most important part of an organization’s governance, risk and compliance program (GRC), according to a survey. When asked to forecast priorities, 33% of respondents stated that enterprise risk management is most important and 27% said ERM would continue to be important to their company. Out of 12 barriers to their GRC goals, organizations identified a lack of resources (52%) and lack of collaboration and cooperation (44%) as their top obstacles.
Computerworld — There's little doubt that the bring-your-own-device (BYOD) trend with smartphones and tablets has rattled a lot of nerves for IT managers.
The situation will only get more nerve-wracking in 2014, given the 30% annual growth expected through 2017 for smartphones purchased under a BYOD approach, and the further emergence of Windows Phone as a third platform behind Android and iOS.
Businesses are concerned about supporting three smartphone platforms, and while HTML 5 was expected to solve the headaches of supporting multiple platforms, HTML 5 just has not progressed fast enough, "leaving IT managers to wrestle with issues related to cross-platform applications," research firm IDC wrote in a note earlier this month.
PC World — Three major retail chains have recently admitted being victims of massive data breaches that compromised sensitive data from over 100 million customers. Sadly, though, Target, Neiman Marcus, and Michaels are just the beginning of a trend that isn't likely to fade away any time soon.
Verizon's annual Data Breach Investigations Report (DBIR) from May of 2013 found that 24 percent of the confirmed data breaches in 2012 affected the retail and restaurant sector--second only to the financial sector. In all, there were 156 confirmed data breaches in the retail and food services industries.
Manually combing through logs looking for anomalies that might represent a security threat is not only tedious, it also introduces a level of security fatigue that makes it more likely for a security threat to go unnoticed.
To help organizations reduce that risk, Splunk developed its Splunk App for Enterprise Security, which applies analytics to logs in a way that makes it a lot easier to identify the patterns that represent potential security threats. Released this week, version 3.0 of the app adds support for a new threat intelligence framework, additional data types and data models, and a pivot interface.
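As a toy illustration of the kind of pattern detection such analytics automate (a minimal sketch, not Splunk's actual implementation; the log format, field names, and threshold here are invented for the example):

```python
from collections import Counter

def flag_bruteforce(log_lines, threshold=5):
    """Flag source IPs with more failed logins than `threshold` in the given lines."""
    failures = Counter()
    for line in log_lines:
        if "FAILED_LOGIN" in line:
            # Assume each line ends with a "src=<ip>" field.
            ip = line.rsplit("src=", 1)[-1].strip()
            failures[ip] += 1
    return [ip for ip, count in failures.items() if count > threshold]

logs = ["FAILED_LOGIN user=bob src=10.0.0.9"] * 6 + ["OK user=ann src=10.0.0.2"]
print(flag_bruteforce(logs))   # → ['10.0.0.9']
```

Doing this by hand across millions of log lines is exactly the tedium, and the source of fatigue, that the article describes.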
Business risk consultancy Riskskill has highlighted what it sees as the main areas of increasing business risk for UK companies in 2014:
1. Fraud Risks
In 2014, fraud risks are likely to be the major contender for exposing many businesses to significant risk, as the closure of the government’s National Fraud Authority (NFA) could, some feel, be seen by fraudsters as a huge victory for the bad guys. The NFA was set up to consolidate and focus the handling of, and approach to, combatting fraud, and to direct the strategic elements of the attack on the fraudster. The NFA's objectives were previously diluted from eight to three, with the more 'strategic issues' removed. Now its remaining operational functions have been atomized into several government silos.
ASIS International has announced the publication of a revised version of the ANSI/ASIS Chief Security Officer - An Organizational Model. This standard provides a model for organizations to use when developing a senior leadership function responsible for providing comprehensive, integrated risk strategies to protect an organization from security threats.
This standard replaces the 2008 ANSI/ASIS Chief Security Officer Organizational Standard.
“Early on, it was determined that the standard’s purpose was to state the risks that need to be managed within an organization — of any size — and based on those risks, determine the skills and competencies needed to manage those risks,” said Jerry Brennan, technical committee chair, and chief executive, Security Management Resources. “By identifying who owns what, who is accountable, and what is shared, organizations can then determine what is needed within its ‘senior security executive’ position and the competencies that are best suited for that role.”
The standard’s model for a senior leadership position is presented at a high level and designed as a guide for the development and implementation of a strategic security framework. The structure is characterized by appropriate awareness, prevention, preparedness, and necessary responses to changes in threat conditions. Specific considerations and responses are also addressed for deliberation by individual organizations based on identifiable risk assessment, requirements, intelligence, and assumptions.
“The perspective through which organizations evaluate and integrate operational risk within their strategic plan continues to be a dynamic process which not only impacts the role of the ‘senior security executive’ but also the position or positions that may assume that role,” said Charles Baley, ASIS Standards and Guidelines Commission Liaison and chief security officer, Farmers Group, Inc. “This Standard focuses on the importance of the function and not a single title or position.”
Applicable to both private and public sector organizations, the standard provides a methodology to evaluate and respond to a spectrum of threats to tangible and intangible assets on both a domestic and global basis.
View the executive summary (PDF).
CIO — Recently I saw yet another slide presentation showcasing the decline of enterprise IT spending and the comparable increase in public cloud business. The conclusion? Enterprises just don't have money to spend and it's killing enterprise vendors.
This is fundamentally not true. What's really happening is that users are increasingly using public cloud services, and the expenses they incur are being reimbursed, so the money's theirs. I've also seen several studies showing that moving to the cloud is expensive — twice what it would cost to build services internally, according to an internal analysis I recently reviewed, and five times as much if one uses the Oracle alternative.
After reading this blog post, if you would like more detail, fellow Forrester analyst Christian Kane and I have collaborated on two short reports describing the acquisition of AirWatch through the lens of mobile workforce enablement and a second report through the lens of mobile security. Enjoy the reports, and as always... we love to read your comments!
Discussions about IT and business alignment are almost taboo these days. I suppose people have heard too much about it in the past decade.
Yet, that’s exactly the kind of discussion data experts seem to be calling for when it comes to how IT manages data.
“Over the past year it is becoming increasingly clear that we have to stop thinking as data managers and start thinking as data designers,” writes Forrester analyst and data management expert Michele Goetz in a recent Information Management article. “What matters is what data drives for the business first and then design a data system around that. We need to educate ourselves on what the business does with the data.”
The widening gap between economic losses and insured losses from natural catastrophes is our topic du jour.
Guy Carpenter’s GCCapitalIdeas.com just published this chart showing that approximately 70 percent of global economic losses from natural catastrophes were uninsured between 1980 and 2013:
Almost from the very beginning of the modern virtualization movement, technology futurists wondered what it would be like to have a completely virtualized data center. What would be the benefits, and the major challenges, of building entire compute/storage/networking infrastructure entirely in logic?
Those questions are about to be answered now that the IT industry is taking seriously the idea of the software-defined data center (SDDC). In fact, the concept is now openly discussed as the next major segment within the increasingly diversified enterprise infrastructure market.
Organizations are turning to Big Data because they believe more information will improve decision-making, whether it’s whom to target for a sale or whether a product should be recalled.
But what if the real value of the data isn’t in providing us with more information, but in replacing us as decision makers?
Andrew McAfee, co-director of the Initiative on the Digital Economy in the MIT Sloan School of Management, goes way meta in two recent Harvard Business Review blog posts that question not just how to use data — but who should be using it.
In 'The Forrester Wave: Disaster-Recovery-As-A-Service Providers, Q1 2014' Rachel Dines overviews the current global market and ranks the key players.
The report says that there has been significant growth and adoption of disaster recovery as a service (DRaaS) across all sectors 'as I&O professionals are looking for ways to improve their recovery objectives without increasing spend'. The results of the latest Forrsights Hardware Survey show that 19 percent of 438 surveyed companies have implemented DRaaS already and a further 13 percent are planning to implement it during 2014.
Seventeen criteria are used by Forrester to provide an evaluation of DRaaS vendors. Using these, 12 companies were identified as 'the most significant service providers': Axcient, Barracuda Networks, CenturyLink Technology Solutions, EVault, HP, IBM, iland, nScaled, Persistent Systems, Quorum, SunGard, and Verizon Terremark. Of these, Forrester states that 'Iland and SunGard lead in a tight race of strong competitors', followed closely by IBM, nScaled, Verizon Terremark, and EVault.
On this day in 1901, Queen Victoria died, ending an era in which most of her British subjects had known no other monarch. She was born in 1819 and came to the throne after the death of her uncle, King William IV, in 1837. Her 63-year reign was the longest in British history. She oversaw the growth of the British Empire, on which the sun never set. Queen Victoria restored dignity to the English monarchy and ensured its survival as a ceremonial political institution. She also brought a stability to the monarchy that has stayed with the country.
How can you bring stability to your compliance program? One of the most important steps that you can take is to regularly assess your risks through a risk assessment. I often hear some of the following questions posed by compliance practitioners regarding risk assessments: What should you put into your risk assessment? How should you plan it? What should be the scope of your risk assessment? These, and other, questions were explored in a recent article in the ACC Docket, entitled “Does the Hand Fit the Glove? Assessing Your Company’s Anti-Corruption Compliance Program” by a quartet of authors: Jonathan Drimmer, Vice President and Assistant General Counsel at Barrick Gold Corp.; Lauren Camilli, Director, Global Compliance Programs at CSC; Mauricio Almar, Latin American Regional Counsel at Halliburton; and Mara V.J. Senn, a partner at Arnold & Porter LLP.
Computerworld — Last summer, when I wrote about Apple's relationship with enterprise IT, I talked about earlier Apple decisions to stop producing its rack-mounted Xserve server and refocus its server platform, OS X Server, on the small business market. Since then, Apple has largely focused on making its consumer-oriented products -- the iPhone, iPad, and Mac -- as enterprise-friendly as possible. These devices ship with out-of-the-box support for key enterprise technologies like Active Directory, Exchange, ActiveSync, and a wide range of mobile device management (MDM) solutions that can manage both iOS devices and Macs.
That strategy makes a lot of sense because it removes the need for a large investment in infrastructure or software dedicated specifically to supporting Apple's products. The strategy also built on the BYOD trend that has reshaped the very concept of how IT handles mobile technology. It's a strategy that Apple should continue.
Last week was the 20-year anniversary of the Northridge Earthquake. The 6.7-magnitude event that hit on Jan. 17, 1994 at 4:30 a.m. stands as the second costliest disaster in U.S. history, following Hurricane Katrina. Northridge cost $42 billion in total damages, while Katrina cost $81 billion, according to federal figures.
The U.S. Geological Survey (USGS) said that 60 people were killed, more than 7,000 were injured, 20,000 were left homeless and more than 40,000 buildings were damaged in Los Angeles, Ventura, Orange and San Bernardino Counties.
The San Fernando Valley saw maximum shaking intensities of IX on the Modified Mercalli scale in the areas of Northridge and Sherman Oaks. Significant damage also occurred in Glendale, Santa Clarita, Santa Monica, Simi Valley, Fillmore and in western and central Los Angeles, the USGS said.
It wasn’t long ago that Business Continuity planning and IT Recovery Planning were done by different groups who never talked to each other. In many organizations today the two groups have begun to work together – however grudgingly – to forge links between the IT requirements of critical business functions or processes and the prioritization of recovering IT assets. The two groups may never meet in the same room, but they share the same data – and that’s a very good thing.
Of course, it still doesn’t happen in every organization. And even where it does, there are often other planning groups that keep to themselves – to the detriment of their organization.
The data center is becoming more efficient, more modular and a whole lot more flexible as it transforms from the static architectures of the past to the virtual, dynamic infrastructure of the future. But part of this bargain calls for increasingly dense hardware footprints and steadily rising utilization rates, and that inevitably leads to heat generation.
Small wonder, then, that even as demand for key hardware elements like servers is declining, the need for advanced cooling systems is on the rise. According to MarketsandMarkets, the data center cooling systems market is on pace to top $8 billion by 2018, up from 2013’s level of about $4.9 billion. Part of this growth is due to the fact that data infrastructure across the board is increasing – more data centers mean more cooling systems. But the industry is also enjoying a renaissance of sorts as new, highly efficient technologies fulfill the need to make existing infrastructure more energy efficient. As the “do more with less” mantra takes hold, one of the most significant cost-saving measures available to the enterprise is new, highly efficient cooling infrastructure.
Twenty years ago, a fault that scientists didn’t even know existed slipped, triggering a massive magnitude-6.7 earthquake centered beneath the San Fernando Valley, with shockwaves rippling throughout the greater Los Angeles area.
When the strongest shaking ceased, the region had suffered 57 deaths and more than $20 billion in damage. The newly formed Southern California Earthquake Center (SCEC), founded in 1991 and headquartered at USC, stepped in to find out exactly what happened and what could be done about it.
Statistics and scare tactics don’t work; instead the starting point is ensuring that you have a deep understanding of the business landscape, strategies and risks.
By Larry Robert
There are many approaches that business continuity practitioners can take in convincing executive management to allocate funds and resources to a robust business continuity program. Many try to overwhelm with statistics and scare tactics. I believe these actually detract from the program by citing sweeping examples that are typically outdated, untrue, and not applicable. Industry statistics, in many cases, are either unverifiable or can be traced back to a vendor that may benefit from the negative information. We owe it to our profession to always strive for accurate, verifiable information when citing examples in support of developing and maintaining a program.
As you will see below, the only way to bring awareness to senior leaders is to discuss the specific risks to their particular business. Simple, yet very effective. As you develop into a mature business continuity professional, you can bring into the conversation some of your own experiences from actual events and how various solutions either contributed to a quick recovery or further complicated the recovery process.
By Reuven Harrison
Balancing effective IT security against a business’s need for agility is an age-old issue. But today, getting that balance right is trickier than ever. Organizational networks are increasingly sprawling, complex and hard to secure, with ever more changes required at the server level to ensure businesses can securely run all the applications they need, as and when they need them. In such a highly complex environment – characterised by constant change – a reactive, manual approach to security is no longer adequate. Mistakes can (and do) creep in, exposing organizations to cyber-attacks, data breaches and industrial espionage.
Yet slowing down the change process in order to ensure security can be similarly risky, since this will stifle the very agility that is key to business survival and success. Unless network managers fundamentally rethink their manual approach and adopt fresh strategies supported by automated tools, they face a ticking time-bomb that could seriously damage not just their security, but their business credibility and competitiveness.
An interesting article in the latest NFPA Journal looks at the rise of social media and its effects on emergency response and communication management; and provides some useful general social media crisis communications advice.
'Are You Prepared?' highlights several natural disasters in which social media played a key role in keeping both the affected public and emergency responders informed. It also explains how FEMA established a 'Hurricane Sandy Rumor Control' website to counter false and misleading information circulating on social media during that disaster.
The article stresses how important it is for emergency and crisis management professionals to understand how social media works: "If social media is able to push out emergency information to critical audiences, we have to be able to use all of these tools," says Jo Robertson, chair of the NFPA 1600 Social Media Task Group and director of crisis preparedness for the chemical company Arkema. "Social media use is a reality. We all have to get past the notion that this is something we can ignore."
Climate change is among the five most likely and most potentially impactful global risks, according to the just-released World Economic Forum (WEF) 2014 Global Risks Report.
The report assesses 31 risks that are global in nature and have the potential to cause significant negative impact across entire countries and industries if they take place.
An analysis of the five risks considered most likely and most impactful since 2007 shows that environmental risks, such as climate change, extreme weather events and water scarcity, have become more prominent since 2011.
Be honest – do you currently have a malicious software reporting policy? Just relying on the existence of anti-virus software and firewalls may be too optimistic nowadays. The potential damage to information assets and productivity, let alone identity or bank account theft, suggests that a malware reporting policy should be in place in any organisation. Even Google is asking users to contribute to tightening up security by reporting any nefarious activity from websites listed in its results pages. And as an additional source of concern, it seems malware infections are also being caused by some of the very entities that are supposed to be protecting us.
Aon Global Risk Consulting has conducted research to understand more about organizations’ attitudes to the top threats they face in today’s ‘hyper connected’ world.
With a focus on analytics, Aon wanted to further explore some of the results of its biennial Global Risk Management Survey (GRMS) published in 2013, so it subsequently asked captive directors (executive and non-executive) for their opinions on the rankings of the top 50 risks identified.
Stephen Cross, Chairman, Aon Centre for Innovation and Analytics, said “We felt that the results from the GRMS 2013 had thrown up some anomalies. With our expertise in the captive space, we approached captive directors for their opinions on the rankings of various risks to give us a more holistic view. As a result, we believe there is a real debate to be had across the risk management industry on insurable versus uninsurable risk. Understanding risk has always been a fact of business life, but today, the magnitude, complexity and speed have increased exponentially. That is why business leaders are concerned with how they manage risk.”
A new ENISA report provides advice on how to implement incident reporting in cloud computing. ‘Incident Reporting for Cloud Computing’ looks at four different cloud computing scenarios and investigates how incident reporting schemes could be set up, involving cloud providers, cloud customers, operators of critical infrastructure and government authorities.
Using surveys and interviews with experts, ENISA identified a number of key issues:
- In most EU Member States, there is no national authority to assess the criticality of cloud services.
- Cloud services are often based on other cloud services. This increases complexity and complicates incident reporting.
- Cloud customers often do not put incident reporting obligations in their cloud service contracts.
The report contains several recommendations, including:
- Voluntary reporting schemes hardly exist and legislation might be needed for operators in critical sectors to report about security incidents.
- Government authorities should address incident reporting obligations in their procurement requirements.
- Critical sector operators should address incident reporting in their contracts.
- Incident reporting schemes can provide a ‘win-win’ for providers and customers, increasing transparency and, in this way, fostering trust.
- Providers should lead the way and set up efficient and effective, voluntary reporting schemes.
CIO — This year, the IT services industry saw customers doing more of their own IT services deals, testing the service integration model, and continuing to struggle with outsourcing transitions. CIO.com again asked outsourcing observers to tell us what they think is in the cards for the year ahead. And if they're right, 2014 could be the year customers, and a few robots, take greater control of the IT outsourcing space.
1. The Rise of the Machines
Say hello to the latest IT services professional: the robot. "2014 will see significant growth in the development and implementation of robot-like technologies that will automate many tasks currently performed by full-time employees in [outsourcing] deals," says Shawn C. Helms, partner in the outsourcing and technology transactions practices at K&L Gates. "Given the rise of robots replacing people in manufacturing and logistics, it is not a stretch to predict that robots will move up the intellectual value chain as artificial intelligence continues to develop."