Industry Hot News (6199)
COMPUTERWORLD — The think-tankers on the Executive Leadership Council at AIIM systematically use a four-box matrix to reduce uncertainty, allocate investments and calibrate new product/service initiatives. This simple tool -- with "important and difficult" in the upper right and "unimportant and easy" in the lower left -- produces surprisingly powerful insights.
During year-end discussions with 40 executives in 20 vertical markets, I discovered that they all now place big data in that upper-right quadrant. Similarly, readers of Booz & Co.'s Strategy+Business blog designated big data the 2013 Strategy of the Year, and the co-directors of Cognizant's Center for the Future of Work, in a masterful white paper, placed big-data-enabled "meaning making" at the pinnacle of strategic endeavor.
That was enough to prompt me to roll up my sleeves and systematically examine, vertical market by vertical market, how organizations are organizing their path to big data mastery.
This week, we reached the inevitable point in the controversy over the credit and debit card breaches where grim-faced retail executives from Target and Neiman Marcus, industry experts and consumer advocates turned up in Washington. They raised their hands and delivered well-rehearsed statements to our elected representatives.
It’s a familiar bit of theater, but their messages about the security of our personal data when we pay using plastic were startling.
“The innovations that are driving the industry forward and presenting consumers with exciting new methods of making purchases is also rapidly expanding beyond the bounds of our existing regulatory and consumer protection regimes,” went the written testimony of James A. Reuter, speaking on behalf of the American Bankers Association. “And, as has historically been the case, the criminals are often one step ahead as the marketplace searches for consensus.”
TECHWORLD — Extreme Networks has unveiled an ASIC-based big data analytics system that marries network data with application data to make it easier to manage large networks and cloud deployments.
The Purview offering provides visibility into application use across the network, helping organisations in four ways, said Extreme.
The product can improve the experience of connected users, enhance organisations' understanding of user engagement, optimise application performance, and protect against malicious or unapproved system use.
Network World — Tech salaries saw a nearly 3% bump last year, and IT pros with expertise in big data-related languages, databases and skills enjoyed some of the largest paychecks.
Average U.S. tech salaries climbed to $87,811 in 2013, up from $85,619 the previous year, according to Dice's newly released 2013-2014 Salary Survey. Significantly, nine of the top 10 highest paying IT salaries are for skills related to big data, says the tech career site.
At the top of the list is R, a software environment for statistical computing and graphics. Here's the full list of the top 10 highest paying IT salaries:
1. R: $115,531
2. NoSQL: $114,796
3. MapReduce: $114,396
4. PMBoK: $112,382
5. Cassandra: $112,382
6. OmniGraffle: $111,039
7. Pig: $109,561
8. Service Oriented Architecture: $108,997
9. Hadoop: $108,669
10. MongoDB: $107,825
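The "nearly 3%" figure is easy to verify from the averages quoted above:

```python
# Year-over-year change in average U.S. tech salary (Dice survey figures).
prev, curr = 85_619, 87_811
pct_change = (curr - prev) / prev * 100
print(f"{pct_change:.1f}%")  # 2.6%, i.e. "nearly 3%"
```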
Executives from Target and Neiman Marcus still don’t know how they could have better protected their customers from cybercriminals, they said at a congressional hearing Wednesday.
Asked exactly how recent attacks occurred, Target’s John Mulligan answered: “We don’t understand that today.” The company is still investigating, said Mulligan, the company’s chief financial officer and executive vice president, and “certainly from that there will be learnings.”
Michael Kingston, the chief information officer of the Neiman Marcus Group, said, “We’ve not yet found any evidence of how hackers were able to infiltrate our network.” The attack was “customized to evade detection” and occurred “in real time, when the card was swiped,” just milliseconds before being encrypted. The breaches prompted several congressional hearings and briefings; last week, Attorney General Eric H. Holder Jr. told the Senate Judiciary Committee that his agency is investigating them.
Wednesday’s House hearing, “Can data breaches be prevented?,” ran 3½ hours, but the short answer was: No. That’s despite the “hundreds of millions” Target spent trying, and the “tens of millions” Neiman’s spent.
The Committee of Permanent Representatives has endorsed an agreement between the Hellenic Presidency of the Council and European Parliament representatives with a view to establishing a European surveillance and tracking service. This will have the aim of enhancing the security of space infrastructures and the safety of satellite operations by reducing collision risks and helping to monitor space debris.
Space infrastructure is increasingly threatened by collision risks due to the growing population of satellites and the amount of space debris. In order to mitigate the risk of collision it is necessary to identify and monitor satellites and space debris, catalogue their positions, and track their movements. When a potential risk of collision has been identified satellite operators can then be alerted in time to move their satellites.
This activity is known as space surveillance and tracking (SST), and operational SST services do not currently exist at a European level.
The new SST support framework will foster the networking of national SST assets to provide SST services for the benefit of both public and private operators of critical space-based infrastructures.
Here’s a humbling prediction for IT: By 2018, the CMO’s IT budget could “outstrip” the CIO’s budget, according to Gartner.
And that’s fine with CMOs, who now see marketing as the natural home for Big Data projects, according to a recent Harvard Business Review Blog post written by Jesko Perrey and Matt Ariker of McKinsey & Company.
Predictably enough, CIOs see the situation a bit differently. But the naked truth is that both CMOs and CIOs “are on the hook for turning all that data into above-market growth,” Perrey and Ariker note.
In publishing its “Security Research Cyber Risk Report 2013,” an annual update, HP has delved into a number of the most vexing contradictions in security and risk management. The report’s goal, states HP, is “to provide security information that can be used to understand the vulnerability landscape and best deploy resources to minimize security risk.”
Key findings included these:
“Research gains attention, but vulnerability disclosures stabilize and decrease in severity.” The number of publicly disclosed vulnerabilities remained stable in 2013, as the number of high-severity vulnerabilities dropped for the fourth year in a row. Asks HP, “Is this a good indication of the improving awareness of security in software development or does this indicate a more nefarious trend – the increased price of vulnerabilities on the black market for APTs resulting in less public disclosures?”
CIO — Last year, Yahoo made headlines for rescinding its once-liberal work-from-home policies in the interests of "productivity" and "accountability." But not having a plan in place for keeping the business running if your employees physically cannot get to the office -- in the event of a winter storm, hurricane or even day-to-day concerns like a family illness or car trouble -- could put you at a significant disadvantage.
Here's how you can prepare your workforce - and your business - for the inevitability of employees working from home.
Business As (Un)Usual
The good news is that most organizations already embrace technologies like the cloud that ease employees' capability to connect and collaborate from almost anywhere.
CIO — How can CIOs and IT executives help their teams be more productive (besides providing them with free food)? Here are the top 11 tips -- from CIOs, IT executives, productivity and leadership experts and project managers -- for getting the most out of your IT team.
1. Set goals -- and be "Agile." "Be Agile in your goal setting," says Zubin Irani, cofounder & CEO, cPrime, a project management consulting company. "Have the team set goals for the quarter -- and break the work into smaller chunks that they can then self-assign and manage."
2. Communicate goals, expectations and roles from the get-go. "Provide your team with background information and the strategic vision behind [each] project, activity, task, etc.," says Hussein Yahfoufi, vice president, Technology & Corporate Services, OneRoof Energy, a solar finance provider. "Not only does providing more background and information motivate employees more, [it makes them] feel more engaged."
At a time when several large companies are being investigated for bribery in China, organizations doing business there would do well to have strong policies and training programs in place, experts advise. They also caution that using a “cookie cutter” approach for compliance is not enough.
“There are several ongoing investigations right now for hiring of relatives of foreign officials,” Michael Volkov, chief executive officer of the Volkov Law Group, LLC said in a webinar, “Navigating the Waters of Anti-Corruption Compliance in China.”
He pointed out that Qualcomm, a wireless technology company, “is under investigation for hiring relatives of foreign officials and giving them jobs strategically. This is a serious investigation, and Qualcomm is a reputable company with a sophisticated compliance program.”
CIO — All readers have their share of successful and failed software projects. Everyone has a favorite war story. But for software project managers, either in a company or in a consulting organization, there's surprisingly little up-to-date information about what causes budget overruns and schedule slips.
Of course, management consultants worth their name will claim that their methodology will fix the problem — and they'll almost certainly have a two-dimensional graph showing how their expertise will take your organization up and to the right. Reductio ad Gartner Group.
Things aren't that simple. The Standish Group's Chaos Reports — a sort of CSI for IT murders — provide solid evidence that the success of software projects depends upon dozens of factors.
Network World — The growing number of natural disasters and the rise in data loss have increased the significance of having an effective disaster recovery (DR) strategy. Thankfully, new capabilities are helping smaller companies keep pace. Here's a look at the prominent trends shaping disaster recovery today:
* Cloud Services: As the adoption of cloud services increases, enterprises are realizing the cloud can become part of their disaster recovery plan. Instead of buying dedicated resources in case of a disaster, cloud computing allows companies to pay for long-term data storage on a pay-per-use basis, and to only pay for servers if they have a need to run them for an actual disaster or test.
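The pay-per-use model described above can be sketched with a toy cost comparison; every number here is a hypothetical illustration, not vendor pricing:

```python
# Toy comparison: dedicated DR site vs. pay-per-use cloud DR.
# All figures are hypothetical illustrations, not vendor pricing.
dedicated_monthly = 10_000     # idle standby servers, paid for year-round

storage_per_month = 500        # long-term replicated data kept in the cloud
server_hourly = 2.0            # compute billed only when actually spun up
drill_hours_per_year = 48      # two DR tests plus one actual failover
servers_needed = 20

cloud_yearly = 12 * storage_per_month + server_hourly * drill_hours_per_year * servers_needed
dedicated_yearly = 12 * dedicated_monthly

print(cloud_yearly, dedicated_yearly)  # 7920.0 vs 120000
```

The point of the sketch is structural: with cloud DR, the always-on cost shrinks to storage, and compute is paid only for the hours a disaster or test actually runs.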
Network World — A few years ago the only cloud game in town was the public cloud, but today private and hybrid clouds are also true contenders. In fact, private cloud implementations address a prevalent set of challenges and issues that public clouds cannot, and can help speed and smooth the path to cloud adoption.
Here are five core tenets you should assess when weighing private cloud against public cloud options:
CDC is responsible for protecting the public from a host of health threats, including some pretty scary pathogens like Ebola virus and anthrax. One way we do this is through our Select Agents Program, which is responsible for governing and regulating the use of certain pathogens by research facilities and labs around the world. At the beginning of December I had the remarkable opportunity to accompany the inspection team that helps regulate the Select Agents Program on one of their routine lab inspections. I was invited to an inspection of a laboratory in the Southeast region of the U.S. that handles rare and dangerous pathogens to get a glimpse of how the inspection team operates, what they look for, and what they do to protect us.
Laboratory inspections are an important aspect of the Select Agents Program since they ensure that labs and research facilities are complying with guidelines and regulations specific to biological research. In order to improve our understanding of human health and disease, some laboratories handle rare and potentially dangerous biologic agents and toxins, which are known to cause severe infection, illness, and sometimes death in humans. Laboratories that possess and use these types of biologic agents and toxins for manufacturing purposes, research use, or diagnostics must be registered through this program. When they register with the program, they agree to follow all requirements in the regulation (42 CFR Part 73 – Possession, Use and Transfer of Select Agents and Toxins), including safety, incident response, security, and having appropriate training in place. CDC’s job is to ensure that all precautions are being taken at laboratories so that the public remains unexposed and unharmed by these potential health threats.
The inspection that I joined actually began one week prior to the inspection date, when I met with the inspection team to prepare a folder with all of the Southeast facility’s biosafety plans, incident response plans, and security plans. The following week, I flew to the site to meet with the inspection team. I was set to be with the team for the first and most active day of their inspection.
The inspection started with introductions and a briefing among the group. Then came a visitor’s training to inform all personnel of potential hazards as well as actions to take in the event of an emergency. To avoid workplace injuries and hazards, personnel must meet all occupational health qualifications. In this laboratory, personnel must perform an exercise test to confirm adequate fitness to wear a respirator. There are two types of respirators at this facility, one that is simply a facemask and another that is a full-body suit. The team thought that I would opt for the full-body respirator because it did not require that I shave my beard. However, I gladly accepted the challenge to don the facemask respirator (and shave my beard!) to earn my place as a member of the team.
Suited up in gowns, gloves, shoe covers, masks and other inspector accessories, we were ready to begin our inspection. Our goal was to go through all of their laboratory space to check that the facility was adhering to appropriate biosafety measures. We checked biological safety cabinets and animal cages, catalogued inventory, and performed other tasks associated with laboratory compliance. Lab personnel graciously halted their work during our visit.
The devoted team sought to conduct as much of the laboratory-based inspection as possible the first day. We were successful. After seven hours of tireless work and a brief stint for lunch, we had canvassed the entire facility. The personnel at the Southeast facility were pleasant, welcoming, and grateful for the visit, remarking that they looked forward to an external perspective. Having thoroughly inspected the lab, we finally retired for the day.
A Day's Work Is Never Done
Though I remained for only the first day, the team continued diligently throughout the week. They reviewed all of the Southeast facility’s documents, checked its security, and evaluated its waste, storage, and laboratory maintenance procedures. The team is then responsible for generating a report that lists observations that deviate from regulatory requirements. After much collaboration between the Select Agents Program and the Southeast facility, the Southeast facility is expected to implement changes to receive standard renewal.
I was incredibly impressed with the Select Agents Program’s laboratory inspection. I know that because of them, we can rest assured that high containment facilities operate at the toughest standards. Thanks to this program, the biosafety measures in place consistently enhance the safety and security that the CDC promises to uphold to the American people.
I admit it took me a while to finally get it. I have long wondered what could have caused the explosion in Department of Justice (DOJ) and Securities and Exchange Commission (SEC) enforcement of the Foreign Corrupt Practices Act (FCPA). Starting in about 2004, FCPA enforcement not only increased over the statute’s previous 25 years of existence but literally exploded. Of course, I had heard Dick Cassin and Dan Chapman, most prominently among others, talk and write about FCPA enforcement as an anti-terrorism security issue post 9/11, but I never quite bought into it because I did not understand the theoretical underpinnings of such an analysis.
I recently finished listening to the Teaching Company’s “Masters of War: History’s Greatest Strategic Thinkers” by Professor Andrew Wilson of the Naval War College. It is a 24-lecture series on the content and historical context of the world’s greatest war strategists. In his lecture on “Terrorism as Strategy,” Professor Wilson explained that corruption is both a part of the strategy of terrorism and a cause of terrorism. After listening to his lecture and reflecting on some of the world events which invoked both parts of his explanation, it became clear to me why FCPA enforcement exploded and, more importantly, why the US government needs to continue aggressive enforcement of the FCPA and encourage other countries across the globe to enact and enforce strong international and domestic anti-corruption and anti-bribery laws.
At the start of each year, there’s always a long list of IT offerings vying for attention. With many solutions still looking for a problem, it pays to take a moment to consider the business impact rather than being seduced by the high-tech glitter. Here’s a quick rundown of what might affect business continuity in 2014.
Experts generally see Big Data as a disruptive technology. Of course, you never know with these things: Sometimes you think something is amazing and it turns out to be more evolutionary than revolutionary.
But if the tech analysts are right and Big Data is a disruptive technology, then it would follow that it could also change the structure of organizations. We saw this happen a few decades ago when the proliferation of enterprise apps and personal computers led to the elevation of the CIO.
This raises the question: Will Big Data elevate data management to a CXO level?
HP has published its Cyber Risk Report 2013, identifying top enterprise security vulnerabilities and providing analysis of the expanding threat landscape.
Developed by HP Security Research, the annual report provides in-depth data and analysis around the most pressing security issues plaguing enterprises. This year’s report details factors that contributed most to the growing attack surface in 2013 — increased reliance on mobile devices, proliferation of insecure software and the growing use of Java—and outlines recommendations for organizations to minimize security risk and the overall impact of attacks.
LINCROFT, N.J. -- With a new year upon us, now is an ideal time for people to review their insurance policies. Understanding the details of what specific policies cover and what the policyholder is responsible for after a disaster is important as both clients’ needs and insurance companies’ rules change.
Insurers’ decisions and legislative changes have the biggest effect on changes in policies. Consumers should make themselves aware of possible changes in these areas and know what to look for while reviewing their policies.
The first check is the most obvious: the actual coverage. Policyholders should look at the specifics of which property is covered and the type of damage that is covered. Property owners should know that floods are not covered by standard insurance policies and that separate flood insurance is available. Flood insurance is required for homes and buildings located in federally designated high risk areas with federally backed mortgages, referred to as Special Flood Hazard Areas (SFHAs). Residents of communities that participate in the National Flood Insurance Program (NFIP) are automatically eligible to buy flood insurance. According to www.floodsmart.gov, mortgage lenders can also require property owners in moderate to low-risk areas to purchase flood insurance.
There are two types of flood insurance coverage: Building Property and Personal Property. Building Property covers the structure, electrical, plumbing, and heating and air conditioning systems. Personal Property, which is purchased separately, covers furniture, portable kitchen appliances, food freezers, laundry equipment, and service vehicles such as tractors.
What’s Not Covered
Policy exclusions describe coverage limits or how coverage can be purchased separately, if possible. Property owners should know that not only is flood insurance separate from property (homeowners) insurance, but that standard policies may not cover personal items damaged by flooding. In these cases, additional contents insurance can be purchased as an add-on at an additional cost. Some policies may include coverage, but set coverage limits that will pay only a percentage of the entire loss or a specific dollar amount.
The Federal Emergency Management Agency’s Standard Flood Insurance Program (SFIP) “only covers direct physical loss to structures by flooding,” FEMA officials said. The SFIP has very specific definitions of what a flood is and what it considers flood damage. “Earth movement” caused by flooding, such as a landslide, sinkholes and destabilization of land, is not covered by SFIP.
Structures that are elevated must be built up to Base Flood Elevation (BFE) standards as determined by the Flood Insurance Rate Maps (FIRMs). There may be coverage limitations regarding personal property in areas below the lowest elevated floor of an elevated building.
Cost Impact of Biggert-Waters
The Biggert-Waters Flood Insurance Reform Act of 2012 extends and reforms the NFIP for five years by adjusting rate subsidies and premium rates. Approximately 20 percent of NFIP policies pay subsidized premiums, and the 5 percent of those policyholders with subsidized policies for non-primary residences and businesses will see a 25 percent annual increase immediately. A Reserve Fund assessment charge will be added to the 80 percent of policies that pay full-risk premiums. Un-elevated properties constructed in a SFHA before a community adopted its initial FIRMs will be affected most by rate changes. Congress is still debating the implementation of Biggert-Waters.
The General Conditions section informs the consumer and the insurer of their responsibilities, including fraud, policy cancellation, subrogation (in this case, the insurer’s right to claim damages caused by a third party) and payment plans. Policies also have a section that offers guidance on the steps to take when damage or loss occurs. It includes notifying the insurer as soon as practically possible, notifying the police (if appropriate or necessary) and taking steps to protect property from further damage.
“FEMA’s top priority is to provide assistance to those in need as quickly as possible, while also meeting our requirements under the law,” FEMA press secretary Dan Watson said. “To do this, FEMA works with its private sector, write-your-own insurance (WYO) company partners who sell flood insurance under their own names and are responsible for the adjustment of their policy holders’ claims.”
Policyholders should speak with their insurance agent or representative if they have any questions about coverage. For further assistance with Sandy-related flood insurance cases in New Jersey and New York, call the NFIP hotline at 1-877-287-9804. Comprehensive information about NFIP, Biggert-Waters and flood insurance in general can be found at www.floodsmart.gov.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
The idea of cars that communicate with each other to enhance safety and that drive themselves is counterintuitive. Airplanes, of course, have had autopilot functions for years. But Boeing 757s don’t have to pull into a parking space at Kmart or ease into traffic on the highway.
The reality is that advanced communications is playing a big role in getting from here to there. Indeed, the trend is accelerating. PCWorld and other sites report that the United States Department of Transportation (DoT) is taking steps to implement vehicle-to-vehicle (V2V) communications. The idea is straightforward:
Vehicle-to-vehicle communications refers to the emergence of Wi-Fi-like radios that could be mounted in cars and communicate with one another. Also known as Dedicated Short-Range Communications, V2V car-mounted radios would constantly communicate with other vehicles within range, providing speed and directional data to other cars' safety and navigation systems. The idea is that a car racing around a blind curve would "know" that a car was heading in the opposite direction, or a car would receive warnings that cars ahead were coming to an unexpected stop.
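A minimal sketch of that idea, assuming a toy constant-velocity model; the message fields, function names, and thresholds here are illustrative, not part of the DSRC specification:

```python
import math
from dataclasses import dataclass

# Each car periodically broadcasts position, speed and heading; receivers
# estimate the time of closest approach and warn the driver if the
# predicted separation falls below a threshold. Purely illustrative.

@dataclass
class V2VMessage:
    car_id: str
    x: float    # position, metres
    y: float
    vx: float   # velocity, m/s
    vy: float

def time_to_closest_approach(a: V2VMessage, b: V2VMessage) -> float:
    """Time (s) at which two cars are nearest, assuming constant velocity."""
    dx, dy = b.x - a.x, b.y - a.y
    dvx, dvy = b.vx - a.vx, b.vy - a.vy
    speed_sq = dvx * dvx + dvy * dvy
    if speed_sq == 0:
        return 0.0  # no relative motion: closest approach is "now"
    return max(0.0, -(dx * dvx + dy * dvy) / speed_sq)

def collision_warning(a: V2VMessage, b: V2VMessage,
                      threshold_m: float = 5.0, horizon_s: float = 5.0) -> bool:
    t = time_to_closest_approach(a, b)
    if t > horizon_s:
        return False  # too far in the future to warn about
    ax, ay = a.x + a.vx * t, a.y + a.vy * t
    bx, by = b.x + b.vx * t, b.y + b.vy * t
    return math.hypot(bx - ax, by - ay) < threshold_m

# Two cars approaching head-on around a blind curve:
me = V2VMessage("A", 0, 0, 20, 0)          # eastbound at 20 m/s
oncoming = V2VMessage("B", 80, 1, -20, 0)  # westbound, 80 m away
print(collision_warning(me, oncoming))     # True: warn both drivers
```

In the example, the cars close at 40 m/s, so the warning fires a full two seconds before they would meet, which is the whole point of the radio "knowing" what the driver cannot yet see.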
While every organization has its risks to deal with, mining companies—local or international—must consider myriad risks from every angle in every location. There are the risks that any company should consider, such as return on capital, supply chain and natural catastrophes, but there are others that mining operations must also pay careful attention to, which can vary by location. These include political risks, corruption, weather and even piracy and kidnapping.
A new report by Willis, “Mining Risk Review: Spring 2014” found that a top concern for a mining operation is its capital. The mining sector continues to face low commodity prices, combined with rising operational costs and supply and demand imbalances. Here are the top 10 risks reported by mining operations:
By Michael Vizard
Becoming a truly digital business involves leveraging data to create a sustainable business advantage. Clearly, there is an incredible amount of interest in creating a new generation of IT systems that leverage big data from different sources. However, while IT has never had more tools available for deriving business value from its IT investments, the biggest impediment may well be the fact that business executives often don't trust the data that IT has collected.
A recent survey of 442 business executives conducted by Harvard Business Review Analytic Services at the behest of QlikTech, provider of QlikView business intelligence software, finds that only 16 percent of the executives surveyed were confident in the accuracy of the data they used to make business decisions. Another 42 percent said they were not confident in their decisions simply because they couldn't get access to all the relevant data they needed.
Companies that create a culture of resilience throughout their organization are likely to be more successful in the long term, according to research by Cranfield School of Management and Airmic.
In the ‘Roads to Resilience’ report, published last week, the Cranfield authors urge boards and business leaders to challenge prevailing attitudes towards risk management and recognize that it should be a strategic priority and not just an operational or compliance issue.
Keith Goffin, Professor of Innovation at Cranfield School of Management who co-authored the report commented: “All industries are now facing unprecedented levels of risk that have real potential for harming their reputations and balance sheets. By bringing together the insights and experiences of those who have succeeded, this report challenges businesses to take the necessary actions to achieve resilience.”
Roads to Resilience examines eight leading organizations that have had to deal with significant uncertainty. Cranfield researchers interviewed senior staff with risk management responsibilities, including CEOs, at AIG; Drax Power; InterContinental Hotels Group; Jaguar Land Rover; Olympic Delivery Authority; The Technology Partnership; Virgin Atlantic and Zurich Insurance.
SIFMA has issued the following statement from Randy Snook, executive vice president, business policies and practices, in response to the Summary of Key Findings of the 2013 Pandemic Accord tabletop exercise that was held November 18-21, 2013, and sponsored by FEMA Region II, DHHS Region II, Federal Executive Board New York City, Federal Executive Board Northern New Jersey, Clearing House Association and SIFMA:
"Business Continuity Planning (BCP) is essential for ensuring a resilient financial sector that can effectively respond to any disaster or significant emergency situation. A pandemic scenario, such as a widespread influenza outbreak, is one of the most serious threats to the financial industry as it would impact the industry's most important resource - the employees that keep the financial system running smoothly. The Pandemic Tabletop exercise is an important component of resiliency planning as it enables industry and government participants to collaboratively examine how they would respond to a widespread influenza outbreak and identify best practices that will enhance pandemic response planning across the sector. Further, the findings identified by this tabletop will support the development of a full-scale pandemic exercise to take place in late 2014.”
The Pandemic Accord 2013: Continuity of Operations Pandemic Tabletop Exercise - Summary of Findings report summarizes the key findings and observations from the exercise and highlights the major themes that emerged across the four days of exercises, with a focus on business continuity planning.
TwinStrata has published the results of its ‘Industry Trends: Data Backup in 2014’ survey. Conducted between December 2013 and January 2014, the report analyzes responses from 209 IT personnel.
The results indicate an urgent need for organizations to make significant improvements to their backup strategies: one in five organizations experiences backup failures at least monthly, and one in 10 weekly. As a result, 53 percent of organizations plan to make changes to their backup strategy this year. Incorporating cloud storage was the remedy most often cited by these respondents.
Disaster recovery was the area where backup strategies were most under stress:
- Just 12 percent of respondents predict that they can recover from a site disaster within a couple of hours. Cloud storage users were twice as likely to recover in that timeframe (20 percent) as non-cloud storage users (9 percent).
- 63 percent of organizations measure site recovery time in days, with 29 percent requiring four days or more.
- More than half of organizations experience backup failure multiple times a year due to a host of issues, from connectivity failure (25 percent) and equipment failure (21 percent) to file corruption (18 percent).
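As a back-of-envelope reading, applying those failure frequencies to the survey's 209 respondents (assuming the "one in five" and "one in 10" fractions apply to the full respondent pool):

```python
# Rough annualized reading of the TwinStrata failure frequencies above.
org_count = 209  # survey respondents

monthly_failers = org_count * (1 / 5)   # failures at least monthly -> 12+/year
weekly_failers = org_count * (1 / 10)   # failures at least weekly  -> 52+/year

print(round(monthly_failers))  # ~42 organizations seeing 12+ failures a year
print(round(weekly_failers))   # ~21 organizations seeing 52+ failures a year
```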
The data breach at Target Corp, the US retail chain, was a shock for many. The personal information of at least 70 million customers was stolen by hackers who intercepted the information as buyers used credit and debit cards at the company’s points of sale. The reputational damage seems to have quickly spilled over into an impact on the bottom line: Target cut its profit forecasts for the fourth quarter of 2013 by about 20 percent. However, this high profile case (third biggest US retailer) may just be a taste of the problems in line for other enterprises using the same kind of point of sale (PoS) systems.
Contrary to popular belief, meetings can be a positive experience in the workplace. Although many reasons come to mind when you consider how meetings can be ineffectual, proper planning can keep meetings on track, on time and on point.
According to the Garbuz blog, one reason many people don’t like attending meetings is because they feel there is not a clear expected objective. Attendees are most frustrated when it seems like a meeting is wasting their valuable work time.
To host an effective meeting, an event or meeting leader should do a lot of upfront planning. The IT Download “Effective Meeting Checklist” provides an extensive list of meeting essentials. It starts with a list of preparatory items, continues with a meeting execution list, and finishes with a follow-up list of items to check off after the meeting concludes.
Criminals love credit cards. As a new white paper from Symantec pointed out, credit card-related theft is one of the earliest types of cybercrime, and as we’ve seen by the recent retail breaches, credit and debit cards remain a prime target. The white paper added that Point of Sale (POS), the point at which the retailer first gathers credit card data, has become a favorite way for the bad guys to steal the data. The reason they like it so much is simple: Security hasn’t kept up with technology. These gaps make it easier than ever for thieves to take aim at retail credit card data by using POS malware.
In a Symantec blog post, Orla Cox explained:
POS malware exploits a gap in the security of how card data is handled. While card data is encrypted as it’s sent for payment authorization, it’s not encrypted while the payment is actually being processed, i.e. the moment when you swipe the card at the POS to pay for your goods. . . . Most POS systems are Windows-based, making it relatively easy to create malware to run on them.
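One reason memory-scraping POS malware works is that a candidate card number found in RAM is cheap to validate: both attackers and defenders scanning memory dumps can apply the Luhn checksum, a public algorithm used on virtually all payment card numbers, to weed out false positives. Here is a minimal sketch of that check (the standard algorithm; the function name is my own, not code from any particular malware family or security product):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    # Walk digits from the right; double every second one,
    # subtracting 9 when the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The well-known Visa test number passes; altering one digit fails.
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```

Note that passing the check proves nothing about an account being real or active; it simply makes blind memory scraping far more efficient, which is part of why unencrypted in-memory card data is such an attractive target.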
CIO — When a company gets a bad customer review on Yelp, Facebook, Twitter or any other social network, emotions can run high, because real damage to its reputation and sales can result.
The business owner usually has a knee-jerk reaction and responds in kind by attacking the offending customer with an emotionally charged online response.
Some businesses might take the opposite approach and choose the other extreme -- no response at all. By simply ignoring the bad review, a company hopes it will dissipate into the Internet ether, whereas a response might ignite a social media storm and cripple the company publicly.
How weird will the enterprise become in the cloud? Pretty weird, by the sound of some of the discussions taking place today.
We all know that the cloud will be extremely disruptive for existing data infrastructure. Concepts like the all-virtual, all-cloud data center were considered distant possibilities just a few short years ago, but now seem to be looming on the horizon as organizations seek to cut costs and increase data agility.
But even these notions of an ethereal data environment floating around the cybersphere are starting to look quaint compared to the ideas that some forward thinkers are coming up with now.
London-based Aon Risk Solutions, the global risk management business of Aon plc (NYSE: AON), just released its annual Terrorism and Political Violence Map to help organizations assess terrorism and political violence risk levels across the globe. The map is produced in collaboration with the global risk management consultancy Risk Advisory Group plc.
The good news:
- 80 countries with terrorism perils indicated in 2014, 12% fewer than in 2013
- Europe sees notable improvement with 11 countries having civil commotion perils removed
NOTE: Canada, Mexico, and the United States were not mentioned in the report, and the map
InfoWorld — In today's threatscape, antivirus software provides little peace of mind. In fact, antimalware scanners on the whole are horrifically inaccurate, especially with exploits less than 24 hours old. After all, malicious hackers and malware can change their tactics at will. Swap a few bytes around, and a previously recognized malware program becomes unrecognizable.
To combat this, many antimalware programs monitor program behaviors, often called heuristics, to catch previously unrecognized malware. Other programs use virtualized environments, system monitoring, network traffic detection, and all of the above at once in order to be more accurate. And still they fail us on a regular basis.
Here are 11 sure signs you've been hacked and what to do in the event of compromise. Note that in all cases, the No. 1 recommendation is to completely restore your system to a known good state before proceeding. In the early days, this meant formatting the computer and restoring all programs and data. Today, depending on your operating system, it might simply mean clicking on a Restore button. Either way, a compromised computer can never be fully trusted again. The recovery steps listed in each category below are the recommendations to follow if you don't want to do a full restore -- but again, a full restore is always a better option, risk-wise.
An article in The New York Times over the weekend gave a frightening account of the ongoing severe drought across California that is now threatening the state’s water supply.
As farmers, ranchers and homeowners brace for what could be the state’s worst drought in 500 years, The NYT reports that the snowpack in the Sierra Nevada, which supplies much of California with water during the dry season, was at just 12 percent of normal last week, reflecting the lack of rain or snow in December and January.
The NYT quotes Tim Quinn, executive director of the Association of California Water Agencies, saying:
SAN FRANCISCO — In the latest in a spate of online attacks affecting American businesses, White Lodging, which manages hotel franchises for chains like Marriott, Hilton and Starwood Hotels, is investigating a potential security breach involving customers’ payment information.
White Lodging Services Corporation, which works with 168 hotels in 21 states, confirmed that it was examining the data breach.
The intrusion into its systems was first posted by Brian Krebs, a security blogger, on Friday, when he reported that the breach might have resulted in the fraudulent use of hundreds of credit and debit cards used for payment at Marriott hotels between March 2013 and the end of the year.
CSO — Data privacy has gotten its fair share of attention these days, what with the high-profile data breaches that have taken place in recent months. Fittingly, PricewaterhouseCoopers released the results of its 2013 data privacy survey late last year, in which the 370 participants represented both board level members responsible for oversight of privacy programs within their organization and practitioners involved in day to day operations.
While some of the statistics were reassuring and showed that data privacy is growing in importance, it would appear that there's still a ways to go before it gets the amount of attention it deserves.
For instance, one of the many statistics indicated that the majority of respondents considered consumer privacy a "medium priority." By PwC's definition, this means that it's a business concern that gets "some attention."
Among the tech workers who anticipate changing employers in 2014, 68 percent listed more compensation as their reason for leaving. Other factors include improved working conditions (48 percent), more responsibility (35 percent) and the possibility of losing their job (20 percent). The poll, conducted online between Oct. 14 and Nov. 29 last year, surveyed 17,236 tech professionals.
Fifty-four percent of the workers polled weren't content with their compensation. This figure is down from 2012's survey, when 57 percent of respondents were displeased with their pay.
In many organizations, executives and employees – and even auditors – will ask Business Continuity Management (BCM) / Disaster Recovery (DR) practitioners whether they have plans for every possible situation: every potential risk and every potential impact to the organization. The number of risks that exist in the world today is essentially infinite, and once you calculate all the various potential impacts to an organization from a single event, there will be communication, restoration and recovery plans that just can’t be developed, documented, implemented, communicated, validated or maintained. It is impossible to have a response to every situation; the secret is to be able to adapt, leveraging the response plans you do have to meet the disaster at hand.
Still, the questions will come about these plans and why a response isn’t captured for a particular situation and its resulting scenarios. A BCM/DR practitioner must be able to address these questions and be able to respond with reasons as to why specific plans don’t – and can’t – exist.
There are a few key reasons that practitioners must be able to communicate to those asking these questions; they are noted below.
CSO — Target's disclosure that credentials stolen from a vendor were used to break into its network and steal 40 million credit- and debit-card numbers highlights the fact that a company's security is only as strong as the weakest link in its supply chain.
No matter how strong Target's internal security was, if the breach started with a third-party vendor, then the weakness was in how the retailer managed the security risk all large companies face when partners and suppliers interact with their networks, experts say.
"Hackers have reached a new level of mastery and companies are really struggling," Torsten George, vice president of marketing and products at risk management vendor Agiliance, said. "They're putting a lot of effort in protecting their own networks, but how do you really go after your suppliers and vendors? How do you assess the risk in doing business with them?"
Enterprise Risk Management, ERM, is simple and straightforward.
In plain and simple English, it is the management of all risks across the organization that can disrupt "business as usual".
Unlike Business Continuity (BC) which, as I understand it, is concerned with "the usual suspects" of environmental events, human error, and technology error or malfunction, ERM is concerned with ALL threats, including those not directly under the auspices or control of the organization.
NETWORK WORLD — This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Today's IT security teams are faced with rapidly mutating threats at every possible point of entry from the perimeter to the desktop; from mobile to the cloud. Fueled by the fast evolution of the threat landscape and changes in network and security architectures, network security management is far more challenging and complex than just a few years ago.
Security teams must support internal and external compliance mandates, enable new services, optimize performance, ensure availability, and support the ability to troubleshoot efficiently on demand--with no room for error. That's a lot to balance when managing network security.
What many would call a “dusting,” we Atlantans would call a “snowpocalypse,” as evidenced by this week’s 2 inches of snow, which crippled the city, causing severe gridlock across the metro area and stranding school children and commuters, who were forced to abandon cars on the highway. The mayor of Atlanta and Governor Deal have been making the media circuit, trying to explain what caused the city to grind to a halt, but regardless of whose fault it was, it’s time to take a look at the situation and see what we can learn from a preparedness perspective. Here are our top 5 lessons learned, which don’t just apply to folks in the Deep South, but to everyone who might be caught in an emergency situation.
- You can always count on…yourself. We’d like to be able to tell you that someone from your local, state, or federal government will always be available 24/7 to help everyone during an emergency, but that’s just not realistic. First responders are there to help the people in the most need; it’s important that everyone else be self-sufficient until emergency response crews have time to get the situation under control. That means you need to be prepared for the worst, with supplies, plans, and knowledge to make sure you can care for yourself and your family until the situation returns to normal.
- Keep emergency supplies in your car. So much of our lives revolves around our vehicles. For most of us that’s how we get to and from work every day, shuttle our kids, and buy groceries. And in places like Atlanta many of us have long commutes, during which anything could happen. You have emergency supplies in your house, so why not in your car? Many motorists were stranded on the highways for 10 hours or more. Make sure you have a blanket, water, food, and other emergency supplies stored away in your trunk just in case.
- Make a family emergency plan. If you can’t pick up your kids, who will? Many parents were stranded on the interstate and unable to get to their children’s schools. Sit down with your family and go over what you would do in different emergency situations. Is there a neighbor or relative in the area who can help out if you aren’t able to get to your kids? Let them know you’d like to include them in your plan. Make sure you also come up with a communication plan that includes giving everyone a list of important phone numbers, not just to save in your cellphone but to keep in your wallet or your kids’ backpacks. Many commuters’ cell phones died while they were sitting on the roadways for hours. If all your important phone numbers are saved to a device that has died, would you be able to remember your neighbor’s number when a Good Samaritan loans you a phone so you can ask them to check in on the kids?
- Keep your gas tanks full. This is important to remember in other emergencies like hurricanes, when people are trying to evacuate. If there’s a chance you’re going to need your car, or your ability to get gas is going to be restricted (due to road closures or shortages), make sure you fill up your tank as soon as you hear the first warning. Many of the motorists trying to get home this week ran out of gas, worsening the clogged roads and delaying first responders from getting to people who really needed their help.
- Listen to warnings. The City of Atlanta and the surrounding metro area were under a winter storm warning within 12 hours of the first flakes, but residents and area leaders were slow to listen; most people didn’t start taking action until the snow began to fall, which led to a mass exodus from the city. While no one likes to “cry wolf” in situations like these, it’s better to be safe than sorry. Learn the difference between a watch and a warning, and start taking action as soon as you hear the inclement forecast.
Earlier this week, I wrote about the challenges of data illiteracy. I think it’s particularly a problem in fields where data has been collected, but maybe is not seen as a way to guide strategy or output.
Education is one such field (they hate being called an industry, even though, let’s face it, they are). While education as a whole is data-heavy, its main focus is not on managing data or information, but on student output. And while data has been used to produce change, it’s not often used in a particularly strategic way. When test scores go down, that data triggers policy and sometimes theory change, but seldom is the data used to inform that change.
Data Privacy Day was earlier this week. I can’t think of a time when data privacy was more discussed among businesses and individuals than right now, and yet, this day to focus on privacy went largely unnoticed. At least, I had no idea it was coming until a couple of people alerted me. Now I know it falls every January 28.
Of course, data privacy isn’t something we should be thinking about only one day a year. Nor should data privacy be seen only in relation to NSA spying and Edward Snowden. It is something that should be practiced regularly and improved upon whenever possible in order to keep information from getting into the wrong hands (and I don’t mean the government).
As Guidance Software’s Anthony Di Bello pointed out in a blog post, data privacy and security needs to be used everywhere for it to be effective. The best practices used at work should extend to home. The trick is making sure employees understand why instituting best practices for privacy is so important. Di Bello provided an example from a chief information security officer (CISO) with whom he works, and I think this advice should be shared:
NETWORK WORLD — This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
If you've built something yourself rather than buy it, like a book shelf or a bird house, you know the satisfaction of shaping something to your needs. And as long as nothing goes wrong, you're in good shape. But if it breaks you can't return it to the store for an exchange; you have to fix it yourself. And while repairing a bookshelf is one thing, recovering applications in a data center when they fail is something else entirely.
Linux is an excellent tool for creating the IT environment you want. Its flexibility and open-source architecture mean you can use it to support nearly any need, running mission-critical systems effectively while keeping costs low. This flexibility, however, means that if something does go wrong, it's up to you to ensure your business operations can continue without disruption. And while many disaster recovery solutions focus on recovering data in case of an outage, leaving it at that leaves the job half done. Having the data itself will be useless if the applications that run on it don't function and you are unable to meet SLAs.
Network World — An MIT research team next month will show off a networked system of flash storage devices they say beats relying on DRAM and networked hard disks to handle Big Data demands.
The copious amounts of data now collected for analysis by organizations overtax computers' main memory, but linking hard disks across an Ethernet network to solve the problem proves too slow, according to the researchers.
Their Blue Database Machine, or BlueDBM (sounds like an IBM product!), consists of flash devices controlled by serially networked field-programmable gate array chips that can also process data. The researchers say flash systems can find random pieces of information from within large data sets in microseconds, whereas the seek time for hard disks can be more than double that.
CSO — Unless you've been living under a rock in North America, it's pretty hard to have missed news of recent high-profile data breaches.
I'd venture to say these stories have made their way into the wider, global purview (note: as I write this, another report regarding a massive data breach in South Korea affecting 20 million cardholders was released). While the number of retailers and account holders impacted by these events continues to increase and make headlines, issuers and merchants alike must address ways to instill confidence in their customers in short order.
Upon hearing this type of news, cardholders immediately think "Was I impacted? What do I need to do? Will my account be closed? Will I get a new account number and new debit or credit card?" These and many more questions likely flood the support lines as customers want to understand their real-life implications and steps they need to take to protect themselves.
ATLANTA — There are bad commutes, and then there is what happened here this week.
When a light snow started falling early Tuesday afternoon, Saquana Bonaparte, 31, left her factory job and headed out to get her daughter from school in one of the city’s northern suburbs.
She ended up inching along in her car for almost 12 hours and survived on a half bag of beef jerky and a small bottle of Mountain Dew. Unable to get to a bathroom, she did what she had to do as she drove. Twice.
Ms. Bonaparte spent the night on jammed roads with tens of thousands of other desperate Atlanta-area drivers who had never seen anything like the sheet of ice that coated the city.
CIO — Marketing organizations are gearing up to increase their budgets for big data marketing initiatives in 2014, but is their focus in the right place?
A report by data-driven marketing specialist Infogroup Targeting Solutions found that companies are continuing to ramp up their spending on big data marketing initiatives in 2014 (62 percent of companies expect their big data marketing budgets to increase). However, most of those companies are focusing on technology, not people—57 percent of companies say they do not plan to hire new employees for their data efforts in 2014.
That may be a costly error in the long run, says David McRae, president of Infogroup Targeting Solutions.
NETWORK WORLD — This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
As pressure mounts to deliver value with ever-increasing speed, lines of business (LOB) are often drawn to cloud computing's ease of use, flexibility and rapid time-to-value. The resultant Shadow IT created by use of consumer-grade cloud computing resources usually raises questions about enterprise security, but the real risk is the potential for downtime due to inadequate availability.
Any interruption that impacts the customer experience will affect the bottom line -- and a company's reputation -- faster than you can say "temporarily unavailable."
So what can IT leaders do about it? With the cloud movement a foregone conclusion, how can they ensure the requisite availability standards are met -- and that their investments in availability are the right ones? There are five key success factors for addressing Shadow IT while ensuring availability in the cloud:
PC World — The year's barely started, and we've already had enough data breaches at major retailers to make a barter economy seem like a good idea. Unfortunately there are yet more security threats to look forward to in 2014. Here are the biggest ones we anticipate.
Mobile malware: The absence of any notoriously successful mobile exploit has lulled users into a false sense of confidence about the level of danger they face. Meanwhile, attackers have had a few years to test the best ways to spread mobile malware.
James Lyne, global head of security research for Sophos, notes that mobile malware is adapting and evolving faster than security tools can learn to detect and evade the threats. Variants are adopting tactics from PC malware--employing encrypted command and control servers, and polymorphism, among other techniques. The perfect storm is on its way.
The world turns, things change and new security risks continue to appear on the scene. Some organisations bury their head in the sand or cross their fingers. ‘It wouldn’t happen to us’ is their motto. Others make plans using different approaches, some better than others. Then they leave the plan untouched without updating it and expect it to hold good. Is such a policy ever justified? Do new threats mean that traditional security principles should be revised? And where should you start if you want to improve your own security risk management?
There seems to be sort of a broad agreement that social data is valuable—in theory.
It’s easy to see why social media data attracts interest. A recent report by the Pew Research Center’s Internet Project found 70 percent of adults online now use Facebook. Forty-two percent of online adults are on multiple social networking sites, reports Information Management.
It’s not easy to generate those kinds of numbers. While I’m not sure how many adults are actually online in the U.S., 70 percent of them must be quite a few eyeballs—especially since 63 percent of Facebook users visit the site every day.
CIO — Tired of waiting for lengthy approval processes, CMOs have been doing end-runs around the IT department for years. In turn, scorned CIOs would rip out the marketing department's rogue tech.
CMOs responded in kind by running to their buddy in the corner office -- the CMO and CEO are often cut from the same personality cloth -- and complaining that those techies are at it again, slowing down business decisions they don't understand and letting competitors beat them to the punch.
"I can tell you horror stories," says Kevin Cochrane, a tech industry veteran who has held top marketing positions since the mid-1990s and is currently CMO at OpenText, an enterprise information management software company.
For years, the CIO and CMO have faced off in one of the rockiest executive relationships. As the two odd stepchildren in the C-level suite, they constantly must prove their worth, which often pits them against each other as they try to curry favor among their peers. Both need new technology to be successful and they must compete for scarce dollars. Making matters worse, their jobs tend to reward opposite personality traits; clashes can get ugly.
Facility Management should play a crucial role in Business Continuity – it manages the 2nd largest and most consequential business “assets” (after IT) on which day-to-day business operations rely.
Yet many Facilities Management (FM) departments are often excluded from the planning process, either because BIA surveys skew a focus toward IT dependencies and financial impacts, or because Recovery strategies lean toward alternate site configurations (under the assumption that a damaged facility will be a total loss). Both of these perspectives ignore the fact that ‘total loss’ of a facility almost never occurs.
Then there are Facilities Managers who perceive little value in planning for potential disruptions – either under the assumption that response and recovery are part of their existing job duties (and don’t require planning), or that they can’t plan for what they can’t anticipate. Both are short-sighted.
IDG News Service — Target said Wednesday that intruders accessed its systems by using credentials "stolen" from a vendor, one of the first details the retailer has revealed about how hackers got inside.
The vendor was not identified. A Target spokeswoman said she had no further details to share.
As the forensic investigation continues, the spokeswoman said Target has taken measures to secure its network, such as updating access controls and in some cases, limiting access to its platforms.
During this winter’s extreme cold spells, caused by a polar vortex creating frigid temperatures, workers are at added risk of cold stress. Increased wind speeds can cause air temperature to feel even colder. This increases the risk of cold stress for those working outdoors—including snow cleanup crews, construction workers, postal workers, police officers, recreational workers, firefighters, miners, baggage handlers, landscapers and support workers for the oil and gas industry.
The U.S. Department of Labor notes that what constitutes extreme cold and its effects can vary across the country. In regions that are not used to winter weather, for example, near freezing temperatures are considered “extreme cold.” Because a cold environment forces the body to work harder to maintain its temperature, as temperatures drop below normal and wind speeds increase, heat can leave the body more rapidly.
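The "feels even colder" effect of wind described above is quantified by the U.S. National Weather Service wind chill index, which models accelerated heat loss from exposed skin. A quick sketch of the published formula (valid roughly for air temperatures at or below 50°F and wind speeds of at least 3 mph; the function name is my own):

```python
def wind_chill_f(temp_f: float, wind_mph: float) -> float:
    """NWS wind chill index (degrees F).

    Valid approximately for temp_f <= 50 and wind_mph >= 3.
    """
    v16 = wind_mph ** 0.16  # wind speed term used twice in the formula
    return 35.74 + 0.6215 * temp_f - 35.75 * v16 + 0.4275 * temp_f * v16

# 0 degrees F with a 15 mph wind feels like about -19 F
print(round(wind_chill_f(0, 15)))   # -19
print(round(wind_chill_f(30, 20)))  # 17
```

This is why employers monitoring outdoor crews typically track wind speed alongside temperature: a moderate 30°F day with a 20 mph wind already puts effective exposure in the teens.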
It’s obvious that data is making major headway in terms of its role in our lives. That’s good news for data management workers, but as I discussed in my previous post, the push to become data-driven also raises some serious questions about our ability to use data in responsible, appropriate ways.
You may not think that’s IT’s problem, but I disagree. As decisions become more data-driven, I think data modelers, data managers, CIOs and other IT data workers have a professional and perhaps moral obligation to help guide its use, at least in terms of ensuring that the findings remain valid.
Frankly, I’m worried that data illiteracy might be a major barrier to embracing data-driven leadership.
BOTHELL, Wash. – Why is there so much activity right now at the FEMA Region 10 office in Bothell?
Partners from the American Red Cross to the Bonneville Power Administration to the U.S. Army, and many others, are joining FEMA for what is known as a table-top exercise, planning for a larger full-scale exercise in March.
A table-top is an exercise in which field and logistics movements are “simulated” – not actually performed – while planning and decision-making proceed as if they are. A similar scenario will play out in late March when many of the same partners participate in a full-scale exercise with real field and logistical activity.
The table-top brings more than 100 people to the Region 10 Response Coordination Center in Bothell through Thursday.
The scenario involves a magnitude 9.2 earthquake and resulting tsunami. Such a quake would be the second strongest in known history, and the largest in known U.S. history. In fact, that largest-ever U.S. quake inspired the scenario; the upcoming full-scale “Alaska Shield” exercise coincides with the 50th anniversary of the Great Alaska Earthquake of 1964.
The scenario projects the loss of hundreds of lives. It also has thousands displaced in an Alaska winter with no power or heat, and possibly tens of thousands of buildings damaged. Other problems would include loss of communications and the challenge of moving relief commodities to survivors despite destroyed roads and bridges.
Region 10 Administrator Ken Murphy said of the table-top, “This exercise is important for all of us to work with all of our partners leading up to Alaska Shield, and to make sure that all of our systems are working together smoothly and seamlessly.”
FEMA regularly tests procedures and practices in this way, together with local, state, tribal, and other federal agencies.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow the Alaska Shield Exercise at #Akshield and FEMA online at twitter.com/femaregion10, www.facebook.com/fema, and www.youtube.com/fema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
Each January 28, Data Privacy Day is observed, with business owners and managers, vendors and concerned citizens taking time to raise their awareness of the most up-to-date approaches to keeping their companies’ and their own data safe. It’s an education effort that feels especially urgent this year, given the public’s focus on how their data is handled by the companies and vendors they have dealings with, not to mention the government and their own employers.
Today, with all of that being the case, I spoke with Jay Livens, director of product and solutions marketing for Iron Mountain, about the current state of data protection and IT’s priorities for the coming year. Iron Mountain recently conducted a survey of IT professionals that found that “with 68 percent probability, … data loss and privacy breaches are the most prevalent concern for IT leaders over the next 12-18 months.”
By now, everyone has heard some variation on the statistic about the data scientist shortage: By 2018, there will be a shortage of up to 190,000 qualified data scientists, to cite one version from the McKinsey Global Institute.
Organizations around the globe are trying to figure that one out, and the consensus seems to be that many will have to rely on a team approach when it comes to the tasks of mining Big Data.
Fair enough. But I’m beginning to think we have an even bigger problem ahead of us.
Ever since the first virtual server went into production more than a dozen years ago, speculation has been rampant that enterprise hardware is doomed. But even though it is clear (to me, at least) that hardware will still play a vital role in the enterprise going forward, that role is changing. So the question for enterprise executives is not whether to give up on hardware altogether, but what sort of functionality it should provide and how to achieve that functionality at the lowest price point.
To some, developments like the cloud represent a threat not just to enterprise hardware, but software as well. Last year closed out with some pretty stark reports indicating that more money spent in the cloud translates directly into diminished revenue from enterprise users. The message to IT vendors is clear: Adapt to the new cloud reality, and fast, or face obsolescence within two years.
Network World — Businesses with large data centers stand to net big savings in capital, power, deployment and maintenance costs if they follow server blueprints being made public by Microsoft.
The company plans to open-source a cloud server design that it says uses 15% less power than traditional enterprise servers and delivers 40% cost savings over those commercial alternatives.
The company says today that it is joining the Open Compute Project Foundation (OCP) and revealing specs and documentation for Microsoft's most advanced data-center hardware that supports its Windows Azure, Office 365 and Bing cloud services.
It’s amazing what some companies will do to get your attention so they can sell you stuff. Take, for example, those companies that are spending $4 million for a 30-second ad to get your attention during Sunday’s Super Bowl. Did you ever wonder how much those ads boost the companies’ sales? Well, in the cases of four out of five of the ads, the increase is exactly zero. Absolutely nothing. What might that $4 million have bought if it had been invested in Big Data rather than a big ad?
According to a recent article on AdAge.com, a study by advertising research firm Communicus found that 80 percent of Super Bowl ads fail to actually sell anything. The problem with Super Bowl ads is that they tend to focus more on creativity, and less on the brand. That means we all may be talking Monday morning about the hilarious commercial we saw during the Super Bowl, but chances are we have no idea what it was trying to sell. In other words, despite the obscene amount of money the company shelled out, it failed to make a connection between its brand and the consumer. Enter Lisbeth McNabb.
Coca-Cola has admitted falling prey to a bizarre slow-motion data breach in which an employee apparently stole, over several years and without anyone noticing, dozens of laptops containing the sensitive data of 74,000 people.
The unnamed former worker, said to have been in charge of equipment disposal, reportedly removed a total of 55 laptops over a six-year period from its Atlanta offices, including some that belonged to a bottling company acquired by the fizzy-drinks giant in 2010.
Only after recovering these during November and December did Coca-Cola realise that they contained 18,000 personal records that included social security numbers plus a further 56,000 covering other types of sensitive data. All but a few thousand were Coca-Cola employees or otherwise connected to the firm.
People are often cited as the most valuable resource of an organisation. The more capable an employee is and the better trained, the more an enterprise stands to profit – up to a point. Difficulties may begin when a person becomes indispensable because of unique expertise that is essential to the smooth running of the company. Those difficulties are then compounded if the expert tries to force the company to stay within that perimeter of expertise; perhaps for fear of being pushed to one side and even being made redundant. A situation like this runs counter to what business continuity is all about. What is the best way to handle it?
It’s once again time to tear open the GRC platform market and uncover all its amazing technical innovations, vendor successes, and impact on customer organizations. This afternoon, we published our latest iteration of the Forrester Wave: Governance, Risk, and Compliance Platforms.
My esteemed colleagues Renee Murphy and Nick Hayes joined me in a fully collaborative, marathon evaluation of 19 of the most relevant GRC platform vendors; we diligently pored over vendor briefings, online demos, customer reference surveys and interviews, access to our own demo environment of each vendor’s product, and, as per Forrester policy, multiple rounds of fact checking and review. The sheer amount of data we collected is incredible.
Emergency and Incident Mass Notification Services (ENS) handle the secure, automated distribution and management of important alerts and critical messages to multiple recipients on multiple devices, activated via browser (PC or mobile device) or phone. Emergency notification has become an integral and mission-critical component of organizations’ communication strategies. On both a routine and emergency basis, notifications to affected stakeholders before, during and after an incident or crisis dramatically increase an organization’s ability to quickly restore productivity to normal levels.
Practical reality demonstrates that timely, effective, and efficient communications with employees and stakeholders during crisis situations provide fiscal and operational stability, protect the bottom line, reduce damage to property, and save lives. A lack of effective and appropriate communications to organizational stakeholders can negatively impact all aspects of the business, including finance, operations, IT, and human resources.
The cloud is cheaper than standard IT infrastructure. This has been a given for so long that hardly anyone questions its veracity. And after all, who would argue the cost advantages of leasing resources from an outside provider vs. building and maintaining complex internal infrastructure?
Well, some very bright minds in the IT industry are starting to do just that.
Rob Enderle, writing for CIO.com, for one, notes that speed and flexibility, while important, do not necessarily translate into lower costs. The hard truth, of course, is that spending on IT is dropping while spending on cloud services is increasing, but this has more to do with timing and availability rather than simple economics. Indeed, recent analyses show that once internal infrastructure begins deploying cloud services of its own, it can meet enterprise needs for about $100 per user while Amazon and other providers come in at around $200 per user when purchased individually or in small groups, as is the practice with many business units. And a proprietary platform like Oracle can run as much as $500 per user.
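The per-user figures in those analyses make the economics easy to check for yourself. A minimal sketch, using the per-user rates cited above ($100 internal, $200 public cloud, $500 proprietary) with a hypothetical headcount; the function name and user count are illustrative, not part of the cited analysis:

```python
# Per-user annual cost figures as cited in the article; the 1,000-user
# headcount below is a hypothetical for illustration.
COST_PER_USER = {
    "internal_private_cloud": 100,
    "public_cloud": 200,
    "proprietary_platform": 500,
}

def annual_cost(users):
    """Return the total annual cost of each option for a given user count."""
    return {option: rate * users for option, rate in COST_PER_USER.items()}

print(annual_cost(1000))
# {'internal_private_cloud': 100000, 'public_cloud': 200000, 'proprietary_platform': 500000}
```

At 1,000 users, the spread between an internal private cloud and a proprietary platform is already $400,000 a year, which is the gap driving the rethink Enderle describes.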
Risk management is the most important part of an organization’s governance, risk and compliance (GRC) program, according to a survey. When asked to forecast priorities, 33% of respondents stated that enterprise risk management is most important and 27% said ERM would continue to be important to their company. Out of 12 barriers to their GRC goals, organizations identified a lack of resources (52%) and lack of collaboration and cooperation (44%) as their top obstacles.
Computerworld — There's little doubt that the bring-your-own-device (BYOD) trend with smartphones and tablets has rattled a lot of nerves for IT managers.
The situation will only get more nerve-wracking in 2014 because of the 30% annual growth through 2017 expected for smartphones purchased under a BYOD approach, and the further emergence of Windows Phone as a third platform behind Android and iOS.
Businesses are concerned about supporting three smartphone platforms, and while HTML 5 was expected to solve the headaches of supporting multiple platforms, HTML 5 just has not progressed fast enough, "leaving IT managers to wrestle with issues related to cross-platform applications," research firm IDC wrote in a note earlier this month.
PC World — Three major retail chains have recently admitted being victims of massive data breaches that compromised sensitive data from over 100 million customers. Sadly, though, Target, Neiman Marcus, and Michaels are just the beginning of a trend that isn't likely to fade away any time soon.
Verizon's annual Data Breach Investigations Report (DBIR) from May of 2013 found that 24 percent of the confirmed data breaches in 2012 affected the retail and restaurant sector--second only to the financial sector. In all, there were 156 confirmed data breaches in the retail and food services industries.
Manually combing through logs looking for anomalies that might represent a security threat is not only tedious, it also introduces a level of security fatigue that makes it more likely for a security threat to go unnoticed.
To help organizations reduce that risk, Splunk developed its Splunk App for Enterprise Security, which applies analytics to logs in a way that makes it a lot easier to identify the patterns that represent potential security threats. Released this week, version 3.0 of the app adds support for a new threat intelligence framework, additional data types and data models, and a pivot interface.
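Splunk's app is proprietary, but the underlying idea — surfacing suspicious patterns automatically instead of eyeballing logs line by line — can be illustrated with a minimal sketch. The log format, regex, and threshold below are assumptions for illustration, not Splunk's implementation:

```python
import re
from collections import Counter

# Hypothetical sshd-style log line; real log formats vary by system.
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def suspicious_sources(log_lines, threshold=5):
    """Count failed logins per source address; flag addresses at or over the threshold."""
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(2)] += 1
    return {addr: count for addr, count in failures.items() if count >= threshold}

logs = (
    ["Jan 28 03:14:07 sshd[991]: Failed password for root from 203.0.113.9"] * 6
    + ["Jan 28 03:20:11 sshd[992]: Failed password for alice from 198.51.100.4"]
)
print(suspicious_sources(logs))  # {'203.0.113.9': 6}
```

A single failed login is noise; six from one address in a short window is a pattern worth a human's attention — which is exactly the triage that log analytics automates at scale.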
Business risk consultancy Riskskill has highlighted what it sees as the main areas of increasing business risk for UK companies in 2014:
1. Fraud Risks
In 2014, fraud risks are likely to be the major contender for exposing many businesses to significant risk, as the closure of the government’s National Fraud Authority (NFA) could, some feel, be seen as a huge victory for fraudsters. The NFA was set up to consolidate and focus the handling of fraud and to direct the strategic elements of the attack on the fraudster. The NFA's objectives were previously diluted from eight to three, with the more 'strategic issues' removed. Now its remaining operational functions have been atomized into several government silos.
ASIS International has announced the publication of a revised version of the ANSI/ASIS Chief Security Officer - An Organizational Model. This standard provides a model for organizations to use when developing a senior leadership function responsible for providing comprehensive, integrated risk strategies to protect an organization from security threats.
This standard replaces the 2008 edition of the ANSI/ASIS Chief Security Officer Organizational Standard.
“Early on, it was determined that the standard’s purpose was to state the risks that need to be managed within an organization — of any size — and based on those risks, determine the skills and competencies needed to manage those risks,” said Jerry Brennan, technical committee chair, and chief executive, Security Management Resources. “By identifying who owns what, who is accountable, and what is shared, organizations can then determine what is needed within its ‘senior security executive’ position and the competencies that are best suited for that role.”
The standard’s model for a senior leadership position is presented at a high level and designed as a guide for the development and implementation of a strategic security framework. The structure is characterized by appropriate awareness, prevention, preparedness, and necessary responses to changes in threat conditions. Specific considerations and responses are also addressed for deliberation by individual organizations based on identifiable risk assessment, requirements, intelligence, and assumptions.
“The perspective through which organizations evaluate and integrate operational risk within their strategic plan continues to be a dynamic process which not only impacts the role of the ‘senior security executive’ but also the position or positions that may assume that role,” said Charles Baley, ASIS Standards and Guidelines Commission Liaison and chief security officer, Farmers Group, Inc. “This Standard focuses on the importance of the function and not a single title or position.”
Applicable to both private and public sector organizations, the standard provides a methodology to evaluate and respond to a spectrum of threats to tangible and intangible assets on both a domestic and global basis.
View the executive summary (PDF).
CIO — Recently I saw yet another slide presentation showcasing the decline of enterprise IT spending and the comparable increase in public cloud business. The conclusion? Enterprises just don't have money to spend and it's killing enterprise vendors.
This is fundamentally not true. What's really happening is that users are increasingly using public cloud services, and the expenses they incur are being reimbursed, so the money's theirs. I've also seen several studies showing that moving to the cloud is expensive — twice what it would cost to build services internally, according to an internal analysis I recently reviewed, and five times as much if one uses the Oracle alternative.
After reading this blog post, if you would like more detail, fellow Forrester analyst Christian Kane and I have collaborated on two short reports describing the acquisition of AirWatch through the lens of mobile workforce enablement and a second report through the lens of mobile security. Enjoy the reports, and as always... we love to read your comments!
Discussions about IT and business alignment are almost taboo these days. I suppose people have heard too much about it in the past decade.
Yet, that’s exactly the kind of discussion data experts seem to be calling for when it comes to how IT manages data.
“Over the past year it is becoming increasingly clear that we have to stop thinking as data managers and start thinking as data designers,” writes Forrester analyst and data management expert Michele Goetz in a recent Information Management article. “What matters is what data drives for the business first and then design a data system around that. We need to educate ourselves on what the business does with the data.”
The widening gap between economic losses and insured losses from natural catastrophes is our topic du jour.
Guy Carpenter’s GCCapitalIdeas.com just published this chart showing that approximately 70 percent of global economic losses from natural catastrophes were uninsured between 1980 and 2013:
Almost from the very beginning of the modern virtualization movement, technology futurists wondered what it would be like to have a completely virtualized data center. What would be the benefits, and the major challenges, to building entire compute/storage/networking infrastructure complete in logic?
Those questions are about to be answered now that the IT industry is taking seriously the idea of the software-defined data center (SDDC). In fact, the concept is now openly discussed as the next major segment within the increasingly diversified enterprise infrastructure market.
Organizations are turning to Big Data because they believe more information will improve decision-making, whether it’s whom to target for a sale or whether a product should be recalled.
But what if the real value of the data isn’t in providing us with more information, but in replacing us as decision makers?
Andrew McAfee, co-director of the Initiative on the Digital Economy in the MIT Sloan School of Management, goes way meta in two recent Harvard Business Review blog posts that question not just how to use data — but who should be using it.
In 'The Forrester Wave: Disaster-Recovery-As-A-Service Providers, Q1 2014', Rachel Dines provides an overview of the current global market and ranks the key players.
The report says that there has been significant growth and adoption of disaster recovery as a service (DRaaS) across all sectors 'as I&O professionals are looking for ways to improve their recovery objectives without increasing spend'. The results of the latest Forrsights Hardware Survey show that 19 percent of 438 surveyed companies have implemented DRaaS already and a further 13 percent are planning to implement it during 2014.
Seventeen criteria are used by Forrester to provide an evaluation of DRaaS vendors. Using these, 12 companies were identified as 'the most significant service providers': Axcient, Barracuda Networks, CenturyLink Technology Solutions, EVault, HP, IBM, iland, nScaled, Persistent Systems, Quorum, SunGard, and Verizon Terremark. Of these, Forrester states that 'iland and SunGard lead in a tight race of strong competitors', followed closely by IBM, nScaled, Verizon Terremark, and EVault.
On this day in 1901, Queen Victoria died, ending an era in which most of her British subjects had known no other monarch. She was born in 1819 and came to the throne after the death of her uncle, King William IV, in 1837. Her 63-year reign was the longest in British history. She oversaw the growth of the British Empire on which the sun never set. Queen Victoria restored dignity to the English monarchy and ensured its survival as a ceremonial political institution. She also brought a stability to the monarchy that has stayed with the country.
How can you bring stability to your compliance program? One of the most important steps that you can take is to regularly assess your risks through a risk assessment. I often hear some of the following questions posed by compliance practitioners regarding risk assessments: What should you put into your risk assessment? How should you plan it? What should be the scope of your risk assessment? These, and other, questions were explored in a recent article in the ACC Docket, entitled “Does the Hand Fit the Glove? Assessing Your Company’s Anti-Corruption Compliance Program” by a quartet of authors: Jonathan Drimmer, Vice President and Assistant General Counsel at Barrick Gold Corp.; Lauren Camilli, Director, Global Compliance Programs at CSC; Mauricio Almar, Latin American Regional Counsel at Halliburton; and Mara V.J. Senn, a partner at Arnold & Porter LLP.
Computerworld — Last summer, when I wrote about Apple's relationship with enterprise IT, I talked about earlier Apple decisions to stop producing its rack-mounted Xserve server and refocus its server platform, OS X Server, on the small business market. Since then, Apple has largely focused on making its consumer-oriented products -- the iPhone, iPad, and Mac -- as enterprise-friendly as possible. These devices ship with out-of-the-box support for key enterprise technologies like Active Directory, Exchange, ActiveSync, and a wide range of mobile device management (MDM) solutions that can manage both iOS devices and Macs.
That strategy makes a lot of sense because it removes the need for a large investment in infrastructure or software dedicated specifically to supporting Apple's products. The strategy also built on the BYOD trend that has reshaped the very concept of how IT handles mobile technology. It's a strategy that Apple should continue.
Last week was the 20-year anniversary of the Northridge Earthquake. The 6.7-magnitude event that hit on Jan. 17, 1994 at 4:30 a.m. stands as the second costliest disaster in U.S. history, following Hurricane Katrina. Northridge cost $42 billion in total damages, while Katrina cost $81 billion, according to federal figures.
The U.S. Geological Survey (USGS) said that 60 people were killed, more than 7,000 injured, 20,000 were left homeless and more than 40,000 buildings were damaged in Los Angeles, Ventura, Orange and San Bernardino Counties.
The San Fernando Valley saw maximum shaking intensities of IX on the Modified Mercalli scale in the areas of Northridge and Sherman Oaks. Significant damage also occurred in Glendale, Santa Clarita, Santa Monica, Simi Valley, Fillmore and in western and central Los Angeles, the USGS said.
It wasn’t long ago that Business Continuity planning and IT Recovery Planning were done by different groups who never talked to each other. In many organizations today the two groups have begun to work together – however grudgingly – to forge links between the IT requirements of critical business functions or processes and the prioritization of recovering IT assets. The two groups may never meet in the same room, but they share the same data – and that’s a very good thing.
Of course, it still doesn’t happen in every organization. And even where it does, there are often other planning groups that keep to themselves – to the detriment of their organization.
The data center is becoming more efficient, more modular and a whole lot more flexible as it transforms from the static architectures of the past to the virtual, dynamic infrastructure of the future. But part of this bargain calls for increasingly dense hardware footprints and steadily rising utilization rates, and that inevitably leads to heat generation.
Small wonder, then, that even as demand for key hardware elements like servers is declining, the need for advanced cooling systems is on the rise. According to MarketsandMarkets, the data center cooling systems market is on pace to top $8 billion by 2018, up from 2013’s level of about $4.9 billion. Part of this growth is due to the fact that data infrastructure across the board is increasing – more data centers mean more cooling systems. But the industry is also enjoying a renaissance of sorts as new, highly efficient technologies fulfill the need to make existing infrastructure more energy efficient. As the “do more with less” mantra takes hold, one of the most significant cost-saving measures available to the enterprise is new, highly efficient cooling infrastructure.
Twenty years ago, a fault that scientists didn’t even know existed slipped, triggering a massive 6.7 magnitude earthquake centered beneath the San Fernando Valley, with shockwaves rippling throughout the greater Los Angeles area.
When the strongest shaking ceased, the region had suffered 57 deaths and more than $20 billion in damage. The newly formed Southern California Earthquake Center (SCEC), founded in 1991 and headquartered at USC, stepped in to find out exactly what happened and what could be done about it.
Statistics and scare tactics don’t work; instead the starting point is ensuring that you have a deep understanding of the business landscape, strategies and risks.
By Larry Robert
There are many approaches that business continuity practitioners can take in convincing executive management to allocate funds and resources to a robust business continuity program. Many try to overwhelm with statistics and scare tactics. I believe these actually detract from the program by making sweeping examples that are typically outdated, untrue, and not applicable. Industry statistics, in many cases, are either unverifiable or can be traced back to a vendor that may benefit from the negative information. We owe it to our profession to always strive for accurate, verifiable information when citing examples in support of developing and maintaining a program.
As you will see below, the only way to build awareness among senior leaders is to discuss the specific risks to their particular business. Simple, yet very effective. As you develop yourself as a mature business continuity professional, you can bring into the conversation some of your own experiences from actual events and how various solutions either contributed to a quick recovery or further complicated the recovery process.
By Reuven Harrison.
Balancing effective IT security against a business’s need for agility is an age-old issue. But today, getting that balance right is trickier than ever. Organizational networks are increasingly sprawling, complex and hard to secure, with ever more changes required at the server level to ensure businesses can securely run all the applications they need, as and when they need them. In such a highly complex environment – characterised by constant change – a reactive, manual approach to security is no longer adequate. Mistakes can (and do) creep in, exposing organizations to cyber-attacks, data breaches and industrial espionage.
Yet slowing down the change process in order to ensure security can be similarly risky, since this will stifle the very agility that is key to business survival and success. Unless network managers fundamentally rethink their manual approach and adopt fresh strategies supported by automated tools, they face a ticking time-bomb that could seriously damage not just their security, but their business credibility and competitiveness.
An interesting article in the latest NFPA Journal looks at the rise of social media and its effects on emergency response and communication management; and provides some useful general social media crisis communications advice.
#Are You Prepared? highlights several natural disasters in which social media played a key role in keeping both the affected public and emergency responders informed. It also explains how FEMA established a ‘Hurricane Sandy Rumor Control’ website to counter false and misleading information circulating on social media during that disaster.
The article stresses how important it is for emergency and crisis management professionals to understand how social media works: "If social media is able to push out emergency information to critical audiences, we have to be able to use all of these tools," says Jo Robertson, chair of the NFPA 1600 Social Media Task Group and director of crisis preparedness for the chemical company Arkema. "Social media use is a reality. We all have to get past the notion that this is something we can ignore."
Climate change is among the five most likely and most potentially impactful global risks, according to the just-released World Economic Forum (WEF) 2014 Global Risks Report.
The report assesses 31 risks that are global in nature and have the potential to cause significant negative impact across entire countries and industries if they take place.
An analysis of the five risks considered most likely and most impactful since 2007 shows that environmental risks, such as climate change, extreme weather events and water scarcity, have become more prominent since 2011.
Be honest – do you currently have a malicious software reporting policy? Just relying on the existence of anti-virus software and firewalls may be too optimistic nowadays. The potential damage to information assets and productivity, let alone identity or bank account theft, suggests that a malware reporting policy should be in place in any organisation. Even Google is asking users to contribute to tightening up security by reporting any nefarious activity from websites listed in its results pages. And as an additional source of concern, it seems malware infections are also being caused by some of the very entities that are supposed to be protecting us.
Aon Global Risk Consulting has conducted research to understand more about organizations’ attitudes to the top threats they face in today’s ‘hyper connected’ world.
With a focus on analytics, Aon wanted to further explore some of the results of its biennial Global Risk Management Survey (GRMS) published in 2013, so it subsequently asked captive directors (executive and non-executive) for their opinions on the rankings of the top 50 risks identified.
Stephen Cross, Chairman, Aon Centre for Innovation and Analytics, said “We felt that the results from the GRMS 2013 had thrown up some anomalies. With our expertise in the captive space, we approached captive directors for their opinions on the rankings of various risks to give us a more holistic view. As a result, we believe there is a real debate to be had across the risk management industry on insurable versus uninsurable risk. Understanding risk has always been a fact of business life, but today, the magnitude, complexity and speed have increased exponentially. That is why business leaders are concerned with how they manage risk.”
A new ENISA report provides advice on how to implement incident reporting in cloud computing. ‘Incident Reporting for Cloud Computing’ looks at four different cloud computing scenarios and investigates how incident reporting schemes could be set up, involving cloud providers, cloud customers, operators of critical infrastructure and government authorities.
Using surveys and interviews with experts, ENISA identified a number of key issues:
- In most EU Member States, there is no national authority to assess the criticality of cloud services.
- Cloud services are often based on other cloud services. This increases complexity and complicates incident reporting.
- Cloud customers often do not put incident reporting obligations in their cloud service contracts.
The report contains several recommendations, including:
- Voluntary reporting schemes hardly exist and legislation might be needed for operators in critical sectors to report about security incidents.
- Government authorities should address incident reporting obligations in their procurement requirements.
- Critical sector operators should address incident reporting in their contracts.
- Incident reporting schemes can provide a ‘win-win’ for providers and customers, increasing transparency and, in this way, fostering trust.
- Providers should lead the way and set up efficient and effective, voluntary reporting schemes.
CIO — This year, the IT services industry saw customers doing more of their own IT services deals, testing the service integration model, and continuing to struggle with outsourcing transitions. CIO.com again asked outsourcing observers to tell us what they think is in the cards for the year ahead. And if they're right, 2014 could be the year customers--and a few robots--take greater control of the IT outsourcing space.
1. The Rise of the Machines
Say hello to the latest IT services professional: the robot. "2014 will see significant growth in the development and implementation of robot-like technologies that will automate many tasks currently performed by full-time employees in [outsourcing] deals," says Shawn C. Helms, partner in the outsourcing and technology transactions practices at K&L Gates. "Given the rise of robots replacing people in manufacturing and logistics, it is not a stretch to predict that robots will move up the intellectual value chain as artificial intelligence continues to develop."
According to the World Economic Forum’s Global Risks 2014 report, the chronic gap between the incomes of the richest and poorest citizens is the risk most likely to cause serious global damage in the next decade. Looking forward, the 700 experts queried emphasized that the next generation will only feel this disparity more acutely if current conditions continue. Those presently coming of age face “twin challenges” of reduced employment opportunity and rising education costs, prompting the World Economic Forum to consider the impact on political and social stability as well as economic development.
“Many young people today face an uphill battle,” explained David Cole, group chief risk officer of Swiss Re. “As a result of the financial crisis and globalization, the younger generation in the mature markets struggle with ever fewer job opportunities and the need to support an aging population. While in the emerging markets there are more jobs to be had, the workforce does not yet possess the broad based skill-sets necessary to satisfy demand. It’s vital we sit down with young people now and begin planning solutions aimed at creating fit-for-purpose educational systems, functional job-markets, efficient skills exchanges and the sustainable future we all depend on.”
CIO — The strained, dysfunctional relationship between CIOs and marketers can be overcome, in part by rallying around the customer. After all, we're in an age that requires IT and business people to put aside their differences in order to bring business technologies to bear that will win, serve and retain customers.
At least this is the key finding in a new report by Forrester Research. "The age of the customer places new demands on organizations, requiring changes to how they develop, market, sell and deliver products and service," the report says. "IT and business teams frequently inhibit successful digital experience execution by failing to work cooperatively."
Rentsys Recovery Services, a US-based provider of comprehensive and integrated disaster recovery solutions, has announced that it has acquired IT-Lifeline, a provider of comprehensive disaster recovery and compliance testing solutions for the financial services industry.
The acquisition came about through a prior strategic partnership between the two companies in which IT-Lifeline offered Rentsys’ business continuity services. IT-Lifeline’s BlackCloud, a compliance-focused vaulting, testing and recovery solution, will be incorporated into Rentsys’ disaster recovery offerings. Rentsys will also retain IT-Lifeline’s support team, which brings a wealth of knowledge regarding regulatory requirements as well as cloud technology.
“This acquisition expands our product offering and enhances our ability to meet the evolving business continuity and compliance needs of our customers,” said Walt Thomasson, managing director of Rentsys Recovery Services. “IT-Lifeline’s BlackCloud along with the recent addition of RCM enhances our ability to deliver business continuity solutions that ensure our clients will have access to their critical business functions if a disaster or outage does occur.”
CIO — C-suite executives today are striving to drive data-centric transformations of their businesses, but most are struggling to connect the dots. That's according to a new report by KPMG Capital, a global investment fund created by KPMG International in November to invest in innovation in data and analytics (D&A).
"Long before the term 'big' was first applied to data, organizations were struggling to make sense of all the information they had," says Mark Toon, CEO of KPMG Capital and global leader, D&A. "Over the past five years that focus on data has started to shift. Today, the issue is no longer about owning the most data but rather about how to gain the most insight from it. In short, how to turn data into insights, and insights into real business advantage."
"Data is everywhere, telling us everything," he adds. "But do companies really know where to look? The reality is that turning mountains of data into valuable, practical and actionable business analytics is not nearly as straightforward as people believe."
IDG News Service (New York Bureau) — Dispelling any lingering doubt that IBM sees cloud computing as the way of the future, the company announced that it will invest US$1.2 billion this year in expanding its global cloud infrastructure.
"Having lots of data centers in lots of different countries around the world will be important in the long term," said IBM SoftLayer CEO Lance Crosby. "We want the world to understand that cloud is transformational for IBM."
The company plans to open 15 new data centers this year, more than doubling the cloud capacity it acquired when it purchased SoftLayer last year for $2 billion. It plans to combine the new data centers, the existing SoftLayer data centers, and the data centers it already ran before the SoftLayer purchase into a single operation that would provide public and private cloud services to its customers, as well as provide services for internal operations.
Many of our readers should find the topics and outcomes of the 2013 Risk Frontier Survey interesting. Although largely centering on matters of the European risk and insurance management community, this survey has valuable information that applies to organizations in all parts of the world.
New risks require new thinking – and this is why “The Risk Frontiers Survey” is so worthwhile: it delivers an in-depth picture of the current state of the risk management profession, gleaned from its leading practitioners. It also outlines the big risk issues and offers ideas on how risk managers and those in the insurance market need to respond to these risks and challenges.
The survey is split into two halves. The first focuses on the big risks and the role of the risk and insurance manager. The second half focuses more on risk transfer than management.
Stick to core competence and competitive advantage, and outsource the rest: such has been the mantra of businesses for decades now. The logic is simple. By using external partners specialised in the non-core activities, for example, accounting, logistics and payroll, an enterprise can benefit from that partner’s economies of scale and superior expertise. Profits go up and business continuity is reinforced. Yet outsourcing still gives rise to disappointment and animosity. It turns out that while a watertight contractual agreement is a pre-requisite for dependable outsourcing, it isn’t sufficient. Organisations need more.
BIA results can help determine many aspects of the BCM/DR program to come; they validate what is required – and what’s not. What is required is then settled through the development of the various strategies and approaches that flow from the BIA findings. However, that doesn’t stop individuals at all levels from believing they know what they require for their restoration and recovery strategy, regardless of what the BIA findings state.
This is because many individuals have a difficult time accepting that theirs may not be the most important area within the organization and thus isn’t required to be available immediately. And if a department – or particular aspects of a department – isn’t required immediately after a disaster, many will disregard that fact and begin to state what they must have: what they want versus what they actually need.
The difference between want and need is something that all BCM/DR practitioners must clearly understand and communicate to department leads, especially those responsible for acquiring, developing and implementing the various strategies required to address BIA findings.
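One illustrative way to keep the "want versus need" discussion anchored to the BIA is to drive recovery priorities directly from the validated recovery time objectives rather than from departmental requests. A minimal sketch, with department names and hour values invented purely as examples:

```python
# Illustrative sketch: rank recovery priorities from BIA findings.
# Department names, RTOs and requested times are hypothetical examples.

bia_findings = {
    "Payments processing": {"rto_hours": 4,  "requested_hours": 4},
    "Customer support":    {"rto_hours": 24, "requested_hours": 2},   # "want" faster than "need"
    "Internal reporting":  {"rto_hours": 72, "requested_hours": 8},
}

# Sort departments by the RTO the BIA actually validated, not by what was requested.
priority = sorted(bia_findings.items(), key=lambda kv: kv[1]["rto_hours"])

for dept, f in priority:
    gap = f["rto_hours"] - f["requested_hours"]
    flag = " (requested recovery faster than the BIA supports)" if gap > 0 else ""
    print(f"{dept}: recover within {f['rto_hours']}h{flag}")
```

Flagging the gap between requested and validated recovery times gives the practitioner a concrete artifact to bring to department leads when "want" drifts away from "need".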
Have a quick look around you and see what is powered by electricity in your buildings: pretty much everything. We need to start asking questions, such as:
- Can we maintain power for all our critical services through generator provision, remembering that access to large quantities of fuel becomes difficult without electricity?
- Are we up to date with our alternative power supply testing regimes?
- If our plans include the hire of generators, will they be available to us? Do we have an agreement in place? Remember that everyone else may want one too.
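The fuel question above is worth putting numbers against: on-site storage buys only a finite number of hours. A rough sketch, with all figures hypothetical (check the actual generator datasheet and load profile):

```python
# Rough sketch: how long will on-site fuel last a standby generator?
# All figures below are hypothetical examples, not real equipment data.
tank_litres = 2000          # on-site fuel storage
burn_litres_per_hour = 60   # consumption at the expected load
runtime_hours = tank_litres / burn_litres_per_hour
print(f"Approximate runtime: {runtime_hours:.0f} hours")
```

If the answer is measured in hours rather than days, the refuelling agreement matters as much as the generator itself.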
As compliance programs mature, they become less top-down driven and more inculcated into the DNA of a company. The more doing business ethically and in compliance becomes part of the way your company does business, the better off you will be down the road. One of the methods that you can use is to set up a compliance network within your organization. I recently read an article in the Fall issue of the MIT Sloan Management Review, entitled “Designing Effective Knowledge Networks”, by Katrina Pugh and Laurence Prusak, in which they discussed knowledge network design as a mechanism to facilitate desired behaviors and outcomes. I found their ideas very useful in the compliance context.
Generally speaking, knowledge networks are a “collection of individuals and teams who come together across organizational, spatial and disciplinary boundaries to invent and share a body of knowledge. The focus of such networks is usually on developing, distributing and applying knowledge.” This is what a compliance regime should strive for within a company’s organizational structure. The authors believe that with the design of an effective knowledge network, a company can not only affect dynamics but also drive behaviors. In designing such a knowledge network, the authors postulate that there are “8 dimensions of a knowledge network” encompassing strategic, structural and tactical issues that must be considered. They are as follows:
How’s this for a definition of Big Data: If it doesn’t make sense in a system built 10-plus years ago using principles from 30 years ago — then you’re dealing with Big Data.
That’s the unusual definition offered by William McKnight in a recent Q&A published on, oddly enough, the Huffington Post. Phil Simon, who consults, writes and speaks about technology, data and business issues, conducted the interview.
McKnight makes the case for why data outranks even the storefront as a strategic asset today.
The enterprise/IT industry has traditionally been segmented into four major groups: the SMB/SOHO, the mid-level organization, the large enterprise of the Fortune 5000, and the newest member: the hyperscale environments of Google, Facebook and other Web-facing entities.
The historical pattern has been for technologies developed for the big boys to trickle down to the smaller fry, gradually enabling advanced capabilities to percolate throughout the entire industry. When it came to hyperscale, however, the thinking was that most of the supporting technology, which was primarily customized anyway, would not apply to the average data center because the levels of scale simply were not needed. Even if Ford Motor Company were to shed all its dealerships and sell cars exclusively online, it would not approach the volume of, say, Amazon.
The Middle Ages had a phrase for a term that carried no real meaning: “a distinction without a difference.” Oftentimes, in the desire to catch a technological/marketing wave, salespeople and consultants overuse terms coined to describe one thing to mean something entirely different. Not long ago, I was reading an article in the New York Times about department stores tracking their customers by using their wireless devices, using their movement through their stores to predict what they were interested in and what they bought. The article described this as yet another instance of the importance of Big Data. The more I read, the more I found this reference both comical and disturbing.
Twenty years on, the Northridge earthquake remains the costliest U.S. earthquake for insurers, causing $15.3 billion in insured damages when it occurred (about $24 billion in 2013 dollars), according to the Insurance Information Institute (I.I.I.).
The 6.7 magnitude quake, which hit Los Angeles on January 17, 1994, also still ranks as the fourth-costliest U.S. disaster, based on insured property losses (in 2013 dollars), topped only by Hurricane Katrina, the attacks on the World Trade Center and Hurricane Andrew.
On the global scale, the Northridge earthquake still ranks as the second costliest earthquake for insurers, after Japan’s earthquake and tsunami of 2011, according to Munich Re.
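The conversion of the 1994 insured loss into 2013 dollars is a straightforward consumer-price-index adjustment. As a quick check, using approximate annual-average CPI-U values:

```python
# Adjust the Northridge insured loss from 1994 dollars to 2013 dollars
# using the ratio of consumer price indices.
# CPI-U annual averages are approximate round figures, not exact BLS data.
cpi_1994 = 148.2
cpi_2013 = 233.0

loss_1994_usd_bn = 15.3
loss_2013_usd_bn = loss_1994_usd_bn * (cpi_2013 / cpi_1994)

print(f"${loss_2013_usd_bn:.1f} billion in 2013 dollars")
```

The result lands near the I.I.I.'s "about $24 billion" figure, which suggests a simple CPI adjustment is roughly what was applied.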
Awareness of risk can lead to unforeseen risk behaviors based on knowledge that is sufficiently convincing to lead to false positives.
By Geary W. Sikich
“The more you know, the more you know you don't know.” Attributed to Aristotle.
Knowledge opens the door to understanding; the risk of knowledge, however, is recognizing how much you do not know.
Unfortunately, when it comes to organizational risks, we have a very limited understanding of where risk lies or where it is going to materialize.
By Garrett Freeman.
Building facility managers wear many hats. In addition to focusing on the maintenance and operational aspects of a building, you may also be in charge of controlling safety-related issues and helping ensure business continuity while property damage restoration procedures are underway. If it is necessary to move the business and its staff to a recovery site or a temporary worksite after a flood or storm, it is important for you to know the property and water damage restoration processes. This helps assure a quick recovery and provides peace of mind.
Property damage restoration steps after a flood
When water damage is involved, time is of the essence in restoring a building and its contents. Instead of tackling water damage restoration in-house, call the professional remediation service listed in the emergency disaster plan right away. The experts may tell you how to safely start the drying process as you wait for the technicians to arrive.
In data management, the way you delete information can be as important as the way you keep it. Confidential information that finds its way into the wrong hands can lead to loss of competitive advantage, public relations crises or other threats to business continuity. However, that doesn’t mean the wholesale destruction of data within an organisation: legal archival requirements exist for publicly held businesses. In addition, information is now a valuable asset for many organisations. But how do you manage its selective release or ‘sanitisation’? This is already a challenge for paper-based information; for digital data, the difficulty is greater still.
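For digital records, selective release usually means masking sensitive fields before a document or dataset leaves the organisation, rather than destroying whole records. A minimal sketch, with field names and the sensitivity list invented for illustration:

```python
# Illustrative sketch: redact sensitive fields from a record before release.
# Field names and the sensitivity list are hypothetical examples.

SENSITIVE_FIELDS = {"account_number", "salary", "home_address"}

def sanitise(record: dict) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v)
            for k, v in record.items()}

employee = {"name": "J. Smith", "department": "Finance", "salary": 65000}
released = sanitise(employee)
print(released)  # salary masked, other fields untouched
```

Real sanitisation policies are considerably harder than this sketch implies: metadata, revision history and derived values can all leak what the visible fields conceal.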
The new year is often a time for making resolutions to improve personal health, productivity and wellbeing. Why not use that opportunity to make similar resolutions for your Business Continuity Management program? This year, make a pledge to keep your Business Continuity Plan trim, while still meeting Audit and Compliance requirements.
Business Continuity professionals often walk a fine line between perception and reality. The result is often a 3-inch-thick ring binder with hundreds of pages of administrative documentation. Is it possible to kick the more-is-better habit and slim down those BCPs from overweight to featherweight? It depends…
Digitally empowered customers are disrupting every industry; the age of the customer brings with it some inherent risks that will push organizations to increase spending on security software. In Asia Pacific, security software has leapfrogged other software categories and leads the region in terms of expected software spending growth in 2014 (see figure below).
IDG News Service (San Francisco Bureau) — Neiman Marcus has been notifying customers of a data breach after hackers stole merchant card information for an undisclosed number of shoppers.
The high-end retailer said it was working with the U.S. Secret Service and a forensics firm to investigate the theft, which it said it learned about in December from its merchant card processor.
"On January 1st, the forensics firm discovered evidence that the company was the victim of a criminal cyber-security intrusion and that some customers' cards were possibly compromised as a result," Neiman Marcus said in an emailed statement.
I’ve said many times in the past that physical infrastructure is and will remain a crucial component of the data environment. After all, software isn’t much good without a solid hardware foundation. But as virtualization and software-defined architectures continue to work their way into the enterprise, it is also clear that the majority of enterprise management activity will shift to these higher level architectures.
Hardware, in other words, will be important, but boring. And that poses some interesting questions as to how data environments are to be built and managed, particularly in the way the burgeoning field of enterprise architecture (EA) will come to supplant many traditional IT roles.
Along with January renewals and analyzing whether existing policies offer sufficient coverage, the new year is a perfect reminder to review company-wide emergency plans. While 2013 may have been a relatively light year for catastrophe losses, there’s no reason to assume 2014 will be, too.
Check out this infographic from Boston University’s Masters in Specialty Management program for a jump-start on identifying the risks of natural disaster and updating plans for how to handle any emergency:
Dare to ask that question of your Top Management? Maybe not, but a Risk Manager would try to understand their attitude to risk and their mythical 'Risk Appetite'. As a Business Continuity Manager, why not explore their 'Maximum Attitude to Disruption' (M.A.D.), a phrase I believe I coined and use uniquely, hoping it becomes more prevalent in a commercially driven BC world.
Risk appetite is a feeling, a sense of danger perhaps. Your risk attitude is what you intend to do about avoiding that danger, your Maximum Attitude to Disruption is a mixture of your Top Management’s risk appetite and risk attitude expressed in a business continuity context.
Data Privacy Day is on January 28. But isn't all hope lost when it comes to the P-word? Interestingly, Daniel Solove is one key expert who doesn't think so: His recent Year in Privacy roundup sounds a number of positive notes, largely having to do with regulatory pressure driven by public pressure. In the age of the customer, we really can see "water wear away stone" when ordinary people demand change.
CIO — Outlook connection problems? Salesforce.com system crashing repeatedly? Trouble connecting to internal human resources systems? You're not alone.
According to a recent study from Compuware, of the more than 300 business executives surveyed, 48 percent reported they experience tech performance issues daily, and three out of four of those executives say the frequency and severity of these issues isn't improving.
It's not that executives and IT leaders don't want to fix these problems, says Bharath Gowda, director of technology performance, Compuware. It's that they're pressured to focus on what are seen as larger, more pressing issues instead of these day-to-day headaches, he says.
CIO — McAfee's comprehensive 2014 security report, released at the end of December, goes beyond rehashing the same set of threats in ever-increasing volume to instead reflect the impact of digital currencies, NSA leaks and social media. Going through the report, one thing becomes eminently clear: We are in no way prepared for what's coming in 2014.
I'll cover the report's main elements, but I suggest you read it thoroughly yourself — perhaps after a couple glasses of good brandy.
According to AMI’s study, 2014 North America SMB Mobility Landscape, Opportunity Assessment & Outlook, small to midsize businesses will help the mobility market grow to a predicted $71.5 billion by 2018. The report says that small businesses account for almost eight of every 10 dollars spent on mobility services and products in the U.S. and Canada, while midsize businesses account for 20 percent of mobile-related investments.
A recent Canadian Medical Association Journal article shows how data specialists in Ontario are using integration and analytics to reinvent health systems.
Data-integration innovations are refashioning the health systems from supporting acute care to “total patient data capture,” according to the article.
I’ll say this much for it: It’s stunning in scope. These health care data projects are pushing beyond integrating health care data to create the revolutionary, 360-degree vision of people that makes marketing leaders salivate.
CSO — The Internet of Things (IoT) is a mass of billions of connected devices, from cars to wireless wearable products. Cisco's Internet Business Solutions Group estimated 12.5 billion connected devices in existence globally as of 2010, with that number doubling to 25 billion by 2015.
In light of this burgeoning market, CSO identifies five categories of IoT devices at risk in the coming year. CSOs who are aware of the threats and potential damage to their organizations can prepare accordingly.
Weather damage never goes out of season. According to a new report from the Insurance Information Institute (I.I.I.), winter storms are historically the third-largest cause of catastrophe losses, behind only hurricanes and tornadoes.
“Winter storms accounted for 7.1 percent of all insured catastrophe losses between 1993 and 2012, placing it third behind hurricanes and tropical storms (40 percent) and tornadoes (36 percent) as the costliest natural disasters,” said I.I.I. President Robert Hartwig.
Between 1993 and 2012, winter storms resulted in about $27.8 billion in insured losses—or $1.4 billion per year, on average, according to Property Claims Service for Verisk Insurance Solutions.
Certification Ensures xMatters’ IT Alerting and Communications Platform is Fully Integrated with Leading SaaS IT Management Solution
SAN RAMON, Calif. – xMatters, inc., a global leader in enabling business processes with communications, announced today that its leading cloud-based, automated messaging and communications platform is now certified for integration with ServiceNow, the enterprise IT cloud company. With this integration, companies that are currently using ServiceNow’s industry-leading IT management solutions will be able to utilize xMatters for incident management, creating a single consolidated platform to manage all communications throughout the enterprise.
“ServiceNow’s leadership as a transformational SaaS IT provider combined with xMatters’ cloud based communications platform enables large organizations to stop overwhelming staff with too many alerts that don’t matter,” said Troy McAlpin, CEO of xMatters. “Organizations can now target the right person, deliver content in any language to any device, enabling a quicker resolution time. Our mutual clients can now take advantage of the certified integration, which assures rapid value and interoperability.”
With this certified integration, the accessibility of xMatters’ communications technology allows customers to design processes and workflows that reduce mean time to restore critical services and enable proactive communications to key stakeholders. Automated conference calls, an improved signal-to-noise ratio for IT notifications, and mobile-enabled workflows are the hallmarks of successful joint ServiceNow and xMatters customers.
“Our ServiceNow implementation was the first step in becoming a more automated IT shop meeting the needs of a rapidly innovating broader organization,” said Anoop Malkani, Head of Enterprise Service Management, British Sky. “We added automated incident creation and updates via an integration with HP OpenView. xMatters extends that automation by ensuring when IT incidents occur, communications are delivered to appropriate audiences with the most relevant messages and dramatically reduce Incident Response Time. This could be a response-required SMS to a resolution team member or an ‘FYI’ email to a manager. xMatters gives us the flexibility to align our communications with the type of incident we are dealing with, to customize messages to cater to the individual recipients and with the assurance that we can focus on resolving incidents - not worrying about internal message delivery.”
xMatters’ IT management solution is now accessible through ServiceNow’s Certified Partner Integrations.
About xMatters, inc.
xMatters enables any business process or application to trigger two-way communications (text, voice, email, SMS, etc.) throughout the extended enterprise. The company’s cloud-based solution allows for enterprise-grade scaling and delivery during time-sensitive events. More than 1,000 leading global firms use xMatters to ensure business operations run smoothly and effectively during incidents such as IT failures, product recalls, natural disasters, dynamic staffing, service outages, medical emergencies and supply-chain disruption. Founded in 2000 as AlarmPoint Systems, xMatters is headquartered in San Ramon, CA with European operations based in London. More information is available at www.xMatters.com. Follow us on Twitter and Facebook.
Of the five costliest natural catastrophes for the insurance industry in 2013, only two were U.S. events, and neither ranked first or second, according to Munich Re.
In its 2013 Natural Catastrophe Year-in-Review Webinar jointly presented with the I.I.I., Munich Re noted that hailstorms in Germany in July actually caused the highest insured losses of the year. This was also the insurance industry’s most expensive hail event in German history, costing $4.8 billion in overall economic losses, of which $3.7 billion was insured.
Flooding in Europe in June was the second most costly natural catastrophe for the insurance industry in 2013, causing insured losses of $3 billion, though overall economic losses from this event totaled $15.2 billion, making it the costliest natural catastrophe of the year in terms of economic losses.
By Jason Preston
Data centre security is a big issue, especially for co-location centres hosting multiple racks for multiple, often competing, clients. Yet while the security controlling access to the data centre itself can often be impressive, individual rack-level security is often sadly limited. Given the number of in-house staff and external engineers, from cable engineers to storage and server providers, passing through a data centre on a near-daily basis, poor rack-level security creates unnecessary risk.
Security is about far more than putting cages into the data centre. Organizations need a robust process that combines network accessed rack level security with change controls to create a complete, rack level access audit.
Without real-time, rack level access control, organizations cannot deliver the level of data centre protection increasingly demanded by governments and banks to prevent unauthorised access and criminal activity.
ENISA, the EU’s cyber security agency, has issued a new report studying network outages caused by power cuts. It provides recommendations to the electronic communications sector on how to withstand and act efficiently after power cuts, a key point being to establish better exchange of situational awareness and improved cooperation mechanisms within the sector and with the energy sector.
The Agency makes eight recommendations to National Regulatory Authorities (NRA) and providers within the electronic communications sector to reduce the risk of network and service outages caused by power supply failures.
CIO — Will 2014 see the emergence of a big data equivalent of the LAMP stack?
Richard Daley, one of the founders and chief strategy officer of analytics and business intelligence specialist Pentaho, believes that such a stack will begin to come together this year as consensus begins to develop around certain big data reference architectures—though the upper layers of the stack may have more proprietary elements than LAMP does.
Perpetual motion, like the alchemist’s stone, makes a great legend. The idea of something that keeps going indefinitely with no external source of energy is highly seductive, but also highly impractical. Friction or resistance of some kind will always intervene to eventually bring the system to a halt. However, almost-perpetual motion that just needs a teeny bit of energy to keep going is a much more realistic proposition. This is the big difference between new sales and loyalty sales for a company, where sales costs can diminish in favour of the repeat customer by a factor of up to 10. What is the secret sauce that lets companies strengthen their sales and their business continuity by so much, and for so little?
The BYOD movement in the enterprise is already taking some unusual twists. In addition to the variety of cell phones and smart devices IT must contend with, many users are utilizing personal cloud-based infrastructure. And that is leading to a host of integration, compatibility and security issues.
The personal cloud is nothing new. Consumers have been using online storage and synchronization for music, video and a range of other applications for several years now. According to ABI Research, the personal cloud market nearly doubled to $1 billion over the past year and is on pace to top $3.5 billion by 2018. In terms of raw capacity, personal clouds held about 685 petabytes in 2013 and will rise to 3,520 petabytes by 2018.
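The ABI capacity figures imply a compound annual growth rate of just under 39 percent. As a quick check:

```python
# Compound annual growth rate implied by ABI Research's capacity figures:
# 685 PB in 2013 growing to 3,520 PB in 2018 (a five-year span).
start_pb, end_pb, years = 685, 3520, 2018 - 2013
cagr = (end_pb / start_pb) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")
```

That growth rate comfortably outpaces the revenue trajectory ABI cites for the market, which hints at steadily falling per-gigabyte prices.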
Network World — Last September customers of storage provider Nirvanix got what could be worst-case scenario news for a cloud user: The company was going out of business and they had to get data out, fast.
Customers scrambled to transfer data from Nirvanix's facilities to other cloud providers or back on to their own premises. "Some folks made it, others didn't," says Kent Christensen, a consultant at Datalink, which helped a handful of clients move data out of the now-defunct cloud provider.
Nirvanix wasn't the first, and it likely will not be the last, cloud provider to go belly up. Megacloud, a provider of free and paid online storage, suddenly went dark without warning or explanation two months after Nirvanix's bombshell dropped. Other companies have phased out cloud storage products they once offered customers: Symantec's Backup Exec.cloud, for example, is no longer being sold by the company.
CSO — Today's information security professionals need to learn more swiftly, communicate more effectively, know more about the business, and match the capabilities of an ever-improving set of adversaries. But, it doesn't seem too long ago that all it took to survive in the field was a dose of strong technical acumen and a shot of creativity to protect the network, solve most problems, and fend off attacks.
Not so today. The role of the security professional has evolved beyond that of mere technical savvy, and now includes consultant, educator, investigator, and defender of the data.
To understand the traits and habits that matter the most, we reached out by phone, email, and social media to a number of security professionals who are successful in their respective areas of the field.
If there's one thing that screamed out from the interviews it was this: security knowledge alone is only the beginning of the skills and habits one needs to succeed.
One of the more public and ongoing corruption scandals in the world right now seems to be happening in Turkey. To say the events and facts are confused is an understatement. At this point no international players have been implicated, but given the breadth and scope of what has come out of that country over the past month or so, it would appear to be only a matter of time. It began in December when, according to the BBC, “The arrests were carried out as part of an inquiry into alleged bribery involving public tenders, which included controversial building projects in Istanbul. Those detained in the 17 December raids included more than 50 public officials and businessmen – all allies of the prime minister. The sons of two ex-ministers and the chief executive of the state-owned bank, Halkbank, are still in police custody.”
The Prime Minister claims that all of these arrests were simply political theater, generated by supporters of Fethullah Gulen, an influential Islamic scholar living in self-imposed exile in the US. Members of Mr. Gulen’s Hizmet movement are said to hold influential positions in institutions such as the police and the judiciary and the AK Party itself. Many believe the arrests and dismissals reflect a feud within Turkey’s ruling AK Party between those who back the Prime Minister, Recep Tayyip Erdogan, and those who back Mr. Gulen. On Tuesday the Prime Minister and his supporters struck back at the police by removing approximately 350 police officers from their positions in the capital, Ankara. The Prime Minister and his supporters have also attacked the judiciary leading the investigation, claiming that it is all politically motivated.
As data becomes more fungible, that is, less engaged with the physical infrastructure that supports higher level virtual and cloud architectures, the overall data environment starts to exhibit new characteristics, some of which will dramatically alter the way in which those environments are built and operated.
Of late, the concept of data gravity has been showing up in tech conferences and discussion groups. Coined by VMware’s Dave McCrory about four years ago, it describes the way data behaves in highly distributed architectures. Rather than becoming evenly distributed across a flattened fabric, data tends to collect in pockets, with smaller bits of data gravitating toward larger sets the same way that particles coalesced into galaxies after the Big Bang. Part of this is due to the nature of distributed architectures where the farther away storage is from processing centers and endpoints, the greater the cost, complexity and latency. But it is also a function of the data itself, particularly now that all information must be “contextualized” with reams of metadata for it to be useful.
What should you consider before using the cloud for disaster recovery? Martin Welsh and Patricia Palacio provide some guidance.
Whatever the company size or industry, the truth is that your business can't afford downtime, but traditional DR strategy investments have been difficult to justify. The majority of organizations attempt to protect only mission-critical applications, leaving second-tier, but still valuable, systems vulnerable to extended outages. It's hard to justify improving your disaster recovery capabilities when you're under pressure to cut IT costs and when DR is seen as an expensive insurance policy.
The major challenges faced when planning your disaster recovery strategies are:
Ian Kilpatrick describes six emerging technology trends that will need consideration during 2014:
Thanks to the NSA and GCHQ (coupled with ongoing allegations against the Chinese), security, corporate privacy and encryption have moved swiftly up the corporate agenda. Identity management, which has often been seen as a ‘nice to have’, will become even more of a ‘must have.’
For many years, wireless security was an afterthought to wireless deployment. However, in 2014, with the ratification of multi-gigabit 802.11ac, wireless security will become ever more important as organizations move from wired networks to wireless ones.
As one example, the majority of wireless access point deployments in SMEs are connected to the trusted network, effectively bypassing the gateway security controls and policies. This isn’t sustainable, as wireless becomes the core of the network. There will be a rise in the deployment of both 802.11ac wireless and associated access point security.
Although the dust hasn’t yet settled on the Edward Snowden revelations about the activities of the US National Security Agency, the consequences already extend beyond the purely technical. While the immediate reaction was to think of better ways in which to encrypt data, it also dawned on foreign organisations that they might want to review certain business relationships. The idea that the NSA could have direct backdoors into many US companies dampened the enthusiasm of certain international entities to continue trading with them. But will American enterprises alone have to increase their efforts to maintain business continuity, or are companies in other countries affected too?
New capacity, rate reductions and competition are a few factors contributing to a softer market and an 11% drop in reinsurance rate on line—a calculation of reinsurance premium divided by reinsurance limit—almost across the board, according to Guy Carpenter.
Much of this was driven by a decline of 15% in the United States, while property catastrophe pricing in Continental Europe and the United Kingdom fell by 10% and 15%, respectively, Guy Carpenter said.
Willis Re said in its “1st View” report that soft market conditions are not unique to the property catastrophe market. The report found that “with few exceptions rates are down on most lines at Jan. 1.”
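As a back-of-the-envelope illustration of the rate-on-line metric defined above (reinsurance premium divided by reinsurance limit), here is a minimal sketch; the premium and limit figures are hypothetical, only the 11% decline comes from the article:

```python
# "Rate on line" as defined in the article: reinsurance premium divided by
# the reinsurance limit. Premium and limit below are made-up example values.

def rate_on_line(premium: float, limit: float) -> float:
    return premium / limit

old = rate_on_line(premium=12_000_000, limit=100_000_000)  # a 12% rate on line
new = old * (1 - 0.11)                                     # the ~11% drop cited

print(f"Old rate on line: {old:.2%}, after an 11% decline: {new:.2%}")
```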
One of the last things I wrote about in 2013 was the Target breach. I suspect that breach is going to linger for a while, not only for customers but for businesses that (I hope) are now thinking a lot more about the security of their credit card systems and their computer networks overall. I know one small business owner is, because she asked me the types of questions she should ask regarding the security of her system. (And those questions may be a blog post for another day.)
Right before I went on holiday break, I had an email conversation with some folks from Guidance Software regarding the Target breach and the forensic investigation into what happened. One of the first things I was told was that we shouldn’t have been surprised that this breach happened because it was inevitable. As Jason Fredrickson, senior director of application development at Guidance Software, told me:
CIO — In the world of IT, things can and will go wrong. Failure can come from a number of things such as rushing to get too much done in a single project instead of breaking it down into smaller, more manageable projects. It can come from not allowing enough lead time for developers to do their part on the back-end or even from a consultant or vendor that led you down the wrong path.
Whatever the case, failure does happen; it's to be expected, and as the saying goes, life is "10 percent what happens to you and 90 percent how you react to it." Failure doesn't have to be a negative. With the right attitudes and processes in place it can be educational, informative and sometimes transformative.
You know from a logical perspective that you should learn from your mistakes. That is drilled into many of us beginning in childhood. The problem, according to experts, is that in the corporate world, a lot of companies don't handle failure well. They don't have adequate processes in place to examine why something failed, yet that examination is an essential part of the learning process.
Mobile CRM, which has been gaining momentum for quite some time, is a trend that will only get hotter in 2014, experts predict. Among other trends they expect to take root or accelerate in 2014: social CRM, more integration and smarter CRM.
Most industry observers agree that the adoption of mobile will be a dominant CRM theme in 2014 as companies look for ways to extend CRM capabilities to give employees convenient, always-on access to sales content, allowing them to address customer needs and collaborate with sales teams in real-time.
"CRM capabilities will be integrated into mobile tools to generate leads and opportunities both in-store and on the road," said Chris O'Connor, founder and CEO of Taptera. "We see companies that are using CRM continue to invest in out-of-the-box solutions through extension into mobile channels and customization to monitor, manage and drive leads, conversions, shorten sales cycles and improve customer support."
The arrival of the first major winter storm of 2014 just two days into the new year makes this a good time to take stock of the insurance implications.
The Insurance Information Institute (I.I.I.) reports that winter storms are historically very expensive and are the third-largest cause of catastrophe losses, behind only hurricanes and tornadoes.
From 1993 to 2012, winter storms resulted in about $27.8 billion in insured losses—or $1.4 billion per year, on average, according to Property Claims Service for Verisk Insurance Solutions (see chart below).
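The per-year average quoted above checks out as simple arithmetic over the 20-year span:

```python
# Quick sanity check on the I.I.I. figures: $27.8 billion in insured
# winter-storm losses over 1993-2012 works out to roughly $1.4 billion
# per year, matching the article's stated average.
total_losses_bn = 27.8
years = 2012 - 1993 + 1  # 20-year span, inclusive

avg = total_losses_bn / years
print(f"Average: ${avg:.2f} billion per year")
```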
Some of the best Big Data and sensor uses come from the manufacturing and logistics world. But while supply chains and manufacturing floors can generate plenty of important business data, those functions aren’t always the best equipped to use that data.
Operations, supply chains and manufacturing are due for a technology overhaul, according to IDC Manufacturing Insights and other analysts who research these B2B functions.
The problem: Supply chain technologies and processes lag behind the highly digital world of the business side.
Before we embrace a new year, I want to share my personal picks for the best data success stories from 2013:
Feds Stop $47 Billion in Fraud, Overpayment. We often think in terms of technology solutions. For example, we ask “How much can master data management save this company?” or “Will Big Data projects pay off?” Sometimes, you can define savings by the project, but often the best results come when you combine multiple data technologies. Together, they add up to better information management and analysis.
With the New Year comes added awareness of the hazards social media can present to corporations, the risks of data exchange between business systems and other challenges inherent with technology. Here is a look at the top trends of last year and predictions for the year ahead.
2013 Key Trends
1. Growing Convergence between IT, Security and the Business
Evolving risk challenges require that internal and external stakeholders are on the same risk page. For many organizations, however, internal audit, security, compliance and the business have different views of risk and what it takes to build a risk-aware and resilient business. Effective risk management starts with good communications. This includes a common taxonomy for dealing with risk, and a collaborative discussion framework to facilitate the cross-functional sharing of ideas and best practices.
If 2013 was the year that most organizations discovered what Big Data platforms such as Hadoop were all about, then the coming year will be the one in which they discover the applications that turn all that data into something of business value.
Brett Sheppard, director of Big Data marketing for Splunk, says that in terms of Big Data, 2013 was pretty much defined by investments in plumbing. Organizations largely experimented with Big Data platforms only to discover that the cost of acquiring the platform was nothing compared to the cost of the expertise required to actually develop an application that could make sense of all that data.
Just ask anybody—2014 is going to be an even bigger year for Big Data.
“In 2014, we will see Big Data funding only grow, and at least one significant IPO possibly from a player like Cloudera,” writes Concurrent CEO Gary Nakamura.
Inhi Suh, IBM vice president of Big Data, integration, and governance, told Information Week that she foresees more organizational spending on Big Data as companies invest in a wider range of analytics, such as reporting, dashboards and planning, predictive analytics, recommendations and new cognitive capabilities.
Network World — There's a trend underway in the information security field to shift from a prevention mentality -- in which organizations try to make the perimeter impenetrable and avoid breaches -- to a focus on rapid detection, where they can quickly identify and mitigate threats.
Some vendors are already addressing this shift, and some security executives say it's the best way to approach security in today's environment. But there are potential pitfalls with putting too much emphasis on detection if it means cutting back on prevention efforts and resources.
Clearly, rapid detection is gaining traction. Research firm IDC has designated a new category for products that can detect stealthy malware-based attacks designed for cyber-espionage ("Specialized Threat Analysis and Protection") and expects the market to grow from about $200 million worldwide in 2012 to $1.17 billion by 2017.
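IDC's projection implies a steep compound annual growth rate for the category. The calculation below is a back-of-the-envelope derivation from the two figures quoted, not an IDC statistic:

```python
# Implied compound annual growth rate (CAGR) from the IDC projection above:
# roughly $200 million in 2012 growing to $1.17 billion by 2017.
start_bn, end_bn, years = 0.200, 1.17, 5  # values in $B, 2012 -> 2017

cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # on the order of 42% per year
```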
There are different ways of looking at IT security involving end-user equipment such as PCs and mobile computing devices. One is to batten down the hatches at a corporate level, repel all viral boarders and let end-users fend for themselves. Another is to extend security to all end-user devices and take responsibility for maintaining data integrity and confidentiality from beginning to end. Whether or not your organisation has a choice in the matter may come down to the nature of your business. How then will you know which approach you should consider?
CIO — In our 13 years of conducting our annual State of the CIO survey, we've never seen anything quite like this year's results. Our profession has become a house divided, with traditional service-provider CIOs on one side and business-focused, digital-strategist CIOs on the other.
"As we plow through this period of digital disruption, where established rules for competing may no longer apply, some CIOs now question what they want for themselves," Managing Editor Kim S. Nash writes in our cover story ("State of the CIO 2014: The Great Schism"). "The profession is changing fast in an atmosphere where colleagues sometimes look upon a traditional IT group as a hindrance to corporate success."
What words spring to mind to describe the business world today – remote control, automation, speed, renewal? These concepts can all help with business continuity and competitiveness, but so can their ‘yesteryear’ counterparts. Although new technology lets organisations improve different areas of operations, it doesn’t mean that it is a panacea to be applied universally and indiscriminately. Face to face work styles, manual procedures, and re-use of old systems all still have a role to play. Here’s a quick tour of three pre-Internet methods that enterprises and their managers could continue to keep in mind.
Virtual Teams Still Need Face to Face Time
Despite the solutions available for remote working, such as video conferencing, collaboration software and even social networks, nothing replaces face to face interactions. The wealth of information in body language alone makes the difference between the two modes. Management by walking around may have given up ground in the shift to virtual team working, but it hasn’t gone away.
What good is history if we refuse to learn from it? Taking a few minutes to look back on crisis communications in 2013, I first wondered if there were any really big things that happened. I mean we didn’t have a Gulf Spill, we didn’t have a tsunami-radiation disaster, we didn’t even have a superstorm–unless you were in the Philippines. Then I saw the Bloomberg list of the top 10 reputation crises of 2013 and had to agree it was indeed a scandalous year.
And there’s my first observation: when high-flying careers (like Paula Deen’s), impeccable business leaders (like Jamie Dimon) and the world’s most powerful government legislative body (US Congress) have reputation crises at the level we have seen this year, and it doesn’t even seem like any major disasters happened, well, you kind of have to wonder what is going on.
This is the time of year when CIOs shore up their infrastructure deployment and development plans for the next 12 months. Naturally, this is guided by at least a rudimentary vision of what you want your data environment to look like, not just next year but in the decade ahead.
But while most plans center on hardware, software and, now, services – in essence, what you want the enterprise to be – it wouldn’t hurt to shift the focus a little toward what, exactly, you want the enterprise to do. Viewing infrastructure through the lens of functionality can often lead to innovative solutions to problems that hamper data flow and productivity.
In my previous post, I shared the three business drivers for re-evaluating Ye Old Integration Strategy: Integration costs too much, it’s too complex, and you’re too slow at it, which annoys the business.
But how are you supposed to fix those problems? In its recent Integration 2014 Trends-to-Watch report, Ovum predicts four technology strategies that will play a key role in resolving these business problems. Let’s look at each and see which ones can help you with your integration challenges.
iPaaS. Ovum predicts iPaaS solutions will evolve more in 2014. That’s a safe bet since we’re already seeing it: Silicon Angle reports that MuleSoft upgraded its iPaaS this month to offer more enterprise support.
SPRINGFIELD, Ill.—Take advantage of a new year to make your family safer in the face of future disasters.
The Federal Emergency Management Agency encourages Illinois residents to resolve to rebuild stronger and smarter, reducing the risk of potential devastation caused by events like the Nov. 17 tornadoes.
Through New Year’s Day, FEMA will offer simple tips and ideas to construct and maintain a home that can better withstand weather risks your community faces. This information will be posted and updated on FEMA’s Illinois recovery website FEMA.gov/Disaster/4157 as well as Facebook.com/FEMA and Twitter.com/FEMAregion5. Learn about rebuilding techniques and tips such as:
- Reinforcing your Residence. Retrofitting your home can provide structural updates that didn’t exist when it was constructed. For instance, a homeowner can install straps to their roof’s structural beams to make it strong enough to resist the "uplift" effect of high winds that can cause it to lift and collapse back down on the house.
- Fortify those Floors. Homeowners can secure their structure to its foundation by using anchors or straps. This can minimize the chances of a home moving off its foundation during events like tornadoes and earthquakes.
- Trim & Tighten. Consider cutting away any dangling tree branches that pose a threat to your home and securing outdoor furniture and fuel tanks that can serve as projectiles during high wind events.
- Elevation is a Smart Renovation. Flooding is a real risk in Illinois and elevating your home and its critical utilities can significantly reduce the risk of water damage. Contact your local floodplain manager to learn your flood risk and elevation requirements for your residence.
- Assure You’re Fully Insured. Take the time to review your insurance coverage. Are you adequately insured for the risks your community faces? Do you have wind, flood or sewer back-up coverage? Has your policy been updated to reflect the value of your home? Contact your insurance agent to get these questions answered and ensure your home is financially protected.
Survivors can apply online at DisasterAssistance.gov or with a smartphone or tablet by visiting m.fema.gov. They can also register and get questions answered over the phone by calling FEMA’s helpline, 800-621-FEMA (3362). Survivors who use a TTY can call 800-462-7585. The toll-free telephone numbers operate from 7 a.m. to 10 p.m. (local time) seven days a week until further notice.
For the latest information on Illinois’ recovery from the Nov. 17 storms, visit FEMA.gov/Disaster/4157. Follow FEMA online at twitter.com/femaregion5, facebook.com/fema and youtube.com/fema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status. If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.
With the steady rise of new cloud services, plus rapidly increasing solid-state deployment and advanced near-line and on-server solutions, storage had a pretty big year in 2013. The question for 2014, though, is whether we will see even more advanced technologies coming to the fore or whether this will be a year for capitalizing on the gains that are already in the channel. Or both?
For Instrumental CEO Henry Newman, 2014 looks to be a transitional year as the continued acceptance of solid state in the enterprise leads to greater consolidation in the industry, and possibly a few bankruptcies. As well, long-time storage solutions like native Fibre Channel and SATA will give ground to the improved performance and steadily declining costs of more advanced technologies. And if you have your heart set on finally putting PCIe 4.0 into play, well, think again. He expects the format to be delayed again until 2015.
IDG News Service (Boston Bureau) — Target has confirmed that hackers obtained customer debit card PINs (personal identification numbers) in the massive data breach suffered by the retailer during the busy holiday shopping season, but says customers should be safe, as the numbers were encrypted.
Some 40 million customer debit and credit cards were affected by the breach, but until now it wasn't clear that PINs were part of the hackers' massive haul.
"While we previously shared that encrypted data was obtained, this morning through additional forensics work we were able to confirm that strongly encrypted PIN data was removed," Target said in a statement on its website Friday. "We remain confident that PIN numbers are safe and secure. The PIN information was fully encrypted at the keypad, remained encrypted within our system, and remained encrypted when it was removed from our systems."
By now nearly everyone is familiar with bring your own device (BYOD). Some people out there still aren’t sure whether BYOD was nothing more than a buzzword in 2013 or if it really was a popular movement with serious security implications. (My personal thought is that the trendiness of the acronym downplayed the very real security concerns that the concept brought upon the enterprise.)
But no matter what you think of it, BYOD is, as Art Coviello, executive chairman of RSA, told me in an email, “so 2013.” According to Coviello, we should get ready for BYOI, bring your own identity. BYOI, Coviello added, is the next step in the trend that began with BYOD:
The next evolution will be the consumerization of ID or identity as employees increasingly push for a simpler, more integrated system of identification for all of the ways they use their devices. Identity will be less entrusted to third parties and increasingly be something closely held and managed by individuals – as closely as they hold their own devices....
As small businesses prepare for 2014, they shouldn't focus solely on increasing their bottom lines.
Paychex, a provider of payroll, human resource and benefits outsourcing solutions, says it's equally important for small businesses to be aware of the legislative issues that could affect their operations in the year to come.
"Navigating all of the legislative and regulatory changes that occur throughout the course of the year can be challenging, taking business owners away from other important aspects of running their businesses," said Martin Mucci, Paychex president and CEO.
CIO — The holiday season is a great time to look back at the year, with an eye toward what we in the ever-changing world of information technology can expect in 2014. These three trends warrant your close attention in the new year.
In Light of NSA Revelations, Companies Will Be Wary of the Cloud
For most businesses, 2013 was the year of the cloud. Companies that still hosted their email in-house would in large part move that expense and aggravation to someone else. Microsoft SharePoint and other knowledge management solutions could be run in someone else's datacenter, using someone else's resources and time to administer, thus freeing your own people to improve other services or, gasp, work directly on enhancing the business.
But then Edward Snowden came around in June and started to release a series of damning leaks about the United States National Security Agency's capability to eavesdrop on communications. At first, most folks weren't terribly alarmed. But as the year wore on, the depth of the NSA's alleged capabilities to tap into communications — both with and without service provider knowledge — started to shake the faith of many CIOs in the risk/benefit tradeoff for moving to cloud services.
Data center infrastructure will undergo dramatic change across the board in the coming year, but while much of the focus will be on software-defined architectures and cloud computing, bare metal changes are on tap as well.
This is actually quite a heady time for servers in particular, given that the pressure to revamp data-handling capabilities is mounting as the enterprise struggles to meet the challenges of mobility, Big Data, collaboration and other macro forces.
For InterWorx’s Graeme Caldwell, the rise of high-volume/small-packet data traffic will lead directly to the ARM architecture finally breaking the “x86 monoculture” that has gripped the enterprise for so long. ARM chips thrive in the chaotic universe of mobile data, so if enterprises wish to scale resources up and down to suit ever-changing load volumes, they would be better off with legions of low-power ARM units at their disposal than with highly virtualized x86 machines. And while Intel currently holds a slight edge with its 64-bit Avoton SoC, the coming year will see 64-bit ARMs from Calxeda, Applied Micro and others.
The coming year will be a pivotal one for a wide range of data center components including everything from servers and storage to the virtual layer and cloud architectures. But before I get to all of those, I thought it would be a good idea to see what is likely to happen to the data center itself. After all, with enterprise infrastructure poised for some truly wide-scale distribution, the data center is increasingly being viewed as a single component of perhaps a global data environment.
And while some may argue that the data center will diminish in importance as responsibility for actual physical layer infrastructure falls to the cloud provider, the fact remains that for the coming year, at least, enterprises of all sizes will rely on their own data facilities to a higher degree than in years past.
If you can see what will happen in the future, you can take steps to prepare for it – or avoid it, or even change it. That’s the promise of predictive analytics, a topic that naturally interests business continuity managers. While there’s no guarantee of exact predictions, predictive analytics can indicate change patterns and emerging trends. Sensibly constructed models can show areas of combined high uncertainty and influence, where particular attention should be paid in preparing to ensure continuity. However, predictive analytics as such fall short in two areas related to business continuity: one of them can be ‘fixed’ by using a similar approach, whereas the other cannot.
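The "combined high uncertainty and influence" screen described above can be sketched as a simple filter over model outputs. All factor names and scores here are invented for illustration; real scores would come from a fitted predictive model:

```python
# Toy prioritization screen: flag factors whose model outputs show BOTH
# high uncertainty and high influence, since the text argues those are
# where business continuity attention pays off most. Scores are invented.
factors = {
    # name: (uncertainty score 0-1, influence score 0-1)
    "supplier lead times": (0.8, 0.9),
    "power availability":  (0.2, 0.9),
    "staff turnover":      (0.7, 0.3),
    "exchange rates":      (0.9, 0.8),
}

THRESHOLD = 0.6
watchlist = sorted(
    name for name, (u, i) in factors.items()
    if u >= THRESHOLD and i >= THRESHOLD
)
print("High uncertainty AND high influence:", watchlist)
```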
Many folks take the days between Christmas and New Year’s off. Others, of course, have to work, despite the consumption of too much egg nog.
If you do have to work, it makes sense to be as productive as possible. This year, keep in mind that the late fall has been characterized by winter-like weather. It is not a good sign that suddenly the people who are in charge of this sort of thing have decided to name the storms that seem to be meandering from west to east on a regular basis.
So why not focus on a business continuity plan? These templates are vital, and may come in handy very quickly.
Nothing happens without good planning and implementation strategies, and that is exactly what is required when planning out the development of the Business Continuity Management (BCM) / Disaster Recovery (DR) program. It's impossible to just start something without any idea of when you'll be finished or what you need to reach along the way to be able to take the next step.
Often, to get proper buy-in from executives, a BCM/DR practitioner has to provide a timeline alongside the goals and deliverables the project will provide. It's one thing to provide the reasons why you need a program; if those are accepted by executives as valid reasons (let's hope they think so…), the next question will be, “When will it be done?” So a draft timeline must be mapped out, from how long a BIA will take and when the findings will be delivered to when the first test will occur.
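A minimal sketch of such a draft timeline, with entirely hypothetical milestones, durations and start date, might look like this:

```python
# Draft BCM/DR timeline sketch: chain rough milestone durations from a
# start date so executives can be told approximately when each deliverable
# -- including the first test -- will land. All values are hypothetical.
from datetime import date, timedelta

milestones = [  # (deliverable, estimated duration in weeks)
    ("Program charter & executive buy-in", 2),
    ("Business impact analysis (BIA)", 6),
    ("BIA findings delivered", 1),
    ("Recovery strategies & plan development", 8),
    ("First tabletop test", 2),
]

start = date(2014, 1, 6)
for name, weeks in milestones:
    finish = start + timedelta(weeks=weeks)
    print(f"{name}: {start} -> {finish}")
    start = finish  # next milestone begins when this one ends
```

In practice each duration would be an estimate negotiated with the teams involved, which is exactly why the resulting dates must be presented as assumptions rather than commitments.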
Of course, it will all be built upon assumptions, such as resource availability, but a high-level timeline must be provided to executives. Below are ten considerations a practitioner must keep in mind when building the BCM/DR program:
IDG News Service (Boston Bureau) — While the bulk of enterprise software is still deployed on-premises, SaaS (software as a service) continues to undergo rapid growth. Gartner has said the total market will top $22 billion through 2015, up from more than $14 billion in 2012.
The SaaS market will likely see significant changes and new trends in 2014 as vendors jockey for competitive position and customers continue shifting their IT strategies toward the deployment model. Here's a look at some of the possibilities.
A storm that left at least nine people dead and more than 400,000 without power this weekend was pushing its way into Canada on Sunday, but holiday travelers may still face slick roads as the system douses the Southeast with heavy rainfall.
The storm that brought high winds, ice, snow and rain to a wide swath of the Southeast before roaring north will affect sections of the USA through Monday night, said Frank Strait, senior meteorologist with AccuWeather.
"The main part of the storm is pulling away into Canada now and taking some of the snow with it,'' Strait said. But a lingering cold front could stretch from Virginia to Pensacola, Fla., causing heavy downpours before the system finally begins to weaken.
Distributed denial-of-service (DDoS) attacks certainly aren’t new. I’ve been talking about them for years. However, they have been changing. The traditional style of attack, the flood-the-target type that crashes a website, is still going strong. But now we are seeing an increase in application-layer attacks that have the same goal: Systems go down, resources are unavailable and the victim is scrambling to fix everything.
Recently, Vann Abernethy, senior product manager for NSFOCUS, talked to me about the changing DDoS landscape. Something he has noticed is how DDoS attacks are being used as smokescreens to cover up other criminal activity. He said:
In fact, the FBI warned of one such attack type back in November of 2011, which relies upon the insertion of some form of malware. When the attacker is ready to activate the malware, a DDoS attack is launched to occupy defenders. In this case, the DDoS attack is really nothing more than a smokescreen used to confuse the defenses and allow the real attack to go unnoticed – at least initially. Considering that most malware goes undetected for long periods of time, even a small DDoS attack should be a huge red flag that something else may be going on.
It couldn’t have happened at a worse time for a retailer. Target informed shoppers that if they charged an item at Target stores between Nov. 27 and Dec. 15, their credit and debit card accounts may have been compromised—as much as 40 million cards in all.
While online shoppers typically have been the victims, this time hackers went through the physical checkout systems inside every Target store—about 2,000 stores, 1,797 in the United States and 124 in Canada. It’s possible that every shopper who swiped a credit card or entered a PIN at the point of sale had their information stolen.
Barbara Endicott-Popovsky, director of the Center for Information Assurance and Cybersecurity at the University of Washington told TIME Magazine that hacking “is a business. The general public would be shocked and amazed by the size of the problem.”
Give the IT industry credit for facing up to the challenge of energy consumption over the past few years. Once it entered the popular consciousness that data infrastructure consumes a significant portion of total energy capacity, industry leaders across the board set to work building more efficient infrastructure.
Part of this was simple economics, of course – less energy means lower operating costs. And to be sure, virtualization came along at just the right time to slim down hardware footprints without sacrificing data processing capabilities.
And now it seems some planners are moving on to the next goal, and a rather ambitious one at that: the zero-carbon data center. A colocation firm in Iceland is nearing completion on a facility that relies entirely on hydroelectric and geothermal sources to power its fully modular data infrastructure. The company recently installed a free air cooling system from Eaton-Williams that operates without chillers or mechanical cooling of any kind, instead taking advantage of arctic winds brought in by the Gulf Stream. The Tier-3 facility measures about 23,000 square meters and is backed by redundant UPS supplies for critical systems, with power densities ranging from 4 kW to 16 kW per rack.
By Dan Watson, Public Affairs
At the end of each week, we post a "What We’re Watching" blog as we look ahead to the weekend and recap events from the week. We encourage you to share it with your friends and family, and have a safe weekend.
A Potentially Stormy Holiday
According to our friends at the National Weather Service, a storm system is set to track across the nation this weekend, impacting states in a variety of ways and potentially disrupting holiday travel. Here’s the latest forecast from the NWS:
- Heavy rain is forecast from the lower Mississippi River Valley to the Ohio Valley this weekend with a risk for flash flooding.
- A wintry mix, including freezing rain and snow, is possible from the central Great Plains, through the Great Lakes and to northern New England this weekend.
- The NWS Storm Prediction Center (SPC) has indicated a Moderate Risk of severe thunderstorms on Saturday, with possible tornadoes, for portions of the Lower Mississippi Valley.
- Severe thunderstorms are possible from the Central Gulf Coast/Lower Mississippi Valley into the Ohio Valley Saturday.
As you travel to visit friends and loved ones for the holidays, we encourage you to exercise caution and monitor weather conditions as they change. Stay up-to-date on weather conditions in your area by visiting weather.gov or mobile.weather.gov on your mobile device. Also, visit Ready.gov for more winter weather safety tips and other great resources!
Be Prepared in 2014!
With the New Year around the corner, it’s time to make our resolutions. Why not Resolve to be Ready for an emergency?
This year, we’re continuing our Resolve to be Ready campaign with an emphasis on 'Family Connections' – reinforcing the importance of getting kids involved in preparedness conversations in advance of an emergency. We’re making your emergency preparedness resolution easy to keep this year with three simple tips when making a plan: who to call, where to meet and what to pack.
Here’s what you can do:
- Make a family communication plan that answers – who to call, where to meet and what to pack.
- Join our Thunderclap on Facebook and Twitter and share a New Year's resolution of preparedness with your followers. How does Thunderclap work? Once you sign up, Thunderclap will sync your social media accounts to release an automatic Facebook post, Tweet or both on January 1, 2014 at 12:30 PM reminding your friends and followers to make a family emergency plan.
- Use #Prepared2014 in your social media messaging throughout 2014 to remind your friends and followers to be prepared for emergencies all year long.
- Share preparedness messages from the Ready Facebook and Twitter feeds.
Visit ready.gov/prepared2014 for more information on how you can Resolve to be Ready in 2014!
Photos of the Week
Here are a few of our favorite photos that came into our Photo Library this week.
New Topics on Our Online Collaboration Tool
We’ve recently launched a few new topics on our online collaboration tool and as always, we want to hear your thoughts and ideas. Head on over and share your ideas, comment on others' ideas, and vote for your favorite.
- FEMA’s Strategic Priorities
- Private Sector Technology Volunteers Supporting Disaster Response
- Increasing Transparency & Enhancing Disaster Preparedness
That’s it for today’s What We’re Watching. On behalf of everyone at FEMA, we wish you and your family a wonderful and safe holiday!
DENVER – In the 100 days following the catastrophic floods that hit much of Colorado, more than $204 million has gone to individuals and households in recovery assistance, flood insurance payments and low-interest disaster loans.
In addition, more than $28 million has been obligated to begin to repair and rebuild critical infrastructure and restore vital services.
Initially, the State, federal and local objectives were to save lives, bring aid to the affected areas, provide temporary safe housing, clear debris and to make immediate repairs to damaged infrastructure to put communities on the path to recovery.
President Obama signed a major-disaster declaration for Colorado Sept. 14 after severe and unremitting rains that began on Sept. 11 inundated much of the northeast portion of the state. The flooding killed 10 people, forced more than 18,000 from their homes, destroyed 1,882 structures and damaged at least 16,000 others.
Progress by the Numbers:
- Under the Individuals and Households Program, FEMA has granted $53,816,716 for housing needs and $4,572,871 to help survivors who suffered damage to their homes. Under the Public Assistance Program, FEMA has obligated $28,338,878 to publicly owned entities and certain nonprofits that provide vital services. (See below for county-by-county breakdowns.)
- The U.S. Small Business Administration has approved 2,274 low-interest disaster loans for over $90 million to Colorado homeowners, renters, businesses of all sizes and private nonprofit organizations. Of that amount, $73 million was in loans to repair and rebuild homes and replace personal property and $17 million was in business and economic-injury loans. Approved loan amounts for some of the most impacted areas include $55.2 million to Boulder County, $14 million for Larimer County and $9.4 million for Weld County.
- More than 50 national, State and local volunteer organizations pitched in to help in the recovery efforts, involving the work of 28,664 people giving their time and energy to both short- and long-term healing and to address any unmet needs. Volunteers provided donations coordination, home repair, child and pet care, counseling services, removal of muck and mud from homes and much more. In-kind donations amounted to $3,187,564. Valuing a volunteer hour at $22.43, the 275,860 hours of time represents a contribution of $6,162,725.
- The National Flood Insurance Program approved more than $55.7 million to settle 1,910 claims.
- More than 28,348 survivors registered for disaster assistance.
- FEMA housing inspectors in the field have looked at nearly 26,000 properties in the 11 counties designated for Individual Assistance in the president’s major-disaster declaration.
- FEMA Disaster Survivor Assistance teams canvassed hundreds of neighborhoods, visiting more than 62,000 homes and 2,741 businesses to provide information on a vast array of services and resources available to eligible applicants and made follow-up contacts in hundreds of cases.
- More than 21,500 survivors were able to visit 26 State/federal Disaster Recovery Centers to get one-on-one briefings on available assistance, low-interest loans and other information.
- Since Transitional Sheltering Assistance was activated Sept. 22, a total of 1,067 households have stayed in 177 participating hotels. The Transitional Sheltering Assistance deadline was extended five times to Dec. 14, with checkout Dec. 15. To date, 55 manufactured housing units are either in place or being placed in Boulder, Larimer and Weld counties for families unable to secure other housing resources. FEMA has ordered a total of 66 manufactured housing units.
- In the 18 counties designated for FEMA’s Public Assistance program, 236 meetings were held to discuss the details of the program and the amounts involved in each recovery project. This component of federal assistance provides at least 75 percent of the costs of repairing and rebuilding public infrastructure, reimbursement for emergency measures, helping critical services conducted by governments and certain nonprofits get back to normal, and in some cases implementing mitigation against future damage and losses. FEMA and the State fielded 238 eligible Requests for Public Assistance. The amount obligated so far: $28,338,878.
- FEMA and the State supplied disaster-assistance information to 33 chambers of commerce, six economic-development centers and 38 schools of higher education.
- FEMA’s Speakers Bureau received 85 requests from officials and other interested parties and 443 State/federal specialists have spoken at meetings and other venues. Thus more than 8,300 attendees were able to get information on assistance programs, flood insurance and low-interest loans.
- FEMA mitigation specialists counseled 15,250 survivors during outreach efforts at area big-box hardware and building-supply stores and counseled more than 4,700 at Disaster Recovery Centers.
- At , the dedicated Colorado-disaster website, there have been more than 103,000 hits – an average of 1,300 daily. The FEMA Region VIII Twitter feed has fielded more than 600 tweets and has increased the number of followers to 9,100. In the last 100 days, the State has sent out 1,025 tweets, has increased to 21,500 @COemergency followers and the COemergency Facebook page garnered 2,182 “likes.” The coemergency.com page has had 234,757 page views.
- FEMA Corps teams were instrumental in spreading the word about assistance throughout the affected areas and worked alongside FEMA regulars in the Joint Field Office in Centennial. More than 300 FEMA Corps members helped survivors in responding to and recovering from the disaster.
Avere Systems has released the findings of a cloud adoption study conducted at AWS re:Invent 2013. The overwhelming majority of attendees surveyed indicated that they currently use cloud for compute, storage, or application purposes, or plan to within the next two to five years. Cost savings and disaster recovery / business continuity were found to be the factors most heavily driving cloud storage adoption, indicating that organizations believe cloud storage has the potential to increase efficiency, productivity, and the bottom line for their business.
Despite the majority of participants reporting cloud use within the next few years, attendees surveyed indicated security, performance, and organizational resistance as the largest barriers to cloud adoption. In addition, more than a third of attendees surveyed reported that their primary providers of traditional on-premises storage equipment are not helping with their adoption of cloud storage.
Here’s what I see coming in the new year:
- Enlightened CIOs will regain a key role in the acquisition and implementation of enterprise Cloud solutions, including Software-as-a-Service (SaaS) applications and Infrastructure-as-a-Service (IaaS) computing resources. They will not only put policies in place that will encourage end-users and business units to include IT in the procurement and deployment processes, but will also enable IT to play a more proactive role in the evaluation and selection process.
- Corporate end-users and business units will be forced to enlist greater IT involvement and support in the acquisition and implementation of enterprise Cloud solutions because they will face greater challenges integrating them into their existing systems, software and data sources, and ensuring their security and performance.
- IT professionals will become more receptive to acquiring Cloud-based IT management solutions that enable them to more easily and economically perform their day-to-day duties so they can dedicate more time to strategic corporate initiatives.
CIO — Around this time last year, CIO.com and its outsourcing experts made some plucky predictions for IT services in 2013. We said this would be the year that outsourcing governance finally grew up. (Hardly.) We said outsourcing customers would take matters into their own hands with more do-it-yourself deals. (They did.) And we predicted that customers would value domestic presence as a key differentiator among service providers. (It was just one of many factors.)
We revisited all of our prognostications from last year and found that three of them were right on target, four of them were off base and the other two were just beginning to take shape at year's end. As we pull together our 2014 forecast, here's how those 2013 predictions turned out:
A number of big changes will start to impact IT in 2014 — and you should be thinking about them over the holiday break. Here are three trends I'm watching and what they will mean as we all get ready for the New Year.
First, robotics will move very rapidly now that Google is chasing the robot market. The question: Who will buy and maintain these robots, which will be increasingly used for everything from manufacturing to security? They'll need software updates, for one, and eventually they'll need to be managed like PCs, but the jobs robots replace or supplement will reside in other functions. As with other emerging technologies that enter through line-of-business budgets, managers will initially be making the decisions without input from IT.
If one of your goals in the New Year is to move toward using Big Data, then it’s time to move beyond the theoretical discussion to the nitty-gritty of implementations.
That doesn’t mean you should ignore your strategic goals, of course: It just means filling in the integration blanks between having Big Data and using Big Data.
TechTarget recently published a good starting point by excerpting chapter 10 from “Data Warehousing in the Age of Big Data,” written by Krish Krishnan, who is a Chicago-based executive consultant with Daugherty Business Solutions and a TDWI faculty member.
Conventional Big Data wisdom holds that in order to derive any value from technologies such as Hadoop, organizations need to invest in a cadre of data scientists to build complex analytics applications. The problem with that thinking is that by the time an organization assembles all the software and hardware expertise needed to launch a Big Data application, multiple years will have gone by.
Datameer is one of a handful of application providers that are challenging Hadoop conventional wisdom. Fresh off garnering an additional $19 million in funding this week, Datameer is making the case that what organizations really want is access to Big Data analytics applications that are about as complicated to use as a Microsoft Excel spreadsheet.
Natural catastrophes and man-made disasters worldwide reached $44 billion in insured losses in 2013—down from $81 billion in 2012, according to a Sigma preliminary report by Swiss Re.
The study found that total economic losses from disasters in 2013 totaled $130 billion and 25,000 lives were lost. Hurricane Haiyan alone, which hit the Philippines in November with record-breaking winds, claimed more than 7,000 lives. In 2012 total economic losses were $196 billion and 14,000 lives were lost.
We all (most anyway) know that social media and digital communications play a primary role in creating, expanding and responding to crises today. But it all seems sort of a mishmash, so I found these comments from Dallas Lawrence very helpful in distinguishing the three roles that social and digital media play:
First, social media is an instigator. Were there not a social platform that allows us to send out our every thought, or record every stupid thing that happens, the crisis wouldn’t have occurred.
The next role is that of accelerant. A similar crisis may have happened 20 years ago, but it would not have metastasized so quickly without social media. So Lawrence stresses we must be prepared to act immediately instead of waiting and seeing.
Just $6 billion of the $44 billion in estimated insured global losses arising from catastrophes in 2013 were generated by man-made disasters, little changed from 2012, according to Swiss Re sigma preliminary estimates.
But as an article on the Lloyd’s website reports, even though natural catastrophes may have dominated the news headlines in 2013, a series of man-made disasters have had a significant impact on a number of communities.
In fact around 5,000 lives were lost as a result of man-made disasters in 2013, according to Swiss Re sigma estimates.
IDG News Service (Bangalore Bureau) — Target has confirmed that data from about 40 million credit and debit cards was stolen at its stores between Nov. 27 and Dec. 15.
The statement from the retailer Thursday follows reports that thieves had accessed data stored on the magnetic stripe on the back of credit and debit cards during the Black Friday weekend through card swiping machines that could have been tampered with at the retailer's stores, a practice known as card skimming.
The data could have been used to create counterfeit cards that could even be used to withdraw money at an ATM, according to the reports.
Lists, kits, packs… they often exhibit order and completeness, two dimensions that are also important for effective business continuity. They are also the underlying principles of the ‘battle box’, a repository for vital information to allow an organisation to carry on operating in adverse conditions. Just like first aid kits and motorists’ emergency packs, a battle box should focus on the essentials. It should also be accessible and ‘grabable’ so that it can be made readily available to those responding to an incident. However, there’s more to a viable battle box than just ticking off items to be put in it.
Privacy is on trial in the United States. Legal activist Larry Klayman asked U.S. District Judge Richard J. Leon to require the NSA to stop collecting phone data and immediately delete the data it already has. The argument was that US citizens have a right to privacy and this practice violates the 4th Amendment of the Constitution, which protects you from illegal search and seizure. Monday's ruling that this practice is unconstitutional has privacy activists cheering in the streets, but it will not be a lasting victory.
In the United States, there is not a single privacy law on the books. (You can argue that HIPAA is a privacy law, but nuances exist that can lessen its impact.) What is protected has come from judgments based on the application of the 4th Amendment regarding search and seizure. US citizens were given “privileges”, thanks to Richard Nixon, which say we have an expectation of privacy when using a phone, which basically means that the government has to get a warrant for a wiretap. (It’s worth noting that in the UK, they don’t get that privilege.)
Data is up for grabs. And everyone is grabbing.
CSO — Christmas is fast approaching, and once the office is back to normal after the first of the year, employees are going to return with several shiny new gadgets, along with the expectation that they'll "just work" in the corporate environment. Security will be a distant afterthought, because it's still viewed as a process that hinders productivity.
The back-and-forth over whether security helps or hurts productivity is a battle that existed before the mobile device boom, and it will exist long after the next big technological thing arrives. But the fact remains that security is an essential aspect of operations.
Analysts from Frost & Sullivan have estimated that the mobile endpoint protection market will reach one billion dollars in earned revenue by 2017, a rather large number given that last year the market was worth about $430 million. The reason for the large projection is simple; mobile is the new endpoint, and everyone has one.
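Growing from about $430 million to $1 billion over five years implies a compound annual growth rate of roughly 18 percent. A quick sanity check of that projection (assuming the $430 million figure is for 2012 and the $1 billion target is for 2017):

```python
# Implied compound annual growth rate (CAGR) from the Frost & Sullivan
# figures: roughly $430M growing to $1B over an assumed five-year span.
start, end, years = 430e6, 1e9, 5

# CAGR formula: (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")
```

That works out to about 18.4 percent per year, which is steep but not implausible for an emerging security segment.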
CIO - Superstorm Sandy, the Fukushima Daiichi nuclear plant near-meltdown and ongoing regional natural disasters such as Typhoon Haiyan all wreak havoc with the capability of many affected companies - thousands, if not more - to continue business operations.
We define business risk as any event or activity that threatens the capability of a company to concentrate on its primary goal of generating revenue. There's also business risk from unexpected or unbudgeted costs to a company owing to improper management or monitoring of the software running in an enterprise. Do you recognize that there may be significant business risks to your company lurking in your IT operations, even as you take the time to read this article?
Business risk is what organizations continually work to mitigate via disaster recovery or business continuity plans - and rightfully so. But a company may also be exposed to elevated business risks owing to two frequently overlooked issues: Software asset management (SAM) and software license management (SLM). Let's take a look at how your organization can mitigate business risk using SAM and SLM.
CSO — Data loss, privacy violations, stolen source code, malware development, and more. In hindsight, 2013 was a busy year for security professionals, as well as a costly one for the organizations and individuals targeted by criminals.
As mentioned, 2013 was a busy year with regard to security incidents. While there's still a month left, the fact remains that more than one hundred million records have been compromised during the past eleven months. The source of this loss has been blamed on everything from nation-state attacks and activists to hackers with an agenda.
What challenges threaten to impact on the integrity of enterprise IT systems during the year ahead? David Gibson, VP at Varonis Systems, gives his predictions:
Knowing where your enterprise’s data is stored is no longer optional.
Privacy and other laws vary from nation to nation. Businesses and their remote offices need to know which laws they must comply with, and those laws are in a state of flux in a number of large countries. In particular, US companies doing business in Europe face the prospect of new challenges that will require more accurate knowledge of where their data – and their customers’ data – reside than most of them have today.
The proliferation of personal cloud services and mobile device capability continues to put critical data in flight, beyond not only the walls but also the awareness of the enterprise. Making this even more urgent is the realization that some governments can (legally, it appears) access data stored in cloud services.
It’s the CIO’s version of Groundhog Day: Business units want a solution, but do not want to wait on IT. So the division leaders bypass IT by funding the solution from their own budget. Eventually, it all comes out and IT has to solve the ensuing integration problems.
The cloud has only multiplied the problem and added one more complication: Now, business users aren’t willing to put up with IT taking its sweet time on solving the integration problem, even if the business caused it, Gartner VP and Research Fellow Massimo Pezzini told Information Age.
Here are my predictions for 2014:
- 2014 will bring exponential expansion and evolution of the Internet of Things (IoT).
This will also bring new opportunities for information security trailblazers unlike any we’ve seen before. The potential benefits of the IoT will be huge, but just as large will be the new and constantly evolving information security and privacy risks, and we will see some significant privacy breaches resulting from the use of IoT devices. These new risks, and the incidents and breaches that follow, will create a significant need for technology information security pros to also understand privacy concepts, so they can build privacy protections into all these new devices and into the processes and environments where the devices are used. Even though basic information security and privacy concepts will still apply, very little has been done to actually implement security or privacy controls in these new technologies. We will need more information security and privacy professionals who can recognize new risks as they evolve; there is no textbook to look to for these answers.
Cloud storage providers want your business, and they are actively exploring numerous strategies to get it.
However, catering to professional organizations is much different than catering to individuals, even if those individuals use their personal clouds to house business data. And the provider, or providers, who can establish robust, enterprise-friendly storage environments will reap a substantial reward as organizations look to scale infrastructure in order to take on Big Data and other challenges.
This is why so many cloud providers are introducing a wide range of top-tier storage features in their platforms. Box, for example, recently added a new administration console that aims to extend visibility and control into its hosted environment. The system includes protections for personal data like credit card numbers and Social Security information, as well as data and traffic analysis tools to help organizations better manage resource consumption and red-flag unusual usage patterns. There are also new automation and content management suites with improved workflow and search functions.
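Protections for data like credit card numbers are typically pattern-based: scan stored content for digit runs, then filter false positives with a checksum. As a rough illustration of the technique (this is not Box's actual implementation; the regex and length limits are assumptions), a scan might look like:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Flag 13-16 digit runs (spaces/dashes allowed) that pass the Luhn check."""
    hits = []
    for match in re.finditer(r"\b(?:\d[ -]?){13,16}\b", text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

The checksum step matters: most random 16-digit runs (order IDs, timestamps) fail Luhn, which keeps the false-positive rate manageable.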
When is the last time you personally experienced a hard drive failure?
A few years ago, thieves broke into our RV and stole the laptops, hard drives, and basically anything not nailed down.
At the time, I had a backup strategy - but pushed the backup and swap by two days (after the weekend). As a result of that fateful decision, I lost a few weeks of work and a few gigabytes of pictures. I recreated the work, but the pictures are gone.
I learned the importance of sticking to the backup plan, having multiple backups (in different locations), and never leaving a phone with a laptop. Never.
Last summer, as the hard drive on my roughly four-year-old laptop signaled it was failing, I was ready. I had a backup. And to be safe, I had a backup of my backup.
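A backup habit like this is easier to keep when it's scripted rather than left to memory. As a minimal sketch of the "multiple backups in different locations" idea (the paths and directory layout here are hypothetical):

```python
import shutil
import tempfile
import time
from pathlib import Path

def backup(source: Path, destinations: list[Path]) -> list[Path]:
    """Copy the source tree into a timestamped folder in each destination."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    created = []
    for dest in destinations:
        target = dest / f"{source.name}-{stamp}"
        shutil.copytree(source, target)  # creates target and any parents
        created.append(target)
    return created

# Demo with throwaway directories standing in for a local disk and an
# off-site drive; in practice the destinations would be real mount points.
root = Path(tempfile.mkdtemp())
source = root / "work"
source.mkdir()
(source / "notes.txt").write_text("draft")
copies = backup(source, [root / "disk_a", root / "disk_b"])
```

Scheduling a script like this (cron, Task Scheduler) removes the temptation to "push the backup by two days," which is exactly the decision that cost the author his photos.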
History is a great teacher.
Associate Pastor Ben Davidson of Bethany Community Church learned a valuable lesson during Hurricane Katrina in 2005 that benefited him and his congregation the morning of Nov. 17, 2013, when a powerful tornado tore through Washington, IL.
His quick thinking is a reminder that when disasters occur, having a plan can save lives and help pivot a community toward a strong recovery. I have learned this lesson many times through the faith leaders I’ve engaged as director of the DHS Center for Faith-based & Neighborhood Partnerships.
On Sunday morning Pastor Davidson was preparing to begin his adult Sunday school class, when he received an emergency phone call. A tornado had touched down and their church was in its path.
Immediately he and the staff worked to move the congregation -- particularly the children -- to their designated shelter in the church location and they began to pray together as the storm passed through their community.
The entire congregation comforted one another through what Pastor Davidson recalls as "the longest 45 minutes of my life." Once all congregants were accounted for and families could leave the sheltered location, Pastor Davidson immediately went home to confirm the safety of his children, who were at home sick that morning.
Immediately following the disaster, Bethany Community Church joined its fellow members of the Washington Ministerial Association, AmeriCorps and the Illinois Voluntary Organizations Active in Disaster to help coordinate the community’s recovery efforts.
Since the devastating event, more than 4,000 community volunteers have registered with Bethany Community Church to help their loved ones and neighbors during disasters. Their effort and commitment will help to increase the community’s resilience and ensure they are better prepared for emergencies.
The story of Washington, IL, and Bethany Community Church is a reminder of the care and compassion that faith-based organizations can provide all survivors in times of disaster. Their story reinforces the power of a whole community, “survivor centric” approach and the important role and responsibility of faith leaders in preparing their communities before disasters strike.
I encourage you to know what to do before disaster strikes by joining the thousands of faith-based and community members on the National Preparedness Coalition faith-based community of practice and connecting with faith leaders across the country working on preparedness.
Being prepared contributes to our national security, our nation’s resilience, and our personal readiness.
CIO — It's the time of year when darkness comes early and people begin to sum up how this year has gone and next year will unfold. It's also the time of year that predictions about developments in the technology industry over the next 12 months are in fashion. I've published cloud computing predictions over the past several years, and they are always among the most popular pieces I write.
Looking back on my predictions, I'm struck not so much by any specific prediction or even the general accuracy (or inaccuracy) of the predictions as a whole. What really comes into focus is how the very topic of cloud computing has been transformed.
Four or five years ago, cloud computing was very much a controversial and unproven concept. I became a strong advocate of it after writing Virtualization for Dummies and being exposed to Amazon Web Services in its early days. I concluded that the benefits of cloud computing would result in it becoming the default IT platform in the near future.
What issues and new technologies have disrupted the IT continuity landscape in 2013 and how are these likely to develop in 2014?
By Patrick Hubbard and Lawrence Garvin, SolarWinds.
We have spent the past year speaking with hundreds of techies at every major networking trade event in 2013 and from these discussions have drawn a number of predictions for the coming year, as well as insights into how the industry has evolved and developed over the past twelve months. Below, we share our thoughts on the past year and our predictions for 2014.
2013 has been the year of vendor-led hype on buzz technologies such as SDN and cloud, but in practice very few notable advances in technologies or vendor offerings in these areas have come to fruition.
Cross-product support, and a noticeable increase in budgets, have accelerated the advance of virtualization. Products such as Cisco Unified Computing System (UCS) have made it possible to integrate with VMware V-block, boosting the desktop virtualization trend and widening its reach into mid-market networks. Similarly, with the launch of Hyper-V, 2013 was the year that Microsoft finally became a genuine player in the virtualization space.
New research from Corero Network Security has found that many businesses are failing to take adequate measures to protect themselves against the threat of a DDoS attack. A survey of 100 companies revealed that in spite of the reports about the cost of downtime and the potential for DDoS attacks to mask greater threats, businesses are failing to put in place effective defenses or plans to mitigate the impact of a DDoS attack against their organization. More than half of companies lack adequate DDoS defense technology, and 44 percent of respondents have no formal DDoS attack response plan.
The survey asked respondents about the effectiveness of their plans to prevent, detect and mitigate the damage of a cyber attack, including examining their incident response plans from the standpoint of infrastructure, roles and responsibilities, technology, maintenance, and testing. The findings revealed a lack of planning on multiple levels: while nearly half of businesses lacked a formal DDoS response plan, the problem was compounded by out-of-date network visibility, as more than 54 percent of respondents have outdated or non-existent network maps. Furthermore, approximately one in three businesses lacked any clear idea of their normal network traffic volume, making it more difficult to distinguish routine traffic peaks from high traffic volumes that could signal a DDoS attack.
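A baseline of normal traffic volume is exactly what makes an attack spike detectable. A minimal sketch of the idea (the window size, warm-up count, and threshold here are illustrative choices, not figures from the survey):

```python
import statistics
from collections import deque

class TrafficBaseline:
    """Rolling baseline of traffic samples; flags volumes far above normal."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # recent requests-per-second samples
        self.threshold = threshold           # std-devs above the mean = anomalous

    def observe(self, volume: float) -> bool:
        """Record a sample; return True if it looks like an attack spike."""
        anomalous = False
        if len(self.samples) >= 10:  # need some history before judging
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples)
            if stdev > 0 and (volume - mean) / stdev > self.threshold:
                anomalous = True
        if not anomalous:
            self.samples.append(volume)  # only normal traffic updates the baseline
        return anomalous
```

The design choice worth noting is that anomalous samples are excluded from the baseline, so a sustained flood cannot gradually redefine itself as "normal." Without a recorded baseline, there is no mean to compare against, which is the gap one in three surveyed businesses reported.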
While the web has opened wide the doors of opportunity for entrepreneurs around the world, criminals have shown creativity as well. Ingenious use of technologies has led to hacktivism, identity theft, distributed denial of service (DDoS) and swatting, to name but a few. Perpetrators use both the latest cyber-techniques and old-fashioned approaches such as social engineering (a new term for the classic tactics of confidence tricksters). Business continuity and personal security both need to be safeguarded against threats like these. But what is driving the proliferation of such Internet incidents?
Risk certainly marked 2013, with knock-on effects on business continuity thinking. However, in a year spent picking up the pieces after various disasters, the real message was a reminder that while we collectively now know a great deal about risk, we don't always prepare or act appropriately. The devastation caused by rainfall in the Uttarakhand state of India was one example: environmentalists blamed what they considered haphazard development of roads, resorts and hydroelectric stations for the high level of subsequent damage and deaths. Meanwhile in the US, for much of 2013 New York was applying lessons learned the hard way from Hurricane Sandy in 2012 to produce an improved city resilience plan.
Bring your own device (BYOD) has a lot going for it. Letting Jane and Joe use their own devices at work and compensating them in some manner is so simple and so rooted in common sense that the case against it is lost in the shuffle.
Or was lost in the shuffle. The reality is that significant downsides and obstacles to BYOD do exist. That reality may finally be dawning on corporate managers. Strategy Analytics released interesting worldwide research that revealed that everything is growing: the number of BYOD devices, the number of company-owned devices issued to employees, and the total number of devices shipped.
The percentage that deserves the most attention is the portion of corporate-liable devices:
A new study finds that in Seattle more than 10,000 buildings — many of them homes — are at high risk from earthquake-triggered landslides.
Seattle Times science reporter
With its coastal bluffs, roller-coaster hills and soggy weather, Seattle is primed for landslides even when the ground isn’t shaking. Jolt the city with a major earthquake, and a new study from the University of Washington suggests many more slopes could collapse than previously estimated.
A powerful earthquake on the fault that slices under the city’s heart could trigger more than 30,000 landslides if it strikes when the ground is saturated, the analysis finds. More than 10,000 buildings, many of them upscale homes with water views, sit in areas at high risk of landslide damage in such a worst-case scenario.
“Our results indicate that landsliding triggered by a large Seattle fault earthquake will be extensive and potentially devastating,” says the report published this month in the Bulletin of the Seismological Society of America.
How can you be sure the information you store in the cloud is safe? The short answer is you can't. However, you can take some protective measures. Here are five data privacy protection tips to help you tackle the issue of cloud privacy.
CIO — The number of personal cloud users increases every year and is not about to slow down. Back in 2012, Gartner predicted a complete shift from offline PC work to mostly cloud-based work by 2014. And it's happening.
Today, we rarely choose to send a batch of photos by email, and we no longer use USB flash drives to carry documents. The cloud has become a place where everyone meets and exchanges information. Moreover, it has become a place where data is kept permanently.