According to a new study by the Ponemon Institute, sponsored by IBM, the average consolidated total cost of a data breach is $3.8 million, representing a 23% increase since 2013. The annual 'Cost of Data Breach Study' also found that the average cost incurred for each lost or stolen record containing sensitive and confidential information increased 6% from a consolidated average of $145 to $154.
"Based on our field research, we identified three major reasons why the cost keeps climbing," said Dr Larry Ponemon, chairman and founder, Ponemon Institute. "First, cyber attacks are increasing both in frequency and the cost it requires to resolve these security incidents. Second, the financial consequences of losing customers in the aftermath of a breach are having a greater impact on the cost. Third, more companies are incurring higher costs in their forensic and investigative activities, assessments and crisis team management."
Data breaches are a significant threat to organizations, as highlighted in the Business Continuity Institute's latest Horizon Scan report, which revealed that 82% of respondents were either concerned or extremely concerned about a cyber attack materialising, while 74% expressed the same level of concern about a data breach, making them the first and third greatest threats respectively.
Some of the highlights from the Ponemon Institute’s research include:
- Board level involvement and the purchase of insurance can reduce the cost of a data breach. The study looked at the positive consequences that can result when boards of directors take a more active role after an organization has had a data breach. Board involvement reduces the cost by $5.50 per record; insurance protection reduces it by $4.40 per record.
- Business continuity management plays an important role in reducing the cost of data breach. The research reveals that having business continuity management involved in the remediation of the breach can reduce the cost by an average of $7.10 per compromised record.
- The most costly breaches continue to occur in the US and Germany at $217 and $211 per compromised record respectively. India and Brazil still have the least expensive breaches at $56 and $78 respectively.
- The cost of data breach varies by industry. The average global cost of data breach per lost or stolen record is $154. However, if a healthcare organization has a breach, the average cost could be as high as $363, and in education the average cost could be as high as $300. The lowest cost per lost or stolen record is in transportation ($121) and public sector ($68).
- Hackers and criminal insiders cause the most data breaches. 47% of all breaches in this year's study were caused by malicious or criminal attacks. The average cost per record to resolve such an attack is $170. In contrast, system glitches cost $142 per record and human error or negligence is $137 per record. The US and Germany spend the most to resolve a malicious or criminal attack ($230 and $224 per record, respectively).
- Notification costs remain low, but costs associated with lost business steadily increase. Lost business costs include abnormal turnover of customers, increased customer acquisition activities, reputation losses and diminished goodwill. The average cost has increased from $1.23 million in 2013 to $1.57 million in 2015. Notification costs decreased from $190,000 to $170,000 since last year.
- Time to identify and contain a data breach affects the cost. The study shows the relationship between how quickly an organization can identify and contain a data breach incident and the financial consequences. Malicious attacks take an average of 256 days to identify, while data breaches caused by human error take an average of 158 days. As discussed earlier, malicious or criminal attacks are the most costly data breaches.
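The per-record figures above lend themselves to a back-of-the-envelope estimate. The sketch below is purely illustrative, using only the numbers quoted in this article; the function name, the 100,000-record breach size, and the additive treatment of the savings factors are assumptions of this sketch, not the study's methodology:

```python
# Illustrative estimate built from the per-record figures quoted in the study.
# Real breach costs depend on many factors the study models in more detail.

PER_RECORD_BASE = 154.0          # global average cost per lost/stolen record (USD)
REDUCTIONS = {
    "board_involvement": 5.50,   # savings per record when the board is engaged
    "insurance": 4.40,           # savings per record from insurance protection
    "business_continuity": 7.10, # savings per record from BCM involvement
}

def estimate_breach_cost(records, factors=()):
    """Estimate total cost: records x (base rate minus applicable savings)."""
    per_record = PER_RECORD_BASE - sum(REDUCTIONS[f] for f in factors)
    return records * per_record

# A hypothetical 100,000-record breach, with and without all three factors:
baseline = estimate_breach_cost(100_000)
mitigated = estimate_breach_cost(
    100_000, ("board_involvement", "insurance", "business_continuity")
)
print(f"baseline: ${baseline:,.0f}, mitigated: ${mitigated:,.0f}")
```

On these assumptions, the three mitigating factors together shave $17 per record off the base rate, a meaningful difference at scale.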
Hanover Attains Continuous Uptime with DataCore SANsymphony-V Deployed in a Synchronous Mirror Configuration – Ensuring Data Redundancy
HANOVER, Pa. – DataCore, a leader in software-defined storage, today announced that Hanover Hospital has realized continuous uptime with its high-availability software-defined storage and has significantly reduced the time and effort it takes to provision storage and systems.
“The biggest benefit Hanover Hospital has experienced from adopting DataCore has been true high availability due to the automatically synchronized virtual disks that are mirror protected and presented to different applications spanning our two on-campus datacenters,” stated Douglas Null, senior technical architect-MIS department, Hanover Hospital. “Each data center shares critical workloads – yet provides physical separation of storage and compute in the event of a localized data center outage. DataCore SANsymphony-V is our only storage solution and it delivers ‘no touch’ failover and failback operation. It delivers a fully automated process. Other vendor solutions are replicated as active/passive, need human intervention or scripts, or require other point products or special configurations to bring the passive site online.”
Within healthcare, IT is under enormous pressure to increase storage capacity, improve resiliency and accelerate performance – all while managing costs. (See DataCore’s infographic: Healthcare IT Storage Challenges) Hanover Hospital is one of more than 1,000 healthcare customers that have trusted DataCore to virtualize its storage infrastructure – thereby making its storage software-defined.
Overcoming Downtime, Data Growth and Slow Performance
Hanover reports that with DataCore SANsymphony-V deployed in a synchronous mirror configuration, it has realized continuous uptime with its high-availability storage and has significantly reduced the time it takes to provision storage and systems. According to Null, “DataCore keeps both our users and patients happier because of high system availability. Moreover, with DataCore we have been able to simplify management and reduce the total cost of ownership of the entire storage infrastructure.”
Hanover started a number of years ago with a single DataCore installation in one data center. Over the years, the hospital added synchronous mirroring that stretched storage availability between its two on-campus data centers. DataCore SANsymphony-V now serves as a unified storage services platform across the entire multi-site infrastructure. In particular, it is relied upon extensively for various mission-critical, enterprise and clinical applications, including the hospital’s healthcare BI reporting platforms, clinical middleware, medical dictation and transcription services, and Citrix XenApp, among others.
Null adds, “We get very impressive performance and bandwidth throughput for the number of VM servers and applications we are hosting in our environment. Plus, we have improved storage utilization since we are able to over-provision storage by about sixty percent, meaning we are more efficient in our ability to meet the growth and cost demands for more capacity.”
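Null's “over-provision storage by about sixty percent” remark describes thin provisioning, where the logical capacity presented to hosts exceeds the physical capacity installed. A minimal sketch of the arithmetic, with hypothetical capacity figures (not Hanover's actual numbers):

```python
# Thin-provisioning arithmetic: present more logical capacity than is
# physically installed, on the expectation that most volumes never fill.
# All figures are illustrative assumptions, not Hanover's actual numbers.

physical_tb = 100.0          # installed physical capacity (TB)
overprovision_ratio = 1.60   # "over-provision by about sixty percent"

logical_tb = physical_tb * overprovision_ratio  # capacity presented to hosts
print(logical_tb)  # 160 TB of virtual disks backed by 100 TB of media
```

The gamble, of course, is that physical capacity must be added before actual consumption catches up with what has been promised to hosts.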
Furthermore, the IT team wanted to deploy a Voice over IP (VoIP) telephony application with the same “always up, always on” capability. “After doing some research, what we came up with was to deploy DataCore – but in this instance use the product in another way altogether,” stated Null. In this case, Hanover deployed DataCore Virtual SAN, which uses virtualized storage controllers inside a VMware ESX host. “That solution had far fewer requirements than Virtual SAN from VMware,” Null said.
Hanover Hospital is an independent, not-for-profit community hospital and part of the Hanover HealthCare PLUS network of services. Located in Hanover, Pennsylvania, the hospital has approximately 1,400 staff and 93 beds across 15 buildings, and manages 6,000 patient visits, 30,000 ER visits, 190,000 outpatient visits, 600,000 lab tests, 90,000 imaging scans, and over 600 births.
Hanover Hospital – Addressing the Top 3 Storage Challenges in Healthcare
To learn more, please view our recorded webinar featuring Hanover Hospital “Addressing the Top Three Storage Challenges in Healthcare”. It highlights the challenges faced by healthcare IT departments such as maintaining 24x7x365 operations, managing explosive data growth and ensuring the highest performance from critical applications.
In the webinar, Hanover Hospital’s Douglas Null, senior technical architect-MIS department, discusses his firsthand experience and best practices using DataCore’s software-defined storage.
A full case study on the deployment at Hanover Hospital is also available:
About DataCore Software
DataCore is a leader in software-defined storage. The company’s storage virtualization and virtual SAN solutions empower organizations to seamlessly manage and scale their data storage architectures, delivering massive performance gains at a fraction of the cost of solutions offered by legacy storage hardware vendors. Backed by 10,000 customer sites around the world, DataCore’s adaptive, self-learning and self-healing technology takes the pain out of manual processes and helps deliver on the promise of the new software-defined data center through its hardware-agnostic architecture.
Visit http://www.datacore.com or call (877) 780-5111 for more information.
Historic City Transforms Interdepartmental Business Processes – Reduces Latency, Simplifies File Sharing, Increases Flexibility and Reduces Overall Storage Costs
NORCROSS, Ga. – StorTrends® today announced that its high performance storage area network (SAN) storage appliances have been credited with "revolutionizing" the IT infrastructure of the City of Napoleon. Located 35 miles from Toledo, Ohio, and home to almost 9,000 residents, the City of Napoleon implemented the StorTrends SAN solution to help modernize its IT infrastructure while keeping costs down.
"Adding SAN to the City of Napoleon's IT environment has completely transformed how business is done interdepartmentally," said Dan Wachtman, MIS Administrator, City of Napoleon. "I would often lose sleep at night worrying about the data within the City of Napoleon's IT environment. But now, I sleep comfortably knowing our data is fully protected thanks to the StorTrends SANs with their enterprise class snapshots and replication for disaster recovery."
The IT department for the City of Napoleon is responsible for supporting the 19 departments that make up the city's government infrastructure. Its combined Windows and Linux IT environment includes a myriad of physical servers from various vendors, as well as 25 HP and Citrix virtual servers, and Microsoft SQL Server databases.
Prior to deploying the StorTrends solution, the City of Napoleon outsourced most of its larger IT jobs. But as costs and demands continued to escalate, Wachtman, who has been involved in the city's IT division for 15 years, recognized the need for a more efficient and cost-effective solution.
Some of the benefits that the city has experienced after making the transition to SAN include reduced latency, simplified file sharing, increased flexibility, easier storage deployment, and reduced overall storage costs. The City of Napoleon found that specific features of the StorTrends solution, such as snapshots and disaster recovery (DR), were particularly helpful.
Wachtman added that the StorTrends support plan, StorAID, helped make the jump to SAN less challenging. "The technical support that we experienced was second to none. When we needed them, they were there," said Wachtman. "After experiencing this high level of service, I would tell anyone that there is no reason to buy any other competitive storage solution."
"For organizations and institutions, like the City of Napoleon, StorTrends offers a multitude of configurations in all-flash, hybrid flash and spinning disk arrays to meet the requirements and budgets of all IT environments," said Justin Bagby, Director of the StorTrends Division at American Megatrends, Inc. (AMI). "IT professionals that are interested in a price quote on the StorTrends SAN Arrays should check out our onlinePrice Quote Generatorfor a hassle-free price quote."
To read more about the City of Napoleon, and other StorTrends customers, please visit: www.stortrends.com/resources/customer-stories.
StorTrends® from American Megatrends (AMI) is Performance Storage with Proven Value. StorTrends SAN and NAS storage appliances are installed worldwide and trusted by companies and institutions in a wide range of industries including education, energy, finance, state and local government, healthcare, manufacturing, marketing, retail, R&D and many more. StorTrends meets the challenges and demands of today's business environments by offering a wide variety of solutions, from all-flash and hybrid flash storage to spinning disk. StorTrends is backed by 1,100+ customer installations, 100+ storage patents and nearly 30 years of IT leadership from a company that millions of people trust on a daily basis, American Megatrends, Inc. For further information, please visit: http://www.stortrends.com.
How to combat social engineering, phishing and ransomware
WOKING, Surrey – VAD Wick Hill announced today that it has been appointed UK distributor for US-based KnowBe4, provider of the world’s most popular integrated security awareness training and simulated phishing programme, based on Kevin Mitnick’s 30+ years of unique first-hand hacking experience. KnowBe4 is seeking to expand its UK presence through two-tiered channel distribution with Wick Hill.
Cyber criminals are increasingly targeting employees with phishing, social engineering and ransomware. KnowBe4 provides automated, internet-based security awareness training to combat these threats. It is cost-effective, continually updated, easy to use, requires relatively little employee time, and is suited to organisations of all sizes.
Ian Kilpatrick, chairman of Wick Hill Group, commented: “A key security vulnerability is staff, and many organisations are only as secure as their weakest employee. Traditionally, perimeter security addressed this risk, but it no longer works on its own. With the continual changes in threats, it has been nearly impossible for most organisations to train and support their entire workforce. We see KnowBe4 as meeting that requirement, by enabling organisations to test their staff at their desks and by automating the processes for reporting and providing focussed training for those who are vulnerable.”
He added: “For resellers, KnowBe4 solutions are a straightforward sell, particularly as potential clients can have a free test of how phish-prone their staff are, before buying. KnowBe4 provides annuity revenues, good margins and opportunities for additional training and professional services.”
Stu Sjouwerman, CEO of KnowBe4, said: “Phishing and spear-phishing are behind 91% of data breaches. It is much less expensive to train your staff than suffer the consequences of a data breach, whether those are financial or a loss to reputation. Companies in the UK are discovering that an essential, additional security layer is to train your users and create a ‘human firewall.’”
Sjouwerman continued: “Today, taking staff through new-school security awareness training is a ‘must’, and we are thrilled to partner with a leader in IT security products like Wick Hill, who understands how important it is to manage the social engineering problem.”
KnowBe4 solutions include:
- Kevin Mitnick Security Awareness Training
High quality, web-based interactive training combined with frequent simulated phishing attacks, using case studies, live demonstration videos and short tests. It is aimed at making sure employees understand the mechanisms of spam, phishing, spear phishing, malware and social engineering.
After the training, KnowBe4’s highly effective scheduled Phishing Security Tests keep employees on their toes. There are several correction options for employees who fall for the attacks, including instant remedial online training.
- Free Phishing Security Test
91% of successful data breaches start with a spear phishing attack, and the attacks are getting more sophisticated. KnowBe4 provides a free Phishing Security Test (PST) which will show you what percentage of your users are Phish-prone.
About Wick Hill
Established in 1976, value added distributor Wick Hill specialises in secure IP infrastructure solutions and convergence. The company sources and delivers best-of-breed, easy-to-use solutions through its channel partners, with a portfolio that covers security, performance, access, networking, convergence, storage and hosted solutions.
Wick Hill is part of the Wick Hill Group, based in Woking, Surrey with sister offices in Hamburg. Wick Hill is particularly focused on providing a wide range of value added support for its channel partners. This includes a strong lead generation and conversion programme, technical and consultancy support for reseller partners in every stage of the sales process, and extensive training facilities. For more information about Wick Hill, please visit http://www.wickhill.com or www.twitter.com/wickhill
Hurricane forecasting evolving with new storm surge products, upgraded modeling
NOAA’s Climate Prediction Center says the 2015 Atlantic hurricane season will likely be below-normal, but that’s no reason to believe coastal areas will have it easy.
For the hurricane season, which officially runs from June 1 to November 30, NOAA is predicting a 70 percent likelihood of 6 to 11 named storms (winds of 39 mph or higher), of which 3 to 6 could become hurricanes (winds of 74 mph or higher), including zero to 2 major hurricanes (Category 3, 4 or 5; winds of 111 mph or higher). While a below-normal season is likely (70 percent), there is also a 20 percent chance of a near-normal season, and a 10 percent chance of an above-normal season.
“A below-normal season doesn’t mean we’re off the hook. As we’ve seen before, below-normal seasons can still produce catastrophic impacts to communities,” said NOAA Administrator Kathryn Sullivan, Ph.D., referring to the 1992 season in which only seven named storms formed, yet the first was Andrew – a Category 5 Major Hurricane that devastated South Florida.
“The main factor expected to suppress the hurricane season this year is El Niño, which is already affecting wind and pressure patterns, and is forecast to last through the hurricane season,” said Gerry Bell, Ph.D., lead seasonal hurricane forecaster with NOAA’s Climate Prediction Center. “El Niño may also intensify as the season progresses, and is expected to have its greatest influence during the peak months of the season. We also expect sea surface temperatures in the tropical Atlantic to be close to normal, whereas warmer waters would have supported storm development.”
Included in today’s outlook is Tropical Storm Ana, but its pre-season development is not an indicator of the overall season strength. Ana’s development was typical of pre-season named storms, which often form along frontal boundaries in association with a trough in the jet stream. This method of formation differs from the named storms during the peak of the season, which originate mainly from low-pressure systems moving westward from Africa, and are independent of frontal boundaries and the jet stream.
With the new hurricane season comes a new prototype storm surge watch/warning graphic from NOAA’s National Hurricane Center, intended to highlight areas along the Gulf and Atlantic coasts of the United States that have a significant risk of life-threatening inundation by storm surge from a tropical cyclone.
The new graphic will introduce the concept of a watch or warning specific to the storm surge hazard. Storm surge is often the greatest threat to life and property from a tropical cyclone, and it can occur at different times and at different locations from a storm’s hazardous winds. In addition, while most coastal residents can remain in their homes and be safe from a tropical cyclone’s winds, evacuations are often needed to keep people safe from storm surge. Having separate warnings for these two hazards should provide emergency managers, the media, and the general public better guidance on the hazards they face when tropical cyclones threaten.
Also new this season is a higher resolution version (2 km near the storm area) of NOAA's Hurricane Weather Research and Forecasting model (HWRF), thanks to the upgrades to operational computing. A new 40-member HWRF ensemble-based data assimilation system will also be implemented to make better use of aircraft reconnaissance-based Tail Doppler Radar data for improved intensity forecasts. Retrospective testing of 2015 HWRF upgrades demonstrated a five percent improvement in the intensity forecasts compared to last year.
This week, May 24-30, is National Hurricane Preparedness Week. To help those living in hurricane-prone areas prepare, NOAA offers hurricane preparedness tips, along with video and audio public service announcements at www.hurricanes.gov/prepare.
"It only takes one hurricane or tropical storm making landfall in your community to significantly disrupt your life,” said FEMA Deputy Administrator Joseph Nimmich. “Everyone should take action now to prepare themselves and their families for hurricanes and powerful storms. Develop a family communications plan, build an emergency supply kit for your home, and take time to learn evacuation routes for your area. Knowing what to do ahead of time can literally save your life and help you bounce back stronger and faster should disaster strike in your area."
NOAA will issue an updated outlook for the Atlantic hurricane season in early August, just prior to the historical peak of the season.
NOAA also issued its outlook for the Eastern Pacific and Central Pacific basins. For the Eastern Pacific hurricane basin, NOAA’s 2015 outlook is for a 70 percent chance of an above-normal hurricane season. That outlook calls for a 70 percent probability of 15 to 22 named storms, of which 7 to 12 are expected to become hurricanes, including 5 to 8 major hurricanes. For the Central Pacific hurricane basin, NOAA’s outlook is for a 70 percent chance of an above-normal season with 5 to 8 tropical cyclones likely.
NOAA’s mission is to understand and predict changes in the Earth’s environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Twitter, Facebook, Instagram, and our other social media channels.
I. Bill Gates is an optimist.
Ask him, and he'll tell you himself. "I'm very optimistic," he says. See?
And why shouldn't Bill Gates be an optimist? He's one of the richest men in the world. He basically invented the form of personal computing that dominated for decades. He runs a foundation immersed in the world's worst problems — child mortality, malaria, polio — but he can see them getting better. Hell, he can measure them getting better. Child mortality has fallen by half since 1990. To him, optimism is simply realism.
But lately, Gates has been obsessing over a dark question: what's likeliest to kill more than 10 million human beings in the next 20 years? He ticks off the disaster movie stuff — "big volcanic explosion, gigantic earthquake, asteroid" — but says the more he learns about them, the more he realizes the probability is "very low."
Now that the cloud is becoming a common fixture in the enterprise, the IT industry is starting to look at how a cloud-facing, mobile-driven environment will affect that full data stack.
Naturally, this is mostly conjecture at this point because many leading experts still do not know how the technology, user requirements, business models and even entire industries will be affected by this transformation. From an historical perspective, the current decade is very similar to about 100 years ago as utility-based electrical grids were first powering up: People are in awe of an amazing new technology, even though its full ramifications cannot be discerned.
Still, there are those who are willing to give it a try, particularly when it comes to the all-software IT deployment capabilities that abstract architectures represent. MapR Technologies’ Jack Norris recently explored the potentialities of “re-platforming” the enterprise toward a more data-centric footing. This will naturally require a new view of physical infrastructure, such as the current separation of compute and storage, but it also has implications higher up the stack, as in the need to maintain separate production and analytics architectures. This new stack will also require global resource management, linear scalability and real-time processing and systems configuration.
It’s been about eight months since IT services giant and top-ranked MSPmentor 501 2015 company Dimension Data announced it would deploy globally standardized managed services for data centers.
The service, built on the organization’s managed services automation platform, manages servers, storage and networks for on-premises, cloud and hybrid data centers, the company said in a statement in September. Those services can be in the client’s data centers, colocation facilities, in the public cloud, in a private cloud, or in Dimension Data’s cloud.
One of the problems that is related to our ability to understand how resilient we can possibly be in the future is that we expect the future to be based on our normalities. We expect (and would probably like) a degree of stability based upon what we know and understand to be our current terms of reference. Unfortunately, things change; and alongside the political and international tectonic shifts that appear to be accelerating at the moment, we should also consider those structures and capabilities upon which we have long relied and the fact that we may be losing control of them.
The structures of our societies, the underpinning elements of the way that we live, can also have a profound influence on our ability to live in the same way in the future. An interesting combination of debt and demographics is influencing the potential longevity of our economic structures, according to the European chief executive of Goldman Sachs Asset Management.
We have become not only acculturated to interruptions, but addicted to them. We have the mistaken belief that interruptions are a perfectly normal way of life, despite knowing deep down that “time is a precious commodity that we cannot afford to waste.”
Therein lies the essential message of Edward Brown, founder and president of Cohen Brown Management Group, a culture change and time management consulting and training firm in Los Angeles. But at least he’s trying to do something about it. He’s the author of “The Time Bandit Solution: Recovering Stolen Time You Never Knew You Had,” and he feels strongly enough about the issue to take time out for an in-depth email interview on the topic.
I learned a lot from that interview about the extent to which we allow ourselves to be interrupted, and the price we pay as a result. To set the stage for the discussion, Brown pointed out that there are two key types of interruptions that we tolerate: those coming from other people, and those coming from our devices. He said other people are inveterate time bandits, and the fact that their intent is innocent doesn’t matter: