Industry Hot News (6923)
Taller hard drives, multiple actuator arms, and multi-drive packages — these are just a few of Google's suggestions as it calls for a complete rethink of storage disk design.
In a white paper called "Disks for Data Centers," published last month, the company gives some hints as to how the hard disk drive might evolve in the coming years.
There's a need for change, the white paper asserts, because the fastest-growing use case for hard drives is for mass storage services housed in cloud data centers. YouTube alone requires 1 million GB of new hard drive capacity every day, and very soon cloud storage services will account for the majority of hard drive storage capacity in use, it says.
In the software-defined data center (SDDC), all elements of the infrastructure, such as networking, compute, servers and storage, are virtualized and delivered as a service. Virtualization at the server and storage levels is a critical component on the journey to an SDDC, since it enables greater productivity through software automation and agility while shielding users from the underlying complexity of the hardware.
Today, applications are driving the enterprise – and these demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. The problem is that in a world that requires near instant response times and increasingly faster access to business-critical data, the needs of tier 1 enterprise applications such as SQL, Oracle and SAP databases have been largely unmet. For most data centers the number one cause of these delays is the data storage infrastructure.
Why? The major bottleneck has been I/O performance. Despite the fact that most commodity servers already cost-effectively provide a wealth of powerful multiprocessor capabilities, most sit parked and in idle mode, unexploited. This is because current systems still rely on device-level optimizations tied to specific disk and flash technologies that don’t have the software intelligence that can fully harness these more powerful server system technologies with multicore architectures.
If you think you know what Big Data is going to be like based on the volume of today’s workflows, well, to coin a phrase, “you ain’t seen nothin’ yet.”
The fact is that with the sensor-driven traffic of the Internet of Things barely under way, the full data load that will eventually hit the enterprise will be multiple orders of magnitude larger than it is today, and much of it will be unstructured and highly ephemeral in nature, meaning it will have to be analyzed and acted upon quickly or it loses all value.
The good news is that much of the processing will be done at the edge, where it can be leveraged for maximum benefit without flooding centralized resources. But a significant portion will still make it to the data center or the data lake, which means the enterprise will need to implement significant upgrades to infrastructure throughout the distributed data environment, and soon.
Now before I start I just want to say that I’m sure there are a thousand ways in which to design an info sec department structure. I’m just going to cover off an example of how it might be set up.
I'm writing this because at one point in my life I assumed that having a single information security manager in a business was enough (or even if it was just bolted on to another job it would do!). This is clearly not the case. I now see some of the complexities involved and I wanted to share it with those who might be equally as oblivious as I was!
(TNS) - The world must brace for the threat of further attacks in the wake of this morning's bloodshed in Brussels, terror experts said today.
"We're all at risk" of Brussels turning into a coordinated multi-city, multi-country attack, said former Boston Police Superintendent-in-Chief Daniel Linskey, now managing director of the global investigations and securities firm Kroll.
"I wish it wasn't true, but it's out there. They've already talked about it, and us not talking about it won't change that," said Linskey.
At least 34 people have been reported dead in three attacks, which rocked the airport and a metro station in the Belgian capital during this morning's rush hour. In response, Belgium has activated its highest threat level, essentially going into a lockdown.
With spring weather and mosquito season coming soon in the United States, the Zika virus – and the mosquitoes that carry the virus – may be a major concern. Zika is currently affecting more than 30 countries and territories in the Americas and Pacific Islands. Zika virus is primarily spread through the bite of an infected Aedes aegypti mosquito. People and communities can take steps to reduce the number of mosquitoes in their homes and communities to protect themselves from Zika.
How Does Water Help Mosquitoes Breed?
Aedes aegypti is known as a “container-breeding mosquito” because it likes to lay eggs in and around standing water. Studies show that female mosquitoes prefer to lay eggs in water that collects or is stored in manmade containers.
Aedes aegypti mosquitoes lay eggs on the walls of water-filled containers. Eggs stick to containers like glue and remain attached until they are scrubbed off. The eggs can survive when they dry out—up to 8 months. When it rains or water covers the eggs, they hatch and become adults in about a week.
Reduce mosquitoes at home
Here are a couple of steps you can take to prevent mosquitoes from living and breeding around your home.
Remove standing water
Keep mosquitoes from laying eggs inside and outside of your home. Items in and around people’s homes can collect water. Once a week, empty and scrub, turn over, cover, or throw out containers that hold water, such as
- pet water bowls
- flowerpot saucers
- discarded tires
- pool covers
- trash cans, and
- rain barrels.
These actions can help reduce the number of mosquitoes around areas where people live.
Follow safe water storage tips
If water must be stored, tightly cover storage containers to prevent mosquitoes from getting inside and laying eggs.
Reduce mosquitoes in the community
Communities also can take steps to reduce the number of mosquitoes and the chances of spreading disease.
Build systems that distribute safe water
If people have access to clean and safe water in their communities, they will not need to store it in and around their homes. Research has shown that when community-wide distribution systems are built, the number of mosquitoes decreases, because water is not being stored near areas where people live.
When water is contaminated with organic matter (for example, human or animal waste, grasses, and leaves), the chances that mosquito larvae will survive may increase because contaminated matter provides food for larvae to eat. Sanitation departments and wastewater treatment plants remove organic wastes and treat water with chlorine or other disinfectants. These activities may decrease mosquito populations and, simultaneously, prevent diarrheal diseases.
Water, sanitation, and hygiene* (WASH) are critical to keeping people healthy and preventing the spread of many different diseases, including Zika. World Water Day recognizes the importance of safe drinking water and improved sanitation and hygiene in the health of our world’s population.
*Basic sanitation includes access to facilities for the safe disposal of human waste, and the ability to maintain hygienic conditions, through services such as garbage collection, industrial/hazardous waste management, and wastewater treatment and disposal.
Learn more about World Water Day at www.unwater.org/worldwaterday and visit www.cdc.gov/healthywater/global for more information about CDC’s efforts to ensure global access to improved water, sanitation, and hygiene.
For more information on the Zika virus, and for the latest updates, visit www.cdc.gov/zika.
The hype cycle sure is in full swing when it comes to the importance of data. We see headlines declaring that data is the new oil. We hear analysts talk about how to hire a good data scientist. The taunts of our peers echo off the hallways: "If you don't hire a chief data officer, you must not be one of the cool kids!" Well, ok, maybe things haven't gone quite that far yet. But still.
Let's get one thing straight here. Data is nothing new for IT. It is, after all, the "information" part of information technology. Yet we spend so much time contemplating data science, analytics, and data lakes. My observation is that very little time is spent on the basics of data.
It's a good question, perhaps the most important question about data. If we don't provide business value, why are we gathering data or engaging in analytics at all?
Sometime in the early 2000s, Amir Michael responded to a Craigslist ad that was advertising a data center technician job at a company whose name was not mentioned. He applied, and the company turned out to be Google. After years of fixing and then designing servers for Google data centers, Michael joined Facebook, which was at the time just embarking on its journey of conversion from a web company that was running on off-the-shelf gear in colocation data centers to an all-custom hyperscale infrastructure.
He was one of the people that led those efforts at Facebook, designing servers, flying to Taiwan to negotiate with hardware manufacturers, doing everything to make sure the world’s largest social network didn’t overspend on infrastructure. He later co-founded the Open Compute Project, the Facebook-led effort to apply the ethos of open source software to hardware and data center design.
Today, he is the founder and CEO of Coolan, a startup whose software uses analytics to show companies how effective their choices of data center components are and helps them make more informed infrastructure buying and management decisions.
Cloud services are becoming increasingly common amongst businesses of all shapes and sizes, allowing for increased productivity while reducing on-premises infrastructure costs. As a result, Microsoft Office 365 and Exchange Online are now a common pairing that offers a hosted platform by which organisations can access their business data from anywhere in the world whether it’s from a workstation or mobile device.
Whilst there’s no denying that there are a number of benefits when moving to Office 365, it’s important to identify the risks involved and consider any changes that may be required to your business practices. Here are 4 areas that your organisation should definitely be considering when making the move to a cloud-based platform or hybrid solution:
Sometimes risk analysis can result in paralysis. Finding your risk tolerance and applying it to specific situations requires a nuanced approach.
I am always wary of anyone who tells me categorical rules – e.g. we do not do business in Russia because it is too risky. In this era of oversimplification, such statements border on intellectual dishonesty.
A careful approach to risk analysis always involves a cost benefit framework. Compliance is not a function that is dedicated to identifying risk and avoiding all potential risks. Compliance is part of an overall cost benefit risk analysis.
Timing, as comedians say, is everything. It’s true if you’re on stage entertaining an audience. It’s also true if you’re trying to recover from IT disaster situations involving multiple systems that pass data between one another. In complex configurations with different intercommunicating systems running CRM, ERP, business intelligence, and process integration, one system going down can have a global impact on data consistency. The correct use of the recovery consistency objective or RCO, and understanding of timing issues, can help you recover data and consistency in the best way possible.
A quick refresher on RCO may be in order. Arithmetically, we define RCO as one minus the ratio of inconsistent systems to total systems: RCO = 1 − (number of inconsistent systems / total number of systems).
A little mental juggling of the possibilities shows that RCO can vary between a maximum of one (totally consistent) and a minimum of zero (totally inconsistent).
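As a minimal sketch of that arithmetic (the function name and the five-system example are my own, not from the original):

```python
def recovery_consistency_objective(inconsistent: int, total: int) -> float:
    """RCO = 1 - (inconsistent systems / total systems)."""
    if total <= 0:
        raise ValueError("total number of systems must be positive")
    if not 0 <= inconsistent <= total:
        raise ValueError("inconsistent systems must be between 0 and total")
    return 1 - inconsistent / total

# A five-system landscape (CRM, ERP, BI, integration, database) where two
# systems were recovered out of sync with the other three:
print(recovery_consistency_objective(2, 5))  # partially consistent
print(recovery_consistency_objective(0, 5))  # 1.0: totally consistent
print(recovery_consistency_objective(5, 5))  # 0.0: totally inconsistent
```

The two boundary cases reproduce the maximum and minimum described above.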
Predicting the future of cybersecurity is a big deal in the security world. Every year, experts will put out their predictions of the biggest cybersecurity threats for the coming year. Sometimes, they actually get it right.
The folks at the Information Security Forum (ISF) have gone a little longer range in their predictions, with their Threat Horizon 2018 (yes, you read the year correctly). The report contains three themes that we should be preparing for: Technology adoption dramatically expands the threat landscape; the ability to protect is progressively compromised; and governments become increasingly interventionist. In a formal release, Steve Durbin, managing director of the ISF, stated:
We predict that many organizations will struggle to survive as the pace of change deepens. Therefore, at least until a conscious decision is taken to the contrary, these three themes should appear on the radar of every organization, regardless of size.
Software-defined networking. Network functions virtualization. Virtual storage. These are the new buzzwords of the channel today. But are these trends actually as novel as they seem? Viewed from an historical perspective, not really.
SDN, NFV and scale-out storage — which we can collectively call software-defined everything, or SDx if you like acronyms — offer lots of benefits for data centers and the cloud. They abstract operations from underlying infrastructure, making workloads more portable, scalable and platform-agnostic. They also create new opportunities for building more secure infrastructure. And they can lower costs by letting you get next-generation functionality out of cheap commodity hardware.
It seems pretty certain that software-defined everything is the wave of the future. From Docker containers to carrier-grade SDN projects like ONOS, these technologies are progressing rapidly through the development and adoption stages and into production use. Some of them are not there yet, but they’re on the way.
One of the S&R team’s newest additions, Principal Analyst Jeff Pollard comes to Forrester after many years at major security services firms. His research guides client initiatives related to managed security services, security outsourcing, and security economics, as well as integrating security services into operational workflows, incident response processes, threat intelligence applications, and business requirements. Jeff is already racking up briefings and client inquiries, so get on his schedule while you still can! (As a side note, while incident response is generally not funny, Jeff is. He would be at least a strong 3 seed in a hypothetical Forrester Analyst Laugh-Off tournament. Vegas has approved that seeding.)
Prior to joining Forrester, Jeff served as a global architect at Verizon, Dell SecureWorks, and Mandiant, working with the world's largest organizations in financial services, telecommunications, media, and defense. In those roles he helped clients fuse managed security and professional services engagements in security monitoring, security management, red teams, penetration testing, OSINT, forensics, and application security.
The enterprise is eager to implement private and hybrid clouds even though full public infrastructure is likely to be less costly, more scalable and more flexible. At the same time, organizations are looking to supplement legacy virtual resources with advanced container platforms in support of broad service- and microservice-based data environments.
Clearly, there must be a way to bring all of these technologies together so that everyone is happy.
Microsoft is looking at containers as a key opportunity to draw more enterprise workloads to its Azure cloud. The company is close to releasing the next version of Windows Server, which features Hyper-V container technology to provide a distributed environment that the enterprise can use to deploy and manage self-contained virtual environments both on-premises and in the cloud. The company is rather late to the container game, as it was with the virtual machine, but its reach into legacy data environments is considerable, and many organizations will no doubt find it appealing to suddenly gain the ability to pool container services across hybrid clouds simply by upgrading their existing server environment.
(TNS) - James Young parked his pickup where the floodwaters lapped at Allie Payne Road in Orange and grabbed his kayak out of the bed.
Then, he paddled his way home.
This has been Young's routine for several days.
He wades through ankle-deep water in his flip-flops before climbing into his blue kayak and setting off down the Sabine. It's the only way in and out of his neighborhood.
The concept of biometrics is not new. International super spies have been accessing top secret information with fingerprint and retina scans, voice recognition, and other biometric methods in movies and TV shows for decades. Things like facial recognition and fingerprint scanning have finally made their way to mainstream devices used by average consumers, though, which raises the question of whether or not they provide adequate protection—or if they are more or less secure than the traditional username and password.
Better Than Nothing
It may not be a very convincing argument, or a compelling endorsement of biometrics, but biometric security is better than nothing.
There is an individual who has reached out to me a couple of times—Hitoshi Kokumai. He believes that biometric authentication used as a substitute for traditional passwords or PINs is inherently less secure than the password or PIN it replaces. He created a short video explaining that biometric authentication provides a false sense of security and results in “below-one factor authentication.”
Unless ERM is treated as a team sport, with the company Board fully “on board,” the company will flounder when:
- Overwhelmed with other issues,
- Unfamiliar risks related to specific situations occur or
- The sheriffs in the C-Suite who formerly interacted with the ERM designee Board member view the political risks as too costly to point out the “175-pound gorillas” in the room.
This puts one’s business at risk of a 175-pound gorilla growing into the proverbial 800-pound gorilla or, even worse, into 800 dead rats. In the blink of an eye and with brute strength, that dreaded multimillion-dollar roof comes crashing down.
In real life, there are never enough resources vis-à-vis people, money or time needed to take advantage of the myriad opportunities to solve all of the problems rapidly piling up on one’s desk. So, how does one increase Board and organization involvement in integrating enterprise risk management into the corporate DNA? And where can the right person be found to assist in reaching that goal?
Last month I presented at “Cyber Security Exchange Day,” hosted by the folks at Bryant University and OSHEAN. It was a great event, filled with lots of discussion about what’s happening in the world of cyber security and how the threat landscape is evolving and impacting all forms of IT.
Although the cloud is being more widely adopted, cloud security remains a top concern among enterprise IT professionals. In recent years, news headlines have been filled with enough stories about compromised data security to drive executives away from networked and cloud solutions and back to the proverbial days of stuffing cash in a mattress.
However, while these high-profile news stories drive much of the narrative around data security, the reality is that the vast majority of network security attacks are far more basic in nature. It’s important for organizations to recognize that threats to a computing environment are always present, and that they need to take a more practical approach to managing real, not simply perceived, threats.
Businesses competing on data must be masters of change. To keep pace with constantly shifting business models, markets, and customer expectations, companies must become more agile, which includes empowering employees with insights that are available at their fingertips.
Self-service analytics is one of the tactics separating industry leaders from laggards.
In today's world, "self-service" is no longer synonymous with passively consuming static reports pre-packaged by IT. It's more about building one's own reports, exploring data, and interacting with it.
Last week, I wrote about Jessica Kriegel, senior organization development consultant at Oracle, who argues that generational stereotypes, like the widespread notions we’ve all read about millennials as entitled, tech-savvy, structure-averse job-hoppers, are harmful to workplace fairness and productivity. In my interview with Kriegel, I also drilled down on the issue of stereotyping as it pertains to IT professionals, which warrants further discussion here.
I found Kriegel, a millennial herself, to be persuasive in her argument, which she makes in her new book, “Unfairly Labeled: How Your Workplace Can Benefit from Ditching Generational Stereotypes.” I also found her to be refreshingly candid. She didn’t miss a beat, for example, in responding to my question about what sorts of generational stereotyping she has found to be most common within Oracle:
I can only speak to the groups that I have worked with. I was brought in to work with the product development team. The managers were basically saying that millennials could not easily transition from college to corporate. They felt like millennials were bringing the college campus style to the corporate atmosphere. So I was brought in to resolve that issue—to teach the millennials how to be more professional, more corporate, and less casual and college-like. That manifested itself in many ways. Some of it had to do with dress code; some of it had to do with productivity; some of it had to do with expectations with regard to work/life balance.
High school students thinking about a college education and career in the cybersecurity field may want to begin preparing now.
There are numerous programs to help high schoolers learn about cybersecurity, gain experience for potential summer internships, and enhance college applications.
Hacker Highschool
Hacker Highschool provides a set of free hands-on, e-book lessons designed specifically for teens to learn cybersecurity and critical Internet skills. These are lessons that challenge teens to be as resourceful and creative as hackers with topics like safe Internet use, web privacy, online research techniques, network security, and even dealing with cyber-bullies. The full program contains teaching materials in multiple languages, physical books with additional lessons, and back-end support for high school teachers and home schooling parents.
The non-profit ISECOM researches and produces the Hacker Highschool Project as a series of lesson workbooks written and translated by the combined efforts of volunteers worldwide. The results of this research are books based on how teens learn best and what they need to know to be better hackers, better students, and better people.
BATON ROUGE, La. – State and federal emergency management officials encourage Louisiana flood survivors to begin repairs as soon as they can.
Flood survivors do not need to wait for a visit from the Federal Emergency Management Agency or their insurance company to start cleaning up and make repairs. FEMA inspectors and insurance claims adjusters will be able to verify flood damage even after cleaning has begun.
It’s important for survivors to take photographs of damage and keep recovery-related receipts. Insurance companies may need both items, while FEMA may need receipts.
Survivors should check for structural damage before entering their homes and report any damage to local officials. They should also immediately throw away wet contents like bedding, carpeting and furniture because of health issues that may arise with mold.
Emergency management officials encourage survivors to register for FEMA assistance as soon as they can. They only need to register once and only one registration is allowed per household. Once registered, survivors should keep in touch with FEMA and update contact information if it changes.
FEMA assistance may help eligible homeowners and renters pay for a temporary place to stay, make repairs or replace certain damaged contents.
Individuals can register online at DisasterAssistance.gov or by calling toll-free 800-621-3362 from 7 a.m. to 10 p.m. daily. Multilingual operators are available.
Survivors who are deaf, hard of hearing or have a speech disability and use a TTY may call 800-462-7585. Survivors who use 711 or Video Relay Service or require accommodations while visiting a center may call 800-621-3362.
FEMA assistance is not taxable, doesn’t need to be repaid and doesn’t affect other government benefits.
Those who are referred to the U.S. Small Business Administration should complete and return the application for a low-interest disaster loan. It is not required to accept a loan offer but returning a completed application is necessary for FEMA to consider survivors for certain forms of disaster assistance.
We urge everyone to continue to use caution in areas where floodwaters remain. Monitor DOTD’s www.511la.org website for updated road closure information. Look for advisories from your local authorities and emergency managers. You can find the latest information on the state’s response at www.emergency.la.gov. GOHSEP also provides information on Facebook and Twitter. You can receive emergency alerts on most smartphones and tablets by downloading the new Alert FM App. It is free for basic service. You can also download the Louisiana Emergency Preparedness Guide and find other information at www.getagameplan.org.
Disaster recovery assistance is available without regard to race, color, religion, nationality, sex, age, disability, English proficiency or economic status. If you or someone you know has been discriminated against, call FEMA toll-free at 800-621-FEMA (3362). For TTY call 800-462-7585.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow us on Twitter at https://twitter.com/femaregion6 and the FEMA Blog at http://blog.fema.gov.
Are you an IT employee? If so, you’re probably aware of PagerDuty and DataDog (or similar alert monitoring services). In fact, you may even be the person that gets the calls when something goes terribly wrong in your cloud infrastructure or data center.
What if you could automate self-healing scripts and turn on sirens and alarms when PagerDuty detects an incident?
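One way to wire that up, sketched under stated assumptions: the endpoint below expects a PagerDuty v3-style webhook POST (the `event.event_type` field name follows PagerDuty's documented v3 webhook format, but verify it against your own subscription), and `run_self_healing` together with the `my-flaky-service` name are hypothetical placeholders for your own remediation logic.

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

def should_remediate(payload: dict) -> bool:
    """True when a PagerDuty v3-style webhook payload signals a new incident."""
    # "incident.triggered" follows PagerDuty's published v3 event naming;
    # confirm against your account's webhook subscription settings.
    return payload.get("event", {}).get("event_type") == "incident.triggered"

def run_self_healing() -> None:
    # Hypothetical remediation hook: swap in your own runbook script,
    # or the call that turns on those sirens and alarms.
    subprocess.run(["systemctl", "restart", "my-flaky-service"], check=False)

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        if should_remediate(payload):
            run_self_healing()
        self.send_response(200)  # acknowledge so PagerDuty stops retrying
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

In practice you would also verify the webhook's signature header and restrict which services may be restarted, rather than trusting any inbound POST.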
Almost nine in 10 (87%) organisations have faced a disruptive incident involving third parties in the last three years, with incidents including loss of data by a third party, or failure to deliver a service or product on time. The new study by Deloitte also highlighted the increasing frequency and impact of these disruptions, illustrating the significant need for organisations to invest in better governance and risk management related to third parties.
Of the organisations surveyed, nearly all (94.3%) felt low to moderate levels of confidence in the tools and technology currently used to manage their third party risk, with similar sentiment expressed in supporting risk management processes (88.6%). At the same time, 73.9% of respondents believe that third parties will play a highly important, or even critical, role in the year ahead.
Ensuring business continuity across your supply chain is a part of ensuring business continuity within your own organization and being able to manage through disruptive events. The latest Supply Chain Resilience Report published by the Business Continuity Institute noted that almost three quarters of organizations had experienced a supply chain disruption during the previous year and that half of those disruptions occurred below the tier one supplier.
Kristian Park, partner and global head of third party governance and risk management at Deloitte, commented: “With reliance on third parties set to grow, now is the time to address the ‘execution gap’ between risk and readiness. The impact of third party incidents ranges from reputational damage, regulatory and data breaches, through to actual lost revenue and future business. Increasing frequency of third party incidents, some high profile, has driven a shift in motivation of organisations to improve their risk management.”
Last year, Deloitte calculated that fines issued directly from third party failure have ranged from £1.3m to £35m, reaching £650m for those firms operating internationally and subject to global regulation. As the market value of a company is impacted by such fines, shareholders could incur losses of up to 10 times the fine with an average share price drop in the region of 2.55%.
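To make the scale of that multiplier concrete, here is a back-of-the-envelope calculation. The £35m fine and the 2.55% average share-price drop are the study's figures; the £2bn market capitalisation is a hypothetical value chosen purely for illustration.

```python
fine = 35_000_000                 # upper end of the £1.3m-£35m range Deloitte reports
max_shareholder_loss = 10 * fine  # the "up to 10 times the fine" estimate

market_cap = 2_000_000_000        # hypothetical £2bn company (assumed for illustration)
avg_price_drop = 0.0255           # average share-price decline cited in the study
value_wiped_out = market_cap * avg_price_drop

print(f"Maximum estimated shareholder loss: £{max_shareholder_loss:,}")
print(f"2.55% drop on a £2bn market cap:    £{value_wiped_out:,.0f}")
```

Even the average-case price drop on this hypothetical company exceeds the largest single fine in the reported range, which is the study's point about indirect shareholder impact.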
Kristian Park adds: “The good news is that third party risk management is starting to feature consistently in board-level discussions. Our latest survey found over half (51.1%) of respondents were united in their ambition to have integrated third party risk management systems in place in the year ahead. Rolling out common and unified standards remains a challenge as businesses are increasingly decentralised. Encouragingly, though, 86% of those surveyed have already started.”
(TNS) - When the Big One hits, one of the safest places for your child will be in school. Under 1933’s Field Act, public K-12 schools and community colleges are built to a higher standard than virtually any other building in the state.
“Our buildings are less likely to fail” in the event of a major earthquake, said Jill Barnes, coordinator of Emergency Services for the Los Angeles Unified School District. “Although that said, we do still evacuate and inspect the buildings before we let the students back in.”
Students and staff are also taught what to do in case of a major earthquake.
(TNS) - At a San Bernardino County department operations center activated in the wake of the Dec. 2 terrorist attack, a decision was made to marshal extra ambulances from Riverside County.
Three teams of five ambulances were assembled — one was sent to Redlands, another to Rancho Cucamonga and the third remained in Riverside on standby.
Given the uncertainty about what was really happening, the thought was there might be a secondary or even tertiary event, said Tom Lynch, EMS administrator for the Inland Counties Emergency Medical Agency, which coordinates emergency medical response between ambulances and hospitals in case of an emergency.
Lucy Jones received one of the most prestigious awards for a federal employee last year to the surprise of no one. The United States Geological Survey (USGS) seismologist earned the Samuel J. Heyman Service to America Medal in the citizen services category for her work in seismic research.
That award came largely because of her work in 2014 in Los Angeles Mayor Eric Garcetti’s office for a project that aimed to shape public policy toward the imminent Big One the city faces. The work resulted in the Resilience by Design report and important legislation. Garcetti called Jones’ work with Los Angeles groundbreaking in the way it bridges the gap between seismic science and public action. Jones announced that her last day with the USGS is March 30.
What’s the Resilience by Design program?
Probably the most important part of the water plan actually was the creation of what we call the Resilience by Design program within the Department of Water and Power.
There is a full-time person in charge of seismic resilience for the system developing these retrofit projects and evaluating new projects as they come forward.
The commitment is to a future of seismic-resilient pipes. The path to that is going to be starting with a network of hardened arteries. So they’re setting up priorities for replacement to maximize the network and get water to as much of the city as possible.
Organisations increasingly rely on IT to allow their business strategies to grow and change, and ultimately they need an IT infrastructure that is more powerful, flexible and cost-effective than ever before. Today’s businesses need systems that are permanently available, provide ubiquitous access and deliver fast, flexible responses to their fast-changing business requirements.
To accomplish this, data centres need to exploit existing enterprise resources in a more efficient way and thereby become more flexible in order to react faster. To meet these challenges, organisations need to build a unique, network-based data centre infrastructure that combines the traditional server, storage and networking infrastructure to better support emerging business applications.
A crisis can occur at any time, anywhere. Every year, dozens of organizations find themselves in the news for the wrong reasons, whether it be because of a natural disaster, terrorist attack, or a scandal. Ensuring your staff is trained in crisis communication before an incident occurs could save your company and its image when an emergency situation strikes.
Every organization in any field can be susceptible to a media crisis without the proper preparation in crisis communication. Luckily, Kathryn Holloway and her company, Press Alert, have been helping hundreds of firms prepare for such crises by media-training their spokespeople, and now she’s here to help you! Kathryn joined us recently for a webinar in which she used relevant current events to outline a list of best practices for interacting with the press during a crisis.
I was asked to do a presentation on trends for a large number of IT folks. Of course, concepts like hyper-converged computing, analytics, mobile, hybrid cloud, and the move away from passwords (finally) came to mind. But at the time, I also happened to be reading a book called Moonshot (I recommend it, by the way) by ex-Apple CEO John Sculley, in which he talks about trends that really don’t have that much to do with servers, services, processors, systems or networking gear.
Let’s talk about some of the trends behind the trends this week.
The threat of ransomware is growing rapidly: in a survey by Intermedia, 43% of IT consultants reported that their customers have fallen victim to an attack, 48% saw an increase in ransomware-related support inquiries, and 59% expect the number of attacks to increase this year.
A ransomware outbreak creates two hard choices for businesses: either spend multiple days recovering locked files from backups (which may be old, outdated versions), or pay a ransom to an organized crime syndicate that will then be incentivized to launch further attacks.
In both scenarios, organizations are likely to face significant user downtime that overshadows the cost of the ransom. The 2016 Crypto Ransomware Report revealed that 72% of infected business users could not access their data for at least two days following a ransomware outbreak, and 32% lost access for five days or more. As a result, experts observed significant data recovery costs, reduced customer satisfaction, missed deadlines, lost sales and, in many cases, traumatized employees.
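The first of those two hard choices, restoring from versioned backups, amounts to a point-in-time rollback: for each file, take the newest version saved before the infection began. This is a minimal sketch with hypothetical data structures (a per-file history of timestamped blob IDs), not any vendor's actual restore logic.

```python
# A minimal point-in-time restore over versioned backups, assuming the
# backup system keeps a (timestamp, blob_id) history per file. For each
# file we roll back to the newest version saved before the infection
# time; files with no clean pre-infection version are left out.
# Purely illustrative, not any vendor's actual restore logic.

def restore_point(versions, infected_at):
    """versions: {path: [(timestamp, blob_id), ...]} -> {path: blob_id}"""
    restored = {}
    for path, history in versions.items():
        clean = [(ts, blob) for ts, blob in history if ts < infected_at]
        if clean:
            restored[path] = max(clean)[1]  # newest pre-infection version
    return restored

backups = {
    "report.doc": [(1, "v1"), (5, "v2"), (9, "encrypted")],
    "notes.txt":  [(9, "encrypted")],  # only ever backed up after infection
}
print(restore_point(backups, infected_at=9))  # {'report.doc': 'v2'}
```

Note the gap the article highlights: if the last clean snapshot is old, this rollback silently loses everything written since, which is exactly why real-time backup shortens the window of lost work.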
Richard Walters, SVP of Security Products at Intermedia stated, “In the age of ransomware, what matters is how quickly employees are able to get back to work. Traditional backup and file sharing solutions are increasingly inadequate when it comes to addressing this growing concern, putting businesses at risk. Modern business continuity solutions that combine real-time backup, mass file restores and remote access combat threats by minimizing the crippling effects of downtime.”
The report also noted that ransomware should no longer be seen solely as a threat to individuals and small businesses. Nearly 60% of businesses hit by ransomware had more than 100 employees, and 25% were enterprises with more than 1,000 employees.
IT consultants are not the only group to express fears about such attacks with the cyber threat featuring high in the Business Continuity Institute's latest Horizon Scan Report. This report revealed that cyber attacks and data breaches are the top two threats according to business continuity professionals, with 85% and 80% of respondents to a survey expressing concern at the prospect of one of them occurring. It was only recently that the BCI published an article regarding a US hospital that had fallen victim to a ransomware attack, and had to pay up in order to access their data again.
Felix Yanko, President at Technology & Beyond added, “As business IT consultants, we receive an astounding number of customer queries about suspicious emails and pop-ups. The world is becoming more cyber-aware, but ransomware’s depravity keeps it three steps ahead. CryptoLocker, for instance, will take down multiple offices in one sweep, should it infect a shared server. Trying to restore from ransomware attacks off traditional back-ups, businesses usually lose weeks of work due to lost files, plus a day or more of downtime while computers are wiped and reloaded. Companies must have the right security measures in place to mitigate the devastation of ransomware.”
Software-defined data centers were originally met with suspicion. In fact, two years ago, “software-defined” anything was largely considered marketing hype. According to experts, only organizations with pre-existing homogeneous environments could take advantage of it. Times change. Today, the software-defined data center (SDDC) is transforming the service provider (SP) industry.
Nonetheless, service providers still face challenges, and these issues remain thorny:
- Resource constraints: There are not enough qualified cloud professionals to meet demand for services. This talent shortage, along with the complexity of the technology, puts a crimp in the SP's ability to provide customers with innovative solutions that effectively differentiate, compete, and attract new business.
- Pricing pressures: IT has always been challenged to demonstrate ROI. Even organizations with aligned IT and business capabilities continue to look for ways to reduce data center costs.
- Service Level Agreements: There is unrelenting pressure on solution providers to strengthen SLAs across the board—just to hold onto their existing customers. Seeing a way out of the conundrum can be difficult for SPs, and makes partnering with a vendor that provides the right technical solutions, training, and support critical to helping SPs expand opportunities.
LOGICnow announced today the integration of its MAX Backup and Disaster Recovery offering into its remote management software for managed services providers, creating a comprehensive security solution.
The technology is described as a holistic approach that allows IT professionals to get systems up and running “within minutes” after a data loss or cyber attack, and help companies regain access to locked data following a ransomware disruption.
DENTON, Texas – Cleaning up after a flood? FEMA has some suggestions:
• Check for damage. Check for structural damage before re-entering your home. If you suspect damage to water, gas, electric or sewer lines, contact authorities.
• Remove wet contents immediately. Wet carpeting, furniture, bedding and anything else holding moisture can develop mold within 24 to 48 hours. Clean and disinfect everything touched by floodwaters.
• Tell your local officials about your damages. This information is forwarded to the state so state officials have a better understanding of the extent of the damages.
• Plan before you repair. Contact your local building inspections or planning office, or your county clerk’s office to get more information on local building requirements.
• File your flood insurance claim. Be sure to provide: the name of your insurance company, your policy number and contact information. Take photos of any water in the house and damaged personal property. Make a detailed list of all damaged or lost items.
There are also questions about when Federal Assistance is available after a disaster. In simple terms, here’s the process:
A disaster happens. Local officials and first responders respond. These officials see that their communities need assistance in dealing with it. They ask the state for help. The state responds. Sometimes, the state sees that the response is beyond its resources. That’s when the state reaches out to FEMA for assistance.
Typically, before asking for a Major Disaster declaration, the state asks for a Preliminary Damage Assessment. This is done by teams composed of state and federal officials. They arrive in the disaster-damaged area and local officials show them the most severely damaged areas that they can access.
Among the items considered are:
• The amount of damage
• How widespread the damages are, and the number of insured and uninsured properties involved
• Special needs populations
• Other disasters the state may be working on.
Governors use this information to decide whether to request a disaster declaration. Once a governor decides to request a declaration, it is processed as quickly as possible.
If the President decides there’s a need, he signs a Major Disaster declaration for either Individual Assistance, Public Assistance or both, for designated counties.
Individual Assistance means:
Individuals and business owners may be eligible for rental assistance, grants for repairs, or low interest loans from the U.S. Small Business Administration (SBA) for damages to uninsured or underinsured property.
Public Assistance means:
Government entities and certain private non-profit agencies may be eligible to be reimbursed for the cost of repairs to uninsured or underinsured facilities, as well as some costs for labor and materials.
If there is a Major Disaster declaration, survivors may register for assistance at www.disasterassistance.gov, or by calling 1-800-621-3362 or (TTY) 1-800-462-7585.
The Preliminary Damage Assessment teams often take photographs of damaged areas. After a Major Disaster declaration, photographs of your damages are accepted as documentation, in addition to your receipts.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards. Follow us on Twitter at http://twitter.com/femaregion6 , and the FEMA Blog at http://blog.fema.gov.
Craig Huitema and Soni Jiandani blogged about Cisco’s latest ASIC innovations for the Nexus 9K platforms, and IDC did a write-up and video. In this blog, I’ll expand on one component of those innovations: intelligent buffering. First, let’s look at how switching ASICs may be designed today. Most switching ASICs are built with on-chip buffer memory and/or off-chip buffer memory. The on-chip buffer size differs from one ASIC type to another, and it is naturally limited by die size and cost. Some designs therefore use off-chip buffer to complement the on-chip buffer, but this may not be the most efficient way to design and architect an ASIC or switch. This leads to another critical point: how the switch ASIC handles TCP congestion control, and the impact of buffering on long-lived TCP flows and on incast/microburst traffic. An incast or microburst is a sudden spike in the amount of data going into the buffer, caused by many sources sending to a single output simultaneously. Examples include IP-based storage, where an object may be spread across multiple nodes, and search queries, where a single request fans out to hundreds or thousands of nodes. In both scenarios, TCP congestion control doesn’t help because the burst happens too quickly for it to react.
In this video, Tom Edsall summarizes this phenomenon and the challenges behind it.
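The incast dynamic described above can be illustrated with a toy queue model: many synchronized senders burst into one output port whose shared buffer drains far more slowly than packets arrive, so the tail of the burst is tail-dropped before TCP's round-trip-timescale congestion control can react. All parameters here are illustrative, not taken from any real switching ASIC.

```python
# Toy model of a TCP incast/microburst: many synchronized senders burst
# into one switch output port with a finite shared buffer. Arrivals far
# outpace the drain rate, so the tail of the burst is dropped before
# TCP congestion control (which reacts over round-trip times) kicks in.
# All parameters are illustrative, not from any real switching ASIC.

def incast_drops(senders, burst_pkts, buffer_pkts, drain_per_tick):
    queue, drops = 0, 0
    for _ in range(burst_pkts):          # each tick, every sender emits one packet
        for _ in range(senders):
            if queue < buffer_pkts:
                queue += 1               # packet buffered
            else:
                drops += 1               # tail drop: shared buffer is full
        queue = max(0, queue - drain_per_tick)  # the port drains far more slowly
    return drops

# 100 storage nodes answering a request with 16-packet bursts into a
# 512-packet shared buffer that drains 10 packets per tick: most of the
# 1,600-packet burst is dropped.
print(incast_drops(senders=100, burst_pkts=16, buffer_pkts=512, drain_per_tick=10))
```

With only 10 senders, arrivals match the drain rate and nothing is dropped, which is why the same buffer that handles steady long-lived flows comfortably can collapse under a synchronized burst.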
“It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you’ll do things differently.” – Warren Buffett
For Volkswagen, the second largest auto manufacturer in the world, it took 78 years to build its reputation and one day to lose it. Volkswagen Group has about 340 subsidiary companies. It has operations in 150 countries, including 100 production facilities. The company sells passenger cars under the Audi, Bentley, Lamborghini and Porsche brands, and motorcycles under the Ducati brand.
VW admitted installing software in diesel cars to dupe emissions control tests by making them test cleaner than they actually were—even using this information in their marketing campaign to promote these cars. Unfortunately for them, in 2014 a team of researchers at West Virginia University ran separate tests both in the lab and on the road and to their surprise the road tests showed 40 times more emissions. After 14 months of denials VW admitted they had installed “defeat” software that detected when the car’s emission system was being monitored in the lab and altered the results. As a result of the fallout, the company’s CEO resigned, criminal charges were filed, and losses are estimated to be in the billions.
No one will know for sure how much this lapse in judgment will cost Volkswagen in the long run. It makes you wonder who made the decision to cheat. Was it just one engineer, or a team of engineers? How far up the chain of command did it go? Did the CEO know? It doesn’t matter because he was forced to resign and the damage had been done.
When it comes to security and reports like those I’ve just read, I have to wonder if CEO stands for Chief Executive Ostrich, because there are a lot of them with heads buried in the sand, ignoring reality.
Take this new study by Cyphort and Ponemon Institute, for example. The email announcement I received regarding the study warned that CEOs are “completely clueless” about cyberattacks on their company, with a little more than one third of respondents saying they are never updated about security incidents. Why aren’t they learning about the attacks? The report, which surveyed 597 IT leaders in the private sector, found that 39 percent said the company didn’t have the intelligence data available to present to CEOs and convince them of the security risk. In turn, not only are companies being attacked, but it is taking way too long to detect that attack, with nearly a quarter saying it can take up to two years.
This could be because C-level executives make productivity a greater priority than security, according to the newest report from Barkly. The study found that while IT professionals want to put more emphasis on security, only 27 percent of executives want to prioritize security. Another big disconnect between IT and executives when it comes to security: The C-level suite thinks more software is the solution to improved security while IT professionals want to bump up employee education. The most ironic result of the survey was that IT pros say the uninformed employee is the network’s biggest threat while executives say it is insider threats. It’s almost like comparing green apples and red apples, isn’t it? But it does show that there is a serious lack of communication and understanding when it comes to security. As Jack Danahy, co-founder and CTO of Barkly, said in a formal statement:
NORTH LITTLE ROCK – FEMA offers a wide range of free resources for Arkansas homeowners who are either rebuilding after the winter storms or preparing for the next time disaster strikes.
FEMA maintains an extensive online library, including bilingual and multimedia resources, which describe the measures contractors or do-it-yourselfers can take to reduce risks to property. FEMA publications can be viewed online and downloaded to any computer.
For rebuilding information, go to www.fema.gov and click on “Plan, Prepare and Mitigate.” There are numerous links to resources and topics including “Protecting Homes,” “Protecting Your Business” and “Safe, Strong and Protected Homes and Communities.” There are also links to information about disaster preparedness.
The decision to rebuild stronger, safer and smarter may save lives and property in a future disaster.
http://www.fema.gov/protect-your-property - offers a comprehensive overview of available publications to help protect your home or business against hazards including earthquakes, fire, flood, high winds and others.
http://www.fema.gov/small-business-toolkit/protect-your-property-or-business-disaster - provides links to resources for protecting your community, your business and places of worship, and offers helpful links like these:
- Protect Your Business from All Natural Hazards
- Protect Your Property from an Earthquake
- Protect Your Property from Fire
- Protect Your Property from Flooding
- Protect Your Property from High Winds
As Paul Revere gazed out his window, seeing the signal lantern in the tower of the North Church, he cried out… “The Examiners are coming!!! The Examiners are coming!!!”
Paul’s wife, startled, leapt to her feet and excitedly remarked, “Now what do we do?”
First of all, do not panic. Most likely you have been operating for approximately 12 to 18 months as a de novo licensee, and hopefully your operations have not only been successful, but also profitable. Some regulators prefer to contact a licensee by phone with the news that the examiners will be arriving shortly to conduct the first examination. During this initial call, they will provide you with a list of required documentation to be prepared in anticipation of the examiners’ on-site arrival. Other regulatory agencies choose to send out what the examiners refer to as a “first day letter request” (FDL). The FDL details the required documentation to be readied for the examination and, in some instances, identifies and provides contact information for the Examiner-in-Charge (EIC) of the examination.
Yesterday’s archives can’t handle the challenges of today’s modern enterprise, let alone the challenges enterprises are bound to face in the future as communication channels rapidly evolve.
The variety of communication sources has dramatically changed, expanding to include instant messaging, unified communications, enterprise social networks, social media, and more.
Old school archives were built at a time when an organization’s primary communication vehicle was email, and social media sites like Twitter and Facebook had not yet gained massive popularity. Expecting these antiquated archives to perform in today’s world is akin to using your grandparents’ landline phone as a primary mode of contact.
So how does today’s archive differ from the archive of yesteryear?
“There has been an imminent failure at the West Pass Dike. Diablo Dam has failed at 815. Newhalem and Diablo have been evacuated,” read the post from the Skagit County Department of Emergency Management. “Concrete and Hamilton have been evacuated and moved to higher ground. Evacuate to Concrete High School and Concrete Airport.”
As the post went up, an emergency coordination center on East College Way in Mount Vernon was already teeming with activity.
I get it, funds are limited and you probably think you’re too stressed out to even think about paying for website security protection. Your business is probably not “big enough” to be targeted by hackers and then again, what kind of destruction could they even cause? Yeah, that’s what I thought.
Early on in my business (about 4 months in to be exact), the Foxtail Marketing website was hacked. We lost our traffic, our good standings with Google, and our site was left unsalvageable. Because the hackers had infected every aspect of our site, we couldn’t trust any of the old code. We had to bite the bullet and buy a new (and expensive) website, wipe our servers, change every password we ever created, and pray that we didn’t leave any backdoors for the hackers to get in again.
Just like a spare tire, you’ll never understand how bad you need website security until it’s too late. But likely for you, it’s not too late. The following guide should help you understand just how badly you need this security.
The number of product recalls in the UK jumped by 26% to a new high of 310 in 2014/15 from 245 in 2013/14 according to a new study by law firm RPC.
The number of vehicle recalls rose dramatically in the last year after several high profile incidents within the motor industry. In the last year the UK has seen 39 different motor vehicle recalls, a 30% increase from the 30 recalled in 2013/14.
The scandal over General Motors’ failure to promptly recall cars with a potentially faulty ignition switch may have prompted other manufacturers to recall more swiftly and more frequently if they identified a potential problem with their car. US federal agencies claimed the fault caused up to 124 deaths. GM recently agreed to pay $900 million in criminal damages to settle the case and eventually recalled 800,000 cars.
Pressure on the motor industry has been further raised by the investigation into Volkswagen over emissions testing, which began in 2014. French carmaker Renault recently recalled 15,000 cars after questions were raised over emissions testing of its cars.
Product recalls may not feature high on the list of threats according to the Business Continuity Institute's latest Horizon Scan Report, but they are still a threat to some. Product quality incidents were a concern for 27% of respondents to a global survey and product safety incidents were a concern to 19%.
Gavin Reese, Partner at RPC, commented: “Sometimes it can take a huge scandal to break for an industry to sit up, take notice and ensure their products are watertight. Certainly the automotive industry is now very sensitive to accusations of being slow to recall faulty or non-compliant products. Car manufacturers are looking for irregularities more closely, as well as facing increased pressure from regulators and, therefore, it’s likely that 2016 will also see a high level of vehicle recalls.”
RPC also noted that the number of recalls relating to food and drink has significantly increased, by 50% this year, from 56 to 84. After the horse meat scandal in 2013, the National Food Crime Unit was established, which works to uncover incidents of food fraud in the UK. RPC says that the creation of this unit, as well as the increasing importance being placed by supermarkets on their supply chains, may have led to the rise in food product recalls in the last year.
Gavin Reese adds: “The horse meat scandal set off reverberations across the food industry and now a couple of years on tighter measures and an increased scrutiny have clearly made a big difference.”
Streaming analytics, self-service options, and embedding big data insights into the applications that drive the business are the new priorities for organizations as they evaluate their big data strategies.
That's according to a new TechRadar report from Forrester Research that looks at the state of big data in businesses today.
Enterprise organizations have reached a new stage in big data adoption, and in 2016 they will be looking to embed the technology into the applications that power their businesses via integration and APIs.
The World Health Organization recently announced that the Zika outbreak was a “public health emergency of international concern”. The Zika virus is a mosquito-borne virus linked to serious neurological birth disorders. It is native mainly to tropical Africa, with outbreaks in Southeast Asia and the Pacific Islands. It appeared in Brazil last year and has since been seen in many Latin American countries and Caribbean islands. Public Health England announced that four cases of the Zika virus have been confirmed in the UK. These cases are believed to have been ‘travel associated’ and not thought to have been contracted in the UK, though health officials expect to see more cases of travel associated infections. As we operate in an increasingly global world, with a constant flow of employees traveling for business, how should your organisation prepare for this outbreak?
Today’s information security landscape is a constantly evolving beast. As attack vectors continue to grow, attacks become more frequent and attackers evolve to be even more sophisticated.
This is what we call “the new normal.”
As a result, the need to continuously adapt to an increasingly hostile environment has driven a significant change from the familiar security measures that kept us “comfortable” only a scant five years ago.
While developers and IT operations professionals have been excited about the concept of DevOps, data center operators, the people who run the infrastructure for the teams upstream, haven’t generally been involved in the conversation. Jack Story, distinguished technologist at Hewlett-Packard Enterprise, thinks that is a mistake.
And people make that mistake because there is a lot of confusion about what DevOps is and isn’t. In a session at this week’s Data Center World Global conference in Las Vegas, Story attempted to make the case that data center operators should be part of the DevOps process and explain what it is.
A lot of confusion about DevOps comes from the misconception that it is about tools and automation. “It is not about automating the processes that you have today,” Story said. “It is not a tool. It is a cultural and organizational change.”
According to Frost and Sullivan, eight elements make a city “smart”: smart buildings, smart energy, smart mobility, smart health care, smart infrastructure, smart technology, smart governance and smart education, and smart citizens. Increasingly, city leaders are looking to the Internet of Things (IoT) and advances in technology to make their cities – and their citizens – work more efficiently and cost effectively. Smart building technology and sensor data analytics are being instituted in everything from lowering energy consumption to rethinking traffic flow to ordinary infrastructure maintenance.
Businesses, too, are adopting smart technology and sensor data analytics as a way to create better workspaces and improve employee productivity.
Everyone wants the latest, the greatest, the most cutting-edge technology. This is easy to do with a tablet or a smartphone but not when it’s an integrated enterprise data environment. In this circumstance, the only thing worse than falling behind the technological curve is throwing your processes out of whack with fork-lift upgrades.
This is why data infrastructure must evolve rather than change outright. Sometimes the evolution is quick and, yes, disruptive; other times it is slow, almost to the point where users don’t even know it’s happening. But overall, the change must be steady and purposeful or else the enterprise will find itself unable to compete in the emerging digital economy.
Sounds simple, right? It isn’t, of course. But even though each move must be weighed against broader architectural goals rather than simply adding more storage or compute power as in the past, there are still ways to break down the overall process into key steps while still maintaining the flexibility to alter the plan as needed.
(TNS) - Emergency medical responders and law enforcement professionals trained Monday and Tuesday for one of the worst scenarios possible: an active shooter.
“This training is designed to prepare us to render medical aid in a shooting situation much sooner than we are currently able to,” said Eliza Shaw, training coordinator for Centre LifeLink EMS, which hosted the South Central Mountains Regional Task Force training initiative.
Task Force Director Phil Lucas said the training is beneficial to both emergency medical responders and law enforcement because it helps them establish expectations and learn each other’s procedures.
(TNS) - Technicians are assessing damage at Fayette County EOC after its communications tower was struck by lightning Monday evening.
Kevin Walker, director of Fayette County Office of Emergency Management, said the tower took a direct lightning hit around 5:30 or 6 p.m. Monday and all communications, radio and phone, were lost.
The center immediately initiated mutual aid agreements with other counties, allowing neighboring centers to accept, dispatch and monitor all 911 calls, he said.
Operators and technicians worked through the night to restore service, and as of 3 a.m. Tuesday all phone lines and radios are back up and running, said Walker.
(TNS) - Washington's entire Metrorail system will close for at least 29 hours beginning at midnight tonight for emergency inspections following a tunnel fire that appeared similar to a fatal incident a year earlier, Metro General Manager/CEO Paul J. Wiedefeld said Tuesday.
The "full closure" will allow inspections of the system's third-rail power cables following an early morning tunnel fire on Monday, Wiedefeld said in a news release.
About 600 jumper cables will be examined along tunnel segments in the system.
(TNS) - Sabine River flooding that shut down westbound Interstate 10 traffic on Tuesday is "extremely inconvenient" to regional commerce, a noted Texas economist said, but likely isn't as costly as other disruptions, such as the 2008 storm surge from Hurricane Ike.
But it's early yet in what might be called a slow-motion disaster as floodwaters from Toledo Bend Reservoir move downriver toward Sabine Lake and the open Gulf of Mexico.
As of Tuesday evening, eastbound traffic could still cross into Louisiana, though Texas officials were prepared to barricade it if the swollen river crept onto its lanes. Westbound traffic to cross into Texas, meanwhile, was shunted north to Interstate 20 in Shreveport, Louisiana. That hours-long detour is expected to last through the weekend.
Locks are great for safeguarding things from unauthorized access. Keys are necessary for allowing authorized people to open those locks. What happens, though, when there are so many locks, and so many keys that you lose track? What happens when those keys fall into the wrong hands, or can be easily duplicated? That is the dilemma facing Internet security and privacy today.
Digital certificates and encryption keys form the backbone of security and privacy online. When those certificates and keys are poorly managed, however, it puts the network and data at risk. Actually, the risk is even greater than if you had no keys and certificates at all, because having them creates a false sense of confidence. The existence of the keys and certificates provides an illusion of security that can make it even easier for attackers to exploit poorly managed keys and certificates.
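As a concrete illustration of that management gap, even a simple inventory check catches certificates that have quietly expired or are about to. The record format and warning threshold below are hypothetical, not from any real PKI or certificate-management tool.

```python
# A minimal sketch of a certificate inventory check, illustrating the
# "poorly managed keys" problem: certificates nobody tracks quietly
# expire or linger past their intended lifetime. The inventory format
# and 30-day threshold are hypothetical, not from any real PKI tool.
from datetime import datetime, timedelta

def flag_certs(inventory, now, warn_days=30):
    """Return names of certs already expired or expiring within warn_days."""
    flagged = []
    for name, not_after in inventory.items():
        if not_after <= now + timedelta(days=warn_days):
            flagged.append(name)
    return sorted(flagged)

now = datetime(2016, 4, 1)
inventory = {
    "web-frontend": datetime(2017, 4, 1),   # fine for another year
    "legacy-api":   datetime(2016, 3, 15),  # already expired
    "mail-gateway": datetime(2016, 4, 20),  # expires within 30 days
}
print(flag_certs(inventory, now))  # ['legacy-api', 'mail-gateway']
```

The hard part in practice is not this comparison but building the inventory at all: keys and certificates scattered across servers, appliances and scripts are precisely what creates the false sense of security described above.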
There is a fallacy that lightning never strikes the same place twice.
The scientist Victor Frankenstein never set out to create a monster; quite the reverse. In Mary Shelley’s novel, Frankenstein sought to assemble and reanimate a human from constituent parts. He creates life but is horrified by the result and rejects his creation, which, granted the gift of life, then seeks revenge on him.
Your IT estate evolves over time, although ‘evolves’ is perhaps not the best descriptor. Evolution implies some sort of Darwinian survival-of-the-fittest selection process. Your IT is more like a medieval town that tends toward unplanned, sprawling growth, with a lack of building codes resulting in precarious constructions. Pretty soon, the defensive value of the town walls is compromised by ramshackle lean-to buildings, sanitation proves inadequate, streets become alleyways and dangerous slums develop, along with conditions ripe for the spread of fire and disease. Over decades the town goes from protecting and nurturing its populace to constraining it. If the founders were still alive they would not recognise their creation.
What starts out as a carefully designed IT architecture accumulates complexity as new applications and functions are bolted on, acquisitions are integrated, and so on. Under pressure to make changes at pace and cheaply, it’s easy to create links between existing systems using whatever means are available and familiar. Before you realise it, your IT architecture looks more like a bowl of spaghetti with lines going everywhere. If you want to find out what is connected, the only way is to put your fork in, turn, lift and see what comes away on the end of it. That’s a mouthful which is messy and hard to eat.
Faxing is still a key part of today’s business world--to the tune of 100 billion (yes, “b”) pages a year, according to research firm Davidson Consulting--for a number of reasons, including security, compliance and ease of use. In fact, a CIO Insight article tells us that 72% of U.S. companies still have fax machines.
However, here’s a caution flag for you. If your clients are still running their fax processes on aging, analog-era infrastructure--desktop fax machines, internal fax servers, gateway software, analog fax lines--this might be among the least secure protocols they use for transmitting their data. Traditional faxing can present security vulnerabilities at every step in the process.
Moreover, your clients probably don’t even realize the data they send and receive via traditional fax faces these security weak points--and, by extension, puts them at risk of non-compliance. This is where you can help them.
At DRJ Spring World 2016 Tuesday, the Business Continuity Institute presented its annual North America Awards to recognize the outstanding contribution of business continuity and resilience professionals and organizations across the region.
The BCI North America Awards consist of nine categories – eight of which are decided by a panel of judges, with the winner of the final category (Industry Personality of the Year) chosen by BCI members in a vote. As expected, the entries received during the year were all of a high standard, and the panel of judges had a difficult task deciding upon the winners.
There can be only one winner in each category, however, and those showered in glory were:
Continuity and Resilience Consultant
Suzanne Bernier MBCI, President of SB Crisis Consulting
Continuity and Resilience Professional (Private Sector)
Linda Laun, Chief Continuity Architect at IBM
Continuity and Resilience Newcomer
Bradley Hove AMBCI, Consultant at Emergency Response Management Consulting Ltd
Continuity and Resilience Professional (Public Sector)
Ira Tannenbaum, Assistant Commissioner for Public/Private Initiatives at New York City Office of Emergency Management
Continuity and Resilience Team
Aon’s Global Business Continuity Management Team
Continuity and Resilience Provider (Service/Product)
Fusion Risk Management Inc and the Fusion Framework BCM Software
Continuity and Resilience Innovation
Fairchild Consulting, FairchildApp
Most Effective Recovery
Aon Global Business Continuity Management Team – Americas
Industry Personality of the Year
Brian Zawada FBCI, Director of Consulting Services at Avalution Consulting
In addition to these awards, the occasion was also used to present Des O'Callaghan FBCI with a BCI Achievement Award.
Sean Murphy MBCI of the US Chapter of the BCI and presenter of the Awards, commented: "I was impressed by the high calibre of the finalists and the winners this year. The fact that so many awards were hotly contested shows the depth of talent that is available in the business continuity and resilience profession in North America. The BCI North America awards are a great opportunity to recognise outstanding professionalism and innovation from the business continuity and resilience community and I was delighted to be part of this.”
The BCI North America Awards are one of seven regional awards held by the BCI and which culminate in the annual Global Awards held in November during the Institute’s annual conference in London, England. All winners of a BCI Regional Award are automatically entered into the Global Awards.
At the RSA Conference two weeks ago, a common question from both clients and former colleagues -- “So, what’s it like being an analyst?” -- led me to write this blog post.
In the interest of full disclosure, there were no massive epiphanies during my first year, but the transition from 15+ years on the vendor side to being an analyst did provide some perspectives, listed here in no specific order:
The enterprise storage industry is in transition and probably will be for some time as everything from the media to the array to software and even the physical location of all these components strives for relevancy in an increasingly complex digital economy.
The latest market research numbers should put to rest any doubt that enterprise storage as we know it today is not long for this world. But while this will certainly strain the profits of the leading storage vendors, it is still too early to call for their demise. In fact, given the changes being made to their top platforms and their overall storage portfolios, there is every indication that they are more than ready to roll with the changes.
As a whole, the enterprise data storage market saw a decline of 2.2 percent in the fourth quarter of 2015 even though shipments were actually up about 10 percent, according to IDC. Research manager Liz Conner pinned this on greater activity in areas like server-side storage and cloud-based deployments rather than the big storage arrays that dominated in the past. At the same time, all-Flash arrays saw a whopping 71.9 percent year-over-year gain to produce nearly $1 billion in revenues, while hybrid arrays drew nearly $3 billion to now account for about 28 percent of the overall storage market.
Whether it’s taking steps toward a healthier lifestyle, preventing diseases, or preparing for an emergency or natural disaster, public law is an important tool to promote and protect public health. The Centers for Disease Control and Prevention’s Public Health Law Program (PHLP) develops legal tools and provides technical assistance to public health colleagues and policymakers to help keep their communities safer and healthier.
Emergency preparedness is one of the most important topics PHLP covers. Most emergency response systems are based on laws that regulate when and how state, tribal, local, territorial, and federal entities can engage in an emergency response. The legal nuances are often complicated and easy to miss. PHLP offers resources and training to empower state, tribal, local, and territorial communities to better understand, prepare, and respond to public health emergencies. Together, public health and public health law can protect people from harm and help communities better prepare for disasters.
For the past 16 years, PHLP has helped public health practitioners respond quickly—and with the right legal resources—in times of crisis. PHLP’s work can be divided into two main areas: PHLP’s research initiative and the program’s workforce development activities. Through its research initiative, PHLP conducts legal research using legal epidemiology research principles. PHLP’s research looks at various critical issues to interpret how the law plays a role in diseases and injuries affecting the entire country, and examines specific topics in state and local jurisdictions.
Gregory Sunshine, JD, a legal analyst at CDC, describes the role the agency plays in our public health and legal systems and explains how this affected state Ebola monitoring and movement protocols.
PHLP’s training helps health officials learn what they need to know to prepare for an emergency and what the law allows. In 2015, staff went on a legal preparedness “roadshow,” training more than 500 people in 11 different states in just a few short months. This training showed participants how to recognize legal issues that arise during public health emergencies, offered tools for planning and implementing effective law-based strategies during an emergency, and provided an opportunity to exercise their knowledge through a fictional response scenario.
PHLP also offers emergency response support for specific emergencies. During a public health emergency, such as the Ebola epidemic, PHLP helps partners use the law to stay ahead of quickly evolving situations. After the first case of Ebola was diagnosed in the United States, enhanced entry screening, which the law allows in order to protect Americans’ health, was implemented in five airports beginning October 11, 2014. The enhanced entry screening was implemented to help identify and monitor travelers from countries with Ebola outbreaks who could have been exposed to the disease or who had signs or symptoms of Ebola.
Stakeholders were concerned that variations in how each state monitored and controlled the movement of travelers from countries with Ebola outbreaks could cause confusion, so PHLP staff published the State Ebola Screening and Monitoring Policies on its website so travelers could access them in one easy location. This information helped people who were considering working in West Africa understand what the requirements might be after they returned home. Similar to what was done during the Ebola outbreak, the program recently published an analysis of emergency declarations and orders related to the West Nile virus as part of CDC’s response to the 2016 Zika outbreak.
PHLP helps public health partners across America answer legal questions on many emergency preparedness and response topics. Through legal research, trainings, and publishing of the latest information, PHLP is always ready to help their partners understand how to use law to protect the health and safety of the public. People interested in learning more about PHLP can visit PHLP’s website. For regular updates on public health law topics, including legal preparedness, subscribe to CDC’s Public Health Law News.
Link to TedMed Video: http://www.cdc.gov/phlp/videos/tedmed-ebola.html
At a Food and Drug Law Institute webinar last week, Robin Usi, the Director for the Division of Data & Informatics (DDI), in the Data Sharing & Partnership Group of the CMS Center for Program Integrity, made clear that accuracy matters in the reporting of spend data to CMS pursuant to the Physician Payments Sunshine Act. She further stated that CMS is working to identify inaccurate data reporters and that reporters of inaccurate data are prime targets for agency audit and/or compliance actions.
As pharmaceutical and device companies gear up to report 2015 payments and other transfers of value made to physicians and teaching hospitals as required by the Sunshine Act, many companies may be concerned that they will miss something when the March 31 deadline rolls around. And while there are statutory penalties for not reporting – as Ms. Usi made clear at the FDLI webinar – the penalties for what is reported may be much more significant if the government views a company’s payments to physicians as kickbacks intended to induce the use of the company’s product.
The CMS Open Payments database offers tremendous data mining possibilities. For 2014 – the first full year of Open Payments reporting – the database contains over 11 million transactions valued at almost $6.5 billion. This obviously makes for unprecedented public visibility into the financial relationships between the almost 1,500 reporting companies and the over 600,000 physicians receiving payments.
A service level agreement (SLA) outlines how a managed service provider (MSP) will support its customers day after day, establishing the expectations for response times for service requests.
This agreement also serves as a legally binding contract, and as such, may protect an MSP against legal action.
An SLA serves many purposes for an MSP and its customers, but did you know this agreement can deliver a key differentiator for a service provider as well?
Managed services providers can find themselves navigating sticky privacy issues, balancing their duty to cooperate with law enforcement against their responsibility to safeguard customers’ data.
Executives at Stonehill Technical Solutions won't soon forget the day about six years ago when an FBI agent contacted the Laguna Hills, Calif., managed services provider and asked them to turn over the login credentials for a client whose business had – for undisclosed reasons – drawn the scrutiny of federal authorities.
At CEO David Bryden’s request, the agent sent over some documentation and a phone number to an FBI office, proof that the people on the phone were who they said they were.
NORTH LITTLE ROCK – Federal assistance is being offered to help Arkansas communities rebuild infrastructure to higher, more disaster-resistant standards and state officials are encouraging local governments to take advantage of that funding.
The assistance to communities is part of the aid that became available following the severe storms, tornadoes, straight-line winds, and flooding Dec. 26, 2015 to Jan. 22, 2016.
“Generally, the federal Public Assistance program restores disaster damaged infrastructure to pre-disaster conditions,” said John Long, federal coordinating officer for the Federal Emergency Management Agency. “But when cost effective and technically feasible, it makes sense to rebuild to higher standards that can prevent future loss. FEMA makes available the funds to do so.”
FEMA’s Public Assistance program provides federal funds to reimburse a minimum of 75 percent of the costs for removing debris, conducting emergency protective measures and repairing levees, roads, bridges, public utilities, water control facilities, public buildings and parks. Mitigation funding may be considered in each project category.
Eligible applicants may include:
- state agencies
- local and county governments
- private nonprofit organizations that own or operate facilities that provide essential government-type services
"Studies show that every $1 paid toward mitigation saves an average of $4 in future disaster-related costs,” said State Coordinating Officer Scott Bass of the Arkansas Department of Emergency Management. "By adding mitigation money to repair costs, our goal is to reduce or eliminate damages from future disasters.”
As part of the process for applying for federal assistance, experts from ADEM and FEMA help identify projects that will qualify for the special mitigation program. Officials urge applicants to take advantage of the funds.
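The two figures above, a minimum 75 percent federal share and an average of $4 saved per $1 of mitigation, reduce to simple arithmetic. The sketch below works one hypothetical project; the $200,000 and $25,000 costs are invented for illustration:

```python
# Illustrative arithmetic only: 0.75 is FEMA's minimum Public Assistance
# federal share; 4.0 is the cited average savings per mitigation dollar.
# The project costs below are hypothetical.
FEDERAL_SHARE = 0.75
MITIGATION_RETURN = 4.0

repair_cost = 200_000      # hypothetical cost to restore a damaged road
mitigation_cost = 25_000   # hypothetical added cost to rebuild to a higher standard

federal_reimbursement = FEDERAL_SHARE * (repair_cost + mitigation_cost)
expected_future_savings = MITIGATION_RETURN * mitigation_cost

print(federal_reimbursement)    # 168750.0
print(expected_future_savings)  # 100000.0
```

On those assumed numbers, adding $25,000 of mitigation raises the reimbursable total while the cited average suggests $100,000 of avoided future damage.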
# # #
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Cybersecurity is finally getting the attention it requires, but based on recent studies I’ve seen and conversations I’ve had, organizations have a long way to go to create a security posture that matches reality.
AttackIQ CEO Stephan Chenette recently wrote a blog post discussing the fact that most organizations are unaware of their security posture until they suffer a breach or are alerted by a third party. Yet, he wrote:
In the face of ever-increasing numbers of attacks, the average enterprise deploys 75 distinct security products (1), receives more than 17,000 alerts per day (2), and spends an average of $115 per employee on security (3). As an industry, we are getting into a cycle of buying more security technologies and then hiring more security engineers to manage those technologies. We need to get a handle on our capabilities sooner rather than later.
Most cloud experts will tell you that the real advantage of shedding static, legacy infrastructure is not the cost savings, but the enhanced agility. By quickly and easily developing new applications and pushing them out, organizations can craft a more responsive and compelling experience to customers, which should translate into higher sales.
But even after the cloud environment has been deployed, this doesn’t happen by itself. The enterprise needs to make sure that cloud functionality exists across the data environment and that business managers know how to leverage the flexibility and agility that the new service-based infrastructure offers.
One of the ways to do this, of course, is rapid deployment and configuration of resources. But as Google and others are quick to point out, the goal is not simply to deploy a new environment and let it run but to constantly configure and reconfigure resources to produce optimal results with the lowest consumption. Google’s new Custom Machine Types supports this level of functionality by offering sub-minute configuration changes, which provide the twin benefits of highly accurate load balancing and the ability to quickly change underlying resources like compute and memory to meet shifting data requirements. Essentially, it gives the enterprise what it wants when it wants it, with only a fraction of the complexity that usually accompanies infrastructure change management.
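The rightsizing logic that such fast reconfiguration enables can be sketched in a few lines. This is a generic illustration, not Google's API; the function name, inputs, and the 20 percent headroom figure are all assumptions:

```python
import math

def rightsize(peak_cpu_util, current_vcpus, peak_mem_gb, headroom=0.2):
    """Smallest (vcpus, mem_gb) shape covering observed peak demand plus headroom.

    peak_cpu_util is a 0-1 utilization fraction observed on the current shape.
    The 20% headroom default is an illustrative assumption.
    """
    needed_vcpus = max(1, math.ceil(peak_cpu_util * current_vcpus * (1 + headroom)))
    needed_mem = math.ceil(peak_mem_gb * (1 + headroom))
    return needed_vcpus, needed_mem

# A VM with 8 vCPUs peaking at 30% CPU and 5 GB of memory in use:
print(rightsize(0.30, 8, 5))  # (3, 6)
```

Run periodically against monitoring data, a loop like this is what turns sub-minute configuration changes into continuous, load-accurate resource shaping rather than a one-time deployment choice.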
In recent years, more and more cybersecurity incidents have taken place as a result of insecure third-party vendors, business associates and contractors. For example, the repercussions of the notorious Target breach from a vulnerable HVAC vendor continue to plague the company today. With sensitive data, trade secrets and intellectual property at risk, hackers can easily leverage a third party’s direct access into a company’s network to break in.
While such incidents may cause significant financial and reputational harm to the first-party business, there is hope. Regulators are instating a growing number of legal requirements that an organization must meet with respect to third-party vendor risk management. As liability and regulations take shape, it is important to assess whether your company currently employs a vendor risk management policy, and, if not, understand how a lack of due diligence poses significant risk on your organization’s overall cybersecurity preparedness.
A vendor management policy is put in place so an organization can tier its vendors based on risk. A policy like this identifies which vendors put the organization most at risk and then expresses which controls the company will implement to lessen this risk. These controls might include rewriting all contracts to ensure vendors meet a certain level of security or implementing an annual inspection.
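A minimal sketch of that tiering idea might look like the following; the scoring factors, weights, and thresholds are illustrative assumptions, not an industry standard:

```python
# Hypothetical risk-tiering rule: vendors with network access or access to
# sensitive data score higher and land in stricter tiers. All weights and
# cutoffs below are invented for illustration.
def tier_vendor(has_network_access, handles_sensitive_data, annual_spend):
    score = 0
    score += 3 if has_network_access else 0
    score += 3 if handles_sensitive_data else 0
    score += 1 if annual_spend > 100_000 else 0
    if score >= 5:
        return "high"    # e.g. contractual security terms plus annual inspection
    if score >= 3:
        return "medium"  # e.g. annual security questionnaire
    return "low"         # e.g. standard contract language only

print(tier_vendor(True, True, 50_000))     # high
print(tier_vendor(True, False, 50_000))    # medium
print(tier_vendor(False, False, 20_000))   # low
```

Note that an HVAC vendor with direct network access, like the one in the Target breach, lands in a stricter tier than its modest contract value alone would suggest.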
One of the more frustrating aspects of analytics is the amount of time it takes to put data in a format that makes it useful. By some estimates, manually making data accessible to an analytics application can consume as much as 80 percent of an analyst’s time. Given the salary analysts command, the cost of prepping data can be considerable.
IBM today announced a partnership with Datawatch under which it will resell Datawatch Monarch, a self-service tool that enables end users to automate much of the data preparation work associated with running an analytics application. In this instance, IBM intends to provide access to Datawatch Monarch to end users making use of the IBM Cognos and IBM Watson Analytics services delivered via the cloud.
Datawatch Monarch makes it possible for an end user to automatically have all the data in a file turned into rows and columns that can be easily consumed by an analytics application. It also makes it possible to join dissimilar data, all of which can be reused across the organization.
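To make the idea concrete, here is a small standard-library sketch of the kind of work such a tool automates: pulling rows out of report-style text and joining them with a second, dissimilar source. This is a generic illustration, not Monarch's actual behavior or API, and the report layout is invented:

```python
import re

# A made-up report in the fixed-layout style many legacy systems emit.
report = """\
Branch: EAST
  ACME Corp      1200.50
  Beta LLC        310.00
Branch: WEST
  Gamma Inc       975.25
"""

rows = []
branch = None
for line in report.splitlines():
    m = re.match(r"Branch:\s+(\w+)", line)
    if m:  # a header line sets context for the detail lines below it
        branch = m.group(1)
        continue
    m = re.match(r"\s+(.+?)\s{2,}([\d.]+)", line)
    if m:  # a detail line becomes one row, tagged with its branch
        rows.append({"branch": branch,
                     "customer": m.group(1),
                     "amount": float(m.group(2))})

# Join with a dissimilar source keyed by branch.
managers = {"EAST": "Lee", "WEST": "Ortiz"}
for r in rows:
    r["manager"] = managers[r["branch"]]

print(rows[0])  # {'branch': 'EAST', 'customer': 'ACME Corp', 'amount': 1200.5, 'manager': 'Lee'}
```

Doing this by hand for every report format is exactly the prep work said to consume up to 80 percent of an analyst's time; self-service tools aim to make the extraction rules reusable across the organization.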
A few short decades ago, safety planning was not considered a priority for the vast majority of corporations. Instead, most incidents and emergencies were handled as they occurred, as effectively as possible given the limited technology resources available at the time.
Today, workplace health & safety departments have evolved into something else entirely: a must-have element of any corporation that wants to maximize occupational health and safety.
To fully understand the importance of corporate safety planning—and to glimpse how much it has changed our modern work environment—you only need to take a quick look at how far it’s come. Let’s take a look at how workplace safety programs have evolved, as well as what worked—and what didn’t:
DUPONT, Wash. – Washington suffered its worst wildfire season in state history in 2015. Raging fires burned more than one million acres of public and private lands. After two straight years of record-breaking wildfires, vast areas of the state face a much greater risk of flash flooding, debris flow and mudslides. But a team effort by all levels of government aims to reduce those threats to public safety.
The team—called the Erosion Threat Assessment/Reduction Team (ETART)—was formed by the Washington Military Department’s Emergency Management Division (EMD) and the Federal Emergency Management Agency (FEMA) after the Carlton Complex Fire of 2014. A new ETART was formed in October 2015 following the federal disaster declaration for the 2015 wildfires.
ETART participants include EMD, FEMA, the U.S. Army Corps of Engineers, the National Weather Service, the Confederated Tribes of the Colville Reservation, the Washington State Conservation Commission, the Washington State Department of Natural Resources, the Spokane, Okanagan and Whatcom conservation districts, and many others.
Led by the Okanogan Conservation District, ETART members measured soil quality, assessed watershed changes, identified downstream risks and developed recommendations to treat burned state, tribal and private lands.
“Without vegetation to soak up rainwater on charred mountainsides, flash floods and debris flows may occur after a drizzle or a downpour,” said Anna Daggett, FEMA’s ETART coordinator. “ETART brings together partners to collaborate on ways to reduce the vulnerability of those downstream homes, businesses and communities.”
Besides seeding, erosion control measures may include debris racks, temporary berms, low-water crossings and sediment retention basins. Other suggestions may include bigger culverts, more rain gauges and warning signs, and improved road drainage systems.
While public health and safety remains the top priority, other values at risk include property, natural resources, fish and wildlife habitats, as well as cultural and heritage sites.
“ETART addresses post-fire dangers and promotes collective action,” said Gary Urbas, EMD’s ETART coordinator. “With experienced partners at the table, we can assess and prioritize projects, then identify potential funding streams to fit each project based on scale, location and other criteria, which may lead to a faster and more cost-effective solution.”
Since the major disaster declaration resulting from wildfire and mudslide damages that occurred Aug. 9 to Sept. 10, 2015, FEMA has obligated more than $2.9 million in Public Assistance grants to Washington. Those funds reimburse eligible applicants in Chelan, Ferry, Lincoln, Okanogan, Pend Oreille, Stevens, Whatcom and Yakima counties, as well as the Confederated Tribes of the Colville Reservation, for at least 75 percent of the costs for debris removal, emergency protective measures, and the repair or restoration of disaster-damaged infrastructure.
After the 2014 Carlton Complex Fire, FEMA provided $2.4 million in Public Assistance grants specifically for ETART-identified projects. Those grants funded erosion control measures that reduced the effects of the 2015 wildfires—such as installing straw wattles, clearing culverts and ditches of debris, shoring up breached pond dams, and seeding and mulching burned lands.
FEMA also offers fire suppression grants, firefighter assistance grants, Hazard Mitigation Grants and National Fire Academy Educational Programs.
Affected jurisdictions, landowners and business owners continue to submit requests for grants, disaster loans, goods, services and technical assistance from local, state and federal sources to recover from the wildfires, protect the watersheds or reduce the risks associated with flooding and other natural hazards.
ETART recently issued its final report, which details its methodology, assessments, debris-flow model maps, activities and recommendations. Completed activities include:
- Compiled and shared multi-agency risk assessments across jurisdictions through a public file-sharing site.
- Developed and disseminated an interagency program guide to assist jurisdictions seeking assistance.
- Transitioned ETART to a long-term standing committee to address threats, improve planning, and resolve policy and coordination issues that may thwart successful response and recovery efforts related to past fires and potential future events.
The “2015 Washington Wildfires Erosion Threat Assessment/Reduction Team Final Report” is available at https://data.femadata.com/Region10/Disasters/DR4243/ETART/Reports/. Visitors to this site may also access “Before, During and After a Wildfire Coordination Guide” developed by ETART.
More information about the PA program is available at www.fema.gov/public-assistance-local-state-tribal-and-non-profit and on the Washington EMD website at http://mil.wa.gov/emergency-management-division/disaster-assistance/public-assistance.
Additional information regarding the federal response to the 2015 wildfire disaster, including funds obligated, is available at www.fema.gov/disaster/4243.
A breach a day is the new norm. In the past 12 months there have been a number of high-profile breaches. Take Sony, for example: the attackers gained control of its entire network and released feature-length movies onto torrent sites for anyone to download freely. It was very high profile at the time, and incredibly damaging. TalkTalk had its customer information dumped onto the internet for anybody to use. Xbox Live was taken down over the Christmas period by a Distributed Denial of Service attack; the hackers did it purely for fun. Famous political figures have also had their public profiles very notably defamed.
These hacks happen everyday. A breach a day is the new norm.
It’s a common phrase you have probably heard throughout your career: A crisis management plan is a living document. It’s a reminder that any crisis plan should be updated continually to reflect a business, its employees and the threats that might impact normal operations.
However, in practice, ensuring your plan is current, and your team is up-to-date, requires a significant investment in time and patience, and can be downright challenging. But if your company makes crisis management a priority, it is possible.
Here are three key ways to ensure your plan and team are always up to date:
Active shooter incidents have become an increasingly significant threat in healthcare and hospital environments. According to an FBI study titled Workplace Violence: Issues in Response, healthcare employees experience the largest number of Type 2 active shooter assaults (assaults on an employee by a customer, patient, or someone else receiving a service). Also, in a 12-year study conducted by Johns Hopkins, hospital-based active shooter incidents in the United States increased from 9 per year in the first half of the study to 16.7 per year in the second half.
Because of the increased active shooter risk that healthcare and hospital facilities face, it is crucial for decision-makers to integrate active shooter preparedness into their workplace violence prevention policy and to provide reality-based training and resources for their staff. Of equal importance is an emergency response procedure and communication strategy. Shooting incidents are unique in hospitals and healthcare settings and they require a clear, concise communication action plan.
With escalating risks and uncertainty around the globe, cities are challenged with understanding and circumventing those risks to stay vital. Much as in the business world, municipalities are moving towards resilience—the capability to survive, adapt and grow no matter what types of stresses are experienced.
Recognizing that they have much to offer each other, communities and businesses are often working together to pool their experience and knowledge. Helping to foster this is a project called the 100 Resilient Cities Challenge, funded by the Rockefeller Foundation. The project has selected 100 cities around the world and provided funding for them to hire a chief resilience officer.
“Resilience is a study of complex systems,” said Charles Rath, president and CEO of Resilient Solutions 21. He spoke about resilience and his experiences with the 100 Resilient Cities Challenge at the recent forum, “Pathways to Resilience,” hosted by the American Security Project and Lloyd’s in Washington, D.C. “To me, resilience is a mechanism that allows us to look at our cities, communities, governments and businesses almost as living organisms—economic systems that are connected to social systems, that are connected to environmental systems and fiscal systems. One area we need to work on is understanding those connections and how these systems work.”
The enterprise has been sitting on a goldmine of valuable information for several decades now, but only recently has it had access to the technology to pull it all together and make sense of it. This is leading to a shift in the way organizations value both data and infrastructure: data is becoming increasingly important to the business model, while distributed cloud architectures and commodity hardware diminish the significance of infrastructure.
But raw data is like unrefined ore: There is potential there, but first it must be retrieved, cleaned, refined and then delivered to those who find it most desirable. For that, you need a top-notch data management platform.
According to a recent study by Veritas, many organizations are still squandering the value of data simply by not having a full understanding of what they have and how it can be utilized. More than 40 percent of data, in fact, hasn’t been accessed in three years. In some instances, this is due to compliance and regulatory issues, but in many cases it can be traced to improper management. Once data enters the archives, it tends to be lost forever even though it may still have value to present-day processes. As well, developer files and compressed files make up about a third of all stored data, even though the projects they supported are long gone. There is also a significant amount of orphaned data, unowned and unclaimed by anyone in the organization, and this is becoming increasingly populated with rich media files like video chats and graphics-heavy presentations.
As demonstrated in events like the 2009 H1N1 influenza pandemic and the Ebola response of 2014, children can be particularly vulnerable in emergency situations. Children are still developing physically, emotionally, and socially and often require different responses to events than adults. With children ages 0 to 17 representing nearly a quarter of the US population, the specific needs of children during planning for natural, accidental, and intentional disasters have become a national priority.
Collaboration is Key
To practice preparedness among first responders, CDC and the American Academy of Pediatrics (AAP) joined forces to host a tabletop exercise on responding to an infectious disease threat at the federal, state, and local levels. Pediatric clinicians and public health representatives within federal Region VI (i.e., the “TALON” states of Texas, Arkansas, Louisiana, Oklahoma, and New Mexico) worked in teams to develop responses to a simulated outbreak of pediatric smallpox. Representatives collaborated to identify potential disease contacts, develop plans for Strategic National Stockpile countermeasure distribution, and communicate effectively with other health leaders to meet pediatric care needs. Children tend to have different exposure risks, need different doses of medications, and have more diverse physical and emotional needs than adults during a public health emergency. This training exercise served as a model to increase the focus on the unique needs of children in emergency preparedness and response activities.
Bringing health professionals from different backgrounds together demonstrated how building connections during public health emergencies can improve response efforts and save lives. The day-long exercise gave participants the opportunity to see different problem-solving skills and unique viewpoints that other responders brought to the scenario.
One participant in the exercise, Curtis Knoles, MD, FAAP, commented, “The exercise gave a good understanding of next steps we need to take; identify all the players involved with the pediatrics community and get them tied into the state department of health.”
Practice like the Pros at Home
While the tabletop exercise focused on emergency planning and response on a broad level, there are many ways you can practice keeping your children safe during an emergency, too. Check out the list below for resources and ideas on how you can keep your family prepared!
- Make creating your emergency kit fun—let your kids pick out some snacks and games to include! Be sure to have a kit at home and in the car!
- Get your kids involved with emergency preparedness with Ready Wrigley games, coloring pages, and checklists
- Make and practice plans for where to go and how to communicate in case of an emergency
The Cyber Kill Chain describes the different stages of an attack, from initial reconnaissance to objective completion. In this article Richard Cassidy describes the different elements of the Cyber Kill Chain and how to use it.
Today’s attackers are becoming increasingly sophisticated, using advanced techniques to infiltrate a business’s environment. Unlike in the past when hackers primarily worked alone using ‘smash-and-grab’ techniques, today’s attackers prefer to work in groups, with each member bringing his or her own expertise. With highly skilled players in place, these groups are able to approach infiltration in a much more regimented way, following a defined process that enables them to evade detection and achieve their ultimate goal: turning sensitive, valuable data into a profit. With attackers ready to pounce on any business at any moment, how can businesses stay ahead and ensure their sensitive data remains safe? Most attacks follow a ‘process’ that identifies attackers’ behaviours, ranging from researching, to launching an attack and ultimately to data exfiltration: this is articulated as the ‘Cyber Kill Chain’.
The Cyber Kill Chain was developed by Lockheed Martin’s Computer Incident Response Team and describes the different stages of an attack, from initial reconnaissance to objective completion. This representation of the attack flow has been widely adopted by organizations to help them approach their defence strategies in the same way attackers approach infiltrating their businesses. As malicious activity continues to threaten sensitive data — whether it is personal data or company sensitive data — one certainty remains: attackers will continue to exploit weakness to infiltrate systems and extract data that they can turn into money. The best opportunity to get ahead of the hacker is to understand the steps he or she will go through, his or her motivations and techniques, and to build a security strategy around them.
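The published model breaks an intrusion into seven sequential stages, and a common defensive use is to tag observed alerts against them. As a minimal sketch (the stage names follow the Lockheed Martin model; the helper function and sample alerts are purely illustrative):

```python
from enum import IntEnum

class KillChainStage(IntEnum):
    # The seven stages of the Lockheed Martin Cyber Kill Chain, in order.
    RECONNAISSANCE = 1          # researching the target
    WEAPONIZATION = 2           # pairing an exploit with a payload
    DELIVERY = 3                # transmitting the weapon (e.g. a phishing email)
    EXPLOITATION = 4            # triggering the vulnerability
    INSTALLATION = 5            # installing malware on the victim
    COMMAND_AND_CONTROL = 6     # opening a channel back to the attacker
    ACTIONS_ON_OBJECTIVES = 7   # e.g. data exfiltration

def earliest_stage(observed: list[KillChainStage]) -> KillChainStage:
    """Hypothetical helper: the earliest stage seen in the alert stream is
    where defenders can break the chain with the least damage done."""
    return min(observed)

alerts = [KillChainStage.COMMAND_AND_CONTROL, KillChainStage.DELIVERY]
print(earliest_stage(alerts).name)  # DELIVERY
```

The ordering matters: detecting activity at Delivery rather than Command and Control means the defence can intervene before anything is installed.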
Nick Lowe explores how current security measures against bulk data theft from organizations are broken, and how they can be fixed.
Another year, and another round of large-scale data breaches has started. We were barely a week into 2016 when Time Warner was forced to announce a breach of up to 320,000 users’ email account passwords; this followed 2015’s mega-breaches at organizations such as Ashley Madison, the US Government’s Office of Personnel Management, toy maker Vtech and many others.
Despite the scale of these ongoing data losses, and the reputational damage and remediation costs they cause, the methods for enterprise-level protection of bulk passwords and personally identifiable information (PII) have remained fundamentally unchanged over the past 20 years. And it’s evident that these approaches are simply not effective in preventing breaches.
A majority of data thefts are carried out against an organization’s bulk file storage. This is because once a successful attack is executed, whether via a social engineering exploit to gain administrator credentials, malware installation, or a privilege-escalation attack using known software flaws, the theft itself can be done remarkably quickly. A million username/password pairs may be stolen in just 60 seconds.
Don’t put all your eggs in one basket, or so the saying goes. When it comes to phone system resilience, this would seem to be sound advice. After all, phone availability is critical for many organisations and relying on just one solution to guarantee that availability would be foolhardy. However, single points of failure may lie in wait for the unwary, even in situations as simple as putting in a toll free number for use in an emergency.
Most data center servers operate at only 12 to 18 percent of their capacity, yet many companies aren’t taking advantage of the cost-saving potential offered by data center consolidation. Consider this: in the last five years, the US government saved nearly $2 billion by consolidating data centers. Companies like Microsoft, HPE, and IBM have likewise saved billions.
In an effort to cut costs and regain control of the data center environment, IT managers are asking that their environments be consolidated and made more efficient. The conversation revolves around aligning IT with business needs, which today often means greater IT agility. Managers and executives are trying to drive down cost and in doing so have prioritized data center consolidation and migration projects.
In creating a consolidation or data center migration plan, high-density server equipment, applications, virtualization technology, and end-user considerations all fall under the general scope.
Given the sensitivity of the data stored in customer relationship management (CRM) applications, it should come as no surprise that there is a lot of concern over how to secure that data. To address that issue, Salesforce today extended a security policy engine service that now makes it possible to limit who gets to see which data stored in its applications in real time.
Seema Kumar, senior director of product marketing for Salesforce, says the Transaction Security service is an extension of Salesforce Shield that makes use of new event monitoring tools that IT organizations can then use to either block entirely or simply generate an alert when a user tries to access a certain type of data without permission. The IT organization can use Salesforce Shield to determine the specific action across a broad set of data.
In addition, Kumar says Salesforce will soon extend this capability to not only its own applications, but all the applications that tap into the same customer records stored in the Salesforce cloud ecosystem.
Is customer-facing breach notification and response a part of your incident response plan? It should be! This is the part where you notify people that their information has been compromised, communicate to employees and the public about what happened and set the tone for recovery. It's more art than science, with different factors that influence what and how you do the notification and response. Unfortunately, many firms treat breach notification as an afterthought or only as a compliance obligation, missing out on an opportunity to reassure and make things right with their customers at a critical time when a breach has damaged customer trust.
At RSA Conference last week, I moderated a panel discussion with three industry experts (Bo Holland of AllClear ID, Lisa Sotto of Hunton & Williams, and Matt Prevost of Chubb) who offered their insights into what to do, how to do it, and how to pay for it and offset the risk as it relates to breach notification and response. Highlights from the discussion:
(TNS) - For some 25 volunteers the objective Saturday morning was equal parts simple and perplexing: find "Joe," or maybe it's "Bob."
Jackson County Search and Rescue manager Mark Mihaljevich was purposely vague with details to the volunteers completing Search and Rescue Academy training. Joe's an elderly man, they don't know his last name, he's wearing a hunting vest and a hat, but they don't know what color.
In actuality, Joe is a duffel bag hidden somewhere on the rural county-owned Givan property off Agate Road, but the unclear details the search and rescue volunteers were given is a common beginning to a missing persons investigation.
"This is typical," instructor Micki Evans said.
(TNS) - Local schools face tough choices on how much security is appropriate as last week’s shooting in Madison Twp. brought a nationwide issue close to home for the first time.
The challenge for schools is how far to go on a continuum with tons of options. More locks? More cameras? More guards? More drills? Adding metal detectors? Arming school staff? There’s no way to make everyone happy, as there are parents who support and oppose each of those steps.
“It’s a tough spot for schools and it comes down to one word — reasonableness. What is reasonable to reduce risk?” said Ken Trump, a national school safety consultant. “The majority of parents want safe schools, want risks reduced, want genuine preparedness.
Over the last ten to twenty years, we have witnessed the expansion of federal criminal prosecution of health and safety matters. Environmental and food and drug regulatory enforcement has been supplemented by aggressive criminal enforcement.
In the last few years, we have seen some landmark criminal cases involving companies and executives for food safety violations. Compliance programs in these high-risk industries can literally be a matter of life and death. Judges are handing out tough criminal sentences when warranted.
Each week we hear about the outbreak of a new foodborne illness. Weeks after that, we then usually hear about a criminal investigation against the company and sometimes individual executives.
I’ve often run into people who have to ‘send an email’ with a question to a person sitting just a few seats away. Are they afraid of that person? Why can’t they just get up and go see them for a couple of minutes to ask what they need to ask? It seems the art of face-to-face communication is disappearing in favor of CYA (Cover Your A…) and audit concerns. If it’s not written down then it can’t be true. What have we done to ourselves?
This happens a lot when it comes to developing strategies for Business Continuity Management (BCM) and other contingency related initiatives. We don’t go and ask people; we develop questionnaires – sent by snail mail or email – or we purchase an expensive online tool, fill it with questions that get interpreted in a myriad of ways, and expect recipients to respond in a timely and comprehensive manner. Huh!
Mergers generally fail and large mergers generally fail spectacularly, so I get why many of my peers think the Dell/EMC merger will be a train wreck. They also thought Dell couldn’t be taken private because, generally, for a company like Dell, the path would be virtually impossible particularly if you had a corporate raider like Carl Icahn working against you.
But here’s the thing: I’ve spent a lot of time looking at merger processes. I ran a merger clean-up team when I was at IBM (and I was really busy), and I’ve looked at Dell’s process in depth, one that was initially developed at IBM but refined at Dell. I learned there is nothing like it. Granted, a large merger will stress any process but, given EMC’s structure and Dell’s approach, there should be little customer impact for 12 to 18 months, and much of that initial impact should be positive.
In most every other large merger, there would be a reason to run for the hills, largely because most large companies don’t want to learn from their mistakes and would rather focus on shooting the people that made them. But Dell is very different. It actually has an incredibly successful merger process that, for some screwy reason, no one else seems to want to emulate.
I’ll compare the HP/Compaq merger that I thought was idiotic to the Dell/EMC merger, so you get a sense of what makes this different.
US government agencies are no longer allowed to build or expand data centers unless they prove to the Office of the Federal CIO that it’s absolutely necessary, according to a new memo released by the White House’s Office of Management and Budget.
The new Data Center Optimization Initiative replaces the now six-year-old Federal Data Center Consolidation Initiative and has much stricter goals and additional rules meant to reduce the government’s sprawling data center inventory and the amount of money it takes to maintain it.
The government spent about $5.4 billion on physical data centers in fiscal year 2014. The new initiative’s goals are to reduce data center spending by $270 million in 2016, by $460 million in 2017, and by $630 million in 2018, for a total of $1.36 billion in savings over the next three years.
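The stated annual targets do add up to the stated total, which is easy to verify:

```python
# Annual data center savings targets under the Data Center Optimization
# Initiative, in millions of US dollars (figures from the OMB memo).
targets = {2016: 270, 2017: 460, 2018: 630}

total_millions = sum(targets.values())
print(total_millions)  # 1360, i.e. $1.36 billion over three years
```

That is a roughly 25 percent reduction against the $5.4 billion spent on physical data centers in fiscal year 2014, cumulated over the three-year window.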
The current approach to business continuity, which generally focusses on ‘what could happen’, has significant limitations says Graham Goodenough. In this article he explains why this is the case; and suggests a better, more positive, method.
The use of the term ‘resilient enterprise’, as expressed in this article, applies to a business that has been purposely designed with the ability to adapt to significant increases, or decreases, in production/service demands from the market it serves, and which can adjust to those demands within an acceptable time frame that is not financially detrimental to the business. Establishing this ability for critical activities to respond, both in normal operations and during unplanned disruptions, will provide flexibility within the organization, enabling capacity to be delivered as needed and business income to be maintained, whatever the cause of disruption.
In the second article in a three-part series exploring ‘people and resilience’, Paul Kudray looks at a common misconception: that when disaster strikes employees will automatically rally round and play their part in helping the organization recover.
I’m sure you’re familiar with the phrase: “I hate my job!” You may even have used it: possibly on more than one occasion.
You and I know there are people who have dream jobs; they work in their favourite place, doing the things they love to do, and they even have great bosses! Yes, it happens!
The employers they work for may even have a great resilience plan. Everyone in the organization may be aware of it and each person may know what to do when the proverbial hits the fan. In short it’s a fantastic resilient organization, based around the people who make it work.
Businesses often overlook the usefulness of service management tools that they already have at their fingertips as a way to streamline and effectively manage internal risk processes. Dean Coleman looks at some practical steps that businesses can take to utilise these for effective IT risk management.
IT is playing an increasingly prominent role within every organization and IT service managers need to be keenly aware of the importance of risk management to ensure they have control and influence over any issues likely to get in the way of the smooth running of the business. Technology is now so pivotal to the healthy running of the majority of companies that IT risk management has become a key discussion point on the corporate agenda of many boardrooms, as downtime of critical systems – whether due to accidental or malicious intention – threatens to undermine the productivity of the entire organization. Yet, despite its importance, many organizations still use manual spreadsheets to manage risk which are not dynamically linked to IT real estate, so lack any ability to equate theoretical IT risk with the actual situation on the ground.
Businesses often overlook the usefulness of service management tools that they already have at their fingertips as a way to streamline and effectively manage internal risk processes. Many service management tools are likely to already have a database of IT assets and users, so it makes sense to link IT risk management to your overall service management capabilities. That being so, what are the practical steps that businesses can take to wrest back control of their IT assets and ensure that problems in one area of the business don’t have a knock-on effect on other functions?
The Business Continuity Institute’s annual North America business continuity and resilience awards will be presented at a ceremony on March 15 at DRJ Spring World 2016 in Orlando. The shortlist of finalists is as follows:
Continuity and Resilience Consultant
Suzanne Bernier MBCI, President of SB Crisis Consulting
Christopher Duffy, Strategic BCP
Christopher Rivera MBCI, Lootok, Ltd
Continuity and Resilience Professional (Private Sector)
Pauline Williams-Banta, Business Continuity Manager, The Energy Authority
Aaron Miller MBCI, VP/Director of Business Continuity, Fulton Financial Corporation
Linda Laun, Chief Continuity Architect, IBM
Continuity and Resilience Newcomer
Bradley Hove AMBCI, Consultant, Emergency Response Management Consulting Ltd
Greg Greenwald, BCM Consultant, Lootok, Ltd
Bryan Weisbard, Head of Threat Intelligence, Investigations & Business Continuity, Twitter
Continuity and Resilience Professional (Public Sector)
Nina White, Business Continuity Manager, Talmer Bank and Trust
Ira Tannenbaum, Assistant Commissioner for Public/Private Initiatives, New York City Office of Emergency Management
Continuity and Resilience Team
Aon Business Continuity Team, Global/Americas Team
The Devry Online Service (DOS) Core Business Continuity Team
Aon’s Global Business Continuity Management Team
Health Partners Plan (HPP) Business Continuity Team
CBRE Business Continuity Management Team – Americas
Continuity and Resilience Provider (Service/Product)
Premier Continuum Inc ParaSolution BCM Software
Fusion Risk Management Inc, and the Fusion Framework BCM Software
AtHoc, a division of Blackberry
Strategic BCP® ResilienceONE® BCM Software
Continuity and Resilience Innovation
The Everbridge platform
Mars, Resiliency Summits, #WeGotThis, BCM Portal
Fairchild Consulting, FairchildApp
Most Effective Recovery
Aon Global Business Continuity Management Team – Americas
Frank Leonetti FBCI
Howard Mannella MBCI
Brian Zawada FBCI
This year’s international Business Continuity Awareness Week is taking place from 16th-20th May 2016 and a set of four posters for promoting it is now available.
The theme for BCAW 2016 is ‘return on investment’, so all four posters display the message ‘Discover the value of business continuity’.
The posters are free to download either as a PDF in various shapes and sizes, or as a JPG. They are also available with or without bleeds depending on whether you would like to print from your own computer, or you would like to get them professionally printed. The BCI also encourages sharing of the image versions through social media channels to spread the message.
NORTH LITTLE ROCK – Arkansas residents who have registered with FEMA for disaster aid are urged by recovery officials to “stay in touch.” It’s the best way to get answers and resolve potential issues that might result in assistance being denied.
“Putting your life back together after a disaster is difficult,” said John Long, federal coordinating officer for FEMA. “While the process of getting help from FEMA is intended to be simple, it’s easy to understand how sometimes providing important information is overlooked or missed.”
Residents of Benton, Carroll, Crawford, Faulkner, Jackson, Jefferson, Lee, Little River, Perry, Sebastian and Sevier counties affected by the severe storms of Dec. 26 – Jan. 22, 2016, may be eligible for disaster assistance and are encouraged to register with FEMA.
After registering, it’s important to keep open the lines of communication. “It’s a two-way street,” said Long. “FEMA can’t offer assistance to survivors who – for whatever reason – have not provided all the necessary information.”
After registering with FEMA, applicants will receive notice by mail within 10 days on whether or not they qualify for federal disaster assistance.
- If eligible, the letter explains how much the grant will be, and how it is intended to be used.
- If ineligible – or if the grant amount reads “0” – you may still qualify. The denial may just mean the application is missing information or that you missed an appointment with an inspector.
Applicants who are denied assistance may call the Helpline to understand why, or go online to www.disasterassistance.gov or m.fema.gov. Becoming eligible for assistance may be as simple as supplying missing paperwork or providing additional information.
FEMA looks at a number of things to determine if a survivor will receive disaster assistance. The agency must be able to:
- Verify an applicant’s identity.
- Verify damages. If you believe the inspector didn’t see all of your damages, call the FEMA Helpline at 1-800-621-3362.
- Verify home occupancy. Applicants need to provide proof of occupancy such as a utility bill.
- Collect insurance information.
“FEMA personnel are here to help,” said Scott Bass, state coordinating officer with the Arkansas Department of Emergency Management. “Keep in touch. Use the Helpline. You’ll get answers to your questions and help with understanding the assistance process, and ways to move your personal recovery forward.”
To register for assistance:
- call 800-621-3362 (FEMA). If you are deaf, hard-of-hearing or have a speech disability and use a TTY, call 800-462-7585. If you use 711-Relay or Voice Relay Services, call 800-621-3362; or
- go to www.DisasterAssistance.gov
The toll-free telephone numbers will operate from 7 a.m. to 10 p.m. seven days a week. Multilingual operators are available.
# # #
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
We’re reading an item of interest from across the pond where the United Kingdom’s Institute of Directors (IoD) has issued a new report that gives insight into how companies tend to react if they are under a cyber attack.
The IoD study, supported by Barclays, revealed that most companies keep quiet, with under one third (28 percent) of cyber attacks reported to the police.
This is despite the fact that half (49 percent) of cyber attacks resulted in interruption of business operations, the IoD noted.
(TNS) - Illinois State University became the first university in Central Illinois and the second outside of northern Illinois to be designated as a “StormReady University” by the National Weather Service.
To earn the designation, the university had to meet seven criteria involving preparation to respond to severe weather conditions and weather emergencies, explained Chris Miller, warning coordination meteorologist with the National Weather Service office in Lincoln.
These included having designated storm shelters, multiple methods for issuing warnings, trained weather spotters and formal, written emergency plans that are tested.
A survey conducted by Lockheed Martin and the Government Business Council finds reasons to be hopeful about federal IT and challenges that need to be addressed
During Tuesday's House Judiciary Committee hearing on the challenge of balancing privacy with public safety, FBI Director James Comey faced skepticism about whether the agency had really fully explored how it might access an encrypted iPhone, currently the focus of a legal battle between the US government and Apple.
Though Comey insisted the FBI had sought assistance from other government agencies with cybersecurity expertise, not everyone was convinced.
Worcester Polytechnic Institute professor Susan Landau, in prepared remarks, said that law enforcement agencies should modernize their investigatory capabilities rather than relying on the assistance of the courts.
There is an old saying that there are two things certain in life: death and taxes. I would like to add a third one–data security breaches. The Identity Theft Resource Center (ITRC) defines a data security breach as “an incident in which an individual name plus a Social Security, driver’s license number, medical record or financial records (credit/debit cards included) is potentially put at risk because of exposure.” The ITRC reports that 717 data breaches have occurred this year exposing over 176 million records.
On the surface, finding a pattern across all such breaches may appear daunting considering how varied the targeted companies are. However, the ITRC argues that the impacted organizations are similar in that all of the data security breaches contained “personally identifiable information (PII) in a format easily read by thieves, in other words, not encrypted.” Based on my experience, I’d expect that a significant portion of the data breaches compromised data in on-premises systems. Being forced to realize the vulnerability of on-premises systems, organizations are beginning to rethink their cloud strategy.
For example, Tara Seals declares in her recent Infosecurity Magazine article that “despite cloud security fears, the ongoing epidemic of data breaches is likely to simply push more enterprises towards the cloud.” Is the move to the cloud simply a temporary, knee-jerk reaction to the growing trend in security breaches or are we witnessing a permanent shift towards the cloud? Some industry experts conclude that a permanent shift is happening. Tim Jennings from Ovum, for example, believes that a driving force behind enterprises’ move to the cloud is that they lack the in-house security expertise to deal with today’s threats and highly motivated bad actors. Perhaps the headline from the Onion, which declares “China Unable To Recruit Hackers Fast Enough To Keep Up With Vulnerabilities In U.S. Security Systems” is not so funny after all.
In the latest edition of the Business Continuity Institute's Working Paper Series, Rudy Muls MBCI draws from his extensive experience to relate cyber resilience to its implications on business continuity practice. He further demonstrates possible opportunities for business continuity professionals to collaborate with their information security counterparts.
Cyber resilience is a topic of interest among practitioners as evidenced by the wealth of research on the subject. The BCI's most recent Horizon Scan Report revealed that cyber attacks and data breaches top the list of threats practitioners are most concerned about. The results of a global survey showed that 85% and 80% respectively expressed concern about the prospect of these threats materialising.
The paper concludes that there must be greater coordination and collaboration between those working in business continuity and information security, going as far as to say there could even be integration between the two functions. Furthermore, there should be more exercises to make staff and management aware of the cyber risk and how to react to incidents, as the involvement of all lines and areas of business management early in the incident management process is very important.
To download your free copy of ‘Digital business requires digital business continuity’, click here.
During military service (Reserve Captain) and as a voluntary fireman, Rudy Muls MBCI has gained a wealth of experience in crisis situations, and provided training in rescue and life saving techniques. During his professional career within an international financial institution he has been employed in different IT related positions, the most fulfilling of which started in 2010 when he was able to combine all his experience as business continuity manager and information security officer.
Think you already have enough on your plate, dealing with Wi-Fi and other network security in your organisation? You may have to add lighting to the list as well. A French start-up, Oledcomm, has been developing Internet by light, cunningly christened (you guessed it) Li-Fi. The technology is based on the concept of light flashes from an LED, rather like Morse Code on steroids. According to its inventors, Li-Fi also has at least two sizable advantages in terms of connectivity that, hopefully, will not be undermined by the existence of yet another attack vector.
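The “Morse code on steroids” description refers to on-off keying: data is encoded as LED flashes far too fast for the eye to perceive. A toy round-trip sketch of the idea (purely illustrative, nothing like a real Li-Fi modulation scheme):

```python
def to_light_pulses(message: str) -> list[int]:
    """Encode each byte of the message as 8 on/off LED states
    (1 = light on, 0 = light off)."""
    return [int(bit) for byte in message.encode() for bit in f"{byte:08b}"]

def from_light_pulses(pulses: list[int]) -> str:
    """Decode the pulse train back into text, 8 pulses per byte."""
    chunks = [pulses[i:i + 8] for i in range(0, len(pulses), 8)]
    return bytes(int("".join(map(str, c)), 2) for c in chunks).decode()

pulses = to_light_pulses("Li-Fi")
print(from_light_pulses(pulses))  # Li-Fi
```

Because light does not pass through walls, this kind of channel is naturally confined to a room, which is part of the security appeal the article alludes to.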
It takes a long time and uses up a lot of expensive bandwidth to push 100 terabytes of data across a Wide Area Network.
Amazon’s answer to moving those kinds of data volumes from customer data centers to its cloud data centers has been to ship its customers high-capacity storage servers. The customer uploads their data to the server, which then gets shipped back to Amazon for upload to the cloud.
Amazon announced the service last year. Today, the company started offering the same service, but in reverse. If a customer has accumulated a lot of data in their AWS environment and wants to move it elsewhere, Amazon will put it on its Snowball data shipping servers and ship them to the customer.
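Rough transfer-time arithmetic shows why shipping disks can beat the network. A quick back-of-the-envelope calculation (the link speed and efficiency figures are illustrative assumptions, not AWS numbers):

```python
def wan_transfer_days(terabytes: float, link_gbps: float,
                      efficiency: float = 0.8) -> float:
    """Days needed to push `terabytes` of data over a `link_gbps` link,
    assuming only `efficiency` of the raw bandwidth is usable."""
    bits = terabytes * 1e12 * 8                    # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86_400                        # seconds per day

days = wan_transfer_days(100, 1.0)  # 100 TB over a dedicated 1 Gbps link
print(round(days, 1))  # 11.6
```

Nearly two weeks of a saturated 1 Gbps link versus a few days of courier shipping makes the economics of the Snowball approach clear in both directions.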
Your data center is alive.
It is a living, breathing, and sometimes even growing entity that constantly must adapt to change. The length of its life depends on use, design, build, and operation.
Equipment will be replaced, changed, and may be modified to best equip your specific data center’s individual specification to balance the total cost of ownership with risk and redundancy measures.
Just as with a human being, the individual care and love you show your data center can lengthen the life of your partnership.
The countdown has begun for Business Continuity Awareness Week (16-20 May 2016). We are only a few months away, and now we have published the posters that will be used to promote the week. The theme for BCAW this year is ‘return on investment’, so all four posters display the message ‘Discover the value of business continuity’, as ultimately we want to get the message across that business continuity can have benefits other than the obvious returns when disaster strikes.
The posters are free to download either as a PDF in various shapes and sizes, or as a JPG. They are also available with or without bleeds depending on whether you would like to print from your own computer, or you would like to get them professionally printed. Make sure you display these posters prominently in your workplace or any other suitable location, and share the image versions through your social media channels to really spread the message.
Business Continuity Awareness Week is your opportunity to help raise awareness of business continuity and highlight the value of your profession, so make sure you get involved. Ways you can take part include, but are not limited to: hosting a webinar, publishing a paper, recording a video, or writing a blog. All of which should demonstrate the theme for the week.
As an added incentive, all those who post a blog on the BC Eye blog site will be entered into a prize draw to win £250 worth of Amazon vouchers.
Cloud computing offers a wide range of solutions to companies, and online backup is one of the best: It keeps important data safe from disruptions and disasters, and provides a way to keep applications and data off-site in a highly secured environment.
There are great advantages to using backup technology, such as automation functionality and encrypted data. Some business experts state that the cloud is not a secure place for important data; however, online backup services encrypt data to keep it safe. Conversely, external hard drive storage is not secure, and could be stolen or misplaced. Online backup is also reasonably priced, giving companies a cost-effective way to keep important files and documents safe from disarray and disaster.
When data center operators examine data center cost, they generally look at high-level metrics, such as gigabytes of storage or Power Usage Effectiveness. These do matter of course, but to get to the real cost, you have to zero in on lower-level components.
Do you know how much the flash drives on your servers cost? How about the CPUs or DRAM cards? A different vendor supplies each of those components, and together they make a big difference in the total cost of ownership of every data center.
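As a back-of-the-envelope illustration of why component-level costs matter, the line items can be rolled up into a per-server total cost of ownership figure. A minimal sketch, with entirely hypothetical prices and quantities (not real vendor data):

```python
# Toy per-server TCO roll-up. All figures are illustrative, not real pricing.
components = {          # hypothetical unit prices, USD
    "flash_drive": 400,
    "cpu": 900,
    "dram_card": 150,
}
quantities = {"flash_drive": 4, "cpu": 2, "dram_card": 8}

# Acquisition cost: price x quantity for each component.
hardware_cost = sum(components[c] * quantities[c] for c in components)

# Operating cost: average draw over the server's service life.
power_watts = 250           # assumed average draw
kwh_price = 0.10            # assumed USD per kWh
years = 5
energy_cost = power_watts / 1000 * 24 * 365 * years * kwh_price

tco = hardware_cost + energy_cost
print(f"hardware: ${hardware_cost}, energy over {years}y: ${energy_cost:.0f}, TCO: ${tco:.0f}")
```

Even this toy model shows the point: swapping one component supplier or shaving average power draw changes the bottom line across every server in the fleet.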
Web-scale data center operators like Google and Facebook learned this lesson long ago. For years, they have been re-examining each individual component of their IT gear, looking for ways to get it cheaper.
Whether they love police, hate police, or anything in between, most community members want to know more about police. Police themselves, on the other hand, are hesitant to share information, for good reason, at least most of the time. The key to successful sharing, especially with tools collectively known as social media, is to find the balance between letting people have more information and not giving out so much it causes problems.
Generally, when talking about community engagement in social media, the typical advice is to follow people, provide good content, answer questions, be transparent, and so on. These basics are important, essential even. But where the rubber meets the road is what lies a mile or so beyond the next curve.
(TNS) - A new study has provided the first evidence that the Zika virus may be the cause for a spike in cases of a severe neurological disorder called the Guillain-Barré syndrome (GBS).
The study, published in the medical journal Lancet, showed 42 patients developed symptoms of GBS, which causes the immune system to attack parts of the nervous system.
The neurological symptoms include acute motor axonal neuropathy, which is characterised by severe paralysis. About a third of the patients also developed respiratory problems and needed medical assistance to breathe properly, the report said.
However, none of the patients died.
A study published Thursday confirmed that the 100,000 tons of methane that flowed out of Aliso Canyon was the largest natural gas leak disaster to be recorded in the United States, and that it doubled the methane emission rate of the entire Los Angeles basin.
Researchers with the University of California's Irvine and Davis campuses, along with the National Oceanic and Atmospheric Administration (NOAA), found that during the peak of the leak, "enough methane poured into the air every day to fill a balloon the size of the Rose Bowl."
University officials called it a first-of-its-kind study on the gas leak, published in the journal Science.
"The methane releases were extraordinarily high, the highest we've seen," said UCI atmospheric chemist Donald Blake in a statement. Blake, who has measured air pollutants worldwide for more than 30 years, collected surface air samples near homes in Porter Ranch.
The growing complexity of today’s enterprise computing environment means critical corporate data is stored in increasingly fragmented and heterogeneous infrastructures. Ensuring all this decentralized data is backed up in case of breach or disaster is a major cause of anxiety for both business executives and senior IT professionals.
That’s because comprehensive data protection is really not core to most people’s jobs – most of you have other things to worry about, and you just hope and pray that the systems you’ve implemented have backed up your data and will recover it in case of a disaster. But you’ve got your fingers crossed because you’re really not that confident that they will.
According to Jason Buffington, principal analyst for data protection at ESG, improving data backup and recovery systems has been a top-five IT priority and area of investment for the past several years. That’s because continually evolving computing infrastructures and production platforms are forcing companies to reexamine their data protection strategies. “When an organization goes from 30 percent virtualized to 70 percent, or from on-premises email servers to Office 365 in the cloud, these evolutions to your infrastructure drive the need to redefine your data protection strategy,” says Buffington. “Legacy approaches for data protection can’t protect all of the data in these more complex environments.”
Extending security to mobile devices and increasing the resilience of the enterprise against hackers are the two big moves Hewlett-Packard Enterprise will be announcing today at the RSA Conference in San Francisco.
The announcements mark a change of thinking at HPE, as the company wants to do a better job of weaving security into its service offerings and of responding to security issues "at machine speed," according to Chandra Rangan, vice president of marketing for HPE Security Products.
The company redefined the issues of today's threat landscape in its HPE Security Research Cyber Risk Report 2016. Looking at mobility threats, HPE used its Fortify on Demand threat assessment tool to scan more than 36,000 iOS and Android apps for needless data collection. Nearly half the apps logged geolocation even though they didn't need to, and nearly half of all game and weather apps collected appointment data that they likewise did not need. Analytics frameworks used in 60% of all mobile apps can store information that is vulnerable to hacking, and logging methods can also expose data.
(TNS) - For Harvey County Sheriff T. Walton and Community Chaplain Jason Reynolds, the past four days have been a blur.
While Walton was tasked with responding to a very dangerous situation, Reynolds was tasked with supporting first responders like Walton and all the others who showed up immediately at the mass shooting at Hesston’s Excel Industries, where four people, including the shooter, were killed Thursday and 14 others injured.
Finally, Monday was an opportunity for the two men to sit side-by-side and speak briefly of what they experienced.
For Walton, the tragedy began unfolding as he learned of a shooting victim near 12th and Meridian in Newton. As he was dealing with that incident, another 911 call came through.
“Everyone is coming to me and I hear of more shootings on the radio. I am trying to figure this out,” Walton said.
Why do we have business continuity management programmes? Is it because we want to make sure our organizations are able to respond to a disruption? Probably yes! It is common sense that we would want to be prepared for any future crisis.
In some cases however, it is also because there is a legal obligation to do so. Many organizations are tightly regulated depending on what sector they are in or the country they are based in, and therefore must have plans in place to deal with certain situations. Furthermore, the rules and regulations that govern us are often being revised, and sometimes it can be difficult to keep up with which ones are applicable.
There is a solution however. The Business Continuity Institute has published what it believes to be the most comprehensive list of legislation, regulations, standards and guidelines in the field of business continuity management. This list was put together based on information provided by the members of the Institute from all across the world. Some of the items may not relate directly to BCM, and should not be interpreted as being specifically designed for the industry, but rather they contain sections that could be useful to a BCM professional.
The ‘BCM Legislations, Regulations, Standards and Good Practice’ document breaks the list down by country and for each entry provides a brief summary of what the regulation entails, which industries it applies to, what the legal status of it is, who has authority for it and, of course, a link to the full document itself.
Looking to make it simpler and less expensive to back up data, Oracle today unveiled an update to Oracle StorageTek Virtual Storage Manager System software that enables Oracle customers to back up data and archive directly into the Oracle cloud.
Steve Zivanic, vice president of the Storage Business Group at Oracle, says version 7.0 of StorageTek Virtual Storage Manager System makes it possible for IT organizations to back up and archive data from both mainframes and distributed systems to a common public cloud. In the case of the mainframe in particular, the cost savings associated with not having to locally back up data on to a mainframe platform are substantial, says Zivanic.
With more data than ever being generated by mobile computing devices, securing that information has become a major challenge for IT organizations that often don’t control either the endpoint or even the network being used to transmit data.
At the RSA Security 2016 conference today, Hewlett-Packard Enterprise (HPE) moved to address that issue with the release of HPE SecureData Mobile, a solution that extends HPE encryption software to devices running Apple iOS and Google Android operating systems.
Chandra Rangan, vice president of marketing for HPE Security, says that given the lack of control most IT organizations have over mobile computing, it’s imperative that they find a way to encrypt data both at rest and in motion. In fact, a scan of 36,000 Apple iOS and Google Android applications conducted by HPE found that many of these applications routinely collect geolocation and calendar data. That information, notes Rangan, can in turn be used by hackers to enable all kinds of socially engineered attacks. In fact, the desire to get at that data helps explain why 10,000 new Android threats were discovered each day in 2015. And while Apple iOS devices benefit from being on a closed network, the number of malware exploits aimed at Apple iOS rose 230 percent in 2015.
If you are wondering whether a mobile solution would be right for your crisis management plan, start with a look at how much business life has changed in recent years. Then ask whether your organization is keeping up or lagging behind when it comes to crisis planning.
In the past, it was sufficient to add crisis plans and emergency instructions to company intranets or send them by email. That was a huge improvement over handing executives in the company a binder with the plans.
But now we are well into the twenty-first century, and the whole concept of crisis management has evolved. Beyond planning for fires, floods, and strikes, organizations must prepare to cope with workplace violence, terrorist attacks, epidemics, data loss, data breaches, reputation damage, and a host of other possibilities that were not even thought about twenty or thirty years ago. Some of these crises will occur with no warning, and reach catastrophic levels in minutes or hours.
Over the many years I’ve been working in a clean room, I’ve grown quite familiar with hard drives and the many pros and cons they present. Generally speaking, hard drives can be a pretty resilient medium when used correctly, and a technology I confidently use for storing my personal files. However, I know bad things can happen to good data, as I have witnessed countless instances of damage and failure in these devices that lead to data loss.
In this post I will focus on physical issues in hard drives (HDDs) as the problems faced by this technology are completely different from those experienced by other alternatives available in the market, such as solid state drives (SSD).
Buyers, beware! While a car with one careful previous owner (we’ve all heard that one, right?) may still be a viable purchase proposition, somebody else’s security may be ill-suited to your organisation. Second-hand security can crop up in situations like company mergers and acquisitions. One of the challenges is to see beyond what the other party is telling you. Your prospective business partner may be assuring you with all the honesty in the world that security in its firm covers all requirements. However, what is true for one organisation does not necessarily carry over to another.
The modern business is directly tied with the capabilities of IT. Most of all, your data center now impacts how you create business goals and entire strategic directives. This means that business leaders and data center facilities managers must work in unison to create a truly cohesive ecosystem.
And decisions and actions on the IT side of the house can have a profound impact on mechanical systems and resulting operating costs and capacity of the data center.
When all sides of the house collaborate, there are specific benefits to the business and the entire data center environment. Consider these top challenges that collaboration aims to overcome:
Google announced a number of new security features for Gmail users in the enterprise today. Last year, the company launched its Data Loss Prevention (DLP) feature for Google Apps Unlimited users that helps businesses keep sensitive data out of emails. Today, it’s launching the first major update of this service at the RSA Conference in San Francisco.
The DLP feature allows businesses to set rules for what kind of potentially sensitive information is allowed to leave and enter its corporate firewall through email.
The most important new feature here is that DLP for Gmail can now also use optical character recognition to scan attachments for potentially sensitive information (think credit card numbers, driver’s license numbers, social security numbers, etc.) and objectionable words (maybe a swear word or a secret project’s codename).
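In spirit, such content rules amount to pattern matching over text extracted from the message and its attachments. A minimal sketch of rule-based scanning, using hypothetical patterns and a made-up project codename (this is illustrative only, not Google's actual DLP engine):

```python
import re

# Illustrative DLP-style content rules: each rule is a name plus a regex.
# Patterns and the "nightjar" codename are hypothetical examples.
RULES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "codename": re.compile(r"\bproject\s+nightjar\b", re.IGNORECASE),
}

def scan(text):
    """Return the names of all rules that match the given text."""
    return [name for name, pattern in RULES.items() if pattern.search(text)]

body = "Invoice attached. Card: 4111 1111 1111 1111, ref Project Nightjar."
print(scan(body))  # -> ['credit_card', 'codename']
```

A real DLP pipeline would add checks a bare regex cannot express (such as Luhn validation for card numbers) to cut false positives, and would run the same rules over OCR output from image attachments.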
While most IT organizations are still held accountable for security breaches, many of them are now also judged by the way they respond to a breach when one inevitably occurs. To help those IT organizations put a consistent incident response plan in place, IBM today announced at the RSA Security 2016 conference that it has acquired Resilient Systems Inc.
Caleb Barlow, vice president of IBM Security, says that Resilient Systems, one of the pioneering vendors in the category, extends IBM’s security portfolio beyond protecting against and detecting threats to include how to programmatically respond to them when they occur. Instead of wasting days trying to figure out what needs to be done in the event of a breach, Barlow says organizations need to have a plan in place that everyone in the organization can follow. That plan, adds Barlow, needs to cover everything from remediating the breach to informing the media and the appropriate government agencies.
WASHINGTON – The Federal Emergency Management Agency (FEMA) is pleased to announce that the application period for the 2016 Individual and Community Preparedness Awards is open. The awards highlight innovative local practices and achievements by individuals and organizations that made outstanding contributions toward making their communities safer, better prepared, and more resilient.
Emergency management is most effective when the entire community is engaged and involved. Everyone, including faith-based organizations, voluntary agencies, the private sector, tribal organizations, youth, people with disabilities and others with access and functional needs, and older adults can make a difference in their communities before, during, and after disasters.
FEMA will review all entries and select the finalists. A distinguished panel of representatives from the emergency management community will then select winners in each of the following categories:
- Outstanding Citizen Corps Council
- Community Preparedness Champions
- Awareness to Action
- Technological Innovation
- Outstanding Achievement in Youth Preparedness
- Preparing the Whole Community
- Outstanding Inclusive Initiatives in Emergency Management (new category)
- Outstanding Private Sector Initiatives (new category)
- Outstanding Community Emergency Response Team Initiatives
- Outstanding Citizen Corps Partner Program
- America’s PrepareAthon! in Action (new category)
Winners will be announced in the fall of 2016 and will be invited as FEMA’s honored guests at a recognition ceremony. The winner of the Preparing the Whole Community category will receive the John D. Solomon Whole Community Preparedness Award.
More information about the awards is available at ready.gov/preparedness-awards.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
The RSA security conference is being held this week in San Francisco where security pros come together to discuss strategy. IBM made several security announcements this morning ahead of the conference, headlined by the purchase of Resilient Systems.
Instead of trying to prevent an attack, Resilient gives customers a plan to deal with a breach after it’s happened. While IBM offers pieces for protecting and defending the network, no security system is fool-proof and there will be times when hackers slip through the defenses (or the attack comes from within).
“What happens when an attack happens, which unfortunately has become an inevitability? You need resilience to get back up and running and minimize the damage. There has to be muscle memory of what you will do and how you will react,” Caleb Barlow, VP of security at IBM, told TechCrunch.
What do you think happens when the computer reservation system of an airline company crashes? Well, a major airline experienced that exact situation last September – watch this three-minute video and learn about the domino effect.
When a problem occurs with an airline computer system, it creates a ripple effect that can quickly become a real mess as passengers are stranded in airports. The airline will soon order that all aircraft be grounded. Passengers will start complaining and calling the reservation desk to book other flights. In addition, labor laws will prevent the crew from working or flying.
Renewable energy is tricky to use, and it’s even trickier to use in data centers, which have to be running around the clock, regardless of whether or not the sun is shining or the wind is blowing.
For data center operators that have turned to renewable energy, the three answers have been a) using a combination of renewable generation and energy storage to supplement a data center’s power supply, not replace it; b) investing in renewable energy generation for the same grid that feeds the data center – the grid that also has coal, nuclear, and other traditional energy sources; and c) simply buying Renewable Energy Credits equivalent to some or all energy a data center consumes.
Researchers behind an experimental project in Massachusetts hope to push the progress further by studying the performance over time of a solar-powered micro data center launched this month. The test bed is called Mass Net Zero Data Center, or MassNZ. The project’s goal is to help researchers understand how to reduce data center energy consumption and increase data centers’ ability to use renewable energy.
(TNS) - The last damaging earthquake in Washington struck 15 years ago, on Feb. 28, 2001.
The next one is scheduled for June 7.
The ground isn’t expected to actually shake this spring. But nearly 6,000 emergency and military personnel will pretend it is during a four-day exercise to test response to a seismic event that will dwarf the 2001 Nisqually quake: A Cascadia megaquake and tsunami.
Called “Cascadia Rising,” the exercise will be the biggest ever conducted in the Pacific Northwest. Which is fitting, because a rupture on the offshore fault called the Cascadia Subduction Zone could be the biggest natural disaster in U.S. history.
Resilient specializes in incident response, which helps IT security teams bolster their defenses against data breaches. The Resilient incident response platform is deployed in more than 100 of the Fortune 500 corporations – which is IBM’s sweet spot.
“We are thrilled with our plans to have the Resilient team join IBM Security” said Marc van Zadelhoff, general manager, IBM Security. “The Resilient team includes some of the best security talent in the industry, along with leading products that enable clients to automate and consistently manage all aspects of responding to a security incident.”
The storage industry has historically been left out of the conversation when discussing the innovative and ground-breaking feats coming out of the technology world. Now, that focus is shifting thanks to three emerging trends in the enterprise: the move away from integrated systems to software-on-commodity hardware architectures, the focus on utilization rates of physical resources, and the increasing need to support millions of individual workloads.
Companies such as Facebook, Google and Amazon have devoted massive resources to build and maintain customized data center infrastructures from the ground up. In doing so, these companies have realized tremendous levels of scalability, flexibility, and efficiency. Enterprises today are experiencing large and growing amounts of data storage requirements and are focused on achieving the same benefits, driving these trends.
A majority of IT projects undertaken by government fail to deliver satisfactory results, cost more than anticipated or take longer to implement than planned. How often do we, as project managers and government employees, hear things like this: “It’s software development; it’ll take as long as it takes.” “I know it’s what I told you to build, but it’s not what I need.” “Tell me again why it’s going to cost an additional $50,000.”
Faced with tighter budgets, increasing expectations and closer public scrutiny, government IT organizations are under extreme pressure to deliver technology solutions that meet the needs of their users quickly and at low cost. Where the traditional project management approach has failed, agencies need to find alternatives to address these heightened expectations. Agile development has sparked the interest of public-sector change-makers as a way to save government IT from the debacle of skyrocketing costs and redundant systems.
Agile is an entirely new way of approaching project delivery, especially for public agencies. Many of the concepts employed in agile are not particularly new. They have been used in software development under names like prototyping, extreme programming or rapid application development. Frameworks like Scrum bring a structured methodology to these same concepts. It’s about breaking up large, complex projects into easily digested pieces and routinely getting feedback to make sure what is being delivered is in line with what’s needed.
Disaster Recovery Journal Spring World 2016 is taking place March 13-16, 2016 at Disney’s Coronado Spring Resort in Orlando, FL. We’re looking forward to another amazing show with numerous educational sessions and awesome people!
We have a lot planned during DRJ Spring World, and we hope you’ll join us:
- for lunch and a demo of Catalyst
- at the booth (707/709)
- for an educational session
Please take a look below for more details. We look forward to seeing you soon!
JOIN US FOR LUNCH AND A DEMO
OPTIMIZE YOUR CONTINUITY PROGRAM WITH CATALYST
Monday, March 14, 2016 | 12:00-1:15 PM | Coronado E | Lunch Provided
Speakers: Brian Zawada and Dustin Mackie, Avalution Consulting
Reserve your seat in advance: bccatalyst.com/drj
Catalyst makes business continuity and IT disaster recovery planning easy and repeatable for every organization. Join us to learn how Catalyst:
- Delivers the fastest implementation on the market
- Covers the ENTIRE continuity lifecycle
- Generates truly insightful (and automatic!) program metrics
- Saves you time by automating all administrative tasks
- Provides the lowest total cost of ownership
JOIN US IN THE EXHIBIT HALL
BOOTHS 707 & 709
Stop by our booths during exhibit hours to meet our team, learn about our business continuity and IT disaster recovery consulting and software solutions, and enter for a chance to win a hoverboard (don’t worry – we’ll ship it home for the winner)!
Want to learn more about our products and services before the show? Check out:
JOIN US AT A SESSION
FAILING BACK HOME CAN BE A TRIP
Solutions Track 3
When: Sunday, March 13, 2016 | 4:00-5:00 PM
Speakers: Michael Bratton and Bill DiMartini, Avalution Consulting
Many organizations design IT Disaster Recovery solutions like they’re booking a one-way flight – able to get to their destination but without a plan for how they’ll get back home. Even if plans include procedures for returning to the restored data center, those procedures are rarely tested and validated. This session is for you if you are responsible for developing and maintaining your organization’s IT Disaster Recovery Plan or for auditing IT Disaster Recovery Programs.
BCI 20/20 – THE FUTURE OF THE CONTINUITY INDUSTRY
General Session 3
When: Monday, March 14, 2016 | 10:30-11:45 AM
Moderator: Tracey Forbes Rice, Fusion Risk Management
Panelists: Brian Zawada, Avalution Consulting, Ann Pickren, MIR3, John Jackson, Fusion Risk Management
Where will continuity be in 10 years? What’s new in the continuity tool box? This panel of subject matter experts, consisting of DRJ’s executive council members, will be discussing the BCI 20/20 visionary think tank project and what the future holds for the professionals of this industry. Discussion will include eliminating blind spots and recognizing the risk posed by near- and far-sighted thinking. The panel will be thinking outside the box with the goal of developing a 360-degree view of risk in today’s leading organizations. Join this lively discussion to form a vision of what the future holds for this profession.
BCI HORIZON STUDY – A COMPREHENSIVE LOOK AT THE 2015 RESULTS
Senior Advanced Track 2
When: Monday, March 14, 2016 | 2:45-3:45 PM
Speakers: Brian Zawada, Avalution Consulting, John Jackson, Fusion Risk Management
The Horizon Scan Survey seeks to consolidate assessments of near-term business threats and uncertainties, based on the in-house analysis of business continuity (BC) practitioners worldwide. This session will present and discuss the results of the survey.
We sat down with VMware CEO Pat Gelsinger during the 2016 Mobile World Congress to learn more about the company's strategic partnership with IBM. Gelsinger also opened up about how the Dell-EMC deal has been affecting VMware's business, and shared an update on partner relationships.
BARCELONA – VMware's latest strategic partnership with IBM, the challenges it's faced as part of the Dell-EMC merger, and the status of partner relationships were among the topics discussed by VMware CEO Pat Gelsinger during an interview with InformationWeek at Mobile World Congress here.
On Feb. 22, IBM and VMware announced a strategic partnership that aims to enable enterprise customers to easily extend their existing workloads, as they are, from their on-premises software-defined data center to the cloud. As part of the deal, according to Gelsinger, IBM is "taking the full set of VMware technologies -- VSphere, NSX, plus our storage, plus our management -- and delivering that full set to the IBM cloud customers. IBM as an enterprise cloud provider is very significant, with 45 data centers worldwide, and they are making vast investments into that strategy."
Storage is one of the hottest IT topics today. Acquisitions are happening regularly, as more users are moving to flash and new types of storage controller ecosystems. We’re seeing powerful hybrid systems emerge and even more impact around extending environments to cloud storage. Throughout all of this, organizations must understand how to utilize these new types of storage resources, and where they apply to their data centers.
The challenge to virtualization and storage engineers is this: How do you manage and work with all the new storage capabilities? Even more important, how can you dynamically manage workload storage requirements within a virtual environment?
Small businesses are bracing for another year of costly compliance change and complexity from Washington, D.C. While they expect a cascade of regulations, their focus is on three priorities: the Affordable Care Act, Fair Labor Standards Act overtime regulations, and mandatory paid family and medical leave.
Responding to a data breach is one of the more challenging events any company can face. On the one hand, a data breach requires nearly instantaneous decision making. Which servers are affected and should be removed from the network (but not shut off)? Who should be notified? Should law enforcement, a regulator or the insurer be contacted first? When should the breach be made public, if at all? What experts should be engaged, how much do their services cost and can that budget be approved on a Sunday night? And what is the home phone number for the Director of IT?
Even for the most agile of companies, informed and responsible decision making requires the input of an array of constituencies, some of whom rarely, if ever, have been in the same room together. The classic example is the C-Suite and IT personnel. The executives may have a difficult time understanding the scope of the breach, and the language IT speaks is decidedly not the language of the boardroom. The legal requirements can be contradictory—for example, a regulator (or the FBI) may ask that you notify no one, but your insurer may require notice within 10 days to trigger coverage. The scope of the breach may be unknown, resulting in over-protection or even paralysis based on the lack of information. These complications multiply with the size and public profile of the organization.
Iron Mountain, the nearly 70-year-old “information management” company that grew out of a big early-20th-century underground mushroom-growing operation, has joined a White House program created to push companies and government agencies to improve their data center energy efficiency.
President Barack Obama’s administration rolled out the Better Buildings Initiative in parallel with its clean energy investment program in 2011. The Better Buildings Challenge, one part of the initiative, called on companies and agencies to make specific energy efficiency improvement commitments for their facilities in return for access to some technical assistance from the government, shared best practices, and, of course, good publicity.
So far, Boston-based Iron Mountain is one of 11 private-sector data center operators to have accepted the challenge, pledging to reduce energy intensity of eight of its data centers by 20 percent in 10 years. The others are eBay, Facebook, Intel, Intuit, Home Depot, Staples, and Schneider Electric, as well as data center providers Digital Realty Trust, CoreSite Realty, and Sabey Data Centers.
(TNS) -- Area hospitals are riddled with cybersecurity flaws that could allow attackers to hack into medical devices and kill patients, a team of Baltimore-based researchers has concluded after a two-year investigation.
Hackers at Independent Security Evaluators broke into one hospital's systems remotely to take control of several patient monitors, which would let an attacker disable alarms or display false information.
The team strolled into one hospital's lobby and used an easily accessible kiosk to commandeer computer systems that track medicine delivery and bloodwork requests — more opportunities for malicious hackers to create mayhem.
The firm worked with the knowledge and cooperation of a dozen hospitals, including facilities in Baltimore, Towson and Washington, but did not release their names.
(TNS) - Jakki Lewis was nearing the end of her first day of work at Excel Industries on Thursday, when she heard gunshots.
"I never did see him. We just heard bullets," Lewis said. "He was running all over the plant, chasing people."
Another employee, a man armed with a long gun and a pistol, pulled into the parking lot of the plant, where about 1,000 people work manufacturing lawn mowers, and started shooting. He walked inside, where he shot three people near the front office, Harvey County Sheriff T. Walton said later.
After hearing shots, Jeff Lusk, who was at Excel for an interview at 5 p.m., said he saw the shooter and then got under a desk.
Living with Climate Change: How Communities Are Surviving and Thriving in a Changing Climate (Jane Bullock, George Haddow, Kim Haddow, Damon Coppola) is a wide-ranging look at many aspects of past and present disaster mitigation efforts across the United States. The authors look at these efforts through the lens of climate change, and they understand that the cause of a warming climate is not accepted in all political circles. The book includes a number of case studies that look specifically at the previous benefits of the FEMA Project Impact program.
The body of the text comes primarily from a wide selection of contributors with direct experience in academia as well as in emergency management practice. While the book's anticipated primary use might be as a classroom text for undergraduate and graduate students pursuing degrees in emergency management, it also has broad application for practicing emergency managers at the local, state and federal levels. We are entering a new era in which climate impacts are beginning to reveal themselves. Emergency managers will need a resource that documents what has worked in the past and can be applied to a new and undetermined future in which climate change exacerbates what were previously considered rare weather phenomena.
With new and more aggressive hazards come the need to understand terminology that is being used in different contexts. The two-page monograph by Cooper Martin, in which he tries to explain the difference between the terms “sustainability” and “resilience,” is quite helpful.
(TNS) - The county’s emergency planning agency is betting that moviegoers, after watching a 300-foot tsunami barrel through a Norwegian fjord toward a small town, will be more receptive to information about disaster preparedness.
The Clark Regional Emergency Services Agency will host a screening of the disaster thriller The Wave at 6 p.m. March 4 at Kiggins Theatre in Vancouver. It's the first of what agency Emergency Management Coordinator Eric Frank hopes will be a recurring disaster movie night.
A movie night might draw a bigger and different crowd than the agency’s other modes of outreach, he said. “We do a lot of events every single year, but we know we’re still missing some demographics in there.”
The Zika virus, a mosquito-borne virus linked to neurological birth disorders, continues to be a serious problem worldwide. More cases in the US are being announced every day, with 14 new cases of sexually transmitted Zika virus announced by the CDC just this week, several of them among pregnant women. The CDC wrote in a recent statement, “These new reports suggest sexual transmission may be a more likely means of transmission for Zika virus than previously considered.”
As the Zika outbreak progresses, Zika preparedness and planning become critical talking points for leaders in the public and private sectors. Questions such as how to handle an infected employee in the office, or where to direct citizens for accurate, up-to-date information, need to be answered to ensure the highest level of citizen and employee safety.
Managing and analyzing big data -- the exponentially growing body of information collected from social media, sensors attached to "things" in the Internet of Things (IoT), structured data, unstructured data, and everything else that can be collected -- has become a massive challenge. To tackle the task, developers have created a new set of open source technologies.
The flagship software, Apache Hadoop, an Apache Software Foundation project, celebrated its 10th anniversary last month. A lot has happened in those 10 years. Many other technologies are now also a part of the big data and Hadoop ecosystem, mostly within the Apache Software Foundation, too.
Spark, Hive, HBase, and Storm are among the options developers and organizations are using to create big data technologies and contribute them to the open source community for further development and adoption.
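The programming model at the root of this ecosystem, MapReduce, can be illustrated with a classic word count. This is a simplified, single-machine sketch of the map/shuffle/reduce pattern in plain Python, not how Hadoop or Spark is actually invoked:

```python
from collections import defaultdict

def map_phase(documents):
    # Mapper: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reducer: sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data big ecosystem", "open source big data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])  # 3
```

In Hadoop or Spark the same three stages run in parallel across a cluster, with the framework handling partitioning, shuffling, and fault tolerance.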
It’s no secret that Microsoft already has a lot of cloud data centers around the world. And the company is planning to build a whole lot more as it attempts to bite further into Amazon’s stranglehold on the cloud services market.
As it continues to build out its global cloud data center empire, Microsoft has to make sure it’s doing it in the most environmentally responsible way it can. It is one of tech’s biggest names and as such, it is under a lot of scrutiny by environmentalists and the public.
To help the cause, Microsoft has created a new role, dedicated specifically to data center sustainability. Not corporate sustainability, not energy strategy, not data center strategy, but data center sustainability. This week, the company announced it has hired Jim Hanna, who until recently led environmental affairs at Starbucks, to fill that role.
Cybercrime and cyber security attacks hardly seem to be out of the news these days and the threat is growing globally. Be it a major financial institution or an individual, nobody would appear immune to malicious and offensive acts targeting computer networks, infrastructures and personal computer devices. Firms clearly must invest to stay resilient.
Indeed, and according to the latest results of the 2016 Global Asset Management and Administration Survey from Linedata, a NYSE Euronext-listed IT vendor providing solutions to the investment management industry around the world, cybercrime is being viewed as the “greatest business disruptor” over the next five years. But alongside this regulation remains a priority for financial firms.
The 20-page survey, which was conducted by the fintech vendor in the fourth quarter of 2015 and canvassed two hundred market participants either face-to-face at Linedata Exchange events in London and San Francisco or via an online survey, found that more than a third (36%) of respondents were concerned about the threat from cyber criminals.
The 2015-16 El Nino season is far from over, and for many parts of the United States, the last couple of months have not been easy. In fact, the City of Pacifica, CA declared a state of emergency last month after pounding waves and powerful winds caused destruction up and down the coastline. The effects of El Nino span globally too – Stephen O’Brien, a United Nations under-secretary-general, said that El Nino has pushed the planet into “uncharted territory.” According to O’Brien, “the impacts, especially on food security, may last as long as two years.”
But has this El Nino season gone as planned? Back in December of 2015, we sat down with David Gold and Mike Gauthier of Weather Decision Technologies who took us through several prediction scenarios and preparation techniques for the impending El Nino season. Fast forward two months and we are back to take a look at how the current season is panning out. The results may surprise you.
The parade of data center REITs reporting exceptional Q4 and full-year 2015 results has just become even more impressive.
CyrusOne (CONE) crushed results across the board during 2015, including record leasing of 30MW across more than 200,000 square feet of data center space in the fourth quarter alone. The company is expanding capacity across six markets, but its biggest expansion plans are in New Jersey.
CyrusOne CEO Gary Wojtaszek said the flexibility for his customers to lease anywhere from a single rack to 10MW of capacity was a key reason for success in 2015. He also pointed to the company’s ability to deliver data halls in just a few months’ time at less than $7 million per megawatt.
In his final budget proposal, President Obama is asking for an increase in spending on cybersecurity -- $19 billion, which is $5 billion more than last year. The requested increase is a response to the rise in cybersecurity threats being made against government agencies.
The budget request follows a trend as we’re seeing more organizations bumping up their cybersecurity budgets. In fact, estimates are that cybersecurity spending will continue to rise, with expectations of more than $170 billion spent on security by 2020.
But is all this spending actually doing anything to improve cybersecurity? A new study from Venafi hints that much of that money may be wasted because it isn’t effective against certain attacks. The problem, according to the CIOs surveyed, is that layered security defenses can’t tell which keys and certificates should be trusted and which shouldn’t. A whopping 86 percent of those CIOs believe that stolen encryption keys and digital certificates are going to be the next big attack vector, which is a serious problem because, according to Information Age:
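One common way to make the "which keys should be trusted" distinction explicit is fingerprint pinning: an allow-list of known-good key fingerprints against which every presented key or certificate is checked. The sketch below is a simplified illustration of the idea, not a claim about any vendor's product, and the key bytes are hypothetical placeholders rather than real DER-encoded certificates:

```python
import hashlib

def fingerprint(der_bytes: bytes) -> str:
    # SHA-256 digest of the raw key/certificate bytes, the form
    # commonly used when pinning certificates.
    return hashlib.sha256(der_bytes).hexdigest()

class TrustStore:
    """Explicit allow-list of pinned key/certificate fingerprints."""

    def __init__(self):
        self._pinned = set()

    def pin(self, der_bytes: bytes):
        # Record a known-good key at deployment time.
        self._pinned.add(fingerprint(der_bytes))

    def is_trusted(self, der_bytes: bytes) -> bool:
        # A substituted certificate hashes differently, so it is refused
        # even if it would pass an ordinary chain-of-trust check.
        return fingerprint(der_bytes) in self._pinned

store = TrustStore()
store.pin(b"known-good-server-key")                   # hypothetical key bytes
print(store.is_trusted(b"known-good-server-key"))     # True
print(store.is_trusted(b"attacker-substituted-key"))  # False
```

Pinning narrows the trusted set but does not by itself detect a key that was stolen intact, which is why the surveyed CIOs also emphasize monitoring and rotating keys and certificates.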
(TNS) - Cedar Rapids Mayor Ron Corbett said Wednesday officials are bracing for the increasing possibility that new federal flood protection money, which once seemed locked in, will never arrive.
At stake could be $70 million to $80 million for flood walls, levees and pump stations to protect low-lying areas from rising waters on the east bank of the Cedar River. Congress authorized $73 million in spending in 2014, but never appropriated the money.
“We are in serious risk of never being funded,” Mayor Ron Corbett said during his State of the City address.
The sentiment marks a transition for a city rocked by flooding in 2008 from hopeful waiting to wondering if it’s time to plot a Plan B. Eight years later, Cedar Rapids still is recovering.
(TNS) - McLean Fiscal Court approved the purchase of a critical communication service that is expected to help emergency management personnel keep the public better informed and alert.
The court approved the purchase of AlertSense, a public alert system that Emergency Management Director David Sunn said he believes could ultimately be a money saver for the county.
In the event of a critically dangerous event such as a hazardous material spill, the fire department, Sunn said, would be able to use AlertSense to determine a certain radius around the spill and send automatic phone calls or text messages to residents within the radius. That's important, he added, since the county includes vast portions of rural land where communication can be scarce.
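The geofencing step Sunn describes, selecting everyone within a set radius of an incident, amounts to a distance filter over registered contacts. A minimal Python sketch follows; the coordinates and registry format are hypothetical illustrations, not AlertSense's actual API:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def residents_in_radius(incident, residents, radius_km):
    # Return contacts whose registered location falls inside the alert radius.
    lat0, lon0 = incident
    return [r["phone"] for r in residents
            if haversine_km(lat0, lon0, r["lat"], r["lon"]) <= radius_km]

# Hypothetical spill site and resident registry.
spill = (37.53, -87.26)
registry = [
    {"phone": "555-0101", "lat": 37.54, "lon": -87.27},  # roughly 1.4 km away
    {"phone": "555-0102", "lat": 37.90, "lon": -87.90},  # roughly 70 km away
]
print(residents_in_radius(spill, registry, radius_km=5))  # ['555-0101']
```

A production system would then hand the filtered list to the telephony and SMS gateways; the geometry, though, is this simple.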
Over the past several years, the cloud-based software-as-a-service (SaaS) model has proven to be a popular choice for enterprise applications, delivering efficiencies and value to organizations in many ways. Chief among these benefits are avoiding the major undertaking and licensing costs of deploying business-critical software across the organization, and relieving IT of the burdens typically associated with maintaining on-premises software—including performing upgrades, installing patches and managing availability. Additionally, cloud-based solutions can enhance flexibility and scalability for enterprise applications and workloads. Of course, the benefits to be gained from adopting SaaS solutions in the enterprise must be balanced against potential risks, so ensuring your cloud applications are highly secure needs to be a top priority.
Luke Bird highlights the requirement for many different organizational departments and professions to work together for effective organizational resilience and provides some ideas for how to overcome the associated challenges.
Organizational resilience is a highly complex and sometimes controversial term. It comes with a variety of challenges in trying to understand how it works (or potentially how it could work) in organizations. The likes of the BCI and Continuity Central have worked tirelessly to generate wider discussion and thought leadership on this topic. However, our ongoing dialogue in recent years has barely progressed beyond reaching an agreement for a simple definition (despite many of us helping to produce the British Standard 65000).
The recently published BCI Position Statement certainly highlights that we’re still not quite there in our understanding of how to take this forward. Hopefully their official line will provoke a second wind of debate as many of us take the time to decide whether we agree or disagree. Much of my own focus and interest, however, is on the subject of multi-disciplinary collaboration and some of the challenges that we could potentially face.
Geary Sikich explains why enterprise risk and business continuity managers need to think more broadly about organizational risks. He describes how the use of ‘risk dimensions’ and ‘risk spheres’ can help.
There exists an overabundance of guidance for conducting risk assessments. Yet, it seems that we still have difficulty in getting risk assessments to reflect the appropriate level of concern for the identified risks that we are assessing. We also tend to view risk in relation to the place where we are employed and the industry that we work in. When we look at risk assessment from this perspective it should be clear that we are missing the point, or at best, are being too narrowly focused, when it comes to assessing risk for our organizations. This is not to say that our efforts are wasted. The risk assessment process is valuable regardless of how limited or narrowly focused it is. So, the question we should be asking ourselves as we prepare to implement a risk assessment is: ‘What future are we planning for?’
Nearly any discussion of contemporary channel trends includes a lament that dates back to the era when 20 megabyte hard disks were considered state of the art. To wit: How do you offset shrinking profit margins?
Nothing new under the sun here. Profit erosion is an inevitable by-product of commodity competition. And it has been part of the tech scene - especially on the hardware side - since the first PCs rolled off the assembly lines. There’s little point in building a business around keeping hardware up and running - not when the cloud’s self-service on-demand provisioning promise is being realized.
But while you can’t make a living by only focusing on hardware any longer, another part of the value chain is thriving.
In business utopia, organisations automatically avoid problems, suppliers are selected by computer on the basis of their reliability and cost-efficiency, and machines repair themselves before they break. In business dystopia, too often seen in real-world situations, the converse occurs: organisations automatically engender problems, suppliers are selected by computer by default, and machines break down without being repaired. Automation can play a big part in both scenarios, but the results in terms of business continuity can be poles apart.
Workload cloud migration startup Ravello Systems was acquired on Monday by Oracle to ease enterprise adoption of its public cloud. Oracle is reported to have paid between $400 and $500 million for the California-based company which maintains a research presence in Israel, and Oracle is now expected to open a cloud research and development facility in Israel, according to Ha’aretz.
Ravello was started in 2011 by the team behind the KVM hypervisor. It offers nested virtualization solutions, allowing KVM and VMware workloads to be developed, tested, and demonstrated in the cloud without migration, and allowing moves to new cloud providers and management platforms without rewriting applications. In May, the Canonical-backed Linux container hypervisor LXD outperformed KVM in benchmark tests.
NORTH LITTLE ROCK –Teams of specialists from FEMA will offer tips and techniques to lessen the impact of future disaster-related property damage at building supply stores in three Arkansas locations Thursday, Feb. 25 – March 1, 2016.
The teams will be at these Lowe’s stores:
- Jefferson County: 2906A E. Harding Ave., Pine Bluff
- Faulkner County: 1325 Hwy. 64W, Conway
- Benton County: 1100 NW Lowes Ave., Bentonville
Teams will be at each location from 8 a.m. to 4:30 p.m. Thursday – Tuesday except for Sunday. Hours on Sunday are from 8 a.m. to 1:30 p.m.
FEMA specialists offer “how-to” information on both retrofitting buildings to make them more resistant to weather damage and ways to elevate utilities against flooding. They also provide tips to clean and help prevent mold and mildew.
Many of the tips and techniques are specifically geared for the do-it-yourselfer and for building contractors. If you have a disability and need an accommodation to access materials such as Braille, large print, or ASL interpreters please let our representatives know.
FEMA offers a number of free online resources for home and property owners. To get started, go to
# # #
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Application and data migration remains one of the most significant barriers to cloud adoption in the enterprise these days. And while today’s solutions are not perfect, there is at least a strong commitment on the part of vendors and cloud providers to address the issue.
The biggest move came this week with the announcement from IBM and VMware that they would work together to move legacy data center functions onto the IBM cloud. The pact is significant for two reasons. First, it combines the technical knowhow of two leading IT vendors – IBM on the hardware and services side and VMware on the virtual layer – to craft what will likely be a very robust hybrid cloud infrastructure (Disclosure: I provide content services to IBM).
Secondly, it enables organizations to move legacy apps to the cloud without having to rewrite code. As The Wall Street Journal’s Angus Loten points out, this is crucial for organizations that are seeking the flexibility and scalability of the cloud but still need to leverage existing infrastructure for ongoing business processes.
(TNS) -- When people in the Kansas City area need emergency help, they can now send a text message to 911.
Text-to-911 service has been growing more common among cities across the country in recent years and is now fully operational at all emergency dispatch centers in the Kansas City metro area, the Mid-America Regional Council announced last week.
Sending a text to 911 instead of calling could be a lifesaving option for people in situations where they can’t speak safely, such as home invasions or active shooter incidents, according to MARC.
Over the past decade, the number of power outages in the United States has increased. A recent federal study found that the U.S. electric grid loses power 285% more often than it did 30 years ago. These surprising numbers are mainly attributed to aging infrastructure, a growing population, and more severe weather patterns. On top of the financial burden this places on businesses, residents’ daily lives are disrupted by these unexpected failures.
What can residents do to be best prepared in the case of a power outage? One of the key elements of being prepared is having a line of communication. During a power outage, watching the news for information from local officials is not an option. Having a system to send out a mass text or email notification is a huge advantage when traditional means of communication are cut off. Residents are often left in the dark about how long the power will be out, what caused it, and whether the problem is being fixed. By using Nixle, police departments and other officials can keep a line of communication open with residents to update them on the progress of the outage.
A new survey of 1,080 IT professionals conducted by cloud services company Evolve IP indicated the cloud has "gained corporate alignment, increased real business benefits and has near ubiquitous adoption."
Evolve IP's "2016 North American Cloud Adoption Survey" revealed 86 percent of respondents said they believe cloud computing represents "the future model of IT."
The hybrid cloud is going mainstream as more companies seek to capitalize on the benefits of both the private and public cloud.
But this tech transition is not without its sundry challenges, particularly when it comes to security - and that’s where managed service providers can play key roles as customers transform their IT infrastructures.
Many smaller companies view the hybrid cloud as a sensible balance between offloading storage and computation to a public cloud and keeping a firm’s computational services on premises. The good news is that unlike bigger enterprises, SMEs moving to hybrid clouds won’t need to jerry-rig older legacy infrastructures, a practice that can open security holes in the network. MSPs can steer that migration to the hybrid cloud with "clean" deployments by starting from scratch.
(TNS) - At least three people have died in severe weather in the southern states of the United States, where tornadoes, damaging hail and flash floods left a swath of destruction.
Tornadoes churned across many states, from Louisiana to Georgia, but the most destructive were in Louisiana and Mississippi.
More than 30 people were injured in the storms. Two people died in the hamlet of Convent, Louisiana, after a tornado demolished more than 160 mobile homes.
The third casualty died in a trailer park in Purvis, Mississippi.
The storm left tens of thousands of people without power in Louisiana, and John Bel Edwards, the state governor, declared a state of emergency in seven parishes.
The powerful storm developed when the jet stream dived across the region on Tuesday. A jet stream is a fast-flowing ribbon of air, blowing high above the Earth's surface, which can dictate the path of storms and can also encourage their development.
State health officials were heartened when President Barack Obama this month asked Congress for $1.8 billion to combat the spread of the Zika virus because they fear they don't have the resources to fight the potentially debilitating disease on their own.
Budget cuts have left state and local health departments seriously understaffed and, officials say, in a precariously dangerous situation if the country has to face outbreaks of two or more infectious diseases -- such as Zika, new strains of flu, or the West Nile and Ebola viruses -- at the same time.
"We have been lucky," said James Blumenstock of the Association of State and Territorial Health Officials, of states' and localities' ability to contain the flu, West Nile and Ebola threats of the last five years.
Cloud computing has become a significant topic of conversation in the technology industry and is being seen as a key delivery mechanism for enabling IT services. Today’s reality is that most organizations already are using some form of cloud because it opens up new opportunities and has become engrained in the fabric of how things are done and how business outcomes are achieved.
Cloud offers a host of service and deployment models: both on- and off-premises, across public, private, and managed clouds. We see some organizations starting with public cloud because of the perceived ease of entry and lower costs. Some groups, such as test and development teams, use public clouds because they need to quickly stand up infrastructure, test and run their application, and take it down, something their existing IT team can’t support. Other companies, such as startups, use public clouds because they simply don’t have the resources to build, own and manage a private cloud infrastructure today. We’re also seeing a rather significant shift back towards private clouds, which are becoming much easier and quicker to deploy and still come with IT control and peace-of-mind security benefits.
That said, every organization’s cloud is a unique reflection of its business strategies, priorities and needs; and this is why there is a great variation in how companies go about implementing their own specific clouds.
We’re constantly hearing about how the lack of rain in much of the Southwest has contributed to the worst drought in the history of the region, but the subject of water doesn’t come up much with respect to data centers.
However, it should garner just as much attention—specifically water treatment programs—according to Data Center World speaker Robert O’Donnell, managing partner of Aquanomix.
“The water management program is a huge risk in data centers; one that many facility owners don’t understand or give enough credence to,” he says.
Outbreaks of Zika have been reported in tropical Africa, Southeast Asia, the Pacific Islands, and most recently in the Americas. Because the mosquitoes that spread Zika virus are found throughout the world, it is likely that outbreaks will continue to spread. Here are 5 things that you really need to know about the Zika virus.
Zika is primarily spread through the bite of an infected mosquito.
Many areas in the United States have the type of mosquitoes that can become infected with and spread Zika virus. To date, there have been no reports of Zika being spread by mosquitoes in the continental United States. However, cases have been reported in travelers to the United States. With the recent outbreaks in the Americas, the number of Zika cases among travelers visiting or returning to the United States will likely increase.
These mosquitoes are aggressive daytime biters. They also bite at night. The mosquitoes that spread Zika virus also spread dengue and chikungunya viruses.
Protect yourself from mosquitoes by wearing long-sleeved shirts and long pants. Stay in places with air conditioning or that use window and door screens to keep mosquitoes outside. Sleep under a mosquito bed net if air conditioned or screened rooms are not available or if sleeping outdoors.
Use Environmental Protection Agency (EPA)-registered insect repellents. When used as directed, these insect repellents are proven safe and effective even for pregnant and breastfeeding women.
Do not use insect repellent on babies younger than 2 months old. Dress your child in clothing that covers arms and legs. Cover crib, stroller, and baby carrier with mosquito netting.
Read more about how to protect yourself from mosquito bites.
Infection with Zika during pregnancy may be linked to birth defects in babies.
Zika virus can pass from a mother to the fetus during pregnancy, but we are unsure of how often this occurs. There have been reports of a serious birth defect of the brain called microcephaly (a birth defect in which the size of a baby’s head is smaller than expected for age and sex) in babies of mothers who were infected with Zika virus while pregnant. Additional studies are needed to determine the degree to which Zika is linked with microcephaly. More lab testing and other studies are planned to learn more about the risks of Zika virus infection during pregnancy.
We expect that the course of Zika virus disease in pregnant women is similar to that in the general population. No evidence exists to suggest that pregnant women are more susceptible or experience more severe disease during pregnancy.
Because of the possible association between Zika infection and microcephaly, pregnant women should strictly follow steps to prevent mosquito bites.
Pregnant women should delay travel to areas where Zika is spreading.
Until more is known, CDC recommends that pregnant women consider postponing travel to any area where Zika virus is spreading. If you must travel to one of these areas, talk to your healthcare provider first and strictly follow steps to prevent mosquito bites during the trip.
If you have a male partner who lives in or has traveled to an area where Zika is spreading, either do not have sex or use condoms the right way every time during your pregnancy.
For women trying to get pregnant, before you or your male partner travel, talk to your healthcare provider about your plans to become pregnant and the risk of Zika virus infection. You and your male partner should strictly follow steps to prevent mosquito bites during the trip.
Returning travelers infected with Zika can spread the virus through mosquito bites.
During the first week of infection, Zika virus can be found in the blood and passed from an infected person to a mosquito through mosquito bites. The infected mosquito must live long enough for the virus to multiply and for the mosquito to bite another person.
Protect your family, friends, neighbors, and community! If you have traveled to a country where Zika has been found, make sure you take the same measures to protect yourself from mosquito bites at home as you would while traveling. Wear long-sleeved shirts and long pants, use insect repellent, and stay in places with air conditioning or that use window and door screens to keep mosquitoes outside.
For more information on the Zika virus, and for the latest updates, visit www.cdc.gov/zika.
February is American Heart Month. In light of that, it seems only fitting that we should check the pulse of a challenge faced by many in Healthcare IT: disaster recovery.
In a training class several weeks ago, Ryan, an incredibly enthusiastic sales engineer, and I had a conversation about disaster recovery. “Disaster recovery is so much more than the question of, ‘Will I pass the audit?’” he began. “Buildings fall apart, water rises, systems fail, snow falls, power surges,” he explained, making imaginary drawings in the air to emphasize his points. “Anything that stops hospital operations for a period of hours is definitely a disaster.”
“The great thing is that Citrix is on top of it,” he confidently added. Ryan backed that statement with a contrasting tale of two US hospitals – one in Texas that was plagued by human error and another in the Southwest that experienced equipment failure after a power surge.
The Internet of Things (IoT) generates a lot of data, which organizations can store in the cloud. But how are they keeping it all safe?
Many companies are realizing they face this challenge and are ramping up efforts to improve data security as they embrace new platforms, including IoT and cloud-based applications, according to a recent survey conducted by 451 Research.
The survey, sponsored by data and cloud security vendor Vormetric, polled 1,114 senior IT executives, representing companies ranging from $50 million to more than $2 billion in annual sales.
CHICAGO — With a forecast that includes the potential for heavy snow and high winds, the U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) Region V encourages everyone to get prepared.
“If you must leave home in dangerous weather conditions, take precautions to get to your destination safely,” FEMA Region V Administrator Andrew Velasquez III said. “Taking simple steps to prepare before the storm not only keeps you safe, but others as well.”
Follow the instructions of state and local officials and listen to local radio or TV stations for updated emergency information. If you are told to stay off the roads, stay home, and when it is safe, check on your neighbors or friends nearby who may need assistance.
Find valuable tips to help you prepare for severe winter weather at www.ready.gov/winter-weather or download the free FEMA app, available for your Android, Apple or Blackberry device. Visit the site or download the app today so you have the information you need to prepare for severe winter weather.
Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
You have probably heard the old saying that “a lie will go round the world while truth is pulling its boots on.” But you may not have considered this: “A crisis can do half its damage before the crisis plan is even found!”
And every minute a crisis goes unmanaged, costs may be piling up.
For example, the longer your people go without clear guidance, or worse, wait to execute your crisis management plans, the more likely it is that your situation will escalate. And what if the instructions for shutting down a manufacturing line come too late? That expensive equipment could end up a total loss.
Application containers, namely Docker containers, have been heralded as the great liberators of developers from worrying about infrastructure. Package your app in containers, and it will run in your data center or in somebody’s cloud the same way it runs on your laptop.
That has been the promise of the technology, based on the long-existing concept of Linux containers, around which the San Francisco startup Docker devised its application building, testing, and deployment platform. While developers love the concept of Docker, the IT managers who oversee the infrastructure those applications are eventually deployed on have processes, policies, requirements, and tools that weren’t necessarily designed to support the way apps in Docker containers are deployed, or the rapid-fire software release cycle they are ultimately meant to enable.
This week, Docker rolled out into general availability its answer to the problem. Docker Datacenter is meant to translate Docker containers and the set of tools for using them for the traditional enterprise IT environment. It is a suite of products that enables the IT organization to stand up an entire Docker container-based application delivery pipeline that is compatible with IT infrastructure, tools, and policies already in place in the enterprise data center.
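The portability promise described above can be sketched with a minimal, illustrative Dockerfile for a hypothetical Python web app. The base image, file names, and start command below are assumptions for illustration, not part of Docker Datacenter itself:

```dockerfile
# Illustrative Dockerfile for a hypothetical Python app: the image built
# from it runs the same way on a laptop, in the data center, or in a
# public cloud -- the portability described above.
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Built once with `docker build -t myapp .`, the resulting image can then be run with `docker run myapp` on any host with a Docker engine, which is the consistency Docker Datacenter aims to extend into enterprise pipelines.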
Dell Inc. said Tuesday that it has received U.S. regulatory clearance to proceed with its planned $67 billion purchase of data storage company EMC Corp.
Round Rock, Texas-based Dell Inc. has passed a mandated waiting period under antitrust laws that are intended to allow the U.S. Federal Trade Commission time to review the purchase. If no FTC action is taken, the purchase can proceed.
But the Dell Inc. deal still has to receive regulatory approvals from other jurisdictions and from EMC shareholders. The Reuters news service reported last week that European regulatory approval is expected.
In an article aimed at people new to business continuity, Jennifer Craig examines the basic content of a business continuity plan, describing seven components that should be incorporated in every plan:
1. Initial response
When something disrupts day-to-day operations, everyone should understand what – if anything – they should do immediately. By planning for that – and exercising it – no one will be running in circles muttering “What’ll we do? What’ll we do?”
Whoever notices the ‘event’ should know what to do (such as calling emergency services, alerting Security, or pulling the fire alarm). Protocols for alerting the proper decision-makers should be planned, along with contact information for those decision-makers.
The initial response should also include a clear plan for who will be ‘in charge’, whether locally, regionally, or corporately, so that all participants understand.
This year’s Disaster Recovery Journal Spring World event is nearly here. Don’t miss MissionMode at this year’s show.
Event: DRJ Spring World 2016
Location: Orlando, FL
Date: March 13-16, 2016
This year’s theme, Innovation to Ensure Resiliency, is perfect for the largest assembly of business continuity professionals in the industry. This is your opportunity to learn about the latest tools and best practices for BC/DR success.
Make the most out of your time at Spring World:
Do Some Pre-Reading
Download MissionMode’s latest whitepaper, “Incident Management Systems – A Business Continuity Program Game Changer” to see how more and more companies are improving BC/DR program maturity by adopting incident management systems. These systems, including MissionMode’s Situation Center Suite, drive business continuity management efficiency and process standardization. Read our white paper on your trip to Orlando and stop by our booth for a demo.
Visit MissionMode Booth #507
Meet the MissionMode team and get a live demonstration of our Situation Center Suite. You won’t believe how easy the system is to use and how quickly it can help your business continuity teams better execute the plans you’ve developed.
Schedule time to meet with MissionMode Chief Operations Officer, Jason Zimmerman
For a serious discussion of how your organization can benefit from deployment of MissionMode Incident Management Solutions, schedule time with the experts. Jason has helped hundreds of MissionMode clients scope their needs and customize our Situation Center tools to address key pain points.
Have some fun in Orlando!
It’s winter, it’s Florida and it’s fun! Take a little time to enjoy some of Orlando’s top attractions:
- Walt Disney World
- The Wizarding World of Harry Potter
- Universal Studios
- Cirque du Soleil
Or just enjoy the area’s fine dining and warm winter temperatures. Today’s temperature – 81 degrees!
How your organization would respond if under attack from a physical assault or fire is obvious. Someone would dial 911 and emergency services would arrive quickly to assist. Unfortunately, the same can’t be said if your organization is the target of a cyber attack. Your best offense in this scenario is to create a resilient defense against cyber attacks. Let’s take a look at the top priorities any organization should adopt to build a reliable defense against cyber threats.
Evaluate Your Skills, Fill Gaps
It’s crucial to evaluate your security team’s core capabilities when it comes to shielding the organization from cyber threats. When gaps in expertise are uncovered, develop training, schedule mock exercises and partner with other entities who make it their business to shield yours from cyber attacks.
(TNS) - Before the walls shook, before the two-by-fours twisted and the roof began tearing off, Amanda Bose saw news about the tornado on television.
“Everybody in the bathroom — right now!” the 36-year-old mother told her 5-year-old and 15-year-old. There was almost no time to wonder, she says, whether the home would protect them — or collapse around them.
Similar scenes played out in homes across North Texas during the Dec. 26 storm, which destroyed 159 houses and did major damage to 311 in Rowlett alone. Damage from the storm will reach $1.2 billion, the Insurance Council of Texas estimates.
When an IT incident strikes, every minute spent offline could cost your company thousands. When Amazon.com experienced a 100 millisecond slowdown in webpage load times, it resulted in a 1% decrease in sales. That equates to a loss of $660 million in online revenue!
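The arithmetic behind that figure can be checked with a quick back-of-envelope calculation. The $66 billion in annual online revenue used below is an assumed round number chosen for illustration, not a figure stated in the article:

```python
# Back-of-envelope check: a 1% sales decrease against an assumed
# ~$66B in annual online revenue yields the $660M loss cited above.
annual_online_revenue = 66_000_000_000  # USD; assumed figure for illustration
sales_decrease = 0.01                   # 1% drop attributed to a 100 ms slowdown

loss = annual_online_revenue * sales_decrease
print(f"Estimated annual loss: ${loss / 1e6:,.0f} million")  # -> $660 million
```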
Communication with your IT support team is the key to getting your company back up and running, but what if your IT professionals work thousands of miles away from the problem? Make sure you’re optimizing your communication strategies so you can resolve IT incidents faster and avoid costly disruptions.
Justin Ong moderated this panel discussion, which covered best practices for reducing an IT incident’s Mean Time To Know, the leading reason IT incidents aren’t resolved as quickly as they could be. The webinar’s expert panel consisted of IT professional Liz Tesch and Everbridge’s own Vincent Geffray and Frank Basso.
Directly addressing concerns about its readiness for production, application container leader Docker is rolling out a "container-as-a-service" platform designed to ease application development and management at scale.
The Docker Datacenter unveiled Tuesday (Feb. 23) seeks to combine the inherent agility of application containers with greater control and security as enterprises attempt to scale container technology. Aiming to deliver on its "build, ship and run" mantra, the new container service is a "metaphor for pulling everything together" as container technology moves to production, according to Scott Johnson, Docker's senior vice president of product management.
Docker's holistic approach includes a control plane that can be used in the datacenter or in a private cloud along with the company's trusted registry and lightweight runtime. As an example of container agility, Johnson noted in an interview that the new service could help reduce the time needed to push an application change to production from weeks to as little as a day.
At a time when security is top-of-mind for every IT and business leader–from the boardroom to the executive suite to the front lines of operations–Citrix is coming to RSA with solutions and strategies to address the latest enterprise security requirements.
To set the stage, this post provides essential resources for everyone concerned with managing risk in the enterprise to bring you up to date on the latest thinking so you can use your time at RSA productively.
As transformative trends like mobility, BYO and the Internet of Things drive the expansion and evolution of the network perimeter, enterprises need new ways to provide access for employees, contractors, partners and customers while managing risk. With Citrix solutions, companies can secure and control applications, data and usage in any scenario to keep people productive wherever and however they choose to work.
Read our solution brief “Managing Risk by Protecting Apps, Data and Usage” and watch the video below to learn more about the Citrix approach to enterprise security.
Docker announced a new container control center today it’s calling the Docker Datacenter (DDC), an integrated administrative console that has been designed to give large and small businesses control over creating, managing and shipping containers.
The DDC is a new tool made up of various commercial pieces including Docker Universal Control Plane (which also happens to be generally available today) and Docker Trusted Registry. It also includes open source pieces such as Docker Engine. The idea is to give companies the ability to manage the entire lifecycle of Dockerized applications from one central administrative interface.
Customers actually were the driving force behind this new tool. While companies liked the agility that Docker containers give them, they also wanted management control over administration, security and governance around the containers they were creating and shipping, Scott Johnston, SVP of product management told TechCrunch.
(TNS) - Pennsylvania Gov. Tom Wolf today asked President Barack Obama to declare last month's record snowstorm a major disaster, which would make the state and municipalities in at least 26 counties eligible for reimbursement of 75 percent of their costs.
In a news release, the administration said that Pennsylvania has identified more than $55.4 million in expenses related to cleanup from the storm Jan. 22-23. The state Emergency Management Agency has been compiling costs reported by communities throughout the state to make the initial request for federal disaster relief.
The storm, which was concentrated more in central and eastern Pennsylvania, dumped more than three feet of snow in some areas. Weather-related traffic accidents tied up west-bound traffic on the Pennsylvania Turnpike and stranded some motorists for more than 24 hours between Bedford and Somerset.
When it comes to business IT solutions, cloud computing is unquestionably the way forward for many companies. Over the last few years, this technology has gone from being a hyped-up buzzword to a central part of the way organisations of all shapes and sizes operate. But if you’re coming to the cloud for the first time it may seem like a minefield, with a huge range of tools and deployment options to choose from. Get it right and you can be well set for years to come, but go down the wrong route and it can be costly and time-consuming to correct your course.

One of the biggest decisions you’ll have to make is what type of cloud to go for. There are three key options here – public, private and hybrid. Each has its own pros and cons and may be better suited to some scenarios than others.

So which option is the best for your business? This decision will depend on many factors, such as the type of data you have, how flexible you need to be and your level of in-house IT resources. If you’re unsure about what will work best when you’re choosing a cloud solution, read on for our top tips on each option and what it could do for your business.
Pacific Rim economies’ exposure to the increasing threat of natural disasters has provided impetus for governments and the private sector to jointly address the need for more robust safeguards in the region.
Finance officials from the 21 APEC member economies, the world’s most disaster-affected region, ramped up their collaboration to improve risk assessments and insurance coverage during meetings that concluded recently in Lima. The focus was on narrowing gaps in data gathering and financial protection needed to build economic resiliency among them, boosted by policy inputs from disaster risk experts from the OECD, the World Bank and industry.
“About two-thirds of reported disaster losses in APEC economies are uninsured on average and vulnerabilities in the region’s developing economies are even more severe,” noted Gregorio Belaunde, director of risk management at the Ministry of Economy and Finance of Peru, who guided the proceedings. “Quantifying disaster risk exposure is a prerequisite for reducing financial protection gaps which APEC is working to facilitate as climate change raises the stakes. It also helps to reduce physical disaster risk.”
APEC economies collectively account for about 3 billion people, half of global trade, 60 percent of total GDP and much of the world’s growth. They also experience more than 70 percent of all natural disasters and these are increasing in frequency and intensity as a result of climate change. Significantly, APEC economies incurred over USD 100 billion annually in related losses over the last decade.
Officials pinpointed the components of disaster risk as well as the technical requirements for model development and data gathering necessary to accurately assess them, drawing on best practices and case studies from the public and private sectors. They also shared real world lessons and guidance for creating systems that bring insurance companies together to form ‘catastrophe insurance pools’ that can rapidly boost insurance penetration.
The findings from IDC's recent IT services end-user survey reveal that the top themes for IT services spending in the Asia Pacific, excluding Japan, (APeJ) region are: security enhancement; business continuity and disaster recovery services; and IT staff retention and training.
“A comparison of two years’ results on the top themes for IT services spend shows that APeJ organizations have moved beyond the infrastructure consolidation phase to focus on improving reliability, security and resilience of the enterprise infrastructure and systems in order to be better prepared for the digital transformation wave. This is a huge and necessary positive step, allowing the CIO focus to shift from technology to people and process. As a result, we expect the IT education and training services market in the region to grow strongly, driven by a huge demand for re-skilling,” said Cathy Huang, research manager, Services and Cloud Research Group, IDC Asia/Pacific.
The survey data reveals interesting sub-trends within the broader context of enterprise expectations of transformative technologies and services.
The Internet of Things is rich in promises. Besides the old (by now) examples of connecting your fridge or coffee machine to the Web, the possibilities for connecting, controlling and optimizing “things” are vast. They range from monitoring and reducing energy usage in buildings to preventing oil pump failure in remote oil fields, and from cutting aircraft jet engine fuel bills to helping people park better in cities. In fact, “better” is often the keyword. The IoT or IIoT (Industrial Internet of Things) offers considerable potential for improvement. But what does it do for business continuity – and could we conceivably end up worse off for BC because of the IoT?
In today’s 24-hour news environment, most senior legal officers across corporate America acknowledge the importance of communications with stakeholders during high-profile lawsuits. Yet the majority have outdated strategies or no strategies at all to direct communications outside of court, according to a new survey conducted by Greentarget.
This lack of preparation leads to overly conservative communications, the survey shows, with decisions and actions that are often impulsive and governed by the fear of negative media attention. Ironically, these instincts can compound the likelihood of reputational damage.
“The fact is that most senior legal officers can name the top two or three lawsuits they never want their companies to face,” said Larry Larsen, senior vice president of Greentarget and head of the firm’s Crisis & Litigation Communications Group. “They should take some level of control and prepare for what’s to come.”
The Homeland Security Simulation Center offers realistic training on disaster preparedness and response through a virtual reality platform
After first responders in Gresham, Ore., handled a high school shooting, emergency management officials realized that they needed to improve their training, especially for law enforcement.
The incident had “a lot more complexity than just neutralizing a threat, which is what they’re focused on,” said Kelle Landavazo, emergency management coordinator for Gresham.
Reuniting students with panicked parents who are arriving at a campus — while keeping track of who has been picked up, and by whom — is a major logistical challenge. So is coordinating the efforts of everyone who is responding.
It seems the enterprise is approaching container technology with a mixture of anticipation and trepidation as it seeks to establish architectures that offer broader scalability and are better suited to microservices than standard virtualization.
But the growing number of deployments is starting to point out the challenges inherent in container-based data environments, although it appears that most of the issues can be overcome by a proper management stack and a reasonably good understanding of what containers can and cannot do.
At the moment, much of the momentum behind containers comes from developers, says CIO.com’s Clint Boulton, while CIOs and other c-suite executives are a little more wary. At a recent Wall Street Journal gathering, Docker CEO Ben Golub focused primarily on the technology’s ability to support cloud-based app development and testing, even as an online poll showed a fair amount of skepticism about containers’ value proposition and whether they could do anything that simple virtualization or platforms like Red Hat’s OpenShift could not. One key advantage containers bring to the table is that they do not rely on a guest operating system, which in turn should provide a more integrated change management structure, enabling the kind of continuous delivery and integration required of cloud-based apps and services.
The world’s biggest technology companies are handing over the keys to their success, making their artificial intelligence systems open-source.
Traditionally, computer users could see the end product of what a piece of software did by, for instance, writing a document in Microsoft Word or playing a video game. But the underlying programming – the source code – was proprietary, kept from public view. Opening up source material in computer science is a big deal because the more people who look at code, the more likely it is that bugs can be found and long-term opportunities and risks can be worked out.
Openness is increasingly a big deal in science as well, for similar reasons. The traditional approach to science involves collecting data, analyzing the data and publishing the findings in a paper. As with computer programs, the results were traditionally visible to readers, but the actual sources – the data and often the software that ran the analyses – were not freely available. Making the source available to all has obvious communitarian appeal; the business appeal of open source is less obvious.
(TNS) - Will Montgomery County build a backup 911 call center or opt for a regional service?
County officials, who have been mandated by the state to offer a backup facility, will have to make a decision concerning the center. The practicality of a backup was made plain the week of July 4, 2012, when a strong wind ripped the roof from the current center.
County Manager Matthew Woodard said the N.C. Legislature has passed a bill mandating that counties have a reserve facility in case the regular call center goes off-line or a widespread emergency requires backup. He said that in the case of the 2012 windstorm, emergency communications could have been disrupted if rain had damaged equipment.
(TNS) - Floodwaters, like many natural disasters, are not contained by political boundaries.
But on Monday, when overflowing Cowiche Creek inundated county and city homes, emergency management staff for both jurisdictions were not talking to each other about services for displaced residents.
“Between our office, the Red Cross, and the individuals in the Riverview Manor Mobile Home Park, there were some difficulties getting ahold of the city,” said Scott Miller, director of the Yakima County Office of Emergency Management.
NEW YORK – STOPit announces the launch of STOPit PRO – the only compliance reporting platform that enables companies to mitigate risk and prevent financial liabilities by empowering employees to anonymously report fraud, unethical behaviors and product-related issues.
As a 21st century solution for deterring, mitigating and investigating all forms of inappropriate conduct in the workplace, STOPit PRO provides uniquely anonymous two-way dialogue between the employee and company officials – including risk managers, general counsel and HR departments.
Employees can use the STOPit PRO mobile app to provide real-time reports and messages, including incident-related photo and video documentation. Employers can then follow up for additional information through the app, with all interactions remaining anonymous.