Industry Hot News (6926)
Venafi has published new research reevaluating the risk of attacks that exploit incomplete Heartbleed remediation in Global 2000 organizations.
Using Venafi TrustNet, a cloud-based certificate reputation service designed to protect enterprises from the growing threat of attacks that misuse cryptographic keys and digital certificates, Venafi Labs found that 84 percent of Forbes Global 2000 organizations’ external servers remain vulnerable to cyber attacks due to Heartbleed. This leaves these organizations open to reputational damage and widespread intellectual property loss.
When the Heartbleed vulnerability was discovered in April 2014, many organizations scrambled to patch the bug. But despite significant guidance from Gartner and other industry experts, the majority have since failed to take all of the necessary steps to fully remediate their servers and networks.
“A year after Heartbleed revealed massive vulnerabilities in the foundation for global trust online, a major alarm needs to be sounded for this huge percentage of the world’s largest and most valuable businesses who are still exposed to attacks,” said Jeff Hudson, CEO, Venafi. “Given the danger that these vulnerabilities pose to their business, remediating risks and securing and protecting keys and certificates needs to be a top priority not only for the IT team alone, but for the CEO, BOD, and CISO.”
Download the Venafi Heartbleed +1 Year Analysis (PDF) at:
Like virtualization, it seems that containers are going to work their way into the enterprise by stealth – that is, whether the people in charge of technology and infrastructure want them or not.
Part of this is due to the advent of the cloud. The more the enterprise offloads data and applications to third-party infrastructure, the less it has to say about the make-up and configuration of that infrastructure. But part is due to the fact that, like virtualization, containers are making their way into leading data platforms where they will exert their influence through standard upgrade and refresh cycles.
A case in point is container management firm CoreOS’s decision to integrate Google’s Kubernetes cluster management system into its new Tectonic platform. According to ZDNet’s Steven J. Vaughan-Nichols, this will enable enterprises to manage Linux containers within their data centers in scale-out cloud fashion and, by extension, foster compatibility with existing Google applications, which are almost universally housed in containers managed by Kubernetes. As the enterprise gravitates toward private clouds, particularly Linux-based clouds, an integrated container stack will be crucial for the delivery of applications and microservices to a diverse workforce. Other Linux developers such as Mirantis and Mesosphere are also working to integrate Kubernetes into their platforms.
Risk professionals aren’t prepared for the age of the customer. Empowered consumers and changing market dynamics are upending longstanding business models and lines of operation, but risk professionals largely stand pat, and continue to neglect risks related to their organizations’ most critical asset – company reputation. Yesterday we published a report on "Brand Resilience" that will hopefully help you change that legacy risk mentality.
(TNS) — The National Oceanic and Atmospheric Administration is testing a new feature that lets people get a look at what kind of damage and storm surges are possible, and using Charleston, S.C., for the preliminary model.
The Experimental Storm Surge Simulator shows a street-level view of where water could rise in a storm surge.
"Surveys of the public show there is still a consistent misunderstanding of what the storm surge is, and how deadly it can be," reads the introduction to the app. "In part this is due to the challenge scientists encounter in trying to simplify the complex physics of hurricanes for the public, and in part this is due to poor misunderstanding of flood zone maps that represent the flooding scenario as it might be viewed from above."
(TNS) — Haunted by the public health community's failure to prevent or contain Ebola, a top Houston expert is spearheading a government-sponsored effort to prepare North Africa and the Middle East so that the region doesn't spawn the next infectious disease epidemic.
Dr. Peter Hotez, named a U.S. science envoy in December, fears the next virulent outbreak of a neglected tropical disease or emerging infection could strike ISIS-occupied territories in Syria, Iraq, Yemen or Libya, all of which fit the historical mold for such a disaster. He is working to identify institutions in the region that could send scientists to train in Houston, then ramp up back at home to produce vaccines in time to prevent an epidemic.
"We can't wait for catastrophic epidemics to happen and only then start making vaccines," said Hotez, an infectious disease specialist at Baylor College of Medicine and Texas Children's Hospital. "We need to start anticipating the next threat."
(TNS) — So many earthquakes rumble through south-central Kansas these days that the Harper County Herald charts them in each week’s edition the way some papers run baseball box scores.
They run on page 12. Right next to the oil and gas industry news as a not-so-subtle reminder that there’s a likely connection between the quakes and an upswing in drilling operations.
“For a while there, every day, several times a day it was shaking,” said Herald editor-in-chief Kate Catlin.
While many companies would like to adopt cloud services, many still resist over concerns about data security. Here's how managed service providers (MSPs) can overcome the two main objections to cloud computing and cloud-based file sharing in 2015.
As a recent article from CloudWedge says, “The most cited barrier to entry for cloud into the enterprise continues to be the security concerns involved with an infrastructure overhaul.” The problem with that lingering concern is that it reflects an enduring lack of education, and that knowledge gap is hindering the market for MSPs. Yet it also presents an opportunity.
What these hesitant or resistant organizations really fear is the unknown. And, what they don’t know is what adopting the cloud will mean for their most valuable, most highly-protected data.
Concepts and fashions in business come and go. And sometimes they come back again with a new look or a different name. The origin of the DevOps name is simple to guess. It’s a combination of development and operations. The advantages cited of using a DevOps approach include a lower failure rate of software releases, a faster time to fix, and a faster time to recover if a new release crashes your server. DevOps is currently a buzzword in IT circles, but despite an inception date of 2008, just how new is it?
The data experts are still sounding the warning bell about data lakes, prognosticating a list of problems that data lakes will cause you.
Meanwhile, word on the street is that enterprises are building data lakes anyway, because everyone else thinks it’s a great idea. This means that many enterprises are now stuck looking for ways out of those predicted problems.
It’s going to get interesting for the rest of us—and possibly very expensive for some.
Gartner Director of Public Relations Christy Pettey revisited the problems of data lakes, drawing on Research Director Nick Heudecker’s presentation at the Business Intelligence & Analytics Summit. Pettey’s article identifies the three main problem areas with data lakes:
If your system has been hacked, what would your first reaction be?
Speaking for myself, I think I would want to know who did it and figure out how it was done. That’s my personality, to learn the who, what, and why of a situation first, and then focus on the damage control. I suspect that this is human nature for a lot of people, too.
On the other hand, when I put that question to a security professional during an informal conversation, his response was this: Find out what information was hacked and determine whether the FBI needs to be involved immediately. You have to figure the data had already been compromised, he said, so you’ve got to work on minimizing the damage.
According to Edward J. McAndrew, assistant United States attorney and cybercrime coordinator with the U.S. Attorney’s Office in the District of Delaware, and Anthony DiBello, director of strategic partnerships for Guidance Software, the security professional I spoke with is on the right track. When a hack happens, it is important to resist human nature regarding the hacker (at least immediately). Instead, you want to focus on mitigating damage and data loss and providing information to law enforcement so the cops can identify and take action against the bad guys.
The cloud has given business units within the enterprise a chance to do an end-run around IT when they need quick resources to complete a given task.
The CIO is rightly concerned about this, given the security and governance issues that such free-wheeling activity promotes. But in the front office, the end results of greater productivity and lower costs are hard to resist, particularly once the appropriate agreements are struck with cloud providers that enable broad protection and availability measures for data placed on third-party infrastructure.
It stands to reason, then, that many providers are positioning their services away from the technical elements of the enterprise and more toward the people who actually stand to benefit – the line-of-business managers who are under increasing pressure to get the job done no matter what. This is why we are seeing the rise of cloud services tailored toward key functions, such as marketing, as opposed to generic server and storage resources.
The role of the IT manager ain’t what it used to be. There was a time when responsibilities primarily included building a software stack, managing the company’s infrastructure, and operating company-owned equipment. With the rapid adoption of cloud technology, including cloud-based file sharing, those roles and responsibilities have changed dramatically – and it’s critical for MSPs to understand this shift.
IT managers now fill more of a relationship manager role and are ideally viewed as partners by business leaders and department heads. MSPs looking to provide cloud services to clients need to understand this shift in roles in order to work with – and be successful alongside – the new IT department.
Russ Banham from Forbes recently outlined some of the things IT pros are doing now instead of managing infrastructure. Here are a few things IT managers are doing now that MSPs should be prepared for:
World Backup Day 2015 gave managed services providers (MSPs) a great opportunity to educate their customers about the importance of backing up personal data.
And even though this year's event has come and gone, MSPs don't have to wait until 2016 to teach customers about the value of data protection.
For example, a new survey from data backup and disaster recovery (BDR) solutions provider Kroll Ontrack revealed 61 percent of data recovery customers had a backup solution in place at the time of data loss.
(TNS) — Critics call it “sharpening the pencil.”
Since the Diablo Canyon nuclear power plant opened on a rocky stretch of California coast in 1985, researchers have discovered three nearby fault lines capable of stronger quakes than the one that struck Napa last year.
And yet the plant’s owner, Pacific Gas and Electric Co., insists that Diablo isn’t in greater danger than previously thought. If anything, it’s in less.
PG&E has, at several times in Diablo’s complicated history, changed the way the company assesses the amount of shaking nearby faults can produce, as well as the plant’s ability to survive big quakes.
For years enterprises have attempted to move away from spreadsheets in favor of enterprise resource planning (ERP) systems, accounting systems and various other software systems and applications. Yet, no matter how hard organizations try, it seems spreadsheets will not go away.
Spreadsheets are easy to use and accessible, and people are comfortable working with them. When they have a job to do, spreadsheets are there—not waiting for IT. Yet when left unmanaged, the risks associated with spreadsheets can prove costly, resulting in bad business decisions, regulatory penalties, and even lawsuits. In some instances, unmanaged spreadsheets are costing organizations millions of dollars.
For example, last October a spreadsheet mistake cost Tibco shareholders $100 million during a sale to Vista Equity Partners. Goldman, Tibco’s adviser, used a spreadsheet that overstated the company’s share count in the deal. This error led to a miscalculation of Tibco’s equity value, a $100 million savings for Vista and a slightly lower payment to Tibco’s shareholders.
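The mechanics of that kind of error are easy to see with a quick calculation. The figures below are invented for illustration, not Tibco’s actual numbers: in a deal priced per share, a share count pulled from a stale spreadsheet inflates the implied equity value by the per-share price times the phantom shares.

```python
# Hypothetical illustration of a share-count spreadsheet error.
# All numbers are invented, not the actual Tibco/Vista figures.
price_per_share = 24.00          # agreed deal price per share
actual_shares = 160_000_000      # true diluted share count
overstated_shares = 164_300_000  # count taken from a stale spreadsheet

implied_value_wrong = price_per_share * overstated_shares
implied_value_right = price_per_share * actual_shares
overstatement = implied_value_wrong - implied_value_right

print(f"Equity value overstated by ${overstatement:,.0f}")
# → Equity value overstated by $103,200,000
```

With these made-up inputs, a 4.3-million-share discrepancy swings the implied equity value by roughly $100 million, which is the order of magnitude at stake in the deal described above.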
Last week we began the first workshop in our MSc Organisational Resilience from the module that has a specific focus on Security Management. We covered the usual discussions about crime theory and motivational influence before going on to discuss the scope and parameters of security. So far so routine: vanilla security management ideas. Then we began to move onto the more interesting and challenging elements of the workshop, where the contextualised approach was developed. Where does security management ‘fit’ with other resilience disciplines; and what does the critically evaluative approach that we undertake at postgrad level reveal about security’s true profile and organisational relevance?
It is context that is important, and that is something that we can develop and analyse extremely well. How? Because our students and tutors are multi-disciplinary. If you undertake a security management course, staff it with criminologists, and draw all of your students from a security, military or law enforcement background, you get bias. Bias is not something that we are too fond of, as it tends to skew research and its outcomes. So with, for example, business continuity and emergency and crisis management specialists within our group, we have the opportunity to challenge the rigidity of thought that some see as the underlying trait of many security people. We have covered the theories of crime, and we will not cover the processes of security (and its multiple sub-activities) in any more detail from now on. However, we will look at the development of ideas, thoughts and research into security management in the organisation and its resilience, dismantling the behaviours and attitudinal approaches that restrict organisational capability from much wider viewpoints.
Everyone knew the cloud was going to be big when the term first appeared in tech circles five or so years ago. But the speed at which it is taking over data infrastructure and the enthusiasm it has generated in the enterprise are surprising nonetheless.
As a rule, the enterprise does not alter the fundamentals of its data infrastructure lightly – even the transition from one core switch or centralized server or storage platform to another is a study in careful planning, particularly when a change in product lines or vendors is on the table. So when word came down that organizations could move virtual architectures to entirely new resource sets that are not even controlled by the enterprise, there was every reason to think that maybe this would happen, someday.
But someday seems to be approaching at lightning speed if the latest research is to be believed. Goldman Sachs recently projected that spending on cloud computing and infrastructure will jump from today’s $16 billion – which is already a three-fold increase from the beginning of the decade – to more than $43 billion by 2018. And according to CenturyLink, 2020 will unfold with upwards of 70 percent of IT infrastructure residing in the cloud, nearly the opposite of what it is today. And reports coming in from the field indicate that most organizations expect to see improved service in the cloud compared to legacy infrastructure, as well as lower costs.
The April 2013 Boston bombing may have marked the first successful terrorist attack on U.S. soil since the September 11, 2001 tragedy, but terrorism on a global scale is increasing.
Yesterday’s attack by the Al-Shabaab terror group at a university in Kenya and a recent attack by gunmen targeting foreign tourists at the Bardo museum in Tunisia point to the persistent nature of the terrorist threat.
Groups connected with Al Qaeda and the Islamic State committed close to 200 attacks per year between 2007 and 2010, a number that grew by more than 200 percent, to about 600 attacks in 2013, according to the Global Terrorism Database at the University of Maryland.
New survey results suggest some communities are much better prepared for emergencies than others.
The Census Bureau and U.S. Department of Housing and Urban Development released data this week showing the extent to which Americans in different parts of the country have taken measures to prepare for natural disasters or other emergencies. Disaster preparedness questions were a new addition to the 2013 American Housing Survey, intended to assist policymakers and emergency responders with planning.
Nationwide, just over half of households had prepared an emergency evacuation kit. Only a third had communication plans in place, while 37 percent had established emergency meeting locations.
Recently, President Obama issued an executive order to address cyberspying and other maliciously intended cyber activities conducted by hackers and spies in foreign countries. The order will assess penalties for overseas cyberspying and for those who knowingly benefit from the act. In an email message to me, Greg Foss, senior security researcher with LogRhythm, called it an “interesting move,” adding:
This is primarily because attribution within the information security space is not nearly as easy as it sounds. It is trivial for hackers to pivot through other countries and misplace blame in order to create the illusion that an attack originated from a specific location. Malware can and will be created that contains false data, to shift culpability.
(TNS) — After nearly seven years without a large hurricane threatening the entire Gulf Coast from Texas to Florida, emergency planners say they're having a difficult time getting residents to prepare for the upcoming season.
"It's human nature," said Rick Knabb, director of the National Hurricane Center. When hurricanes don't happen, people forget about them.
This week the country's leading emergency managers and hurricane officials are meeting in Austin at the annual National Hurricane Conference, and this year the buzz has been about the recent lull in Gulf of Mexico activity and how that has made preparations for the season, which begins June 1, more difficult.
If the title of this post makes you go cross-eyed, don’t worry. All will become clear. Let’s explain. Active/active IT configurations consist of computer servers that are connected in a network and that share a common database. The ‘active/active’ part refers to the capability to handle server failure. First, if one server fails, it does not affect the other servers. Second, users on a server that fails are then rapidly switched to another server that works. The database that the servers use is also replicated so that there is always one copy available. Now for the other two acronyms: HA stands for high availability; DR (of course) for disaster recovery. It is DR that is more affected in this case.
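The switch-over behaviour described above can be sketched in a few lines. This is a toy model with invented names, not any vendor’s implementation: users are placed on healthy servers, and when one server fails, only its own users are affected and they are promptly reassigned to the survivors.

```python
# Minimal sketch (hypothetical names) of active/active switch-over:
# a failed server's users are reassigned to the surviving servers.

class Server:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.users = set()

class ActiveActiveCluster:
    def __init__(self, servers):
        self.servers = servers

    def assign(self, user):
        # Place the user on the least-loaded healthy server.
        target = min((s for s in self.servers if s.healthy),
                     key=lambda s: len(s.users))
        target.users.add(user)
        return target

    def fail(self, server):
        # A failure on one server does not affect the others:
        # its users are simply reassigned to the survivors.
        server.healthy = False
        displaced, server.users = server.users, set()
        for user in displaced:
            self.assign(user)

cluster = ActiveActiveCluster([Server("a"), Server("b")])
a, b = cluster.servers
for u in ("u1", "u2", "u3", "u4"):
    cluster.assign(u)
cluster.fail(a)
print(sorted(b.users))  # all four users now on the surviving server
```

The database side of the picture (every server reading a replicated copy) is omitted here; the sketch only covers the user switch-over that distinguishes active/active from a cold-standby DR arrangement.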
There are many products and services on the market today designed to help notify the right people with (hopefully) the right messages in the event of disruption of day-to-day operations.
Yet we in Business Continuity (and Emergency Management, Crisis Management and ITDR) spend little time, money or effort streamlining how we receive intelligence about events that could potentially disrupt our businesses. Why all the emphasis on outgoing information yet so little on incoming intelligence?
We already know what kind of intelligence we should be anticipating. After all, successful Business Continuity Management and Risk Management uncover knowledge of events that may negatively impact day-to-day operations. And there are many readily available sources which can alert us to those potential, impending or current events for both personal and business use.
During the first quarter of 2015 Continuity Central conducted an online survey asking business continuity professionals about their expectations for the rest of 2015.
239 responses were received, with the majority (82.8 percent) being from large organizations (companies with more than 250 employees). The highest percentage of respondents were from the United States (35.6 percent), followed by the UK (24.7 percent). Significant numbers of responses were also received from Australia and New Zealand (6.7 percent), Canada (5.9 percent) and India (4 percent).
BSI has published a white paper that explores the role of metrics in the ISO 22301 business continuity standard and aims to help people understand the standard’s BCM measurement requirements.
The executive summary of the 'Measurement matters: the role of metrics in ISO 22301' white paper states that ISO 22301 recognizes the importance of having accurate performance information, laying down requirements for ‘monitoring, measurement, analysis and evaluation’. However, the emphasis on monitoring performance, measurement and metrics in ISO 22301 has caused confusion in some organizations. This white paper clarifies the requirements around measurement in ISO 22301. In addition, three BSI clients describe how they have approached these requirements.
Read the white paper (PDF).
On this day we celebrate the greatest upset in the history of the NCAA Basketball Tournament, when Villanova beat Georgetown for the 1985 national championship. Georgetown was the defending national champion and had beaten Villanova at each of their regular season meetings. In the final the Wildcats shot an amazing 79% from the field, hitting 22 of 28 shots plus 22 of 27 free throws. Wildcats forward Dwayne McClain, the leading scorer, had 17 points and 3 assists. The Wildcats’ 6’ 9” center Ed Pinckney outscored 7’ Hoyas’ center Patrick Ewing, 16 points to 14 and 6 rebounds to 5, and was named MVP of the Final Four. It was one of the greatest basketball games I have ever seen and certainly one for the ages.
I thought about this game when I read an article in the most recent issue of Supply Chain Management Review by Jennifer Blackhurst, Pam Manhart and Emily Kohnke, entitled “The Five Key Components for SUPPLY CHAIN”. In their article the authors asked “what does it take to create meaningful innovation across supply chain partners?” They reported: “Our researchers identify five components that are common to the most successful supply chain innovation partnerships.” The reason innovation in the supply chain is so important is that it is an area where companies can not only affect costs but can also move to gain a competitive advantage. To do so, companies need to see their supply chain third parties as partners and not simply as entities to be squeezed for cost savings. By doing so, companies can use the supply chain in “not only new product development but also [in] process improvements”.
Confusion surrounds the topic of how to bring some sense of order to Big Data. Depending on the day, the discussion might come down to data quality, data governance or master data management.
Here’s a hint: One of these is much less necessary than the others. You should always understand the quality of your data — big or otherwise. And it’s just basic legal smarts to create governance rules about data lest you fall afoul of regulatory compliance.
But when it comes to master data management and Big Data, you may be better off leaving each to its own. If you’re not clear on why, I recommend this post by veteran integration technologist Kumar Gauraw, who takes you through his thought process on why MDM and Hadoop don’t match.
As I’ve mentioned often in the past, the enterprise is not transitioning to the cloud, but many clouds. And with the advanced automation systems hitting the channel, it will soon be a relatively simple matter to deploy workloads to the appropriate cloud with little or no oversight from users or IT managers.
But how do you determine which cloud is the right cloud? And how exactly will all these clouds work together to produce at least the semblance of an integrated data environment?
According to EMC’s Peter Cutts, the either/or debate surrounding public and private clouds is over. Enterprises that have chosen both, in fact, are likely to see significant advantages over those who restrict themselves to pure-play infrastructure. The public cloud’s scalability cannot be denied, of course, but neither can the security, governance and performance of private infrastructure. In a hybrid scenario, the enterprise has the ultimate in flexibility when it comes to compiling the optimal resources for the business objective at hand.
Businesses are more dependent on their supply chains than ever, with supply chain disruption one of the leading causes of business instability. To thrive, companies need to be resilient, and part of that is their location and the location of suppliers. According to FM Global’s 2015 FM Global Resilience Index, Norway tops the list of resilient countries, with Switzerland in second place.
The study’s purpose is to help companies evaluate and manage their supply chain risk by ranking 130 countries and regions in terms of their business resilience to supply chain disruption. Data is based on: economic strength, risk quality (mostly related to natural hazard exposure and risk management) and supply chain factors (including corruption, infrastructure and local supplier quality).
Business continuity is not just for businesses – public sector organizations and third sector organizations are perhaps just as likely to be affected by a disruptive event as any private sector organization. So are non-profits doing enough to protect the way they operate?
‘Business continuity challenges within the non-profit sector’ is the subject of the latest edition of the Business Continuity Institute's Working Paper Series. In this edition, Rina Bhakta CBCI discusses how there is a lack of shared knowledge on the way business continuity works in the non-profit sector. She argues that while there are various standards and benchmarking from other industries, it can be difficult to relate it to non-profits because a lot of it is not applicable.
Rina notes that the main challenge is that any programme adopted is usually based on best practice. Although the Charity Commission in the United Kingdom outlines the requirements of risk management, the section on business continuity is limited. It then becomes difficult to secure appropriate buy-in and commitment when such aspects are not enforced by regulation.
In 'business continuity challenges within the non-profit sector', Rina talks through the six stages of the business continuity management lifecycle and provides case studies to highlight how each stage would apply to a non-profit organization. To read the full document, click here.
There isn’t a week that goes by without some headline news on a data security issue. Whether it’s data theft, operating system and browser vulnerabilities, or malware threats, today’s small to midsize businesses face dangers from every corner. Unfortunately, most SMBs don’t understand the impact these threats can have until it’s too late. Many also don’t realize it takes more than a simple anti-virus solution to get the job done. Yet SMBs don’t have the time or the expertise to install and manage the level of security software that is necessary to protect against modern security threats. How can managed service providers help?
The SMB market is highly dependent on managed service providers (MSPs) to deliver managed security services to protect corporate assets. It’s an opportunity that’s there for the taking, but to be successful MSPs need to take a multipronged approach--one that encompasses vulnerability assessment, Windows and third-party patch management, anti-malware, content control and filtering. Endpoint security, along with policy management and enforcement, is also an important part of the mix for maximizing SMB protection.
Many CIOs are struggling to realise the full benefits of their increasingly virtualized IT estates, largely due to the strains of staying secure. But Reuven Harrison says it doesn’t have to be this way...
Over the past decade, businesses have been virtualizing ever more of their IT architecture. At first, CIOs were primarily attracted by the huge efficiency improvements and reduced need for capital expenditure. But as cloud computing has evolved and matured, firms are increasingly eyeing the main prize: the potential to attain unparalleled levels of business agility.
Being able to deploy resources such as servers, storage and connectivity on demand, and scale them up (and down) at will, has resulted in IT departments shifting more and more systems and applications over to private and (to a lesser extent) public clouds. And as firms move inexorably towards a fully software-defined environment – where systems are not only virtualized, but every part of them can be managed, monitored, configured, optimised and secured centrally and automatically – virtual nirvana seems tantalisingly close.
With the growing reliance on digital business processes in most companies today, the IT department has more responsibility than ever. But, according to new research, businesses are disrupted within the first few minutes of an IT outage, and poor communications management means finding the right person to investigate the issue can take as long as, or longer than, resolving it.
Forty-five percent of IT professionals reported that their business is impacted if IT is down just 15 minutes or less, and 17 percent said disruption occurs the instant an IT outage develops, according to research by Dimensional Research for a new report, the ‘Business Impact of IT Incident Communications: A Global Survey of IT Professionals.’ The report was commissioned by xMatters, inc.
As the old adage goes, “Time is money,” and in the interest of saving money, we must not waste time. This is especially true when it comes to disaster preparedness and recovery—an area where many companies continue to fall short, as evidenced by the Disaster Recovery Preparedness Council’s 2014 Disaster Preparedness Benchmark Survey.
As part of the study--which surveyed companies of all sizes, from a broad range of industries across the globe--the Disaster Recovery Preparedness Council found that three out of four companies worldwide are at risk for failing to adequately prepare for disaster. Furthermore, the council found that incidents and costs of outages associated with disaster remain a major challenge for many organizations.
The NCAA basketball tournament takes hundreds of good college teams from around the country and boils them down to 64 qualifiers, a round of 32, a Sweet Sixteen, an Elite Eight, Final Four and then two finalists who fight it out for the glory.
Similarly, we have whittled down the many flash storage tips from a multitude of sources into a handful. A couple of weeks back, we provided some tips focused on how to maximize flash performance. But so hot is the flash arena that we are now following it up with an Elite Eight among flash storage tips, these ones focused on product selection.
Amid all the time, attention and money devoted to upgrading and improving enterprise infrastructure, we should keep in mind that it is still just a means to an end. While the specifics may vary, that end is generally considered to be improved productivity, streamlined infrastructure and a more vibrant, dynamic user experience.
But none of this is going to happen without a complete renovation of data center infrastructure and, by extension, the mindset that governs not only design and architecture but human interaction with the digital ecosystem.
To Hiroshige Sugihara, president and CEO of Oracle Japan, this can be summed up in a single word, which unfortunately defies English translation. But generally speaking, it refers to the rejection of conceptual categorization that often prevents us from seeing the big picture – kind of like failing to see the forest for the trees. In the enterprise, this often leads to the one-to-one thinking that lumps together applications and hardware and ultimately produces the silo-based infrastructure that hampers interactivity and innovation. In the new century, the enterprise will need to base strategies on results, rather than what resources must be brought to bear on particular data sets.
Wi-Fi has serious security issues. As my colleague Carl Weinschenk wrote last year in a blog post discussing the vulnerability problems of Wi-Fi, particularly in the age of BYOD and working from anywhere, “… the world outside the firewall simply isn’t as secure as the world within.”
If we needed a reminder about the insecure world outside of the firewall, we got it last week with the news of a vulnerability discovered in hotel Wi-Fi. The flaw was discovered in ANTLabs InnGate devices, which provide in-room access for hotel guests, as well as the type of temporary Wi-Fi connections used in other public places such as convention centers. As explained by Wired:
The vulnerability, which was discovered by the security firm Cylance, gives attackers direct access to the root file system of the ANTLabs devices and would allow them to copy configuration and other files from the devices’ file system or, more significantly, write any other file to them, including ones that could be used to infect the computers of Wi-Fi users.
(TNS) — It's one of the few things that just about everyone seems to agree government should be doing.
But there's less consensus when it comes to figuring out how to pay the bill for making sure a call to 911 results in emergency responders rushing to help.
Pennsylvania's decades-old system for funding emergency call centers — a fee on monthly phone bills — hasn't been generating enough money to keep up with operating costs. And that's left local tax dollars plugging the gap.
This year, Berks County expects to put $2.53 million in county taxes and $2.97 million in fees it collects from municipal governments toward 911 center operations.
"This has become an enormous issue," said Christian Y. Leinbach, Berks County commissioners chairman.
You’d think master data management would be an easy sell in a world where everyone wants an accurate “360-degree” view of the customer. And certainly, that’s a leading driver of adoption.
Yet it’s not always enough to make a winning business case, according to a recent Computing survey of IT decision makers.
The UK tech site interviewed 150 IT decision makers about MDM. The survey found that 38 percent were either currently scoping a project or implementing a project, while another 29 percent had already implemented MDM successfully.
The most-cited factors driving MDM were improving customer experience (60 percent) and improving the quality of strategic decision making. Despite these key business drivers for MDM, IT leaders still struggled to make the MDM business case. When asked about the primary challenges in obtaining funding for customer data management projects, the respondents said:
The sector continues to advance in its adoption of security services. As reported on MSPmentor, this is a rapidly expanding market, with continued opportunity for solution providers. With a fast-growing segment of the market being mid-sized businesses, this seems like a ripe opportunity to deliver services.
According to Gartner (and as quoted in the most recent CompTIA security report), the global security market was expected to reach $71.1 billion by the end of 2014. So this is big business. Interestingly, based on analysis of data on successful attacks, many stats indicate that security should at least be a solvable problem for mid-sized businesses:
S&R pros, is there a Chief Data Officer (CDO) in your organization? Do you work with them? Previously, John and I wrote about the CDO role and how we believe that CDOs will help to drive security policy in the future because they can 1) directly tie business value to data assets, 2) have a deep understanding of data identity and purpose, and 3) possess a great incentive to protect the company’s data (it’s a strategic business asset after all!). Colleagues like Gene have also written about the CDO and the importance of the CDO in data management.
So things didn’t go as well as you planned: either your project implementation didn’t go the way you wanted (without any hiccups), or your organization didn’t respond the way you’d expected when the proverbial hit the fan. Well, get used to it. That’s the way things go. You always plan for the worst and hope for the best, but having both a project management background and a BCM/DR background, I can tell you that things don’t always go as planned no matter how hard you try. However, if something does go wrong, it’s a good idea to learn from it.
Whether the activity is a project implementation or a response to a disaster or crisis, there is usually one follow-up session that’s always held: the Lessons Learned or Post Incident Review.
During these sessions, which I’m sure you’re familiar with, the focus tends to be on what went wrong, with people trying to find the faults and, most importantly, the person or area to blame and shame for the error. Well, to some degree that’s OK: you want to find the cause of the problem, but the goal shouldn’t be to lay blame or to dwell only on the negative. Too often, these Lessons Learned meetings become sessions where people vent their frustration at how inconvenienced they were by the situation. Again, focusing on the negative. But that’s not all you should be addressing.
We often hear references to a holistic view of risk. “Holistic” is a term used in risk management to emphasize the importance of understanding the interrelationships among individual risks (or groups of related risks) and the coordinated approach that an organization’s operating units and functions undertake to manage risk. A holistic approach to risk management is, by definition, one that is not fragmented into functions and departments, but rather is organized with the intention of optimizing risk management performance.
A silo approach to managing risk is dangerous in today’s rapidly changing environment. Organizations can face change with greater confidence with an enterprise-wide perspective. That is why an enterprise risk management (ERM) approach is intended to be holistic in its perspective toward risk and how it is managed. While the goal of thinking holistically is laudable, the question arises as to what it means from a practical standpoint.
Geary W. Sikich introduces ‘risk absorption capacity’, ‘risk saturation point’, ‘risk deflection’ and ‘risk explosion’ and explains their usefulness to risk managers.
What is risk? Think about it before you leap to answer. Do we really know and understand risk? Some facts to consider:
- Risk is not static, it is fluid.
- Risk probes for weaknesses to exploit.
- Risk, therefore, can only be temporarily mitigated and never really eliminated.
- Over time risk mitigation degrades and loses effectiveness as risk mutates, creating new risk realities.
Risk management requires that you constantly monitor recognized risks and continue to scan for new risks. This process cannot be accomplished with a ‘one and done’ mindset. Risk needs to be looked at in three dimensions and perhaps even four dimensions to begin to understand the ‘touchpoints’; the aggregation of risk; and its potential to cascade, conflate and/or come to a confluence.
With 81% of large UK businesses and 60% of small companies suffering a cyber security breach in the last year, a new report published by the UK Government and Marsh entitled UK Cyber Security: The Role of Insurance in Managing and Mitigating the Risk has highlighted the exposure of firms to cyber attacks among their suppliers.
Cyber threats are estimated to cost the UK economy billions of pounds each year with the cost of cyber attacks nearly doubling between 2013 and 2014. The report found that, while larger firms have taken some action to make themselves more cyber-secure, they face an escalating threat as they become more reliant on online distribution channels and as attackers grow more sophisticated. The report issues a call to arms for insurers and insurance brokers to simplify and raise awareness of their cyber insurance offering and ensure that firms understand the extent of their coverage against cyber attack.
The cyber threat is also a very real one for business continuity professionals, with the Business Continuity Institute’s latest Horizon Scan report highlighting that cyber attacks are now perceived to be the number one threat to organizations. 82% of respondents to a survey expressed either concern or extreme concern at the prospect of this threat materialising.
The report recommends that organizations stop viewing cyber largely as an IT issue and focus on it as a key commercial risk affecting all parts of their operations, and that they examine the different forms of cyber attacks they face, to stress-test themselves against them and to put in place business-wide recovery plans.
The report also notes a significant gap in awareness around the use of insurance with around half of firms interviewed being unaware that insurance was available for cyber risk. Other surveys suggest that despite the growing concern among UK companies about the threat of cyber attacks, less than 10% of UK companies have cyber insurance protection even though 52% of CEOs believe that their companies have some form of coverage in place.
Francis Maude, Minister for the Cabinet Office and Paymaster General, said: “Insurance is not a substitute for good cyber security but is an important addition to a company’s overall risk management. Insurers can help guide and incentivise significant improvements in cyber security practice across industry by asking the right questions of their customers on how they handle cyber threats”.
Mark Weil, CEO of Marsh UK and Ireland, added: “While critical infrastructure in regulated sectors, such as banks and utility firms, are used to this kind of risk, most firms are not and their risk management practices are geared around lower-level, slower moving risks. Companies will need to upgrade their risk management substantially to cope with the growing threat of cyber attack, including introducing disciplines such as stress-testing, and creating a joined-up recovery plan that brings together financial, operational, and reputational responses.”
The results of the research show that few businesses have comprehensive workforce strategies, with the majority taking a piecemeal approach to planning human capital. Only 15% of organizations polled said there is a clear link between their workforce planning and their overall strategic business plan, showing that where workforce plans exist, they often do so in isolation.
Research conducted by the Business Continuity Institute has shown that workforce planning is also a concern for business continuity professionals, with the results of a recent survey conducted for the annual Horizon Scan report revealing that a third of respondents consider availability of talent/key skills to be a concern for organizations, while nearly two-thirds consider loss of a key employee an issue that organizations need to be aware of.
Organizations tend to react to workforce challenges, rather than plan for them. An alarming 47% of those surveyed by CRF said that recruitment forecasts for the next 12 months have not been undertaken in their organisations. This reluctance to identify workforce risks leads to poor succession planning, insufficient anticipation of recruitment needs and a lack of understanding of future skill requirements.
David Knight, Associate Partner at KPMG comments: “One of the biggest issues that business will face in the coming years is the management of human capital. Poor planning can make it difficult to adapt to changing market conditions, as well as retain talent in competitive industries. The ability to forecast skills requirements, pre-empt workforce risks and deploy resources efficiently will underpin financial success for organisations in future.”
Mike Haffenden, of the Corporate Research Forum, comments: “In today’s world of ever-increasing complexity, it is even more important to prepare for an uncertain future armed with a flexible plan, rather than simply reacting to unforeseen events. Adopting a strategic approach to workforce planning will leave organisations better prepared to deal with a dynamic and fast-changing environment.”
After a harsh, cold winter, the clear, sunny skies and rising temperatures of spring are much appreciated. Businesses, however, also need to be ready for the possibility of flooding that may result from heavy rains combined with melting ice and snow.
The National Oceanic and Atmospheric Administration (NOAA) notes that flooding causes more damage in the United States than any other weather-related event. On average, flooding causes $8 billion in damages and 89 fatalities annually. Warming weather also often brings ice jams along rivers, streams and creeks, which can cause further flooding.
“In addition to the threat of floods that occur when severe weather hits, snow and ice have been piling up in many areas of the U.S. this winter,” Bill Boyd, senior vice president with CNA Risk Control, said in a statement. “When temperatures rapidly increase, so does the rate at which snow and ice melt…” which can create serious problems for those heavily affected this winter. “As spring temperatures begin to rise, it’s imperative for businesses to create emergency plans for flooding, which could cause costly property damage or disrupt operations,” he said.
Could the age of the virtual desktop finally have arrived? Rising demand for virtual desktops could create new opportunities for managed service providers (MSPs) over the next few years, according to a new survey from managed service provider Evolve IP.
Nearly 37 percent of organizations said they have implemented or tested some level of virtual desktops, while almost 33 percent noted that they plan on doing so in the next three years, according to the study, 2015 Evolve IP State Of The Desktop.
The survey also showed that nearly 98 percent of virtual desktop users are "very pleased" with the technology.
At the DRJ Spring World Conference in Orlando, FL on Tuesday 24th March, the Business Continuity Institute recognized the outstanding contribution made by a select group of individuals and organizations from across the continent as they presented their annual BCI North America Awards.
The BCI Awards consist of eight categories – seven of which are decided by a panel of judges with the winner of the final category (Industry Personality of the Year) being voted upon by BCI members from all over the United States and Canada. The number of nominations for each category was high, as was the standard of the nominations, leaving the judges with a difficult job to do in choosing the winners. But choose they must, and those who went home celebrating were:
Continuity and Resilience Consultant of the Year 2015
Roberta Atabaigi MBCI of KPMG
Continuity and Resilience Professional (Private) of the Year 2015
Cheryl Hirst of Erie Insurance Group
Continuity and Resilience Newcomer of the Year 2015
Garrett Hatfield of MetLife
Continuity and Resilience Team of the Year 2015
ETS Enterprise Resiliency Department Educational Testing Service
Continuity and Resilience Provider of the Year 2015
Continuity and Resilience Innovation of the Year 2015
Send Word Now
Most Effective Recovery of the Year 2015
Industry Personality of the Year 2015
Brian Zawada FBCI, Chairman of the US Chapter of the BCI, said: “Congratulations to all the winners who have shown themselves to be an asset to the profession. The high caliber of entries to these awards demonstrates the capability that exists within the business continuity and resilience industry, meaning that many C-Suite executives need not worry about whether their organization can manage a crisis, they can worry about other things instead.”
The BCI North America Awards are one of seven regional awards ceremonies held by the BCI, which culminate in the annual Global Awards held in November during the Institute’s annual conference in London, England. All winners in the BCI North America Awards are automatically entered into the Global Awards.
A new report by the Business Continuity Institute, supported by certification body NQA, has shown that 6 out of 10 organizations adopt ISO 22301, the international standard for business continuity management. Organizations with strong top management commitment to standardising business continuity practice are four times more likely to adopt ISO 22301 than those who do not.
There are many reasons why an organization would want to embrace ISO 22301; most notably it provides assurance of continued service, with 61% of respondents identifying this as a significant reason. By certifying to the Standard, organizations can provide reassurance to their stakeholders that, in the event of a crisis, they will still be able to function. Other reasons include:
- Reputation and brand management (48%)
- Reduced risk of business interruption (48%)
- Greater resilience against disruption (45%)
- Quicker recovery from interruption (44%)
There are of course barriers that prevent such commitment and those identified were resource constraints (25%), complexity of implementation (19%) and top management buy-in (18%). It is perhaps encouraging that these barriers each had relatively low percentages suggesting that the barriers aren’t that widespread.
If reassurance is one of the primary reasons to commit to the Standard then one can only wonder why many organizations don’t expect the same of their suppliers, as supply chains can only be as strong as their weakest link. It could be considered alarming that 82% of respondents stated that their organization does not seek certification to the Standard from their suppliers.
Deborah Higgins MBCI, Head of Learning and Development at the Business Continuity Institute, commented: “It is encouraging that uptake is beginning to increase as organizations recognise the value of investing in an effective business continuity programme; however, there is still a lot of work to be done, most notably when it comes to persuading other organizations within the supply chain to also adopt ISO 22301.”
Kevan Parker, Head of NQA, stated “ISO 22301 provides an excellent framework for building organizational resilience and the benefits of adoption are becoming increasingly recognised. This is very positive but, as highlighted, a supply chain is only as strong as the weakest link; it is a responsibility of those with ISO 22301 certification to lead their peers towards adoption and elevate organizational resilience to total supply chain resilience.”
Fifteen or twenty years ago, when you thought about record retention and electronic communications, “electronic mail,” or email, was the only thing to worry about. Back then, firms and the regulators scrambled to interpret how to apply existing rules pertaining to communications to the new modality of email. Nowadays, email is just one piece of a more complex communications landscape. Companies are deploying new forms of communication and the pace is only accelerating. Your firm might be using Unified Communications platforms like Microsoft Lync and IBM Sametime; collaboration tools like Chatter, IBM Connections, or Jive; or IM networks such as corporate Lync IM, or perhaps public-facing ones such as Yahoo! Messenger. Your firm may even be using community networks geared towards specific industries, such as Reuters and Bloomberg, widely used in the financial services sector, or ICE within the energy markets. And, of course, your regulated users, such as financial advisors, may also be clamoring to use social networking sites such as Facebook, LinkedIn, Twitter, YouTube, Google+, Pinterest, and Instagram to prospect and conduct business.
How often have you heard the expression ‘no pain, no gain’? These four words sum up the idea that if you are to receive benefits, then you must suffer (or at least make an effort). Alternatively, you could take it to mean that if you don’t make an effort, you can’t expect benefits. An example in the domain of disaster recovery might be ‘if you skip regular data backups (no effort), you’ll fail when your hard disk crashes (no benefit)’. The problem comes when people use chop logic to infer from ‘no pain, no gain’ that ‘if pain, then gain’ is true as well.
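The fallacy here is the classic error of affirming the converse: “no pain, no gain” is logically “if not pain, then not gain,” which is equivalent only to its contrapositive, “if gain, then pain.” A short truth-table check makes that concrete (a minimal sketch in Python; the predicate names are just illustrative):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

for pain, gain in product([False, True], repeat=2):
    no_pain_no_gain = implies(not pain, not gain)  # "no pain, no gain"
    contrapositive = implies(gain, pain)           # "if gain, then pain"
    if_pain_then_gain = implies(pain, gain)        # the invalid inference
    # The original claim and its contrapositive always agree...
    assert no_pain_no_gain == contrapositive
    # ...but the converse-style reading fails when there is pain and no gain:
    if pain and not gain:
        assert no_pain_no_gain and not if_pain_then_gain
```

In disaster recovery terms: making an effort (running backups, testing plans) does not by itself guarantee a benefit; it only removes one guaranteed way to fail.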
Unstructured data received a boost from Big Data technologies such as Hadoop. Finally, organizations had an inroad to the estimated 70 to 80 percent of data that was largely unusable.
But Big Data isn’t the last word when it comes to leveraging unstructured data. A recent Baseline Magazine piece outlines the options for obtaining new business insights by combining structured data with unstructured data.
Blueocean Market Intelligence’s Senior VP of Analytics, Durjoy Patranabish, and Shreya Sharma, analytics consultant, collaborated to write the article. The consultancy focuses on solutions in marketing, life sciences, digital and, of course, Big Data. The resources section of Blueocean’s site is worth exploring in its own right since it includes quite a few papers, studies and webinars.
Even though the U.S. government has broadened its pursuit of corruption cases, only about 9% of organizations see Foreign Corrupt Practices Act monitoring as a top concern, according to “Bribery and Corruption: The Essential Guide to Managing the Risks” by ACL.
Many companies have policies against corruption, but it still exists. Although remaining competitive can be difficult in some parts of the world that see payments, gifts and consulting fees as part of doing business, companies need to identify these risks and manage them across the organization. There is much at stake, as penalties are rising and more companies globally are being fined, the study found.
According to ACL, if a formalized ERM process exists within an organization, then the anti-bribery and anti-corruption (ABAC) risk assessment process should ideally be carried out within that ERM framework. In some organizations, however, the overall risk management process is fragmented, meaning that the risks of bribery and corruption are considered in relative isolation. Whichever approach is taken within an organization, the process of defining the risks should involve individuals with sufficient knowledge of the regulations and ways the business actually works.
(TNS) — The man wasn’t any sicker at first than many of the other patients who arrive at University of Kansas Hospital, infectious disease specialist Dana Hawkinson recalls.
But he went downhill fast. Fever spiking, kidneys failing, breath so short he needed supplemental oxygen.
He had been bitten by ticks while working outdoors, so he probably had one of the many diseases commonly spread by bug bites in the Midwest, Hawkinson figured. But the tests the doctor ran — for ehrlichiosis, Rocky Mountain spotted fever, Lyme disease, West Nile virus — all turned up negative.
The virus escaped control as countries and global agencies failed to acknowledge and contend with the magnitude of its spread. Treatment centers were overwhelmed. Sick people died on city streets, and new cases multiplied inside health care facilities, killing a significant proportion of the already inadequate health work force of the three most affected countries — Liberia, Sierra Leone and Guinea.
However, after two American aid workers and a traveler to Nigeria fell ill last summer, setting off a panic, a huge global initiative to combat Ebola swung into place. The effort has been messy, inefficient and expensive, often lagging the epidemic’s twists in tragic ways.
But the effort has also established expertise that may be built upon to prevent similar tragedies in the future — and shown personal and institutional bravery.
“Every company also needs to be a data company,” Leo Mirani, a reporter for the London-based Quartz, warned last fall.
I love that line, and once agreed. But in the past few months, I’ve had cause to rethink that premise and have decided that it’s not true for two reasons.
First, it ignores the ugly truth that not every company can be a data company. Everyone loves a success story, especially start-ups and vendors, so you don’t often hear about the failures. Companies that waste time and money trying to squeeze value from Big Data or other data projects don’t hire PR firms to put out press releases. But these stories exist, lurking in the subtext of data company success stories.
This GreenTechMedia story on utility data analytics is a good example. It’s a success story about start-up utility data analytics companies, but lurking among the unfathomably large market numbers and tech descriptions, that second story emerges:
Despite some early difficulties configuring and deploying private clouds, the enterprise is still gung ho for them as a way to have a little piece of the cloud close to home for the most critical data.
But the knock on private clouds is undeniable: Unless you are willing to set up a vast array of modular infrastructure, private resources simply do not scale as well as public ones. And if a cloud can’t scale, is it really of much use?
To the first point, a private cloud may not offer “unlimited scalability” the way AWS does, but there are still plenty of ways that scalability can be architected into local resources to provide a decently large data environment. Infoblox is currently working on private cloud scalability from the networking side, offering the new Cloud Network Automation stack for its NIOS 7.0 operating system. The idea is to provide a single management console for VMware, Microsoft, OpenStack and other platforms as they make the transition from pilot programs to full, multiplatform production environments. The system relies on an advanced GUI and a scalable virtual appliance architecture that handles the management of IP addresses and DNS/DHCP services, all backed by specialized adapters that enable consistent operation across multi-vendor platforms.
When it comes to damaging cyberattacks, a horror movie cliche may offer a valuable warning: the call is coming from inside the building.
According to PwC’s 2014 U.S. State of Cybercrime Survey, almost a third of respondents said insider crimes are more costly or damaging than those committed by external adversaries, yet overall, only 49% have implemented a plan to deal with internal threats. Development of a formal insider risk-management strategy seems overdue, as 28% of survey respondents detected insider incidents in the past year.
In the recent report “Managing Insider Threats,” PwC found the most common motives and impacts of insider cybercrimes are:
ScaleArc has released the results of a new survey into 'The State of Application Uptime in Database Environments'. The 451 Research survey solicited responses from more than 200 enterprises of varying size, across a wide range of vertical markets, to learn more about the impact that an organization's underlying database infrastructure has on application availability.
Specifically, respondents were asked about their database infrastructure and its effect on both planned and unplanned downtime. The survey reveals key insights into the IT decision-making process, including the risks organizations are willing to take when choosing between application availability and security.
Commenting on the survey, Matt Aslett, research director at 451 Research said: "As enterprises struggle to improve application availability, understanding how the database affects application uptime is critical. The survey results indicate that enterprises cannot afford to maintain the status quo when it comes to database availability. Having your most critical applications be offline for 20 minutes to three hours, more than once a month, should not be acceptable to any enterprise today."
Key insights from the survey include:
- Database failover takes down the applications: for the majority of organizations, users see application errors for the duration of an unplanned outage. Failover is manual in most cases, and applications have to be restarted 62 percent of the time.
- Database outages are too frequent and too long: too frequently, the database is the source of unplanned downtime. A surprising 65 percent of all enterprises surveyed experience between 20 minutes and 3 hours of downtime, on average, for their most critical applications.
- Database maintenance crushes resources: more than 70 percent of respondents reported that they performed maintenance updates on a weekly or monthly basis. Those surveyed also indicated that key development resources are pulled in to assist with maintenance tasks 50 percent of the time.
- Deferred ‘security patching’ is rampant, placing enterprises at risk: more than 60 percent of respondents postponed critical security patches because of concerns over application downtime.
For the full survey report, please click here (registration required).
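The manual-failover pain the survey describes (users seeing errors for the duration of an outage, applications restarted 62 percent of the time) is commonly mitigated with client-side failover: trying each replica in turn with backoff instead of failing on the first error. A minimal sketch, where the `connect` function and host names are hypothetical placeholders rather than anything from the survey:

```python
import time

# Hypothetical replica list; in practice this comes from configuration or DNS.
REPLICAS = ["db-primary.example.com", "db-replica1.example.com"]

def connect(host):
    """Placeholder for a real driver call, e.g. a database connect().
    Here it simulates an outage by always failing."""
    raise ConnectionError(f"{host} unreachable")

def connect_with_failover(hosts, retries=3, backoff=0.5):
    """Try each host in turn; back off exponentially between full passes."""
    for attempt in range(retries):
        for host in hosts:
            try:
                return connect(host)
            except ConnectionError:
                continue  # move on to the next replica
        time.sleep(backoff * (2 ** attempt))  # wait before retrying all hosts
    raise ConnectionError("all replicas unreachable")
```

The point of the sketch is that failover becomes a property of the client rather than a manual operations task, which is one way to keep applications from erroring for the full duration of an unplanned outage.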
If an organization’s backup system was designed before data volumes began to grow exponentially – or before IT infrastructures became highly virtualized – the company may find itself in a tight spot. Modernization is the key, and Logicalis US has identified six benefits CIOs can realize by updating their organization’s data storage and backup infrastructure.
"Working with an outdated backup system can create significant challenges in IT service levels,” says Bill Mansfield, solution architect, Logicalis US. “One sign it’s time to modernize your storage and backup/recovery infrastructure is when it’s too difficult to manage - you have to add staff to manage different backup products for physical and virtual servers, or you have to constantly fight fires to keep backups working. Another sign is when it’s just not working anymore. You can’t meet backup windows or recovery objectives because your backup techniques or storage are outdated, or your virtual environment’s performance degrades during routine backup operations. These are warning signs that you are working too hard to maintain an infrastructure that isn’t up to par, and that you could experience a significant loss if a disaster were to occur.”
DDoS attacks are now one of the most common and affordable cyberweapons. They are used by unscrupulous competitors, sinister extortionists or just everyday cyber-vandals. More and more companies, regardless of their size or business, are encountering this threat. And, according to the results of a survey conducted by Kaspersky Lab and B2B International, the majority of companies believe that revenue and reputation losses are the most damaging consequences of a DDoS attack.
According to the figures, companies regard lost business opportunities – the loss of contracts or on-going operations that generate guaranteed income – as the most frightening consequence of a DDoS attack. 26 percent of companies that encountered DDoS attacks regarded this as the biggest risk.
Reputational risks (23 percent) were viewed as the next most frightening consequence, likely because a negative customer or partner experience can drive away future contracts or sales. Losing current customers who could not access the anticipated service due to a DDoS attack was in third place, named by 19 percent of respondents. Technical issues were at the bottom of the pile: 17 percent of respondents identified the need to deploy back-up systems that would keep operations online as the most undesirable consequence, followed by the costs of fighting the attack and restoring services.
The research also revealed that respondents from companies in different fields take different views of the consequences of DDoS attacks. For example, industrial and telecoms companies, as well as e-commerce and utilities and energy organizations, tend to rate reputational risks ahead of lost business opportunities. In the construction and engineering sector there is more concern about the cost of setting up back-up systems, perhaps because larger companies face higher expenditure on this kind of system.
DDoS attacks on company resources are becoming a costly problem but only 37 percent of the organizations surveyed said they currently have measures in place to protect against them.
“People who have not yet faced a particular threat often tend to underestimate it while those who have already experienced it understand which consequences might be the most damaging for them. However, it makes little sense to wait until the worst happens before acting – this can cost companies a lot, and not only in financial terms. That is why it is important to evaluate all possible risks in advance and take appropriate measures to protect against DDoS attacks”, said Evgeny Vigovsky, Head of Kaspersky DDoS Protection, Kaspersky Lab.
In 2010, Google’s then-CEO Eric Schmidt gave a presentation at the annual Techonomy conference. He told attendees about Android’s phenomenal growth rate, but the real bombshell he shared was an interesting fact about data management.
From the beginning of human history--cave paintings until 2003--human beings created 2 exabytes of data. Total. That’s all the symphonies, all the movies, all the books--everything. Now we are replicating that every two days. That’s “Big Data.”
Even more staggering, about 80% of all the data we’ve ever created was generated in the past two years, and 90% of that is file, or unstructured, data. With data volumes expected to double every two years over the next decade, many IT leaders are feeling the pain of an infrastructure that isn’t scaling for capacity and performance.
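As a rough illustration of that growth rate, the "doubling every two years" claim can be turned into a simple projection. This is a hedged sketch in Python; the baseline volume is an assumed figure chosen for illustration, not one taken from the article.

```python
# Sketch: projecting data volume under the article's assumption that
# volumes double every two years. The 8 ZB baseline is illustrative.
def projected_volume(start_zb: float, years: float, doubling_period: float = 2.0) -> float:
    """Volume after `years`, starting at `start_zb` zettabytes."""
    return start_zb * 2 ** (years / doubling_period)

if __name__ == "__main__":
    baseline = 8.0  # assumed starting point, in zettabytes
    for yr in (2, 4, 10):
        print(f"after {yr:>2} years: {projected_volume(baseline, yr):.0f} ZB")
```

Whatever the starting figure, the exponential shape is the point: a decade at this rate multiplies the volume 32-fold, which is why capacity planning that assumes linear growth falls behind.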
No drought relief in sight for California, Nevada or Oregon this spring
According to NOAA’s Spring Outlook released today, rivers in western New York and eastern New England have the greatest risk of spring flooding in part because of heavy snowpack coupled with possible spring rain. Meanwhile, widespread drought conditions are expected to persist in California, Nevada, and Oregon this spring as the dry season begins.
“Periods of record warmth in the West and not enough precipitation during the rainy season cut short drought-relief in California this winter and prospects for above average temperatures this spring may make the situation worse,” said Jon Gottschalck, chief, Operational Prediction Branch, NOAA’s Climate Prediction Center.
NOAA’s Spring Outlook identifies areas at risk of spring flooding and expectations for temperature, precipitation and drought from April through June. The Spring Outlook provides emergency managers, water managers, state and local officials, and the public with valuable information so they will be prepared to take action to protect life and property.
Spring Outlook 2015. (Credit: NOAA)
Record snowfall and unusually cold temperatures from February through early March retained a significant snowpack across eastern New England and western New York, raising flood concerns. Significant river ice across northern New York and northern New England increases the risk of flooding related to ice jams and ice jam breakups. Rivers in these areas are expected to exceed moderate flood levels this spring if a quick warm-up coincides with heavy rainfall.
There is a 50 percent chance of exceeding moderate flood levels in small streams and rivers in the lower Missouri River basin in Missouri and eastern Kansas which typically experience minor to moderate flooding during the spring. This flood potential will be driven by rain and thunderstorms.
Moderate flooding has occurred in portions of the Ohio River basin, including the Tennessee and Cumberland rivers, from melting snow and recent heavy rains. This has primed soils and streams for flooding to persist in Kentucky, southern Illinois, and southwest Indiana with the heavy spring rains typical of this area.
Minor river flooding is possible from the Gulf Coast through the Ohio River Valley and into the Southeast from Texas eastward and up the coast to Virginia. The upper Midwest eastward to Michigan has a low risk of flooding thanks to below normal snowfall this winter. However, heavy rainfall at any time can lead to flooding, even in areas where the overall risk is considered low.
El Niño finally arrived in February, but forecasters say it’s too weak and too late in the rainy season to provide much relief for California which will soon reach its fourth year in drought.
Drought is expected to persist in California, Nevada, and Oregon through June with the onset of the dry season in April, and is forecast to develop in the remaining areas of Oregon and in western Washington. Drought is also likely to continue in parts of the southern Plains.
Forecasters say drought improvement or removal is favored for some areas in the Southwest, southern Rockies, southern Plains, and Gulf Coast while drought development is more likely in parts of the northern Plains, upper Mississippi Valley and western Great Lakes where recent dryness and an outlook of favored below average precipitation exist.
Current water supply forecasts and outlooks in the western U.S. range from near normal in the Pacific Northwest, northern Rockies, and Upper Colorado to much below normal in California, the southern Rockies, and portions of the Great Basin.
If the drought persists as predicted in the Far West, it will likely result in an active wildfire season, continued stress on crops due to low reservoir levels, and an expansion of water conservation measures. More information about drought can be found at www.drought.gov.
Above-average temperatures are favored this spring across the Far West, northern Rockies, and northern Plains eastward to include parts of the western Great Lakes, and for all of Alaska. Below normal temperatures are most likely this spring for Texas and nearby areas of New Mexico, Colorado, Kansas, and Oklahoma.
For precipitation, odds favor drier than average conditions for parts of the northern Plains, upper Mississippi Valley, western Great Lakes, and Pacific Northwest. Above average precipitation is most likely for parts of the Southwest, southern and central Rockies, Texas, Southeast, and east central Alaska. Hawaii is favored to be warmer than average with eastern areas most likely wetter than average this spring.
Now is the time to become weather-ready during NOAA’s Spring Weather Safety Campaign which runs from March to June and offers information on hazardous spring weather -- tornadoes, floods, thunderstorm winds, hail, lightning, heat, wildfires, and rip currents -- and tips on how to stay safe.
NOAA’s mission is to understand and predict changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and to conserve and manage our coastal and marine resources. Join us on Facebook, Twitter, Instagram and our other social media channels.
Zetta.net's "The State of Backup Survey" of 425 IT professionals revealed nearly 97 percent of respondents said they currently are using some form of disaster recovery (DR). Additionally, 31 percent said they plan to leverage a new DR method in the future, and more than half of these respondents intend to use cloud-based DR solutions. Here's everything you need to know about Zetta.net's new survey.
New research from Zetta.net showed that the demand for cloud-based backup and disaster recovery (BDR) solutions from managed service providers (MSPs) could increase soon.
Think you know it all when it comes to business continuity? That’s great. Think you can store all that knowledge? Think again. The way most information technology has developed, it’s great for storing information (bunches of related data), but not so hot for knowledge (insights and deeper relationships). There is no shortage of information to define business continuity, list its component parts, describe planning methodologies and offer case studies. You can access that information, transfer it and store it on your PC or mobile computing device. The problem is in storing your understanding of that material, and the model you develop to see it all as a connected whole.
Premera Blue Cross, a health insurer based in the Seattle suburbs, announced Tuesday it was the victim of a cyberattack that may have exposed the personal data of 11 million customers — including medical information.
The company said it discovered the attack on Jan. 29 but that hackers initially penetrated its security systems on May 5, 2014. The attack affected customers of Premera, which operates primarily in Washington, Premera's Alaskan branch, and its affiliated brands Vivacity and Connexion Insurance Solutions, according to a Web site created by the company for customers. "Members of other Blue Cross Blue Shield plans who have sought treatment in Washington or Alaska may be affected," according to the site.
The company said its investigation has not determined if data was removed from its systems. But the information attackers had access to may have included names, street addresses, e-mail addresses, telephone numbers, dates of birth, Social Security numbers, member identification numbers, medical claims information and bank account information, according to the company's Web site. The company said it does not store credit card information.
It seems like the breach cycle goes in full circles.
When data breaches began to make the news, the health care industry was hardest hit. Eventually, attacks against the health care industry, while they didn’t disappear, moved off the headlines to make room for breaches against the financial, retail and entertainment industries. But then came the Anthem breach, and now the announcement that Premera Blue Cross was hacked, possibly exposing millions of customers’ medical data. I wouldn’t be surprised to see a flurry of news on health care-related attacks in the coming months.
The reasons are simple. First, health care organizations hold so much data that is valuable on the black market. You are looking at names, birthdates, addresses, Social Security numbers, insurance numbers, medical records and more.
Have you ever experienced severe diarrhea or vomiting? If you have, it’s likely you had norovirus. If you haven’t, chances are you will sometime in your life. Norovirus is a very contagious virus that anyone can get from contaminated food or surfaces, or from an infected person. It is the most common cause of diarrhea and vomiting (also known as gastroenteritis) and is often referred to as food poisoning or stomach flu. In the United States, a person is likely to get norovirus about 5 times during their life.
Norovirus has always caused a considerable portion of gastroenteritis among all age groups. However, improved diagnostic testing and gains in the prevention of other gastroenteritis viruses, like rotavirus, are beginning to unmask the full impact of norovirus.
For most people, norovirus causes diarrhea and vomiting that last a few days, but the symptoms can be serious for some people, especially young children and older adults. Each year in the United States, norovirus causes 19 to 21 million illnesses and contributes to 56,000 to 71,000 hospitalizations and 570 to 800 deaths.
While there is hope for a norovirus vaccine in the future, there are steps you can take now to prevent norovirus.
Additionally, norovirus is increasingly being recognized as a major cause of diarrheal disease around the globe, accounting for nearly 20% of all diarrheal cases. In developing countries, it is associated with approximately 50,000 to 100,000 child deaths every year. Because it is so infectious, hand washing and improvements in sanitation and hygiene can only go so far in preventing people from getting infected and sick with norovirus.
This is why efforts to develop a vaccine are so important and why in February 2015 the Bill and Melinda Gates Foundation, CDC Foundation, and CDC brought together norovirus experts from around the world to discuss how to make the norovirus vaccine a reality. Participants were from 17 countries on 6 continents and included representatives from academia, industry, government, and private charitable foundations.
Important questions remain regarding how humans develop immunity to norovirus, how long immunity lasts, and whether immunity to one norovirus strain protects against infection from other strains. There are also relevant questions as to how a norovirus vaccine would be used to prevent the most disease and protect those at highest risk for severe illness. These are all critical questions for a vaccine, and this meeting was a step toward finding answers to these questions and making a norovirus vaccine a reality.
For more information on norovirus visit CDC’s webpage: http://www.cdc.gov/norovirus/.
We all know that we need to exercise our business continuity plans: it’s the only way to find out whether they will work. The exception, of course, is a live incident, but mid-disaster is never a good time to discover that your plan doesn’t work. But what types of exercises should you run, how often should you run them, how do you plan them and how do you assess them?
These are all important questions and are all vital to ensuring that you have an effective business continuity programme in place, one that will provide reassurance to top management that, in the event of a crisis, the organization will be able to deal with it.
This is why the Business Continuity Institute has published a new guide that will assist those who have responsibility for business continuity to manage their exercise programme. ‘The BCI guide to… exercising your business continuity plan’ explains what the main types of exercises are and in what situation it would be appropriate to use them. It explains how to plan an exercise and what needs to be considered when doing so, from the setting of objectives to conducting a debrief and establishing whether those objectives have been met.
Following feedback from those working in the industry, testing and exercising was chosen as the theme for Business Continuity Awareness Week and the BCI is keen to highlight just how important it is to effective business continuity. A recent study showed that nearly half of respondents to a survey had not tested their plans over the previous year and half of those had no plans to do so over the next twelve months. This guide is intended to make it easier for people to develop an exercise programme and demonstrate that it does not have to be an onerous task to do so.
SEATTLE — The Ebola epidemic in West Africa has killed more than 10,000 people. If anything good can come from this continuing tragedy, it is that Ebola can awaken the world to a sobering fact: We are simply not prepared to deal with a global epidemic.
Of all the things that could kill more than 10 million people around the world in the coming years, by far the most likely is an epidemic. But it almost certainly won’t be Ebola. As awful as it is, Ebola spreads only through physical contact, and by the time patients can infect other people, they are already showing symptoms of the disease, which makes them relatively easy to identify.
By Gabriel Gambill
You would be pretty worried if you didn’t have fire safety and evacuation plans in your office, so why would you not put the same contingency strategy in place for your data?
Too many businesses don't have a disaster recovery plan, so my advice is to sit down and consider it pronto. Disaster recovery as a service (DRaaS) and cloud-based DR strategies are now making data recovery plans far less complicated and highly efficient for businesses. But despite being able to re-think their DR plans in the cloud and make them so much easier, companies are still lax about testing the plan on a regular basis.
To put it into context, perhaps it’s best to start by defining what a disaster could be. When we say ‘disaster’ we often mean something that is out of our hands: floods, hurricanes, power cuts and earthquakes all spring to mind. However, a disaster could be something as mundane as a software update or a simple human error. These are often not as newsworthy as a natural disaster, but have just as much impact on an organization’s ability to operate.
HOUSTON—The recent spike in oil and natural gas production has led trucking companies to grow so quickly that they sometimes scramble to find qualified drivers. This has meant tightening coverage with a limited number of carriers and a market in “disarray,” Anthony Dorn, a broker with Sloan Mason Insurance Services said today at the IRMI Energy Risk and Insurance Conference.
“Carriers have taken a bath on construction risks,” he said. “Only nine carriers will write crude hauling.”
He added that there is a “huge need for risk management in trucking right now. A lot of these are fly-by-night companies. They are running with drivers that have no experience, they are getting violations from the DOT left and right for not having licenses and adequate brakes on their trucks and they are running on dirt roads that aren’t made for 100,000 pound units,” Dorn said. “It’s a very risky place for underwriters. If we don’t do something as agents and as risk managers there will be fewer carriers.”
How things change. For years, even decades, people have been getting rid of tape. They bought into the idea that disk was the way to go and that tape was “old hat.”
But the realities of a Big Data world and the advances in tape technology, density, reliability and usability have brought the realization to many that they shouldn’t have been so hasty. And that’s showing up in the raw numbers. According to the Active Archive Alliance, nearly 250 million Linear Tape Open (LTO) tape cartridges have been shipped since the format’s inception. That’s more than 100,000 PB of data on LTO.
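Those two figures imply a simple back-of-envelope average per cartridge. The sketch below only checks the arithmetic using the numbers cited above; any per-generation breakdown of the shipment mix would be an assumption.

```python
# Sketch: implied average capacity per LTO cartridge from the
# Active Archive Alliance figures cited in the article.
CARTRIDGES_SHIPPED = 250e6   # ~250 million cartridges since LTO's inception
TOTAL_PB = 100_000           # >100,000 PB of data shipped on those cartridges

# Convert petabytes to terabytes, then divide across all cartridges.
avg_tb = TOTAL_PB * 1000 / CARTRIDGES_SHIPPED
print(f"implied average: {avg_tb:.2f} TB per cartridge")
```

An average of roughly 0.4 TB per cartridge is plausible for a shipment history weighted toward earlier LTO generations, whose native capacities were well below those of current-generation media.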
Tape, then, is returning to some organizations that dumped it a while back. Its role is steadily being expanded in others who remained faithful, and it now serves as the backbone data repository for many of the major cloud data providers.
Keeping up with and fending off cybersecurity threats is a daily topic for all organizations, but for health care providers and systems, failure in that regard can result in much more dire results than a financial or reputational loss. It can result in bodily harm or death. It’s possible that you could draw a line to such severe consequences in other industries and lines of work, but for the health care industry, that added layer of urgency is always present in cybersecurity protections.
A large research project devoted to determining how best to protect patient health while maximizing use of digital tools and resources, named IMMUNE-SECURE, got a boost in attention from health care IT organizations and other technologists with the announcement today that Dr. Larry Ponemon, well-known in IT circles for his work through the Ponemon Institute, has joined the advisory board for the project.
The growing proliferation of mobile devices continues to make business faster, more agile, and more efficient. However, a recent study suggests U.S. workers remain concerned about the security of their mobile devices when it comes to cloud-based file sharing.
According to a recent study, 73 percent of the 1,000 U.S. employees surveyed said that they preferred to use email over file-sharing services, up 4 percent from the 69 percent in the previous year's survey. Those who made use of file-sharing services dropped to 47 percent, down from 52 percent in 2013.
Panda Security accidentally flagged itself as malware last week, causing some user files to be quarantined.
What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:
WASHINGTON—The U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA), in coordination with state and tribal emergency managers and state broadcasting associations, will conduct a test of the Emergency Alert System (EAS) on Wednesday, March 18, 2015 in Kentucky, Michigan, Ohio, and Tennessee. The test will begin at 2:30 p.m. Eastern Daylight Time (EDT) and will last approximately one minute.
“The goal of the test is to assess the operational readiness and effectiveness of the EAS to deliver a national emergency test message to radio, television and cable providers who broadcast lifesaving alerts and emergency information to the public,” said Damon Penn, Assistant Administrator of FEMA’s National Continuity Programs. “The only way to demonstrate the resilience of the system’s infrastructure is through comprehensive testing to ensure that members of tribes, and the residents of Kentucky, Michigan, Ohio, and Tennessee, receive alerts when an emergency occurs.”
The test will be seen and heard over radio and television in Kentucky, Michigan, Ohio, and Tennessee, similar to regular monthly testing of the EAS conducted by state officials and broadcasters. The test message will be nearly identical to the regular monthly tests of the EAS normally heard by the public. Only the word “national” will be added to the test message: “This is a national test of the Emergency Alert System. This is only a test...”
The test is designed to have limited impact on the public, with only minor disruptions of radio and television programs that normally occur when broadcasters regularly test EAS in their area. Broadcasters and cable operators’ participation in the test is completely voluntary. There is no Federal Communications Commission regulatory liability for stations that choose not to participate.
In 2007, FEMA began modernizing the nation’s public alert and warning system by integrating new technologies into existing alert systems. The new system is known to broadcasters and local alerting officials as the Integrated Public Alert and Warning System or IPAWS. IPAWS connects public safety officials, such as emergency managers, police and fire departments, to multiple communications channels to send alerts to warn when a disaster happens. For more information, please visit www.fema.gov/media-library/assets/documents/31814.
(TNS) — Many of those who lived through last August’s 6.0 magnitude South Napa Earthquake suffered mental health issues as a result, with about a quarter of those at risk for PTSD, according to a newly released survey, Napa County officials announced.
The California Department of Public Health recently released the final results of the door-to-door survey of Napa and American Canyon households conducted September 16-18. The Community Assessment for Public Health Emergency Response final report was based on the survey that asked questions about residents’ experiences during and after the temblor to assess the extent of injuries, chronic disease exacerbation and mental health issues associated with the earthquake, and the degree of disaster preparedness of these communities.
Mental health issues were extremely common among residents of both cities, with about 79 percent of Napa households and 73 percent of American Canyon households reporting a traumatic experience or mental health stressor during or since the earthquake.
This is a tale from the mists of time; from days of yore when it was difficult to get people interested in business continuity management and even more difficult to secure their involvement in exercises and tests (OK, in fairness, that could have been this week, but just indulge me for a moment).
Some of you may have heard me tell this story before, but recounting ancient tales didn’t do Hans Christian Anderson (or my Dad) any harm and, in any case, I’m a big fan of recycling.
Having been asked to contribute something on exercising and testing to this year’s Business Continuity Awareness Week Flashblog, and despite conforming in terms of using the snappy title demanded of all the contributors, I really couldn’t bring myself to write about strategy or methodology or process or the difference between a test, exercise, rehearsal, etc, etc, etc. So I’ll leave that to those whose boats are floated by that sort of thing and tell you my favourite exercising story instead.
Capital Weather Gang cites a weather.com report that not a single tornado has been reported to the National Weather Service in March, typically the first month of severe weather season in the Plains and Southeast.
The only other year since 1950 that there have been zero tornado reports in the first half of March was 1969, according to the Weather Channel’s severe weather expert Dr. Greg Forbes.
Per Dr. Forbes, from January 1 to March 12 only 27 tornadoes had been documented across the nation – the slowest start to the year since the 21 tornadoes recorded through March 12, 2003.
Training, testing and exercising are methods by which we are able to validate our plans. Validation is designed to confirm that plans will work and that the organisation will be able to remain resilient; plans are pointless without trained and exercised key and supporting personnel to execute them. It is essential for success that the processes in plans are tested and practiced to ensure that when pressure is applied, an incident has occurred and impacts are felt, the organisation can meet its BCM objectives and targets. So, our testing needs to be rigorous, but balanced, to ensure that it goes far enough – but not too far.
It’s really good practice to take the approach that the plans themselves should be tested and exercised incrementally to ensure that overload of subjects and excessive disruption to routine operations and procedures is avoided. When exercised, all plans will have failings exposed or areas for refinement identified. The resulting confidence and capability of the personnel tested should provide realisable benefits – particularly if a real incident is experienced. Documents such as the BCI’s GPG 2013 identify some of the activities that may need to be exercised, and the effective programme will ensure that it encompasses these and the associated aims as minima. As with the other processes and professional practices, the effective BCM practitioner will need to go beyond the initial lists and consider carefully what is required and to what level.
Do you like being taken out of your comfort zone? Having some of your professional weaknesses highlighted and reported on? Finding out that your organisation isn’t perhaps as well-prepared for a disruption as you’d hoped? No??...I didn’t think so. I suppose the idea of taking part in an exercise presents all of the above as a possibility. So why ever would you want to put yourself through it?
Because…if done right it can be a positive and valuable learning experience for the business and you!
By Harriet Wood
In the 2014 Supply Chain Resilience Report, published by the BCI, 76 percent of respondents reported at least one disruption within their supply chain.
For all of us, supply chain failure is a major issue. Within the brewing and pub industries the list and variety of suppliers seems endless. Butchers, bakers and beer bottle makers combine with engineering and IT businesses to create a mind-boggling range of possible disruptions.
For years we had worked hard to write, review and exercise our own plans, but around five years ago we realised the need to extend our exercise program out to key suppliers. We quickly established that ‘key suppliers’ could not be identified simply by asking Purchasing for the names of the highest value contracts. We approached the business and, led by the Director of Supply Chain, they came back to us with the names of three suppliers. They were essential to our business, could not easily be replaced, and I would never have guessed any of them were so critical!
BSI, the business standards company, has published a list of tips to help those new to the business continuity profession. The BSI's top ten tips for business continuity planning are:
1. Identify critical business functions: once critical business functions have been identified, it is possible to apply a methodical approach to the threats that are posed to them and implement the most effective plans.
2. Remember the seven 'P's needed to keep your business operational: providers, performance, processes, people, premises, profile (your brand) and preparation.
3. Understand and track past incidents with suppliers: obtain country-level intelligence so you understand what factors may cause a supply chain disruption e.g. working conditions, natural disasters, and political unrest.
4. Assess and understand vulnerabilities and weak points: conduct risk assessments to evaluate supplier capabilities to effectively adhere to your business continuity plans and requirements.
5. Agree and document your plans: these should never just be hidden away in the mind of the managing director. Assess your critical suppliers to make sure their business continuity plans fit with your objectives and are defined within your contract.
6. Make sure plans are communicated to key staff and suppliers: equally, share them with other key stakeholders to boost their confidence in your ability to maintain business as usual. This is particularly important for small businesses or those working with suppliers / buyers for the first time.
7. Try your plans out in mock scenarios: if possible include suppliers in your exercises and remember to test them not only in scenarios where there may be a physical risk, such as poor weather conditions making premises inaccessible, but people risks such as supply chain challenges and boardroom departures.
8. Expect the unexpected: while lean and efficient supply chains make good economic sense, unexpected events can have a significant impact on the operations and reputation of businesses.
9. Make sure your continuity plans are nimble and can evolve quickly: if your plans look the same as they did 10 years ago, then they probably won't meet current requirements. Organizations engaged in business continuity management will be actively learning from their internal audits, tests, management reviews and even from incidents themselves.
10. Make sure you're not just box-ticking: plans which get the tick against the 'to do' list but don't actually reflect the organization's strategy and objectives can lack credibility and are unlikely to succeed in the long-term. Instead, make sure your plans allow you to get back up and running in a way that aligns with your organization's objectives.
Over the past year, Phoenix has found that customers using disaster recovery as a service (DRaaS) – such as cloud backup and recovery, virtual disaster recovery or data replication services – all rehearsed their plans last year. This highlights that customers find it easier to test with DRaaS in place than with traditional business continuity services, where Phoenix has seen only 40 percent of its customers testing.
Phoenix has found that DRaaS makes it much easier for customers to test because the data is with the same provider, and the logistical issues usually found around testing, such as tape transportation and getting IT staff to the recovery centre, are removed. Furthermore, as it is disaster recovery as a service, the service provider can initiate the recovery, so customers are able to remotely access the recovered infrastructure and confirm that everything they needed to recover has been recovered. The ‘live’ service element of DRaaS ensures a regular flow of communication, which in turn increases awareness of testing.
Recent figures published by Phoenix show that just 45 percent of customers in total tested last year, with only 12 percent testing more than once. With environmental and hardware failures being the most common reasons why customers put Phoenix on standby to use its disaster recovery services, the company is urging organisations to test their plans at least once a year to protect themselves against unforeseen but commonplace disruptions.
During Business Continuity Awareness Week (16th - 20th March 2015) Phoenix is offering tours of its facilities: to register log-on to: http://www.phoenix.co.uk/bc-open-day-registration-form/
Tape data storage just keeps on going. It’s almost like the steampunk of IT, a branch off into a different universe where everybody reads by bigger candles instead of electric light bulbs. But it works. In fact, it works well enough for the largest IT vendors to continue pushing the envelope on tape storage density, and on storage and recovery speeds too. However, tape is not disk. You cannot ‘dip into’ tape in the same way you can randomly access a hard drive. And so, for backup and recovery in particular, the virtual tape library was invented to combine the advantages of tape and disk. Nevertheless, there are both pros and cons to consider.
The cloud wants enterprise data, and so far it has been fairly adept at gathering the low-hanging fruit: mostly bulk storage, archives, B&R, low-level database workloads and other non-critical stuff.
But the real money is in the advanced applications – the kind of data that organizations will pay a premium to support because it brings the highest value to emerging business models. This is a conundrum, however, because that high value also causes the enterprise to keep critical data close to the vest, which means cloud providers need to go the extra mile to win enterprise trust. And for the most part, that has not happened yet.
This is a shame because in terms of both security and uptime, the cloud is at least on par with the typical enterprise and in certain key metrics is actually superior. Cloud tracking site cloudharmony.com offers service status data for many of the top cloud providers going back at least a year, and its latest chart shows many services delivering four- or even five-nines availability. That puts outages at providers like Amazon EC2 and Google Cloud Service at mere minutes per year, while even three-nines performers confine their downtime to a few hours at most. A perfect record? Not by a longshot, but certainly no worse than the vast majority of enterprises out there.
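The arithmetic behind those "nines" is worth spelling out. As a quick sketch (the per-provider figures above come from cloudharmony.com; this just converts an availability percentage into downtime per year):

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_pct):
    """Minutes of downtime per year at a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("three nines", 99.9), ("four nines", 99.99), ("five nines", 99.999)]:
    print(f"{label} ({pct}%): {downtime_minutes(pct):.1f} minutes/year")
```

Three nines works out to roughly 8.8 hours of downtime a year, four nines to under an hour, and five nines to just over five minutes, which is why four- and five-nines providers can claim outages measured in mere minutes per year.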
(TNS) — In 2015, the hydrologists tasked with forecasting how high the Minnesota River will rise have supercomputers, advanced radar systems and satellites.
In 1965, they had slide rules, rain gauges and grave diggers.
Pedro Restrepo, the 65-year-old hydrologist in charge at the North Central River Forecast Center in Chanhassen, can relate to the tools available 50 years ago even as he uses the technology of today. When he first started working in hydrology in the 1970s, the instruments being used were much the same as in 1965.
"I still have my slide rule," Restrepo said, producing from his office the well-worn tool used by engineers and scientists to do calculations before the invention of the calculator.
Despite numerous emergencies making headlines last year and major events impacting communities in Oso, Wash.; Napa, Calif.; and Detroit, 2014 was considered a relatively quiet year in terms of federally declared disasters.
After years of hearing about how the number of disaster declarations has been rising, 2014 had the lowest number of declared disasters and fire assistance grants in at least 14 years. FEMA reported that 45 major disaster declarations were made by the president in 2014. And six emergency declarations, which are issued in advance of an event, were declared. The highest number of emergency declarations was in 2005 with 68 events.
In addition, the agency provided 33 fire management grants, a lower than average number. It was “a higher number compared to 2013 (28) but far fewer than the 118 provided in 2011, or the 86 provided in 2006,” according to a FEMA blog post.
Sungard Availability Services has released its 2014 UK invocation figures, which show the highest number of incidents since 2009.
Overall incidents of downtime, in which staff are unable to work from their usual office or access business critical systems, rose by over one third (38 percent) compared to 2013, leading to concerns that organizations are failing to sufficiently invest in availability and business continuity strategies and solutions.
While workplace related disruptions, in which the office environment is rendered inaccessible have remained fairly stable – with only a minor increase in 2014 – disruptions due to technology failures have more than doubled, increasing by 140 percent. Sungard AS’ 2014 invocation statistics show that hardware has been the main issue, causing a fifth of all problems (21 percent). The year-on-year spike in technology-related incidents, also including power and communications, is particularly worrying, suggesting that while many organizations are now entirely dependent on their IT systems, they are struggling to maintain them.
A new industry survey has found that, of those who responded, the largest group (37 percent) estimated that the cost-per-minute of downtime in their organization fell into the £10,000 - £20,000 bracket.
With 80 percent of those questioned giving their recovery time objectives as two hours or greater, the results mean that the potential losses to UK businesses are high.
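To see why the potential losses are high, multiply the survey's most common cost bracket by a two-hour recovery time objective. The figures below are illustrative arithmetic based on those survey brackets, not numbers reported by the survey itself:

```python
# Illustrative cost of a single outage: the survey's most common
# cost bracket (GBP 10,000-20,000 per minute) times a two-hour
# recovery time objective.
cost_per_minute_low, cost_per_minute_high = 10_000, 20_000
rto_minutes = 2 * 60  # two-hour recovery time objective

low = cost_per_minute_low * rto_minutes    # GBP 1,200,000
high = cost_per_minute_high * rto_minutes  # GBP 2,400,000
print(f"One outage lasting the full RTO: GBP {low:,} to GBP {high:,}")
```

A single outage lasting the full two hours would therefore cost an organization in that bracket between £1.2 million and £2.4 million.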
The study, conducted by Timico, gave a comprehensive insight into the disaster recovery habits of IT managers in the UK, and revealed a distinct lack of awareness, despite the predicted cost of outages.
The survey revealed that almost a quarter (24 percent) of IT managers acknowledged having an outage within the past month but despite that, over 70 percent admitted to never having worked out the cost of the resulting downtime.
The research also found that over 60 percent of SMEs had not yet rolled out any form of cloud-based backup within their business. Moving to the cloud can negate the need for dual site replication, an option still favoured by 18 percent of those businesses questioned. Shockingly, despite the risks, a minority of respondents even admitted to never backing up their data.
The potential value in the Internet of Things (IoT) is bringing to a fever pitch the focus on data as one of the enterprise’s most valuable assets. Clearly, those who carefully collect, transform, analyze, model and report on IoT data are seeing their influence rise. As much of this work is settling around the data scientist role, I talked with Don DeLoach, CEO of Infobright, provider of an analytics database platform, about what data scientists are being asked to do now, and how those responsibilities around IoT data might change in the near future.
DeLoach says it’s definitely early days when you look at what data scientists are being asked to examine:
“Look at the progress of the Internet of Things. Most, probably 95 percent, of the focus is on the closed loop message response systems that make up the use cases: service models for capital equipment, focus on specific silos, alerting to problems, not having to send service professionals out when they’re not needed, or information like temperatures in machines, or lighting levels that are appropriate for time or conditions. It’s grabbing a message off a sensor, and then determining whether an action is needed. We’re at an early stage.”
Who needs a data scientist when you can have a robot analyze your data? No, seriously, that’s an actual question enterprises may be asking if this Computerworld article on artificial intelligence is right.
Technically, I guess artificial intelligence isn’t a robot until you add a body, but the question still stands: Can artificial intelligence solve the data deluge better than humans? AI experts certainly think so.
"The notion that a human analyst can look at all of this data unaided becomes more and more implausible," Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, told senior reporter Sharon Gaudin. "You can't have a person sitting there watching Twitter to protect your brand. … You need A.I. tools."
The obvious use case is with security, where humans are already failing to keep up with the ever-changing threat. Algorithms can “learn” from the data and flag deviations.
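A minimal sketch of that "learn a baseline, flag deviations" idea is z-score anomaly detection over a stream of event counts. Real security analytics use far richer models; this only illustrates the principle, and the numbers are invented:

```python
# Learn a baseline from historical event counts, then flag new values
# that deviate from it by more than `threshold` standard deviations.
import statistics

def flag_anomalies(history, new_values, threshold=3.0):
    """Return the new values that fall outside the learned baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [v for v in new_values if abs(v - mean) > threshold * stdev]

baseline = [100, 97, 103, 99, 101, 98, 102, 100]  # normal traffic
print(flag_anomalies(baseline, [101, 250, 99]))  # the 250 spike stands out
```

The attraction for security teams is that the "baseline" is learned from the data rather than hand-written as rules, so the system can keep up as normal behavior drifts.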
Cloud computing and modular infrastructure are working hand-in-hand to remove the hassles of physical infrastructure from the enterprise’s list of concerns.
If it all goes as planned, the loss of any one server, storage or networking component will cease to be the service-killing event that drives IT into a state of near-insanity. If a piece goes down, an automation system simply reroutes traffic to another module and a replacement device is swapped in at IT’s leisure, perhaps by a robotic arm.
But that does not mean IT is on easy street. Rather, responsibility for the smooth flow of data simply travels up the stack, to the application and service layers, to be precise. And exactly how the enterprise prepares for data management on that level will go a long way toward determining how well the bosses in the executive suite can fulfill their business models.
(Tribune News Service) -- New York state's top bank regulator told a University at Albany audience on Thursday that one of the greatest threats to the economy today is a "cyber 9/11" attack that causes widespread panic in financial markets.
Benjamin Lawsky, who as superintendent of the state Department of Financial Services oversees 3,800 banks and insurance companies, said that trying to stop cyberattacks on the state's financial system — from data breaches to cyberterrorism — is his biggest concern.
"It's the one issue that I personally work on every single day," Lawsky said at UAlbany's Business School, where he delivered the first-ever Massry Lecture. "What should we do to prevent these nightmare scenarios?"
Although Lawsky doesn't have criminal prosecution powers, his office has been aggressive in negotiating civil penalties with banks that have been investigated for wrongdoing in New York state. On Thursday, just an hour before his UAlbany speech, his office announced a $1.45 billion fine for Commerzbank of Germany — of which $610 million will go to New York state.
(TNS) — Aiming to minimize the number of victims, the Japanese government is hurrying to establish a network of undersea cables to monitor the occurrence of tsunami on the floor of the Pacific Ocean, where a huge earthquake is expected to take place.
The cables connect tsunami gauges and other observation devices for that purpose.
On seabeds stretching from off Hokkaido to off Chiba Prefecture, the National Research Institute for Earth Science and Disaster Prevention (NIED) is installing tsunami gauges and other devices in 150 locations. The total length of the undersea cables will be 5,700 kilometers.
“There is no precedent anywhere in the world for such a large-scale tsunami observation network,” NIED President Yoshimitsu Okada said. “Completion is scheduled for fiscal 2015. After that, it will be possible to detect tsunami waves 20 minutes earlier than we do now.”
MSPs who offer cloud-based file sharing have a full time job. It isn’t enough to simply sell and set up cloud services for your client – you then need to monitor them.
Surprisingly, 44 percent of corporate data stored in the cloud environment is not managed or controlled by the IT department.
While you could try to make it easier for customers to monitor the cloud sharing you set up, there are advantages to being the one to handle this task. For one, you obviously want to make sure that the file sharing system you set up is working properly. You also want to be able to tell when your client may need additional functions or storage based on their use. Finally, your clients care about it, so being the one to offer it will increase your value to them.
Here are four things your clients care about, and things you should be actively monitoring:
Many information security professionals are looking for help with security and may very well partner with managed security service providers (MSSPs) this year. That's according to a new report from Trustwave. Here are the details.
The 2015 Security Pressures Report revealed that most businesses expect the pressure to secure their organizations against cyber threats will increase in 2015. Also, 78 percent of information security professionals said they are likely or plan to partner with an MSSP to protect their organizations.
According to the Occupational Safety and Health Administration, 4.1 million U.S. employees experience work-related injuries or illnesses each year and 1.12 million of those employees lose work days as a result. With the average employee missing eight days per injury, even a minor injury can create a domino effect in your company.
When employees experience illness or injury, it often impacts their ability to perform their jobs, especially in occupations that are more labor intensive. As soon as your worker is able, it is in everyone’s best interest to return him or her to work in some capacity. Oftentimes, this is done through formalized return to work programs. Return to work programs are extremely effective because they provide benefits to not only the employee, but also your company.
Often, when an organization initiates its Business Continuity Management (BCM) / Disaster Recovery (DR) program, it is a pretty manual process: documents, PowerPoints and spreadsheets abound. They look good and they serve a purpose, but when the program needs to mature and grow, the manual maintenance and monitoring processes just can’t keep up properly. Suddenly, the person responsible – who is usually only assigned to BCM/DR part time – can’t keep up and things begin to fall apart. It’s time for some help to automate the BCM process to keep it current and maintainable (not just the plans being maintained).
So where do you start and what needs to be considered when determining what software is best for you? Here are some helpful tips to consider when you get to that point.
If you are the IT person who handles security for your company, where do you feel the most pressure when it comes to protecting business interests and consumer privacy? The folks at Trustwave sought to discover what was causing the most stress and concerns for IT and security professionals, and they just released their findings in the 2015 Security Pressures Report.
It’s an interesting perspective to study. All professionals are under pressure to perform well in their job duties, but as more companies reveal disastrous breaches and security breakdowns, IT security pros are really in the spotlight right now, with minimal room for failure. In fact, as the study stated in the introduction:
Few white-collar professions face as much mounting pressure as the information security trade. It is a discipline that, due to the widely publicized data breach epidemic, has suddenly crept out from behind the shadows of the mysterious, isolated and technical — and into the public and business mainstream.
(TNS) — A new report from the U.S. Geological Survey shows it is increasingly likely a magnitude 8.0 or greater earthquake will hit California, but that "doesn’t change the bottom line” for the state’s emergency management workers, an agency official says.
Lucy Jones, a USGS seismologist and Mayor Eric Garcetti’s adviser on earthquakes, tweeted Tuesday about the randomness of big quakes.
"This new science doesn't change the bottom line for emergency managers," she wrote. "Which one happens in our lifetimes is a random subset."
The tweet was in response to a question posed to Jones about the practical takeaway for those trying to prepare the state for just such a disaster.
Accessing analytics of any type has always been a complex endeavor. But starting this week, Ryft Systems wants to make real-time analytics running on a 1u server built using field programmable gate arrays (FPGAs) a single application programming interface (API) call away.
Pat McGarry, vice president of engineering for Ryft Systems, says that by deploying a dedicated Ryft ONE server running a “Linux-like” operating system to process analytics, IT organizations can once and for all eliminate I/O bottlenecks.
The biggest challenge with Big Data, says McGarry, is not so much the size of the data that needs to be processed at any given time, but rather the velocity at which that data needs to be processed. Rather than relying on a general-purpose processor, McGarry says that Ryft has combined FPGAs with up to 40 solid-state disk drives that can process up to 48TB of data at a rate of 10 gigabytes per second.
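A back-of-envelope check puts those quoted figures in perspective: a full pass over the box's maximum capacity at the stated rate takes roughly 80 minutes (using decimal units; the vendor's exact units are an assumption here):

```python
# Sanity-check the quoted Ryft ONE figures: time for one full scan
# of 48 TB at 10 GB/s, using decimal (SI) units throughout.
capacity_tb = 48
throughput_gb_per_s = 10

seconds = (capacity_tb * 1000) / throughput_gb_per_s  # TB -> GB, then divide
print(f"Full scan of {capacity_tb} TB at {throughput_gb_per_s} GB/s: "
      f"{seconds / 60:.0f} minutes")
```

In other words, the velocity claim is about sustaining that 10 GB/s end to end rather than about the raw size of the data, which is exactly the I/O bottleneck McGarry describes.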
Let’s start with the notion that nobody is perfect. I know, that will drive the perfectionists up a wall, but it is true. No person, no organization, no company is perfect. This means we will all make mistakes. So why not plan for it?
Plan for it! Yes. We all know that someday there will be a screw up, a goof, or God forbid an intentional negative act. For example, consider the recent experience of a Comcast customer. Lisa wanted to find a way to save money, so she decided that the family could do without the cable portion of the family bill. The Comcast customer service representative was not happy with this request, tried to retain her, and when she still refused, Lisa got her next Comcast bill addressed to – “Asshole Brown”. Needless to say, Lisa was upset and then had to struggle to get the name changed back to her real name. Even that task was not easy.
So here we go. Like I said, no one is perfect and in this case Comcast certainly deserves a black eye.
CompTIA's new "Enabling SMBs with Technology" study revealed many small- and medium-sized businesses (SMBs) want innovative technology partners, and a lack of innovative technology solutions is one of the primary reasons why some of these companies choose to switch IT firms.
CompTIA reported that more than 70 percent of SMBs said they have used an outside IT firm at least occasionally over the past 12 months. Also, 46 percent of SMBs noted that they look to outside IT firms when they need greater expertise and new options, which could create new opportunities for innovative managed service providers (MSPs).
"For an MSP to be innovative, it must focus on business results at a broad scale and proactively determine the best technology solution," Seth Robinson, CompTIA's senior director of technology analysis, told MSPmentor.
By James Stevenson
The first few exercises I ran were pretty nerve wracking. Would the plans work? Would the team play nicely or start throwing stuff? Would they realise I was new to this?
Since then I’ve been fortunate to work with many different groups around the world facilitating exercises, coaching and training new business continuity managers to design and run their own successful exercises.
It’s not rocket science but there is a skill to setting up and running a great exercise.
To help with this, the ten steps below are packed full of tips and suggestions to develop this skill, run great exercises and maximise your business continuity programme:
Carbonite, Inc., a provider of cloud and hybrid business continuity solutions for small and midsize businesses, has published a report on recent business continuity and channel research. Entitled, ‘Business Continuity: A Growing Opportunity in a Digitalized World,’ the report details the results of research conducted through Spiceworks Voice of IT, and identifies trends, challenges and strategies related to business continuity.
According to the report, 67 percent of channel partners reported an increase in demand for business continuity solutions from small and medium sized businesses, and 77 percent expect the demand to continue growing over the next three years. 87 percent of channel partners agree that business continuity solutions are worth the investment, but they are faced with two key challenges when selling related products: lack of customer education (45 percent) and budget concerns (45 percent).
Where are the weak points in your organisation and its operations? Where could disasters or criminals do the most damage? Vulnerability testing, as its name suggests, is done to find out where the soft underbelly is. Then protection and security can be suitably reinforced. In a general sense, it can cover everything: from freak weather conditions to power outages, supplier failure and IT disasters. Indeed, IT is the category where vulnerability testing is most often performed. This is partly because of the critical role of IT throughout many organisations, and partly because IT vulnerability testing is relatively easy to automate. However, even systematic automated testing can’t do it all. So what’s the solution?
Consensus is building that the cloud will subsume traditional data center infrastructure within the next decade. This is not to say that local resources will go the way of the dinosaur, but that whatever remains in the data center will be cloud-based.
This means that both the hardware and software platforms that hope to support future data architectures will have to cater more toward cloud functionality than traditional data center constructs. And yet, it seems that only recently have we seen anything that can be described as cloud-specific enterprise systems in the channel.
HP took the wraps off of its Cloudline server this week, aimed specifically at helping cloud service providers gain an edge on competitors by offering not just lower costs, but advanced functionality as well. This includes open management capabilities that enable a broad range of third-party solutions, as well as broad ties to the OpenStack format through HP’s Helion platform. This should give providers a wedge in crafting hybrid cloud solutions for enterprises that convert their legacy architectures to OpenStack-based clouds. At the same time, Cloudline supports the HP Altoline open network switch, which itself supports the Cumulus Networks Linux networking distribution aimed at building web-facing hyperscale infrastructure.
(TNS) — Dallas startup accelerator Tech Wildcatters is launching a program focused on wearable technology for police officers, firefighters and emergency medical personnel.
The unique public-private experiment will be announced Wednesday.
The pilot program is funded by the Department of Homeland Security’s research and development arm, and Tech Wildcatters is one of two U.S. accelerators tapped to run it. The program is being managed by the Center for Innovative Technology, a Virginia-based nonprofit.
This is the first time Homeland Security’s research division has experimented with accelerators. The federal agency is interested in wearable technology such as advanced sensors, smart voice and data communication chips embedded in gear, and health-related monitors.
Cyber attacks against businesses may dominate the news headlines, but recent events point to the growing number and range of cyber threats facing public entities and government agencies.
City officials yesterday confirmed that city and county computer systems in Madison, Wisconsin were being targeted by cyber attackers in retaliation for the shooting death of Tony Robinson, an unarmed biracial man, by a Madison police officer last Friday. A Reuters report says the cyber attack is thought to have been initiated by hacker group Anonymous.
Then on Sunday the website of Colonial Williamsburg was hit in a cyber attack attributed to ISIS. The attack targeted the history.org website and comes just a week after the living history museum offered to house artifacts at risk of destruction in Iraq.
In case you thought Microsoft was lagging behind in mobile productivity, you might want to reconsider. Microsoft and other cloud companies have taken some big steps to extend Microsoft Office far beyond the desktop and into the cloud. In the last couple of months, Google and Box have separately announced online editing features and close integration with Office desktop apps, while Microsoft announced just two weeks ago that Office users on iPad will be able to save their documents in any kind of cloud storage.
What’s the upshot for MSPs given these moves? The days of employees storing their important data on a single server or within a single cloud repository are long gone. Now, as the applications that take advantage of cloud storage are becoming more cloud platform-agnostic, employees can and will store their data in any number of cloud services, such as Google Drive, Office 365 and Box. MSPs need to make sure that clients’ data is properly controlled, no matter where it resides.
One of my favorite virtual friends is Dr. Andrea Bonime-Blanc, the Chief Executive Officer (CEO) and founder of GEC Risk Advisory LLC, the global governance, risk, integrity, reputation and crisis advisory firm which serves executives, boards, investors and advisors in diverse sectors, growth stages and industries, primarily in the Americas, Europe and Africa, providing strategic and tactical advice to transform risk into value. Dr. Bonime-Blanc is an extensively published author and editor of several books and numerous articles. She writes The GlobalEthicist column for Ethical Corporation Magazine. She also co-authored and co-edited The Ethics and Compliance Handbook for the ECOA Foundation. While her career and current consulting is wide-ranging, I want to focus on one of her recent books, The Reputation Risk Handbook, which should be read by any compliance practitioner, senior executive or board member.
Why should you read this book? Because, as Bonime-Blanc argues, “Reputation risk has become strategic because of the age of hyper-transparency.” The book provides a variety of examples of reputation risk and explains its special nature. The book also provides strategies for management of reputation risk. Bonime-Blanc concludes her book by going into the veiled land of the future to opine on not only risk management techniques but also the “transformation of this risk into an opportunity and value for the organization.” Her book is broken down into three general areas: I. Understanding Reputation Risk, II. Triangulating Reputation Risk, and III. Deploying Reputation Risk.
No doubt enterprise IT technology will be vastly different in five years’ time. We’re not just talking about better, faster, more flexible infrastructure, but a top-to-bottom overhaul of what data infrastructure is all about and how it should be architected for the new digital economy.
But what gets lost in the whirlwind of activity surrounding the cloud, modular infrastructure, mobility and all the rest is how this will change the day-to-day operations of the data center, and in particular the responsibilities of the IT staff and the skillsets required to fulfill those responsibilities.
We can start with the CIO. Traditionally, this position is served by someone steeped in technical knowledge and the careful relationships that must be maintained between the various layers of the IT stack. (Yes, there is much more to it than that, but in general terms this is good for our discussion.) But as Mike Altendorf, CEO of systems integrator Conchango, told CIO.com, a technology background will become steadily less valuable as things unfold, and more traditional business-minded skills will rise. These include not only budgeting and management, but marketing, customer relations and even sales as IT becomes more integrated with the business side of the operation.
Even though the U.S. government has broadened its pursuit against corruption, only about 9% of organizations see the Foreign Corrupt Practices Act’s monitoring of corruption as a top concern, according to “Bribery and Corruption: The Essential Guide to Managing the Risks” by ACL.
Remaining competitive can be difficult in some areas due to expectations of payments, gifts and consulting fees, but companies need to identify and manage the risks across the organization. Much is at stake as penalties are rising and reputations are at risk.
According to ACL:
(TNS) — In Pennsylvania, nearly 1.5 million people are in potential danger if a train carrying crude oil derails and catches fire, according to a PublicSource analysis.
That is about one in every nine Pennsylvanians, or 11.5 percent of the state's population.
The analysis also found 327 K-12 schools, 37 hospitals and 61 nursing homes in the state are at risk.
These numbers take on new meaning in the wake of the recent derailment near Mount Carbon, W. Va. And, a federal report predicts 15 trains carrying crude oil and ethanol in the United States could derail in 2015 alone.
Looking to put an end to spearphishing attacks that have made a mockery of IT security defenses, Check Point Software Technologies Ltd. today unveiled technology that automatically extracts malware from both documents attached to email and content downloaded from Web sites.
Gabi Reish, vice president of product management for Check Point, says Check Point Threat Extraction software works by decomposing content in real time into a set of digital bits and then removing any and all code that is identified as malware. The content is then reconstituted and sent on to the intended user.
Running on security gateways from Check Point, Reish says Check Point Threat Extraction software is the second major IT security innovation Check Point is bringing to market in as many months. Last month Check Point acquired Hyperwise, a provider of software that identifies threats at the processor level.
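The decompose-clean-reconstitute pattern Reish describes can be illustrated with a deliberately toy example. Check Point's actual product works on binary document formats with far more sophisticated analysis; the sketch below only shows the general shape of the idea, stripping active content (here, script blocks) from an HTML attachment before delivery:

```python
# Toy content disarm-and-reconstruct: decompose a document, strip
# active content, and rebuild it before passing it to the user.
# This is NOT how Check Point Threat Extraction is implemented;
# it only illustrates the extract-clean-reconstitute pattern.
import re

SCRIPT_RE = re.compile(r"<script\b.*?</script>", re.DOTALL | re.IGNORECASE)

def disarm_html(document: str) -> str:
    """Return the document with script blocks replaced by a marker."""
    return SCRIPT_RE.sub("<!-- active content removed -->", document)

attachment = "<html><body>Hi<script>evil()</script> there</body></html>"
print(disarm_html(attachment))
```

The point of the pattern is that the user still receives a usable document, rather than a blocked message, which is what distinguishes extraction from simple quarantining.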
When gathering food for an emergency kit, we often think about items that do not require cooking or refrigeration and have a long storage life. Yet we often forget to check the nutritional value of the food in our emergency kits. March is National Nutrition Month and a great time to review the food in your emergency kit and make sure it is healthy and not expired. Here are a few healthy tips to keep in mind when gathering food for your emergency kit and reviewing the food you have already stored.
1. Avoid salty snacks.
Salty snacks make you thirsty and increase your need to drink water. When you have a limited supply of food and water, you don’t want foods that will make you want to drink more water than you need or planned for.
2. Include protein.
While you may not be able to rely on your normal sources of protein like meat, after an emergency, you should still include some good sources of protein in your emergency kit. Nuts, protein bars and peanut butter can be sustaining foods that can help keep you full and are easy to store in your emergency kit.
3. Look for high-energy foods.
Food with protein, carbohydrates, and good fats can help keep your energy up, which can be very important during or after a disaster. Choose foods like nuts, dried meat, whole grains (crackers, cereal, etc.) and canned beans, fruits, or vegetables.
4. Don’t forget water.
Water is a crucial part of any emergency kit. Store at least 1 gallon of water per day for each person and each pet. If possible, try to store a 2-week supply of water or at least a 3-day supply of water for each person in your family. Unopened, commercially bottled water is the safest and most reliable emergency water supply.
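As a rough worked example of the guideline above (1 gallon per person and per pet per day), here is the arithmetic for a hypothetical household of four people and one pet:

```python
def gallons_needed(people, pets, days):
    """Water to store: at least 1 gallon per person and per pet, per day."""
    return (people + pets) * days

# Recommended 2-week supply for four people and one pet:
two_week = gallons_needed(people=4, pets=1, days=14)  # 70 gallons
# The 3-day minimum for the same household:
minimum = gallons_needed(people=4, pets=1, days=3)    # 15 gallons
```

These are floor figures; hot climates, illness, or infants in the household all push the per-person need higher.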
5. Make sure your emergency kit food is healthy and safe.
In addition to choosing the right foods for your emergency kit, you should also regularly review the contents of your kit to make sure none of your food has expired or become dented or damaged. Keep the food in your emergency kit in a dry, cool spot, out of the sun, to help ensure that it does not become damaged or unusable.
6. Stick with what you know.
The most important part of choosing food for your emergency kit is making sure you know how to prepare the food you store and will actually want to eat it. Stick with foods you know your family will eat. Also, do not forget about food allergies or dietary needs of your loved ones. Consider how you will meet everyone’s unique nutritional needs if you can only access your emergency kit food supply.
For more information about choosing and storing food for your emergency kit, visit CDC’s webpage http://emergency.cdc.gov/disasters/foodwater/index.asp.
The results of a Risk Management Association (RMA) and MetricStream survey on third-party and vendor risk management in financial institutions have been published.
The survey drew responses from over 100 leading financial institutions and addressed vendor management frameworks, vendor selection and monitoring processes, critical vendors and critical activities, tools and techniques, contracts, regulatory compliance, and fourth-party suppliers.
With the growing pressure to expand the business, provide new offerings, reduce overall costs, and maximise profitability and revenues, outsourcing to third-party service providers has become the norm for most banks and financial institutions (FIs) worldwide. Larger organizations have tens of thousands of vendor relationships to manage, and are increasingly exposed to financial and reputational loss if they fail to maintain adequate quality control over all third-party activities.
“Managing the risks inherent in vendor and other third party relationships has become critically important in recent years, as the actions of vendors can cause significant financial and reputational impact to organizations, no matter their size or industry,” said Edward J. DeMarco, RMA's general counsel and director of operational risk.
BitSight Technologies has released the results of a commissioned study, conducted by Forrester Consulting on behalf of BitSight, which reveals third-party security as a top business concern for enterprises. The findings suggest a significant appetite for monitoring third-party security but a steep disconnect in resources available to adequately and objectively manage this.
The study, ‘Continuous Third-Party Security Monitoring Powers Business Objectives and Vendor Accountability,’ is based on surveys of IT security and risk-management decision makers in the US, UK, France and Germany.
Forrester found that when it comes to tracking third-party risk, critical data loss or exposure (63 percent) and the threat of cyber attacks (62 percent) ranked as the top concerns, above standard business issues, including whether the supplier could deliver the quality and timely service as contracted (55 percent). Despite the desire for more robust insight into third-party security practices, only 37 percent of survey respondents reported tracking any of these metrics on a monthly basis.
Data theft is becoming big business if the estimated damages of recent breaches are any indication. Can you imagine being insured for US $100 million against such events, yet having to bear costs that exceeded even that figure? The recent attack against Anthem, the second largest health insurer in America, involved as many as 80 million records being stolen. The associated expenses have been estimated at more than the $100 million policy taken out by the enterprise. Elsewhere, supermarket chain Target (also in the US) estimated costs of over US $148 million after 100 million customer records were compromised at the end of 2013. But the attack similarities don’t end there – and could apply to any company.
At the end of last week, I started getting email messages warning me about the latest TLS/SSL vulnerability that has been discovered. This one is called the FREAK Attack and a site dedicated to informing users about the attack describes this new vulnerability in this way:
It allows an attacker to intercept HTTPS connections between vulnerable clients and servers and force them to use weakened encryption, which the attacker can break to steal or manipulate sensitive data.
The first reports of the FREAK attack, which like Heartbleed involves open source code, warned of vulnerable Mac and Android-native browsers, although Chrome appeared to be safe, as did Firefox. BlackBerry browsers are also affected by the vulnerability. At first glance, it looked like Windows machines were okay. A second glance, however, tells a different story.
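FREAK works by tricking a client into accepting export-grade (512-bit RSA) cipher suites, so one quick local check is to ask your TLS library whether it can still be configured to offer those suites at all. This sketch uses Python's standard `ssl` module; on a patched OpenSSL build, export ciphers have been removed and the request simply fails:

```python
import ssl

def client_offers_export_ciphers():
    """True if the local TLS stack can still be configured to offer
    export-grade cipher suites - the client-side FREAK precondition."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    try:
        # "EXPORT" is OpenSSL's cipher-string group for export suites.
        ctx.set_ciphers("EXPORT")
        return True
    except ssl.SSLError:
        # Patched builds refuse: no export ciphers are compiled in.
        return False

print("export ciphers available:", client_offers_export_ciphers())
```

This only tests the client side; checking whether a *server* accepts export ciphers requires probing it over the network, which the freakattack.com tooling referenced above was built to do.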
(TNS) — For more than 100 years, people have questioned whether taking oil and gas from the depths of the earth can cause tremors.
When an earthquake shook Austin in 1902, some thought an explosion in the oilfields of Spindletop, in southern Beaumont, might be to blame.
The 1902 earthquake was naturally occurring. But the link between human activity and earthquakes is very real and well established, said Cliff Frohlich, associate director and senior research scientist with UT's Institute for Geophysics.
"When people make the statement that it hasn't been established that humans can cause earthquakes, they're either woefully uninformed about the research by myself and hundreds of others over the last 70 years or they're trying to mislead you," he said. "That's like people saying the world is flat; that evolution hasn't been proven or that humans can't cause climate change."
KANSAS CITY, Mo. — The woman’s voice on the intercom was anguished.
“There’s a shooter in the building. Lockdown! Lockdown!”
Inside the library at Independence’s Pioneer Ridge Middle School, about 65 teachers and staff members — who knew this was all pretend but were warned it may be unnerving — assumed their positions under desks and crouched between rows of children’s books.
Someone switched off the lights as instructed. Maybe the shooter won’t see them hiding. The rest of the school stood empty.
It was part of training increasingly occurring in the nation’s schools, hospitals and other workplaces to drive home lessons, some of them controversial, on how not to become an armed intruder’s sitting duck.
(TNS) — Ohio tops the country with the most school threats in the first half of the school year, according to a recent report by a national school-safety consultant.
From August to December 2014, Ohio had 64 reports of school threats, more than California (60), New York (46) and Texas (41).
Across the nation, school threats are up 158 percent from last year, the first year of the survey conducted by Cleveland-based National School Safety and Security Services.
Local safety experts question the company’s figures because they are based on news reports instead of police records. The local experts say that schools and media outlets tend to underreport threats.
When it comes to so-called “shadow IT,” the enterprise has three basic responses. You can accept it, you can fight it, or you can ignore it.
Unfortunately, it seems that a large number of organizations are choosing option three, ignoring it, which is probably the worst approach to take because shadow IT can, in fact, become a strategic asset to the enterprise, provided it is not left to its own devices.
Ideally, the enterprise should accept shadow IT, but with conditions. With the coming of the mobile-first generation to the knowledge workforce, IT needs to recognize that enterprise data will find its way onto personal smartphones and tablets, and that the best thing to do is encourage this level of flexibility but impress upon people the need to maintain an adequate security posture.
CHICAGO – You may be ready to enjoy more daylight hours after we “Spring ahead” an hour on March 8, but are you ready for the threat of flooding that warmer months can bring?
“With the change of seasons comes the risk of snow melt, heavy rains, and rising waters—we’re all at some level of flood risk,” said Andrew Velasquez III, FEMA Region V administrator. “It is important we prepare now for the impact floods could have on our homes, our businesses and in our communities.”
Take action with these simple steps to protect what matters before a flood threatens your community:
• Ensure you’re insured. Consider purchasing flood insurance to protect your home against the damage floodwaters can cause. Homeowners’ insurance policies do not typically cover flood losses, and most policies take 30 days to become effective. Visit FloodSmart.gov for more information.
• Keep important papers in a safe place. Make copies of critical documents (mortgage papers, deed, passport, bank information, etc.). Keep copies in your home and store originals in a secure place outside the home, such as a bank safe deposit box.
• Elevate mechanicals off the floor of your basement—such as the water heater, washer, dryer and furnace—to avoid potential water damage.
• Caulk exterior openings where electrical wires and cables enter your home to keep water from getting inside.
• Shovel! As temperatures warm, snow melt is a real concern. Shovel snow away from your home and clean your gutters to keep your home free from potential water damage.
• Build and maintain an emergency supply kit. Include drinking water, a first-aid kit, canned food, a radio, flashlight and blankets. Visit www.Ready.gov for a disaster supply checklist for flood safety tips and information. Don’t forget to store additional supply kits in your car and at the office too.
• Plan for your pet needs. Ensure you have pet food, bottled water, medications, cat litter/pan, newspaper, a secure pet carrier and leash included in your emergency supply kit.
• Have a family emergency plan in place. Plan and practice flood evacuation routes from home, work and school that are on higher ground. Your family may not be together when a disaster strikes so it is important to plan in advance: how you will get to a safe place; how you will contact one another; how you will get back together; and what you will do in different situations.
To learn more about preparing for floods, how to purchase a flood insurance policy and the benefits of protecting your home or property investment against flooding visit FloodSmart.gov or call 1-800-427-2419. For even more readiness information follow FEMA Region V at twitter.com/femaregion5 and facebook.com/fema. Individuals can always find valuable preparedness information at www.Ready.gov or download the free FEMA app, available for Android, Apple or Blackberry devices.
FEMA’s mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
My reading and research includes white papers from the Big Four accounting firms. I note for the record that Deloitte, a firm that has consistently produced excellent white papers on risk, has upped its game past white papers with a weekday Risk & Compliance Journal for executives in the Wall Street Journal, a convenient daily reminder of what’s at stake for publicly listed firms. But it’s Deloitte’s 2014 Global Survey on Reputation Risk that I’d like to discuss here, and then make note of several other useful and available white papers.
It’s always been difficult to quantify reputation, whether individual or corporate. We claim to know when a firm’s reputation has been compromised, and often the market punishes that firm directly. Yet there are other cases where direct actions taken to save a reputation -- notably investigations, which may lead to the removal of the CEO or other executives – seem autocratic and insufficient. We express our own judgments by comment and retweet, often becoming part of a groundswell of distrust and dissatisfaction on social media that has a longer term impact on the firm in question. Social media in that sense is more innovative than traditional data analytics. [It is hard to know whether social media commented more upon the corporate reputation of NBC or the individual reputation of anchor Brian Williams, but that particular groundswell led to the six month suspension without pay of Williams, who is said to make $10 million a year. So far at least, NBC Evening News is holding its own in the ratings, but the company is making significant changes in its management staff; and it is not clear that Williams will ever return to the news desk.]
Project managers—especially in the tech sector—know all too well how many factors can cause a project to miss its deadline or go over budget. Keeping a project within its projected scope is one of the most difficult challenges for project managers.
Issues such as project omissions, slow or no user involvement, customer over-expectation and lengthy application development times can often not be avoided. One thing that can usually be reined in is the scope of the project, which includes the objective, timeline, goals, resources, tasks, team and budget.
By properly defining these requirements, a project has a better chance of staying within these guidelines. Of course, the collection of data to define these requirements can often be a huge challenge in itself.
The book “Project Scope Management: A Practical Guide to Requirements for Engineering, Product, Construction, IT and Enterprise Projects,” provides instruction on developing and defining project requirements to keep projects on track and within scope. It deals with practical tools and simple techniques for project managers to use in the daily struggle to avoid scope creep.
For most companies, the on-premise appliance sits firmly rooted at the center of their backup world--making disk-to-disk (D2D) the preferred data protection method for backup and recovery of critical data, servers and applications. While D2D isn’t a perfect solution--often characterized by its high cost, capacity planning challenges and finite storage constraints--it’s tested, trusted and reliable.
With the cloud becoming more broadly adopted, many companies are considering cloud backup as a viable option for their disaster recovery (DR) strategy. Who doesn’t want lower costs and increased efficiency?
Heeding the call, the backup industry, which has always let the appliance drive its product vision, introduced hybrid backup appliances to the market. These appliances, designed to deliver cost savings, act as your local D2D backup. The cloud becomes your replication repository.
Facing a future where extreme weather events are more common, cities on the East Coast are building up their resiliency to power outages.
At-risk cities, especially those on the East Coast that haven’t historically had to prepare for hurricane-induced problems, are trying to improve their infrastructure and emergency plans to prevent power outages.
A recent analysis from Johns Hopkins University ranked Philadelphia as the second most likely city in the United States to experience more power outages.
Whenever a project is being planned, risk management has to be part of the equation – things rarely go smoothly or completely as expected, and there will always be areas that present more risks than others. Whether they affect the projected timeframes, budgets or outcomes, it is the job of the project manager to identify them and ensure that provisions are in place to limit their impact should they occur.
However, failures are made in risk management every day – they helped to trigger the economic crisis in 2008, demonstrating that even the world’s biggest banks, which take financial and logistical risks every day, are not immune to risk mismanagement. With this in mind, it’s understandable that smaller projects and processes might suffer from errors made in risk management.
Why aren’t we performing risk management well, then? With project management an ever-growing sector and more and more jobs being created every day, the next generation of risk managers needs to be able to identify issues in order to rectify them.
It may seem a bit incongruous to talk about solar energy when nearly half the country is covered in snow, but the data center is still the energy hog of today’s economy and is constantly ratcheting up its consumption with every new hyperscale facility.
But even as Apple, Facebook and other top firms embrace solar power and other renewables, the question remains whether this is a viable option for the broader enterprise community. And if not, will the pressure to shed local data infrastructure still come from environmental corners anxious to foster greater dependence on cleaner, utility-style computing?
The key test of a technology like solar power is not in its ability to generate electricity, but in its ability to do so reliably. In a recent report, the North American Electric Reliability Corp. (NERC) said that the influx of renewables into the bulk energy grid of the U.S. and Canada and the closing of aging coal-fired facilities is lowering energy reliability in the region. This could cause rates to increase as utilities up their reserve fuel stores to maintain adequate load. The report is disputed by many, to be sure, but it does point up the uncertainties that accompany changes to such fundamental infrastructure as the energy grid.
By Joe Schreiber
Once you’ve come to terms with the harsh reality of the world, you come to understand that sooner or later, you will be the victim of a security breach. Chances are that it may not be this month, or even this year, but as the insightful Tyler Durden so shrewdly observed, “On a long enough timeline, the survival rate for everyone drops to zero.”
Getting breached doesn’t establish whether or not you have a decent security program in place, but how you respond to a security breach does.
If you accept Murphy’s Law, that everything that can go wrong will do so, usually with the worst possible timing, there are several steps you can take today to help soften any future blows. Setting these motions in place gives you the ‘freedom’ to expect the unexpected.
Try to rid yourself of any notion that the work you do in network security is ‘protecting’ the company’s assets. Your mission is to investigate and analyze how the network can be attacked, so that you can control the battlefield well enough to respond to any attack satisfactorily. So think strategically about what can be done today and what can be deferred until later. The following are six key actions you can take to make sure you and your organization are more than prepared.
As conflict continues in Ukraine, and fears of an expansionist Russia throw a shadow of war over Europe, the Cambridge Centre for Risk Studies has urged businesses to incorporate geopolitical conflict scenarios into their business continuity planning.
Interstate conflict was the number one concern of nearly 900 businesses and academics who responded to the Global Risks 2015 Report published in January by the World Economic Forum.
"These risks are continuing to grow in this new era of political uncertainty," said Dr Andrew Coburn, director of the Centre for Risk Studies Advisory Board at Cambridge Judge Business School. "Businesses should reappraise their readiness to manage possible disruption to their activities from armed conflicts in different parts of the globe," he said at the Centre's risk briefing held recently in the City of London.
The Centre for Risk Studies and its research partner, Cytora, have identified more than 100 potential country-to-country conflicts based on recent antagonistic statements towards each other, antithetical values and historical enmity. All have the potential to cause severe disruption to business activities.
Cytora's risk map of potential future conflicts highlights a number of regional hot-spots, including the obvious Middle East, Central and Eastern Africa; the Eastern European margins; the Indian subcontinent, parts of Latin America and the emerging Southeast Asian powers.
Toby Owen of Peer 1 Hosting identifies four drivers for the hybrid cloud:
- Big Data
- The Internet of Things
But covering data issues changes your perspective on these things, because when you boil it down, most technology is about sharing, securing or using data and information. Since information is just unstructured data — it really all boils down to data. So I look at that list and see only two drivers:
- Shared services (supported by federation and interoperability)
- Data. Just data.
Establishing relationships with potential clients and partners is absolutely necessary for succeeding in business. One of the most effective ways to build such connections is to hold a lunch-and-learn event. That is, it’s effective when done right.
When done wrong, you’ll end up giving a presentation to a near-empty room in some dingy hotel conference space. Or even if you have a full house at a nice venue, it might as well be empty because your message is so unclear, cliché-ridden, and poorly delivered that it convinces no one to use your services.
It’s easy to avoid these ugly scenarios if you know what you’re doing. I spoke with David Russell, CEO of MANAGEtoWIN, who has over 41 years of experience in business, and has held too many lunch-and-learn events to count. Recently we held a webinar together to share what it takes to hold a successful lunch and learn. Here are the main tips we shared:
(TNS) — A study funded by a $10,000 grant will look at whether post-Sandy Long Islanders are better prepared when the streets are blocked, phones die and the water or snow turns into life-threatening challenges.
Sustainable Long Island, a nonprofit organization that promotes economic development, social equity and the environment, said a State Farm insurance grant awarded last month will develop and launch a Disaster Preparedness Program.
Under the three-prong plan, the group will conduct surveys to assess whether Long Islanders have strategies and supplies ready; teach high school and college students how to show their peers the effectiveness of social media in helping residents during disasters; and work with Long Beach to create a pilot program that would educate the public about disaster preparedness.
As a business continuity manager, CIO or company risk officer, you’ve probably already done numerous risk value calculations. To build a table comparing risks and their impacts, you might assign percentages or relative scores to risks, and monetary values or relative scores to impacts. The risk value in each case is then simply “risk × impact”. You get a simple table that allows you to rank risks in order of their risk value and set your priorities accordingly. However, what may be forgotten is that risk calculations can be positive as well as negative.
This harks back to the perception of business continuity planning and management exclusively as something that prevents interruptions (negative) and ensures that operations continue as usual (zero change). This is true, but it is only half the story. Increasingly, business continuity is becoming an opportunity not just to do as well as usual, but better (positive). For example, BCM must contain negative risk of suppliers failing, but can also encourage positive risk of increased profitability thanks to higher efficiency stemming from BC measures.
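The table-building exercise described above is straightforward to sketch. The register entries, probabilities and impact figures below are illustrative assumptions only; negative impacts are threats, positive ones are the opportunities the article argues BCM should also capture:

```python
# Hypothetical risk register: probability (0-1) and monetary impact
# (negative for threats, positive for opportunities).
risks = [
    ("Key supplier fails",                0.10, -500_000),
    ("Data centre outage",                0.05, -750_000),
    ("Efficiency gain from BC measures",  0.60, +100_000),
]

# Risk value = probability x impact; rank by magnitude in either direction.
table = [(name, p * impact) for name, p, impact in risks]
table.sort(key=lambda row: abs(row[1]), reverse=True)

for name, value in table:
    print(f"{name:35s} {value:+,.0f}")
```

Note that ranking by absolute value lets the positive entry surface at the top: the expected upside of the efficiency gain here outweighs the expected loss from either threat, which is exactly the point about treating risk as opportunity as well as exposure.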
Did I pack socks? Check. Toothbrush? Check. Business cards, phone charger, passport? Check, check, and check. Do I know what I need to do and what not to do to protect myself, my devices and the company’s data while I’m on the road and traveling for work? [awkward silence, crickets chirping]
S&R pros, how would employees and executives at your firm answer that last question? It’s an increasingly important one. Items like socks and toothbrushes can be replaced if lost or forgotten; the same can’t be said for your company’s intellectual property and sensitive information. As employees travel around the world for business and traverse through hostile countries (this includes the USA!), they present an additional point of vulnerability for your organization. Devices can be lost, stolen, or physically compromised. Employees can unwittingly connect to hostile networks, be subject to eavesdropping or wandering eyes in public areas. Employees can be targeted because they are an employee of your organization, or simply because they are a foreign business traveler.
Cold snaps are the weather phenomenon most likely to damage UK business performance according to new research commissioned by cloud services company, 8x8 Solutions, to highlight the need for businesses to prepare for adverse weather to limit lost productivity. Economists from the Centre for Economics and Business Research (Cebr) examined the relationship between different weather events and economic growth across the UK’s main industries over the last decade.
They found that since 2005, periods of very cold weather have seen quarterly GDP growth on average 0.6 percentage points lower than typical levels. When minimum temperatures are one degree Celsius lower than average, quarterly GDP is on average £2.5 billion lower. This is a bigger negative effect than any other form of adverse weather, including snowfall, heat waves or flooding.
The fall in GDP results from lower output across a number of industries and lost productivity as transport links and staff availability suffer. Those who do get to work on particularly poor weather days often meet a skeleton staff, hindering productivity.
Whilst cold has the biggest negative effect on the economy, different industry sectors are impacted by different forms of extreme weather. For example, professional services and accommodation and food are the sectors that take the biggest hit from heavy rainfall. High rainfall has a big impact on office-based jobs, with just ten millimetres above average costing the economy £86 million in a single quarter. In January 2015 rainfall was 26.5mm above the 2004-2014 January average of 126.8mm – potentially costing the economy £76.3 million over the quarter.
The research also explores the resilience of businesses of different sectors and sizes. The information and communications sector is one of the few to see positive growth during poor weather. Cebr concluded that this is because the sector leads the way in using cloud-based technology allowing employees to work from home. On average, nearly two thirds (65%) of all companies in this sector use some form of cloud technology compared to just 15-30% of all other businesses.
But the report warns that smaller businesses are at a disadvantage in terms of poor weather, as Scott Corfe, Head of UK Macroeconomics, Cebr explains: “Many small offices are unprepared for such events as they often lack remote access to their work due to security concerns and a lack of infrastructure. This is compounded in many cases by inadequate internet connections or computing power at staff homes. In addition SMEs tend to suffer more than their larger counterparts who can spread the setup and maintenance costs of remote working infrastructure across many more staff.”
Kevin Scott-Cowell, CEO of 8x8 Solutions, says, “Bad weather hits businesses hard, and medium-sized companies are more vulnerable than their larger counterparts. Until now, the technical infrastructure to enable remote working and guard against disruption has been out of reach for many companies, but cloud solutions are changing this. It’s now affordable for any size business to put in place a plan and deploy the right remote working technology. This can make sure it’s business as usual for customers, whatever the weather.”
The research is released in the run up to Business Continuity Awareness Week, an initiative run by the Business Continuity Institute. Lyndon Bird FBCI, Technical Director at the BCI, said, “This research is a timely reminder of the need for companies to adopt business continuity management best practice. That means having the plans and technology in place to manage risks to the smooth running of their organisation or delivery of a service, ensuring continuity of critical functions in the event of a disruption, and effective recovery afterwards.”
By Duncan Ford MBCI
Could you get more out of your business continuity exercises? Do you have an inner concern that last year’s exercise programme didn’t demonstrate as much as you would have liked, or that there may be alternative ways of delivering the exercise that would be more cost effective and less effort?
Guidance from the various business continuity institutes and regulators, also included in recognised standards, puts a strong emphasis, quite correctly, on the essential requirement to exercise plans and recovery procedures. However, how do you assess the quality of the exercises, as opposed to the quantity? Are different types and styles of exercises being used, within an integrated programme, to meet different business needs?
Take a couple of seconds to consider whether:
- The maximum return is being gained from the time people commit to exercises;
- Different techniques could be used to engage directors and senior managers;
- The exercise(s) sufficiently challenge the organization’s assumptions about its ability to respond and recover.
Verisk Maplecroft has published its 2015 Natural Hazards Risk Atlas, which ranks over 1300 cities in 198 countries on their exposure to natural hazards to help organizations identify and compare risks to populations, economies, business and supply chains.
According to the Atlas, the strategic markets of Philippines, China, Japan and Bangladesh are home to over half of the 100 cities most exposed to natural hazards, highlighting the potential risks to foreign business, supply chains and economic output in Asia from extreme weather events and seismic disasters. Of the 100 cities with the greatest exposure to natural hazards, 21 are located in the Philippines, 16 in China, 11 in Japan and 8 in Bangladesh. Analysis for the Natural Hazards Risk Atlas considered the combined risk posed by tropical storms and cyclones, floods, earthquakes, tsunamis, severe storms, extra-tropical cyclones, wildfires, storm surges, volcanoes and landslides.
The Philippines’ extreme exposure to a myriad of natural hazards is reflected by the inclusion of eight of the country’s cities among the ten most at risk globally, including Tuguegarao (2nd), Lucena (3rd), Manila (4th), San Fernando (5th) and Cabanatuan (6th). Port Vila, Vanuatu (1st) and Taipei City, Taiwan (8th) are the only cities in the top ten not located in the Philippines.
The Cloud Standards Customer Council has released version two of its guide to cloud security.
The abstract reads as follows:
“Much has changed in the realm of cloud computing security since the original Security for Cloud Computing whitepaper was published in August, 2012. The aim of this guide is to provide a practical reference to help enterprise information technology (IT) and business decision makers analyze the security implications of cloud computing on their business. The paper includes a list of steps, along with guidance and strategies, designed to help these decision makers evaluate and compare security offerings from different cloud providers in key areas.”
Business Continuity Planning is often theoretical. After all, we can’t really know what we’ll need until a disruption occurs (and by then, it’s too late for planning!). As a result, we have little choice but to make our best guess as to what we’ll need when something hits the proverbial fan. A previous article discussed the pitfalls of assigning Business Continuity tasks to individuals because of risks to their availability. You should also be cognizant of the limitations of those teams and individuals assigned to carry out recovery tasks.
BC Planning deals with many unknowns: what will happen, when it will happen, how severe the disruption may be. We also don’t know how long the disruption – or the recovery from it – will last. We may assume that assigned teams or individuals will stick with the recovery process until normalcy is achieved. Is that likely? Who knows? But if it isn’t (if, for example, the recovery lasts more than 3 days), what is in our Plan to account for the limitations on assigned personnel? What kinds of ‘limitations’ must be accounted for?
Anyone who has ever used a Business Continuity Management System (BCMS) knows that providing access for your business, IT, and executive planners is essential for two critical reasons:
- YOUR SYSTEM MAY INHIBIT DATA GATHERING AND ANALYSIS: You need quite a bit of data from many sources in your organization in order to formulate your BCP. While meeting with all users is fantastic, it simply is not feasible—even in the smallest of organizations. Even though your BCMS is supposed to streamline this activity, limiting users can do the exact opposite. It FORCES YOU to gather data by going directly to the user or utilizing outside methods (e.g. spreadsheets or external survey tools). This requires extensive work outside the BCMS.
It is the end of an era for the Business Continuity Institute as Lyndon Bird FBCI has announced he is to stand down from his role of Technical Director. Over the last 21 years, Lyndon has become an integral part of the Institute, from his role as one of the founding members, through his position as Chairman of the Institute, to his job as Technical Director.
In nine years as Technical Director at the BCI, Lyndon has ensured that the BCI continues to have an effective and consistent voice on all matters of Business Continuity Management within the business, government, regulatory and academic communities. During his time, the Good Practice Guidelines have become a well respected source of global best practice, and the BCI has contributed significantly to the development of national and international standards.
On announcing his decision, Lyndon reflected that “although the BCI's work in all of these fields is ongoing, I feel my role as the main catalyst for this has changed. The BCI has grown to the point where it is staffed by a wide range of very competent people who are more than capable of dealing with the future challenges the Institute and the discipline might face. It is therefore an ideal time for me to move on and seek other interesting and challenging projects.”
On what lies ahead for him, Lyndon explained that "the opportunities created by the emergence of a wide-scale global resilience movement are very exciting and I look forward to continuing with my diverse writing, editing, teaching, commentating and consulting activities wherever in the world such opportunities emerge. I will no doubt be working with many BCI members in the future, albeit in a different capacity, but still with the same enthusiasm and passion for our subject.”
David James-Brown FBCI, Chairman of the Institute, described Lyndon as being "intimately involved with the establishment and growth of the Institute and has dedicated an enormous amount of his time and energy to making the BCI what it is today. Lyndon is truly one of the fathers of the industry and has been an inspiration to so many."
"On behalf of the BCI Board and the Membership I would like to express our heartfelt thanks and appreciation for an exceptional contribution; not just in terms of work but the personal attributes that Lyndon has brought. Lyndon will be sorely missed around the office for his wisdom, humour and humility; for his mentoring, his support and his encouragement. He will be missed by the Board for his dependability, his insightfulness and his clear thinking."
Steve Mellish FBCI, former Chairman of the BCI, and close friend to Lyndon, said of him: "Lyndon has always been reliably consistent in his passion for the subject and has such an astute capability to analyse situations and information to see connections or trends that many just don’t see. His devotion to the BCI has been there from ‘day one’ as one of the founding members. He has probably spent more time on the Board than anyone else I know including two terms as Chairman. To this day he still talks enthusiastically about the future and how business continuity and the BCI has and will continue to drive the whole resilience agenda going forward."
"If it wasn’t for Lyndon I know that I would not have achieved half of what I have done as a business continuity professional and without doubt, never have been so involved with the Business Continuity Institute. His wise counsel and support enabled me to face and deal with many challenging situations over my 12 years on the Board."
What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:
Whilst SSD usage is up, the technology is still a cause of downtime: one third of respondents to a Kroll Ontrack survey confirm they have experienced some sort of SSD technology malfunction.
According to a recent solid state disk (SSD) technology use survey by Kroll Ontrack, while nearly 90 percent of respondents leverage the performance and reliability benefits of SSD technology within their organisation, one-third confirmed they experienced some sort of SSD technology malfunction. Of those who did, 61 percent lost data and fewer than 20 percent were successful in recovering their data, highlighting the known complexity of SSD data recovery.
In the UK, 27 per cent of respondents had experienced a failure of their SSD technology and, of these, 56 per cent experienced data loss as a result. At 26 per cent, the proportion able to recover their data following a failure was slightly higher than the global figure.
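Chaining the survey's rounded percentages gives a rough sense of the overall exposure. The sketch below is illustrative arithmetic only, using approximate figures, and it assumes the "one-third" malfunction rate applies to the respondents who use SSDs:

```python
# Rough arithmetic on the Kroll Ontrack survey figures.
# All inputs are rounded, and the chaining assumes "one-third"
# refers to one-third of SSD users -- illustrative only.
use_ssd = 0.90          # respondents using SSD technology
malfunction = 1 / 3     # of SSD users, share reporting a malfunction
lost_data = 0.61        # of those with a malfunction, share losing data
recovered = 0.20        # of those losing data, share recovering it (at most)

# Implied share of all respondents left with unrecovered data loss:
unrecovered = use_ssd * malfunction * lost_data * (1 - recovered)
print(f"~{unrecovered:.1%} of all respondents")  # roughly 14.6%
```

Even read generously, the figures suggest that on the order of one in seven surveyed organisations has already suffered SSD data loss it could not recover.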
Risk management and risk transfer must work together to make organizations more resilient, as firms become more exposed to major disasters and subsequent business interruptions as a result of their increasingly complex global networks. Traditional property damage/business interruption policies were never designed to meet the risks faced by organizations today, and the business interruption insurance market has not kept pace with these rapid changes, according to Marsh.
In a new Marsh Risk Management Research report, the firm highlights how the limitations of existing business interruption insurance, including gaps in cover and inaccurate valuations, result in less than optimal coverage for clients, and makes the case for insurance modernisation.
Based on concerns raised by colleagues, clients, loss adjusters, lawyers and insurers, the report focuses on five core areas where Marsh believes improvement is required: insured values; indemnity periods; wide area damage scenarios; supply chain; and claims.
Caroline Woolley, Global Leader of Marsh’s Business Interruption Center of Excellence, commented: “A property damage event remains one of the major exposures any company can face, and business interruption is one of the main insurances purchased. Business interruption policies, however, have done little to evolve since the middle of the last century.
“The insurance industry needs to acknowledge the shortcomings of existing business interruption cover and build a better solution for buyers. This report is Marsh’s contribution to the debate as we seek to improve existing solutions and reshape the industry to address insurance buyers’ evolving needs.”
The report ‘Business Interruption Insurance Efficacy: Five Key Issues’ can be found after registration here.
Disaster recovery planning for your IT installations may use automated procedures for a number of situations. Virtual machines can often be switched or re-started in case of server failure, and network communications can be rerouted without human intervention. For other requirements, people will be involved in getting IT systems up and running properly after an incident. But people do not switch into auto-run modes like a machine. They can be affected by the surprise factor of an IT disaster and by the pressure to bring things back to normal. Five aspects of usability may need to be designed into your DR planning if you want the best chances of a satisfactory recovery.
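As a minimal sketch of the automated switch-over logic described above, the following Python probes whether a primary host is reachable and falls back to a standby when it is not. The probe method (a TCP connect), hostnames and port are invented for illustration; real tooling would use hypervisor APIs or an orchestrator rather than this hand-rolled check:

```python
# Minimal sketch of an automated DR health-check and failover decision.
# The TCP probe, hostnames and port are illustrative assumptions, not
# a real product's API.
import socket


def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def choose_active(primary: str, standby: str, port: int = 443) -> str:
    """Route traffic to the standby only when the primary is down."""
    return primary if is_reachable(primary, port) else standby
```

The point of the sketch is the part no script covers: once `choose_active` returns the standby, people still have to validate data, notify users and plan the switch back.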
For many of our readers and the organizations where they work, any kind of supply chain disruption could easily qualify as a serious incident, and one that would have been discussed and included in their disaster preparedness planning process.
With that thought in mind, our staff recommends reading and potentially adding a recent EventWatch™ 2014 Supply Chain Disruption report to your organization’s business continuity and disaster preparedness team’s reading resource library.
This report was funded and supported by Resilinc, drawing on its cloud supplier intelligence repository, which tracks over 40,000 suppliers and over 400,000 parts. It analyzed incidents by risk type, industry, geography, severity, and seasonality, and compared 2014 data in these categories with 2013.
Two of my favorite bloggers, Tony Jaques in Australia and Jonathan and Erik Bernstein from California, had excellent posts on two of the most important topics: rumor management and apologies.
Tony tells the story of a hepatitis A scare in Australia that got linked to a frozen berry product. The company, out of an abundance of caution (as they like to say), voluntarily recalled its product without verifying that it was the cause. From there, as you will see, the media did their thing, and the company apparently did not do enough to correct the misreporting.
The lesson is clear: a lie (or error) repeated often enough becomes the truth. The only way I know to deal with this is to loudly and clearly, over and over and over, tell the truth and correct the misinformation.
Cybersecurity is a priority for enterprise executives and their boards, but a serious disconnect also exists in the C-suite on what the risk priorities should be and why, according to recent research. Some of the gap can be attributed to the day-to-day focus of different executive functions, but much of it goes far deeper into problems with culture and communication.
When consulting firm Protiviti and the Enterprise Risk Management (ERM) Initiative at the North Carolina State University Poole College of Management recently conducted the third annual survey of business executives for “Executive Perspectives on Top Risks for 2015,” and examined the ranking of 27 risks by job function, they found that CFOs and chief audit executives (CAEs) perceived a riskier business environment than CEOs and the board. And CEOs and board members each had their own focus on the types of risks they perceived as most important.
Protiviti examined the relationship between the job functions of the executives it surveyed and whether they ranked macroeconomic, strategic or operational risks as their highest concerns, and a pattern emerged. Board members collectively named four strategic risks among their top five concerns, along with one macroeconomic issue; CEOs collectively named four macroeconomic risks among their top five, along with one strategic risk. And other executives added more operational risks to their top five lists.
(TNS) — Army researchers in a lab outside Washington worked for years on a software tool to help soldiers understand how hackers were targeting military computers.
Late last year they did something unusual: They released their project for anyone on the Internet to poke and prod.
William Glodek, the leader of the project, said the Army Research Lab hopes that if his team gives something, they'll get something.
"The Army is open and willing to collaborate," he said. "Hopefully, we can attract some bright talent to contribute to the project."
The federal government is looking for ways to improve the security of the nation's computers, but its plan to share information about threats faces legal obstacles before it can get moving. By offering up code, rather than data, Glodek's team has been able to take a step forward — and join a growing movement among military and intelligence community coders to share what they make.
Virtualization has been changing the business IT landscape since the first hypervisor solution debuted in 1999. The technology initially targeted large enterprises and data center operators that could take advantage of its ability to add capacity and scale without physical components or the power and cooling costs required by hardware assets. During the past several years, though, virtualization has made significant in-roads in the SMB market due to a reduction in upfront investment costs, improved reliability and the proliferation of virtualization-dependent cloud services.
Industry research points to the continued growth of virtualization, and, according to social business platform provider Spiceworks’ 2014 State of IT Report, the adoption of virtualization among IT pros is currently at 74 percent worldwide. The Spiceworks report found that just over half of SMBs with fewer than 20 employees are currently leveraging virtualization, while 70 percent of SMBs with 20 to 99 employees and 83 percent of SMBs with 100 to 249 employees have adopted the technology for everything from productivity applications to databases to managed services.
(TNS) — Emergency personnel responding to an oil train derailment in West Virginia last week applied lessons learned from a rail disaster more than three decades ago, and likely prevented a bad situation from becoming much worse.
This week marks 37 years since a deadly explosion in Waverly, Tenn. On Feb. 24, 1978, a derailed tank car carrying liquid propane violently ruptured, killing 16 people, including the small town’s police and fire chiefs.
Emergency response and training have changed dramatically in the decades since the tragedy.
Buddy Frazier, the city manager of Waverly, about 65 miles west of Nashville, who was a young police officer when he witnessed the 1978 explosion, said that emergency responders are better trained and better equipped today. Still, he understands the challenges they face.
If the value that data analytics has brought to businesses can be measured in the extent to which it enables those businesses to retain their customers, it makes sense to drill down on exactly what that enabler is. Most observers would argue that the enabler is Big Data. But the real enabler just might be small data.
That was my key takeaway from a recent conversation with John Rode, senior director of demand generation at Preact, a provider of cloud-based data analytics services in San Francisco that’s focused on reducing customer churn. According to Rode, “small data” is typically CRM data, which he said is the starting point for almost every decision about customers, whether it’s targeting prospects, conversion, up-sell or retention. Rode explained the significance of that this way:
While this data is most definitely “small,” it tells a lot about the customer—how much they pay, for which product, how many employees they have, which industry they are in, their decision-making authority, and so on. Once you begin to analyze customer behavior [associated with] your product, you are essentially operating a dial that takes you from small data to Big Data, depending on the sophistication of your analysis. You can analyze the behavior of each individual separately … and apply algorithms that analyze how their behavior is trending, and thus determine whether they are a churn risk. While this is a lot of data, most folks would still characterize this as small data.
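As a toy illustration of that "dial", a churn-risk flag can start from nothing more than a trend in one behavioral metric before any Big Data sophistication is needed. The metric, data and threshold below are invented for the example and are not Preact's method:

```python
# Toy churn-risk heuristic: flag a customer whose weekly usage is
# trending down. Metric, threshold and sample data are invented
# for illustration.

def usage_trend(weekly_logins: list) -> float:
    """Least-squares slope of usage over time (logins/week per week)."""
    n = len(weekly_logins)
    mean_x = (n - 1) / 2
    mean_y = sum(weekly_logins) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(weekly_logins))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var


def churn_risk(weekly_logins: list, threshold: float = -0.5) -> bool:
    """True when usage is declining faster than the chosen threshold."""
    return usage_trend(weekly_logins) < threshold


print(churn_risk([12, 10, 7, 5, 2]))  # steadily declining usage -> True
```

Turning the dial toward Big Data means replacing the single login series with per-event behavioral streams and the fixed threshold with a learned model; the shape of the question stays the same.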
By John Zeppos, FBCI
Business continuity management in large organizations with many different departments and diverse personalities can be a challenge at times.
When you’re trying to implement good business continuity management in a company that spans countries and time zones it gets even more complicated. Throw in cultural differences between the various regional offices on top of the business-cultural differences within each office, and it can seem like a hard road to nowhere.
As a top-level manager in a multi-national company, you will understand the challenges of getting your own staff to grasp the concept of business continuity, let alone the difficulties involved in communicating these plans to managers in overseas branches. Understanding business continuity jargon is hard enough in one language. But communicate you must, because resilience to business disruption affects not only their own staff, but the stability of the business as a whole.
Excellent exercises take time and resources to prepare and run; but they are an essential component of a business continuity programme to prove capability and to train people. It is important to get the best out of them and make sure they deliver against the business recovery objectives.
What makes a good exercise?
With this question in mind, Corpress has created an Exercise Checklist as an aide-memoire to help business continuity, crisis management and emergency professionals develop, run and observe exercises. The document shares Corpress partners’ combined experience gained over 20 plus years delivering global programmes for testing and training.
The Exercise Checklist includes a number of new ideas and approaches to exercises and simulations, which are designed to engage senior executives, reduce development time and maximise engagement across the business.
Get the best from your exercise programme in the year ahead by downloading the Checklist after free registration using the form below:
The current global influenza situation is characterized by a number of trends that must be closely monitored, says the World Health Organization (WHO) in a recent briefing document.
According to WHO these trends include:
- An increase in the variety of animal influenza viruses co-circulating and exchanging genetic material, giving rise to novel strains;
- Continuing cases of human H7N9 infections in China;
- A recent spurt of human H5N1 cases in Egypt; and
- Changes in the H3N2 seasonal influenza viruses, which have affected the protection conferred by the current vaccine and are of particular concern.
The highly pathogenic H5N1 avian influenza virus, which has been causing poultry outbreaks in Asia almost continuously since 2003 and is now endemic in several countries, remains the animal influenza virus of greatest concern for human health. However, over the past two years, H5N1 has been joined by newly detected H5N2, H5N3, H5N6, and H5N8 strains, all of which are currently circulating in different parts of the world. In China, H5N1, H5N2, H5N6, and H5N8 are currently co-circulating in birds together with H7N9 and H9N2.
“The diversity and geographical distribution of influenza viruses currently circulating in wild and domestic birds are unprecedented since the advent of modern tools for virus detection and characterization. The world needs to be concerned,” states WHO.
Virologists interpret the recent proliferation of emerging viruses as a sign that co-circulating influenza viruses are rapidly exchanging genetic material to form novel strains.
The emergence of so many novel viruses has created a diverse virus gene pool made especially volatile by the propensity of H5 and H9N2 viruses to exchange genes with other viruses. The consequences for animal and human health are “unpredictable yet potentially ominous” says WHO.
On many levels, the world is better prepared for an influenza pandemic than ever before, according to WHO. However, the level of alert is high, and the world remains highly vulnerable, especially to a pandemic that causes severe disease. Nothing about influenza is predictable, including where the next pandemic might emerge and which virus might be responsible. The world was fortunate that the 2009 pandemic was relatively mild, but such good fortune is no precedent, says WHO.
The Business Continuity Institute’s North America awards will take place on 24th March 2015 during the DRJ Spring World in Orlando. The awards recognise the achievements of business continuity professionals and organizations based in the USA and Canada.
The BCI has now issued the shortlist for the awards which is as follows:
Continuity and Resilience Consultant
- Robbie Atabaigi, KPMG
- Jeff Blackmon FBCI, Strategic Continuity Solutions
- Christopher Duffy, Strategic BCP
- Paul Kirvan FBCI
- Debjyoti Mukherjee, KPMG
Continuity and Resilience Newcomer
- Garrett Hatfield, MetLife, Inc.
- William Kearney, Cameron
- Tamika McLester, Crawford & Company
Continuity and Resilience Team
- Business Resiliency Office (BRO), Automatic Data Processing (ADP)
- ETS Enterprise Resiliency Department, Educational Testing Service
- TMG Health Team, TMG Health
Continuity and Resilience Provider (Service/Product)
- ClearView Continuity
- Fusion Risk Management, Inc.
- Strategic BCP
- Virtual Corporation
- xMatters, Inc.
Continuity and Resilience Innovation
- 9yahds, Inc.
- Strategic BCP
- Send Word Now
- Quorum Technologies
- Suzanne Bernier MBCI
- Christopher Duffy
- Frank Leonetti FBCI
by Ben J. Carnevale
Business Continuity, Resiliency and Emergency Management Planning teams are often looking for additional ideas, programs and campaigns to help those teams be more prepared and ready to mitigate losses from potential disasters affecting the organization where they work, and the community where they work and live with their families.
Our staff believes that the America’s PrepareAthon™ campaign qualifies as one of the best resources for those teams to look for ideas and assistance for taking action to increase emergency preparedness and resilience.
America’s PrepareAthon!™ is a grassroots campaign for action within the United States to increase community emergency preparedness and resilience through hazard-specific drills, group discussions, and exercises. Throughout the year, America’s PrepareAthon!™ helps communities and individuals across the country to practice preparedness actions before a disaster or emergency strikes.
Will 2015 be the year the cloud gets past the hype? While cloud-based file sharing and other cloud services are being adopted by almost all businesses, the cloud is still in the early stages of its technological revolution. Whether it is personal computers, the internet, or 3D printing, every new technology goes through a period of hype and disillusionment before the really productive innovation takes place.
Gartner calls this the Hype Cycle of Emerging Technologies. According to Gartner, cloud computing has already passed the inflated expectations people had about it and everyone is beginning to become disillusioned by it. But that’s not a bad thing! Once the hype ends, real enlightenment can begin, and that’s where really useful and significant things get created.
So now that the hype over the cloud is over, is 2015 the year of enlightenment?
(TNS) — The tornado that struck Joplin, Mo., nearly four years ago left 161 people dead and much of the city devastated.
But the storm taught forecasters lessons that may have saved lives during subsequent disasters, including the May 2013 tornadoes in the Oklahoma City area, a National Weather Service official said Wednesday.
During a keynote address Wednesday at the National Tornado Summit in Oklahoma City, National Weather Service Deputy Director Laura Furgione discussed lessons the agency learned from a series of deadly tornadoes in the spring of 2011.
Among the many services state and local governments provide, few are as popular, as trusted or as essential as 911. Americans place roughly 240 million 911 calls each year, says the National Emergency Number Association, and access to 911 is nearly universal. Nevertheless, the system so many Americans rely on today to report emergencies and other problems stands on the brink of obsolescence.
While Americans are now accustomed to using Twitter, Facebook, Instagram and other social-media platforms for the rapid-fire sharing of news and information, most 911 systems can't handle the texts, videos, data and images that we increasingly use to communicate.
That's because in many parts of the country 911 is still rooted in the landline-telephone-based infrastructure that gave the system its start in 1968. As of November 2014, just 152 counties in 18 states even had the capability for citizens to text to 911. And only a handful of states -- such as Iowa and Vermont -- have taken the leap to Internet-enabled 911, known as "next-generation 911."
(TNS) — Joplin, Springfield and Branson, Mo., have agreed to a set of procedures that will standardize how outdoor storm-warning sirens are activated and how they are tested.
The objective is to create a uniform standard across the region where none exists now. The adoption of the procedures by three of Southwest Missouri’s largest communities already has spurred other communities, such as Carthage, Bolivar, Pierce City and Monett, to participate in the guidelines.
The new procedures were unveiled during a news conference on Wednesday at the Springfield-Greene County Office of Emergency Management. Officials from the communities and representatives of the National Weather Service forecast office at Springfield were on hand for the announcement.
Board members and C-suite executives across industries perceive the global business environment in 2015 as somewhat less risky for organizations than in the past two years. In “Executive Perspectives on Top Risks for 2015,” consulting firm Protiviti and the Enterprise Risk Management Initiative at the North Carolina State University Poole College of Management found that this is far from bad news for risk managers, as organizations are actually more likely to invest additional resources for risk management. Internal challenges like succession, attracting and retaining talent, regulation and cybersecurity are drawing the most attention, according to the report.
“Our survey findings indicate that operational risk issues are keeping many senior executives up at night,” said Mark Beasley, Deloitte Professor of Enterprise Risk Management and NC State ERM Initiative director. Indeed, for the third consecutive year, regulatory changes and heightened regulatory scrutiny ranked as the number one risk on the minds of board members and corporate executives, with 67% indicating that it will “significantly impact” their organizations. More than half of global survey respondents indicated that insufficient preparation to manage cybersecurity threats is a risk that will “significantly impact” their organizations in 2015, pushing cyberrisk up three spots from last year to the third-greatest risk.
If there’s one thing a lot of SMBs have a hard time outsourcing, it’s their HR operation, simply because of its critical nature. Add the notion of allowing the management of that operation to reside in the cloud, and the reluctance, for some, may increase exponentially. But to what degree is that reluctance warranted?
I recently had the opportunity to discuss that issue with Eric Sikola, general manager of TriNet Cloud at TriNet, a human resources services provider in San Leandro, Calif. As the founder of ExpenseCloud, which TriNet acquired in May 2012, Sikola is a vocal advocate of empowering SMBs with better HR options.
“I founded ExpenseCloud in 2008 because I wanted to help companies and their employees better manage their expense process,” Sikola said. “Having personally felt the pain of the old way of managing expenses, I knew there was a better way, and I wanted to help small- and medium-sized business.”
Sikola said when TriNet acquired ExpenseCloud, it gained an additional level of innovation.
The data center is dead. Long live the data center.
This may be a bit premature, but if the traditional enterprise data center is not dead yet, it certainly is approaching the twilight of its years.
The latest word from 451 Research is that enterprise data center construction is essentially flat across the globe while the new crop of cloud-facing, hyperscale facilities is on the rise. Results for the fourth quarter of 2014 have the installed base growing a paltry 0.2 percent to 4.3 million facilities, propped up only by increased activity among the cloud, service provider and multi-tenant sectors. Enterprise IT still controls an overwhelming portion of the worldwide data infrastructure, some 95 percent, and maintains about 83 percent of data center square footage, according to the report. But for now at least, the trend lines are clearly pointing away from owned-and-operated data center facilities toward more cloud- and service-based activity.
The line between consumer and business technology has gotten increasingly blurry during the past decade. Consumer devices are almost indistinguishable from enterprise gear. But the gap between software and applications in each category is far wider.
That’s a good thing to understand as wearables become more common at work. This conversation between Jim Haviland, VoxMobile’s chief strategy officer, and IT Business Edge’s Don Tennant gives a good overview of the current situation with wearables. At one point, Haviland makes clear that the real action will be on the software front:
Hardware always gets the headlines, but apps are where the value creation happens in the enterprise. We have been using the mantra, ‘the right information on the right screen at the right time,’ because the key to valuable innovation with mobility is all about application success and user experience. Wearables expand the possibilities for how and when people interact with apps and data, which can lead to more dramatic successes.
Data can really be anything, including images, geolocation figures, texts, numbers or some combination thereof.
Thanks to the Internet of Things, more of that data is actually describing a physical thing. For us sci-fi geeks, that inevitably raises the question: Can data create a virtual world to actually interact with these things?
InfoWorld reports that Space-Time Insight is exploring this idea with a pilot data project. It’s using virtual reality headsets such as the Oculus Rift as a way to interact with the data.
The company’s data has a unique physicality to it, since it’s a B2B partner for power, oil and gas, logistics and related industries. For instance, in the power industry, the company collects data about transformers. Space-Time Insight’s solution allows you to see a 3D model of a transformer, along with any warning signals about what’s wrong. Users can even act directly from the 3D space, bypassing another application, or call in a work team, InfoWorld reports.
Throughout the last few years social media has become a key communications strategy for emergency managers. Whether it’s for sharing preparedness messages during blue-sky times or getting crucial information out in real time during an emergency, platforms like Twitter and Facebook are now part of nearly every agency's public-outreach plan. This evolution in crisis communications has been followed by many, and a recently released study sought to understand what affected populations, response agencies and other stakeholders can expect from tweets in various types of disaster situations.
The study, What to Expect When the Unexpected Happens: Social Media Communications Across Crises (PDF), examined tweets posted during 26 emergency situations in 2012 and 2013. With the goal of measuring the prevalence of different types of tweets during various situations, the researchers examined both the information and its source.
The tweets were classified into six categories, and researchers determined the average percentage of tweets for each: affected individuals (20 percent), infrastructure and utilities (7 percent), donations and volunteering (10 percent), caution and advice (10 percent), sympathy and emotional support (20 percent), and other useful information (32 percent). Tweets classified as other useful information varied significantly, the report says. “For instance, in the Boston bombings and LA Airport shootings in 2013, there are updates about the investigation and suspects; in the West, Texas, explosion and the Spain train crash, we find details about the accidents and the follow-up inquiry; in earthquakes, we find seismological details.”
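The category breakdown the researchers report can be reproduced with a simple tally over labeled tweets. The sketch below is a hypothetical illustration, not the study's actual code or dataset; the category names mirror the six used in the report, but the sample tweets are invented.

```python
from collections import Counter

# Hypothetical labeled tweets: (text, category) pairs using the
# study's six categories. These examples are invented for illustration.
labeled_tweets = [
    ("Power is out across the county", "infrastructure and utilities"),
    ("Thoughts are with everyone affected", "sympathy and emotional support"),
    ("Shelter open at First Baptist, pets welcome", "caution and advice"),
    ("Red Cross accepting donations downtown", "donations and volunteering"),
]

def category_percentages(tweets):
    """Return each category's share of the total, as a percentage."""
    counts = Counter(category for _, category in tweets)
    total = sum(counts.values())
    return {cat: 100 * n / total for cat, n in counts.items()}

print(category_percentages(labeled_tweets))
```

With real data, the averages the study reports (20 percent affected individuals, 32 percent other useful information, and so on) would come from running a tally like this across all 26 crises and averaging the shares.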
(TNS) — When tornado sirens went off in Logan County on May 24, 2011, three Guthrie churches that had volunteered to serve as storm shelters were quickly overrun — and not just by people.
Dogs, cats and birds were packed together in church basements with residents looking to escape the tornado, said Logan County Emergency Management Director David Ball. One man showed up to a church with a boa constrictor wrapped around him, Ball said.
While everyone else was jockeying for space, the man and his snake always seemed to have plenty of room to themselves, Ball said.
Ball spoke Tuesday at the National Tornado Summit in Oklahoma City. Since the May 2011 storm, emergency managers have increasingly concluded that public shelters can do more harm than good, he said. Convincing residents to take steps to make sure their homes are safe during tornado season can be a challenge, he said, but it’s the most viable way to keep residents safe.
What worries chief information officers (CIOs) and IT professionals the most? According to a recent survey commissioned by Sungard Availability Services, security, downtime and talent acquisition weigh heaviest on their minds.
Due to the increasing frequency and complexity of cyber attacks, security ranks highest among IT concerns in the workplace for CIOs. As a result, more than half of survey respondents (51%) believe security planning should be the last item to receive budget cuts in 2015.
While external security threats are top of mind for IT professionals, internal threats are often the root cause of security disasters. Nearly two-thirds of survey respondents (62%) cited leaving mobile phones or laptops in vulnerable places as their chief security concern, followed by password sharing (59%). These internal security challenges created by employees led 60% of respondents to say they would enforce stricter security policies for employees in 2015.
Second to security, downtime is also a leading concern for CIOs. Two in five (42%) respondents consider the testing of their disaster recovery plans vital to their organizations and also among the last line items that should be cut from 2015 IT budgets. Not only is downtime expensive, but the damage to an enterprise’s reputation far outweighs the monetary costs.
Disaster recovery testing dramatically reduces downtime (by 75%) for enterprises deemed 'best-in-class' in disaster recovery and business continuity. In addition, according to the Aberdeen Group, those that adopt strong resiliency plans can expect 90% less downtime per event compared to the industry average.
“Today CIOs are more concerned with the resiliency of their organizations and the consequences a disaster can have on an organization’s reputation and revenue stream,” said Keith Tilley, executive vice president, Global Sales & Customer Services, Sungard AS. “The implications that information security and downtime threats place on a business have evolved and become more complex in the last several years, making it a high priority for CIOs.”
It is not just CIOs and IT professionals who are concerned about the cyber threat. According to the Business Continuity Institute's latest Horizon Scan report, cyber attacks are the biggest concern for business continuity professionals as well with 82% of respondents to a survey expressing either concern or extreme concern at the prospect of this threat materialising. Data breach came third on the list with 75%.
Budding tech entrepreneurs with dreams of being the next Bill Gates should look to BJ Farmer as a shining example of how to succeed in this industry.
To listen to the entire interview, click here.
While he may not be quite as successful as Gates (is anyone?), Farmer has enjoyed much more success than most people who start their own tech companies. He is the founder and president of CITOC, a Houston-based IT services firm that specializes in providing premium cloud services and Office 365 consulting.
CITOC recently celebrated its 20th anniversary (1995 – 2015), and in that span CITOC (an acronym for Change Is the Only Constant) has received a slew of awards, most notably winning Houston’s Microsoft Partner of the Year Award in 2013 and 2014. In addition, CITOC was listed in the 2011 edition of Inc.com’s annual Inc. 5000 list (ranked #3997 for its 2010 revenue of $4.6 million), and it has been recognized as one of the top 50 fastest-growing tech companies in the Houston metro area seven years running by the Houston Business Journal.
We previously talked to Farmer about a prospective client of his that had a revolving door of CIOs being hired and soon leaving, which was costing the company a lot of money. We wanted to catch up with Farmer on how he helped this client.
Why are your customers using the cloud? Why aren’t others using it? As an MSP working with cloud-based file sharing, you should know what motivates your clients and prospects to either adopt or avoid the cloud.
Results from a new survey offer an interesting view into what people think of the cloud, how they use it, and what concerns you should address to bring more people into the cloud. Understanding what influences cloud sharing decisions will help you better position your services and be better prepared to handle objections.
Here are some findings from the survey that show why people either are or are not using the cloud, and how you can use that information to your advantage.
HP has published the 2015 edition of its annual Cyber Risk Report, which looks at the security threat landscape through 2014 and indicates likely trends for 2015.
Authored by HP Security Research, the report examines the data indicating the most prevalent vulnerabilities that leave organizations open to security risks. This year’s report reveals that well-known issues and misconfigurations contributed to the most formidable threats in 2014.
“Many of the biggest security risks are issues we’ve known about for decades, leaving organizations unnecessarily exposed,” said Art Gilliland, senior vice president and general manager, Enterprise Security Products, HP. “We can’t lose sight of defending against these known vulnerabilities by entrusting security to the next silver bullet technology; rather, organizations must employ fundamental security tactics to address known vulnerabilities and in turn, eliminate significant amounts of risk.”
One of the major reasons for the surge in shadow IT services in recent years is that many internal IT organizations couldn’t really provide a file sharing and synchronization capability for users of mobile computing devices, which those users naturally went out and found on their own via any number of cloud computing services. Now many of those same IT organizations are building their own private clouds, which naturally require file sharing and synchronization.
To address that need, Connected Data developed file share and synchronization appliances, two more of which the company is unveiling today.
After targeting larger enterprises with previous generations of appliances, Jim Sherhart, vice president of marketing, says the Transporter 15 and 30 appliances are aimed at remote offices and small-to-midsize business (SMB) organizations; the solution starts at under $2,500 for 8TB of storage, 6TB of which is usable for storing data.
The extreme weather that has hit much of the country this winter has been labeled “historic” in many quarters, including where I live in eastern North Carolina. While the Northeast has been battered with record-breaking snowfalls, much of the South has been experiencing ice storms and single-digit temperatures for the first time in the lives of many adults. It all raises the question of what impact this is having on IT professionals and the organizations they’re charged with keeping up and running.
While it may well be too late for many organizations that entered this winter ill-prepared from a data protection standpoint, this winter has taught us that unexpected events, such as the collapse of a data center roof under heavy snow and ice, need to be anticipated and addressed in order to be fully prepared for next winter.
Deloitte Analytics Senior Advisor Tom Davenport warned last year that data scientists waste too much time prepping data. After interviewing data scientists, Davenport concluded that they needed better tools for data integration and curating.
Now, a Ventana Research column shows that data scientists aren’t the only ones wasting enormous amounts of time on data preparation at the expense of actual analysis.
Ventana CEO Mark Smith shares research from several reports, all of which demonstrate how much of a time suck data preparation can be without the right tools.
Unrelenting frigid weather often means frozen water pipes – one of the biggest risks of property damage. In fact, a burst pipe can cause more than $5,000 in water damage, according to IBHS research.
Structures built on slab foundations, common in southern states, frequently have water pipes running through the attic, an especially vulnerable location. By contrast, in northern states, builders recognize freezing as a threat and usually do not place water pipes in unheated portions of a building or outside of insulated areas.
Frozen pipes can often be prevented with the installation of weather stripping and seals. This offers two major benefits: keeping severe winter weather out of a structure, and increasing energy efficiency by limiting drafts and reducing the amount of cold air entering.
Innovation has become accepted as central to competitiveness in today’s world, both in new product development and in enhancement of internal processes. Companies struggle with innovation, and there have been numerous attempts to regularize and program it. But the development of truly breakthrough ideas is difficult, and recognizing them when they do arrive can be harder still. We have processes available for vetting ideas and passing them through a series of increasingly selective gateways until they reach the point of usefulness or are discarded altogether. But we do not have good processes for stitching together new ideas and reaching that eureka moment that says a critical new idea has been found.
Some of the ways that ideas are sourced include crowdsourcing, internal suggestions, brainstorming, and the like. There are idea factories employing innovative individuals who apply diverse experience to create an “out of the box” concept. And there are programs such as TRIZ, an innovation methodology developed in Russia in 1946, that seek to apply a systematic process to ideation itself, based on principles extracted from patent literature and subjected to contradiction analysis, synthesis, and new arrangement. But the creation of ideas is forever thwarted by the fact that we don’t really understand the creative process and may, in fact, be generalizing a multitude of processes in a way that makes them impossible to replicate.
Predictive analytics is apparently lucrative for businesses, investors and, of course, predictive analytics companies.
In a recent Forbes column, Lutz Finger noted that predictive analytics companies are attracting multi-million dollar investment deals. Most recently, a company called Blue Yonder secured $75 million in funding from a global private equity firm, which is the “biggest deal for a predictive analytics company in Europe….”
If you’re not familiar with Finger, he’s a director at LinkedIn, an expert on social media and text analytics, and the co-founder and former CEO of Fisheye Analytics. The column shares highlights of his interview with Blue Yonder’s CEO Uwe Weiss, so it’s no surprise that it makes the case for predictive analytics as a sound investment.
It’s not a hard case to make. Gartner predicts a compound annual growth rate of 34 percent from 2012 to 2017, and estimates the market will reach $48 billion. To give you an idea of how that compares, Gartner says MDM was worth $1.16 billion last summer.
Despite your best efforts – and despite the advanced levels of security in your cloud-based file sharing solution – MSPs may eventually find themselves on the wrong end of a data breach. The key question isn’t how to prevent such an incident from happening; even the world’s most security-conscious organizations suffer breaches. Rather, the key question is how much will this inevitable data breach cost you?
Today, the cost is relatively limited and abstract for MSPs. While a data breach can certainly result in a lost customer, or time spent trying to resolve the issue, the real financial costs tend to fall on the client. They are the ones who will pay the compliance fines and lose revenue. After all, it is their data.
But as data breaches increase in both frequency and severity – and as clients rely on you for more of their critical IT functions – it’s only a matter of time before someone decides that the MSP should be held responsible when things go wrong. After all, it is your solution they are using to share data.
ContinuitySA provides advice for organizations based in areas where power supplies are unstable.
One risk that has become very real for South African businesses is load-shedding. An unstable power supply with the potential of extended periods of power outages over the next several years creates a range of risks that have to be integrated into current business continuity plans.
“We know that load-shedding is going to occur and, in order to put mitigation strategies in place, we first need to understand what the implications are,” says Michael Davies, CEO of ContinuitySA. “What are the issues that businesses should be looking at? Now is a good time to update your business continuity plans in order to assess the impact of load-shedding on your business and weigh up what your risk appetite is.”
Davies says that because electricity is now so integral to modern society, load-shedding creates a complex and interdependent set of risks over and above the task of just keeping the business’s lights on. These risks need to be understood within the context of each business's strategic plan.
Picture this. A main water pipe bursts and water begins to flood the warehouse, which is also where you happen to be, smartphone in pocket. To avert serious damage and downtime, you need to find the cut-off valve – quickly. At this point, two scenarios are possible. First scenario: you try to find out who can help by calling reception and trying to note the names they suggest and the phone numbers. Second scenario: you access a directory of resources directly from your smartphone, call the person concerned and turn the call into a video call from that person’s desktop so that you can be remotely guided to where the cut-off valve is and how to shut it. How do you get from scenario one to scenario two?
Although Washington remains stuck in partisan gridlock, there is one thing that Democrats and Republicans agree on: the need to reduce gridlock in the rest of the country by bringing America's infrastructure into the 21st century.
The basis for that rare consensus is painfully clear. The nation's infrastructure has earned a grade of D+ from the American Society of Civil Engineers, which estimates that it will cost $3.6 trillion to bring our systems to a state of good repair. Across the nation, aging and deteriorating bridges and water treatment plants pose a real threat to public health and safety and a drain on economic growth.
How and when Republicans and Democrats might find common ground to fix the problem remains to be seen. But when that does come to pass, here's another idea that should win support from both sides: our next generation of infrastructure must be resilient.
Many efforts to implement ERM are unfocused, severely resource-constrained, and pushed so far down into the organization that it is difficult to establish relevance. The near-term results are “starts and stops” and ceaseless discussions to understand the objective. Risk is often an afterthought to strategy, and risk management an appendage to performance management. Ultimately, the ERM implementation runs out of steam and is no longer sustainable.
While there is no one-size-fits-all, the following design principles will help overcome these issues:
A new survey from identity and access management (IAM) solutions provider SailPoint has revealed there is a "clear disconnect" between cloud usage and IT controls in many businesses.
SailPoint's "2014 Market Pulse Survey" of at least 3,000 employees worldwide showed that one out of every four workers admitted they would take copies of corporate data with them when they leave a company.
Survey researchers also pointed out that one in five employees is "going rogue" with corporate data and has uploaded this information to a cloud application such as Dropbox or Google Docs with the intent to share it outside the company.
"The challenge with cloud applications is that IT organizations must now manage applications that are deployed – and accessed – completely outside the firewall," SailPoint President Kevin Cunningham wrote in a blog post. "Adding to the complexity, employees are starting to use consumer-oriented applications for work-related activities, creating a significant blind spot when it comes to risk."
As the number of platforms where enterprise IT organizations can store data proliferates, getting data in and out of those platforms quickly has become a major IT challenge.
To address that issue, Syncsort has released an update to its suite of data integration offerings that adds an “Intelligent Execution Layer” that enables users to visually design a data transformation once and then run it anywhere—across Hadoop, Linux, Windows, or Unix—on premise or the cloud.
Tendü Yoğurtçu, general manager for Big Data at Syncsort, says version 8.0 of the company’s DMX Software is designed to provide not only a consistent approach for collecting, transforming and distributing data across multiple platforms, but also one that embeds algorithms that automatically select the optimal execution path based on the type of platform, the attributes of the data and the condition of the cluster.
The goal, says Yoğurtçu, is to allow business users and data scientists to take advantage of a run-time environment that allows them to transform data in flight in a single step.
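The “design once, run anywhere” idea can be illustrated with a small abstraction. The sketch below is hypothetical and does not reflect Syncsort’s actual API; it only shows how a transformation defined once might be dispatched to different execution engines based on attributes of the data, echoing the intelligent-execution concept described above.

```python
from abc import ABC, abstractmethod

class ExecutionEngine(ABC):
    """One backend a transformation could be dispatched to."""
    @abstractmethod
    def run(self, transform, records):
        ...

class LocalEngine(ExecutionEngine):
    """Runs the transformation in-process, record by record."""
    def run(self, transform, records):
        return [transform(r) for r in records]

class BatchedEngine(ExecutionEngine):
    """Stands in for a cluster backend (e.g., Hadoop) that works in batches."""
    def __init__(self, batch_size=2):
        self.batch_size = batch_size

    def run(self, transform, records):
        out = []
        for i in range(0, len(records), self.batch_size):
            out.extend(transform(r) for r in records[i:i + self.batch_size])
        return out

def choose_engine(record_count, threshold=1000):
    """Pick an engine from data attributes: small jobs run locally,
    large ones go to the batched (cluster-style) backend."""
    return BatchedEngine() if record_count > threshold else LocalEngine()

# The transformation is defined once...
normalize = lambda r: r.strip().lower()

# ...and runs unchanged on whichever engine is selected.
records = ["  Alpha", "BETA  "]
engine = choose_engine(len(records))
print(engine.run(normalize, records))  # ['alpha', 'beta']
```

The design point is that the transformation (`normalize`) never changes; only the engine selection does, which is the property that lets users design a flow once and run it across Hadoop, Linux, Windows, or Unix targets.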
Well, it’s time to work on the Business Continuity Management (BCM) / Disaster Recovery (DR) program based on the maintenance schedule. You’ve got your plan all well laid out, and people know it’s coming and are ready to participate, sometimes begrudgingly. Yet, for some reason, your well-thought-out plan isn’t going to plan at all.
Sometimes that’s because what one believes they have, they really don’t. For example, just because you have executive buy-in on the need for the BCM/DR program doesn’t mean you have executive support. An executive may buy in to the idea that a specific initiative is needed and give the go-ahead, but no one follows along as expected because the executive doesn’t actually back the BCM/DR practitioner. When others see this, they quickly conclude that BCM/DR is just a make-work effort and isn’t something the company’s executives really, and I mean really, support.
The executive may see it as a checkbox on an audit report and simply want it to go away: to have the golden checkmark appear in that tick box so that BCM/DR drops off the agenda. Again, they see the need to do something but don’t provide the means to get it done: the communication channels, the resources (both physical and financial), or the moral support.
The debate about build versus buy has raged for years. But the total cost of owning your own data center outweighs the perceived benefits, and it looks like the argument in favor of “buy” may have gained the upper hand once and for all.
Let’s talk about it, though, from the point of view of people who are considering building their own and see how their claims stand up to the current state of backup.
In all the big news about the impact of mobile technology on small to midsize businesses (SMBs), one item that stands out is that SMBs that adopt mobile strategies outperform those that do not. This data comes from a recent study on the mobile revolution by the Boston Consulting Group and Qualcomm. Another report from Juniper Research found that in 2014, SMBs contributed $630 billion to the growing mobile industry, which is nearly triple the number from four years prior.
That kind of growth proves that SMBs are not only adopting mobile technologies; they are relying on them to fuel business growth and change the way business is done.
Everyone in IT is anxious to see how the cloud shakes out. When all is said and done, what will the enterprise look like when cloud computing becomes the established model for IT infrastructure?
And some are looking even farther into the future, wondering what, if anything, will come after the cloud?
To be sure, there is no shortage of predictions over how the cloud will evolve over time. IDC’s most recent assessment has hybrid infrastructure heading into 65 percent of enterprises within the year and predicts that by 2017, 20 percent of the industry will be using the public cloud as a strategic resource. As well, more than three quarters of IaaS offerings will be redesigned, rebranded or phased out over the next two years as providers concentrate on more lucrative services higher up the stack.
The utility of the cloud is beyond question at this point, so while most experts can debate the merits of the various architectures, it is hard to imagine IT in the future without a significant cloud presence. NetSuite CEO Zach Nelson told the Australian Financial Review last fall that he believes the cloud to be “the last computing architecture,” because there is no way to improve upon always-on data access from any device anywhere in the world. This may be true, but it was also true in the early 1970s that computer technology was simply too expensive and too complex for the average citizen.
As your customers decide whether or not to move their cloud-based file sharing to a hybrid cloud, they will have many questions along the way. Of course, some questions are more common than others – and as their managed service provider, you should be prepared to answer them.
(Tribune News Service) -- A hiker lost in the mountains of New Mexico called 911 repeatedly, but was routed seven times to non-emergency lines.
A 911 call made by an elderly woman from her home in Texas was picked up by an emergency dispatcher in Tennessee, some 700 miles away.
And an emergency call made last month from a middle school in Delano, Calif., after a young student collapsed and later died there, was routed to a 911 dispatcher in, of all places, Ontario, Canada.
Hundreds of millions of Americans have moved rapidly from traditional land lines to relying on various forms of wireless phone services, making the 911 emergency system ever more complex, experts say, and therefore more subject to misrouted calls or misidentified locations.
Recently, we had a client pick up a new contract with a company that was escaping a relationship with a bad IT provider. The transition was a nightmare for the business. Why? Because their previous IT company had constantly kept them in the dark about the state of their technology.
How transparent are YOU with your clients?
When you say, “Honesty is the best policy,” you'd better mean it. Be as open as possible with your clients without overloading them on the technical stuff. It’s all about building trust, and you can’t do that if they think you’re keeping secrets from them. Even if something goes wrong with a bad bug or a security breach, you need to keep them in the loop. Own up to everything you do, good and bad, and if it’s bad – make it right.
In the wake of a natural disaster, about a quarter of businesses never reopen. Whether due to primary concerns like a flooded warehouse, secondary complications like supply chain disruption, or indirect consequences like a transportation shutdown that prevents employees from getting to work, a broad range of risks can severely impact any business in the wake of a catastrophe, and all of them must be planned for.
Planning and securing against natural disaster risks can be daunting and exceptionally expensive, but researchers have found that every dollar invested in preparedness can prevent $7 of disaster-related economic losses. Check out more of the questions to ask and ways to mitigate the risk of natural disasters for your organization with this infographic from Boston University’s Metropolitan College Graduate Programs in Management:
The widespread popularity of social media and associated mobile apps, especially among young people, has potential in public safety, a new study finds.
Use of such sites as Facebook and Twitter has become so significant that universities should strongly consider utilizing them to spread information during campus emergencies, according to a study from the University at Buffalo School of Management called “Factors impacting the adoption of social network sites for emergency notification purposes in universities.”
Social media not only enables campus authorities to instantly reach a large percentage of students to provide timely and accurate information during crisis situations, the study states, but sending messages through social networking channels also means students are more likely to comply with emergency notifications received.
For many people, stepping into the office can feel like stepping back in time. In an age where so many people carry around mobile computers in their pockets, employees have become frustrated at being forced to use cumbersome technologies such as VPN and FTP to remotely access files stored on an on-premises file server. As a result, many of these employees have resorted to storing more of their data in free, non-secure cloud services like Dropbox.
How do MSPs reconcile the virtues of the file server with the benefits of cloud file sync? One way is to cloud-enable the file server. Here are three ways cloud-enabling the file server keeps the file server sexy and makes your clients happy:
Leveraging Big Data for operational analytics is generating more interest these days, despite integration concerns. Companies are always looking for ways to reduce operational expenses, and Big Data promises to help.
A recent SCM World report, “The Digital Factory: Game-Changing Technologies That Will Transform Manufacturing Industry,” asked 200 manufacturers around the globe about Big Data and other new technologies. The report is available to clients only, but Forbes recently shared some key findings.
The survey revealed that 49 percent see advanced analytics as a way to “reduce operational costs and utilize assets efficiently,” Forbes notes. It’s telling, too, that only 4 percent said they saw no use case for Big Data analytics in their future.
I recently had a conversation with someone about BYOD and security. He told me he thought the enterprise was experiencing BYOD fatigue and that there was a growing attitude that its security problems were overblown. He wasn’t alone in this feeling; I had read articles and heard others voice similar complaints about BYOD. Perhaps mobile devices weren’t as bad a security issue as once thought?
Or maybe the threats are even worse than we realized. Some recent studies show just how much of a security risk mobile devices have become within the workplace, and this carries over into BYOD security risks as well.
First, a study conducted by Alcatel-Lucent's Motive Security Labs found that mobile malware increased by 25 percent in 2014, and that 16 million devices, mostly but not exclusively Android, are infected. For the first time, we’re seeing infection rates on mobile devices that rival those on Windows computers. Of the top 20 threats, six involved spyware meant to track location and monitor the user’s communications. The reason for all this malware, according to an eSecurity Planet article, comes down to the device owner: